The drawback of a powerful type system is that you can very easily get yourself into a type-complexity mudhole. Nothing is worse than trying to call a method where a simple `Foo` object would do, but instead the signature spells out a 60-character definition of `Foo`'s capabilities in the type system.
Less is more.
If you can get a module to do the same thing with a simpler interface, then that's generally a better module; it's typically a sign of good separation of concerns. Complex interfaces are often a sign that the module encourages micromanagement of its internal state; a leaky abstraction.
A module should be trusted to do its job. The only reason a module would provide complex interfaces is to provide flexibility... But modules don't need to provide flexibility because the whole point of a module is that it can be easily replaced with other modules when requirements change.
The advantage of the U.K. plug is that the live pins are physically blocked by shutters that are only released when the Earth pin is inserted. This is why the Earth pin is slightly longer on U.K. plugs, and why double-insulated devices have a plastic Earth pin rather than no pin at all. The result is that you cannot jam things into the socket (either accidentally or intentionally) without the Earth pin, making the plug much safer.
I’ve found U.K. plugs to be much more secure inside the socket too. US plugs often come away from the wall when there is a little bit of weight or tension on the plug. U.K. plugs require a great deal more pressure to come loose from the socket.
If I were to bring this back to types I’d say one needs to evaluate what the requirements are: safety or convenience.
It's even better than it looks in Europe: there are actually 3 contact points, but only 2 protrude, which allows for 2 easily pluggable positions (rotated 180°). The ground contact is present twice for that reason.
Only France has a variation on this to my knowledge, and it is still compatible across the rest of Europe.
And no one notices and just plugs in and out without thinking twice about it.
There are some unsung heroes here.
Speaking strictly about plugs: while I agree the EU plug is simpler/easier, I disagree that it is better.
For all its bulk and lack of reversibility, the U.K. plug is significantly safer (pin lengths, partially insulated pins), requires a ground pin even if it's a dummy, and carries 13A with ease!
The safety aspects are incredible[0]. I do not worry about my small children at all.
Lived in HK, where the EU (China) and UK (HK) plugs are both common in a single household.
I see way too many folks using `Omit<…>` and `Partial<…>` to create absolute typing monstrosities. It feels like typing duct tape, and it's impossible to read the type definition when it's expanded in a tooltip.
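A made-up sketch of the kind of thing I mean (the `User` type is hypothetical):

```typescript
// Hypothetical User type, just for illustration
interface User {
  id: string;
  name: string;
  email: string;
}

// Utility-type duct tape: the tooltip shows the expanded mapped type, not this alias
type UserPatch = Pick<User, "id"> & Partial<Omit<User, "id">>;

// A plain interface says the same thing and reads instantly
interface UserPatchPlain {
  id: string;
  name?: string;
  email?: string;
}

const patch: UserPatch = { id: "u1", name: "Ada" };
console.log(patch.name);
```

Both aliases describe the same shape; one of them you can read without expanding anything.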
But what is actually inside the HTTP-payloads can then have many constraints on them which are not declared anywhere. For instance your code might assume the payload is JSON with several required fields in it.
This is where a lot of developers go overboard - not just in type systems, but in general. They are so afraid of duplication, they over-generalize and end up in a quagmire of unreadable overly complicated code.
Some duplication is easy. It's just code volume, and volume shouldn't be as scary as complexity.
I prefer some duplicate lines over having to go back and forth over some source files only because some developers think that less code is better code.
The code we create should be made for humans to read, not for machines, and especially not to brag about how clever our code is.
Once types get so complex, I’ve no idea what’s going wrong.
Today I had code that ran fine but the compiler threw errors all over the place because of some deeply nested type mismatch between two libraries.
I just any’d it… i aint got no time for that shit
But just like you probably struggled with and overcame many things before, it will be the same now. It's just that you can opt out of the type system in TypeScript, whereas you are forced to learn how to deal with the runtime.
But if you make it, your development experience will change drastically. The time might be very well spent.
I kinda agree - often no amount of "this is a bad idea" will teach as well as just letting someone make the mistake and actually experience the consequences.
The only problem is that hard-to-maintain code often doesn't cause any problems until you've written a critical mass of it and end up trying to develop enough non-trivial extra features on top of it.
It's a matter of choosing a solution that is clear, easy to understand, and easy to maintain. There are nearly limitless solutions that can fit that definition.
Restraint comes into play because devs tend to "treat every problem like a nail when they have a hammer". When devs learn new concepts, they often look for places to use that concept even when it's a bad fit.
An example of this is excessive use of inheritance when simpler types fit better. Many of us have dealt with the greenhorn that creates a giant inheritance tree or generic mess after they first learn that "neat" concept.
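For example, a discriminated union often does the job of a small inheritance tree with far less machinery; a rough sketch:

```typescript
// Instead of Shape -> Circle/Rect subclasses, a discriminated union
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(s: Shape): number {
  // The compiler narrows each branch and checks exhaustiveness
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "rect":
      return s.width * s.height;
  }
}

console.log(area({ kind: "rect", width: 2, height: 3 })); // 6
```

No base class, no virtual dispatch, and adding a variant produces compile errors at every unhandled `switch`.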
The reason OP encourages restraint might be the mental overhead of understanding what's "correct", as well as needing to rely not only on yourself but on other people to be correct.
Sometimes simple is faster and harder to screw up.
I feel like readable dynamically typed code is more easily "trained" into younglings than the typed equivalent.
I understand that no typing allows for much, much worse code bases, but my experience has been the opposite.
The devs now have guardrails in place to make sure they follow the spec...
Advanced types are invaluable when you are writing a framework or library... But in every day implementation, I agree they should be used sparingly.
I have 7 years of ts experience and I'll still 'as any' a reduce function from time to time
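The usual suspect is a reduce into an object, where annotating the accumulator is all it actually takes; a quick sketch:

```typescript
// Counting word occurrences: TS can't infer a useful accumulator type from {}
const words = ["a", "b", "a"];

// The quick-and-dirty escape hatch
const counts1 = words.reduce((acc: any, w) => {
  acc[w] = (acc[w] ?? 0) + 1;
  return acc;
}, {});

// The fully typed version just needs the accumulator annotated
const counts2 = words.reduce<Record<string, number>>((acc, w) => {
  acc[w] = (acc[w] ?? 0) + 1;
  return acc;
}, {});

console.log(counts1, counts2); // both { a: 2, b: 1 }
```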
Ignoring the 60-character type definition isn't going to make the underlying problem (that you have 60 possible variants) go away just because you didn't write it down in the types.
Give someone (particularly a developer) the opportunity to build something complicated and undoubtedly they will. So now you have two problems, the complicated program that actually does some hopefully useful work, and another complicated program on top of it that fills your head and slows you down when trying to fix the first complicated program. You may say 'ah yes, but the second complicated program validates the first!'. Not really, it just makes things more complicated. Almost all bugs are logic bugs or inconsistent state bugs (thanks OOP!), almost none are type bugs.
However, static analysis of existing code (in Javascript), without having to write a single extra character, may well have great value in indicating correctness.
Edit:
> TypeScript's type system is a full-fledged programming language in itself!
Run! Run as fast as you can! Note that this 'full-fledged programming language' doesn't actually do anything (to their credit they admit this later on)
Edit2:
> [...] is a type-level unit test. It won't type-check until you find the correct solution.
> I sometimes use @ts-expect-error comments when I want to check that an invalid input is rejected by the type-checker. @ts-expect-error only type-checks if the next line does not!
What new level of hell are we exploring now??
I am genuinely afraid and I'm only halfway through this thing. What's next? A meta type level language to check that our type checking checks??
> 3. Objects & Records
> COMING SOON!
> this chapter hasn't been published yet.
Thank God, I am saved.
Massive productivity boost, and I have a kind of confidence in my code that I never have had before, not having used a strongly typed language before.
I've been coding C++ most of my life, and I must say, TS is starting to look more and more like C++ (which definitely isn't a good thing because it lures programmers into complexity).
Maybe it's because I came from C++.
Like in C++, I like the flexibility of not being too under-typed (impossible to not break anything) or over-typed (impossible to do anything).
What? No it frees-up my mind and speeds me up.
The mental gymnastics I have to engage in to work on large JS projects without TypeScript is unbearable. I have to switch between the two often and it’s night and day.
Typescript isn’t a “now you have two problems” anymore than types in any other language are.
The whole point of using types is making inconsistent states type bugs. Types are logic.
This article presents a great example: https://fsharpforfunandprofit.com/posts/designing-for-correc...
Most, possibly all, inconsistent-state bugs, and many logic bugs, are type bugs under a sufficiently expressive type system properly used. That's why type systems have progressed from basic ones, which evolved from systems whose main purpose was laying out memory rather than ensuring correctness, to today's more elaborate systems.
Many bugs of these classes can be avoided with a sufficiently expressive type system. There’s a reason that Haskell programmers say if it compiles, it probably works correctly.
With a sufficiently powerful type system (and typescript is basically the only non-functional language that makes the cut here) these aren't all that distinct. But even in codebases that don't take advantage of that power, this has not been my experience. I recently converted about ten thousand lines of legacy javascript to typescript at work, and discovered several hundred type errors in the process. State bugs also slip through pretty often, but we almost always catch pure business logic errors at code review.
What if you drastically reduce the possibility of inconsistent state by making it unrepresentable at the type level?
What if you immediately know that you’ve exhaustively handled all your cases?
What if types push parsing in the right direction?
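A minimal sketch of what "unrepresentable" can look like in TS (the request-state shape here is just an illustration):

```typescript
// An async request modeled so that inconsistent combinations
// (e.g. "loading" with data, "error" without a message) cannot be constructed
type RequestState<T> =
  | { status: "idle" }
  | { status: "loading" }
  | { status: "success"; data: T }
  | { status: "error"; message: string };

function describe(state: RequestState<string>): string {
  // The compiler forces exhaustive handling of every case
  switch (state.status) {
    case "idle":
      return "not started";
    case "loading":
      return "loading";
    case "success":
      return state.data; // `data` only exists on the success branch
    case "error":
      return state.message; // `message` only exists on the error branch
  }
}

console.log(describe({ status: "success", data: "hello" }));
```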
If you test-call such a function without arguments you will then know what kinds of values you can expect it to return.
The argument default values cannot be inner functions, but they can be any function that is in scope. Or, if you are using classes, a default could be a reference to any method of `this`.
Then add some asserts inside the function to express how the result relates to the arguments. No complicated higher-order type-definitions needed to basically make it clear what you can expect from a function. Add a comment to make it even clearer.
Only true if you're using very simple types, i.e. number and string. But "string" is pretty close to "any" and doesn't give you much info. If my function only expects two or three possible strings, it should be typed to only take those ones.
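For instance (hypothetical `setAlignment` function):

```typescript
// Instead of accepting any string, restrict to the values the function handles
type Alignment = "left" | "center" | "right";

function setAlignment(a: Alignment): string {
  return `align: ${a}`;
}

console.log(setAlignment("center"));
// setAlignment("middle"); // compile error: "middle" is not assignable to Alignment
```

Typos and unsupported values become compile errors instead of silent runtime surprises.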
Comments are not a solution for much of anything, btw, and they only "work" if you read them. How many comments are in your node_modules folder?
Just today I was looking at the type definition for a third-party lib (ramda)... what the heck does this even mean?

```
compose<V0, V1, V2, T1, T2, T3, T4, T5, T6>(
  fn5: (x: T5) => T6,
  fn4: (x: T4) => T5,
  fn3: (x: T3) => T4,
  fn2: (x: T2) => T3,
  fn1: (x: T1) => T2,
  fn0: (x0: V0, x1: V1, x2: V2) => T1
): (x0: V0, x1: V1, x2: V2) => T6;
```

Got it?
I don't know ramda, but I assume this is only part of the type definition of compose, and this is just the longest part of it. I think compose is written in such a way that it can accept as many conversion functions as you want, and this is just the longest variant that is encoded in the types.
```
let compose f5 f4 f3 f2 f1 f0 x0 x1 x2 = f5 (f4 (f3 (f2 (f1 (f0 x0 x1 x2)))))
```

whose signature is

```
val compose : ('a -> 'b) -> ('c -> 'a) -> ('d -> 'c) -> ('e -> 'd) -> ('f -> 'e) -> ('g -> 'h -> 'i -> 'f) -> 'g -> 'h -> 'i -> 'b = <fun>
```
And even that verbose signature is better than none imo.
I also recommend type-challenges: https://github.com/type-challenges/type-challenges
It works great with the VSCode extension.
What do you mean? Which extension?
There is no "dependency injection" in a functional world, take this opportunity to show your colleague how FP makes their life easier. It's just a function.
Instead of a class, implementing an interface, created by a factory, requiring a constructor, all you need is a function.
Anything that was previously a "dependency" in OO terms is now an argument to your function. If you want to "inject" that dependency you simply partially apply your function, the result is then of course a function with that "dependency" "injected" which can then be used as usual. In JavaScript there's even a nifty built-in prototype method on every function called `Function.prototype.bind` which allows you to do the partial application to create the "dependency injected" function!
Example:
```
const iRequireDependencies = (dependencyA, dependencyB, actualArgumentC, actualArgumentD, ...etc) => console.log(dependencyA, dependencyB, actualArgumentC, actualArgumentD, ...etc);
const withRandomDependencies = iRequireDependencies.bind(undefined, 'randomA', 'randomB')
withRandomDependencies('actualA', 'actualB', 'actualC', 'actualD', 'actualE') // etc
// => 'randomA' 'randomB' 'actualA' 'actualB' 'actualC' 'actualD' 'actualE'
```
Sure, there's solutions for this in the FP world, but in my experience they tend to have their own drawbacks. Admittedly, I've only ever used TS on the front-end (with no DI), so I've never really looked at what FP-style libraries exist for this.
My view on that is that it's okay to use OOP and define classes if you are really defining an OOP-style object. Back in the 90s it was taught that an object has identity, state, and behaviour. So if you don't have all three, it's not really an object in the OOP sense.
Looking at it through this lens helps make it clearer when you should add classes or just stick to functions and closures.
Indeed, if you want to use emitDecoratorMetadata for automatic dependency injection, you should use classes. If the library itself takes advantage (again likely due to decorators) of classes e.g. https://typegraphql.com/docs/getting-started.html then yes, classes are again a fine choice.
The general answer is that they're useful when the type also needs to have a run-time representation (and metadata). Otherwise, not really.
A few objects contain state, like a DB connection/client or a RequestContext you pass down through your request-handler middlewares. Those are OOP classes with an interface definition.
Everything else is just functions and closures. We also generate interface objects from our GraphQL types but that’s not a real OOP type, it’s just an interface.
If you keep to that structure, you’ll largely avoid the whole polymorphism OOP type hierarchy hell and all the dangers that come with it.
As for DI (dependency injection), that’s honestly just a fancy form of passing parameters down through function calls. Technically, the RequestContext I mentioned before is a “ball of mud” provider pattern DI code smell. So maybe down the road we will use DI to create more constrained context scopes.
If I do go that route for DI, I would likely strongly follow a CQRS style class pattern to inject objects and keep them nicely named and organized. Would also fit nicely pattern wise with the existing function + closures architecture.
But yeah, overall, stick to functions and closures, use OOP style classes sparingly and you’ll get the best of all worlds.
If you got your first taste of TypeScript from Angular and have a full-stack background in C#/Java, the class-based style will make you feel right at home.
React seems to oscillate between the two styles.
My recent work in Svelte seems to favor functions and types.
IMO the biggest benefit of classes is the code organization they bring. Have you ever seen a "util" class or folder? That's what tends to happen to a code base without strong cohesion; it becomes hard to find anything.
With typescript/js you have modules as a pretty good substitute.
What I love about typescript is that you can mix the two.
Mostly classless module based with the occasional class (logger with a constructor to pass in the current module name for example) seems to be what I like most now. Use what makes sense.
It comes down to choice; pick one for the project and be consistent. That’s all that matters.
DI does not require OOP, and you don’t need DI to write good TS/JS code. DI is common in OOP, but if you aren’t using OOP you don’t need DI anyway.
Because DI is just “give me the dependencies I need when I declare I need it” you can use simple classes as scopes similar to CQRS patterns and continue doing functional programming from there.
It’s quite neat how you can interchange between the two and have it work rather nicely.
Technically, you could even do the same thing with closures and avoid OOP style classes all together even.
DI lives on, it just looks a little bit different than the constructor injection we’re used to seeing in OOP.
Though I would be wary of introducing patterns and paradigms that make sense in a different language when Typescript offers an ultimately simpler solution. Working against the grain helps nobody. Goes for both OOP and FP, really.
ES modules, functions, and well designed TS models get you 95% of the way.
0 - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
The biggest argument is that my functional-ish code is always 3x shorter with the same features, though.
Also, even if you didn't want to mock that way, you can get dependency injection with functions just by taking a parameter for a dependency. If dependency injection is the only reason you have to use a class, you probably shouldn't use a class.
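A minimal sketch of that parameter-based injection (the `Logger`/`greetUser` names are made up):

```typescript
// The "dependency" is just a parameter
type Logger = (msg: string) => void;

function greetUser(log: Logger, name: string): string {
  log(`greeting ${name}`);
  return `Hello, ${name}!`;
}

// In production, pass console.log; in tests, "inject" a fake to capture calls
const captured: string[] = [];
const result = greetUser(m => { captured.push(m); }, "Ada");

console.log(result, captured);
```

No class, no container, no factory; swapping the dependency is just passing a different argument.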
in Java it means "you should implement this contract"
in Typescript it means "this data type has this particular shape. It may have methods, too".
in Go it means "I'd like users of this code to implement these methods" (client interfaces).
In all cases you have to work differently with them. It's not even about OOP, I think, to the point where I'm not sure now if the keyword 'interface' is part of OOP at all.
Showing fully resolved types in IntelliSense would be the single largest usability enhancement they could make for me right now.
I know the universe at large has moved away from eclipse, but I loved their rich tooltips where you had nice structured representation (not just a blob of text from lsp) and could click through and navigate the type hierarchy.
Although, as a Haskell developer, I am curious what type system TS is using (System F? intuitionistic type theory? etc.) and what limitations one can expect. Aside from the syntax of TS being what it is, what are the trade-offs and limitations?
I was under the impression (though this was years ago, so things are probably different now) that TS's type system wasn't sound (in the mathematical-logic sense).
As a consequence, it has aspects of structural types, dependent types, type narrowing, and myriad other features that exist solely to model real-world JavaScript.
As far as soundness: it's not a goal of the type system. https://www.typescriptlang.org/docs/handbook/type-compatibil...
This means that it's impossible to write this function:

```
function isStringType<T>(): boolean { return ... }

const IS_STRING: boolean = isStringType<string>();
```

At best you can do something like this, which is inconvenient for more complex cases:

```
function isStringType<
  T,
  IsString extends boolean = T extends string ? true : false
>(isString: IsString): boolean {
  return isString
}

const IS_STRING_1: boolean = isStringType<string>(true); // compiles
const IS_STRING_2: boolean = isStringType<string>(false); // type error
```

You basically need to pass in the actual result that you want, and you just get a type error if you pass in the wrong one. Still better than nothing. Link if you want to play with it online: https://www.typescriptlang.org/play?#code/GYVwdgxgLglg9mABDA...
Put another way, you can't do reflection with TypeScript.
You can write that function in C++ templates, and I naively assumed that it's possible in TypeScript too, since from my observations TypeScript allows complex typing to be expressed easier in general than C++.
It is a bit annoying sometimes that you can't have overloaded functions with different types, but in that case you can usually just give the overloads different names, and usually that's better for readability anyway. (Or, if you really want to, write one function and use JS reflection to do the overloading manually. But you really don't!)
Here’s an interesting discussion of the overloading question in Swift: https://belkadan.com/blog/2021/08/Swift-Regret-Type-based-Ov...
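A quick sketch of the rename-instead-of-overload idea (function names are made up):

```typescript
// Two clearly named functions instead of one type-overloaded `parse`
function parseCsvLine(line: string): number[] {
  return line.split(",").map(Number);
}

function parseByteArray(bytes: Uint8Array): number[] {
  return Array.from(bytes);
}

console.log(parseCsvLine("1,2,3"));              // [1, 2, 3]
console.log(parseByteArray(new Uint8Array([4, 5]))); // [4, 5]
```

Each call site now says what it's parsing, with no overload-resolution guesswork.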
it is not - dynamic reflection certainly has its issues, but static reflection is absolutely fine
> it’s good to be forced to do without it.
it just leads to people reinventing it even more badly with separate tools
I actually think this is a super cool and elegant way to do overloading
But I will check if maybe I can use DeepKit to auto-generate files with the reflection info I need as a separate build step.
Worse there is one value that is both a user-definable TypeScript type and a JS value.
How is being able to check code correctness at compile-time even close to "just linting and documentation basically"? This has to be a bad faith argument
> Worse there is one value that is both a user-definable TypeScript type and a JS value.
What does this even mean? I don't think you understand TypeScript.
Can't get enough of the fireworks!
Maybe finishing one of the more advanced chapters would be enough to lure people who are more experienced to check back on progress / pay / whatever you want traffic for.
If the author is reading this, the proposed solution to the `merge` challenge is:

```
function merge<A, B>(a: A, b: B): A & B {
  return { ...a, ...b };
}
```

That's the "obvious" solution, but it means that the following type-checks:

```
const a: number = 1;
const b: number = 2;
const c: number = merge(a, b);
```

That's not good. It shouldn't type-check, because the following:

```
const d: number = { ...a, ...b };
```

does not type-check. And I don't know how to express the correct solution (i.e. where we actually assert that A and B are object types).
Also, looking forward to further chapters.
You can do this:

```
function merge<
  A extends Record<string, unknown>,
  B extends Record<string, unknown>
>(a: A, b: B): A & B {
  return { ...a, ...b }
}

const result1 = merge({ a: 1 }, { b: 2 })
const result2 = merge({ a: 1 }, { a: "fdsfsd" })
```
The correct type is quite complex and depends on whether or not `exactOptionalPropertyTypes` is enabled.

EDIT: I think this is correct:

```
type OptionalKeys<T extends { [key in symbol | string | number]?: unknown }> =
  { [K in keyof T]: {} extends Pick<T, K> ? K : never }[keyof T]

function merge<
  A extends { [K in symbol | string | number]?: unknown },
  B extends { [K in symbol | string | number]?: unknown },
>(a: A, b: B): {
  [K in Exclude<keyof B, keyof A>]: B[K]
} & {
  [K in Exclude<keyof A, keyof B>]: A[K]
} & {
  [K in keyof A & keyof B]: K extends OptionalKeys<B> ? A[K] | Exclude<B[K], undefined> : B[K];
}
```

That's for when `exactOptionalPropertyTypes` is enabled. With it disabled, you'd replace `Exclude<B[K], undefined>` with `B[K]`.

As to whether this is a good idea... it's not :P
Technically, TypeScript "object types" only describe the properties of some value. And in JavaScript, arrays have properties, and primitives have properties; arrays even have a prototype object and can be indexed with string keys. So `{}` doesn't actually mean "any object"; it means "any value" (other than `null` and `undefined`).
At its boundaries, TypeScript has blind-spots that can't realistically be made totally sound. So the best way to think of it is as a 90% solution to type-safety (which is still very helpful!)
We have come far.
Previously, I wrote my route definitions with types for both path params and query params in one file, and used TypeScript to enforce that the equivalent back-end definitions (async loaders etc.) and front-end definitions (React components) were kept in sync.
When I first implemented this in a previous project, I found many instances where routes were expecting query params but they were being dropped off (e.g. post login redirects).
Supporting things like parameters for nested routes certainly means the TS types themselves are non-trivial, but they're the kind of thing you write (and document) once, but benefit from daily.
Examples of stuff that can and should be 100% type checked:

```
// ...
template: {
  path: idPath("/template"),
  subroutes: {
    edit: { path: () => "/edit" },
    remix: { path: () => "/remix" },
  },
},
onboarding: {
  path: () => "/onboarding",
  query: (params: { referralCode?: string }) => params,
  subroutes: {
    createYourAvatar: { path: () => "/createyouravatar" },
  },
},
// ...
```

Routing:

```
// Path params
navigate(routes.template.edit({ id: props.masterEdit.masterEditId }));
// No path params, just query params (null path params)
navigate(routes.onboarding(null, { referralCode }))
// Nested route with query params (inherited from parent route)
navigate(routes.onboarding.createYourAvatar(null, { referralCode }))
```

React hooks:

```
// Path params
const { id } = useRouteParams(routes.template.edit)
// Query params
const { referralCode } = useRouteQueryParams(routes.onboarding);
```

API routes:

```
// ...
play: {
  path: () => "/play",
  subroutes: {
    episode: {
      path: idPath("/episode"),
    },
  },
},
// ...
```

Relative route paths (for defining nested express routers):

```
const routes = relativeApiRoutes.api.account;
router.post(routes.episode(), async (req: express.Request, res: express.Response) => {
```

This seems like a great time to bring up "Why Static Languages Suffer From Complexity"[1], which explains the "statics-dynamics biformity" that leads to languages like TypeScript that are actually two languages: the runtime one and the compile-time type-system one.
[1] https://hirrolot.github.io/posts/why-static-languages-suffer...
After so many years of JS programming, moving to a company that uses TS extensively (at huge scale) feels life-changing. You don't even know the effect until you use it daily. Even so, daily usage of TypeScript in a large web application doesn't seem to reach its full potential. I feel like library creators and maintainers use it more, in the definitions that they create (e.g. redux toolkit's types are mind-blowing).
Thanks for creating this lesson, it will definitely teach me a lot
Particularly enjoy the confetti :)
It’s very opinionated about the way you structure your code, basically makes anything that’s not fully fp-ts hard to integrate, and is quite hard for general JS people to wrap their heads around.
It’s been designed by FP people for FP people and if there are some on your team who are not fully on board or are just starting to learn FP - expect lots of friction.
At my company it was mostly scala coders and “cats” lovers (category theory stuff lib for scala) mixed in with regular nodejs devs and I could sense a lot of animosity around fp-ts and its use.
But on a more practical note: the more they converted their codebase to fp-ts, the more they reported massive compile-time slowness. It would start to take minutes to compile their relatively isolated and straightforward services.
From what I gathered, if you want to go fp-ts, it’s just too much friction, and you’re much better off picking a language designed from the ground up for that: Scala / OCaml / Elixir / etc.
To be honest, once I became comfortable enough with the more advanced TS features, I found you can write plain old JavaScript in a very functional style, and that’s actually pretty great, especially if you throw date-fns, lodash/fp, or ramda into the mix; it remains largely approachable to people outside FP, and you can easily integrate external libs.
IMHO, functional TS is great with ramda, currying etc. and solves a lot of problems nicely. See also https://mostly-adequate.gitbook.io/mostly-adequate-guide/ch0...
Ramda et al. seem like a good compromise. Looking through its docs, though, doesn't JS already have a lot of this covered, i.e. filter, map, reduce, etc.? What new stuff does it bring that covers, say, 90% of common use cases?
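The usual answers are currying and composition. A hand-rolled sketch (ramda's `R.pipe` behaves similarly, among much else):

```typescript
// What composition adds beyond the built-in map/filter/reduce:
// building reusable transformations point-free, without lambda noise at call sites
const pipe = <A, B, C>(f: (a: A) => B, g: (b: B) => C) => (a: A): C => g(f(a));

const double = (n: number) => n * 2;
const increment = (n: number) => n + 1;

const doubleThenIncrement = pipe(double, increment);

console.log([1, 2, 3].map(doubleThenIncrement)); // [3, 5, 7]
```

The built-ins cover iteration; libraries like ramda cover assembling small functions into bigger ones.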
```
interface ClientMessage { content: string }

function isClientMessage(thing: unknown): thing is ClientMessage {
  return (
    thing !== null &&
    typeof thing === 'object' &&
    // cast needed: after narrowing to `object`, `.content` is not yet known
    typeof (thing as ClientMessage).content === 'string'
  )
}

expect(isClientMessage('nope')).toBeFalse()
expect(isClientMessage({ content: 'yup' })).toBeTrue()
```
but user-defined type guards basically duplicate the interface, are prone to error, and can be very verbose. io-ts solves this by creating a run-time schema from which build-time types can be inferred, giving you both an interface and an automatically generated type guard:
```
import { string, type } from 'io-ts'

const ClientMessage = type({ content: string })

expect(ClientMessage.is('nope')).toBeFalse()
expect(ClientMessage.is({ content: 'yup' })).toBeTrue()
```
Very nifty for my client/server monorepo using Yarn workspaces, where the client and server message types are basically just a union of interfaces (of varying complexity) defined in io-ts. Then I can just:

```
ws.on('message', msg => {
  if (ClientMessage.is(msg)) {
    // fulfill client's request
  } else {
    // handle invalid request
  }
})
```

The only thing missing is additional validation, which I think can be achieved with more complicated codec definitions in io-ts.
In practice though I find that they don't mesh well with the language and ecosystem at large. Using them in a React/Vue/Whatever app will catch you at every step, as neither the language nor the frameworks have these principles at their core. It takes a lot of effort to escape from their gravitational pull. Using Zod [2] for your parsing needs and strict TS settings for the rest feel more natural.
It could work in a framework-agnostic backend or logic-heavy codebase where the majority of the devs are in to FP and really want / have to use Typescript.
0 - https://gcanti.github.io/fp-ts/
1 - https://gcanti.github.io/io-ts/
2 - https://zod.dev
If you go all in and use every utility it provides to write super succinct FP code, it can get pretty unreadable.
```
type Expect<T extends true> = T;
```
`T extends true` puts a type constraint on the parameter, which then needs to be assignable to the literal type `true` to type-check.
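Putting it together, a sketch of how `Expect` is typically paired with an `Equal` helper in type-challenges style (the `MyPick` example is illustrative):

```typescript
type Expect<T extends true> = T;

// The standard Equal trick: compares X and Y via conditional-type assignability
type Equal<X, Y> =
  (<T>() => T extends X ? 1 : 2) extends (<T>() => T extends Y ? 1 : 2)
    ? true
    : false;

// A hand-rolled Pick to "test" at the type level
type MyPick<T, K extends keyof T> = { [P in K]: T[P] };

// This line only compiles if MyPick behaves like the built-in Pick
type cases = [Expect<Equal<MyPick<{ a: 1; b: 2 }, "a">, { a: 1 }>>];

// Equal resolves to a literal boolean type we can also surface at runtime
const check: Equal<{ a: 1 }, { a: 1 }> = true;
console.log(check);
```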
This kind of stuff is often confusing when working with teams. Using simple, dumb stuff is always the better option when you can.
P.S. You might like this http://beza1e1.tuxen.de/articles/accidentally_turing_complet...