Haskell is admittedly probably the most powerful widely (or even somewhat widely) used language for doing this, but the general pattern works really well in Rust and TypeScript too, and it's one of my very favorite tools for writing better code.
I also really like doing things like User -> LoggedInUser -> AccessControlledLoggedInUser to prevent the kind of really obvious AuthZ bugs people make in web applications time and time again.
I've found this pattern to be massively underutilized in industry.
Imagine you have to distinguish between unescaped and escaped strings for security purposes. Even with a dynamically typed language, you can keep escaped strings as an Escaped class, with escape(str)->Escaped and dangerouslyAssumeEscaped(str)->Escaped functions (or static methods). There's a performance cost to this, so that's a tradeoff you have to weigh, but it is possible.
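A minimal sketch of what that might look like in TypeScript (the names follow the comment above, except `escape` is spelled `escapeHtml` here to avoid shadowing the global; the specific HTML-escaping rules are just illustrative):

```typescript
// Wrapper class so escaped strings are a distinct type at runtime,
// even without help from a static type checker.
class Escaped {
  constructor(readonly value: string) {}
}

// Escape HTML metacharacters, then wrap the result.
// (Corresponds to escape(str) -> Escaped in the comment above.)
function escapeHtml(str: string): Escaped {
  return new Escaped(
    str
      .replace(/&/g, "&amp;") // must run first, before other entities are added
      .replace(/</g, "&lt;")
      .replace(/>/g, "&gt;")
      .replace(/"/g, "&quot;")
  );
}

// Assert (dangerously) that a string is already escaped.
function dangerouslyAssumeEscaped(str: string): Escaped {
  return new Escaped(str);
}

// A sink that only accepts Escaped: rejected at compile time in
// TypeScript, or by an instanceof check at runtime in a dynamic language.
function render(html: Escaped): string {
  if (!(html instanceof Escaped)) throw new TypeError("unescaped string");
  return html.value;
}
```

The `instanceof` check and the extra allocation per string are exactly the performance cost mentioned; in a statically typed language the runtime check can be dropped.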
Another way of doing this is Application Hungarian[1], though that relies on the programmer more than it does on the compiler.
[1] https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...
That part is (de facto) required for dynamically typed languages, but not for statically typed ones where the newtype constructor/deconstructor can be elided at compile time. Rust and C++ especially both do the latter by having true value types available for wrappers that evaporate into zero extra machine code.
But then just this moment I wondered: do any major runtimes using models with no static type info manage to do full newtype elision in the JIT and only box on the deopt path? What about for models with some static type info but no value types, like Java? (Java's model would imply trickiness around mutability, but it might be possible to detect the easy cases still.) I don't remember any, but it could've shown up when I wasn't looking.
As for other JVM languages like Kotlin and Scala, they have essentially what "newtype" is, but it can only be completely erased in the bytecode when the wrapper has a single field.
This just isn't true.
In any dynamic language you would not get these guarantees at compile time. You'd get random failures at runtime. That's not safety of any kind.
Also, part of the goal of languages like Haskell is that they help you think about your code before it runs. All of that is lost.
> Imagine you have to distinguish between unescaped and escaped strings for security purposes
That would be a nightmare in many languages. You'd have to rewrite large parts of the code to be compatible with one or both. And in many languages you'd have to duplicate your code entirely.
In other languages, the result would be so ugly you would never want to touch that code. Imagine doing this with, say, templates in C++.
> There's a performance cost to this
There is no performance cost in Haskell! The wrapper is entirely erased by the compiler.
Also, because the compiler understands what's going on at a much higher level, you can do things like deriving code. You can say that your classified strings behave like your regular strings in most contexts, like say, they're the same for the purpose of printing but not for the purpose of equality, in one line.
You can do it in Assembly. That doesn't mean it's cost effective.
The Confucian idea that people act like water coming down a mountain, seeking the path of least resistance, comes into play.
Haskell, OCaml, F#, and their ilk can yield beautiful, natural domain languages where using the types wrongly is cost-prohibitive. In languages without those guarantees, every developer needs the discipline to avoid shortcuts, review burdens increase, and time-pressure discussions get rehashed.
And of course Rust and TypeScript were heavily influenced by Haskell... they just don't mention it, and they name things differently to avoid the "monads are scary, I need to write a tutorial" effect. Though it's less about monads and more about things like type classes.
Imitation is the sincerest form of flattery.
Rust's Wikipedia page says otherwise.
Personally, I never enjoyed Haskell's syntax (or lack of it) and its tendency toward overthinking. But I did enjoy SML/NJ and OCaml to some extent.
my experience is that ocaml is more powerful than rust for enforcing this sort of type safety, because you have gadts that give you more expressive power, and polymorphic variants and object types (record row types) that give you more convenience. and the module system and functors of course.
you also avoid some abstraction limitations/difficulties that come from the rust borrow checker for places where garbage collection is just fine
Somehow, it feels like a better solution than these complicated type systems. Does any other language do this outside BEAM?
The point of using the type system to do something like distinguish between sanitized and unsanitized strings is specifically to prevent these kinds of security breaches.
Erlang was designed for traditional telecom, where reliability of connections was the biggest factor, not security. I fail to see how Erlang’s approach can deal with the issue of security breaches or corrupted user data.
But I did want to add something the article also touches on: types can be not only about ensuring safety or correctness at runtime, but also about representing knowledge by encoding the theory of how the code is supposed to work as far as is practical, in a way that is durable as contributors come and go from a codebase.
Admittedly this can come at the cost of making it slower to experiment on or evolve the code, so you have to think about how strongly you want to enforce something to avoid the rigidity being more painful than valuable. But it's generally a win for helping someone new to a codebase understand it before they change it.
Edit: another thought I had is that type mistakes do not always cause crashes. Silent corruption can be much more insidious, e.g. from confusing types which mean something different but are the same at the primitive level (e.g. a string, number or uuid).
There are some expectations where that's a reasonable response to a violation, but there are many where the violation implies a bug elsewhere, and crashing the process does nothing to address it that wouldn't have been better accomplished with stronger compile-time checking.
The amount of silencing of errors (implementer error, but quite prevalent) I've seen in TypeScript codebases is horrifying. Essentially "try the happy path, catch everything else and return a generic error". The result is mostly the same for the user, but night and day for me when I'm trying to fix it.
type NewType<T, Name> = T & { readonly __brand: Name };
I don't really see a big problem here? type NewType<T> = T & { __brand__ : Symbol }
This seems wrong; the type spelled `Symbol` refers to the boxed interface for symbols[0]. I suspect you meant to write `unique symbol` there, but it can't be used in that position.
I'm not sure if `NewType` in your comment is supposed to stand in for a specific newtype (in which case it probably doesn't need to be generic[1]) or if it's supposed to be a general-purpose type constructor for any newtype (in which case it should take a second type parameter to let me distinguish e.g. `EmailAddress` from `Password`[2]). The use of `unique symbol`s is also only really necessary if you want to keep the brand private to force users to go through a validation function or whatnot, otherwise you can just use string literal types.
I agree these incantations aren't big problems (it all falls out naturally from knowledge of TypeScript's type system, and can be abstracted away as per my comment in [2]), but the fact that you goofed in the very comment where you were trying to make that point is causing me to second-guess myself.
[0]: https://github.com/microsoft/TypeScript/blob/v6.0.3/src/lib/...
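To make the distinction concrete, here's one way the branded-newtype pattern under discussion can be packaged, with a validating constructor so the brand can't be forged accidentally (the `EmailAddress` type and `parseEmail` validator are illustrative examples, not from either comment):

```typescript
// General-purpose brand: a phantom property that exists only at the
// type level. A string-literal brand is enough when the brand need
// not be private; a `unique symbol` would hide it entirely.
type Brand<T, Name extends string> = T & { readonly __brand: Name };

type EmailAddress = Brand<string, "EmailAddress">;

// The only sanctioned way to obtain an EmailAddress: validate first,
// then assert the brand. (Deliberately simplistic validation.)
function parseEmail(s: string): EmailAddress | null {
  return /^[^@\s]+@[^@\s]+$/.test(s) ? (s as EmailAddress) : null;
}

function sendTo(addr: EmailAddress): string {
  return `sending to ${addr}`;
}

// sendTo("plain string")  -- rejected: a bare string has no brand
// sendTo(parseEmail("a@b.com")!)  -- OK after validation
```

Because `EmailAddress` is still a `string` at runtime, the wrapper costs nothing; the brand exists purely for the type checker.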
https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...
You do not need Haskell for that; e.g. it works in Python (via pydantic or attrs data classes).
In my backend system I represent users with different variant states to make a lot of invalid states unrepresentable.
As for underutilization, I think only functional languages, Rust, and C++ support variants, and that might be one reason: people just make blobs of state and choose which fields to use, instead of encoding states and making some combinations unrepresentable. JavaScript, Java, C#, and Python do not have variant types to the best of my knowledge. In OCaml and Haskell, with pattern matching, they are very natural. In Rust, with enums, same. In C++ they are so-so, but still usable compared to the languages that lack them.
In my load tests, since I launch thousands of clients, I even went with a Boost.MSM state machine to drive the test behavior. One state machine per user.
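A hedged sketch of the "make invalid states unrepresentable" idea in TypeScript, whose discriminated unions are the closest analogue to the OCaml/Haskell/Rust variants mentioned above (the field names are illustrative, not from the comment's actual system):

```typescript
// Each variant carries only the fields that make sense for it:
// an anonymous user cannot have a userId, a suspension must have
// an expiry. The invalid combinations simply don't type-check.
type UserState =
  | { kind: "anonymous" }
  | { kind: "loggedIn"; userId: number }
  | { kind: "suspended"; userId: number; until: Date };

// An exhaustive switch: adding a new variant later makes this
// function fail to compile until the new case is handled.
function canPost(u: UserState): boolean {
  switch (u.kind) {
    case "anonymous":
      return false;
    case "loggedIn":
      return true;
    case "suspended":
      return u.until.getTime() < Date.now(); // suspension expired
  }
}
```

The same shape is a Rust `enum` with `match`, or an OCaml variant with pattern matching; only the C++ `std::variant`/`std::visit` version gets noticeably clunkier.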
I believe `const` functions in Rust are actually guaranteed to be pure, though I haven't followed that feature closely and there may be nuances.
In most languages purity is a norm rather than enforced by static analysis. I definitely agree that it's much safer to assume that an arbitrary Haskell function is pure than it is to assume that of an arbitrary TypeScript function.
On our last product, we decided to start switching from Typescript to Rust on the backend because we got tired of crashes. I consider that to be one of the greatest technical mistakes I've made ever, as our productivity slowed massively. I'll just share two time-draining issues that only occur in Rust: (1) Writing higher-order functions (e.g.: a function to open a database connection, do something, and then close it -- yes, I know you can use RAII for this particular example), which is trivial in Haskell and TypeScript and JavaScript and C++ and PHP, turned out to be so impossible in Rust [even after asking Rust-expert friends for help], that I learned to just give up and never try, though it sometimes worked to write a macro instead. (2) It's happened many times that I would attempt a refactoring, spend all day fixing type errors, finally get to the top-level file, get a type error that's actually caused somewhere else by basic parts of the design, and conclude the entire refactoring I had attempted is impossible and need to revert everything.
On top of that, Rust is the only modern language I can name where using a value by its interface instead of its concrete type lies somewhere between advanced and impossible, depending on what exactly you're doing.
I came away concluding that application code (as opposed to systems or library code) should, to a first approximation, never be written in Rust.
However, I share your conclusion: outside scenarios where having automated resource management as the main approach is either technically impossible, or a waste of time trying to change a pervasive culture, I don't see much need for Rust.
In fact, for those who write comments about wanting Rust without the borrow checker, the answer already exists.
Way back as an undergrad in 2011, I contributed to Plaid, a JVM language whose main feature is based on affine and linear types. I'm one of the very few people in the world who knew what borrowing is before Rust had it. So I know first-hand that borrow-checking is perfectly compatible with garbage collection.
(2) In such situations the compiler (type system or borrow checker) is telling you that what you wanted to do has hidden bugs, and therefore refuses to compile. Usually that's a good thing.
(3) &dyn Trait
(2) No, it stems from a compiler limitation (imposed in large part by the need for static memory layout), not because there's anything intrinsically buggy about doing this.
(3) Look up "dyn-compatibility", for the largest, but not the only, problem with doing this.
Only other language that I think gets close to rust ergonomics is Kotlin, but it suffers from having too many possibilities for abstractions.
In my line of work we don't do web servers from scratch; we use Lego pieces, as with enterprise integrations.
Think Sitecore, Dynamics, SharePoint, Optimizely, Contentful, SAP, Magnolia, Stripe, PayPal, Adobe, SQL Server, Oracle, DB2, ...
Axum offers very little over existing .NET, Java, nodejs SDKs provided by those vendors.
The things I found quite difficult or impossible in Rust were, to me, pretty basic patterns for modularity and removing duplication; it's really shocking that these complaints are not more common.
I currently have but two hypotheses for why.
First, the second problem I mentioned only comes from using tokio, which causes your top-level program to secretly be using a defunctionalized continuation data type, derived from exactly where in other files you put your awaits, that might not be Send. If you're not using tokio, you won't experience that issue.
Second... I was kinda told to just give up on deduplication and have lots of copy+pasted code. This raises the very uncomfortable hypothesis that Rust aficionados are some combination of people who came to Rust early and never learned traditional software design and don't know what they're missing, and people who were raised on traditional good software engineering but then got hit with Rust's metaphorical baseball bat of lack-of-modularity over and over until they got used to being hit with a baseball bat as a normal pain of life.
I don't like either of these explanations (esp. with tokio seeming quite dominant), so I'm awaiting an explanation that makes more sense. https://xkcd.com/3210/
As a customer of Mercury, it's truly one of the critical companies in my toolkit, and I just can't help but feel that their choice of Haskell made their progress, development, and overall journey that much better. I realize you can make this argument with most languages, and it's not to say that an FP language like Haskell is a recipe for success, but this intentional decision, particularly before "vibe coding" and the LLM era, seems especially prescient, combined of course with the engineering culture detailed in the post.
Looks to me like you can build amazingly robust pieces of software with functional programming.
However, I am divided.
I have a backend that works in NiceGUI for a product. It does the job. The code is reasonable and MVVM. The most important task it does is connecting to a websocket per customer and consume data to present some analytics.
I will not have a great deal of customers, maybe in the tens or maximum hundreds visiting the website.
I also want REPL and/or hot reload, but I am aware that as I grow features (users admin panels, more analytics, etc) maybe functional programming can do a good job transforming data pipelines.
But Haskell or Ocaml are static. I guess if I want something later that grows and scales and is still dynamic Clojure or Elixir should be a good choice. But at the same time I am afraid that if at some point I need to refactor, things will go wrong.
Currently I use Python with Mypy. All is written in the backend: the frontend is generated by NiceGUI from the backend.
Haskell is so, so correct that it tends to get a bit in the way, and you tend to encode everything in the type system. This is a blessing for correctness and a curse for other stuff (tracing, debugging, adding side effects).
This is the reason why I am looking at Ocaml instead of Haskell: not so pure, more pragmatic and supports imperative programming well.
As I said, it is double-edged.
In functional/declarative style, you generally describe how things should be, not how things are made, and you let the language piece everything together to get the expected result in the end. It is all well and good (and even better) if you did everything right, but what if you didn't and you don't get the expected result? How do you find the bug?
In a language like C, it is relatively straightforward: go line by line, look at the execution state (the RAM, essentially) between each step and if it isn't as expected, something wrong must happen at that line, so you step in and progress like that. Harder to do when the language goes out of its way to hide the state from you, as it is the case for functional programming.
It is interesting that the longest section of the article is about this problem: "design for introspection", where the author has to go out of his way to make his code debuggable. A good insight on the often overlooked practical use of Haskell.
No other (mainstream) language comes close.
But what about situations where the code cannot be written in such a form (like shared memory concurrency)? I use transactions for that.
No other (mainstream) language comes close.
And that's without the low hanging fruit of no nulls, no implicit integer casts, etc.
It is absolutely true that debugging Haskell code is harder than debugging other languages. If you took away the bottom 90% of footguns, how could it not be?
We didn't create many bugs, and usually functionality could be added very rapidly (e.g., we were the first to achieve a certain certification for hosting sensitive data on AWS).
Though occasionally functionality had to be added more slowly, because we had to write from scratch what would be an off-the-shelf component in a more popular platform. But once we did it, it worked, and we were back to our old velocity, and not slowed by the bloat and complexity of dozens of off-the-shelf frameworks. We could also adapt rapidly because we controlled a manageable platform, which is how we were able to move fast to AWS when there was a need.
The system also had some technical bits of architectural secret sauce from the start (for complex data, and Web interaction), which enabled a lot of rapid development of functionality, and also set the tone for later empowering smartness.
One difference between our system and the Haskell fintech was that our team size was very small (only 2-3 software engineers at a time, plus someone who managed all the ops). So we didn't have the challenges of hundreds of people trying to coordinate and keep a coherent system while getting their things done. Instead, there was usually one person making the more technical and architectural changes to the code, and another prolific person producing huge amounts of business-logic functionality for complex processes.
With careful use of current/near-term LLM-ish AI tools, software development might find some related efficiencies of very small and incredibly effective teams. But the model that comes to mind is having a small number of very sharp thinkers keeping things on an empowering and manageable path -- not churning massive bloat to knock off story points and letting sustainability be someone else's problem.
> The problem is that we cannot trust code we cannot instrument. If a third-party binding makes HTTP calls through concrete functions, we have no way to add tracing, no way to inject timeouts tuned to our SLOs, no way to simulate partner outages in testing, and no way to explain the 400ms gap in a trace except by squinting at it and developing theories. So we write our own. More work upfront, but the clients we write are observable by construction, because we built them that way from the start.
Some people call this "high-level," too.
I will say, though, that 2 million lines of code is much less code than it sounds like at first glance, especially for a company in a highly-regulated space like finance, plus a few years of progress.
Absolutely not an objective metric, but I have found that Haskell just has a different "aspect ratio": line count may be somewhat lower, but the word count is largely the same as in more imperative OOP languages.
a) Haskell's reputation for terseness partially comes from its overrepresentation in academic / category-theoretic circles, where it's typically fine to say things like `St M -> C T`. But for real software it's a lot more useful to say things like `TransactionState Debit -> Verified Transaction` etc etc.
b) The other part of Haskell's terse reputation is cultural, something extending back to LISP: people being way too clever about saving lines with inscrutable tricks or macros. I imagine that stuff is discouraged at a finance company like Mercury in favor of clarity and readability: e.g. perhaps the linter makes you split monadic stuff into pedantic multiline do expressions even if you can do it in a one-liner with >> and >>=.
The advantages to Haskell are theoretically obvious. The downsides are harder to intuit.
The temptation is to model _everything_ as types. The codebase itself becomes a _business specification_, not an application. Every policy change is a major refactor (some of which are shockingly high-touch thanks to Haskell's safety).
The lesson is you cannot have your cake and eat it too. Eventually you become trapped by your types.
Haskell is really impressive and powerful, perhaps especially at this scale. However it brings its own unique problems. The temptation to model business logic as types leads to rigid structures. And the safety these structures bring can blind you to other classes of risk.
Tbvh the biggest downside of a Turing complete type system is that you can theoretically implement an application that compiles to dust.
Mercury is pretty tame by comparison I’ll admit.
it's so easy to scout when a company has this haskell philosophy. either by the interviewers themselves or by the bloggers they hired to guide their team.
the trick? i just..lie. "oh yeah i'm super pragmatic. i'm not hardline about haskell. i don't think you should be fancy." see how easy it is? i am suddenly hired and got a fat raise. and if the company moves off haskell? i quit immediately, get another haskell job, and talk to my former coworkers on the way out to embolden them to do the same.
it helps that i have the "real world" stuff on my resume.
i rode the 2010s job hopping ride as a haskeller doing this. each time a 20-30% raise. and i get to still write haskell. and i am always a top percentile haskeller at the company so i can code however tf i want lolol. suddenly - singletons, Generics, HKD!
so here's to earning another million bucks "noodling around with Haskell" :cheers:
Congrats I guess? Not sure where the abuse/guilt comes from.
I've made all my money over a decade in Haskell. Millions. Paid for all my stuff.
It all started with a recruiter on LinkedIn
I've been this person, and I've worked with this kind of person, and been the victim of this kind of person. They love language X, or framework Y, and are convinced that so many problems in front of them are shaped in a way that would be solved through the application of it.
They now have a hammer and they go searching for nails to hit with it.
I've been in shops that used Haskell, and it was... fine? It's I guess nice for people who enjoy writing in it -- I prefer other FP languages personally. I like nerdy things like that and used to hang out on Lambda the Ultimate or whatever. But I don't think there's any real secret powers in Haskell or most other tools. I've been burned too many times by that kind of approach.
If only cross-compilation became easy, so that I could develop on my Apple Silicon Macs and deploy to x64/AMD Linux servers.
> statically linking Haskell binaries is quite a challenge
> build requirements really slow down the process. I have to use Docker to help cache dependencies and avoid recompiling things that have not changed, but it is still slow and puts out large binaries.
Also, the Docker-based deployment takes a lot of time as it needs to recompile each module. While you can cache some part of it, it's still slow.
Meanwhile with Go it's painless. And I am not the only one having this issue:
https://news.ycombinator.com/item?id=47957624#47972671
Such a shame: Haskell is a beautiful and performant language, but builds are still slow.
> This is not a complaint about volunteer maintainers. It is simply one of the ambient risks of building serious systems on a smaller ecosystem.
And so instead of paying the lib authors who already have domain expertise and know their codebase, they chose to rewrite it from scratch/fork without contributing back. So classic.
> The system we've built has worked well for years, through hypergrowth, through the SVB crisis that sent $2 billion in new deposits our way in five days.
This is a strange way to describe a transaction processing system with total amount of money it processed. The reliability or scalability is not measured as O(n) dollars. In theory, $248bln or $2bln can be done in one transaction although I know it is not the case in reality. It would be impressive to see the typical system design properties like transactions per second, latency distribution, etc.
what does that mean?
The Mercury site also looks way better than most other banks I have ever used (load speed is also very good.) On the danger of seeming like a shill (I'm not), I'm tempted to try them out.
In languages with option types, if you want to weaken the type requirement for a function parameter, or strengthen the guarantee for a return type, you have to change the code at every call site. E.g, if you have a function which you can improve by changing
- a parameter Foo to Option<Foo> or
- a return value Option<Bar> to Bar
you would have to change the code at all call sites. Which could be anything between annoying and practically impossible.
In languages that solve null pointer errors instead with untagged union types (like TypeScript or Scala 3), this problem doesn't occur. So you can change
- a parameter Foo to Foo | Null or
- a return value Bar | Null to Bar
and all call sites of the function can remain unchanged, since the type system knows that weakening the type requirement for a parameter, or strengthening the promise for a return type, is a safe change that can't cause a type error.
So yes, option types do avoid null pointer exceptions, but they solve the issue in a very suboptimal way.
Actually, I think you can just change the concrete argument `Foo` to a type-class constraint in Haskell as well. The function would be something like `foo :: ToMaybeFoo a => a -> .. ->`, and you would implement a `ToMaybeFoo` instance for `Foo` and `Maybe Foo`.
Agree that this is more involved than typescript, but you get to keep `null` away from your code...
> but you get to keep `null` away from your code...
I don't think this would be desirable once we have eliminated null pointer exceptions with untagged unions.
It is quite simple. Instead of accepting a concrete type `Foo`, the function is changed to accept types that can be converted to `Option<Foo>`. Since both `Foo` and `Option<Foo>` can be converted to `Option<Foo>`, the existing call sites that pass `Foo` would not require changing.
If you were calling a function which might return null (String | Null), you will already have null handling at the call site, but if you now change that function such that it never returns null (String), you still have the (now unnecessary) null handling, but this doesn't hurt and you don't have to change anything at the call site.
Likewise, if you were passing a String to a function that doesn't accept null (String), the call site already made sure that the parameter isn't null, and if you change the function so that it does now accept null (String | Null), again nothing needs to be changed at the call site.
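A small TypeScript sketch of both directions (the `greet`/`lookup` functions are illustrative stand-ins for a real API):

```typescript
// v1 signature was greet(name: string): string.
// v2 widens the parameter to accept null; every old call site that
// passed a plain string still type-checks unchanged.
function greet(name: string | null): string {
  return name === null ? "hello, stranger" : `hello, ${name}`;
}

// v1 signature was lookup(id: number): string | null.
// v2 narrows the return type to always return a string; old callers'
// null handling becomes dead code, but still compiles and behaves.
function lookup(id: number): string {
  return `user-${id}`;
}

// Old call sites, untouched across both signature changes:
const a = greet("ada");          // was passing string; still fine
const b = lookup(3) ?? "absent"; // old null fallback, now just redundant
```

The same changes with `Option<T>` would force every caller of `greet` to wrap its argument in `Some`, and every caller of `lookup` to stop unwrapping.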
I must admit I’ve never had this problem in application development. In fact, I do want to change my callers because strengthening the contract is an opportunity to simplify the callsites - they no longer have to handle the optionality. The change might carry some semantic meaning too, why are you getting x instead of Maybe x all of the sudden? Are there some other things you should reconsider in the callers? I can see how it could be useful in library development, but there are also patterns to account for this that are idiomatic to Haskell.
> A couple million lines of Haskell, maintained by people who learned the language on the job, at a company that moves huge amounts of money? The conventional wisdom says this should be a disaster, but surprisingly, it isn't. The system we've built has worked well for years, through hypergrowth, through the SVB crisis that sent $2 billion in new deposits our way in five days, through regulatory examinations, through all the ordinary and extraordinary things that happen to a financial system at scale.
This one is quite telling. Do people have counter examples?
Obviously Mercury is successful, and obviously Haskell is how they did it. So it's essential to their success. Would it be instrumental to anyone else's anywhere else doing anything else? Can't possibly know, I don't think.
You can still compare lines of code and bug rate over the same period of time.
Being able to minimize boilerplate and have strong refactoring and bug resistant types is a huge edge.
The only problem is their ecosystems are limited so you might spend more time than you like implementing an API or binding a system library.
I’ve been using Mercury for 5 years. In that time, I’ve been able to wire transfer money without having to worry it might disappear (functionally impossible at certain other banks), created hundreds of virtual debit cards each with their own limit and pulling from different accounts, created dozens of accounts (a “place to put money”) named by function (each of my household utilities gets its own account, with an automatic rule to pull in money whenever it gets paid out), and… well, I think that covers everything.
This has given me unprecedented insight into my financial life. I know exactly how much I spend on groceries, on each utility, and on entertainment. I can project ahead and get a burn rate for my household. And my ex wife uses it too, on the same login, which is as easy as “make an account named with her first name” and a corresponding virtual debit card.
I’m convinced the only reason people don’t use Mercury is that they don’t know what they’re missing.
You have to pay for personal banking (a couple hundred a year iirc), but the business banking is free. If you want to try them out, you can start an LLC for a few dollars (at least in Missouri) and get overnight access to Mercury. All that’s required is your EIN.
They’ve been one of the single best products I’ve ever used. The sole wrinkle was when they canceled all their existing virtual cards due to reasons, which threw my recurring billing into chaos. But every great company is allowed at least one mega annoyance, and that one was a blip.
If you’re wondering whether to try them out, the answer is yes, and I’m excited for you to discover how cool it is. https://www.mercury.com
Very well could be true because I had no idea who or what they are.
Do they have strong low level automation support for the customer programmatically even for personal accounts? I use ledger for plaintext accounting for both personal and business and sync of data is slightly annoying, perhaps Mercury’s products solve that trivially?
I made this to solve it https://sras.me/accounts/
Feel free to use it, as it stores data in your browser's local storage only. For syncing between devices, you can use Google Firebase's free tier: export your accounts (compressed and encrypted) there and import them from another device. Let me know if you want to try it.
I thought that was the whole point of banks - does money randomly disappear when people do wire transfers?
Try a better programming language next time, dagnabbit!!!
(There will be downvotes I suppose. More lines of code the better?)