A monad is anything you can flatmap with
The monad of list is you flatmap a list on a list and instead of getting a list of lists, as you would if you just mapped, you get a single flattened list
The monad of Result is you flatmap many function calls (like http requests or whatever) on each other and instead of getting many results, you get a single flattened result
Most of you already know this, without necessarily even knowing what a Monad is
Monad literally just means "one thing" - you take many things, and flatmap them into one
Thanks for attending my ted talk
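The list case can be sketched in a few lines of Scala (a minimal illustration, made-up data):

```scala
val words = List("one thing", "many things")

// map gives a list of lists...
val mapped: List[List[String]] = words.map(_.split(" ").toList)
// List(List("one", "thing"), List("many", "things"))

// ...flatMap flattens it back into a single list
val flatMapped: List[String] = words.flatMap(_.split(" ").toList)
// List("one", "thing", "many", "things")
```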
- Write 5 paragraphs setting up an imaginary scenario involving fantasy elements of aliens, dragons, and a magical kingdom where they speak using message boxes
- Introduce basic category theory by starting with what a functor is
- Explain all the effects of a monad in such general terms that it basically amounts to anything and everything - since a function can be anything and do everything and it's just function composition
- Write some snippets of Haskell, and just assume that you're familiar with the syntax
- Talk about how delicious burritos are
"You can do IO now." So what? I could do IO before that as well.
Very rarely are practical explanations discussed. Even if they are discussed, the treatment is shallow and useless.
Haskell is based on Miranda, and Miranda is based on Hope. Purely functional languages really were purely functional: academic experiments with no way to express side effects, and so no way to express practical programs.
Philip Wadler took the monad (the name already existed in category theory) and showed how computations could be expressed in Haskell, with the “do notation” as an example. That made Haskell practical without breaking the “beauty” of the language, i.e. without having to introduce new special syntax or anything outside the type checker's capacity.
So, I don’t think there’s a motivation besides being an exercise in expressivity within the limitations of pure functional programming. Similar ideas for describing computation as lazily executed instructions already existed elsewhere, like the interpreter pattern.
* Explicitly define the order of evaluation (important in Haskell, where lazy evaluation makes the default order of evaluation difficult to trace)
* Useful mental model that helps with 1) design and 2) understanding new concepts
* Abstraction. Ignore irrelevant details. Write the standard library once, use it in many different situations.
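To illustrate the first and third bullets: a for-comprehension pins down the evaluation order explicitly, and the exact same shape is reusable across monads (a minimal Scala sketch with made-up functions):

```scala
// Desugars to a.flatMap(x => b.map(y => x + y)) -- a is evaluated before b.
def addOptions(a: Option[Int], b: Option[Int]): Option[Int] =
  for { x <- a; y <- b } yield x + y

// Exactly the same shape, reused for List -- written once, used everywhere:
def pairs(xs: List[Int], ys: List[Int]): List[(Int, Int)] =
  for { x <- xs; y <- ys } yield (x, y)

// addOptions(Some(1), Some(2)) == Some(3)
// pairs(List(1, 2), List(10)) == List((1, 10), (2, 10))
```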
def doFunctionsInSequence1(): Option[Set[Int]] = {
  val r1 = f1(null)
  if (r1.isEmpty) {
    return None
  }
  val r2 = f2(r1.get)
  if (r2.isEmpty) {
    return None
  }
  return f3(r2.get)
}

Whether this is the best thing since sliced bread is left as an exercise for the reader.
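For contrast, the same early-return chain written monadically, with hypothetical stand-ins for f1/f2/f3 so the sketch is self-contained:

```scala
// Hypothetical stand-ins matching the signatures implied above:
def f1(in: String): Option[Int] = Option(in).map(_.length)
def f2(n: Int): Option[Int] = if (n > 0) Some(n * 2) else None
def f3(n: Int): Option[Set[Int]] = Some(Set(n))

// The null-check chain, collapsed into flatMaps:
def doFunctionsInSequence2(in: String): Option[Set[Int]] =
  f1(in).flatMap(f2).flatMap(f3)

// ...or as a for-comprehension, which desugars to the flatMaps above:
def doFunctionsInSequence3(in: String): Option[Set[Int]] =
  for {
    r1 <- f1(in)
    r2 <- f2(r1)
    r3 <- f3(r2)
  } yield r3
```

Any None along the way short-circuits the rest, exactly like the early returns.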
IMO it's because option is a monad, list is a monad, io is a monad, async is a monad, try-except is a monad, why invent different magic syntax and semantics for all of them when there's a perfectly good abstraction that covers the lot, and that lets you write functions that are agnostic to which particular monad they're in to boot.
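That "agnostic to which monad" point can be sketched with a minimal typeclass (illustrative only; real code would use a library like cats):

```scala
// A minimal Monad typeclass:
trait Monad[F[_]] {
  def pure[A](a: A): F[A]
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]
}

implicit val optionMonad: Monad[Option] = new Monad[Option] {
  def pure[A](a: A): Option[A] = Some(a)
  def flatMap[A, B](fa: Option[A])(f: A => Option[B]): Option[B] = fa.flatMap(f)
}

implicit val listMonad: Monad[List] = new Monad[List] {
  def pure[A](a: A): List[A] = List(a)
  def flatMap[A, B](fa: List[A])(f: A => List[B]): List[B] = fa.flatMap(f)
}

// One function, agnostic to which monad it runs in:
def pairUp[F[_]](fa: F[Int], fb: F[Int])(implicit m: Monad[F]): F[(Int, Int)] =
  m.flatMap(fa)(a => m.flatMap(fb)(b => m.pure((a, b))))
```

The same pairUp works for Option (failure), List (nondeterminism), and any other instance you define.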
But from what I've observed, it's a group of fancy foreach loops that they put under the same name for some reason
(And in the context of the previous paper, this one motivates Applicative well I think: https://www.staff.city.ac.uk/~ross/papers/Applicative.pdf)
That said, I've never really understood the enthusiasm the industry has for introducing Monads outside of Haskell. As I understand it, at the time Philip Wadler wrote his paper, Haskell was pretty painful to use due to its adherence to purity. Monads were presented as a way to maintain purity while providing a principled way to support all kinds of effectful computations. But without some of the features Haskell provides (I'm thinking of typeclasses and HKTs in particular), and given that almost any language you'll be introduced to outside of Haskell already has ways to do e.g. IO or whatnot, it almost always ends up feeling like bolting something on with not a lot of benefit.
Don't get me wrong, I think there's value in stuff like https://github.com/fantasyland/fantasy-land --I find organizing how I think about computations around these algebraic concepts helps me a lot, personally. But that's distinct from introducing these concepts into day-to-day work in a non-Haskell language, especially on a team, which is often more trouble than it's worth unless everyone has already bought into it and is willing to deal with the meaningful friction introducing this stuff produces.
I assume the overabundance of Monad tutorials and libraries has to do with the cachet of knowing this relatively obscure, intellectual thing and being able to explain it to your peers, or to be more charitable, perhaps it's a byproduct of getting excited about learning this new, distinct way to approach computation and wanting to share it with everyone. But the end result is that now we have tons of ridiculous tutorials and useless Monad libraries in tons of languages.
And another joke says the best way to understand a monad is to write another tutorial about it, so sorry for this.
Just think of it as a box.
If Amazon shipped items bare, they would be hard to pack: no way to standardize, and things would break or go missing all the time.
Now, if you put each item into one of the standardized boxes, everything becomes 100x easier. You can put them on a conveyor belt, have robots sort them, use tape to close them; standardization is easy because it's no longer "t-shirt, tennis ball, drill" but just "box, box, box".
So now you can do all kinds of things because it's all a box. And you can also stress test the box.
It's the same with these.
A. You can just have one function that: calls something over IO, maps its values, does a calculation, retries if it's wrong, stores the result, spits it out.
Or B. You can have functions that call any function over IO, functions that map any value to any other value, functions that take any other function and, if it fails, call another function or retry, one that stores any value given to it and reports whether the save succeeded, etc.
The result is the same in the end. But while A makes the workflow strictly defined for that one case, so you have to handle every twist and turn manually (did the save succeed? what if not? write a check, write a test that ensures the check works, same if it does...), B lets you define workflows out of pre-tested, pre-built blocks that work with any part of your codebase.
And it makes your life 1000x easier, because now you have common components that work with any data type in your codebase, always do things your way, are 100% tested, and make it easier to handle good cases, bad cases, wiring and logistics. And you can build pipelines out of them. Because at the end of the day, all it does is let you chain functions that return wrapped values.
And you end up with code like:
val profileData = asAsync { network.userData(userId) } // returns an Async<Result<UserData, Error>>
    .withRetries(3)        // works on Async, returns Result, retries the async call if it fails
    .withTraceId(userId)   // wrapped flatmap that wraps the success into Trace<T> and adds a traceId
    .mapTrace(onError = { ErrorMappingProfile }, { user -> Profile(user.name, user.profileId) }) // our mapTrace is a flatMap for Trace objects: it knows how to unwrap them, call the functions and wrap the results again
    .store("profile_data") // wrapped mapCatching again, explicitly for storage: works on Trace objects, knows how to unwrap and store them
    .logInto(ourLogger)    // maps Trace objects into the shared logger
Each of these steps would previously have to be written manually inside the function, and the whole function tested for every edge case: if/elses, try/catch, match/when/switch.
This way, the only thing you need to cover with tests is `network.userData()`, as all the other parts are already written, tested, and do what they say they do. And you can reuse this everywhere in your projects. Instead of being a function you call with data, it becomes a function you give a box and it returns a box. Then you can hand that box to any other function that needs a box. If boxes make no sense, think of the studs on Lego bricks, or pipe connectors in plumbing, or stacking USB adapters or power strips.
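A minimal sketch of such a "box" in Scala (this toy Trace type and its step are made up, just to show the give-a-box, get-a-box shape):

```scala
// A toy "box": Trace pairs a value with an accumulated log.
final case class Trace[A](value: A, log: List[String]) {
  def map[B](f: A => B): Trace[B] = Trace(f(value), log)
  def flatMap[B](f: A => Trace[B]): Trace[B] = {
    val next = f(value)
    Trace(next.value, log ++ next.log)
  }
}

// A pre-built, pre-testable step that works on any Trace, whatever is inside:
def store[A](t: Trace[A], key: String): Trace[A] =
  t.flatMap(a => Trace(a, List(s"stored $key")))

// Give it a box, get a box back -- then hand that box to the next step:
val fetched = Trace(42, List("fetched"))
val result  = store(fetched.map(_ * 2), "answer")
// result.value == 84, result.log == List("fetched", "stored answer")
```

Once store (and friends like retry or log) are tested once, every pipeline built from them inherits that coverage.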
I can't stress enough how much this approach helped me in real life cases - refactoring old codebases especially, as once you establish some base primitives, the surface area starts massively collapsing as the test surface area increases.
Previous (well, not exactly popular) submissions:
2019 (3 points) https://news.ycombinator.com/item?id=19207241
2022 (3 points) https://news.ycombinator.com/item?id=30277518
2024 (11 points) https://news.ycombinator.com/item?id=40332349
0: https://blog.sigfpe.com/2006/08/you-could-have-invented-mona...
I've spent a lot of time wrapping my head around monads; whenever I thought I "got it," I would come across some exotic monad that completely blew my mind. The best way to understand them is not to rely on analogies but just follow the rules—everybody says that, but it took me a while to truly realize it.
See, for example, the Tardis monad or the Cont monad: https://www.reddit.com/r/haskell/comments/446d13/exotic_mona...
That the best way to understand monads was to write a tutorial about monads.
Which does make sense. The best way to understand a subject is to teach it.
When I saw that link it immediately reminded me of this: https://blog.plover.com/prog/burritos.html
>Monads are like burritos
And then a few links down is this link to monad tutorials.
Weird coincidence.
I'm surprised that this tutorial isn't on the wiki.