> Monads really are just a convenient way to build up action values
Except when they aren't, because lists/arrays (with map) and Maybe (with map) and so on are something totally different. And that's the problem with _all_ these monad tutorials/how-tos: monads (and functors) are not burritos or elephants but _both_, so they are hard to understand by looking at only a single instance.
Btw. monads are not the only way FP deals with effects, see algebraic effects. https://github.com/yallop/effects-bibliography https://www.youtube.com/watch?v=DNp3ifNpgPM
Haskell without laziness is Purescript, btw.
But even so, I think something is lost in reducing monads to a clump of related behaviors and properties. You still need intuition about what these behaviors actually mean and what kinds of real-world objects actually conform to these properties.
So you are right that monad "is not" a nondeterministic computation or a container or whatever, but a monad "is" a common set of behaviors/properties that we should expect nondeterministic computations and containers to conform to.
Abstraction without intuition is obfuscation!
Functional programs don't _do_ anything, they are a list of commands some external interpreter (what you call the runtime, but doesn't have to be a runtime) will execute or compile.
Functional programs only contain descriptions of the various commands, but it is an external interpreter executing the commands.
Functional programs return "recipes".
"recipes" are picked up by some executor that will turn them in dishes (the side effect).
If all of this seems too generic, allow me some TypeScript.
```
// we can declare a function type IO that takes no arguments and returns a value of type A
type IO<A> = () => A
declare const program: () => void; // this can be rewritten as:
declare const program: IO<void>; // this is the PURE function I can reason about
declare const execute: (io: IO<void>) => void; // this is the interpreter that will execute the commands and have the side effects
```
Now, let's declare a side-effectful function:
```
declare const log: (s: string) => IO<void>;
// desugared: (s: string) => () => void
```
Notice how the signature is similar to the standard console.log: (s: string) => void
except that it's lazy. Laziness is one of the most convenient ways to express "commands" rather than side effects in a language like JavaScript, but there are also alternatives.
Now we can have a pure program that logs to the console some string.
`log("foo")` does not "execute" any console.log, it still needs to be executed:
```
const program: IO<void> = log("foo");
// program() will actually print "foo" in stdout
```
Note: we could've encoded IO in different ways, e.g. with a struct/interface rather than a function and had the interpreter actually reason about the side effect on its own.
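To illustrate that note, here is a minimal sketch of such a data encoding, where the command is a plain value and only the interpreter performs effects (the names `Command`, `log`, and `run` are mine, invented for illustration):

```typescript
// A command is plain data describing an effect, not performing it.
type Command =
  | { tag: "Log"; message: string }
  | { tag: "Noop" };

// Pure: just builds a description.
const log = (message: string): Command => ({ tag: "Log", message });

// The interpreter is the only place side effects happen;
// it can inspect the command and "reason about" it before running.
const run = (cmd: Command): void => {
  switch (cmd.tag) {
    case "Log":
      console.log(cmd.message);
      break;
    case "Noop":
      break;
  }
};

const program: Command = log("foo"); // nothing printed yet
run(program); // prints "foo"
```

Because the command is inspectable data, a different interpreter could dry-run it, log it, or serialize it instead of executing it.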
Now, the only missing part is: how do I compose such IO functions together? Well, there are various ways but the most common ones are applicative functors and monads. I am not going to delve deeper in this comment on those topics because it would take long but I hope I have transmitted my point:
in functional programs you return programs (I like to think about them as recipes, recipes don't DO anything), those programs may compose effectful commands, the actual execution of the commands is shoved inside an external interpreter.
This is quite obvious in some languages like Haskell, it's less obvious in others.
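A minimal sketch of what that composition could look like for the thunk encoding of IO above (the names `of`, `map`, and `chain` are conventional functional-programming names, not from the comment):

```typescript
type IO<A> = () => A;

// Lift a plain value into IO without performing anything.
const of = <A>(a: A): IO<A> => () => a;

// Transform the eventual result, still without running anything.
const map = <A, B>(io: IO<A>, f: (a: A) => B): IO<B> => () => f(io());

// Sequence two IO computations: this is the monadic bind.
const chain = <A, B>(io: IO<A>, f: (a: A) => IO<B>): IO<B> => () => f(io())();

const readLine: IO<string> = () => "foo"; // stand-in for a real input effect
const log = (s: string): IO<void> => () => { console.log(s); };

// A composed program; nothing happens until it is invoked.
const program: IO<void> = chain(readLine, (s) => log(s.toUpperCase()));
program(); // prints "FOO"
```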
But why would someone want to do this? It feels like mental overhead. So what advantages are there to a system like this?
The laziness allows the pure language constructs like "if" to be lifted up into the IO value. That is, if you embed an if statement into one of the (pure) functions used in an IO value somewhere, only the branch that the "if" would take is ever evaluated. However, technically speaking, you can also look at the branches happening for all inputs; if you input a true or a false from the user, you can look at that as a true branch, a false branch, and some additional error branches, even if the code doesn't look like it. A strict language would not permit this. (Though even in a strict language, real-world useful IO values end up with enough function calls in them that a strict language wouldn't manifest the full tree either.)
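The branch-lifting point can be sketched in the thunk encoding used elsewhere in the thread (the names `effect` and `branch` are mine): both branches exist as descriptions inside the IO value, but only the taken one is ever evaluated.

```typescript
type IO<A> = () => A;

// Records which effects actually ran.
const ran: string[] = [];
const effect = (name: string): IO<string> => () => { ran.push(name); return name; };

// Lift "if" into the IO value: both branches are described, only one runs.
const branch = <A>(cond: IO<boolean>, onTrue: IO<A>, onFalse: IO<A>): IO<A> =>
  () => (cond() ? onTrue() : onFalse());

const program = branch(() => true, effect("true-branch"), effect("false-branch"));
program();
// ran contains only "true-branch"; the false branch was never evaluated
```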
If you input a full Int32, you can look at that as defining 2^32 possible branches in the IO value, and the execution will take one of them. In reality, the program does not do 2^32 different things, and a better view of what the program is doing is the more conventional one for almost every purpose. (Although that view will still pass through a lot of possible states!) But in terms of understanding how the IO value is "pure", this view is momentarily helpful.
The only operation in a Haskell program that violates this is unsafePerformIO, which really does penetrate this abstraction and work like a "normal" program. Otherwise, no matter how much Haskell code you write, technically all you're doing is making a bigger and more complicated pure IO value, which defines how to have an effect on the world but does not itself have an effect on the world. Executing the program is what finally puts the two together.
To put it another way, Haskell has a completely clear separation between being a program and executing a program. If I say to someone, "pick up that glass, fill it with water, and water this plant", that statement is itself a "pure value". The execution of that statement is where things get impure. Some languages do not have this clear separation, most notably Perl but the dynamic languages in general don't. Many others do, as having a compilation step all but forces this sort of separation, they just don't think of it that way and there can be leaks here and there, and features that may blur the line deliberately. Haskell does have a very clear separation, and in that separation, with laziness, it is almost like IO is just one big macro language for putting together programs, in some sense beyond what even Lisp would dream of.
And in another sense it's just a funny way to write conventional programs with really weird pretensions, and there's a certain value to that point of view too, which is that when you're done blissing out on the hippie math juice, when you actually sit down and write code in Haskell you're doing much the same thing you do in any other language and treat IO just like any other source code. But, as with the story of enlightenment... "first it was a mountain, then it was not a mountain, then it was a mountain again"... where you end up on this journey is not quite the same as where you started.
Or simply
input -> fn1 -> fn2 -> fnn -> mutate!
Where 'fn' are all immutable.
Example: input “I step forward”, output “monster appears”, next input “I try to shoot it”.
That second input wouldn’t be there if the output weren’t executed.
In any case, what I'm going for is that in both imperative and functional programming languages I would have a function/method like "handleInput" that takes an input and decides what to do with it. The difference would be that in a classical OOP setting handleInput would be a method of your GameState class [1] while in FP handleInput would look something like "Input -> (GameState -> GameState)", i.e. a function that takes an input value and returns a function that transforms the game state in some way (alternatively and equivalently, thanks to associativity/currying: a function that takes an input value and a current game state and returns a new game state).
[1] I know, this obviously is a very contrived example, it'd only work for very simple games. Game programming patterns are interesting but not the focus here.
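A sketch of the `Input -> (GameState -> GameState)` shape in TypeScript (the concrete game types here are invented for illustration):

```typescript
type GameState = { x: number; y: number };
type Input = "up" | "down";

// Pure: returns a state transformer, mutates nothing.
const handleInput = (input: Input) => (state: GameState): GameState =>
  input === "up" ? { ...state, y: state.y + 1 } : { ...state, y: state.y - 1 };

// The equivalent uncurried form: input and state together, new state out.
const handleInput2 = (input: Input, state: GameState): GameState =>
  handleInput(input)(state);

const s0: GameState = { x: 0, y: 0 };
const s1 = handleInput("up")(s0); // { x: 0, y: 1 }, s0 is untouched
```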
I had real performance problems a while back attempting to build a very large file. My code could only produce the write instructions out of order, and the file was too big to hold in memory, so what I ended up doing was writing the write instructions in radix-grouped batches to a bunch of temporary files, and then reading and evaluating them to build the large file.
This seems counter-intuitive, as it more than doubles both the amount of data written to disk as well as adds a reading-step, but doing it this way means the data is written in a way the hardware can deal with a lot more efficiently. Sequential access to and from the instruction files (off a mechanical drive), and densely clustered writes to the big output file. (on an SSD, strictly sequential writes matters less than being in the same block)
This reduced the runtime from several hours to like 5 minutes.
That doesn't sound right.
Not that I've ever seriously tried to learn Haskell, but in the past every time I've lazily come across an article about it it's always seemed like a bizarre confusing world, even though I know how functional programming (in the sense of purity) works.
Now there's just one thing missing. We all know what this style of programming is. It's asynchronous programming with callbacks. Seriously, Haskell folks, if you started with "all side effecting functions are kind of like async operations with a completion callback (the stuff people do in JavaScript all day), and then we have some syntactic sugar to make it suck less" you'd have a much easier time getting people to wrap their head around all this.
(Yes, I know the details aren't exactly the same, but drawing parallels to stuff people already know matters)
Seriously, this idea of there being a central effect dispatcher (the bit that runs `main` behind the scenes) is so eerily like the coroutine scheduler in async coroutine paradigms that I can't believe more people haven't drawn parallels between these programming styles.
If people stuck to just explaining the IO monad specifically, which is a lot like async code with callbacks, then things would be better.
I wrote about this a few years ago: https://two-wrongs.com/the-what-are-monads-fallacy
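For what it's worth, the parallel can be made concrete: a thunk-style IO action and a callback-style async action sequence in the same nested shape (a sketch; `Async` and the `chain` names are illustrative aliases, not standard APIs):

```typescript
type IO<A> = () => A;
type Async<A> = (done: (a: A) => void) => void;

// Sequencing for each: both nest a continuation inside the previous step.
const chainIO = <A, B>(io: IO<A>, f: (a: A) => IO<B>): IO<B> =>
  () => f(io())();

const chainAsync = <A, B>(a: Async<A>, f: (x: A) => Async<B>): Async<B> =>
  (done) => a((x) => f(x)(done));

// Same pipeline, two encodings:
const ioProgram = chainIO(() => 2, (n) => () => n * 3);
const asyncProgram = chainAsync<number, number>(
  (done) => done(2),
  (n) => (done) => done(n * 3)
);
```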
I think you're right in saying that async is not representative of all monads, but it does help with questions that many beginners have, like "How do I get the value out of the maybe/IO/your-monad-here?"
Self-plug: I even wrote a short post about this quite recently, after making a similar comment on HN (https://frogulis.net/writing/async-monad)
Probably the best reason to use FP imo. It just cuts out so much overhead when trying to read other people’s code.
In ML languages, code forms a beautiful tree structure where you can see the definition of each binding by looking down and right.
```
let x =
  something
    really
    big
    and
    complicated
```
whereas in a language built around statements it will be scattered all over:
```
let x = null;
x = something();
x = somethingElse(x);
x = anotherThing(x);
```
In the first case the definition of x is in one indented block; you can read it and move on.

Or with out-parameter style:
```
something(x, result);
somethingElse(result, result2);
anotherThing(result2, result3);
```
with the occasional surprise mutation when you just have to do
```
something(x)
```
and x contains the result.
I also got caught by that one recently with MomentJS: calling mydate.add mutated the original instead of just returning the result.
I'm a very average programmer and far from a FP purist (I mostly use JS and Rails) and I'm surprised how much I now use some FP principles and how it feels very natural to me.
C# extension methods aren't the perfect solution for this, but I love that they exist to at least make chaining possible without needing to modify the class that I'm applying the function to. D has an even better version of this called UFCS which makes it so `Bar something(Foo f)` can either be called as `something(x)` or `x.something()` without needing any special annotations like C# requires.
```
let x = anotherThing(
  somethingElse(
    something()
  )
);
```

I have a tidbit in the first paragraphs: http://blog.vmchale.com/article/effects
Certainly laziness has influenced a lot of things in the world of FP. But it's really helpful to distinguish laziness from FP, especially considering the problems that laziness introduces.
I mean, I get where you're coming from, but the thing that you've described is a monad, and there's not an easy way to get around that.
But yes, I am with you, you want to explain to someone that Haskell's model of side effects is a sort of metaprogramming, much simpler than macros, we just give you a data type for “a program which does some stuff and eventually can produce a ______” and ask you to define a value called `main` which is a program which produces nothing special. And it's the compiler's job to take that program and give it to you as a binary executable that you can run whenever you like.
You also want to give people a number of other examples of things that are monads. “a nullable ____”, “a list of ____s,” “an int (summable) with a ____,” and maybe an example that is not like “a function from ____s to ints,” or “a set of ___s.”
The key to telling someone what a monad is, involves trying to explain to them that in some sense “a program which produces a program which produces an int” is not terribly more descriptive than just “a program which produces an int.” If you can combine this with the adjective being outputtish and universal you have a monad.
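That "a program which produces a program" point corresponds to the `join` operation of a monad; for the thunk encoding of IO it can be sketched as (the name `join` is the conventional one, not from the comment):

```typescript
type IO<A> = () => A;

// Collapse "a program that produces a program that produces an A"
// into a single "program that produces an A".
const join = <A>(outer: IO<IO<A>>): IO<A> => () => outer()();

const inner: IO<number> = () => 42;
const nested: IO<IO<number>> = () => inner;
const flat: IO<number> = join(nested); // flat() === 42
```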
Ultimately, that's very similar to what old-school ways of declaring Promises do in Javascript. You're creating a data structure that you then attach a new function to execute with the results.
What we talk about when we talk about monads is the highly generic interface coupled with the highly generic combinators. This is not something shared by arrays and promises. In other words, none of them are monads in any practically meaningful sense.
Yes, and monadic code suffers from the same problem; that's why Haskell has do-notation:
```
do
  b <- fun1 a
  c <- fun2 b
  pure c -- using Purescript's pure instead of Haskell's `return`
```
instead of
```
bind(fun1(a), (b) => bind(fun2(b), (c) => pure(c)))
```

E.g. you may have a dry-run function that doesn't actually change the state but only describes what it would do. Or you could have a function that generates a verbose log of all steps executed. Etc.
That doesn't necessarily work for IO, as that gets special treatment by the runtime, but you can do it with your own types.
I do understand that in reality it's simply a tag value to ensure that the compiler correctly orders and threads operations correctly, but I still think it's semantically cute :D
FWIW C and C++ also have this operator, it does bitwise right shift assignment there.
Erlang is a functional programming language and doesn't need to jump through hoops to do side effects.
So is OCaml. So is...
You could finagle your way around that, but the result would be something like unsafePerformIO in current Haskell, where it's hard work to ensure that it's used correctly.
Nowhere was this more apparent in my career than when I was on a team of Scala-using FP programmers who were building part of a real hardware-provisioning system (and the org structure tipped in their favor). The arguments were endless about the sheer number of things that can and definitely will go bad across hundreds of servers when you try to do things, versus their desire to dismiss these problems as infrequent so they wouldn't have to code the handling in a functional way (FP is terrible at logging, so even getting that done adequately was an argument).
Basically the paradigm does tend towards collapsing once the problem grows too complicated for someone to keep in their head because it actively fights the sort of procedural reasoning humans do pretty much natively in favor of dealing with abstract math which very few people are any good at.
Not really, unsafePerformIO has its place - it's just not in typical Haskell programs.
Haskell is referentially transparent, but you can't say the same for many other FP languages like Erlang or OCaml.
> ipfs resolve -r /ipns/nauseam.eth/coding/random/how-side-effects-work-in-fp/: no link named "coding" under QmdzzonFE9eX6FGs8UbCyoC2XS5NQjaG6gaqhgeUyTHnag
It's really hard to take someone serious when they make big, untrue, statements saying the real web is "centralized" right on every page of their site, to peddle crypto scams.
https://chadnauseam.com/reasoning-quiz
> Neo-Nazis are holding a demonstration in a small town, waving swastikas around and shouting about Hitler. They seem to be pretty peaceful so far, so the First Amendment says you probably can’t get rid of them. However, their demonstration is near a main street and it could be a minor inconvenience to the traffic trying to go through.
>
> [ ] Allow the neo-Nazis to demonstrate.
>
> [ ] Break up the demonstration on the grounds of ‘blocking traffic’.
I am at a loss for words. Nazi sympathy under the guise of tolerance.
Well, it's not completely centralized, but it's more centralized than using IPFS for the backend and ENS for the namespace. I think that's hard to debate right? If I take down my server right now chadnauseam.com will go down for everyone. But if anyone has my IPFS page pinned, it will stay up for everyone no matter what I do (barring exploits in IPFS I don't know about). So in that sense it really is more decentralized.
> Nazi sympathy under the guise of tolerance.
I don't understand why you see a quiz that gives you the option to pick either way as promoting one option over the other
That is, this code:
```
x = foo()
y = bar(x)
z = bar(x)
```
should mean the same thing as:
```
y = bar(foo())
z = bar(foo())
```
Because it's obvious to anyone that has written any sort of complex program that those are not the same thing. foo() can be an expensive operation like an HTTP call. Or it might depend on a database which can change state underneath it.
I assume FP has answers for these things but the tutorials never cover them. They all imagine a world without state or expensive operations to show how wonderful it is. And that's an easy world to program in.
They are the same thing in Haskell (except for when forcing the thunks into eager values happens, due to the weirdness of laziness, but that has nothing to do with purity).
> I assume FP has answers for these things but the tutorials never cover them.
Except this one does:
> The <- works like an =, except it signals that equational reasoning doesn't apply to this value. You can't replace what_the_user_typed with getline - your program won't compile.
That just raises a different problem:
```
y = bar(generateUUID())
z = bar(generateUUID())
```
>Except this one does:
It doesn't explain it in the context of real world programming.
Not in Haskell! I recommend reading the rest of the post.
This isn’t even special to FP, it’s a basic problem when building a compiler, namely whether inlining/common subexpression elimination is beneficial.
To drive the point home, even in a language like Haskell these two examples might have a factor of two in execution time between them.
It can't, in Haskell, because it's a pure language. That's basically the definition of pure! But you still need effects, even in a "pure" language, and the whole point of the article is about how to support effects in a pure language.