Slightly different perspective from Grokking Simplicity[1]: functional programming is not about avoiding mutation because "mutation is bad." In fact, mutation is usually the desired result, but care must be taken because mutation depends on when and how many times it's called.
So good FP isn't about avoiding impure functions; instead it's about giving extra care to them. After all, the purpose of all software is to cause some type of mutation/effect (flip pixels on a screen, save bits to storage, send email, etc). Impure functions like these depend on the time they are called, so they are the most difficult to get right.
So Grokking Simplicity would probably say this:
1. Avoid premature optimization. The overhead from FP is usually not significant, given the speed of today's computers. Also, performance gains unlocked by FP may offset any performance losses.
2. If optimization via mutation is required, push it as far outside and as late as possible, keeping the "core" functionally pure and immutable.
This is similar to Functional Core, Imperative Shell[2]; and perhaps similar to options 1 or 2 from the article.
(Of course, if you really need impure functions for eg performance, `unsafePerformIO` has you covered in Haskell.)
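That Functional Core, Imperative Shell shape can be sketched in a few lines of Rust (all names here are hypothetical, purely for illustration): the pure core computes fresh values, and a thin shell is the only place state actually changes.

```rust
// Hypothetical example of Functional Core, Imperative Shell.
#[derive(Clone, Debug, PartialEq)]
struct Account {
    balance: i64,
}

// Pure core: no mutation, returns a fresh value.
fn apply_deposit(acct: &Account, amount: i64) -> Account {
    Account { balance: acct.balance + amount }
}

fn main() {
    // Imperative shell: the one place where state actually changes.
    let mut acct = Account { balance: 100 };
    acct = apply_deposit(&acct, 50);
    assert_eq!(acct, Account { balance: 150 });
}
```

The core stays trivially testable; only the shell depends on when and how often it runs.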
People should try to stop thinking of mutation as something to be avoided, and start thinking of it as something to be managed.
Mutating state is good. That's usually the whole point.
What's bad is when you create "accidental state" that goes unmanaged or that requires an infeasible effort to maintain. What you want is a source of truth for any given bit of mutable state, plus a mechanism to update anything that depends on that state when it is changed.
The second part is where functional programming shines -- you have a simple way to just recompute all the derived things. And since it's presumably the same way the derived things were computed in the first place, you don't have to worry about its logic getting out-of-sync.
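A toy Rust sketch of that idea (names invented for illustration): one source of truth, one derivation function, and updating everything that depends on the state is just rerunning the same derivation.

```rust
// One derivation function serves both initial computation and updates,
// so the derived value's logic can never get out of sync.
fn derive_total(items: &[i64]) -> i64 {
    items.iter().sum()
}

fn main() {
    let mut items = vec![1, 2, 3];        // source of truth
    let mut total = derive_total(&items); // derived state
    assert_eq!(total, 6);

    items.push(4);                        // the source changed...
    total = derive_total(&items);         // ...so just recompute
    assert_eq!(total, 10);
}
```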
However, nobody had any experience with it. Now we do. And I think what that experience generally says is that it's a bit overkill. We can do better. Like Rust. Or possibly linear types, though that is, I think, much more speculative right now. Or other choices. I like mutable islands in a generally immutable/unshared global space as a design point myself. Mutability's main problem is that it gets exponentially more complicated to deal with as the domain of mutability grows, but if you confine mutability to lots of little domains that don't cross (ideally enforced by the compiler, but not necessarily), it really isn't that scary.
It was a necessary step in the evolution of programming ideas, but it's an awful lot to ask that it be The One True Idea for all time, in all places, and that nobody in the intervening decades could come up with anything that was in any way an improvement in any problem space.
> However, nobody had any experience with it. Now we do.
Working intimately with C++ in the '90s, immutability as a concept was neither considered counterculture nor without significant experience employing it. At that time and in C++, it was commonly known as "const correctness" and was a key code review topic.
Go back another decade or two when K&R C ruled the land and that's a different story ;-).
Or OCaml, perhaps? They have taken a very pragmatic approach to FP and allow mutations, because that's the pragmatic choice sometimes.
That's pretty easy to do in eg Haskell or even Rust.
I feel that early texts were good at this. Turtle Geometry is my personal favorite book in this vein. I seem to recall we spent a long time going over how to double buffer graphics so that you could be working on one buffer while letting the system draw the other. Not sure what texts we used for that, back in the day.
Later texts, though, go through a lot of hurdles to hide the fact that things are actively changing. The entire point is to change things.
We've had other 'entire points' along the way.
Allocation and freeing of memory are fundamental to computing. We don't do malloc and free anymore.
Control flow (selecting which instruction to follow next) is also fundamental. We don't do goto anymore.
Taking this further, it is also performance-related. I think there is an interesting relationship between FP and DOD:
A technique of data oriented design is to keep state minimal and lazily derive data when you actually need it, which may involve recomputing things. The rationale is that compressed, normalized data requires less fetching from memory and computation on it is faster.
In contrast caching and buffering, both of which are heavily stateful and require a lot of additional memory, are often necessary, because they minimize inherently slow operations that are out of your control. Those kinds of things are often best implemented as (computational) objects with encapsulated, internal state and small, general interfaces, like OO has taught us.
But once the data is in your control, this mindset has to be flipped on its head. You want to model your in-memory data not that differently from how you'd model it for databases: neatly aligned, normalized data, with computed columns, views, and queries to get richer answers.
Interestingly, if you follow this approach, code starts to look more like functional code, because you potentially need the whole context to derive values from it, and less like independent objects that send messages to each other.
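As a rough illustration (hypothetical types and fields), the "normalized columns plus derived views" style might look like this in Rust:

```rust
// Minimal state: normalized "columns" only.
struct Orders {
    prices: Vec<f64>,
    quantities: Vec<f64>,
}

impl Orders {
    // A "computed column": derived on demand instead of stored.
    fn totals(&self) -> Vec<f64> {
        self.prices.iter().zip(&self.quantities).map(|(p, q)| p * q).collect()
    }

    // A "query" over the derived view.
    fn revenue(&self) -> f64 {
        self.totals().iter().sum()
    }
}

fn main() {
    let o = Orders { prices: vec![2.0, 3.0], quantities: vec![10.0, 1.0] };
    assert_eq!(o.revenue(), 23.0);
}
```

Nothing derived is cached; like a database view, it is recomputed from the normalized source each time.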
That should read, “plus one mechanism”. Another place where people get into trouble is thinking “a” means >= 1 instead of exactly one.
When the state has multiple entities trying and failing to manage it consistently is when things get bad. Functional programming tends to create more friction, which discourages keeping a lot of state; to some extent that controls the superlinear complexity of state management by making each piece dearer.
Yup. Rust can't abstract over mutability. For owned values, it defaults to exclusive ownership, and needs explicit Rc<T> and clone() to share them. For references, in practice it requires making separate `foo()` and `foo_mut()` functions for each type of the loan.
Even though this sounds horribly clunky, it works okay in practice. Ability to temporarily borrow exclusively owned objects as either shared or exclusive-mutable adds enough flexibility.
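A minimal sketch of that flexibility (function names made up): one exclusively owned value, temporarily lent out as either shared or exclusive-mutable.

```rust
// Shared borrow: read-only access, may be aliased.
fn total(xs: &[i64]) -> i64 {
    xs.iter().sum()
}

// Exclusive borrow: may mutate, guaranteed unaliased.
fn extend(xs: &mut Vec<i64>) {
    xs.push(1);
    xs.push(2);
}

fn main() {
    let mut xs = vec![0];          // exclusively owned
    assert_eq!(total(&xs), 0);     // temporary shared loan
    extend(&mut xs);               // temporary exclusive loan
    assert_eq!(total(&xs), 3);
}
```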
Rust is known for being difficult, but I think a lot of that is due to lack of GC. Rust can't make any reference live longer, and programmers have to manually get scopes of loans right, or manually use Rc<RefCell> to have a mini DIY GC. Perhaps a shared XOR mutable with a GC could be the best of both?
Rust quietly has several other features that improve the quality of life of its ownership model. Two examples: if you have a mutable reference, Rust will coerce it to an immutable reference when one is required; and when you call functions that accept mutable references, Rust will transparently re-borrow your mutable reference, allowing you to use it more than once despite the fact that mutable references do not implement Copy.
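Both quality-of-life features show up in a sketch like this (hypothetical helper functions):

```rust
fn read(x: &i64) -> i64 {  // expects a shared reference
    *x
}

fn bump(x: &mut i64) {     // expects a mutable reference
    *x += 1;
}

fn main() {
    let mut n = 0;
    let m = &mut n;
    bump(m);                // implicit re-borrow: m stays usable afterwards...
    bump(m);                // ...even though &mut i64 is not Copy
    assert_eq!(read(m), 2); // &mut coerced to & where a shared ref is needed
}
```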
However, I don't think a GC will catch on in Rust in its current form. Rust's userbase likes it as a no-GC language. Plus, a third-party library GC wrapper type can't save you from having to learn the ownership and lifetimes used by literally everything else. Once you invest the time to learn the "zero-cost" references, a runtime Gc is less appealing and won't be as ergonomic as the built-in references.
Swift, OCaml, and Mojo are trying to add some subset of Rust-like ownership and borrowing, but in a simpler form.
Since it's not needed and it's massively worse than reference counting (assuming you only change reference counts when essential and use borrowing normally) due to the absurd behavior of scanning most of the heap at arbitrary times, there is no Rust GC crate in widespread use.
> For example, let's say you're iterating over some structure and collecting your results in a sequence. The most efficient data structure to use here would be a mutable dynamic array and in an imperative language that's what pretty much everyone would use.
Well, no, this is straight confusion between what's expressed by the program and what's compiled. The idiomatic code in OCaml will end up generating machine code which is as performant as using a mutable array.
The fact that most programming languages don't give enough semantic information for their compiler to do a good job doesn't mean it necessarily has to be so. Functional programmers just trust that their compiler will properly optimize their code.
It gets fairly obvious when you realise that most OCaml developers switch to using arrays when they want to benefit from unboxed floats.
The whole article is secretly about Haskell and fails to come to the obvious conclusion: Haskell's choice of segregating mutations into special types and using monads was an interesting and fruitful research topic, but ultimately proved to be a terrible choice when it comes to language design (my opinion, obviously, not some absolute truth, but I think the many fairly convoluted tricks Haskellers pull to somehow reintroduce mutations support it). The solution is simple: stop using Haskell.
I get what you're trying to say, but that is provably false. As great as the OCaml compiler is, it currently is not capable of the aggressive optimizations that GHC can do with lists.
More often than not, the compiler mostly won't have enough static assertions to reliably generate machine code like that in a real world application (unless explicit mutation is used, of course).
> Functional programmers just trust that their compiler will properly optimize their code.
Precisely. This is why having safe local mutation as a language level feature can give more control to the programmer. We no longer have to rely on the compiler to correctly guess whether a routine is better expressed as an array or a cons list.
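Rust already treats this as a language-level guarantee; a sketch of the idea (function name invented): the mutation is purely local, so callers can treat the function as pure.

```rust
// Observably pure from the outside; internally fills a mutable buffer
// instead of building and reversing a cons list.
fn squares(n: u64) -> Vec<u64> {
    let mut out = Vec::with_capacity(n as usize); // local mutation, never escapes
    for i in 0..n {
        out.push(i * i);
    }
    out
}

fn main() {
    assert_eq!(squares(5), vec![0, 1, 4, 9, 16]);
}
```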
> The whole article is secretly about Haskell.
and ML, Koka, Clean, Mercury. The article is about allowing local mutation without breaking referential transparency at the language level.
"Stop using haskell" is a very shallow conclusion, IMO.
This cannot be true in general. There are machine code patterns for which arrays are faster than linked lists. The OCaml compiler, great as it is, won't turn linked list source code into array machine code. Therefore, there is idiomatic code in OCaml that will not be as performant as arrays.
> It gets fairly obvious when you realise that most Ocaml developers switch to using array when they want to benefit from unboxed floats.
This is one example why your statement above is not true.
You are misreading my comment. I’m not intentionally contradicting myself in two paragraphs next to each other (I’m not always the brightest but still).
The point is that, contrary to what the article states, ML developers are not avoiding mutations because they are hard to use, but because they trust their compiler when they know it will do a good job. Proof is that in other cases they will use mutations, when it makes sense to do so because the compiler does not do a good job.
The first paragraph refers to the specific case my comment quotes just before: data structure traversal and storage of elements in a set.
Why not? If the compiler can see that you have a short-lived local linked list and are using it in a way for which an array would be faster, why would it not do the same thing that an array would do?
"The fact that most programming languages don't give enough semantic information for their compiler to do a good job doesn't mean it necessarily has to be so. Functional programmers just trust that their compiler will properly optimize their code."
> Well, no, this is straight confusion between what's expressed by the program and what's compiled. The idiomatic code in OCaml will end up generating machine code which is as performant as using a mutable array.
I disagree with this. There are different ways to get close to the performance of `Array.map` with lists (best-case scenario you don't care about order and can use `List.rev_map`), but you will always have memory (and GC) overhead, and so lists are strictly inferior to arrays for the presented use case.
This forces me to optimize the F# code by thinking in terms of low-memory, mutable data structures when interfacing with external libraries.
I like to think we helped with this several years ago when making official guidance: https://learn.microsoft.com/en-us/dotnet/fsharp/style-guide/...
But this is not true, is it? I thought that Haskell's solution for side effects was just tagging the functions with side effects, and making the compiler handle functions with side effects as regular programs: no laziness, no memoization, no reordering. 'Monad' is just a mathematical structure that regular programs obey, and 'do notation' is just a style that allows Haskell programmers to write imperative-style code.
Come on, FP hackers, prove me wrong!
Was the article updated since you wrote this? I don’t see the text you’re referring to.
> Well, no, this is straight confusion between what’s expressed by the program and what’s compiled.
You’re getting at an important point here, but then you seem to fall into this same trap when you write:
> … the many fairly convoluted tricks haskellers pull to somehow reintroduce mutations
Monads started out as a way to represent the semantics of effects in a mathematical context, to support formal representations of the semantics of programming languages that were more tractable from the perspective of analysis and proofs. Even mainstream compilers ended up using related techniques, like static single assignment, for which an equivalence to continuation-passing style exists, and they did this for the same kinds of reasons: tractability of analysis and to support automated transformations.
The use of monads for writing ordinary code - as opposed to language semantics - in Haskell exploited these techniques, allowing effects to be expressed in a purely functional way. But at its root, this is a rigorous way of expressing scenarios that require effects, it’s not just some sort of “convoluted trick”. There are benefits to doing this that go beyond just a hack to implement effects in a pure language.
Which is why it’s unlikely that people who understand these issues will “stop using Haskell”, despite the learning curve barrier it seems to cause (arguably because people tend to learn to program in ad-hoc ways, which Dijkstra notoriously bemoaned.) But many of the most powerful languages have such a barrier, it just takes different forms depending on the nature of the language.
I mean, that statement is untrue, but even then I wasn't talking about monads here (monads are not a convoluted trick as far as I'm concerned). I was thinking of lenses.
> Which is why it’s unlikely that people who understand these issues will “stop using Haskell”
Plenty of people who value being able to analyse programs and do proofs don't use Haskell. The heart of the debate is: is being pure worth the cost?
While everyone has to trust their compiler to make reasonable optimizations to some extent, there is a certain point of complexity where it becomes difficult to intuitively know whether a "sufficiently smart compiler"[1] will properly optimize, which is problematic.
I realize you're arguing Haskell is worse than Ocaml in this regard, but I'd argue it's harder to reason about how functional code will be translated into machine code than comparable procedural code in general.
I think it's funny that they make you write linked list code as a metaphor for generators, but it seems like it should be the other way round.
(Also, it has exceptions which are a bad language feature, and typed throws which are a worse one.)
* https://hackage.haskell.org/package/bluefin-0.0.2.0/docs/Blu...
* https://hackage.haskell.org/package/bluefin-0.0.6.0/docs/Blu...
As an aside, I watched an Alexis King stream (which I can't now find) in which she did a deep dive into effect systems and said something along the lines of: algebraic effect systems should not change their behaviour depending on nesting order e.g. Either<State<>> vs State<Either<>>.
Does bluefin have a particular philosophy about how to approach this?
> I'll be sure to check it out
Great! If you have any questions or thoughts then feel free to file an issue on the repo (https://github.com/tomjaguarpaw/bluefin/issues/new).
https://hackage.haskell.org/package/mtl-2.3.1/docs/Control-M...
You're right, through a stroke of luck it's possible to use `ExceptT e ST r` and either handle the exception part first, to get `ST r`, or handle the ST part first, to get `Either e r`, so in that sense you can "mix" exceptions and ST. That doesn't work for all transformers, though. If you have `Stream (Of a) ST r` then you must consume the stream first. You can't run the ST part and get a pure stream `Stream (Of a) Identity r`. So in that sense ST can't be mixed with other effects. Bluefin does allow you to do that, though.
Compare ExceptT e (StateT m a) and StateT (ExceptT e m a): if you just want your computation to have state and exceptions the difference shouldn’t matter.
The downside in Tcl is that if you refactor some code suddenly you can add a new reference and drop into accidentally quadratic territory because now everything is being copied. This leads to some cute/ugly hacks that are only explainable in terms of interpreter implementation details, purely to reduce a refcount in the right spot.
    some_func $value[set value ""]

where the [set value ""] bit reduces the refcount. There's also a fairly widespread idiom of using the K combinator for this[1]:

    some_func [K $value [set value ""]]

It's one of those things that has become second-nature to people in-the-know, but is a total headscratcher otherwise.

[1]: https://wiki.tcl-lang.org/page/K#c2a6014c2d129837889d8a8000d...
I think Roc is doing this.
Granule has uniqueness types like Clean built onto a linear type system, which offers some additional advantages.
> The most efficient data structure to use here would be a mutable dynamic array and in an imperative language that's what pretty much everyone would use. But if you asked an OCaml programmer, they would almost certainly use a linked list instead.
I agree with this sentiment. However, OCaml does have mutable arrays that are both efficient and convenient to use. Why would a programmer prefer a list over them? In my opinion, the main benefit of lists in this context is that they allow pattern matching and inductive reasoning. To make functional programming languages more suited for array programming, we would thus need something like View Patterns for arrays.
A related issue is that mutation can actually be slower than fresh allocations in OCaml. The reason for this is that the garbage collector is optimized for immutable datastructures and has both a very fast minor heap that makes allocations cheap and expensive tracking for references that do not go from younger to older elements. See: https://dev.realworldocaml.org/garbage-collector.html#scroll...
> Unfortunately, this makes it impossible to use any standard functions like map on linear values and either makes linearity nearly useless or inevitably creates a parallel, incomplete universe of functions that also work on linear values.
You can implement polymorphism over linearity: this is done in Frank Pfenning's SNAX language and planned for the uniqueness types in a branch of OCaml.
> This might sound a little dangerous since accidentally holding on to a reference could turn a linear time algorithm quadratic
No, the in-place reuse optimization does not affect the asymptotic time complexity. But it can indeed change the performance drastically if a value is no longer shared since copies are needed then.
> A tracing garbage collector just doesn't give you this sort of information.
It is possible to add One-bit Reference Counts to a garbage collector, see https://gitlab.haskell.org/ghc/ghc/-/issues/23943
> for now even these struggle to keep up with tracing garbage collectors even when factoring in automatic reuse analysis.
I investigated the linked benchmarks for a while. The gap between Koka and Haskell is smaller than described in that initial comment, but a tuned GHC is indeed a bit faster than Koka on that benchmark.
> And if you only need mutation locally inside a function, using ST makes your code fundamentally more imperative in a way that really forces you to change your programming style. This isn't great either and doesn't exactly help with readability, so the mental overhead is rarely worth it.
What?? You are explicitly opting into writing mutation code. Of course that's going to change your programming style. It is expected to be different. Readability is increased because it clearly delineates a different coding style. And it's not even that different from other monadic code, even when you compare it to non-mutating monadic code.
Terrible article.
Discus language: http://discus-lang.org/
Thesis: https://benl.ouroborus.net/papers/2010-impure/lippmeier-impu...
The thesis is the more interesting of those two links IMHO. The intro is chapter 1 that starts at page 17 of the pdf. It has one of the better critiques of Haskell that I've seen, and explains why uncontrolled mutation is not the answer. Reference types ala ML aren't the answer either, in his view.
> For example, let's say you're iterating over some structure and collecting your results in a sequence. The most efficient data structure to use here would be a mutable dynamic array and in an imperative language that's what pretty much everyone would use.
> But if you asked an OCaml programmer, they would almost certainly use a linked list instead.
What? One of OCaml’s most notable features as a functional programming language is how it was designed to support mutation (see “the value restriction”, e.g. https://stackoverflow.com/questions/22507448/the-value-restr...) In my own OCaml programs I used mutation whenever appropriate (my only complaint would be that I wish there were a little more syntactic sugar around e.g. hash table access).
I wanted to like this post but it seems like low-effort clickbait.
Especially if the approach isn't really Copy on Write, but Copy only when someone might want to use the old value. Default to trying to mutate in place, if you can prove that is safe.
For most locals, that should be rather doable, and it would be a pretty big gain. For function parameters it probably gets hairy though.
Because it doesn’t do copy-on-read, you have to know whether there are references other than yours that can read the data. A single bit “at some time there were at least two references to it” doesn’t suffice, as it would mean you can’t detect when the last reference goes away, so it would leak memory (lots of it)
> A lot of the same benefit (but not all) should be available based on static analysis right?
That’s an (very important) implementation detail that makes reference counting perform reasonably well. You don’t want increase-decrease cycles in tight loops, for example.
What does "always copy" mean? What and why would you copy?
The author mentions linear types. This is a bit of a pet peeve of mine because, while very useful, linear types are not the concept that many people think they are and they are not implemented in rust (and neither are affine types). What rust implements is referred to as a "uniqueness type".
The difference has to do with how they deal with what linear-types people call "exponentials". A linear type, as the article mentions, is a type whose values must be "consumed" exactly once (where consuming means passing the value into a function, returning it, or sometimes destructuring it). Of course, in this mad world of ours we sometimes need to consume a value more than once, and indeed a language with only linear types would not be Turing-complete. This escape hatch is called an "exponential", I guess because exponential is kind of like the opposite of linear. A value with an exponential type can be used as often as you want. It is essentially most types in most programming languages.
If a function expects a value with a linear type, can you pass a value with an exponential type to it? The answer is that you can. Try this in Linear Haskell if you don't believe me. A function taking a value with a linear type just says "I consume this value exactly once, but you can pass me whatever you want". The restriction is that values with linear types can only be passed to functions that expect linear types. A value with a linear type must be consumed exactly once, so you certainly can't pass it to a function that expects a value with an exponential type, because it might use that value twice. In other words, a linear type is a restriction on the callee.
Those familiar with rust will notice that this is not how rust works. If a function takes T, and you have &T, you just cannot call that function. (Ignore Clone for now.) However, in the world of linear types, this would be allowed. This makes linear types not useful for the article's purposes, although they are still very useful for other things.
What rust wants is a constraint not provided by linear types. Where linear types are a restriction on the callee, rust wants to be able to restrict the caller. It wants to be able to restrict the caller in such a way that it can know that there are no other references to a variable that's been passed into a function. People call this a "uniqueness type" because you can say "I want this type to be 'unique'" (where 'unique' means not-aliased). Honestly this name doesn't make a lot of sense, but it makes a little more sense when you think of it in terms of references. If a reference is unique, it means that no other reference points to the same object (which is the requirement rust imposes on mutable references). So while a linear type allows you to pass a non-linear variable to a function that expects a linear one, rust doesn't allow you to pass a non-unique variable to a function that expects a unique one.
And adding this requirement to mutations resolves 90% of the issues that make mutability annoying. Mutability becomes challenging to understand when:
1. You have multiple references pointing to the same data in memory.

2. You change the data using one of these references.

3. As a result, the data appears to have changed when accessed through any of the other references.
This simply cannot happen when mutability requires values to not be aliased.
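A short Rust sketch of how the aliased-XOR-mutable rule blocks that sequence (the forbidden mutation is shown only as a comment, since it would not compile while the shared references are live):

```rust
fn main() {
    let mut data = vec![1, 2, 3];
    let r1 = &data; // multiple shared references: reading is fine
    let r2 = &data;
    assert_eq!(r1.len() + r2.len(), 6);
    // Calling `data.push(4)` here, while r1/r2 are still live, is rejected
    // at compile time, so step 3 above can never be observed.
    data.push(4); // allowed now: the shared borrows ended above
    assert_eq!(data.len(), 4);
}
```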
> Those familiar with rust will notice that this is not how rust works. If a function takes T, and you have &T, you just cannot call that function. (Ignore Clone for now.)
I think this is wrong. An exponential type in Rust is a type that implements `Copy`. The analogy in Rust is:

    fn linear_fun<T>(x: T) {
        // ...
    }

    fn main() {
        let foo = 5;
        linear_fun(foo);
        println!("{}", foo);
    }

And that compiles fine: `foo` is implicitly copied so that `linear_fun` owns its parameter. You can ignore `Clone`, but ignoring `Copy` destroys the premise, because without it Rust has no exponential types at all.
EDIT: I agree Rust solves the issue of mutability fairly well. Furthermore, I think practical linear types can be added to a Rust-like type system with Vale's (https://vale.dev/) Higher RAII, where a "linear type" is an affine type that can't be implicitly dropped outside of its declaring module.
I don't know if this is what Vale does, but to enforce "can't be implicitly dropped outside of its declaring module" in Rust I would add two changes:
- Whenever the compiler tries to insert implicit drop code for a linear type outside of its declaring module, it instead raises an error.
- Type parameters get an implicit `Affine` auto-trait, like `Sized`. If a type parameter is `?Affine`, the compiler will refuse to insert implicit drop code. Standard library generic parameters will be `?Affine` wherever possible, e.g. containers like `Vec` and `HashSet` will have `T: ?Affine`, but the methods that could implicitly destroy an element like `HashSet::insert` will have plain `T`.
It somewhat strains the analogy because Rust is implemented in a very elegant way (where references can be used multiple times because they implement Copy), but the analogy to exponentials in Rust would be references. Just imagine Clone and Copy aren't a thing, and that references have a special case that allows them to be used multiple times while owned values can be used at most once. The thing to note is that if you have an owned value, you can pass a reference to it to as many functions as you want (so long as those functions expect references). But if you have a reference, you can't pass it to a function that expects an owned value unless the type provides a way to make a copy.
You can imagine starting with this and then building up to Rust in a nice way. You first implement passing an owned value as a move. Then you add types that can be used multiple times because they are still valid after a move (the Copy trait). And then you make references Copy, since they meet that criterion.
I blame Haskell and the sudden fixation on absolute purity that manifested just as the VC-startup set decided it was the next secret sauce, to the point of literally redefining what "functional programming" meant in the first place.
I think that fixation has forced a lot of focus on solving for "how do we make Haskell do real work" instead of "how do we make programming in general more predictable and functional", and so the latter fight got lost to "well we bolted lambdas into Java somehow, so it's functional now".
Bullseye.
EDIT: Found this: https://github.com/HigherOrderCO/HVM1/blob/master/guide/HOW....
Let's use the fibonacci sequence as an example. Let's write it the classic, elegant way: f(n) = f(n-1) + f(n-2). Gorgeous. It's the sum of the two previous. With the caveat that f(n=0|1) = n. In Python:
    # fib for basic b's
    def fib(n):
        ## Base case
        if n == 0 or n == 1:
            return n
        return fib(n-1) + fib(n-2)
Right off the bat, performance is O(2^n): every call to f(n-1) will also need to compute f(n-2) anyway! It's a mess. But since Python passes arrays and dictionaries as pointers (cough, sorry! I meant to say references) it's super easy to memoize:

    # optimize-pilled memoize-chad version
    def fib(n, saved={}):
        if n in saved:
            return saved[n]
        if n == 0 or n == 1:
            saved[n] = n
        else:
            saved[n] = fib(n-1) + fib(n-2)
        return saved[n]
Okay, now our version is nearly as fast as the iterative approach. This is the normal pattern in most languages: memoizing otherwise "pure" functions is easy because you can reference a shared object using references, right? Even with multithreading, we're fine, since we have shared memory.
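For comparison, the same cache-threading idea sketched in Rust (names invented), where the shared mutable cache has to be passed explicitly as an exclusive borrow rather than hidden in a default argument:

```rust
use std::collections::HashMap;

// Memoized fib: the cache is an explicit &mut parameter, and the
// recursive calls implicitly re-borrow it.
fn fib(n: u64, saved: &mut HashMap<u64, u64>) -> u64 {
    if let Some(&v) = saved.get(&n) {
        return v;
    }
    let v = if n <= 1 { n } else { fib(n - 1, saved) + fib(n - 2, saved) };
    saved.insert(n, v);
    v
}

fn main() {
    let mut saved = HashMap::new();
    assert_eq!(fib(50, &mut saved), 12_586_269_025);
}
```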
Okay, but in Hoon, there are no pointers! Well, there kinda are. The operating system lets you update the "subject" of your Urbit (the context in which your programs run), and you can do this via the filesystem (Clay) or daemons (Gall agents, which have their own state kind of).
But to do this within a simple function, not relying on fancy OS features? It's totally possible, but a huge pain in the Aslan.
First, here's our bog-standard fib in Hoon:
|= n=@ud
?: (lte n 1)
n
%+ add
$(n (dec n))
$(n (sub n 2))
Now, I memoize on the way down, by calculating just f(n-1) and memoizing those values, to acquire f(n-2):

    :- %say
|= [* [n=@ud ~] [cache=(map @ud @ud) ~]]
:- %noun
^- [sum=@ud cache=(map @ud @ud)]
=/ has-n (~(get by cache) n)
?~ has-n
?: (lte n 1)
[n (~(put by cache) n n)]
=/ minus-1 $(n (dec n))
=/ minus-2
=/ search (~(get by cache.minus-1) (sub n 2))
?~ search 0
(need search)
:- (add sum.minus-1 minus-2)
(~(put by cache.minus-1) n (add sum.minus-1 minus-2))
[(need has-n) cache]
and that works in the Dojo:

    > =fib-8 +fib 8
> sum.fib-8
21
but it sure is easier in Python! And I'm not picking on Hoon here; it's just pure functional programming that makes you think this way - which as a hacker is fun, but in practice is kinda inconvenient. I even wonder how much faster I actually made things. Let's see:
> =old now
> =res +fib 18
> sum.res
2.584
> (sub now old)
1.688.849.860.263.936
:: now with the non-memoized code...
> =before now
> +fib 18
2.584
> (sub now before)
1.125.899.906.842.624
Ha! My super improved memoized code is actually slower! That's because building all those copies of the map costs more than just recurring a bunch. This math should change if I try to compute a bigger fib number... Wait. Never mind. My memoized version is faster. I tested it with the Unix time command. It's just that Urbit's Dojo has a weird way of handling time that doesn't match my intuition. Oh well, I guess I can learn how that works. But my point is, thinking is hard, and in Python or JS or C I only have to think in terms of values and pointers. And yes, that comes with subtle bugs where you think you have a value but you really have a pointer! But most of the time it's pretty easy.
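For what it's worth, here's the classic flavor of that "you think you have a value but you really have a reference" bug in Python - the mutable default argument (a contrived sketch, not from the thread):

```python
def add_row(table, row=[]):   # the default list is created ONCE, at def time
    row.append("x")
    table.append(row)

t = []
add_row(t)
add_row(t)
# Both entries alias the SAME default list, which has now been
# appended to twice:
print(t)  # [['x', 'x'], ['x', 'x']]
```

The fib(n, saved={}) trick upthread deliberately exploits exactly this behavior; it bites when you didn't intend the sharing.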
Btw, sorry for rambling on with this trivial nonsense - I'm a devops guy, so this is probably super boring and basic for all you master HN SWEs. But it's just a tiny example of the constant frustrations I've had trying to do things that would be super simple if I could just grab a reference and modify something in memory, which, for better or worse, is how every imperative language implicitly does things.
With Fibonacci numbers, you can just compute two of them outright:
def fib_(n: int) -> tuple[int, int]:
    # returns (fib(n), fib(n+1)), with fib(0) = 0 as elsewhere in the thread
    if n == 0:
        return (0, 1)
    prev, this = fib_(n - 1)
    return (this, this + prev)
def fib(n): return fib_(n)[0]
Now there is no mutable state that survives between function calls, and the performance is linear. With true memoization, though, accessing a previously computed value would be constant time.
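For comparison, the same pair trick unrolled into the iterative loop it's equivalent to (using the fib(0) = 0 convention from the earlier Python code):

```python
def fib(n):
    prev, this = 0, 1           # invariant: (fib(i), fib(i+1))
    for _ in range(n):
        prev, this = this, prev + this
    return prev

print(fib(8))   # 21
```

Same linear time, constant space, and still no mutable state surviving between calls - the "mutation" of prev and this is confined entirely inside the function.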
This is precisely what I did in my Hoon solution :) However, I wasn't aware that this approach is the standard way, and I'm glad to have learned that! Thanks
++ fib
|= a=@
^- @
~+
?: (lte a 1) a
%+ add
$(a (sub a 2))
$(a (sub a 1))
Try e.g. (fib 100) (don't try it without the ~+). The compiled code is itself memoizable at the VM execution level. This memo cache is transient within one system event (i.e., pressing enter after (fib 100) to get the result).
Thanks so much for this enlightening comment :3
Although I'm curious why Hoon doesn't just detect and cache identical computations by default. I guess it's a tradeoff, since using ~+ is more memory intensive, and you don't always want that either - especially since Urbit itself is already fairly memory intensive.
FP proponents spotted a small number of problems which arose in certain specific OOP implementations and stubbornly decided that OOP itself was to blame.
Some FP proponents have pointed out that passing around instance references to different parts of the code can lead to 'spooky action at a distance.' For example, if references to the same child instance are held by two different parent modules and they both invoke state-mutating methods on the child instance, it becomes difficult to know which of the two parent modules was responsible for the mutation of state within the child instance...
The mistake of FP proponents is that they failed to recognize the real culprit of the flaws that they've identified. In the case of 'spooky action at a distance', the culprit is pass-by-reference.
If you keep that in mind and consider what OOP pioneers such as Alan Kay have said about OOP ("It's about messaging"), it becomes an irrefutable fact that this flaw has nothing to do with OOP but is merely a flaw in specific implementations of OOP... Flawed implementations which neglected the messaging aspect of OOP.
To summarize it simply; with OOP, you're not supposed to pass around instances to each other, especially not references. What you're supposed to pass around are messages. The state should be fully encapsulated. Messages are not state, instances are state. Instances shouldn't be moved around across multiple modules, messages should.
If you approach OOP with these principles in mind, and make the effort to architect your systems in such a way, you will see it solves all the problems that FP claims to solve and it doesn't introduce any of FP's problems... Which are numerous and would require an entire article to enumerate.
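To make that concrete, a toy sketch of the principle (hypothetical names, not from any particular codebase): callers never hold a reference to the mutable state itself, only the ability to send messages.

```python
class Counter:
    """State is fully encapsulated; the only way in is a message."""
    def __init__(self):
        self._count = 0          # never handed out as a reference

    def receive(self, message):
        # Messages are plain data, not shared instances.
        if message == "increment":
            self._count += 1
        elif message == "report":
            return self._count

c = Counter()
c.receive("increment")
c.receive("increment")
print(c.receive("report"))  # 2
```

Two different parent modules can both send "increment", but neither can reach inside and mutate _count directly, so every state change goes through one auditable path.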
Svelte has these stores that are globally reactive. The reactivity is convenient, but the stores can also be globally updated. This could result in chaos that I wished to corral.
So I tweaked stores so they remained globally reactive, but could only be directly updated internally (from "inside the nation").
To update the state from outside the nation, an event (message) must be sent to the nation state, which handles the direct update.
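Translated out of Svelte into a generic Python sketch (hypothetical API, just to illustrate the shape): reads and subscriptions stay open to everyone, but the only write path is an event handled inside the store itself.

```python
class NationStore:
    """Globally readable and reactive, but only updated via events."""
    def __init__(self, value):
        self._value = value
        self._subscribers = []

    def subscribe(self, callback):    # reactivity: anyone may listen
        self._subscribers.append(callback)

    @property
    def value(self):                  # anyone may read
        return self._value

    def send(self, event, payload):   # the ONLY way to change state
        if event == "set":
            self._value = payload
            for cb in self._subscribers:
                cb(self._value)

store = NationStore(0)
seen = []
store.subscribe(seen.append)
store.send("set", 42)
print(store.value, seen)  # 42 [42]
```

Direct assignment to the state from outside simply isn't part of the surface; every mutation originates from a named event, which makes "who changed this?" answerable.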
It's interesting how the FP community took one issue which affects many OOP implementations and decided to throw the baby out with the bathwater... Completely ignoring the reason why OOP was invented in the first place.
I've worked on several FP projects. First of all, they're never pure FP projects, because that's just not possible. Those which came close to pure FP were a tangled mess of curried functions. Many modules had weird technical names with unclear responsibilities. Most features had to traverse many different files/functions, with lots of unrelated data passing through various modules: detailed child state had to be passed down from high up in the 'module' hierarchy and traverse many layers to finally reach the module which would actually use it. Higher-level modules exposed complex functions with complex input and output parameters (they had to concern themselves with the highly specific state requirements of child and parent modules, including configuration!) which had to be modified for every single high-level AND low-level requirement change... I'm not surprised that big companies using FP started adopting monorepos; FP makes monorepos necessary, because every small change requires updating a large part of the code hierarchy, since everything is intertwined with everything else (tight coupling). The higher-level components have to know exactly what state the child components expect, because the data comes from a central place and must be passed down.
The alternative approach would be to have each child sync itself with a global store; but then you no longer have a single owner for each piece of state. You might end up with different children with overlapping responsibilities updating the same part of the global state, creating 'spooky action at a distance' that makes it hard to trace where a change originated... This takes us back to the very reason why the FP community invented FP in the first place. In other words, FP doesn't solve the 'spooky action at a distance' problem, which was its entire raison d'être.
Frameworks like React had to invent a whole host of technical concepts and tools to help manage and track complex state changes... E.g. Redux, Saga, etc... But then we still end up with a situation where all our components are tightly intertwined with the specific implementation of that specific framework since the number of variations of how people use Redux/Sagas, etc... across different projects/companies is substantial. This means that components built for one project are generally not immediately compatible with components built for a different project. If the two projects use a different global store mechanism or they use the same mechanism but have different names for the same actions (to dispatch state changes), the components will not be compatible across different projects without a lot of refactoring to bring the component into alignment with both projects. This isn't modular! This isn't easy to read! It's not maintainable!