- Cannot horizontally scroll the code snippets on the homepage when they overflow. The scroll bars appear, but swiping the snippet does nothing.
- Footer links are unresponsive (Loon, GitHub, and MIT Licence links).
- On the changelog page, scrolling causes the hamburger menu to hide release dates behind it.
- The hamburger close chevron looks misaligned (not sure whether this was a deliberate choice).
That said, I wish that part of Loon were less coupled to the allocation model. What made you opt for mandatory manual memory management in an otherwise high-level language? And for effects?
There are two things common in language design that, honestly, strike me as unnecessary:
1. manual allocation and lifetime stacking, and
2. algebraic effects.
On 1: I think we often conflate the benefits of Rust-style mutability-xor-aliased reference discipline with the benefits of using literal malloc and free. You can achieve the former without necessitating the latter, and I think it leads to a nicer language experience.
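To make the distinction concrete, here is a minimal sketch of the mutable-xor-aliased discipline on its own, separate from how memory gets freed. Rust is used for illustration only (and `demo` is a hypothetical name); the same borrow rule could in principle be enforced over GC-managed values.

```rust
// The aliasing discipline, independent of malloc/free:
// while a shared borrow is live, mutation is rejected.
fn demo() -> Vec<i32> {
    let mut scores = vec![1, 2, 3];

    let shared = &scores;   // shared (aliased) borrow
    let first = shared[0];  // reads through the alias are fine
    assert_eq!(first, 1);

    // scores.push(4);      // rejected HERE: mutation needs exclusive access

    // the shared borrow ends after its last use, so mutation is allowed now
    scores.push(4);
    scores
}

fn main() {
    assert_eq!(demo(), vec![1, 2, 3, 4]);
}
```

Nothing in that rule mentions when or how `scores` is deallocated, which is the point: the discipline and the allocation strategy are separable.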
It's just not true that GC "comes with latency spikes, higher memory usage, and unpredictable pauses" in any meaningful way with modern implementations. If anything, GC leads to more consistent latency (no synchronous Drop of huge trees at unpredictable times) and better memory use (because good GCs use compressed pointers and compaction).
On 2: I get algebraic effects as a mechanism for delimited continuations. But lately I've seen people using effects with no control-flow magic for everything. If you need to talk to a database, pick a database interface and pass an object implementing that interface to the code that needs it. Effects do basically the same thing, but implicitly.
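The explicit-interface alternative can be sketched in a few lines. This is a hedged illustration, not anyone's actual API: `Database`, `InMemoryDb`, and `load_user` are hypothetical names.

```rust
// The capability is an ordinary value with an ordinary type.
trait Database {
    fn get(&self, key: &str) -> Option<String>;
}

// A stand-in implementation for the example.
struct InMemoryDb;
impl Database for InMemoryDb {
    fn get(&self, key: &str) -> Option<String> {
        if key == "user:1" { Some("alice".to_string()) } else { None }
    }
}

// The signature names the capability, and the caller decides which
// implementation this code talks to. Nothing is ambient.
fn load_user(db: &dyn Database, id: u32) -> Option<String> {
    db.get(&format!("user:{id}"))
}

fn main() {
    let db = InMemoryDb;
    assert_eq!(load_user(&db, 1), Some("alice".to_string()));
    assert_eq!(load_user(&db, 2), None);
}
```

The wiring is explicit and boring, which is the feature: you can answer "who implements this?" by reading the call site.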
"Oh, it's not a global. Globals are bad. Effects are typed and blend into the function signature. Totally different and non-bad."
No. Typing the effects doesn't help: oh, sure, in Koka I can say that my function's type signature includes the "database connection" effect. Okay, that's a type. Where does the value backing that type come from? Thin air? No, the value backing an effect comes from the innermost handler, the identity of which, in a large program, is going to be hard to figure out.
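The "innermost handler" lookup can be sketched as a toy dynamic handler stack; all names here are hypothetical and this is deliberately simplified (real effect runtimes are far more sophisticated), but it shows where the ambiguity lives.

```rust
// Install a handler for the dynamic extent of `body`, then uninstall it.
fn with_handler<R>(
    stack: &mut Vec<&'static str>,
    handler: &'static str,
    body: impl FnOnce(&mut Vec<&'static str>) -> R,
) -> R {
    stack.push(handler);
    let result = body(stack);
    stack.pop();
    result
}

// The "perform" site: which handler answers is determined by the dynamic
// call stack at run time, not by anything visible in this function.
fn perform(stack: &Vec<&'static str>) -> &'static str {
    *stack.last().expect("no handler installed")
}

fn main() {
    let mut stack = Vec::new();
    let answered_by = with_handler(&mut stack, "outer-db", |s| {
        with_handler(s, "inner-db", |s| perform(s))
    });
    // Innermost wins, and you only learn which one that is by tracing
    // the whole dynamic path from main to the perform site.
    assert_eq!(answered_by, "inner-db");
}
```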
Like all global variables, the sorts of "effects" currently in vogue will lead to sadness at scale. Globals don't stop being bad when we call them something else: they're still bits of ambient authority that frustrate local reasoning. It's as if everyone started smoking again but called cigarettes "mist popsicles" and claimed that they didn't cause cancer.
There's no way around writing down names for the capabilities we give a program and propagating these names from one part of the program to another. Every scheme to somehow free us from this chore is just smuggling in ambient authority by another name. Ambient authority is seductive. At small scales, it's fine. Better than fine! Beautiful. Then, one day, as your program scales and its maintainership churns, you find you have no idea who implements what.
Software engineering develops antibodies against these seductions. The problem is that the antibodies are name-based, so when we dress up old, bad ideas with new names, we have to re-learn why they're bad.
P.S. "You're talking about dynamic-extent effects," you might object. "What about lexically-scoped effect systems? Those fix the problems with dynamic-extent effects."
Sure. Lexical effects are better. That's why every decent language already has a "lexically-scoped effect system": it's called let-over-lambda or, if you squint, an "object". We've come full circle.
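For the record, the let-over-lambda pattern looks like this (a hedged sketch; `make_greeter` is a made-up name). The closed-over binding plays the role of the handler, and its definition site is visible lexically rather than discovered at run time.

```rust
// "let-over-lambda": a let-binding captured by a closure acts as a
// lexically scoped handler for everything the closure does.
fn make_greeter(greeting: String) -> impl Fn(&str) -> String {
    // the "handler state" lives in this binding...
    let prefix = greeting;
    // ...and the lambda closes over it
    move |name| format!("{prefix}, {name}")
}

fn main() {
    let greet = make_greeter("hello".to_string());
    assert_eq!(greet("world"), "hello, world");
}
```

You can read off exactly which binding backs the capability by looking at the enclosing scope, which is the property the dynamic-extent version lacks.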
That was basically my intent with this project, but I took the laziest way to get there lol