Lol, wut? What about race conditions, null pointers indirectly propagated into functions that don't expect null, aliased pointers indirectly propagated into `restrict` functions, and the other non-local causes of UB? Sadly, C's explicit control flow isn't enough to actually enable local reasoning in the way that Rust (and some functional languages) do.
I agree that Go is decent at this. But it's still not perfect, due to "downcast from interface{}", implicit nullability, and similar fragile runtime business.
I largely agree with the rest of the post! Although Rust enables better local reasoning, it definitely has more complexity and a steeper learning curve. I don't need its manual memory management most of the time, either.
Related post about a "higher-level Rust" with less memory management: https://without.boats/blog/notes-on-a-smaller-rust/
Aside from Rust's ownership model, you can use the type system to enforce certain things. A typical example is that Rust uses different string types to force programmers to deal with the pitfalls. It turns out that a file name in an operating system might not be valid Unicode, and valid Unicode text might not be a legal file name. Because Rust has separate types for OS strings and internal Unicode strings, going from one to the other means you need to explicitly deal with the errors or choose a strategy for how to handle them.
Now you could totally implement strings within Rust in a way that wouldn't force that conversion, and programmers would then yolo their way through any conversion, provided they even knew about the issue. And the resulting error would not necessarily surface where it originated. But that would be programming Rust like C.
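A minimal sketch of what that forced conversion looks like in practice (the file names here are made up; only the standard library is used):

```rust
use std::ffi::OsString;

fn main() {
    // An OsString may hold platform bytes that aren't valid Unicode.
    let name = OsString::from("report.txt");

    // into_string() forces the caller to handle the failure case:
    match name.into_string() {
        Ok(s) => println!("valid UTF-8 file name: {s}"),
        // Err hands back the original OsString, so nothing is lost.
        Err(raw) => println!("not valid UTF-8: {:?}", raw),
    }

    // Or pick a lossy strategy explicitly:
    let odd = OsString::from("data.bin");
    let lossy = odd.to_string_lossy(); // replaces bad sequences with U+FFFD
    assert_eq!(lossy, "data.bin");
}
```

The point is that both paths are visible in the code: either you match on the `Result`, or you opt into `to_string_lossy` on purpose.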
In my experience many C libraries will just happily gulp up any input of any remotely valid shape as if it were valid data, without many devs even being aware there were cases or conversions they would have had to deal with. You recognize exceptionally good C devs by the way they avoid those pitfalls.
And these skilled C devs are like seasoned mountaineers, they watch their every step carefully. But that doesn't mean the steep north face of the mountain is the safest, fastest or most ergonomic way to get to the summit. And if you believe that C is that, you should be nowhere near that language.
I remember the first time I was using gettext, wondering "wait, why do I have to switch the language for my whole program if I need it for just this request?" and realizing that's just how GNU gettext was designed.
And had the GNU/FSF not made C the official language for FOSS software in their manifesto, back when C++ was already the main userspace language across Windows, OS/2, Mac OS, and BeOS, that "It is the reason for C's endurance" effect would be much weaker than it already is nowadays, where it is mostly UNIX/POSIX, embedded, and some OS ABIs.
There's so many good high-level languages to choose from, but when you need to go low-level, there's essentially only C, C++, Rust. Maybe Zig once it reaches 1.0.
What we need isn't Rust without the borrow checker. It's C with a borrow checker, and without all the usual footguns.
Which would look a lot like... Rust!
We have many popular high-level languages, but I disagree that they are good. Most of them are fragile piles of crap unsuitable for writing anything larger than a throwaway script.
(In my subjective and biased assessment, which is however based on professional experience.)
I agree with the other comment. Then what you need is Rust without all the bells and whistles (pattern matching, Cow, Rc, Result, Option ...).
Would modules be needed, or could preprocessing still work? How much more advanced would the type system need to be? And how would pointers change to fix all the footguns and allow static borrow checking?
Zig is basically Modula-2 in C syntax clothing.
But that doesn't mean it's a good idea to use such style for PRs, lol.
Rust is certainly not the simplest language you'll run into, but C++ is incredibly baroque, they're not really comparable on this axis.
One difference which is already important, and which I think will only grow more important over time, is that Rust's Editions give it permission to go back and fix things, so it does. C++, by contrast, is like venturing into a hoarder's home: you keep tripping over things that were abandoned in favour of a newer, shinier alternative.
Additionally, given its ML influence, too many people enjoy doing Haskell level FP programming in Rust, which puts off those not yet skilled in the FP arts.
Also the borrow checker is the Rust version of Haskell burrito blogs with monads, it is hard to get how to design with it in mind, and when one gets it, it isn't that easy to explain to others still trying to figure it out.
Hence why from the outside people get this opinion over Rust.
Naturally those of us with experience in compilers, type systems theory and such, see it differently, we are at another level of understanding.
Perl is so bad about this that I once worked on a very old codebase in which I could tell approximately when it was written based on which features were being used.
Furthermore, some programmers really like complicated languages like Rust, Haskell, etc., while others like straightforward languages like Go, Python, etc.
I am a fan of Rust but it’s definitely a terse language.
However, there are definitely signs that they have thought about making it as readable as possible (for instance, by letting you omit things like lifetimes unless they need to be explicit).
I’m reminded also about a passage in a programming book I once read about “the right level of abstraction”. The best level of abstraction is the one that cuts to the meat of your problem the quickest; spending a significant amount of time rebuilding the same abstractions over and over (which is unfortunately often the case in C/C++) is not actually simpler, even if the language specifications themselves are simpler.
C codebases in particular, to me, are nearly inscrutable unless I spend a good amount of time unpicking the layers of abstractions that people need to write to make something functional.
I still agree that Rust is a complex language, but I think that largely just means it’s frontloading a lot of the understanding about certain abstractions.
(1) The "intimidating syntax". Hey, you do not even need to be using <$>, never mind the rest of those operators. Perl and Haskell can be baroque, but stay away from that part of the language until it is useful.
(2) "Changes are not localized". I'm not sure what this means. Haskell's use of functions is very similar to other languages. I would instead suggest referring to the difficulty of predicting the (time|space) complexity due to the default lazy evaluation.
FTA:
> In contrast, Haskell is not a simple language. The non-simplicity is at play both in the language itself, as evidenced by its intimidating syntax, but also in the source code artifacts written in it. Changes are not localized, the entire Haskell program is one whole — a giant equation that will spit out the answer you want, unlike a C program which is asked to plod there step by step.
Edited to make the critique more objective.
Another way of putting it: if you didn't care about backwards compatibility, you could greatly simplify C++ without losing anything. You can't say the same about Rust; the complexity of Rust is high-entropy, C++'s is low-entropy.
They didn’t do the best with what they had. Sure, some problems were caused by C backwards compatibility.
But so much of the complexity and silliness of the language was invented by the committee themselves.
Rust is a dead simple language in comparison.
Rust doesn’t have a standard, it has a book, so you should refer to the initialization section from Stroustrup’s C++ book to keep things fair.
The big weak spot really is lack of community outside of Apple platforms.
In Rust, for most users, the main source of complexity is struggling with the borrow checker, especially because you're likely to go through a phase where you're yelling at the borrow checker for complaining that your code violates lifetime rules when it clearly doesn't (only to work it out yourself and realize that, in fact, the compiler was right and you were wrong) [1]. Beyond this, the main issues I run into are Rust's auto-Deref seeming to kick in somewhat at random making me unsure of where I need to be explicit (but at least the error messages basically always tell you what the right answer is when you get it wrong) and to a much lesser degree issues around getting dyn traits working correctly.
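For what it's worth, the "compiler was right" experience often looks something like this contrived sketch (not from any real codebase):

```rust
fn main() {
    let mut scores = vec![10, 20, 30];

    // Take a reference into the vec...
    let first = &scores[0];

    // ...then try to push while that reference is still alive.
    // A push may reallocate the buffer and leave `first` dangling,
    // so the borrow checker rejects it:
    // scores.push(40); // error[E0502]: cannot borrow `scores` as mutable

    println!("{first}");

    // Once the reference is no longer used, mutation is fine again.
    scores.push(40);
    assert_eq!(scores.len(), 4);
}
```

The rejected line looks harmless, which is exactly the "it clearly doesn't violate lifetime rules" phase; the reallocation hazard is real, just not obvious.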
By contrast C++ has just so much weird stuff. There's three or four subtly different kinds of initialization going on, and three or four subtly different kinds of type inference going on. You get things like `friend X;` and `friend class X;` having different meanings. Move semantics via rvalue references are clearly bolted on after the fact, and it's somewhat hard to reason about the right things to do. It has things like most-vexing parse. Understanding C++ better doesn't give you more confidence that things are correct; it gives you more trepidation as you know better how things can go awry.
[1] And the commonality of people going through this phase makes me skeptical of people who argue that you don't need the compiler bonking you on the head because the rules are easy to follow.
† This phrase would have been idiomatic many years ago, but it is still used with the same intent today even though its meaning is no longer obvious. The idea is that a farmer at market told you this sack you can't see inside (a "poke") has a piglet in it, so you purchase it for a good price, but it turns out there was only a kitten in the bag, which (compared to the piglet) is worthless.
Fair, but this is relative. C++ has 50 years of baggage it needs to support, and IMO the real complexity of C++ isn't the language, it's the ecosystem around it.
I've heard this story attributed to Gauss, not Euler.
The earliest reference is a biography of Gauss published a year after his death by a professor at Gauss' own university (Gottingen). The professor claims that the story was "often related in old age with amusement and relish" by Gauss. However, it describes the problem simply as "the summing of an arithmetic series", without mention of specific numbers (like 1-100). Also, it was posed to the entire classroom - presumably as a way to keep them busy for a couple of hours - rather than as an attempt to humiliate a precocious individual.
It's a detail, but this is a little bit off. RAM latency is roughly around ~100ns, CPUs average a couple instructions per cycle and a few cycles per ns.
Then in the analogy, a stall on RAM is about a 10 minute wait; not quite as bad as losing entire days.
Take Apple's latest laptops. They have 16 CPU cores: 12 of them clocking at 4.5 GHz and able to decode/dispatch up to 10 instructions per cycle, and 4 of them clocking at 2.6 GHz; I'm not sure about their decode/dispatch width, but let's assume 10 as well. Those decoder widths don't translate to that many instructions per cycle in practice, but let's roll with it because the order of magnitude is close enough.
If the instructions are just right, that's 644 instructions per nanosecond (12 × 4.5 GHz × 10 plus 4 × 2.6 GHz × 10). Or, roughly a million times faster than the 6502 in the Apple II! Computers really have got faster, and we haven't even counted all the cores yet.
Scaling those to one per second, a RAM fetch taking 100 ns would scale to 64,400 seconds, which is about 18 hours, most of a day.
Fine, but we forgot about the 40 GPU cores and the 16 ANE cores! More instructions per ns!
Now we're definitely into "days".
For the purpose of the metaphor, perhaps we should also count the multiple lanes of each vector instruction on the CPU, and the lanes on the GPU cores, as if they were separate instructions.
One way to measure that, which seems fair and useful to me, is to look at TOPS instead - tera operations per second. How many floating-point calculations can the processor complex do per second? I wasn't able to find good figures for the Apple M4 Max as a whole, only the ANE component, for which 38 TOPS is claimed. For various reasons it's reasonable to estimate the GPU is the same order of magnitude in TOPS on those chips.
If you count 38 TOPS as equivalent to "CPU instructions" in the metaphor, then scale those to 1 per second, a RAM fetch taking 100ns scales to a whopping 43.9 days on a current laptop!
This scenario where all your 16 cores are doing 10 instructions per clock assumes everything is running without waiting, at full instruction-level and CPU-level parallelism. It's a measure of the maximum paper throughput when you're not blocked waiting on memory.
You could compare that to the maximum throughput of the RAM and the memory subsystem, and that would give you meaningful numbers (for instance, how many bytes/cycle can my cores handle? How many GB/s can my whole system process?).
Trying to add up the combined throughput of everything you can on one side and the latency of a single fetch on the other side will give you a really big number, but as a metaphor it will be more confusing than anything.
There are some things that feel a little weird, like the fact that often when you want a more complex data structure you end up putting everything in a flat array/map and using indices as pointers. But I think I've gotten used to them, and I've come up with a few tricks to make it better (like creating a separate integer type for each "pointer" type I use, so that I can't accidentally index an object array with the wrong kind of index).
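A rough sketch of that typed-index trick; the `NodeId`/`EdgeId`/`Graph` names are illustrative, not from any particular library:

```rust
// Distinct index types so a NodeId can't index the edge table by accident.
#[derive(Clone, Copy, PartialEq, Debug)]
struct NodeId(usize);
#[derive(Clone, Copy, PartialEq, Debug)]
struct EdgeId(usize);

// Everything lives in flat arrays; the "pointers" are typed indices.
struct Graph {
    nodes: Vec<String>,
    edges: Vec<(NodeId, NodeId)>,
}

impl Graph {
    fn node(&self, id: NodeId) -> &str {
        &self.nodes[id.0]
    }
    fn edge(&self, id: EdgeId) -> (NodeId, NodeId) {
        self.edges[id.0]
    }
}

fn main() {
    let g = Graph {
        nodes: vec!["a".into(), "b".into()],
        edges: vec![(NodeId(0), NodeId(1))],
    };
    let (from, to) = g.edge(EdgeId(0));
    // g.node(EdgeId(0)) would be a compile error: wrong index type.
    assert_eq!(g.node(from), "a");
    assert_eq!(g.node(to), "b");
}
```

Since the wrapper is a zero-cost newtype, you get the type checking without any runtime overhead over a plain `usize`.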
Rust is one of those languages that change how you think, like Haskell or Lisp or Forth. It won't be easy, but it's worth it.
For the other 10% software that is performance-sensitive or where I need to ship some binary, I haven't found a language that I'm "happy" with. Just like the author talks about, I basically bounce between Go and Rust depending on what it is. Go is too simple almost to a fault (give me type unions please). Rust is too expressive; I find myself debugging my knowledge of Rust rather than the program (also I think metaprogramming/macros are a mistake).
I think there's space in the programming language world for a slightly higher level Go-like language with more expressiveness.
Too bad the binaries are 60MB at a minimum :(
The worst part is the strange position `no-floating-promises` puts you in. Without it, Knex (some ORM toolkit in this codebase) can crash (segfault equivalent) the entire runtime on a codebase that compiles. With it, Knex's query builders will fail the lint.
It was confusing. The type system was sophisticated enough that I could generate a CamelCaseToSnakeCase<T> type but somehow too weak to ensure object borrow semantics. Programmers on the codebase would frequently forget to use `await` on something causing a later hidden crash until I added the `no-floating-promises` lint, at which point they had to suppress it on all their query builders.
One could argue that they should just have been writing SQL queries and I did, but it didn't take. So the entire experience was fairly nightmarish.
If it isn't catching it often enough, without a lot more extra type assertions, you may be missing a Typescript strict flag like noImplicitAny. (In general I find that I prefer the full `"strict": true` in the compilerOptions of tsconfig.json and always write for all strict checks.)
Also if your codebase is still relying on "explicit" `any`, try `unknown`.
Also, yeah, Knex doesn't look like the best ORM for a TypeScript codebase. TypeScript support in Knex is clearly an afterthought, and the documentation admits it:
> However it is to be noted that TypeScript support is currently best-effort.
Which is arguably the correct way to handle this situation anyway. It makes it unmistakably clear that you are intentionally throwing away a return value.
And then gave up in disgust.
Look, I'm no genius, not by a long shot. But I am both competent and experienced. If I can't make these things work just by messing with it and googling around, it's too damned hard.
(Of course they actually do want Haskell but they probably need to get there gradually)
> Even rust with all its hype lacks in this area imo.
This is surprising to me! I find that Rust has pretty excellent tooling, and Cargo is substantially better than the package manager in most other languages...
I think the author probably misread the numbers. If CPU executed 1 instruction every second, it would take just 1—2 minutes to read from uncached RAM, no need to be overly dramatic.
Overall, this reads to me like a very young programmer trying to convince themselves to learn Rust because he heard it's cool, not an objective evaluation. And I'm totally on board with that, whatever convinces you, just learn new things!
The Rust compiler does manage memory and lifetimes. It just manages them statically at compile-time. If your code can’t be guaranteed to be memory-safe under Rust’s rules, it won’t compile and you need to change it.
I use Rust now for everything from CLIs to APIs and feel more productive in it end to end than Python, even.
I know this is not the fault of the language, and that is unfortunate.
The author doesn't want manual memory management, but still decides to go with Rust.
> Rust... But it requires me to manage memory and lifetimes, which I think is something the compiler should do for me.
This was brilliant performance art. Bless your heart Dear Author, I adore you.
You can use Bun to compile to native binaries without jumping through hoops. It's not mature, but it works well enough that we use it at work.
I really wanted to like Rust and I wrote a few different small toy projects in it. At some point knowledge of the language becomes a blocker rather than knowledge of the problem space, but this is a skill issue that I'm sure would lessen the more I used it.
What really set me off was how every project turned into a grocery list of crates that you need to pull in in order to do anything. It started to feel embarrassing to say that I was doing systems programming when any topic I would google in rust would lead me to a stack overflow saying to install a crate and use that. There seemed to be an anti-DIY approach in the community that finally drew me away.
It's a byte string.
> rune is the set of all Unicode code points.
We copied the awful name from Go … and the docs are wrong.
Five different boolean types?
Zero values. (Every value has some default value, like in Go.)
Odin also includes the Billion Dollar Mistake.
> There seemed to be an anti-DIY approach in the community that finally drew me away.
It's a "let a thousand flowers bloom" approach, at least until the community knows which design stands a good chance of not being a regretted addition to the standard library.
If that creator's vibe happens to match yours this could be beautiful, at least for personal projects. It's hard to imagine this scaling. A triple A studio hiring panel: "You've applied for a job but we write only Jai here. We notice you haven't submitted any obsessive fan art about Jonathan Blow. Maybe talk us through the moment you realised he was right about everything?"
I think for some devs, if you import from the standard library, that somehow counts as DIY, whereas if you import from libraries that aren't distributed with the compiler, it's anti-DIY.
What's so damning to me is how debilitatingly unopinionated it is during situations like error handling. I've used it enough to at least approximate its advantages, but strongly hinting towards including a crate (though not required) to help with error processing seems to mirror the inconvenience of having to include an exception type in another language. I don't think it would be the end of the world if it came with some creature comforts here and there.
Rust seems low-level too, but it isn't the same. It allows building powerful high-level interfaces that hide the complexity from you. E.g., RAII eliminates the need for an explicit `defer` that can be forgotten.
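A small sketch of that point (the `TempFile` type and its path are made up; what matters is that cleanup is tied to scope, not to a statement you have to remember to write):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static CLEANUPS: AtomicUsize = AtomicUsize::new(0);

struct TempFile {
    path: String,
}

impl Drop for TempFile {
    // Runs automatically when the value goes out of scope,
    // even on early returns - there is no `defer` to forget.
    fn drop(&mut self) {
        println!("cleaning up {}", self.path);
        CLEANUPS.fetch_add(1, Ordering::SeqCst);
    }
}

fn main() {
    {
        let _tmp = TempFile { path: "/tmp/scratch".into() };
        println!("doing work");
    } // drop() fires here, after "doing work"
    assert_eq!(CLEANUPS.load(Ordering::SeqCst), 1);
}
```

The counter is only there to demonstrate that the destructor ran; in real code the body of `drop` would delete the file.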
It's fast, compiles to native code AND javascript, and has garbage collection (so no manual memory management).
As an added bonus, you can mix Haskell-like functional code and imperative code in a single function.
If I were to write such a list, the answer would probably come down to "because I wanted to pick ONE and be able to stick with it, and Rust seems solid and not going anywhere." As much as Clojure and Ocaml are, from what I've heard, right up my alley, learning all these different languages has definitely taken time away from getting crap done, like I used to be able to do perfectly well with Java 2 or PHP 5, even though those are horrible languages.
I think that of all those options, Typescript and Zig feel closest related. Zig has that same 'lightness' when writing code as Typescript and the syntax is actually close enough that a Typescript syntax highlighter mostly works fine for Zig too ;)
Out of all the languages I've used, Go programs are the ones that have the highest percentage chance of working "first try". I think that has a lot to do with the plain and strongly typed style.
Rust allows low level programming and static compilation, while still providing abstraction and safety. A good ecosystem and stable build tools help massively as well.
It is one of the few languages which managed to address a real life need in novel ways, rather than incrementing on existing solutions and introducing new trade offs.
regarding redefining functions, what could the author mean? using global function pointers that get redefined? otherwise redefining a function wouldn't affect other modules that are compiled into separate object files. confusing.
C is simple in that it does not have a lot of features to learn, but because of e.g. undefined behavior, I find it very hard to call it a simple language. When a simple bug can cause your entire function to be UB'd out of existence, C doesn't feel very simple.
In Haskell, side effects actually _happen_ when the pile of function applications evaluates to IO values, but you can think about it very locally; that's what makes it so great. You could get those nice properties with a simpler model (i.e. don't make the language lazy, but still have explicit effects), but, yeah.
The main thing that makes Haskell not simple IMO is that it just has such a vast set of things to learn. Normal language feature stuff (types, typeclasses, functions, libraries), but then you also have a ton of other special Haskell stuff: more advanced type system tomfoolery, various language extensions (some of which are deprecated now, or have better modern replacements, like type families vs functional dependencies), hierarchies of unfamiliar math terms that are essentially required to actually do anything, and then laziness/call-by-name/non-strict evaluation, which is its own set of problems (space leaks!). And yes, unfamiliar syntax is another stumbling block.
IME, Rust is actually more difficult than Haskell in a lot of ways. I imagine that once you learn all of the things you need to learn it is different. The way I've heard to make it "easier" is to just clone/copy data any time you have a need for it, but, what's the point of using Rust, then?
I wonder if the author considered OCaml or its kin. I haven't kept track of what's available, but I've heard that better tooling and better/more familiar syntax are there now. OCaml is a good language and a good gateway into many other areas.
There are some other langs that might fit, like I see nim as an example, or zig, or swift. I'd still like to do more with swift, the language is interesting.
I think the author means that the language constructs themselves have well-defined meanings, not that the semantics don't allow surprising things to happen at runtime. Small changes don't affect the meaning of the entire program. (I'm not sure I agree that this isn't the case for e.g. Haskell as well, I'm just commenting on what I think the author means.)
> IME, Rust is actually more difficult than Haskell in a lot of ways. I imagine that once you learn all of the things you need to learn it is different.
Having written code in both, Rust is quite a lot easier than Haskell for a programmer familiar with "normal" languages like C, C++, or Python. Haskell's purity is quite a big deal that ends up contorting my programs into weird poses; e.g. once you run into the need to compose monads, the complexity ramps way up.
> The way I've heard to make it "easier" is to just clone/copy data any time you have a need for it, but, what's the point of using Rust, then?
Memory safety. And the fact that this is the example of Rust complexity just goes to show what a higher level Haskell's difficulty is.
Composing monads is another one of those painful parts of Haskell. I remember being so frustrated while learning Haskell that there was all of this "stuff" to learn to "use monads" but it seemed to not have anything to _do_ with `Monad`, and people told me what I needed to know was `Monad`. Someday I wanna write up all the advice I wish I had received when learning Haskell. A _lot_ of it will be about dealing with general monad "stuff".
The thing that frustrated me in Rust, coming from something like Ruby, was how frequently I could not _do_ a very straightforward thing because, for example, some closure is `FnOnce` instead of `FnMut`, or the other way around, or whatever. Here's some of the experience from that time: https://joelmccracken.github.io/entries/a-simple-web-app-in-.... It became clear to me eventually that some very minor changes in requirements could necessitate massive changes in how the whole data model is structured. Maybe eventually I'd get good enough at Rust that this wouldn't be a huge issue, but I had no way of seeing how to get to that point from where I was.
In contrast, I can generally predict when some requirement is going to necessitate a big change in haskell: does it require a new side effect? if so, it may need a big change. If not, then it probably doesn't. But, I've found it surprisingly easy to make big changes from the nice type system.
I really don't get when rust folks claim "memory safety" like this; we've had garbage collection since 1959. Rust gives you memory safety with tight control over resource usage; memory safety is an advantage that Rust has over C or C++, but not over basically every other language people still talk about.
If you just clone/copy every data structure left and right, then you're at a _worse_ spot than with garbage collection/reference counting when it comes to memory usage. I _guess_ you are getting the ability to avoid GC pauses, but why not use a reference-counted language if that's the problem? Copying/cloning data all of the time can't be faster than the overhead of reference counting, can it?
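For what it's worth, Rust itself offers the reference-counted option, so "clone everything" is a choice, not a requirement. A small sketch of the difference (sizes are arbitrary):

```rust
use std::rc::Rc;

fn main() {
    // Deep clone: two full copies of the data.
    let a: Vec<u8> = vec![0; 1_000_000];
    let b = a.clone(); // copies a megabyte

    // Rc: one allocation, two cheap handles; only a refcount is bumped.
    let shared = Rc::new(vec![0u8; 1_000_000]);
    let alias = Rc::clone(&shared);

    assert_eq!(Rc::strong_count(&shared), 2);
    assert_eq!(b.len(), alias.len());
}
```

(`Arc` is the thread-safe equivalent, so opting into refcounting where sharing is needed, rather than everywhere, is the usual middle ground.)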
In haskell, I did find that once I understood the various pieces I needed to work with, actually solving problems (e.g. composing monads) is much easier. I don't generally have a hard time actually programming Haskell. All that effort is front-loaded though, and it can be hard to know exactly what you need to learn in order to understand some new unfamiliar thing.
Your preferring Rust over Haskell is totally fine BTW, I'm just trying to draw a distinction between something that's hard to _use_ vs something that's hard to _learn_. Many common languages are much harder to use IME; I feel like I have to think so hard all of the time about every line of code to make sure I'm not missing something, some important side effect that I don't know about that is happening at some function call. With Haskell, I can generally skim the code and find what's important quite quickly because of the type system.
I do plan to learn Rust at some point still whenever the planets align and I need to know something like it. Until then, there are so many other things that interest me, and not enough hours in the day. I still wonder if I have really missed out on some benefit from learning to think more about data ownership in programs.
Maybe if you want to skip all the off-by-1 errors, double frees, overflows, underflows, wrong API usage, you don't need to maintain multiplatform build environment, and you don't support multiple architectures.
I mean, in this sense, assembly is even easier than C. Its syntax is trivial, and if that would be the only thing that matters, people should write assembly.
But they don't write assembly, because it's not the only thing that matters. So please stop considering C only in terms of easy syntax. Because syntax is the only thing that's easy in C.
Eh.... yeah? I suppose technically? But not _really_. Rust gives you the option to do that. But most programs outside of "I'm building an operating system" don't really require thinking too hard about it.
It's not like C where you're feeding memory manually, or like C++ where you have to think about RAII just right.
What does that mean, and what is it about native programs (i.e. programs AOT-compiled to machine code) that makes them feel solid? BTW, such programs are often more, not less, sensitive to OS changes.
> realizing that I was just spawning complexity that is unrelated to the problem at hand
Wait till you use Rust for a while, then (you should try, though, if the language interests you).
For me, the benefit of languages with manual memory management is the significantly lower memory footprint (speed is no longer an issue; if you think Haskell and Go are good enough, try Java, which is faster). But this comes at a price. Manual memory management means, by necessity, a lower level of abstraction (i.e. the same abstraction can cover fewer implementations). The price is usually paid not when writing the first version, but when evolving the codebase over years. Sometimes this price is worth it, but it's there, and it's not small. That's why I only reach for low level languages when I absolutely must.
I'm a little late here, and as a Java user, most of the time people tell me:
1. They just want to ship a binary. Most are not aware of jpackage, but correct me if I'm wrong: that just makes installers, right? I'm hopeful that the "hermetic" work from Project Leyden will help here.
2. They frequently complain about Java’s memory usage, but don’t really understand how setting the heap size works and what the defaults are. I’m also hopeful that ZGC’s automatic heap sizing will solve this.
With those two features I think the view of Java will change, as long as there is good build tooling for them. It would be nice to make that the default, but that would break many builds.
You may be technically correct that they are more sensitive to the kernel interface changes. But the point is that native, static binaries depend only on the kernel interface, while the other programs also depend on the language runtime that's installed on that OS. Typical Python programs even depend on the libraries being installed separately (in source form!)
Many binaries also depend on shared libraries.
> while the other programs also depend on the language runtime that's installed on that OS
You can (and probably should) embed the runtime and all dependencies in the program (as is easily done in Java). The runtime then makes responding to OS selection/changes easier (e.g. musl vs glibc), or avoids less stable OS APIs to begin with.
TFA also concludes
Since I want native code ...
I think by "solid" they mean as close to the metal as possible because, as you suggest, one can go "native" with AOT. With JS/TS (languages TFA prefers), I'm not sure how far WASM's AOT will take you ... Go (the other language TFA prefers) even has PGO now on top of AOT. A JIT compiler compiles your code to machine code just as an AOT compiler does, so I don't think that's what's meant here (and they don't mean the level of the source code, because they consider Haskell to be "native").
... what? Speed is no longer an issue? Haskell and Go? ??? How'd we go from manual memory management languages to Haskell and Go and then somehow to Java? Gotta plug that <my favorite language> somehow I guess...
It seems to me you have a deep misunderstanding of performance. If one program is 5% faster than another but at 100x memory cost, that program is not actually more performant. It just traded all possible memory for any and all speed gain. What a horrible tradeoff.
This thinking is typical in Java land [1]. You see: 8% better performance. I see: 28x the memory usage. In other words, had the Rust program been designed with the same insane memory allowance in mind as the Java program, it'd wipe the floor with it.
[1]: https://old.reddit.com/r/java/comments/n75pa0/java_beats_out...
Because that's what's discussed in the article, which discusses Go and Haskell specifically.
> In other words, had the Rust program been designed with the same insane memory allowance in mind as the Java program, it'd wipe the floor with it.
No, it wouldn't. (I've worked with C and C++ for almost 30 years, including on embedded and safety-critical hard-realtime software, with Java for over 20, and I work on the HotSpot VM.) That's because tracing GCs convert memory to speed, but that's not the case for other memory management techniques.

To see why, look at a highly simplified view of tracing collectors (modern collectors don't quite work like that, but the idea generalises): when the heap is exhausted, live objects are traced and compacted to the "top" of the heap. The cost of each collection therefore depends only on the working set, i.e. the size of the objects that are still live. Because the working set is more or less a constant for a given program under a given workload, the larger the heap, the less frequent the collections (each of a constant cost), and so the cost of memory management with a tracing collector goes to zero as the heap size grows.

There are details in the actual implementations that are worse, and others that are better, than this idealised description, but the point is that tracing garbage collection very effectively converts RAM to speed, because its cost scales with the ratio working-set/heap-size. This is not the case for manual memory management or for primitive ref-counting GCs.
Of course, even when the cost of memory management is zero, there are still computational costs, but Java compiles to the same machine instructions as C for that kind of work (with some important caveats in certain situations that will soon be gone). It is true that even outside those specific areas you can, with significant additional effort, get a C program (or a program in any other low-level language) to be faster than a Java program, but that's due to the availability of micro-optimisations (which we don't want to offer in Java, so as not to complicate the language or make it too dependent on a particular hardware/OS architecture), and that effect isn't large.
Good luck to the author with trying Rust. I hope he writes an honest experience report.
> Technically, there are a lot more options, and I wrote a long section here about eliminating them piecewise, but after writing it I felt like it was just noise.
Uh? I am guessing OP maybe doesn't like virtual machines, because Java and C# sound like something that fits what they want. Both support AOT compilation now, though...
Also the assumption about Typescript to Wasm being not "solid" seems wrong.
I mean I find it super weird that the author's only option for "native typescript" is Rust.
I short-circuited reading this. Not because of narrow cases like:
"A searching or sorting utility function is called with an invalid pointer argument, even if the number of elements is zero"
But because of broad choices like:
"The execution of a program contains a data race"
"An object is referred to outside of its lifetime"
These are essentially categories of mistake we know programmers make, and in C the result is... Undefined Behaviour. No diagnostics, no exit, no errors, just throw your hands in the air and give up, anything might happen.
My personal memorable one was bit-shifting 32-bit values by varying amounts, and our test vectors all failing after a compiler update, because some of the shifts were by 32. Undefined behaviour.
Over the last year I’ve started to write every new project using it: on Windows, on Linux, and on Mac.
It is honestly a wonderful language to work with. It’s mature, well designed, and has a lot of similarities to Rust. It has incredible interop with C, C++, Objective-C, and even Java as of this year, which feels fairly insane. It is also ergonomic as hell and well understood by LLMs, so it is easy to get into from a 0 starting point.
Function calling is irksome, with implicit parameters and mandatory parameters vaguely mixing? And the typing is appalling - there are multiple bottom types with implicit narrowing casts, one of them being NSObject, so if you’re doing any work with the Apple APIs you end up with a mess.
We got it right with Java and Rust; C++ does a passable job; why Swift had to be as incomprehensible as typescript, I cannot fathom.
Also, how is its type system and metaprogramming? Does it have type polymorphism, typeclasses, macros, etc?
In terms of its language features it has all of those and more, sometimes too many in my opinion.
I personally favour languages that are clear in their vision. At its core, Swift is a highly performant language that is designed really well, has beautiful syntax, and in the last 5 or so years I have been impressed with its direction. It is being developed by a good team who listen to their community, but not at the expense of the language's vision.
My favourite aspect of using it, though, is its versatility. Whether you’re working on embedded systems, a game, a web server, or even a static site generator, it always feels like the language is there to support your vision, while still giving you the fine-grained control you need to optimise for performance.
I also collaborate with a friend who is a Rust developer, and he’s always super happy to work on a Swift project with me, so I feel like that’s enough praise when you can pull a Rust dev away from their beloved.