Implementing each of these features has required whole-program refactorings in a large-scale codebase, performed by a few individuals while hundreds of other developers were simultaneously evolving the software and implementing new features.
Having been a C++ programmer for over 10 years, I can say that none of these refactorings would have paid off in C++ because of how time-consuming it would have been to track down all the bugs they introduced.
Yet they do pay off in Rust because of my favourite Rust feature: the ability to refactor large-scale software without breaking anything. If a non-functional change compiles, "it works" for some definition of "works" that's much better than what most other languages give you (no memory unsafety, no data races, no segfaults...).
Very few programming languages have this feature, and no low-level programming language except for Rust has it.
Software design is not an upfront-only task, and Rust lets you iterate on a design as you better understand the problem domain, or as the constraints and requirements change, without having to rewrite things from scratch.
(One language that does some of this is Erlang, but AFAIK the stdlib data structures like digraphs, sets-of-sets, etc. aren’t actually the ones the compiler uses, but are rather there for use by static verification tools like Dialyzer. Which means that the Erlang digraph doesn’t know how to topsort itself, even though there’s a module in the Erlang compiler application that does topsort on digraphs. Still feels like being a second-class citizen relative to the runtime’s favoured compiler.)
Rust does give you access to its internal data structures on nightly. They change quite often, though, so code that uses them needs updating pretty much every week.
Why don't many languages do this in some stable form? Because it would set the compiler's internal data structures and APIs in stone forever, which would make it far harder to improve the compiler and implement new language features.
That doesn't mean I wouldn't like it myself, but it would be a genuine drag on continued compiler development.
I think such a type would be less useful than you'd think, for precisely the same reason why a linked list in the stdlib is pretty much useless: almost always, you don't want the stdlib allocating nodes; you want an intrusive data structure instead. Really, the problem is that the compiler needs to store extra data with the nodes that the stdlib can't know about, but you don't want two separate types `stdlib::GraphNode` and `compiler::ControlFlowNode`, because you usually need to be able to convert between these types in both directions. (One direction can be handled by embedding one type in the other, but the reverse direction requires either the overhead of an extra pointer, or horribly unsafe pointer arithmetic.)
Of course in Rust, there could still be a digraph trait and the stdlib could still provide generic algorithms.
Though it's also not so rare in compilers that nodes are members of multiple graphs simultaneously (with different edges in each, e.g. control flow nodes are typically not just part of the control flow graph, but also belong to a dominator tree). It's non-trivial to create a graph abstraction that can handle all these cases while remaining efficient (you don't want to put nodes in a HashSet just to check whether a graph algorithm already visited them), so it's not surprising that compiler developers don't bother and just write the algorithm directly for their particular data structures. In the end, most graph algorithms are only about a dozen lines, much simpler than the abstractions that would be required to re-use them across completely different graphs.
By the same token, I'd like to see dynamic languages with a tracing JIT show which types are actually used at runtime. There was an article posted to HN a few years back where some researchers noted that most dynamic-language servers seem to go through an initial startup period with some dynamic-type shenanigans, then settle down to a state where the types are basically static.
C# has https://docs.microsoft.com/en-us/dotnet/csharp/programming-g... (only partially what you want I think)
I think crater also deserves some credit here - given how much Rust source is published on crates.io, it's very useful to be able to run a refactored compiler against all those packages and see which don't compile or start failing their unit tests.
Ada has had the same characteristics since '83. Thanks to strict typing (among other features), Ada programmers have enjoyed "safe" code refactoring for decades.
But it is nice that newer languages like Rust are finally picking up similar ideas and design choices.
Rust is still far from Delphi, Eiffel, .NET Native experience though.
Although it is great that it keeps improving.
That would be super interesting to read, because it is mainly a C++ compiler, and some C++ features like two-phase lookup and macros make it quite hard to do things in parallel. You have to do things in a certain order, but I suppose that if it's query-based as well, it will work.
The only option I know of is /CGTHREADS, but that only uses multiple threads for optimizations and code generation, which is something the Rust compiler has been able to do for a very long time (in rustc these are called codegen units, and LLVM supports them, so it is quite trivial for a frontend to do this as well).
But as far as I remember it isn't fully multi-threaded across all phases.
1. You get access to all of the JVM libraries.
2. You don't have to work within the confines of the borrow checker.
3. Kotlin “Common” targets three platforms: LLVM, JVM, and JS (whereas Rust only targets LLVM).
4. Kotlin is probably a more terse language, and suitable for doing algorithm / problem-solving interviews, and therefore a good one to be fluent in.
5. Kotlin's greater industry traction means that it might be more useful professionally. Outside of Android, I've also heard of servers/back-ends being written in Kotlin.
6. ANTLR is well-documented, compared to LALRPOP (the best existing Rust parser generator), and ANTLR targets/generates code for several mainstream languages (versus only Rust with LALRPOP). ANTLR is also probably a more useful skill to have for future jobs/projects.
But despite all the pluses of Kotlin, I'm still leaning a bit closer to Rust, because of all the good things I'm hearing about it.
The second-order effects are even worse: after a minute of waiting, the programmer will start thinking about other things, ruining flow.
If compiles regularly take 5 minutes, devs will leave their desks (and honestly, who can blame them?).
My editor (emacs) uses `cargo watch -x check -d 0.5` to run `cargo check` (which is blazing fast for incremental edits) to type check all the code and show "red squiggles" with the error messages inline.
So my interactive workflow with Rust is only edit-"type check"-edit-"type check" where "type check" takes often less than a second.
Asynchronously in the background, the whole test suite (or the part that makes sense for what I'm doing) is always being run. So if a test actually fails, I discover that a little bit later.
I don't know of any language for which running all tests is instantaneous. With Rust, if anything, I write fewer tests, because there are many things I don't need to check (e.g. what happens on out-of-bounds accesses).
This is the best workflow I've ever had. In C++ there was no way to run only type checking, so one always had to run the full compilation and linking stages, where linking took a long time, template instantiation errors could show up quite late, and, well, there were linker errors. I don't think I've ever seen a Rust linker error. They probably do happen, but in C++ they happened relatively often (at least once per week).
gcc has -fsyntax-only. Despite the option name, this also includes type checking and template instantiation. AFAIK it reports all compiler errors, though it skips some warnings that are computed by the optimizer (e.g. -Wuninitialized).
I never invoke clang or gcc directly. When using cargo, I use `cargo check` instead of `cargo build`. But in C or C++ depending on the project `make check` might not exist, or it might build all tests and run them, or do something else entirely like checking the formatting using clang-format.
-fsyntax-only on GCC, and IIRC Clang supports it as well.
If you use Emacs you can integrate it w/ flycheck.
Maybe doing `CXXFLAGS="$CXXFLAGS -fsyntax-only" CFLAGS="$CFLAGS -fsyntax-only" make ...`?
I've found it a helpful practice as a programmer to be intentional about this.
With a little awareness, you can identify situations where it really would be best to busy-wait while something compiles. (If the wait is not all that long and switching tasks harms focus.)
And with a little mental discipline and practice, you can train your mind not to wander. You don't totally blank your mind out, but you also don't let unrelated thoughts distract you. Just continue to think about the same thing you were when the compile started. Don't shift gears mentally, just ease up on the mental gas pedal.
It's so easy to let anxiety or guilt about wasting 2 minutes lead you into giving up focus, which is more precious. It's a false economy: doing something else with those 2 minutes is more of a temptation than a smart idea.
(Of course, it's better if the tools are just fast! But sometimes you can't have that.)
It's especially useful when only making minor changes to the code (which is pretty common).
If you compare the compiled artifact (rlib) of a crate with its source code, you'll quickly see that the compiled artifact is much larger: libglutin-0c732c31a1d003fb.rlib is 8.1 MB, while glutin-0.22.0-alpha1.crate is 53 KB.
Most people have shitty internet, and often there's nothing you can do about it because you have shitty ISPs. You can buy a computer with good CPUs, and those are usually cheaper in comparison than good internet for a year, at least in many rural areas of the US. And it's not just the US; some other countries have it even worse.
Now of course, if you have good internet and a bad CPU, it's a good deal, so there should definitely be an option to use it, maybe even with autodetection. But I think cargo depends too much on the internet, not too little. There should be no manual input required to turn off precompiled crate downloads when it is faster to compile the crates locally.
If I enjoyed compiling everything from scratch, I would be using Gentoo.
Will they have a trusted compile farm, only supporting a subset of targets, or involve some kind of distributed trust model supporting whatever people use? Will it be greedily populated with a specific subset of targets / features or lazily populated based on combinations people actually use?
Crates.io is already a security trainwreck in progress. Do we really need to add even more attack vectors?
It's also solving a non-problem. I modify source code downloaded from crates.io zero times per day, so I compile each crate only once. Compile times matter for code I write myself: I modify (and therefore compile) that code dozens of times per day.
The faster the Rust front end gets, the larger the share of overall compile time LLVM takes.
See https://github.com/bjorn3/rustc_codegen_cranelift#not-yet-su... for progress
For example, on macOS in debug builds, the compiler and linker are reasonably fast, but then 2/3rds of the compilation time is spent in "dsymutil", presumably chewing through megabytes of the debug info.
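If dsymutil dominates, one mitigation (assuming a reasonably recent toolchain; Cargo stabilized this around Rust 1.51) is the `split-debuginfo` profile option, where `"unpacked"` leaves debug info in the object files on macOS instead of packing a dSYM bundle:

```toml
# Cargo.toml - a sketch; check your Cargo version for supported values.
[profile.dev]
split-debuginfo = "unpacked"  # skip the dsymutil step on macOS
```

Debuggers can usually still find the unpacked debug info, at the cost of a less portable build artifact.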