From a maintenance perspective, it's really appealing to have a lot of small, clearly defined passes so that you get good separation of concerns. But over time, they often end up needing to interact in complex ways anyway.
For example, you might think you can do lexical identifier resolution and type checking in separate passes. But then the language gets extension methods and now it's possible for a bare identifier inside a class declaration to refer to an extension method that can only be resolved once you have some static type information available.
Or maybe you want to do error reporting separately from resolution and type checking. But in practice, a lot of errors come from resolution failure, so the resolver has to do almost all of the work to report that error, stash the resulting data somewhere the next pass can get to it, and then the error reporter looks for that and reports it.
Instead of nice clean separate passes, you sort of end up with one big tangled pass anyway, but architected poorly.
Also, the performance cost of multiple passes is not small. This is especially true if each pass is actually converting to a new representation.
> For example, you might think you can do lexical identifier resolution and type checking in separate passes. But then the language gets extension methods and now it's possible for a bare identifier inside a class declaration to refer to an extension method that can only be resolved once you have some static type information available.
Some of this is language design though. If you make it a requirement that scope analysis can be done in isolation (so that it's parallelizable), then you design things like imports and classes so that you never have identifiers depend on types or other libraries.
I've been working on a language lately and thinking about passes - mostly where they run and what previous state they depend on - has been a big help in designing the language so the compiler can be parallel, cache-friendly, and incremental.
I miss getting to talk to you about music!
> Some of this is language design though.
Totally, but it's no fun to be stuck between users wanting some eminently useful feature and not being able to ship it because of your compiler architecture.
Nanopass had a DSL for describing what forms you delete from each intermediate language and which forms you add. Each pass was generally pretty small then, usually just replacing one form with some set of other forms.
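Not the actual Racket DSL, but the shape of the idea can be sketched in a few lines of Python (the names L0/L1 and desugar_let here are invented for illustration): each intermediate language removes or adds a handful of forms, and a pass is the small rewrite that accounts for the difference.

```python
# A toy nanopass-style step: L1 is L0 with the `Let` form removed,
# and the pass is a small rewrite that eliminates exactly that form.
from dataclasses import dataclass

@dataclass
class Var: name: str                                  # in L0 and L1
@dataclass
class Lam: param: str; body: object
@dataclass
class App: fn: object; arg: object
@dataclass
class Let: name: str; value: object; body: object    # in L0 only

def desugar_let(e):
    """L0 -> L1: rewrite (let x = v in b) as ((lambda x. b) v)."""
    match e:
        case Let(x, v, b):
            return App(Lam(x, desugar_let(b)), desugar_let(v))
        case Lam(x, b):
            return Lam(x, desugar_let(b))
        case App(f, a):
            return App(desugar_let(f), desugar_let(a))
        case _:
            return e

# (let x = y in x)  ==>  ((lambda x. x) y)
print(desugar_let(Let("x", Var("y"), Var("x"))))
```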
Because each pass was really small it was pretty easy to reorder them. Sometimes we figured out that if we moved one optimization earlier, later passes worked better. Or we realized some analysis was useful somewhere else so we rearranged the passes again. The pass ordering made it really clear which analysis results were valid at each point in the compilation process.
There's also the question of data about the trees (like a flow graph) being recomputed for each nanopass. Also expensive.
Admittedly, this framework has extensive metaprogramming (it's a Racket DSL), so it probably has much less redundancy, but is probably even harder to debug.
To an extent, it's impossible to implement nanopass in a way that's neither redundant nor confusing, without a projectional editor. Because either each pass's AST is fully redefined, which adds redundancy, or most passes' ASTs are defined in many places, which adds confusion. But my wild speculation is that one day projectional editors will gain popularity, and someone will make a compiler where one can observe and debug individual passes.
I'm creating a language/compiler now, and I'm quite certain that I did not have enough passes initially, but I hope I'm at a good spot now - but time will tell.
Also see this text: https://andykeep.com/pubs/dissertation.pdf
Bottlenecks are changing and it's pretty interesting.
Nanopass is a sequence of compiler passes, each with its own IR, each run once in a fixed order.
E-graphs also require the IR to be defined a specific way, which prevents some optimizations. My understanding of https://github.com/bytecodealliance/rfcs/blob/main/accepted/... is that Cranelift's e-graph optimizations are pure expression reordering/duplication/deduplication and rewrite rules.
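The acyclic e-graph itself is more involved, but the "pure rewrite rules over expressions" part has roughly this shape. Here's a toy Python sketch (the rules and names are illustrative, not Cranelift's) that commits to one rewrite at each step, instead of keeping all equivalent forms the way a real e-graph does:

```python
# Toy rewrite-rule optimizer: apply algebraic rules to an expression
# tree until nothing changes. A real e-graph keeps *all* rewrites in
# equivalence classes instead of committing to one, as this does.
from dataclasses import dataclass

@dataclass(frozen=True)
class Const: val: int
@dataclass(frozen=True)
class Var: name: str
@dataclass(frozen=True)
class Op: op: str; lhs: object; rhs: object

def rewrite(e):
    match e:
        case Op("*", x, Const(2)):          # x * 2  ==>  x << 1
            return Op("<<", rewrite(x), Const(1))
        case Op("+", x, Const(0)):          # x + 0  ==>  x
            return rewrite(x)
        case Op(op, l, r):
            return Op(op, rewrite(l), rewrite(r))
        case _:
            return e

def saturate(e):
    step = rewrite(e)
    return e if step == e else saturate(step)

# (v + 0) * 2  ==>  v << 1
print(saturate(Op("*", Op("+", Var("v"), Const(0)), Const(2))))
```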
https://blog.sigplan.org/2021/04/06/equality-saturation-with...
[1] The acyclic e-graph: Cranelift's mid-end optimizer https://news.ycombinator.com/item?id=47717192
The optimal number of passes/IRs depends heavily on what language is being compiled. Some languages naturally warrant this kind of architecture with a lot of passes.
Compiling Scheme for instance would naturally entail several passes. It could look something like the following:
Lexer -> Parser -> Macro Expander -> Alpha Renaming -> Core AST (Lowering) -> CPS Transform -> Beta / Eta Reduction -> Closure Conversion -> Codegen
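To give one of those passes some substance, here's a deliberately minimal call-by-value CPS transform over a lambda-calculus core, sketched in Python (a real Scheme compiler's version handles many more forms):

```python
# Minimal call-by-value CPS transform for a lambda-calculus core.
# cps(e, k) reads as: "evaluate e, then hand its value to continuation k".
from dataclasses import dataclass
from itertools import count

@dataclass(frozen=True)
class Var: name: str
@dataclass(frozen=True)
class Lam: params: tuple; body: object
@dataclass(frozen=True)
class App: fn: object; args: tuple

fresh = (f"%k{i}" for i in count())        # fresh continuation variable names

def cps(e, k):
    match e:
        case Var(_):
            return App(k, (e,))                        # values go straight to k
        case Lam((x,), body):
            kv = Var(next(fresh))                      # user lambdas gain a continuation param
            return App(k, (Lam((x, kv.name), cps(body, kv)),))
        case App(f, (a,)):
            fv, av = next(fresh), next(fresh)
            return cps(f, Lam((fv,), cps(a, Lam((av,), App(Var(fv), (Var(av), k))))))

# ((lambda x. x) y) with %halt as the top-level continuation:
print(cps(App(Lam(("x",), Var("x")), (Var("y"),)), Var("%halt")))
```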
I learned how to build a compiler using the nanopass approach and I still think it was a good way to learn, but I would not build another real compiler that way.
Building a lot of passes was great for initial development of the compilation pipeline, but was terrible for maintenance and relatively limited for creating optimizations. Bugfixes in an early pass would frequently require changes across several subsequent ones, especially if any change in representation were required.
About 8 months in, I adopted a Sea of Nodes[2] approach for the optimizer, type inference, and scheduling but kept my nanopass ingestion, allocation, and codegen passes. At the 12 month mark, the v1 compiler was functional but had limitations.
For the next leg of the project I decided to switch fully to Sea of Nodes. The final compiler was much better: orders of magnitude faster, more flexible, much easier to debug and test, with more features, and about the same amount of code.
Sea of Nodes takes the core idea of isolating passes to its logical conclusion by narrowing the scope of each change down to a single transformation on a single graph node. Operations are worklist based and can happen in any order, eliminating the pass ordering problem. The IR is a graph that follows simple and consistent semantics from parsing through all phases down to code generation. Going from 20+ IR dialects/sub-dialects to a single graph representation was a huge plus.
[1] https://github.com/pangloss/pattern
[2] https://github.com/SeaOfNodes/Simple
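A rough sketch of that worklist discipline, in Python (a toy, nothing like a complete Sea of Nodes engine such as Simple): every transformation is local to one node, and any user whose inputs change simply goes back on the worklist, so no global pass ordering is needed.

```python
# Toy worklist-driven rewriting in the Sea of Nodes spirit: each
# transformation is local to a single node, and nodes whose inputs
# change are simply requeued, so no global pass ordering is needed.
class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs, self.users = op, list(inputs), []
        for i in inputs:
            i.users.append(self)

    def peephole(self):
        """Return a simpler replacement for this node, or None."""
        if self.op == "+" and len(self.inputs) == 2 and self.inputs[1].op == "const0":
            return self.inputs[0]                     # x + 0  ==>  x
        return None

def optimize(nodes):
    work = list(nodes)
    while work:
        n = work.pop()
        better = n.peephole()
        if better is None:
            continue
        for u in list(n.users):                       # rewire users and requeue them
            u.inputs = [better if i is n else i for i in u.inputs]
            better.users.append(u)
            work.append(u)
        # (a real engine would also unhook n and delete dead nodes)

x, zero = Node("param"), Node("const0")
ret = Node("ret", Node("+", x, zero))
optimize([x, zero, ret.inputs[0], ret])
print(ret.inputs[0].op)                               # "param": the add folded away
```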
Also I doubt that a common framework can be used by many languages at all. Usually a mature language compiler is self-hosted, which makes it nearly impossible to incorporate a third-party library/framework written in some other language.
Define "very good".
The very simplest optimizations get you almost all the benefit. Proebsting's Law applies: "Compiler Advances Double Computing Power Every 18 Years."
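As a back-of-the-envelope check on what that doubling period implies per year (assuming, for comparison, hardware doubling roughly every 18 months):

```python
# Implied annual speedup from a doubling period (back-of-the-envelope).
def annual_gain(doubling_years):
    return 2 ** (1 / doubling_years) - 1

print(f"compilers (double every 18 yr):  {annual_gain(18):.1%}/yr")    # ~3.9%
print(f"hardware  (double every 1.5 yr): {annual_gain(1.5):.1%}/yr")   # ~58.7%
```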