Rust does give you access to its internal data-structures on nightly. These change quite often, so you will need to update code that uses them pretty much every week.
Why don't many languages do this in some stable form? Because that would set the internal data-structures and APIs of the compiler in stone forever, which makes it infinitely harder to improve the compiler and implement new language features.
Then you have Eiffel, Smalltalk and Lisp variants.
Scheme and Lisp, for example, only expose their AST. If you only care about the AST, you can access it from stable Rust, and there are great libraries for working with it, doing AST folds, semi-quoting, etc.
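To make "AST fold" concrete, here is a minimal sketch in plain Rust over a toy expression tree. The `Expr` type and the `const_fold` pass are invented for illustration; they are not the API of any particular crate, just the shape of the transformation those libraries let you express:

```rust
// Toy expression AST with a constant-folding pass, illustrating the kind
// of bottom-up AST transformation ("fold") that AST libraries provide.
#[derive(Debug, PartialEq)]
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

// Fold: rebuild the tree bottom-up, collapsing constant sub-expressions.
fn const_fold(e: Expr) -> Expr {
    match e {
        Expr::Num(n) => Expr::Num(n),
        Expr::Add(a, b) => match (const_fold(*a), const_fold(*b)) {
            (Expr::Num(x), Expr::Num(y)) => Expr::Num(x + y),
            (a, b) => Expr::Add(Box::new(a), Box::new(b)),
        },
        Expr::Mul(a, b) => match (const_fold(*a), const_fold(*b)) {
            (Expr::Num(x), Expr::Num(y)) => Expr::Num(x * y),
            (a, b) => Expr::Mul(Box::new(a), Box::new(b)),
        },
    }
}

fn main() {
    // (2 + 3) * 4 folds to the single constant 20.
    let e = Expr::Mul(
        Box::new(Expr::Add(Box::new(Expr::Num(2)), Box::new(Expr::Num(3)))),
        Box::new(Expr::Num(4)),
    );
    assert_eq!(const_fold(e), Expr::Num(20));
}
```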
The OP wanted to work on the CFG graph. You could write a library to compute the CFG from the AST, but you don't have to because Rust exposes this as well. The CFG data-structures are much more tied to the intermediate representations of the Rust compiler, and these do change over time as new features are added to the language.
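At its core a CFG is just a directed graph over basic blocks, which is easy to sketch with only the standard library. The names here (`Cfg`, `reachable`) are invented for the example; rustc's real CFG types are far richer, and unstable:

```rust
use std::collections::HashMap;

// A control-flow graph as an adjacency list over basic-block ids.
// Purely illustrative; not the rustc data-structures.
#[derive(Default)]
struct Cfg {
    succs: HashMap<u32, Vec<u32>>,
}

impl Cfg {
    fn add_edge(&mut self, from: u32, to: u32) {
        self.succs.entry(from).or_default().push(to);
    }

    // Which blocks are reachable from the entry block? (Depth-first search.)
    fn reachable(&self, entry: u32) -> Vec<u32> {
        let mut seen = vec![entry];
        let mut stack = vec![entry];
        while let Some(b) = stack.pop() {
            for &s in self.succs.get(&b).into_iter().flatten() {
                if !seen.contains(&s) {
                    seen.push(s);
                    stack.push(s);
                }
            }
        }
        seen.sort();
        seen
    }
}

fn main() {
    // A diamond: entry(0) branches to 1 and 2, which rejoin at 3.
    let mut cfg = Cfg::default();
    cfg.add_edge(0, 1);
    cfg.add_edge(0, 2);
    cfg.add_edge(1, 3);
    cfg.add_edge(2, 3);
    cfg.add_edge(4, 3); // block 4 has a successor but is dead code
    assert_eq!(cfg.reachable(0), vec![0, 1, 2, 3]);
}
```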
Some tools do use the compiler-internal CFGs, though. For example, rust-clippy is a linter built on top of most of the compiler-internal data-structures, not only type checking but also CFGs. And rust-semverver, a tool that detects semver-breaking changes between the last released version of a library and the current one, is built on top of the type-checking data-structures and can deal with all kinds of generics (types, lifetimes, etc.).
These tools are typically tied to particular versions of the nightly compiler and do break often, but rust-clippy, for example, is distributed via rustup, so you always get a version that works with whatever nightly compiler you have, and you even get a version that "magically" works with the stable Rust compiler.
Other people have built all sorts of tools on top of this, from Rust interpreters to instrumentation passes that compute the maximum stack size requirement of embedded applications, e.g. by using the CFG to find the deepest possible call chain of a program and the size of the stack frames along it.
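The core of that stack-size analysis can be sketched in a few lines (all names and numbers here are hypothetical): if the call graph is a DAG (no recursion), the maximum stack requirement is the heaviest root-to-leaf path, weighted by each function's frame size:

```rust
use std::collections::HashMap;

// Hypothetical sketch of a "maximum stack size" analysis: weight each
// function by its frame size (in bytes) and take the heaviest call chain.
// Assumes the call graph is acyclic, i.e. no recursion.
fn max_stack(frame: &HashMap<&str, u64>, calls: &HashMap<&str, Vec<&str>>, f: &str) -> u64 {
    let deepest = calls
        .get(f)
        .into_iter()
        .flatten()
        .map(|callee| max_stack(frame, calls, callee))
        .max()
        .unwrap_or(0); // leaf function: nothing called from here
    frame[f] + deepest
}

fn main() {
    // Invented frame sizes for three functions.
    let frame = HashMap::from([("main", 64), ("isr", 128), ("log", 256)]);
    let calls = HashMap::from([("main", vec!["isr", "log"]), ("isr", vec!["log"])]);
    // main -> isr -> log is the deepest chain: 64 + 128 + 256 bytes.
    assert_eq!(max_stack(&frame, &calls, "main"), 448);
}
```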
1. Implementation-specific thing that has no formal equivalent. Structs of structs of structs, with business logic intermingled with the structure. Plenty of examples of this. I’m not suggesting anyone put these in the stdlib; that’d be silly.
2. ADT with no formalism behind it, that implements a particular set of behaviours “for best performance”, with no guarantee about the implementation or the time/space complexity, and a set of exposed operations that only state what they do, not how they do it, such that you can’t guess what algorithm a given operation is going to use.
Example of #2: Objective-C’s NSDictionary. It only guarantees that it “acts like” a dictionary; it doesn’t guarantee that it’s O(1), because being O(1) with a high constant can actually be worse for performance in some cases. So instead, it makes no guarantees, and tries to optimize for performance by actually switching between different implementations as the data held reaches different size thresholds.
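The representation-switching idea behind that NSDictionary behaviour can be sketched in Rust. The `AdaptiveMap` type and the threshold of 8 are invented for the example; the point is that the exposed operations say nothing about which representation is in use:

```rust
use std::collections::HashMap;

// Illustrative "adaptive map": small sizes use a plain vector (linear scan,
// but better constants), larger sizes switch to a hash table.
// The threshold is made up for the example.
const SMALL_LIMIT: usize = 8;

enum AdaptiveMap {
    Small(Vec<(String, i32)>),
    Large(HashMap<String, i32>),
}

impl AdaptiveMap {
    fn new() -> Self {
        AdaptiveMap::Small(Vec::new())
    }

    fn insert(&mut self, key: String, value: i32) {
        match self {
            AdaptiveMap::Small(v) => {
                if let Some(slot) = v.iter_mut().find(|(k, _)| *k == key) {
                    slot.1 = value;
                } else if v.len() < SMALL_LIMIT {
                    v.push((key, value));
                } else {
                    // Crossed the threshold: migrate to a hash table.
                    let mut m: HashMap<_, _> = v.drain(..).collect();
                    m.insert(key, value);
                    *self = AdaptiveMap::Large(m);
                }
            }
            AdaptiveMap::Large(m) => {
                m.insert(key, value);
            }
        }
    }

    fn get(&self, key: &str) -> Option<i32> {
        match self {
            AdaptiveMap::Small(v) => v.iter().find(|(k, _)| k == key).map(|(_, v)| *v),
            AdaptiveMap::Large(m) => m.get(key).copied(),
        }
    }
}

fn main() {
    let mut m = AdaptiveMap::new();
    for i in 0..20 {
        m.insert(format!("k{i}"), i);
    }
    assert_eq!(m.get("k0"), Some(0));
    assert_eq!(m.get("k19"), Some(19));
    // The caller never asked for this, but the representation has switched.
    assert!(matches!(m, AdaptiveMap::Large(_)));
}
```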
I’m not suggesting anyone put these in the stdlib either, if they currently live in a specific application, because this widens the scope of their stability requirements. Right now only the compiler maintainers can optimize this ADT into doing something entirely different, and then ensure that the compiler (the only consumer) is happy with the result; if the ADT were available in the stdlib, they wouldn’t be able to test all consumers, so they’d have to be much more conservative with their changes, lest they break an edge-case. (Changing how sorting an array ADT works, for example, can break code that assumed that sorting either nearly-sorted or entirely-randomized sets was optimal.)
And, of course, sometimes in a compiler the Control Flow Graph ADT will be of this type. If it is, fine, don’t export it. But usually—because of how compiler maintainers interact heavily with compiler theorists—it’s instead the third kind:
3. Formal data structures, where the name comes from a paper which defines the expected behaviour and a base-level implementation; where either that implementation, or another paper’s pure improvement on said implementation (without changing any of the semantics) is what you can expect to find. As well, the names of all algorithms implemented against the ADT are also well-defined in papers, such that you can know what algorithm you’re using by the name of the function. (Usually, in these cases, you’ll have multiple algorithms that do the same thing differently living as sibling functions in the same module.)
Examples of #3: the geometric primitives and index search-tree types that PostGIS exposes to C-level code. Any implementation of vector clocks. Or, my favourite: regular expressions (i.e. deterministic and nondeterministic finite automata.)
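As a taste of how small a faithful translation of such a formalism can be, here is a minimal vector clock in Rust: per-process counters, pointwise merge, and the happened-before partial order. This is a sketch of the formalism, not any particular library's API:

```rust
use std::cmp::Ordering;
use std::collections::HashMap;

// Minimal vector clock. The formalism fixes the semantics (per-process
// counters, pointwise merge, partial order) regardless of implementation.
#[derive(Clone, Default, Debug)]
struct VectorClock(HashMap<String, u64>);

impl VectorClock {
    // A local event on `process` increments that process's counter.
    fn tick(&mut self, process: &str) {
        *self.0.entry(process.to_string()).or_insert(0) += 1;
    }

    // On message receipt: take the pointwise maximum of the two clocks.
    fn merge(&mut self, other: &VectorClock) {
        for (p, &c) in &other.0 {
            let e = self.0.entry(p.clone()).or_insert(0);
            *e = (*e).max(c);
        }
    }

    // Partial order: Some(Less) means `self` happened-before `other`;
    // None means the two events are concurrent.
    fn compare(&self, other: &VectorClock) -> Option<Ordering> {
        let le = |a: &VectorClock, b: &VectorClock| {
            a.0.iter().all(|(p, &c)| c <= b.0.get(p).copied().unwrap_or(0))
        };
        match (le(self, other), le(other, self)) {
            (true, true) => Some(Ordering::Equal),
            (true, false) => Some(Ordering::Less),
            (false, true) => Some(Ordering::Greater),
            (false, false) => None, // concurrent
        }
    }
}

fn main() {
    let mut a = VectorClock::default();
    let mut b = VectorClock::default();
    a.tick("A"); // A: {A:1}
    b.tick("B"); // B: {B:1}
    assert_eq!(a.compare(&b), None); // concurrent events
    b.merge(&a); // B receives A's message: {A:1, B:1}
    b.tick("B"); // {A:1, B:2}
    assert_eq!(a.compare(&b), Some(Ordering::Less)); // a happened-before b
}
```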
Also, example to make the naming point: for a particular use-case, the use of a segment tree might be an optimization over use of an interval tree. But in code that uses these types of formal data structures, you’d never find an ADT named “IndexTree” that could happen to be either; instead, you’d expect separate SegmentTree and IntervalTree implementations, and then maybe a strategy-pattern ADT wrapping one or the other. But the concrete implementation of the formalism is very likely to exist, because people tend to just sit down and implement these things by translating pseudocode in papers into modules; and then tend to want things to stay that way, because keeping a 1:1 mapping from the module to the paper, and then linking the paper in module-doc, is one of the only good ways to get new maintainers to understand what the heck has been implemented here.
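For instance, a minimal sum segment tree really is just the textbook pseudocode transcribed. This sketch uses the standard iterative, array-backed formulation (illustrative, not taken from any compiler):

```rust
// Minimal sum segment tree: a binary tree over array positions giving
// O(log n) point update and range query, translated straight from the
// standard iterative formulation.
struct SegmentTree {
    n: usize,
    tree: Vec<i64>, // tree[1] is the root; leaves live at indices n..2n
}

impl SegmentTree {
    fn new(values: &[i64]) -> Self {
        let n = values.len();
        let mut tree = vec![0; 2 * n];
        tree[n..].copy_from_slice(values);
        for i in (1..n).rev() {
            tree[i] = tree[2 * i] + tree[2 * i + 1];
        }
        SegmentTree { n, tree }
    }

    // Set position i to `value` and repair the path up to the root.
    fn update(&mut self, mut i: usize, value: i64) {
        i += self.n;
        self.tree[i] = value;
        while i > 1 {
            i /= 2;
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1];
        }
    }

    // Sum over the half-open range [l, r).
    fn query(&self, mut l: usize, mut r: usize) -> i64 {
        let mut sum = 0;
        l += self.n;
        r += self.n;
        while l < r {
            if l % 2 == 1 {
                sum += self.tree[l];
                l += 1;
            }
            if r % 2 == 1 {
                r -= 1;
                sum += self.tree[r];
            }
            l /= 2;
            r /= 2;
        }
        sum
    }
}

fn main() {
    let mut st = SegmentTree::new(&[1, 2, 3, 4, 5]);
    assert_eq!(st.query(1, 4), 9); // 2 + 3 + 4
    st.update(2, 10); // values become [1, 2, 10, 4, 5]
    assert_eq!(st.query(0, 5), 22);
}
```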
It’s #3 that I would suggest is a good candidate for stdlib inclusion. These data structures don’t change in a way that breaks their guarantees, because their existence is predicated on a particular formalism, and nobody cares about improvements to the performance of a formalism that break the formalism. (Fun analogy: nobody would care about improvements to speed running a video game that required changing the code of the game. You’re trying to optimize that game, not some other one!)
You started arguing that the Rust compiler should expose its internal data-structures, which it does, and somehow ended arguing that the Rust standard library should expose spatial data-structures for geometry processing.
I have no idea how you got from one to the other, but writing a huge wall of text full of disorganized thoughts shows very little appreciation for the time of those participating in this discussion thread, so I won't interact with you anymore.
The standard library exposes an API for some of them, and you can access the internal data-structures of the Rust compiler by adding a line to your program to import them (requires unstable Rust) or by adding a line to your Cargo.toml to load them from crates.io.
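Concretely, the nightly "line in your program" is the rustc_private feature gate. The crate names below are the real rustc crates (installed with `rustup component add rustc-dev`), but everything behind them is explicitly unstable and changes between nightlies:

```rust
// Nightly-only: opt in to the compiler's internal crates.
#![feature(rustc_private)]

// These ship with the compiler itself, not from crates.io,
// and their APIs carry no stability guarantee whatsoever.
extern crate rustc_driver;
extern crate rustc_middle;
```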
The argument is flawed in assuming that these can be set in stone in any practical way. Rust binaries do not run on a whiteboard; they run on real hardware, which means that there are thousands of different ways to efficiently implement these data-structures depending on what your precise use case is, and most of them affect their API design. For example, Rust was lucky enough to be able to provide a hash table with a Google SwissTable-compatible API, while C++'s std::unordered_ containers were not, and cannot be upgraded.
The argument is also flawed in assuming that Rust is a static language. It isn't: the language is evolving, and as it evolves the requirements on the internal data-structures change, resulting in changes to the algorithms and data-structures being used, not only to their implementations.
For example, at the end of last year Rust shipped non-lexical lifetimes, which use a completely different borrow-checking algorithm, data-structures, etc. The old ones are simply not compatible with it.
Then there is also the fact that many people work on improving the performance of the Rust compiler. This means that the graph data-structures used are often not generic, but exploit the graph structure and the precise type of the graph nodes. As these data-structures are parallelized, made cache-friendly and allocation-free, and made to exploit other platform-specific features like thread-locals, atomics, etc., their APIs often change. Also, often somebody just "discovers" that there are better algorithms and data-structures for implementing one pass, and changes them.
Putting these in the standard library would incur a massive cost, since it would make them almost impossible to evolve inside the Rust compiler, add a huge maintenance burden, etc. And for what value? If you want to use precisely what the Rust compiler uses, you can already add a line to your program to import it from somewhere that makes it clear these are not stable. If you want general graph data-structures, chances are your graph won't look like the ones used by the Rust compiler. There are hundreds of crates for manipulating graphs on crates.io, depending on how big the graphs are, whether you can exploit some structure, the common operations you want to perform, etc.
Sure, on the whiteboard, all of them probably have the same or similar complexity-guarantees, but on real hardware constant factors make a big difference for real problems.