In which the C++ committee continues to not acknowledge that its problem is being a committee, in the most ridiculously bureaucratic sense of that word.
If the only way to get my contributions accepted into a project involves writing a paper about it, sure, I can do that. If it involves writing a paper about it, and then having endless meetings about it that could have been emails, some of which I have to physically travel to, I can't be bothered. I've left actual paying jobs over that, I'm not doing it for free.
And sure, I'm an individual, and most of the people they're talking about here are representatives of companies. But the effort-to-results ratios still exist, and C++ has managed to tip them to the point that making an entire new language is less effort than proposing a C++ change.
C++ has painful experience of what happens when you don't carefully consider all proposed changes and so miss something. Export is an obvious example, but there are others that seemed good until painful experience later showed otherwise. Templates would look very different if they knew then what people would do with them.
If most people wrote papers that were so perfect in their construction that no one would ever need to ask questions about their content, as every relevant question would be answered by reading the paper, then there wouldn't be any need to shepherd them through meetings. But in my limited experience, most papers aren't like that. In the numerics study group, we had one paper at the most recent meeting that was so vague, we eventually decided we had no idea what the paper was actually proposing, so answering the question "would we like to move forward with this idea" was impossible. And with the author not being present... well, that's more or less the end of the road for that idea.
Making it easier to add more features to C++ can't fix the problem of being unable to simplify the language by removing unsafe and legacy features.
If they wanted C++, but only hated the committee process, they'd have forked the language and worked on compiler extensions (like WHATWG bypassed W3C process). But instead they all went for clean slate with some level of interoperability.
So no, they are not failing to acknowledge that. It's literally the point of the quote you're responding to.
The "look at this pretty place I got to go to" picture immediately after that section does nothing to help this impression.
The biggest memory safety problem for C is array overflows. I proposed a simple, backwards compatible change to C years ago, and it has received zero traction. Note that we have 20 years of experience in D of how well it works.
https://www.digitalmars.com/articles/C-biggest-mistake.html
It'd improve C++ as well.
I really do not understand why C adds other things, but not this, as this would engender an enormous improvement to C.
Using an ifdef to maintain source level compatibility doesn't work as two pieces of code will see the same function using different ABIs.
That said I agree entirely - the conflation of array and pointer is the biggest flaw, it's what "necessitated" the null termination error that people are so fond of calling the biggest mistake.
Before C++ got the STL, all collections libraries shipped with compilers used to have bounds checking enabled by default; apparently that was too much of a performance loss for the standard library.
Walter's proposal has bounds checking enabled unless explicitly disabled, like in any sane systems language.
1. convenience
2. attractive appearance
3. ubiquity
4. better error messages (because compiler knows what they are)
5. one construct instead of two (vector and span)
6. overflow behavior selectable with compiler switch
For example:

    #include <vector>
    std::vector<int> v;

vs:

    int[] v;
Many D users have remarked that it's the single best feature of D. C has no such thing that I am aware of.
I think C++ would have been better off with a closer equivalent to include_bytes! (Rust's compiler intrinsic masquerading as a macro, which gives back an immutable reference to an array with your data in it) - but the C++ language doesn't really have a way to easily do that, and you can imagine wrestling with a mechanism to do that might miss C++ 26, which is really embarrassing when this is a feature your language ought to have had from the outset. So settling on #embed for C++ 26 means it's done.
I was concerned that maybe include_bytes! prevents the compiler from realising it doesn't need this data at runtime (e.g. you include_bytes! some data but just to calculate a compile time constant checksum from it) but nope, the compiler can see it doesn't need the array at runtime and remove it from the final binary just as a C++ compiler would with #embed.
Also, I think C++ can still use the same preprocessor as C at this point (it's been a while since I've had to deal with that)? If you're going to diverge the preprocessor you should get more benefit out of doing so than "not having #embed". For that matter, having important features like #embed only available via preprocessor also helps undermine the pointy-haired trolls who (allegedly?) keep trying to deprecate the preprocessor entirely in favor of some proprietary build system.
I would also love to see a world where all C, C++ dependencies magically port themselves to Rust without FFI or a first-cut rewrite that a hobbyist did. 10-20 years maybe?
Unless:
> One of the concerns is that C and C++ are being discouraged for new projects by several branches of the US government[1], which makes memory safety important to address.
Reading these posts really does make it seem like C and C++ are a derided, ancient construct of better days when we trusted software engineers and didn't write code for connected systems. It's just not possible to go back to those times.
While I'm extremely interested in Rust, the ecosystem for my entire industry is based on C++ with no change in sight, and built on C operating systems. Because, to date, we write code that executes on a machine that is not taking input from a user, and so does not have the brand of security concerns that make Rust attractive (for the most part). Here, static analyzers get us what we need at the 80/20 level.
1. https://media.defense.gov/2022/Nov/10/2003112742/-1/-1/0/CSI...
And yes, it will either take decades to purge them from IT ecosystems, or they will finally get some #pragma enable-bounds-checking, #pragma no-implicit-conversions (yes, there are compiler-specific ways to get things like this), and similar, so that they can stay in the game of safe computing.
I really like your idea of building a language from multiple parts.
Or multiple DSLs.
Maybe you could have a DSL for scheduling the code, a DSL for memory management, a DSL for multithreading, and a DSL for security or access control, and the program is woven together with those policies.
One of my ideas lately is how to bootstrap a language fast. The most popular languages are multi-paradigm. What if the standard library could be written in an Interface Description Language and then ported to and inherited by any language via interoperability?
Could you transpile a language's standard library to another language? You would need to implement the low level functionality that the standard library uses for compatibility.
I started writing my own multithreaded interpreter and compiler that targets its own imaginary assembly language.
https://GitHub.com/samsquire/multiversion-concurrency-contro...
I like Python's standard library; it works. I really enjoy Java's standard library for data structures and threading.
Regarding the article, I hope they resolve coroutines completely. I want to use them with threads similar to an Nginx/nodejs event loop.
I tried to get the C++ coroutine code on GCC 10.3.1 working from this answer to my Stack Overflow post, but I couldn't get it to compile; I get "co_return cannot turn int into int&&".
https://stackoverflow.com/questions/74520133/how-can-i-pass-...
10-20 years sounds like a pipe dream.
PFR can be rewritten in very little code, assuming c++14(?); magic-enum is long enough to just use.
I generally have one TU for just serialization, and don't let PFR and magic-enum "pollute" the rest of my code. This keeps compile times reasonable. (The other trick is to uniquely name the per-type serializers: C++'s overload resolution is O(n^2).) I then write a single-definition forwarding wrapper (a template) that desugars down to the per-type-named serializers. It strikes a good balance between hand-maintenance, automatic serialization support, and compile-time cost.
https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2022/p12...
It makes me really sad reading the objections to pack indexing, as this library needs it a LOT (and currently, doing it with std::get<> or similar is pretty bad and does not scale at all past 200 elements in terms of build time, compiler memory usage, and debug build size).
It's already being used (for many years, in fact) to implement JSON serialization and deserialization in aeson without depending on Template Haskell (which is kind of like macros).
Isn't this a damning indictment of the language and everything that is wrong with it? How can something so simple be so hard?
CppFront is just like Carbon and Val, with a completely different syntax; translating to C++ is just an implementation detail. He just markets it in a different way given his position at ISO, most likely to avoid making too many waves.
Carbon is DOA as it hacks a compiler, and Circle isn't even in active development (again, if it compiled to C++ that would be a better direction).
At the same time putting ideas from Carbon to CppFront is possible (I wish Carbon developers would also think about going the preprocessing direction).
It's not intended as a separate language.
This can be done by compilers without any change to the language standard.