printf("Number %d, String %s", n, s);
But if the types of n and s aren't int and char*, all hell breaks loose, and you have the origin of a million CVEs. So how do you make that function signature type-safe without the tools of modern templates? You sort of can't. One thing you can do is string appending, but then the syntax is annoying: print("Number " + to_string(n) + ", String " + s);
This also creates a bunch of temporary strings, which is not ideal (and still relies on operator overloading, btw). In this world, using operators for this does make some sense: std::cout << "Number: " << n << ", String " << s;
Like, seen from this perspective, it's not the worst idea in the world; it does work nicely. The syntax really isn't too bad, IMHO (the statefulness part, though, really sucks). It also allows you to add formatting for your own types: just overload operator<<! Clearly, properly type-safe std::format is vastly superior, but the C++ of the 90s simply didn't have the template machinery to make that part of the standard library.
Both languages will have bidirectional compatibility with C++, so that code written in C++ can be directly accessed from Carbon and code written in Carbon can be directly accessed from C++. Neither of them uses an FFI for compatibility.
std::cout << "Player is at " << _player_pos;
And get the x/y/z breakdown automatically. With fmt::println("Player is at {}", _player_pos); it becomes cumbersome very fast.
Anyway I have no idea if cpp2 would get support from microsoft or other devs, but cpp2 seems like the most humble and "least risky" solution for the future of C++, and I really want it to be.
What I remember most from Herb Sutter's cpp2 talk is that it aims to avoid 95% of the bad coding practices that C++ allows today.
It's safe to say that, beyond the valid criticism of C++, that's quite a good goal, and it would improve C++ without requiring a new language. That's good, because a new language causes problems: new toolchains, new semantics, new specifics, and no existing experience with the language.
Cpp2 is not a new language: it has the same semantics as C++, but with a new syntax that enforces good practices.
One very interesting point: in the future, cpp2 would allow a cpp2-only compiler to be born, and it would still live next to C++ binaries without problems. That cpp2 compiler would probably be much faster, since cpp2 is a smaller, stricter subset.
Carbon is a brand-new language that's designed to have seamless interoperability with C++.
Cpp2 is C++, just with a different syntax. (The goal is to even be able to mix Cpp2 and C++ syntax in the same source file. The initial implementation transpiles Cpp2 to C++ source code, similar to how the original C++ implementation transpiled C++ to C, rather than implementing a full compiler.)
Their overall goals appear to be different, too: The Carbon README states, "Existing modern languages already provide an excellent developer experience: Go, Swift, Kotlin, Rust, and many more. Developers that can use one of these existing languages should." With that mindset, it sounds like more of a stopgap for projects already invested in C++. In contrast, I believe that Sutter would argue that C++ still has the potential to be a viable choice for many tasks, and Cpp2 is a way to realize that.
Doesn’t he work for Microsoft?
That ship has sailed. They already have C#.
Btw, Microsoft is definitely interested in adopting new languages; just look at all the effort they've been pouring into Rust lately.
For example, string interpolation:
"Hello, (msg)$!\n"
Why “(msg)$” and not “$(msg)”? Surely the latter is easier to parse?

Maybe it's not much extra workload, but ()$ requires you to pattern-match all bracketed content in a string as a possible capture, and the parser can't determine whether something is a capture or not until it reaches the $ or a matching close bracket. Consider "((x)$)": parsing it requires reading the entire string to determine that the first bracket wasn't a capture; then we can treat just the first character as a literal, re-parse the rest of the string, and still can't be sure it's a capture until the $, at which point we can evaluate it as a real expression and continue with the final ).
Other interesting cases would be "(x+(x)$)" or even "((x+(x)$)$)". I'm not sure I could easily predict the parsing of the latter, but the prefix form, with unbracketed single-word variable names permitted, gives "(x+$x)", which is clearer; the second would be either "(x+$(x)$)" or "($(x+$(x)))", both of which are explicit in the prefix form and could simplify to "(x+$x$)" and "($(x+$x))", where the intent seems clearer.
There's the precedent that many languages already use the prefix form which would help newcomers with familiarity, and in fact many of these languages wouldn't even need the brackets and $var would be sufficient, and I can't really see why the spec requires the brackets in the ()$ syntax in string literals but not for expressions in code.
Rust for example has a single `move` syntax for all-capture vs. no-capture toggle, e.g. `|x| x + foo` (`foo` is stored as a reference) vs. `move |x| x + foo` (`foo` is moved into the closure). While I do want an additional mode for uniformly applying specific methods (typically `.clone()`) for captured elements, that is almost enough for typical closures.
Also, if my reading of the documentation is correct, `$` has to be attached to each occurrence of captured elements. Like, `:(i) = i + foo$ * (foo$ + 1)`. Doesn't that look strange? It is even possible to mix two variants of captures like `:(i) = i + foo$ * (foo&$* + 1)`, and it's not entirely obvious to me what will happen to `foo$` when `foo&$*` is updated. Treating these "upvalues" as a sort of an implicit structure (e.g. `$.foo`) is much more consistent, and a prefix form `$foo` can be regarded as its shorthand.
One thing I've noticed recently is that pretty much no language has a good way to simultaneously define a nested structure and assign that structure a name "for later reuse".
For example -- suppose I'm dealing with some serialized data structure that comes from some external system. Very likely the data model behind this value involves "nested values" which themselves have some type that might be reused in multiple places by that external system.
When the goal is just to solve problems, the approach I like to take is to focus on the values I want to consume and produce -- which might themselves contain lots of nested types, each with some amount of reuse ...
I'd really like a language feature that supports simultaneously defining a type where it's relevant within some other data structure _and also allows_ giving that embedded thing a name for independent reuse ...
I wonder if this postfix $ syntax is related to that use case at all ...
(... this comment is speculation based on the names of things only, without even reading the whole article ...)
The priorities of a programming language syntax are to be readable first, consistent second.
When it comes to syntax that is used more frequently, the importance of readability increases and the importance of consistency drops, because people will become familiar with the syntax through frequency of use, so there is no need for them to be guided by consistency. Yet they will be reading the code more often.
When it comes to syntax that is used infrequently then consistency is of more importance because you want users to be able to intuit the syntax. Since it will be infrequently used, the readability is of lesser impact.
> BufferSize: i32 == 1'000'000;
So "value : i32 = 10" is variable, but "value : i32 == 10" is a constant.
The difference is so subtle I'm not sure I like it.
Later in the documentation you can find "equals: (a, b) a == b;", which is a function, but it feels like I need to decipher it because "==" does not mean an alias in this case.
Returning to the example of "equals: (a, b) a == b;", it also feels odd to omit the braces, since they are enforced even for if/else branches.
I have to admit that everything was interesting until the "Summary of function defaults" part.
[It is probably worth watching for the Compiler Explorer interjection toward the end alone -- Matt Godbolt Appreciation Society]
Carbon is (was?) a fantastic proposal, but I'm not sure whether it has lost steam since it was introduced, or how well it is being adopted (be it inside Google or outside).
Being able to incrementally/interchangeably use/call existing C++ code (and vice versa) seems like a great design choice (in Carbon) without having to introspect the actual generated code.
Not sure how easy it is to get the cppfront-generated C++ to bridge with existing C++ code (and vice versa)?
I don't think anyone outside Google will seriously adopt this before it reaches v1.0. Even within Google, they may choose other options.
[0] - https://github.com/carbon-language/carbon-lang/blob/trunk/do...
At some point, keeping C++ semantics matters: having different semantics would obviously prevent reusing previous C++ codebases, or make it more difficult to get them to work together, and that may be why Carbon would not be a good choice.
Cppfront, Herb Sutter's proposal for a new C++ syntax - https://news.ycombinator.com/item?id=32877814 - Sept 2022 (545 comments)
and also Cppfront: Autumn Update - https://news.ycombinator.com/item?id=37719729 - Sept 2023 (8 comments)
I actually wish the ++ and -- operators were removed. This would simplify everything: nothing to remember about whether it's a prefix or postfix operator, or whether it copies something or not. You would just write "value += 1" and be done with it.
- Less mental overhead.
- Remove an extra way of doing the same thing.
There's no way to fix this in a reverse-compatible way for existing code (which is one of the constraints of cpp2: it must work with all existing C++ so that existing projects can migrate regardless of size).
I mean, there are people that already think that...
A few of my friends and I did Advent of Code in cpp2 this year and it was a (very buggy) blast.
You mean buggy ironically because you were discovering cpp2, or because cpp2 itself was buggy?
My takeaway from the exercise is that this is not a language for human beings. I was successful in writing it, but it was extremely difficult and frustrating. Part of the frustration is that what I wanted to accomplish was not conceptually difficult, but figuring out how to express it was a nightmare. I am not new to the language: I've been writing C++ since 2009, it was the first language I learned, and I've spent nearly every day of my life since then writing at least some C++ code. Even so, I can't say that I truly understand this shit.
I'm hoping cpp2 brings us someplace closer to a language that mere mortals can understand. I don't want the next generation writing C++.
Are you sure the problem is the language itself, and not the inherent complexity of that kind of metaprogramming? As far as I'm aware, Lisp is the language with the cleanest support for such metaprogramming, yet metaprogramming-heavy Lisp code is still quite hard to read. I'm not aware of a programming language in which a compile-time ECS would be easy to write/read.
Cpp2 isn't an alternative syntax for C++ any more than C++ and Objective-C are alternative syntaxes for C, even though they support a subset of it and were born exactly the same way, as code translators into C.
C didn't evolve into them, they became their own ecosystem, tainted by the underlying C compatibility.
The only alternative that is really a Typescript for C++, is Circle.
This is a step in the wrong direction.
Contracts are all about introducing undefined behaviour if you don't satisfy a precondition.
In practice this improves software quality on many levels by clearly defining requirements on interface boundaries that would otherwise be implicit or just documented.
Of course you can have special debug modes where you actually check that contracts are being satisfied.
Cppfront generates #line pragmas which tell the generated .cpp file which source lines to "blame" for each piece of generated code. This isn't something new and fancy for cppfront, it's a bog-standard pragma that your debugger already understands. So it will work the exact same as your current debugging workflow even if you mix cpp and cpp2 source files.
Or ReScript to OCaml
Or Gleam to Erlang
1. Very slow compilation.
2. Poor encapsulation, adding private functions requires recompiling all dependents, see (1).
3. Comically huge symbols make debugging much harder than it needs to be -- today gdb OOM'd my 16GB laptop when trying to form a backtrace of a typical QT application coredump.
Unfortunately it doesn't seem like cppfront can fix these issues. It may still be a worthwhile effort in other respects, of course.
Although Qt is not a tiny framework, and I don't really know if modern C++ tools are really good enough for this sort of problem, since C++11 through 20 probably caused those tools to explode in memory consumption.
But I am not surprised at all. I remember around 2013, I would use bullet physics and the Ogre3D engine, and I had to tell visual C++ to increase its memory capacity because the compiler would refuse to continue.
Of course, the elephant in the room is the C++ preprocessor. I haven't looked too closely into cppfront, but if I had the chance, I'd give the C macro system a bullet.
My favorite "macro" system by far is Zig's comptime, which is beautiful and elegant. Zig code can simply elect to be executed in the compiler instead of at runtime. For example, here's how the print() function compiles in Zig. It's a thing of beauty:
https://ziglang.org/documentation/master/#Case-Study-print-i...
During release builds sure, optimize away with LTO and whatever is needed to make it vroom. During development, waiting several minutes for a minor private function update is just absurd.
All of them will incur a performance penalty — at runtime, at start-up time, or in the compiler/linker — plus memory blowouts. But they will solve the recompilation problem.
There are workarounds like pimpl (a.k.a. C-style encapsulation), but this requires extra boilerplate and indirection. C++ modules might fix it at some point, but after 35 years of not having them, most real-life C++ codebases aren't set up that way and may never be.
Even worse, this dependency is transitive: the dependencies needed to define those private methods and fields are exposed too, forcing their headers on every user of the class, even though they're only implementation details.
>> under what circumstances does (2) hold?
To add a private member variable or function, you need to put it in the class definition in the header file. Then anything that includes the header needs to be recompiled.
It was a few years later, when I read the spec for std::launder, that I realised C++ was not really designed to be understood.
It's a shame because it's actually a rather nice language in some ways. Here's hoping that this project or something similar takes off and separates the good bits from the bad.
I have great hope that Herb can create with his cppfront project “The Very Best of C++” to carry that tremendous legacy forward.
If I were to throw my hat in for a “C++ successor”, it would be https://www.hylo-lang.org/ with its “all the safeties” and “tell you when you’re doing it sub-optimally” approach.
But I don't find it promising that after apologising in 2023 for missing their self-imposed 2022 deadline to ship something that works and other people can use, in Q2 2024 it doesn't look like their new 2023 roadmap got done either. Maybe they're going to eventually deliver this amazing thing. Maybe they're just going to learn some lessons (probably for the Swift community) and never ship Hylo per se. Certainly 2025 "Take over the world" looks... ambitious with nine months left to do all the stuff left from 2023 and all the work described for 2024 on top.
Here's the source of C++'s vector class:
https://gcc.gnu.org/onlinedocs/gcc-4.6.2/libstdc++/api/a0111...
In comparison, Vec in Rust. (Note you need to scroll down a few pages to start seeing non-trivial functions; there are a lot of block comments.):
https://doc.rust-lang.org/src/alloc/vec/mod.rs.html#398
Or list in Go:
https://cs.opensource.google/go/go/+/master:src/container/li...
To my eye, that C++ code is by far the hardest code to read.
However you're being slightly unfair because Rust's Vec is just defined (opaquely) as a RawVec plus a length value, so let's link RawVec, https://doc.rust-lang.org/src/alloc/raw_vec.rs.html -- RawVec is the part responsible for the messy problem of how to actually implement the growable array type.
Still, the existence of three C++ standard libraries with slightly different (or sometimes hugely different) quality of implementation means good C++ code can't depend on much beyond what the ISO document promises, and yet it must guard against the nonsense inflicted by all three and by shortcomings of the larger language. In particular, everything must use the reserved prefix so that it's not smashed inadvertently by a macro, and lots of weird C++ idioms that preserve performance by sacrificing clarity of implementation are needed, even where you'd ordinarily make that sacrifice to get the development-throughput win of everybody knowing what's going on. For example, you'll see a lot of "pair" types brought into existence purely to squirrel away a zero-sized type — which can't exist in C++ — via the Empty Base Optimisation. In Rust the language has ZSTs, so they can just write what they meant.
Should it, though? There are a million ways to learn C++, and reading the std code definitely isn't one — technically the std could be entirely compiler builtins. If you want to read positive examples, try A Tour of C++, 3rd edition (https://www.amazon.ca/Tour-C-Bjarne-Stroustrup/dp/0136816487).