> C++ is often described as complex, hard to learn, and unsafe. That reputation is undeserved. The language itself is not unsafe. On the contrary: it is precise, honest, and consistent. What is unsafe is how it is used if it is misunderstood or if one remains in old patterns.
I think this take needs to stop. It's a longer way of saying "skill issue". Meanwhile, decades of industry experience have shown that the governing principles of even modern C++ make it incredibly hard (expensive) to deliver high-quality software. Not impossible - there are plenty of examples - but unreasonably hard.
C++ is fundamentally unsafe, because that’s how the language works, and if you think otherwise, you don’t know C++. There are patterns and paradigms that people use to limit the risk (and the size of the impact crater), and that’s helpful, but usually very difficult to get right if you also want any of the benefits of using C++ in the first place.
Certain people will disagree, but I surmise that they haven’t actually tried any alternative. Instead they are high on the feeling of having finally grokked C++, which is no small feat, and I know because I’ve been there. But we have to stop making excuses. The range of problems where C++ is unequivocally the superior solution is getting smaller.
> In my streams I showed how RAII can thereby also be applied to database operations: a connection exists as long as it is in scope.
Only if that connection object doesn’t support move — we’re 12 years of C++ standards past the arrival of move, and it still leaves its source in an indeterminate state.
> With std::variant, C++ gained a tool that allows dynamic states without giving up type safety.
variant is not quite type-safe:
https://en.cppreference.com/w/cpp/utility/variant/valueless_...
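For anyone who hasn't hit it: a variant becomes valueless when a type-changing assignment throws partway through. A minimal sketch (the Throws type is made up for illustration):

    #include <iostream>
    #include <stdexcept>
    #include <variant>

    struct Throws {
        Throws() = default;
        Throws(const Throws&) { throw std::runtime_error("boom"); }
    };

    int main() {
        std::variant<int, Throws> v = 42;
        try {
            v = Throws{};  // the int is destroyed, then the copy throws
        } catch (const std::runtime_error&) {}
        // v now holds neither alternative; std::visit on it would throw.
        std::cout << std::boolalpha << v.valueless_by_exception() << '\n';  // true
    }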
> With C++20, std::ranges decisively extends this principle: it integrates the notions of iterator, container, and algorithm into a unified model that combines type safety and readability.
Ranges may be type-safe, but they’re not safe. Like string_view, a range is a reference, and the language does not help ensure that the referent remains valid.
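The classic demonstration, using nothing beyond the standard library:

    #include <iostream>
    #include <string>
    #include <string_view>

    std::string_view first_word(const std::string& s) {
        return std::string_view{s}.substr(0, s.find(' '));
    }

    int main() {
        // The temporary std::string dies at the end of the full expression,
        // but sv still points into its buffer. Compiles cleanly; UB at use.
        std::string_view sv = first_word(std::string("hello world"));
        std::cout << sv << '\n';  // undefined behaviour
    }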
I haven’t watched the streams he referred to, but… I am fairly certain the language itself says no such thing. You may be thinking of the standard library, which states that certain classes of moved-from objects have unspecified state. If you’re writing your own DB connection class, you can define moves to leave the object in whatever state you prefer, or disallow moves.
Admittedly it’s still a weird example IMO, because external factors can sever the connection while the object is in scope.
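If you own the class, you can even make the moved-from state part of its contract. A hypothetical sketch (DbConnection and its handle are made up):

    #include <utility>

    class DbConnection {
        int handle_ = -1;  // -1 is our explicit "disconnected" state
    public:
        explicit DbConnection(int handle) : handle_(handle) {}
        DbConnection(const DbConnection&) = delete;
        DbConnection& operator=(const DbConnection&) = delete;
        // Moved-from objects are well-defined here: simply disconnected.
        // (Or mark these = delete to forbid moves entirely.)
        DbConnection(DbConnection&& other) noexcept
            : handle_(std::exchange(other.handle_, -1)) {}
        DbConnection& operator=(DbConnection&& other) noexcept {
            if (this != &other) {
                close();
                handle_ = std::exchange(other.handle_, -1);
            }
            return *this;
        }
        ~DbConnection() { close(); }
        bool connected() const { return handle_ != -1; }
    private:
        void close() { /* release the handle */ handle_ = -1; }
    };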
How can "decades of experience" show the deficiencies of Modern C++, which was invented in 2011?
If you've worked on a project built entirely on the technologies and principles of modern C++, and found that it caused your software to be low-quality, then by all means share those experiences. C++ has no shortage of problems, there is no doubt. But hand-waving about "decades" of nondescript problems other people have had with older C++ versions is somewhat of a lazy dismissal of the article's central thesis.
Please read the sentence you're quoting again.
So I agree it has its quirks, but the defaults keep changing and improving, so it keeps evolving into something safer by default than before.
No, I don't mean that you must be super-skillful anymore: I mean that with all of that in place, things are much safer by default.
Things keep improving a bit more slowly than we would like (this is design by committee), but steadily.
Unreal Engine is C++ based and plenty of games have used it.
Fundamentally, when it comes to safety, it's either everything or nothing. Rust is by definition unsafe, because it has the "unsafe" keyword. If the programmer has enough discipline not to use unsafe everywhere, he/she has enough discipline to write normal C++ code.
But as far as C++ goes, the main problem is that the syntax still allows C-style pointers and dereferencing for compatibility with C code. Generally, if you stick to using std library constructs and smart pointers for everything, the code becomes very clean. unique_ptr is basically the thing that inspired Rust's ownership semantics, after all.
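A minimal sketch of that style, with a made-up Widget type:

    #include <memory>
    #include <vector>

    struct Widget {
        explicit Widget(int id) : id(id) {}
        int id;
    };

    // Ownership is explicit in the signature: the caller receives sole ownership.
    std::unique_ptr<Widget> make_widget(int id) {
        return std::make_unique<Widget>(id);
    }

    int main() {
        std::vector<std::unique_ptr<Widget>> widgets;
        widgets.push_back(make_widget(1));
        widgets.push_back(make_widget(2));
        // No new/delete anywhere: everything is released when the vector
        // goes out of scope.
    }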
The range of problems where the supposedly superior solutions offer features that are actually superior to those of modern C++ is getting smaller too.
To be clear, I like and continue to use modern C++ daily, but I also use Rust daily, and you cannot really make a straight-faced argument that C++ is catching up. I do think both languages offer a lot that higher-level languages like Go and Python don't, which is why I never venture into those languages, regardless of performance needs.
Honestly though, I keep the tool in my belt because I believe it is still the best for what I use it for: low-latency financial applications and game engines.
If I find some time to migrate from C++ to a different language I may, for certain games, but that's a future bridge to cross.
The notion that just using the new fancy types automatically makes everything memory safe has to stop. std::expected contains either a value or an error. If you call .value() and you're wrong, you get an exception. If you call .error() and you're wrong, you get undefined behaviour. This was added in C++23. Since there's no destructuring you have to call these methods, by the way; just don't make any mistakes with your preconditions!

Regardless, 90% of the memory safety errors I see are temporal. Unless we completely ban references and iterators they will not be going anywhere. Using unique_ptr instead of new does not do anything when you insert into a map while holding a reference to an element.
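To make that asymmetry concrete, a minimal sketch (parse_int is a hypothetical function; needs C++23):

    #include <expected>
    #include <iostream>
    #include <string>

    std::expected<int, std::string> parse_int(const std::string& s) {
        if (s.empty()) return std::unexpected(std::string("empty input"));
        return std::stoi(s);  // throws on garbage; good enough for a sketch
    }

    int main() {
        auto r = parse_int("42");
        // .value() on an error throws std::bad_expected_access: noisy but defined.
        // .error() on a value is a precondition violation: UB in C++23, and only
        // a hardened check since C++26.
        if (r) std::cout << *r << '\n';
        // std::cout << r.error() << '\n';  // UB here, since r holds a value
    }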
Developers also have to be able to make their own things. We can't pretend that absolutely everything we will ever need is bundled up in some perfect library. To write a typesafe application you need to be able to create your own domain specific abstractions, which to me precludes them looking like this:
    template <class ty>
    concept db_result_tuple = requires { typename remove_cvref_t<ty>; }
        && []<class... Es>(std::tuple<Es...>*) {
               return all_result_args_ok_v<Es...>;
           }(static_cast<typename std::add_pointer_t<remove_cvref_t<ty>>>(nullptr));

It was slightly improved in C++26 under hardening:
https://en.cppreference.com/w/cpp/utility/expected/error.htm...
And it was fixed in C++26.
> But all of us—the C++ developers—must go back to school. We must learn C++ anew, not because we have forgotten it, but because through evolution it has become a different language. Only those who understand the modern language constructs can use the new tools properly and unfold the potential of this generation of libraries.
Once you get to that point, you might as well create and learn a different language.
Nope, it's still incredibly valuable to be able to compile two different translation units as C++14 and C++26 and then later link them together (all without leaving the familiar toolchains and ecosystems). That's how big legacy projects can evolve towards better safety incrementally.
This flies in the face of modern principles like building all your C++, from source, at the same time, with the same settings.
Languages like Rust include these settings in symbol names as a hash to prevent these kinds of issues by design. Unless your whole team is a moderate-level language lawyer, you must enforce this by some other means or risk some really gnarly issues.
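The footgun is easy to reproduce: any inline function (or class layout) that depends on build settings violates the ODR as soon as two translation units disagree, and the linker silently keeps one definition. A made-up header sketch:

    // shared.h -- included by a.cpp (built with -DNDEBUG) and b.cpp (built without).
    // The two translation units now contain different definitions of the same
    // inline function: an ODR violation. No error, no warning; the linker just
    // picks one.
    inline int buffer_size() {
    #ifdef NDEBUG
        return 4096;
    #else
        return 64;  // say, smaller buffers so debug runs exercise the resize path
    #endif
    }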
C++ is valuable because the existing tooling enables you to optimize the runtime performance of a program (usually you end up figuring out the best memory layout and utilization).
C++ is valuable because its industry support guarantees codebases live for decades _without the need to modify them_ to track the latest standards.
C++ is valuable because the industry tooling allows you to verify large areas of the program's behaviour at runtime (ASan etc.).
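For instance, a classic use-after-free that AddressSanitizer pinpoints with both the allocation and free stacks:

    // g++ -fsanitize=address -g uaf.cpp && ./a.out
    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        int* p = &v[0];
        v.push_back(4);  // likely reallocates, invalidating p
        return *p;       // ASan reports heap-use-after-free here
    }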
I simply don't understand what type of industrial use this type of theoretical abstraction building serves.
Using the metaprogramming features makes codebases extremely hard to modify, and they don't actually protect you from whole categories of runtime errors. I'm speaking from experience.
I would much rather have a codebase with a bit more boilerplate, a few more unit tests, and a strong integration testing suite.
The longer I use C++ the more I'm convinced something like Orthodox C++ is the best method to approach the language https://bkaradzic.github.io/posts/orthodoxc++/
This keeps the code maintainable and performant (with less effort than metaprogramming-directed C++).
Note: the above is just an opinion, with a very strong YMMV flavour, coming from two decades in CAD, real time graphics and embedded development.
Metaprogramming style in C++20 bears only a loose relationship to that of previous versions. It is now concise and highly maintainable. You can still do metaprogramming in the old painful and verbose way and it will work, but you can largely dispense with that.
It took me a bit to develop the intuitions for idiomatic C++20 because it is significantly different as a language, but once I did there is no way I could go back. The degree of expressiveness and safety it provides is a large leap forward.
Most C++ programmers should probably approach it like a new language with familiar syntax rather than as an incremental update to the standard. You really do need to hold it differently.
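To illustrate, the same constraint written both ways (a toy example):

    #include <concepts>
    #include <type_traits>

    // Pre-C++20: SFINAE via enable_if, noisy to write and noisy to diagnose.
    template <class T,
              typename std::enable_if<std::is_integral<T>::value, int>::type = 0>
    constexpr T twice_old(T x) { return x + x; }

    // C++20: the constraint is part of the signature and of the error message.
    template <std::integral T>
    constexpr T twice(T x) { return x + x; }

    static_assert(twice_old(21) == 42);
    static_assert(twice(21) == 42);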
This was my takeaway as well when I revisited it a few years ago. It's a very different, and IMO vastly improved, language compared to when I first used it decades ago.
What makes C++ valuable is being a TypeScript for C, born in the same UNIX and Bell Labs farm (so to speak), allowing me to tap into the same ecosystem while enjoying the high-level abstractions of programming languages like Smalltalk, Lisp, or even Haskell.
Thus I can program on MS-DOS limited to 640 KB, an ESP32, an Arduino, a CUDA card, or a distributed system cluster with TBs of memory, selecting which parts are more convenient for the specific application.
Naturally, in 2025 I would like to be able to exercise such workflows with a compiled managed language instead of C++; however, I keep being in the minority, thus language XYZ + C++ it is.
This is true for MANY other languages too; I don't see how this makes C++ different. With gdb it's quite the opposite: handling C++ types in gdb can be a nightmare, and you either develop your own gdb glue code or write C-like C++.
> C++ is valuable because its industry support guarantees codebases live for decades _without the need to modify them_ to track the latest standards.
In times of constant security updates (see the EU's CRA or equivalent standards in the US) you always gotta update your environment, which often also means updating tooling etc., if you don't wanna start maintaining a super-custom ecosystem.
I don't see this as a positive in general; there is bit rot, and software that is stuck in the past is generally not a good sign imo.
> C++ is valuable because the industry tooling allows you to verify large areas of the program's behaviour at runtime (ASan etc.).
Sanitizers are not C++-exclusive either, and with Rust or C# you almost never need them, for example. Yes, C++ has extensive debugging tools, but a big part of that is because the language has very few safeguards, which naturally leads to a lot of crashes etc.
I think the idea of using only a small subset of C++ is interesting, but it ignores a problem many people have: you don't have time to implement your own STL, so you just use the STL. Of course it gives you more control, but I'd argue that most of the time writing Orthodox C++ won't save time even in the long run. It will save you headaches and cursing about C++ being super complicated, but in modern environments you will just reinvent the wheel a lot and run into problems the STL has already solved.
That's why it's better to use lldb and its scripts.
> I think the idea of using only a small subset of C++ is interesting, but it ignores a problem many people have: you don't have time to implement your own STL, so you just use the STL.
Yeah, agree. It's just much easier to take a "framework" (or frameworks) where all the main problems are solved: convenient parallelism mechanisms, scheduler, reactor, memory handling, etc. So it turns out you're kinda writing in your own ecosystem that's not really different from another language, just with C++ syntax.
I can imagine it might be insanely faster to compile
I think you're arguing from a position of willful ignorance. The article is clear that it lauds C++'s std::println, not printf.
http://en.cppreference.com/w/cpp/io/println.html
Here's what the article argues:
> With std::format, C++ has gained a modern, powerful, and safe formatting system that ends the classic, error‑prone printf mechanisms. std::format is not merely convenient but fully type‑safe: the compiler checks that placeholders and data types match.
Solid remark, and there is consensus that std::println and std::format are an important improvement over std::cout or C's printf.
Compare to <iostream>, which is stateful and slow.
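A minimal sketch of what the checked formatting buys (std::println needs C++23; std::format alone is C++20):

    #include <format>
    #include <print>

    int main() {
        // Format strings are checked at compile time: a missing or mismatched
        // argument is a compile error, not printf-style runtime UB.
        std::println("hello {}, the answer is {}", "world", 42);
        // std::println("{} {}", 1);  // does not compile: not enough arguments
        std::println("{}", std::format("{:>8.2f}", 3.14159));  // "    3.14"
    }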
There's also std::format, which might be safe and flexible and have some of the advantages of printf. But I can't use it at any of the places I'm working since it's C++20. It probably also uses a lot of template and constexpr madness, so I assume it leads to longer compilation times and hard-to-debug problems.
Sure, there are FreePascal and Lazarus; sadly they don't get enough love.
Somebody really needs to rethink the entire commitment to meta-programming. I had some hope that concepts would improve reporting, but they seem to actually make it worse, and if they improve compile times at all, I'm not seeing it.
And it has nothing to do with historicity. Every time I visit another modern language (or use it seriously) I am constantly reminded that C++ compile times are simply horrible, and a huge impediment to productivity.
The whole point of a programming language is to be an industrial productivity tool that is faster to use than hand writing assembly.
Performance is a core requirement for industrial tools. It's totally fine to have slow compilers in R&D and academia.
In industry a slow compiler is an inexcusable pathology. Now, it may be that the pathology can't be fixed, but not recognizing it as a pathology - and worse, inventing excuses for it - implies the writer is not really industrially minded. Which makes me very worried about why they are commenting on an industrial language.
However too many folks are stuck in the UNIX command line compiler mindset.
I keep bumping into people that have no idea about the IDE-based compilation workflows from C++ Builder and Visual C++: their multithreaded compilation, incremental compilation and linking, pre-compiled headers that actually work, hot code reloading, and many other improvements.
Or the CERN C++ interpreters for that matter.
Many don't seem to ever have ventured beyond calling gcc or clang with Makefiles, and nothing else.
This never changed.
In the past, hacking was exploiting human errors in writing faulty code. These days, it's pretty much the same thing, except the faulty code isn't things like buffer overflows due to no bounds checking, but higher-level faulty software with things like password reuse, no two-factor authentication, and so on.
- https://stallman.org/articles/on-hacking.html
I also think that named parameters would go a long way toward improving the language.
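Until then, C++20 designated initializers on an options struct are the closest approximation (a made-up example):

    struct WindowOpts {
        int  width      = 800;
        int  height     = 600;
        bool fullscreen = false;
    };

    void open_window(const WindowOpts&) { /* ... */ }

    int main() {
        // Reads almost like named parameters, defaults included.
        open_window({.width = 1920, .height = 1080, .fullscreen = true});
    }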
Lastly, explore some way to make a breaking change with "old C++" possible.