It makes you exceptionally paranoid about failure states and practically requires a bit of thought and planning before attempting any non-trivial change.
The mindset of "it's fine to ignore all error conditions and let the default exception handler print a stack trace to the user" results in software that is annoying to the user.
That is the reason it should be used as a learning tool: so that you know the nitty-gritty details without anyone "managing" them away from you.
Like running in a weight vest.
Edit: ah, perhaps you meant that, in addition to using raw C, one should also learn how to use such static analyzers and the like
The "purpose" here is to instill a sense of paranoia around error handling, so that the resulting program from the PoV of the user appears to handle whatever bizarre combination of inputs the user put in.
It's the difference between telling the user "Failed to open foo.txt (file not found). Do you want to create it?" and ending the program with a 100-line stack trace with "FileNotFoundException" buried somewhere in there.
As the article says, checked exceptions are not the solution here.
I've literally never seen, in a professional working environment, code in exception languages (Java, C#, Python, etc.) actually check that the file it tried to open was actually opened and, if it wasn't, direct the user with a sensible message that allowed the user/operator to fix the problem.
In C, the very first time that you fail to check that `fopen` returned non-NULL, the program crashes. Then you check if it returned NULL, and need to put something in the handling code, so you look at what `errno` has, or convert `errno` to a string.
I will bet good money that you could grab the nearest C#/Java/Python/JS/etc engineer to you, ask them to find the most recent code they wrote that opened and read/wrote a file, and you'll find that there is literally no code to direct the user if the file-open failed. The default runtime handler steps in and vomits a stack trace onto the screen.
In C, you are forced to perform the NULL-check, or crash. Sure, many devs are simply going to have a no-op code path for the error case, writing `if ((inf = fopen(...)) != NULL) { DoSomethingWith(inf); }`, but handling only the success path is a code smell that is easy to spot visually.
The exception languages make it virtually impossible to spot the code smell of unhandled (or improperly handled) exceptions, and make it easy to ignore them, because the dev can just read the stack trace, create the file that was needed, and proceed with programming the rest of the app.
What a good program must do when a file open failure is encountered is direct the user in some way that they can fix the problem. For example "file doesn't exist. Create it [button], Choose a file [button]", or "Permission denied. Try running as a different user.", or "File is locked. Are you already running $PROGRAM?", or "$FOO is a directory. Specify a file.".
[EDIT: Yes, seeing a stack trace in a shipped product is one of my personal bugbears that I feel very strongly about. If it's a stack trace for an expected error (like failure to open/read/write a file), I absolutely do get annoyed by this public display of laziness. And yes, this is one of those hills I'll die on!]
Even the simplest command-line programs annoy me no end when the application simply dumps a stack trace to the screen. Sure, I can dig into it, but the average user is going to ask for help on Stack Overflow just to figure out what must be done to fix the error.
I believe languages like Pascal and Python are good for an introduction to algorithmics. After that, I agree you need to dive into languages such as C, Rust, and why not assembly language, to get a better understanding of your machine.
In our case, after one year of coding in Pascal, we spent the remaining years focusing on C and C++ (Builder).
After that, it's time to move to pragmatic choices (job-market requirements in terms of development stack).
It was not a problem for me as I was learning Turbo C and Visual C++ 5/6.0 a year or two beforehand.
Everyone else in the class, though, was sooooo frustrated with their "89 errors, 103 warnings", all because they forgot to add a semicolon in the code.
Truth is, they were not getting anywhere near understanding how to write code, or caring about its quality, which is what leads to proper planning, etc. They would just keep changing something to reduce the errors/warnings.
Personally, I think every person has their own journey into the world of programming. For me, I was happy for it to be C, with a bit of Pascal and Visual Basic. For someone else, perhaps Scheme and Javascript. Another maybe Java.
Some developers/programmers, in my opinion, are just not for C. Controversial, I know.
How is that better for developing a sense of paranoia around error states?
"Throw it at the wall and see what sticks" does not exactly lead to "extreme paranoia managing errors".
Today developers don't build that skill, so you see applications that just fail silently everywhere or produce nonsense errors. The better the developer tools you have, the less your developers will need to learn UX skills to be able to do their work.
If you run this:

    use std::fs::File;

    pub fn main() {
        let fp = File::open("test").unwrap();
    }

You get this:

    thread 'main' panicked at /app/example.rs:11:33:
    called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "No such file or directory" }
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
The language incentivizes you to propagate errors up to main and then just let it fail.
The consequence of doing something incorrect or nonportable is sometimes that the expected behavior occurs anyway. This can be validated by testing on the couple of platforms (or just one) that the program supports, and kept working.
Another thing we need to consider is that reliable is not the same thing as robust, let alone secure. A program that appears reliable to a user who uses it "as directed", giving it the expected kinds of inputs, will not necessarily appear reliable to a tester looking for ways to break it, or to a cracker trying to exploit it.
A truly reliable program not only handles all its functional use cases according to its requirements, but is impervious to deliberate misuse, no matter how clever.
Security flaws are found in reliable, well-debugged programs used in production by millions.
A practical example of this is word-size issues: a program that casts pointers to ints everywhere is perfectly reliable on 32-bit machines, but it will die horribly on any LP64 machine, which covers most 64-bit machines. Related are endianness issues, which is why projects have tended to stop supporting big-endian systems: they're just too rare to scrounge up anymore, and unless you're actively testing on them, bugs can slip in which will not be caught on little-endian hardware.
Similarly, OS developers stop supporting architectures when they can no longer find working examples of them. This is because emulators have bugs, and without a source of truth (working hardware) it's very hard to determine if a bug you just found is in the OS or the emulator; add unreliable hardware to that and things just get worse. Bob Supnik (former DEC VP, creator of SimH) has a PDF:
You can do the same thing with tagged unions to implement a poor man's sum types. It is significantly more verbose than in a language that has syntactic support for this, but you get similar compile time safety guarantees.
Completely opposite experience here. C is great for explorative coding because it's just structs and functions.
There's no agonizing about whether some piece of code should go here or better there, wrapped in this or that concept or high level abstraction.
Instead you start with an empty screen and incrementally build your ideas from small building blocks, much like in Lisp or Forth.
Completely disagree. The lack of screwing around selecting abstractions forces you to make something productive right away and not stress about refactoring.
Retrospective: https://cacm.acm.org/opinion/retrospective-an-axiomatic-basi...
Here is a pdf of the retrospective along with the original paper : https://harrymoreno.com/assets/greatPapersInCompSci/2.2_-_An...
How Did Software Get So Reliable Without Proof? C.A.R. Hoare 1996 (?) http://users.csc.calpoly.edu/~gfisher/classes/509/handouts/h...
That's something I can understand, because when I wanted to buy a motorbike I was advised to ride a bicycle first, since it's more difficult to control. Except that the White House recently called for companies not to use non-memory-safe languages such as C to build software.
Just sensationalist journalism.
Interpreting this as "stop using C/C++" isn't much of a stretch. Yes, it is not a demand. Anticipating such a demand isn't a bad bet, however.
Who is an authority, anyhow? The White House is citing NIST, DHS, Microsoft, Cambridge DSCT, Google and others. Whom do you offer?
I don't like this myself. We're rapidly building tools that could conceivably solve memory safety in C/C++ code bases. I don't want C pilloried by Authority and its group thinking ways.
https://stackoverflow.blog/2024/03/04/in-rust-we-trust-white...
In 6 hours, I will share the link to the official and related White House PDF document.
What language are you talking about? Go to godbolt and try that with any of the compilers there for C or C++.
It is also hard to handle errors more meaningfully than instantly terminating the process at the first whiff of something going sideways.
And once you do write such a thing, try making automated tests to exercise it!!
How many programs actually check the return value of close() ?
Sure, this sounds a bit Linux/POSIX specific. There are only a few billion devices running such code, perhaps I’m overreacting…
I think a critical difference is that in C the program is more liable to simply crash if errors aren't correctly handled, whereas in Java/Python/etc the program can just log a stack trace and keep on truckin', even if the bug is actually quite severe. In some cases a crash is preferable - e.g. if something goes wrong in a text editor while saving data, it's a lot better for the user if the program crashes versus the alternative where the editor runs as normal but saving doesn't work. Crashes in C also bring more urgency for developers to actually fix the bug compared to a try/catch in Python that simply buries it "until I get a chance to debug it properly." (But crashing also leads to a lot of frustration when the error wasn't that important and the C program should have just kept going.)
Exceptions are exceptionally good at error handling - they always do the correct default (bubbling up if not handled, bringing a stacktrace with them, and by default they auto-unwrap the correct return value, not making the actual business logic hard to decipher), plus they make error handling possible on as wide scope as needed (try block vs a single return value).
I absolutely fail to see how a “random” C program would fare better; I’m sure the errno state is not checked at every line, as it is a trivial human error to leave that out. You can’t forget exceptions, which is how it should be!
If anything, that java/python text editor will catch every exception at the top level, save the file and exit with an error message and it will be the C program that either randomly crashes, or ignores some exceptional state.
One bug took several months to track down. This is cognitive dissonance at its finest.
2) C programs, at least the ones we use now, are a product of a lot of use and debugging
3) It didn't take too many debugging sessions as a C programmer to learn to program a bit more carefully.
4) and the more it gets used, the more error codes it encounters, and the more robust the handling gets. I think a dirty secret of software engineering isn't that the most complicated/heavily-used code gets the most (and most useful) comments; it's that it also gets the most error-handling/detection code, and for the vast majority of non-core-loop code, error handling... isn't there.
No. Original programmers that just happened to use C were better, not the other way around.
So yes. C programmers were better.