Even in simple example code like this you can forget a check. In this case, result would be undefined if any call to divide failed.
I'd much rather have my program blow up with a readable stack trace pointing to where the failure happened than have it carry on with an essentially random value and then maybe blow up somewhere totally unrelated, or worse, destroy user data.
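Roughly the failure mode I mean, as a sketch (the article's actual divide isn't reproduced here; I'm assuming the usual in-band -1 error code):

```cpp
#include <cassert>

// Hypothetical divide() in the style of the linked article: returns -1 on
// error instead of a quotient, so a caller that forgets to check can't tell
// an error apart from a legitimate result of -1.
int divide(int a, int b) {
    if (b == 0) return -1;  // error signal, easily forgotten
    return a / b;
}

// A caller that forgets the check: when b == 0 this happily propagates
// the in-band error value onward as if it were data.
int average_rate(int total, int count) {
    int rate = divide(total, count);  // no check -- bug waiting to happen
    return rate * 100;                // garbage in, garbage out
}
```

With count == 0 the function quietly returns -100 and the program sails on, which is exactly the "random value" scenario above.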
You don't need exceptions to get a call stack: you can assert that values are valid and force a crash and call-stack dump when they're not.
On top of that, you can compile the asserts out for release builds if you're confident they won't be hit.
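A minimal sketch of that pattern; defining NDEBUG (usually via -DNDEBUG in the release build) compiles the checks out entirely:

```cpp
// Pass -DNDEBUG to the release build to compile the assert out.
#include <cassert>

int divide_checked(int a, int b) {
    // In debug builds this aborts with a core dump / backtrace on b == 0;
    // in NDEBUG builds it costs nothing.
    assert(b != 0 && "divide by zero");
    return a / b;
}
```

The string in the assert condition is a common trick to get a readable message into the abort output.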
Memory protection catches a lot of out-of-bounds memory references pretty well, and if you enable core dumps, you can extract a neat backtrace from the core file (provided your routines fail fast on invalid arguments). Moreover, some compilers can be instructed to instrument your code and data, including the stack, with guard values meant to trip the process if it accesses the wrong memory region. GNU malloc does some of this guarding if you set $MALLOC_CHECK_.
If you aren't worried about vendor lock-in, you can use GCC's __attribute__((warn_unused_result)) [1].
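A quick sketch of how that looks in practice (GCC/Clang only; compile with -Wall, or -Werror to make ignoring the result a hard error):

```cpp
// GCC-specific: the attribute makes the compiler warn whenever a caller
// discards the return value of divide().
__attribute__((warn_unused_result))
int divide(int a, int b, int *out) {
    if (b == 0) return -1;  // error code the caller is now forced to notice
    *out = a / b;
    return 0;
}

// int r;
// divide(10, 2, &r);               // warning: ignoring return value
// if (divide(10, 2, &r) != 0) ...  // fine: result is checked
```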
--
[1] http://sourcefrog.net/weblog/software/languages/C/warn-unuse...
And even if I did: consider the faulty main() in the linked article. How would you use assert() there to make sure that result, as used after the call to foo(), is actually usable? If foo() returns -1 (because any of the calls to divide returned -1), then result is undefined.
Exception-safe RAII is pretty much an all-or-nothing affair.
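To illustrate why it's all-or-nothing, here's a sketch (the types and paths are made up for the example): RAII cleans up correctly during unwinding, but only if every frame on the unwind path uses it.

```cpp
#include <cstdio>
#include <stdexcept>

// RAII: the handle is released in the destructor, so an exception
// propagating through any caller cannot leak it...
struct file_handle {
    std::FILE *f;
    explicit file_handle(const char *path) : f(std::fopen(path, "r")) {
        if (!f) throw std::runtime_error("open failed");
    }
    ~file_handle() { if (f) std::fclose(f); }
};

// ...but one frame written the old way breaks the guarantee. If anything
// between the fopen and fclose throws, the fclose is skipped and the
// handle leaks -- hence "all or nothing".
void leaky_frame() {
    std::FILE *f = std::fopen("/etc/hostname", "r");  // raw resource
    // might_throw();  // an exception here would leak f
    if (f) std::fclose(f);
}
```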
Some folks have decided to get off the C++ feature treadmill and go back to, well... getting things done with solid languages (e.g. C) instead of learning about the latest C++ non-solutions to non-problems.
Yes, that's my concern with exceptions as well. The Java model (if I remember it right -- it's been a decade), which requires a method to either handle an exception that a sub-method throws or explicitly declare that it may be thrown, seems preferable and would help people avoid accidentally ignoring an exception.
I'd like to see support for exceptions like this in C++0x, but I haven't bothered to check whether it's there...
Checked exceptions are a hotly debated topic, and I think the world has finally come around to deciding that they are a bad idea overall. Google it and see for yourself.
Ironic that C++ is moving towards that as the Java community is moving away (by relying more on RuntimeException, which isn't checked).
If an exception is being thrown, then something is wrong; if nothing is wrong, then you implemented your exceptions incorrectly, as exceptions shouldn't occur in normal program flow.
So to recap, you're writing a crap ton more code just so you can return your error code _slightly_ faster than it would take to throw an exception. You're optimising your failure cases, which (in the _vast_ majority of cases) is UTTERLY ABSURD.
Example: a listen loop which handles disconnections through exceptions. This isn't stupid, but it's not very efficient.
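Something like this sketch, with a stand-in connection type (the real case would be a socket, obviously):

```cpp
#include <stdexcept>
#include <string>

// Exception reserved for the *uncommon* case: the peer going away.
struct disconnected : std::runtime_error {
    disconnected() : std::runtime_error("peer disconnected") {}
};

// Stand-in for a real socket: yields `remaining` messages, then disconnects.
struct connection {
    int remaining;
    std::string read() {
        if (remaining-- <= 0) throw disconnected{};
        return "msg";
    }
};

// The hot loop stays free of per-message error checks; the rare
// disconnect exits via the (slower, but rarely taken) exception path.
int serve(connection &c) {
    int handled = 0;
    try {
        for (;;) { c.read(); ++handled; }
    } catch (const disconnected &) {
        // clean up, drop the connection
    }
    return handled;
}
```

Whether the disconnect is rare enough to justify this is exactly the judgment call being argued about above.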
You are correct, I was somewhat disingenuous with _slightly_ faster. It is a lot faster, but a lot faster in error cases, which from a philosophical angle is still absurd.
As long as you use your exceptions for "bad shit" (uncommon error conditions or completely unexpected failures or returns) then I still strongly believe that the performance comparison is silly.
According to the article there are two methods used to implement exceptions in C++: one with higher overhead when you actually throw an exception ("zero-cost" tables) and one with higher overhead on every call to a function that might throw (setjmp/longjmp).
Unfortunately the author didn't go over the latter method, which would have been more interesting.
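For the curious, here's roughly what the setjmp/longjmp style looks like when done by hand (a sketch of the mechanism only; a real compiler runtime registers cleanups per frame, and hand-rolling it like this skips C++ destructors, so it's only safe across plain-C-style code):

```cpp
#include <csetjmp>

// The "try" is a setjmp() whose bookkeeping cost is paid on every entry;
// the "throw" is a longjmp() back to it.
static std::jmp_buf on_error;

int divide_or_longjmp(int a, int b) {
    if (b == 0) std::longjmp(on_error, 1);  // "throw"
    return a / b;
}

int try_divide(int a, int b, int fallback) {
    if (setjmp(on_error)) return fallback;  // "catch"
    return divide_or_longjmp(a, b);
}
```

This is where the "overhead on every call" comes from: the saved-register snapshot happens whether or not anything ever fails.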
The holy grail would be some method of ensuring the code cannot fail, e.g. weirdly constrained argument semantics, thus separating algorithm from constraints instead of shuffling them together on the page like a deck of cards.
But seriously, this is a large part of the power of C++'s type system. Taking the article's example, if the argument types were of (user class) 'non_zero_float', there's no possibility for error.
You still have to check that your input is non-zero at some point, but you've now focused it into one place (the 'non_zero_float' class ctor), and other chunks of your program depending on those type semantics no longer need to worry about it.
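A minimal sketch of such a 'non_zero_float' (the name is the commenter's; the details are my assumption):

```cpp
#include <stdexcept>

// The invariant is checked exactly once, in the constructor; every
// function taking the type can then rely on it.
class non_zero_float {
    float v_;
public:
    explicit non_zero_float(float v) : v_(v) {
        if (v == 0.0f) throw std::invalid_argument("non_zero_float: zero");
    }
    float get() const { return v_; }
};

// No zero-check needed here: the type guarantees d != 0.
float divide(float n, non_zero_float d) { return n / d.get(); }
```

The check hasn't disappeared, it has moved to the one place where a raw float is converted into the constrained type.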
It would be better to have some way of getting the compiler to optimize constraints, perhaps by proving at compile time that the error is impossible.
Shouldn't the advice of this article just be "use exceptions"?
Binary size (or maybe more accurately in this case, binary code layout) can be highly relevant for speed due to the instruction cache.
As for ease of development, there are issues with C++ exceptions regarding this as well: some C++ libraries aren't exception safe, and neither are practically all C libraries. This is something you need to worry about whenever you pass a function pointer into a library, as there might be an exception-unsafe function higher up in the stack. Propagating an exception up through it is potentially extremely dangerous.
That said, using exceptions can still be a good idea, especially if your code doesn't need to be portable or if you know the platforms in advance, and you are careful about passing around function pointers. All you need to do is ensure that any of your code that might be called from third-party code with questionable exception semantics won't throw or propagate any exceptions, e.g. by installing a catch-all exception handler in it.
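The usual shape of that boundary guard, sketched with a made-up callback signature (the real signature depends on the C library in question):

```cpp
#include <cstdio>

// Code invoked from an exception-unsafe C library via a callback must not
// let exceptions escape, so wrap the body in a catch-all and translate
// failures into an error code the library understands.
extern "C" int my_callback(void *user_data) {
    (void)user_data;
    try {
        // ... real work that may throw ...
        return 0;   // success code for the C library
    } catch (...) {
        // Propagating through the library's C frames would be anywhere
        // from silently broken to fatal, so stop the exception here.
        std::fprintf(stderr, "callback failed\n");
        return -1;  // error code for the C library
    }
}
```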
I think the C++ implementation of exceptions has a lot to answer for though, in poisoning too many developers on the concept. It really is an awful implementation.
From the Gentoo wiki: "-Os is very useful for large applications, like Firefox, as it will reduce load time, memory usage, cache misses, disk usage etc. Code compiled with -Os can be faster than -O2 or -O3 because of this. It's also recommended for older computers with a low amount of RAM, disk space or cache on the CPU. But beware that -Os is not as well tested as -O2 and might trigger compiler bugs."
I believe Apple compiles a lot (or all?) of their stuff with -Os.
Anyway C++ exceptions are awful. ;)
Size matters a lot because hard drives are stinkin' snails compared to CPU and RAM. All that stuff needs to be loaded from somewhere, and while SSDs have changed the picture a bit, there's still a major gap between storage and memory.
My 486 built in 1993 had 8 KB cache, 4 MB RAM, and a 120 MB HD.
My desktop built in 2009 has 2 MB cache, 2 GB RAM, and a 250 GB HD.
Okay, the cache has lagged behind by one or three doublings compared to the other storage types. But that's still pretty close to proportional in a world of exponential gains.
If you are going for ultra-high performance, do you even have error-checking? Do you write it in assembler?