I'd like to add that "growth over time" also solves the "No Silver Bullet"[1] paradox: while Brooks's analysis[2] seems obviously correct, we also obviously have code that seems orders of magnitude larger than it has any business being[3].
As far as I can tell, Brooks doesn't consider cumulative effects of growth over time, but only a single iteration of analysis → development. So a 20% difference in efficiency is just a 20% difference in outcome. However, that same 20% inefficiency per iteration yields a 40x difference in outcome over 20 years. Compound interest.
[1] https://en.wikipedia.org/wiki/No_Silver_Bullet
[2] "there is no single development, in either technology or management technique, which by itself promises even one order of magnitude [tenfold] improvement within a decade in productivity, in reliability, in simplicity."
[3] MS Office is >400MLOC. Xerox PARC had "personal computing" in ~20KLOC. Even assuming MS Office is 100x better at "officing", that leaves 99.5% of the code unaccounted for. Similar analyses can be done for web browsers (WWW.app, 5KLOC; Mozilla, 15MLOC), other operating systems, etc.
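The compounding arithmetic above is easy to check with a one-liner (assuming a 1.2x factor applied over 20 yearly iterations):

```shell
# 20% efficiency compounded over 20 iterations: 1.2^20 ≈ 38, i.e. roughly 40x
awk 'BEGIN { x = 1; for (i = 0; i < 20; i++) x *= 1.2; printf "%.1f\n", x }'
# prints 38.3
```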
The inflationary phenomenon described here is more about incremental decorative and accidental complexity. The root cause is not the human enterprising drive to boldly go where no man has gone before.
Looking at the examples of adding headers and command-line bells and whistles, I wonder what human trait drives this. Those seem like (product-)management and architecture/framework-type decisions, possibly driven by a need to have things uniform and under control.
Moore's law is tiring (it is more obvious on the data-center side than in devices, yes). Will we become more frugal again?
We could argue: is it half a fold? Is it two folds? Depends on the day.
We're about to have Android running on toasters. And I can't stop myself from asking the question - why?
I would disagree with that on so many points. Today, more than ever we have massive differences. "Hey Siri/Google", the camera functionality is just incomparable, the maps, ...
In the case of consumer white goods, the business case is that expensive mechanical components and security mechanisms are replaced by electronic ones that are cheaper. And indeed, adjusting for inflation, today's white goods are far cheaper than they ever were. This is happening in power adapters, but also in washing machines and kettles. It means that half the components only exist in a virtual sense, and you'd need half the design, a plastic-molding factory, and a master's degree to have any hope in hell of fixing them. But they're 1/10th to 1/5th of your monthly pay and last 2-10 years, so why bother?
But the story is the same at a high level for everything from cell phone radios to motor controllers for washing machines. Virtual components, simulated in microcontrollers are far cheaper (and far less repairable) than a real component ever will be.
Also, I don't feel like the price of appliances was dropping over my lifetime, so I have to ask - where do those apparent savings go? They're definitely not being passed on to consumers.
> I would disagree with that on so many points. Today, more than ever we have massive differences. "Hey Siri/Google", the camera functionality is just incomparable, the maps, ...
I'll grant you the camera, because chips and algorithms do get better. Siri/Google doesn't really feel like that much of an achievement over what was possible 10 years ago, except that nobody tried to build that product then, and smartphones weren't exactly popular. As for maps, I'll only point to the Google Maps application, which has been steadily degrading in quality and functionality for the past 5+ years...
2-10 years is an extremely short lifespan for white goods; 20-30 years is more like it, and 40-50 years is not uncommon.
> In the case of consumer white goods the business case is that expensive mechanical components and security mechanisms are replaced by electronic ones that are cheaper.
It's not just that; even mechanical components are deliberately made weaker to use less material and thus cost less. Sometimes manufacturers push that boundary a little too far, Samsung's "exploding" washing machines being one of the latest examples.
If you can run Android on it, you can increase the size of your hiring pool a hundredfold. That matters a lot for big companies trying to ship products at scale.
Of course I don't get why a toaster or a fridge really needs a CPU at all, but that's another matter.
Phillip (time traveler from 1984): What on earth can a person do with 4 gigabytes of RAM?
Martin (from 2012): Upgrade it immediately.
But on the other hand, some of what might be perceived as bloat is really useful, even necessary, to someone. For example, show me a "light" desktop operating system, Unix desktop environment, or web browser, and I'll probably tell you that many of my friends couldn't use it, because they're blind. (As for me, I'm legally blind, but I have enough sight to read a screen up close, so I don't need a screen reader, but I often use one.) Accessibility requires extra code. In Windows 10, UIAutomationCore.dll is about 1.3 MB, and will no doubt get bigger as Microsoft continues to improve the accessibility of Windows. But you can't write that off as bloat.
Elsewhere on this thread, there was some discussion of microcontroller software versus Android. A user interface written for a microcontroller with a screen is inaccessible to a blind person, except by memorizing a sequence of button presses, if the UI is simple enough and the device has physical buttons in the first place. But if the device is running Android, the infrastructure for accessibility is there; just add a text-to-speech engine and TalkBack (edit: and make sure the application follows accessibility guidelines). That ain't gonna fit in 1 MB or less of flash and 512K or less of SRAM. So sometimes we may be inclined to rant about bloat, but there's actual progress, too.
I'm convinced that's not true. Consider the case of CSS and ARIA roles. You could very easily write CSS/JS components keyed on roles using CSS 2.1 attribute selectors, and accessibility just follows naturally. For instance, define your tab panels using role="tab", and screen readers should immediately understand them, while for the visual design, your CSS and JS just select on the appropriate roles. You then use classes only for non-semantic concerns, like font and colour, instead of using classes for everything as is currently standard.
So you're not duplicating code, you're writing better semantic markup once which can be properly interpreted multiple ways in different media.
Not how it's currently done, but there's no reason it can't be done that way, even for desktop UIs.
The reality is that we use so much code because it is easier that way. And the added ease is very hard to remove.
Then there is data. A single picture is more storage than my first few computers had. Compression isn't magic.
Using libraries for a tiny feature without considering the whole size impact is what is driving code inflation. Now and then I find a very tiny project whose executable is big just because it links against Boost for a couple of classes.
It depends on how it's been implemented.
It's modular, so you should just have to pull in what you use.
That's the premise of the article, but it provides no proof of it. And I could just as well say "to me it seems software in general has gotten much safer over the years, because I barely remember the last time a program crashed."
* Use of undocumented magic numbers (1)
* No use of getter/setter pattern
* Inflexible design (datatype of result is fixed, no template pattern implemented)
* Manual memory management (no garbage collection used)
* No infrastructure for automated testing included
* No unit tests available
* Code has not changed for years (code smell!); probably stale code, to be removed in next release.
Intuitively this feels correct, I wonder if anybody has studied it.
Like, say you have a program that uses a user-specified program + arguments, like parallel or find -- but for evaluating a boolean condition.
true/false would help.
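A concrete sketch of that scenario with `find`, whose `-exec` treats the child's exit status as a boolean predicate (`demo` is just a throwaway directory for illustration):

```shell
# find's -exec uses the exit status of the given program as a condition,
# so true/false act as "always match" / "never match" predicates:
mkdir -p demo && touch demo/a demo/b
find demo -type f -exec true \; -print | wc -l    # prints 2 (both files match)
find demo -type f -exec false \; -print | wc -l   # prints 0 (none match)
```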
This is where a bit of wisdom differentiates people. There is no winning in trying to match exponential generality with our finite abilities. So wise people seek the essence and accept that there will always be uncovered cases; the important thing is not to lose the essence -- burying the essence under a mountain of glut is not much different from losing it. That is why the minimal approach is often more defensible than the bloated one: the essence covers 80% of the cases, and for the remaining 20% we cope.
And in the case where you're not using a shell but still need true/false, such as with parallel or find? Why not write your own true/false program then? In fact, I write my own parallel script every time I really need parallel. It's not hard -- it takes me about the same time as going through the man page.
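A minimal sketch of doing exactly that (`mytrue` is a hypothetical name; a C version would just be `int main(void) { return 0; }`):

```shell
# A do-nothing "true" replacement is a two-liner:
printf '#!/bin/sh\nexit 0\n' > mytrue && chmod +x mytrue
./mytrue && echo "mytrue: exit status 0"
```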
Author rants about how the size of the executable for the Unix `true` command has increased "exponentially" from 0 to 22KB over ~30 years, for no apparent reason other than "because it can", referring to cosmology, of course.
"...true and false commands also don’t need an option that can invert the result, or one that would allow it to send its result by email to a party of your choice."
The article is humorous in nature, but not without a point.
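For anyone curious, the comparison is easy to reproduce on a typical GNU/Linux box (exact size varies by distro and build, and most shells never even exec the binary):

```shell
# Size of the standalone coreutils executable (tens of KB on common builds):
ls -l "$(command -v true)"
# Meanwhile, in bash/sh, `true` is also a builtin that costs nothing extra:
type true
```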
OP, thanks for sharing the article.