But yeah. I agree. Why does Lightroom take forever to load, when I can query its backing SQLite in no time at all?
And that's not even mentioning the RAM elephant in the room: Chrome.
Younglings today don't understand what a mindbogglingly large amount of data a GB is.
But here's the thing: it's cheaper to waste thousands of CPU cores on bad performance than to have an engineer spend a day optimizing it.
No, it really isn't. It's only cheaper for the company making the software (and only if they don't use their software extensively, at that).
Assume an engineer-day costs $800. Assume your software has 10,000 daily users, each wasting 20 seconds a day (actual waiting while actively blocked, not time spent on some other task). Assume those users earn on average 1/8 of what the engineer makes, i.e. about $12.50 an hour. That's roughly 55 wasted hours, or about $700, every single day: the $800 pays for itself in little more than a day, and saves well over $100,000 a year.
Obviously, this is a contrived example, but I think it's a conservative one. I'm overpaying the engineer (on average) and probably underestimating both the time wasted and the cost of the users' time.
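The arithmetic above is easy to sanity-check. A quick sketch in Python (the dollar figures are the comment's assumptions, and the 250-working-day year is an added assumption, not data); under these inputs the payback is actually faster than even the conservative claim:

```python
# All figures are assumptions from the comment, not measurements.
ENGINEER_DAY = 800                        # $ per engineer-day
USERS = 10_000                            # daily users
WASTED_S = 20                             # seconds of genuine waiting per user per day
EMPLOYEE_HOURLY = (ENGINEER_DAY / 8) / 8  # users earn 1/8 of the engineer's $100/hour

wasted_hours_per_day = USERS * WASTED_S / 3600
cost_per_day = wasted_hours_per_day * EMPLOYEE_HOURLY
payback_days = ENGINEER_DAY / cost_per_day
annual_saving = cost_per_day * 250        # assumed ~250 working days/year

print(f"wasted hours/day: {wasted_hours_per_day:.1f}")
print(f"wasted cost/day:  ${cost_per_day:,.0f}")
print(f"payback:          {payback_days:.1f} days")
print(f"annual saving:    ${annual_saving:,.0f}")
```

Run it and the daily waste comes out around $700, so the one-day fix pays for itself in roughly a day, and the annualized figure lands well above the estimate in the comment.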
I 100% agree on saving human time. Human time is expensive. CPU time is absolutely not.
Run the lifetime cost of a CPU, and compare it to what you pay your engineers. It's shocking how much RAM and CPU you can get for the price of an hour of engineer time.
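Here's what that comparison looks like with illustrative numbers (the hardware price, power draw, electricity rate, and lifespan are all assumptions, not quotes):

```python
# Hypothetical numbers: a $3,000 server (CPU + RAM) run 24/7 for 4 years.
HARDWARE_COST = 3_000                    # $, assumed purchase price
POWER_COST = 0.15 * 0.3 * 24 * 365 * 4   # assumed $0.15/kWh at ~300 W average draw
LIFETIME_HOURS = 4 * 365 * 24            # 4 years of continuous operation

cost_per_hour = (HARDWARE_COST + POWER_COST) / LIFETIME_HOURS
print(f"server cost:   ~${cost_per_hour:.2f}/hour")  # pennies per hour
print(f"engineer cost: ~$100.00/hour")
```

Even with generous padding on the hardware side, the machine costs pennies per hour against the engineer's $100: three orders of magnitude apart.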
And that's not even all! Next time someone reads the code, if it's "clever" (but much, much faster), that's more human time spent.
And if it has a bug because it sacrificed some simplicity? That's human hours or days.
And that's not even all. There's the opportunity cost of that engineer. They cost $100 an hour. They could spend an hour optimizing away $50 worth of computer resources, or they could implement 0.1% of a feature that unlocks a million-dollar deal, roughly $1,000 of expected value.
Then having them optimize is not just a $50 loss; it's a $900 opportunity cost (the $1,000 of forgone feature value, net of the engineer's $100 hour).
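The $50 and $900 figures fall out of the same assumed rates, spelled out:

```python
ENGINEER_HOUR = 100      # $ per hour, assumed
COMPUTE_SAVED = 50       # $ of resources an hour of optimization recovers
DEAL_VALUE = 1_000_000   # $ deal the feature would unlock
FEATURE_SHARE = 0.001    # the hour implements 0.1% of that feature

feature_value = DEAL_VALUE * FEATURE_SHARE     # $1,000 of expected value per hour
optimize_net = COMPUTE_SAVED - ENGINEER_HOUR   # -$50: the direct loss
feature_net = feature_value - ENGINEER_HOUR    # +$900: the net gain forgone

print(f"optimizing nets ${optimize_net:+,.0f}/hour")
print(f"feature work nets ${feature_net:+,.0f}/hour")
```

Whether you call the forgone $900 or the full $950 swing the "opportunity cost" is a matter of accounting taste; either way the feature hour dominates.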
But yeah, for shipped software, shrink-wrapped binaries or JS running in client browsers, that's just having someone else pay for it.
(which, for the company, has even less cost)
But on the server side: yes, in most cases it's cheaper to get another server than to make the software twice as fast.
Not always. But don't prematurely optimize. Run the numbers.
One place where it really does matter is when the code runs on battery power. Performance equals battery life. You can't just buy another CPU for that.
Yet piles and piles of abstractions are considered acceptable and even desirable, while having significant negative effects on code readability.
I'm only half-joking.
[EDIT] For extra lulz, let them use a language with a bunch of fancy modern language features, so they get a taste of what those cost when they realize they can't afford to use some of them.
And microcontrollers will never get abundant capacity: whenever chips get smaller and more efficient, designs just ship with a smaller battery, no matter the tech level.
So it's not like "everyone should know the history of the PDP-11" which I would disagree with.
During my schooling we built traffic lights and the like on tiny machines, and even in VHDL, even though desktop machines were already running at hundreds of MHz. Both still have a place.
Even after closing all tabs, since tabs (and extensions) are basically programs in this operating system.