I copied 153GB of data onto my laptop earlier over my fiber connection, because the project I'm working on needs it and I couldn't be bothered to go find the external drive it's on in the storage closet in the second bedroom.
I can buy 500GB of really fast M.2 SSD for 153 quid (approximately 30p per GB), or terabytes of storage for the same money.
I got a new ThinkPad a few weeks ago. I specced it with 16GB in one slot because I fully intend to upgrade to 32GB fairly soon; with virtualisation I can bump up against 16GB. Let that sink in: my time is so precious (to me on my machines, and to my employer on theirs) that I'm happy to virtualise entire operating systems and allocate billions of bytes of memory and storage to save some of it.
Hardware is absurdly cheap, and I can't really see that changing for a while. From a systemic point of view, it's ridiculously more efficient to spend a lot of money in a few places (Intel, Samsung, IBM etc.) than to spend a lot of money in every place.
Every time Intel puts out a processor that is 10% faster at the same price, everyone else's software just got 10% faster for free* (*where free = the price of the new processor).
There just isn't a market incentive (financial or otherwise) to roll back bloat. If there were, it would be a competitive advantage and everyone would be doing it; that they aren't shows that it isn't.
I suspect a lot of the reason Linux installs stayed relatively lean for so long is that most people had CD burners rather than DVD burners; once DVD burners were common, install ISOs blew right past 650MB. I think Fedora 26's was 1.3GB, but I didn't really pay attention.
In any case, that is irrelevant now that 14nm and IPC gains are pretty much maxed out; from this point on, this is it. Unless CPUs move away from silicon, this is as fast as it gets (save for throwing more cores at the problem).
On the multithreaded workloads I care about, it's not just a little faster, it's a lot faster.
There is still a lot of fruit to be had in that direction, I think, and that's before you consider the other areas left for performance improvement.
Of course, some workloads/people are already butting up against a different cost/benefit trade-off, and they do care about eking every cycle out of the processor, but for me it hardly matters.
My desktop at work runs a development version of our main system faster under Vagrant than it runs in production, since I've got more RAM and a machine with twice as many cores.
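A minimal Vagrantfile sketch of that kind of over-allocation, assuming the VirtualBox provider; the box name and the memory/CPU figures are illustrative, not the actual setup:

```ruby
# Hypothetical Vagrantfile: give the dev VM more RAM and cores
# than the production box has. Values are illustrative only.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"    # assumed base box
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 16384                 # 16GB of RAM for the guest
    vb.cpus   = 8                     # twice the production host's cores
  end
end
```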
It's a strange market when that happens.
The main culprit was extracting and copying all of the small source files that come with it.
Could a .NET expert break down for me why VS takes the size and memory it does? I know why VS Code needs 150MB of RAM (it's JavaScript), but VS should be written in C++, right?