This is the core of the article, and it's basically guesswork. From all I've read, I tend to believe the opposite, actually: cloud is a lot greener than home servers.
Utility-scale power, utility-scale chip production, exabyte-scale storage racks, generally more efficient chips (Xeons vs. desktop models), more efficient server PSUs.
Not to mention less overhead, like deliveries: you only need one truck to deliver the hardware to a data center, whereas you might need 30, 50, or 100 to deliver computers to individual households.
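For a rough sense of what utility scale buys: hyperscale operators report power usage effectiveness (PUE) around 1.1, i.e. only ~10% overhead beyond the IT load itself. A back-of-envelope sketch (the home figures here are assumptions, not measurements):

    # Wall power needed to deliver a given IT load, for a given power
    # usage effectiveness (PUE): PUE = total facility power / IT power.
    # Hyperscale DCs report ~1.1. A home closet is ~1.0 if the heat is
    # simply vented, but worse if air conditioning has to remove it.
    def wall_watts(it_watts: float, pue: float) -> float:
        return it_watts * pue

    print(wall_watts(10, 1.1))  # datacenter: 11.0 W at the wall
    print(wall_watts(10, 1.4))  # assumed home box behind an AC: 14.0 W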
> But given that a home server can run on 10W of electrical power, and potentially off of a solar panel, I found this unpersuasive. I didn't have any quantitative estimates then, and still don't now.
This is laughable. It's not an argument for individual servers in households; if anything, it's an argument for utility-scale solar and more efficient software that runs well on 10W CPUs.
Yah, sure!
"I didn't have any quantitative estimates then, and still don't now. However, it's likely that a world in which there is one server per household or per street would be more electrically efficient than the current world of billionaire cloud servers."
My neighbors' electricity comes from gas too. The only way this would work is if we all also bought solar, as the author suggests; but encouraging your average homeowner to not only run a home server but also invest in solar for it is a non-starter.
Also, I suspect that a home server could just sleep the CPU a fair percentage of the time. On top of that, you don't need to run a billing system that charges different users for various things.
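A sleepy server's average draw falls off quickly. A toy duty-cycle calculation (the 10 W active / 2 W sleep figures are assumptions for a small ARM board, not measurements):

    # Average power of a duty-cycled home server.
    def avg_watts(active_w: float, sleep_w: float, active_fraction: float) -> float:
        return active_w * active_fraction + sleep_w * (1.0 - active_fraction)

    # Awake 5% of the time, asleep the rest:
    print(avg_watts(10.0, 2.0, 0.05))  # -> 2.4 W average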
Sorry, but I don't buy it. DCs are frequently powered by solar (https://www.google.com/about/datacenters/renewable/, https://sustainability.fb.com/innovation-for-our-world/susta...), and this article gives no actual evidence other than a home server being able to run on 10W. Solar and wind projects are successful in part because of economies of scale. While local-scale grids may be great, using them to power services that run on cloud infrastructure today doesn't seem like a wonderful application.
I have no doubt that the aggregate sum of all of my cloud usage across Google, FB, Amazon, etc. amounts to more than 10W, but if you summed up all of the different pieces of those services that I use, I doubt you'd ever be able to scale them all down to something that could run at home, let alone on 10W. There may be a small sliver that is possible (e.g. email), but the fact of the matter is that it's almost irrelevant.
Coincidentally, the author of this post maintains a project that develops a home server system, which means there's plenty of vested interest in pushing this non-analysis.
So it's not really a 10-watt local server vs. a shared slice of a cloud. It should be something like 0.5 watts or so to upgrade your home router into something that could provide internet services: likely just double the RAM and add a microSD card for storage, and that would be all you need. A 256GB microSD card, even a fast one, is around $50.00.
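Worked out at an assumed $0.15/kWh, the running cost is pocket change either way:

    # Annual energy cost: 0.5 W marginal router upgrade vs. a 10 W box.
    def annual_cost_usd(watts: float, usd_per_kwh: float = 0.15) -> float:
        kwh_per_year = watts * 24 * 365 / 1000
        return kwh_per_year * usd_per_kwh

    print(round(annual_cost_usd(0.5), 2))   # ~$0.66/yr for the beefier router
    print(round(annual_cost_usd(10.0), 2))  # ~$13.14/yr for a dedicated box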
With a bit of smarts, like a CDN or even IPFS, home servers could collectively make quite a bit of sense with close to zero power overhead.
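To make "a bit of smarts" concrete, here's a toy sketch of CDN-style content placement across home routers using rendezvous hashing (the server names are made up, and a real system would need replication and health checks on top):

    import hashlib

    SERVERS = ["home-router-12", "home-router-47", "home-router-93"]

    def _score(server: str, key: str) -> int:
        return int.from_bytes(hashlib.sha256(f"{server}/{key}".encode()).digest(), "big")

    def pick_server(key: str) -> str:
        # Deterministic: every client maps a URL to the same router, and
        # adding or removing a router only remaps ~1/N of the keys.
        return max(SERVERS, key=lambda s: _score(s, key))

    print(pick_server("/blog/index.html"))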
The hardware lifecycle should also be considered. Although home environments often recycle equipment, cloud environments are in a better position to get more bang per watt, and can upgrade their SAN storage capacity easily to handle changes in demand.
I suppose there is a lot of inefficiency in hitting all those switches and routers on the way to a cloud's network. However, that's shared with other users as well.
So, theoretically, heating your house with CPUs would be greener than using central heating...
If you live in a cold area, a home server's heat wouldn't be wasted; in fact, it would theoretically lower your heating bill (by a percent or less, but hey!), since that's energy your heater no longer has to supply.
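The arithmetic, with an assumed cold-climate heating demand (your house will differ):

    # Heat from a 10 W server running year-round vs. household demand.
    server_heat_kwh = 10 * 24 * 365 / 1000   # ~87.6 kWh/yr of heat
    heating_demand_kwh = 10_000              # assumed annual demand

    print(f"{server_heat_kwh:.0f} kWh/yr of 'free' heat")
    print(f"{100 * server_heat_kwh / heating_demand_kwh:.1f}% of demand")
    # Caveat: only useful during heating season, and worth roughly a
    # third as much as heat-pump heat per kWh of electricity.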
Take, for example, developing a website. Maybe it's a site for a small municipality to post community updates. This work could be done by a small, local consulting group or a gigantic, multi-national one. All else being equal, the cost of the project will necessarily be higher with the large consultancy than with the small.
And experience bears this out. I've been in both large and small consultancies, and the only difference was how many layers of management sat on top of the decision process. There is no "economy of scale" in developing to a customer's needs, and there is nothing in the management overhead of the giganto-corp that improves the project for the customer. All that overhead has to get paid for somehow, so it necessarily leads to higher project costs (which probably means massive budget overruns, because the big corp likely priced the project at or under the small one's bid in order to close the deal).
Large, centralized systems become something like hedges for the management org. They lose money on a long tail of small projects, only to make it back and more on a few large winners that can be milked. We look at the bottom line and say "everything is up! It must be good!", but we don't stop to look at the individual failures. Whereas, if it were all decentralized, we wouldn't have a single metric to tell us what direction the aggregate is going in, but that long tail would probably be served a lot better.