This was the second big thing Google had learned early on: "The equipment is reduced to its basics so it runs cooler. It can also be easily accessed and repaired quickly." -- slide 12/17
The whole sheet-metal box around a server is a real waste if your employees are the only ones with access to the area, and the only reason they touch a machine is to repair it. This is in contrast to NetApp (where I had worked before), which was busily designing impressive cabinets that would "stand tall" on the raised flooring of the data center.
A design of mine from a while back put everything on PCI cards on a PCI backplane. I had seen backplanes that basically look like motherboards full of PCI slots and load into racks. I wanted the cards to be nothing but CPU and memory, with software communicating over efficient networking (not TCP/IP) through PCI DMA. The design put IOMMU functionality in the backplane or on the PCI cards, with at least one card running a full-featured stack for management and at least one I/O card for external interfaces. I figured the backplane itself could be extended for that too, with a dedicated port, the way motherboards integrate GigE. Management and I/O could come in through remote DMA over dedicated wires, as many servers do with Ethernet, so all the PCI slots could be dedicated to compute.
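The card-to-card messaging in a scheme like this usually boils down to a ring buffer sitting in a shared memory window that both sides can DMA into. A toy sketch of that idea, with a plain `bytearray` standing in for the PCI memory window (the class name and framing are mine, not from any real product; on real hardware the head/tail updates would be doorbell writes over the bus):

```python
from typing import Optional

# Toy single-producer/single-consumer ring buffer in a "shared" memory
# window, as a stand-in for card-to-card messaging over PCI DMA.
class DmaRing:
    def __init__(self, size: int = 4096):
        self.buf = bytearray(size)   # stand-in for the PCI memory window
        self.size = size
        self.head = 0                # producer writes here
        self.tail = 0                # consumer reads here

    def send(self, msg: bytes) -> bool:
        need = len(msg) + 1          # 1-byte length prefix (toy framing)
        free = (self.tail - self.head - 1) % self.size
        if need > free or len(msg) > 255:
            return False             # ring full; caller retries later
        for b in bytes([len(msg)]) + msg:
            self.buf[self.head] = b
            self.head = (self.head + 1) % self.size
        return True

    def recv(self) -> Optional[bytes]:
        if self.head == self.tail:
            return None              # ring empty
        n = self.buf[self.tail]      # read the length prefix
        self.tail = (self.tail + 1) % self.size
        out = bytearray()
        for _ in range(n):
            out.append(self.buf[self.tail])
            self.tail = (self.tail + 1) % self.size
        return bytes(out)

ring = DmaRing()
ring.send(b"compute result")
msg = ring.recv()
```

The appeal is that neither side ever makes a syscall or touches a TCP/IP stack: the producer writes payload bytes and bumps `head`, the consumer polls `tail`, which maps naturally onto memory-mapped PCI windows.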
The dumbest thing about Facebook's model is destroying the drives. The first thing to notice, courtesy of Ross Anderson's Security Engineering, is that those pieces still contain a lot of data if the drives weren't degaussed first. The next is to remember the fastest way to destroy data: use clustered, encrypting filesystems so that secrets never touch the drive in plaintext. Then you just have to delete the keys to lose the secrets. No need to trash the drives at all. The crypto can happen in the storage manager or at the hardware interface, with HW acceleration available for both. I'm surprised they haven't already built this with all the smart people they have working on big-data stacks.
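The key-deletion idea (often called crypto-erase or crypto-shredding) is simple enough to sketch. The toy below uses a SHA-256-based XOR stream as a stand-in cipher purely for demonstration; a real system would use AES-GCM or similar from a vetted library, and the key-store name is made up:

```python
import hashlib
import secrets

# Toy stream cipher for illustration only -- NOT real crypto. XORing with
# the same keystream both encrypts and decrypts.
def keystream_xor(key: bytes, data: bytes) -> bytes:
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# The "drive" only ever sees ciphertext; keys live in a separate key store.
key_store = {"volume-1": secrets.token_bytes(32)}
ciphertext = keystream_xor(key_store["volume-1"], b"user secrets")

# Normal read path: fetch the key, decrypt.
assert keystream_xor(key_store["volume-1"], ciphertext) == b"user secrets"

# Crypto-erase: delete the key; the bytes left on disk are now useless.
del key_store["volume-1"]
```

The point is that "deleting" terabytes becomes a constant-time operation on a few hundred bits of key material, regardless of how many drives the ciphertext is spread across.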
That said, there are a number of systems at FB where deleting a crypto key loses the linked data forever--but they still crunch the hard drives just to be really sure. The drive crunching is an incredibly tiny expenditure compared to the massive CapEx and OpEx required to build, stock, and run the datacenters. It's worth it if only for the peace of mind.
Good old Zuck!
[0] https://www.google.co.uk/maps/place/Pinnacle+Sweden+AB/@65.6...
They go out of their way to design and manufacture their own motherboards and equipment[1] to reduce their capital and operational expenses.
While the motherboard and enclosures are probably custom made, the components inside (CPU, memory, etc) are all "commodity".
I would be seriously surprised by that. Do you have any data on heat lost to the surroundings through water friction?
I would say the potential energy of the water is mostly carried by the water itself to its final destination, with the water slowly heating up during the descent. The surroundings are not appreciably heated, since water is a good coolant.
So let's picture a waterfall: water practically stops at the bottom, so potential energy has dissipated somehow, but surroundings are not heated, water is. The energy remains in the water, which continues its happy descent to the sea.
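A quick back-of-envelope check supports this: if all the potential energy of the falling water ends up as heat in the water itself, the temperature rise for a drop of height h is simply dT = g·h / c, with c the specific heat of water. The 100 m figure below is just an illustrative waterfall height:

```python
# Temperature rise of falling water if all potential energy (m*g*h)
# becomes heat in the water itself: m*g*h = m*c*dT  =>  dT = g*h / c.
g = 9.81     # m/s^2, gravitational acceleration
c = 4186.0   # J/(kg*K), specific heat of water
h = 100.0    # m, an illustrative (tall) waterfall

dT = g * h / c
print(f"{dT:.3f} K")  # roughly 0.23 K per 100 m of drop
```

A fraction of a degree per 100 m of drop, which is why you never notice the bottom of a waterfall being warm, even though the energy really has gone into the water.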
Of course, you lose out on all the other advantages...
One such example energy company is https://oppenfjarrvarme.fortum.se/?lang=en
https://translate.google.com/translate?sl=sv&tl=en&u=http%3A...
Government millions for Facebook's new data center in Luleå
Scroll down to read it; basically, taxpayers' money went into Zuckerberg's pocket.
I don't think Facebook is good for the openness of the web or for society as a whole; the amount of power without oversight that this Zuck has is simply scary.
Edit: here's some food for thought- https://en.wikipedia.org/wiki/Criticism_of_Facebook
1) The head has to move further to read the next sector of interest, which is particularly problematic for fragmented data.
2) It is more difficult to manufacture high data density (AKA high-precision) disks in large formats, as some surface defects are cumulative, and get worse as the disk gets larger.
3) Manufacturing defects which occur pseudo-randomly increase proportionally to the surface area of the disk, so the reject rate increases as a square of the radius.
4) Smaller drives can be spun much faster, allowing for higher data rates: at a given spin rate the centripetal acceleration grows linearly with the radius (a = ω²r), and the hoop stress in the platter grows with the square of the radius (σ ∝ ρω²r²).
For these reasons and many others, HDDs have been moving to smaller and smaller form factors.
The same probably applies for things like the actuator for the heads, etc.
Or else split the I/O into a zillion little independent ports.