Do I buy expensive tool X, or get by with cheap ones that make the work take longer or be less precise? Do I buy fancy machine Y, or pay 10 people to do the job manually?
The only new development with AI is that this trade-off was traditionally limited to relatively manual or repetitive processes, but is now expanding to knowledge work as well.
In the medium term the question will be: do I just pay for people, or do I spend resources on collecting data and training a model?
Unions are stronger than they have been in the last 40 years, especially in auto, medical, and trucking. Deglobalization is likely to bring some production home, but the real question is: why dismiss, or even relish, worker insecurity and inefficiency?
You know, for other people you are the other people that things happen to.
Minor quibble unrelated to the main content of the post: the measures are not fixed costs of assets, but a blend of depreciation and the operating cost of power usage for those assets. Essentially a regular snapshot of an average daily accounting cost, so to speak (which is reasonable). And this was the cost to Google, not taking into account what Google could make by charging Cloud customers for the use of those assets (the opportunity cost).
My understanding is that the main use of this tool was actually for engineers to give reasonable-ish impact statements for their performance work. I hadn't heard of anyone using it to make serious trade-offs in project planning, since at the level where that matters, the capacity planning teams had more precise costs related to their actual budgets, as well as short term goals like "RAM has a supply chain shock so we can't get any more than X for the next quarter."
Also a pet peeve of mine: people constantly screwed up the units. "10 SWEs" is a rate (cost per unit time), just like "10 TB of RAM", whereas "SWE-years" is a total cost (i.e., dollars). Many design documents use these inconsistently.
But the exact numbers are important. An H100 is ~$2/hour [1], so 1000 of them running 24/7 cost roughly $17.5M/year. Even if Google gets a massive internal discount, that's still far more than the total cost of 3 people. If you had to choose between 16 (highly paid, senior) people and 1000 H100s, would you even have to think about the choice?
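To make the arithmetic explicit, a quick back-of-the-envelope sketch. The ~$2/hour rate comes from [1]; the $1M/year fully-loaded cost per senior engineer is purely an illustrative assumption, not a figure from this thread:

```python
# Back-of-the-envelope: 1000 H100s rented 24/7 vs. senior engineers.
HOURLY_RATE = 2.0          # ~$2/hour per H100 at cloud list prices [1]
NUM_GPUS = 1000
HOURS_PER_YEAR = 24 * 365  # 8760 hours of 24/7 operation

gpu_cost = HOURLY_RATE * NUM_GPUS * HOURS_PER_YEAR
print(f"GPU fleet: ${gpu_cost / 1e6:.1f}M/year")  # $17.5M/year

# Assumed fully-loaded annual cost of one senior engineer (illustrative).
COST_PER_ENGINEER = 1_000_000
print(f"Buys roughly {gpu_cost / COST_PER_ENGINEER:.0f} such engineers")
```

At these assumed rates the fleet costs about as much as ~17 senior engineers, which is why the 3-vs-1000 framing looks lopsided.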
Then again, when you revisit this comment in a few years' time, the original comparison may be correct.
[1] https://gpus.llm-utils.org/h100-gpu-cloud-availability-and-p...
And if you really insist on a 24/7 comparison, you would actually need about 12 people, since expected productivity per person is about 6 hrs/day. Factor in that people also need vacations, sick days, and weekends off, and 1000 H100s might actually be a good trade-off in the very near future.
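The adjustment above can be sketched the same way. The 6 productive hours/day figure is from the comment; the ~230 working days/year is an assumed ballpark for weekends, vacation, and sick days:

```python
# How many people does it take to cover one 24/7 "seat" of output?
MACHINE_HOURS_PER_YEAR = 24 * 365     # 8760: a GPU never sleeps
PRODUCTIVE_HOURS_PER_DAY = 6          # from the comment above

# Ignoring calendar effects: 24 / 6 = 4x headcount (so 3 people -> ~12).
naive_ratio = 24 / PRODUCTIVE_HOURS_PER_DAY

# Assumed: ~230 working days/year after weekends, vacation, sick days.
WORKING_DAYS_PER_YEAR = 230
human_hours = PRODUCTIVE_HOURS_PER_DAY * WORKING_DAYS_PER_YEAR  # 1380
full_ratio = MACHINE_HOURS_PER_YEAR / human_hours

print(f"naive multiplier: {naive_ratio:.0f}x")   # 4x
print(f"with calendar:    {full_ratio:.1f}x")    # ~6.3x
```

So the 12-person figure only accounts for the 6-hour day; adding calendar effects pushes the multiplier higher still.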
Very unlikely that Google pays cloud rental rates for its H100s, though. Amazon has one outright at $44k: https://www.amazon.com/Tesla-NVIDIA-Learning-Compute-Graphic...
Unfortunately, 1 Jeff Dean SWE != 1 average SWE.