The difference between an average diet and a vegan diet, per Scarborough et al. 2023 and Poore & Nemecek 2018, is in the realm of 1,450 kg CO2e/year.
Assuming those numbers, that difference is equivalent to around 14,500 prompts per day, or ~5.3M prompts per year.
So unless the prompt estimates are off by more than two orders of magnitude...
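The equivalence above can be sanity-checked with a back-of-the-envelope calculation. The per-prompt energy (0.3 Wh) and grid intensity (~0.9 kg CO2e/kWh) below are my assumed inputs, chosen to roughly reproduce the thread's figure; they are not from a cited measurement:

```python
# Back-of-the-envelope check of the diet-vs-prompts comparison.
# Assumptions (illustrative, not sourced): 0.3 Wh per prompt and
# ~0.9 kg CO2e/kWh grid intensity, implying ~0.27 g CO2e per prompt.

diet_diff_kg_per_year = 1450      # average vs vegan diet, kg CO2e/yr
wh_per_prompt = 0.3               # assumed energy per prompt
kg_co2e_per_kwh = 0.9             # assumed grid carbon intensity

g_co2e_per_prompt = wh_per_prompt * kg_co2e_per_kwh  # Wh * kg/kWh = g
prompts_per_day = diet_diff_kg_per_year * 1000 / 365 / g_co2e_per_prompt
prompts_per_year = prompts_per_day * 365

print(round(prompts_per_day))   # on the order of 14,000-15,000
print(round(prompts_per_year))  # on the order of 5M
```

With these inputs the break-even point lands near the 14,500 prompts/day quoted above, which is the sense in which the prompt estimate would need to be off by more than two orders of magnitude for the comparison to flip.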
Basically, there are a lot of words in your initial link, but they all hinge on the reader taking the stated energy assumption for a single (undefined) prompt at face value. If that initial assumption is wrong (at minimum, it's poorly defined in your link), all further conclusions are invalid. Many a scientific publication has done this same trickery =].
They don't define what a query is when they are talking about AI power usage. If we want to get serious, we'd tie usage to tokens since we can actually track token usage.
Huh? The latter blog post does link to the former's blog, but not as a source for that claim. It cites an Altman blog, an estimate from EpochAI, an article in the MIT Technology Review (albeit one that estimates 3x higher), and a paper put out by Google. It's really surprisingly well cited, and I don't know how you came away from it thinking it was a circular reference. The Google study is in the subheading!
1) I click your link
2) I click the link associated with the 0.3 Wh of energy claim in the section "The full cost of a prompt".
3) The link from 2) takes me to a blog post from Hannah Ritchie. In Hannah's post, I click a link associated with the following excerpt:
"Third, as a result, more recent estimates suggested that the assumptions I relied on (h/t to Andy Masley’s work on this) — that one standard query used 3 watt-hours (Wh) of electricity — were possibly an order of magnitude too high. In this case, I was happy to be conservative and overestimate the energy use."
4) This link takes me to the author of your original post, but earlier.
None of this quantifies cost per token, which is a much more relevant metric than whatever "cost per text-based query" means, a measure I think is both quite broad and quite model-dependent.
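To illustrate why a per-query figure is so broad: under a simple flat per-token energy model (the function and the Wh-per-token figure below are purely hypothetical, not from any cited source), two "queries" can differ by orders of magnitude:

```python
# Sketch of a token-based energy metric. The flat per-token cost is a
# made-up illustrative figure; real cost varies by model and hardware.
def query_energy_wh(prompt_tokens: int, output_tokens: int,
                    wh_per_token: float = 1e-4) -> float:
    """Energy for one query, assuming a flat cost per token processed."""
    return (prompt_tokens + output_tokens) * wh_per_token

# A short chat turn vs. a long-context request:
print(query_energy_wh(200, 300))        # 0.05 Wh
print(query_energy_wh(100_000, 2_000))  # 10.2 Wh
```

Since token counts are what providers actually meter, Wh per token (per model) would be a far more checkable unit than "per query".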