> Because you are not able to pick and choose what humanity tries out, so shall we stop entirely?
I don’t see anything that dictatorial in their suggestion.
There’s clearly something wrong in the current LLM-based ecosystem. They take gigawatt-hours to train and digest the entire corpus of the internet to produce a model that writes at, what, the level of an erudite college freshman?
> not focusing on ever increasing the size of ai models, so that we don't need to build all those power-hungry ai training datacenters?
I read the italic bit not as a command to stop, but as a suggestion to come up with better algorithms. Which researchers are presumably working on. Hopefully ChatGPT & friends don’t suck up all the oxygen.