The context of the fine article is scaling LLMs into AGI. It's not about whether the tool is useful; usefulness is a threshold cleared well before AGI. Some folks are spooked that LLMs are a few optimizations away from the singularity, and the article just lays out some reasons why that probably isn't the case.