You might be shadowboxing a bit with a point I didn't make (or maybe your comment was intentionally orthogonal to what I raised, not sure). I work with this technology every day in a professional, commercial context: not just LLMs, but many other ML/DL implementations that run the gamut of downstream tasks, from anomaly detection to time series forecasting. I think it's useful enough to be building real things with it to improve the way my business functions. In the course of building those inference and training stacks from scratch, I've also seen how spectacularly they can fail, and how often.
I don't think AI is evil. I think autoregressive token prediction is stochastic enough to be considered unreliable in its current state. That doesn't mean I'm going to stop building things with it; it just means I've seen these systems implode regularly enough, even with grounding via RAG, that I tend to urge caution first and foremost (as I did in my original message).
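
To make the "stochastic" part concrete, here's a minimal sketch of temperature sampling over a toy next-token distribution. The tokens and probabilities are invented for illustration, not pulled from any real model:

    import math
    import random

    # Hypothetical next-token distribution for some prompt; the tokens
    # and probabilities here are made up purely for illustration.
    next_token_probs = {
        "Paris": 0.82,
        "a": 0.09,
        "the": 0.05,
        "Lyon": 0.04,
    }

    def sample_token(probs, temperature=1.0):
        # Temperature scaling: p_i^(1/T), renormalized. Higher T flattens
        # the distribution, so low-probability tokens get picked more often.
        weights = {t: math.exp(math.log(p) / temperature) for t, p in probs.items()}
        total = sum(weights.values())
        r = random.uniform(0.0, total)
        cumulative = 0.0
        for token, w in weights.items():
            cumulative += w
            if cumulative >= r:
                return token
        return token  # fallback for floating-point edge cases

    random.seed(0)
    # Even with an 82% favorite, roughly 1 in 5 samples at T=1.0 comes
    # back with something else. Multiply that over hundreds of tokens per
    # response and the per-response variance adds up fast.
    print([sample_token(next_token_probs) for _ in range(20)])

That per-token randomness is the whole point: even a model that strongly favors the right answer will periodically wander off it, which is why I treat the output as unreliable by default.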