That is still very much the case; the danger comes from what you do with the text that is generated.
Put a developer in a meeting room with no computer access, no internet, etc., and let him scream instructions through the window. If he screams "delete the prod DB", what do you do? If you end up having to restore a backup, that's on you; the dude inherently didn't do anything remotely dangerous.
The problem is that the scaffolding people put around LLMs is very weak, the equivalent of saying "just do everything the dude tells you, no questions asked, no double checks in between, no logging, no backups". There's a reason our industry has development policies, four-eyes principles, ISO/SOC standards. There are already ways to massively improve the safety of code agents; just put Claude Code in a BSD jail and you already have a much safer environment than what 99% of people are running, and that's not very tedious to set up. Other, safer execution environments (command whitelisting, argument judging, ...) will be developed soon enough.
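To make the whitelisting/argument-judging idea concrete, here's a minimal sketch of a gate you could put between an agent and its shell tool. All the names here (`ALLOWED`, `BLOCKED_ARGS`, `gate`) are hypothetical; a real setup would hook something like this into the agent framework's tool-execution layer and be far more thorough:

```python
# Hypothetical sketch: gate an agent's shell commands before execution.
import shlex

ALLOWED = {"ls", "cat", "grep", "git"}      # commands the agent may run
BLOCKED_ARGS = {"--force", "-rf", "push"}   # crude argument judging

def gate(command: str) -> bool:
    """Return True only if the command passes the whitelist and arg checks."""
    try:
        parts = shlex.split(command)
    except ValueError:
        return False                        # malformed quoting: reject
    if not parts or parts[0] not in ALLOWED:
        return False                        # command not whitelisted
    return not any(arg in BLOCKED_ARGS for arg in parts[1:])

print(gate("ls -la"))            # True
print(gate("rm -rf /"))          # False: rm isn't whitelisted
print(gate("git push --force"))  # False: blocked argument
```

This is deny-by-default, which is the point: the text the model generates is harmless; only what survives a gate like this ever touches the machine.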