Is there a reason why most hacking happens through social engineering? Likely because humans are the weakest link in the security chain, which makes them nearly always the lowest-hanging fruit for an attacker to target.
Is that a pattern we should be expanding? Sure, the comparison holds when using GPT to assist with human tasks that can't be automated any other way; but if a task can be done entirely by a computer, without involving a human at all, inserting an LLM into the middle of it looks like a strict downgrade in security.
It's really good for security and reliability that I don't have to go through a second human just to add a calendar appointment to my phone.