Well, apart from the fact that ChatGPT is really incapable of developing a thought, and also apart from the fact that half of people will fail to delete sentences like "As a language model, I can't..." (insert gist of question here), it's painfully obvious when something is LLM generated.
The moment a phrase like "it's crucial to remember" pops up, I know what I'm reading. Then there's the way it always sounds like it's speaking to a child, and the way it avoids saying anything unequivocally without some sort of disclaimer, as the legal department's CYA filter will ensure.
I remain thoroughly unimpressed by the entire venture. If this is Skynet 1.0, we're all safe for centuries to come.