The stakes are a bit different for students, unfortunately, who'll have their writing arbitrarily passed through some snake-oil AI detector. That's a shame, because "learning how not to trigger an AI detector" is a totally useless skill.
Generally, I don't think we need AI detection. We need dumb-bullshit detection. Humans and LLMs can both generate that. If people can use an LLM in a way that doesn't generate dumb bullshit, I'm happy to read it.