He doesn’t use LLM detection tools, but he says papers with warning signs of LLM use are easy to spot. For whatever reason, ChatGPT output on his specific niche topic overuses a handful of obscure, rarely-used words that most people wouldn’t even recognize. The ChatGPT abusers sometimes have these words appearing several times throughout their essays.
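That kind of check doesn’t even need a tool; a minimal sketch of the idea (with a placeholder watchlist, since the source never names the actual words) might look like:

```python
import re
from collections import Counter

# Hypothetical watchlist: obscure terms noticed recurring in ChatGPT
# output for the niche topic (placeholders, not from the source).
WATCHLIST = {"liminal", "tapestry", "multifaceted"}

def flag_suspect_terms(essay: str, threshold: int = 2) -> dict:
    """Return watchlist words appearing at least `threshold` times."""
    words = re.findall(r"[a-z']+", essay.lower())
    counts = Counter(w for w in words if w in WATCHLIST)
    return {w: n for w, n in counts.items() if n >= threshold}
```

Of course, a human grader does this pattern-matching unconsciously after reading enough essays, which is presumably why he doesn’t bother with detection software.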
He’s also caught people whose reports cited many works and books outside the assigned reading, or in some cases books that don’t exist at all. Catching them is as simple as asking about their sources or where they got a copy of the text.
I see a lot of parallels in hiring and talking to junior software engineers right now. We had a well-liked take-home problem that we used for many years, but now it’s obvious that a majority of young applicants just use LLMs to get an answer. When we try to discuss their solution in the interview, they “can’t remember” how it works or why they chose their approach.
It’s really sad to me as a long-time remote worker, because I see far more blatant abuse from remote candidates. Like you, I’ve found that bringing people on site for interviews instantly scares away the LLM cheaters, but it’s expensive and time-consuming for everyone involved.