> If some ChatGPT text can trick you then your process is broken anyway
This is pretty unfair and seems like victim-blaming when companies are spending billions of dollars to create these programs with the specific intent of passing the Turing test.
There’s a bit of an echo chamber on HN where people convince each other that all LLM-generated text is easy to identify, riddled with errors, and “obviously” inferior to real human writing. Because some LLM writing fits those criteria and is easily identified, these folks are convinced they can identify all LLM writing, and that anyone who can’t must be a dunce.
I didn't claim anything about identifying writing. That's a straw man. I'm talking about humans talking to each other, even if it's over a Zoom call. Any interview process that doesn't include that is broken, and that's my claim. Echo chamber or not.
Apologies for misunderstanding you, then. Agreed that human-to-human interaction is critical, especially for identifying culture fit (not homogeneity, of course, just interaction styles like openness, etc.).
I do think people cheat video interviews with LLM help, but in-person should always be required anyway, even if it’s via proxy (“meet with a colleague from our Madrid office”).
An interviewer is a "victim"? Maybe they should just, you know, speak to their interviewees. At least in 2024, that's hardly something an LLM can fake. So if you are fooled, it's because you cheaped out, and you are hardly a victim.