At this point every remote interview checklist has to include checks for humanity, because the percentage of straight-out fakes is too high. Even the questions to ask me at the end were GPT-provided.
The fake job applicants are only siphoning resources from the economy at the expense of everyone else involved. The ones getting screwed the most are the legitimate applicants, some of whom are struggling to make ends meet while getting auto-rejected constantly despite decades of experience. No one should stand for it.
It’s actually a constant theme on HN to imagine that passing laws will magically make problems disappear. The realities of enforcing the law, or even identifying the perpetrators, are imagined to be the easy part.
“You keep getting the stomach bug. Here take this, it’ll calm your stomach. No no, you can keep eating that expired cheese, it’s all good”
But if that isn't the case, there's no reasonably good safety mechanism to mitigate the massive amount of harm that a determined bad faith actor could cause to the economy.
But making false claims about your work history (as could be the case with the one using ChatGPT to answer questions) is a problem, isn't it? And it's wonderful to see these rebuttals treating as hypothetical something that already happened. https://www.lawdepot.com/resources/business-articles/legal-c...
A more realistic scenario would involve no enforcement by the government (except perhaps in extreme cases, like with the 'spam king' back in the day). ChatGPT's terms of service would already cover it under the "shall not be used for illegal activity" language, and it would be just enough of a deterrent to benefit a larger number of people without creating new problems. But I wasn't advocating for a specific solution, just a call to a congressman. Despite their faults and flaws, they're probably still going to do a better job than I would at making that call, or maybe it won't even be a priority for them and they'll do nothing.
Why wouldn't this be a desired outcome? Unemployment doesn't give carte blanche to send spam.
It’s too elaborate of a Rube Goldberg strategy to take very seriously. Companies struggle to achieve simple, clear, short-term goals in tight-knit, well-aligned teams. Ain’t nobody got the skill to pull off that level of conspiracy.
Good luck.
The applicants doing fake job applications do not care about your laws at all. Many might be in foreign countries. They might plan on applying with stolen identities.
Making a law isn’t going to change a thing. Even if you did, what company is going to spend resources tracking down the likely fake identity of someone applying for a job just to hand it to law enforcement for them to ignore in their backlog forever?
I missed the part where I included that, or any strategy for how it would be used as a deterrent. Clearly that's not how it's done, as you pointed out, but you make it seem as if laws have no value at all, which is a rather naive take. Fraud is already illegal, FYI.
I don't have a solution, other than to make a call to the people who are elected to find those solutions, if they are able to. If they can't or won't, then at least the phone call was free.
What I don’t get is: what’s the economic incentive for this behaviour?
- I heard it's a thing to get n jobs you're not qualified for, to collect at least the first few months' salary "for free" (as an individual, or as a pawn in larger organized fraud). Not sure how common it is or how much truth there is to it, though.
I genuinely don't understand this requirement. Isn't an interview exactly that? It's a conversation pretending to be about a technical problem/question/challenge, but in reality its purpose is to find out whether you click with the person and would want to work with them. If some ChatGPT text can trick you, then your process is broken anyway, and everybody joining your company can expect colleagues selected by this sub-par process.
This is pretty unfair and seems like victim-blaming when we have companies spending billions of dollars to create these programs with the specific intent of trying to pass the Turing test.
Wouldn't you notice a lag between your question and the candidate's answer if the candidate had to type your question into ChatGPT? Or does the candidate use some software/tool which transmits your question to ChatGPT directly?