Someone forwarded an enormous amount of text over Teams the other day at work. From someone (bless her) who always means well but usually averages about one spelling mistake per word and rarely goes over 20 words per message. Clearly copy-pasted ChatGPT.
For, say, the HN gang that thinks in terms of context shifts, information load, and things on THAT wavelength, the problem with that situation is obvious, but I realised then that it is not at all obvious to the general public. She genuinely seemed to think she was helping me by spending 15 seconds typing in a prompt and having me spend the next 30 minutes untangling the AI slop.
There is zero understanding of, or consensus on, acceptable practices around that sort of thing baked into societal norms right now.
[1] https://jabde.com/2026/02/02/utilizing-llms-as-a-data-decomp...
What the fuck are we even doing anymore?
Original prompt: "Please rewrite this information in a nice format for my insufferable asshole colleague".
I think this is a reasonable standard to hold; otherwise, as many have said before: send me the prompt. It's actually more interesting/better if I know a coworker is struggling to communicate about something.
So there's a clear separation: a reply from me which I stand by, and then some interesting chatbot stuff if you're into that.
I often send out the LLM version, but still check if it contains the original thoughts correctly.
It's not a bad way to extend your vocabulary & catch spelling mistakes
Please don’t do this. You probably aren’t aware of how bad this can land. It’s not just about containing your original thoughts, it’s about the verbosity, repetitiveness, and absurdity of it all.
Grammarly is a much better tool for these kinds of purposes, and it actually guides and teaches you to improve your writing along the way.
"Hey, thanks! This is a great overview, and I actually asked ChatGPT before asking here and got a lot of the same information, but what I'm really looking for is..."
They are at high risk.
Employees using ChatGPT to renegotiate their salary are showing a serious lack of cognitive awareness.
You reminded me of American colleagues that lie and say things are good when they are bad lol. Unable to be straight to the point. You're upset at the waste of time yet you thank them?
This is the root frustration spreading across workplaces everywhere. Before AI, the only way for someone to generate a design document, Jira ticket, or pull request was to invest a lot of their own time and effort into producing what you saw.
LLMs came along and erased that assumption. Now you don't know if that e-mail, that 12-page design document, the 100 or 1000 line PR, or those 10 Jira tickets were written by someone who invested a lot of their own time into producing something, or if they had their AI subscription generate something that looked plausible. You have to actually read and process the work, which takes 100 times more effort than it took them to make it.
For people in the working world who saw the workplace as a game of min-maxing their effort against the appearance of being a valuable contributor, LLMs are the perfect shortcut: They can now generate the appearance of doing a lot of work with no more than a few lines of asking an LLM to produce documents.
If anyone spends the 30 minutes to review the AI slop from their 15-second prompt, they'll copy your feedback into ChatGPT and send another document over with the fixes. Now they've even captured you into doing their work for them!
For teams or even entire companies that were relying on appearances of activity as a proxy for contributions, this is going to be a difficult transition. Every e-mail-job worker in the world just received a tool that will generate the appearance of doing their job for them, and even be plausibly correct most of the time. One person can generate volumes of design documents and Jira tickets, and even copy and paste witty responses into the company Slack, and appear to be the most engaged and dedicated employee by volume while doing less actual work than ever before.
I think teams that already had good review cultures with managers who cared about the output rather than the metrics are doing fine because anyone even a little bit engaged can spot the AI copy-and-paste employees with even a little inspection. The lazy managers who relied on skimming documents and plotting number of PRs or lines of code changed are in for a rude awakening when they discover the employees dominating their little games are the ones doing the most damage to the team.
Oh, we know. It's pretty clear in many cases.
And frankly, the best signal now is: the shorter it is, the greater the likelihood it was at least expensive for the human to produce. Put another way: a shorter thing is easier to make sense of completely, and if it's garbage, it's garbage. At least the cost borne by you was minimised!
That’s not really the point. Engineering has always operated on trust networks, not just artifacts.
Your review naturally adapts based on the level of trust you have in the author. If someone has consistently produced high-quality work, whether they used AI or not becomes mostly irrelevant.
What’s funny to me is your last paragraph. A lot of companies are so gung-ho about “AI ALL the things!” that I’m not sure as a manager if I’d get in trouble for “spotting the AI copy paste” junk. I’m supposed to make sure everyone is using AI as much as possible, after all. So, rejecting someone’s output for being low-effort AI slop and asking for a “less AI” version of it might mark me as a silly old fashioned guy who doesn’t believe in AI.
Lmgtfy was a passive-aggressive (but not really passive) way to say "hey, are you too dumb to Google this?". Sending somebody AI output feels the same to me: the message you're sending to the recipient is "here, you're obviously too dumb to ask an LLM about this yourself". Except some people don't seem to realize that's the message they're sending.
Most people know when they are doing it. If you feel the need to obscure your LLM usage, it means you didn’t put enough of your own voice and work into the final draft and you need to do something about that.
The closest acceptable thing to share is the full chat, including your prompts. If the output is useful enough to share, then the human thought process that led to the ai output is almost always more useful than the output itself.
The Nash equilibrium here is that the market has to find a way for the people producing things with LLMs to pay people to consume them, and the market always finds a way.
Firms are only going to keep paying model producers if they are getting returns in excess of the cost of financing those projects over time. If a firm does not see that happen, it reduces its spend on tokens. Simple.
It's a whole lot more nuanced than some shitty game theory.
That may be the case, but every day LLMs feel less like the next big thing and more like 3D printing: here to stay, but not nearly as ubiquitous and earth-shattering as people made it out to be.
If I had to guess right now, I would say LLMs are more significant than 3D printers, but less significant than the Internet.
If I want the LLM answer I freaking ask it myself
Sometimes I wonder if we're letting people graduate from school with no real grasp of the purpose of written communication. School strips writing of purpose and creates artificial ones, such as combining words with AI so that an AI can assign them a good grade. Even before the AI era, most human-generated text was not worth reading.