I get the call for "effort" but recently this feels like it's being used to critique the thing without engaging with it.
HN has a policy against complaining about the website itself when someone posts content within it. These kinds of complaints are starting to feel applicable to the spirit of that rule, just in their sheer number, noise, and potential to derail from something substantive. But maybe that's just me.
If you feel like the content is low effort, you can respond by not engaging with it?
Just some thoughts!
--
Because it’s not just that agents can be dangerous once they’re installed. The ecosystem that distributes their capabilities and skill registries has already become an attack surface.
^ Okay, once can happen. At least he clearly rewrote the LLM output a little.
That means a malicious “skill” is not just an OpenClaw problem. It is a distribution mechanism that can travel across any agent ecosystem that supports the same standard.
^ Uh oh...
Markdown isn’t “content” in an agent ecosystem. Markdown is an installer.
^ Oh no.
The key point is that this was not “a suspicious link.” This was a complete execution chain disguised as setup instructions.
^ At this point my eyes start bleeding.
This is the type of malware that doesn’t just “infect your computer.” It raids everything valuable on that device
^ Please make it stop.
Skills need provenance. Execution needs mediation. Permissions need to be specific, revocable, and continuously enforced, not granted once and forgotten.
^ Here's what it taught me about B2B sales.
This wasn’t an isolated case. It was a campaign.
^ This isn't just any slop. It's ultraslop.
Not a one-off malicious upload.
A deliberate strategy: use “skills” as the distribution channel, and “prerequisites” as the social engineering wrapper.
^ Not your run-of-the-mill slop, but some of the worst slop.
--
I feel kind of sorry for making you see it, as it might deprive you of enjoying future slop. But you asked for it, and I'm happy to provide.
I'm not the person you replied to, but I imagine he'd give the same examples.
Personally, I couldn't care less if you use AI to help you write. I care about it not being the type of slurry that, pre-AI, was easily avoided by staying off LinkedIn.
This is why I'm rarely fully confident when judging whether or not something was written by AI. The "It's not this. It's that" pattern is not an emergent property of LLM writing, it's straight from the training data.
One, they're rhetorical devices popular in oral speech, and are being picked up from transcripts and commercial sources, e.g., television ads or political talking-head shows.
Two, they're popular with reviewers while models are going through post training. Either because they help paper over logical gaps, or provide a stylistic gloss which feels professional in small doses.
There is no way these patterns are in normal written English in the training corpus in the same proportion as they're being output.
I think this is it. It sounds incredibly confident. It will make reviewers much more likely to accept it as "correct" or "intelligent", because they're primed to believe it, and less likely to question it.
And even if you remove all of them, it's still clearly AI.
People have hated the LinkedIn-guru style for years before AI slop became mainstream. Which is why the only people who used it were... those LinkedIn gurus. Yet now it's suddenly everywhere. No one wrote articles on topics like malware in this style.
What's so revolting about it is that it just sounds like main character syndrome turned up to 11.
> This wasn’t an isolated case. It was a campaign.
This isn't a bloody James Bond movie.
I guess I too would be exhausted if I hung on every sentence construction like that of every corporate blog post I come across. But also, I guess I am a barely literate slop enjoyer, so grain of salt and all that.
Also: as someone who doesn't use AI like this, how does slop go beyond run-of-the-mill? Like, what happened to make this one particularly bad? For something so flattening otherwise, that's kinda interesting, right?
I haven't yet used AI for anything I've ever written. I don't use AI much in general. Perhaps I just need more exposure. But your breakdown makes this particular example very clear, so thank you for that. I could see myself reaching for those literary devices, but not that many times nor as unevenly nor quite as clumsily.
It is very possible that my own writing is too AI-like, which makes it a blind spot for me? I definitely relate to https://marcusolang.substack.com/p/im-kenyan-i-dont-write-li...