> [...]
> I've started asking clients a simple question when they bring it up. Not to be difficult, just to understand.
> [...]
> It's not about utility. It's not even really about the chatbot. It's about visibility, the fear of looking behind.
> [...]
> No pop-ups. No blinking corners. Just content, clear and immediate.
It’s been long enough that this might even have plausibly come from a human with LLM writing overrepresented in their brain rather than an LLM. But either way there’s this record-scratch feeling that I experience on each one of these, and (fittingly) it just completely knocks me out of the groove, requiring deliberate effort to resume reading.
And, I mean, none of these is even bad in isolation, but it sure feels like we’re due either a backlash where these patterns become underused even when appropriate, or them becoming so common they lose their power (is syntax subject to semantic bleaching?). Or perhaps both. Sociolinguists are going to have a blast.
In this particular case the linked article is definitely AI generated.
So if you write in a way that engages the reader, you’re going to struggle not to use em dashes and the occasional a/b contrast, because those challenge the reader to engage… but when overused, they not only lose their intended effect (to break the reader out of passivity), they also constitute a new kind of sin.
So no, don’t “trust your gut”. Trust the math. Is it too much? Or is it just trying to jar you out of not engaging with the prose?
But yeah, I’d say this article is likely written primarily with AI. Which doesn’t mean it’s not guided with intention and potentially important, it just means the article was probably commissioned and edited by a human, not written by one.
Every time I see this claim, I ask for links to those blog posts. I have yet to get any links to the so-called "human" pattern that AI uses.
Anyway, it's really hard to push through, and I need to remind myself to judge the text by its meaning. But if it's some random blog, my "tolerance" is lower and I don't want to spend my time reading nonsense; I just can't stand the writing style anymore either.
They maintain such a consistent paragraph length that they're either a professional copyeditor or, as is clearly the case, are an LLM.
Humans deviate a lot more than this: they use run-on sentences or lose the thread in their writing.
This blog, however, reads like every other post on LinkedIn: semi-professional tone, with a strong "you, me" hook to most posts.
I encourage everyone to make an LLM-generated blog (don't post the articles anywhere, just generate one) to get a feeling for how these things write.
Because this is unmistakably LLM. I'd even go so far as to identify the model of these particular posts as ChatGPT.
Yet when we point this out, we're told it is "unmistakably human" and that we're rude for pointing it out.
https://adele.pages.casa/md/blog/the-joy-of-a-simple-life-wi...
The thing is, by now it doesn’t actually matter if AI or not AI or partly AI or whatever, because the record scratch is still there and still breaks my immersion. I could be oversensitive (I definitely am to some other English-language things, and also feel that others are to yet other things like em dashes), but it feels like there’s a new language/social-signalling thing now, and you may have to avoid it even if you’re not an LLM.
The art of essay-writing seems to not be something people here care about any more. If a human didn't bother to write it, why should I bother to read it?! Just post up the bullet points you would feed the LLM, and let the people who want to do so, post it into their own LLMs so they can make the Content and shovel it into their eyeballs by themselves, instead.
There's no basis for that. The reason experts - for example, scientists in their own field - use objective fact is because reasoning like the parent is highly unreliable. What evidence shows is that people way overestimate their own intuition. It's not 'courage', it's foolishness.
There've been stylistic fads before LLMs, with results just as chalkboard-screech-inducing as the current one. That this one is just a button-push away does make it worse, though, because it proliferates so greedily.
Bad writing is bad writing, and writing like an LLM is writing like an LLM. We should be able to call this out. In fact, calling out the human responsibility in it is the very opposite of dehumanizing to me.
Sure, call the style bad or even similar to LLMs, but there's no reason to believe the style came from LLMs. It existed before and people who used it before still exist and still use it now.
Hell, this person seems to be a web(site) developer, and that's a very marketing-speak-heavy field. It's far more likely that's where they "caught" this style. It happened to me too, back when I was still in it.
The whole corpus is in there, but the standard style is what the model is tuned for.
And the people I read were better at not slipping in unnecessary, random, completely made-up facts or illogical implications.
As for being dehumanizing, perhaps I did commit the sin of psychoanalysis at a distance here, but I’ve felt enough loose wires sticking out of my brain’s own language production apparatus that I don’t think pointing out the mechanistic aspects reduces anyone’s humanity.
For instance, nobody can edit their own writing until they forget what’s in it—that’s why any publishing pipeline needs editors, and preferably two layers of them, because the first one, who edits for style and grammar, consequently becomes incapable of spotting their own mechanical mistakes like typos, transposed or merged words, etc. Ever spotted a bug in a code-review tool that you’ve read and overlooked a dozen times in your editor? Why does a change in font or UI cause a presumably rational human being to become capable of drawing logical inferences they were not before? In either case, there seems to be a conclusion cache of sorts that we can’t flush and can’t disable, requiring these sorts of actually quite expensive hacks. I don’t think this makes us any less human, and it pays to be aware of your own imperfections. (Don’t merge your copy- and line editors into a single position, please?..)
As for syntactic patterns, I’ve quite often thought of a slick way to phrase things and then realized that I’d used it three times in as many sentences. On some occasions I’ve needed to literally grep every linking word in my writing to make sure I haven’t used a single specific one five times in a row. If you pay attention during meetings or presentations, you’ll notice that speakers (including me!) will very often reuse the question’s phrasing word for word regardless of how well it fits, without being aware of it in the slightest. (I’m now wondering if lawyers and witnesses train to avoid this.) Language production is stupidly taxing on the brain (or so I’ve heard), so the brain will absolutely take every possible shortcut whether we want it to or not.
Thus I expect that the priming effect I’m alleging can be very real even before getting into equally real intangibles like “taste”. I don’t think it dehumanizes anyone; you could say it dehumanizes everyone equally instead, but my point of view is that being aware of these mechanical realities of the mind is essential to competent writing (or thinking, or problem solving) in the same way that being aware of mechanical realities of the body is essential to competent dancing (or fighting, or doing sports). A bit of innocence lost is a fair trade for the wisdom gained.
(Not that I claim to be a particularly good writer.)
The op is a blog post. You’re talking about blog post writing. Maybe you just don’t like their style?
It’s also true that LLM second drafts are a thing.
And it’s true both can ‘record scratch’ you right out of attention.
As well as the now-present trend among readers to be impatient and quickly bored.
And this criticism of writing style (my take is that this article is perfectly readable): what is the aim? A call for writers to perform some kind of disclosure? Because without a goal, it sounds like complaining that you don’t like the soup.
> It's not about utility. It's not even really about the chatbot. It's about novelty of talking to a machine
Which of course doesn't connect to the rest of the article's contents, because the AI doesn't have any intention in its writing.