So when people say they never get a good output, it's because they're trying to go from
thought > article
instead of
thought > exploration > direction > structure > outline > article
I record myself rambling out loud and import the audio into NotebookLM.
Then I use this system prompt in NotebookLM chat:
> Write in my style, with my voice, in first person. Answer questions in my own words, using quotes from my recordings. You can combine multiple quotes. Edit the quotes for length and clarity. Fix speech disfluencies and remove fillers. Do not put quotation marks around the quotes. Do not use an ellipsis to indicate omitted words in quotes.
Then chat with "yourself." The replies will match your style and will be source-grounded. In fact, the replies automatically get footnotes pointing to specific quotes in your raw transcripts.
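If you want to approximate this outside NotebookLM, here's a minimal sketch with a generic chat API (the OpenAI Python client; the model name, the transcripts-as-text-files layout, and the example question are all my assumptions, not part of the NotebookLM workflow):

```python
# Rough approximation of the "chat with yourself" setup using a generic chat API.
# Assumptions: your rambling transcripts already exist as plain-text files in
# ./transcripts, and the OpenAI Python client (>=1.0) is installed.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Concatenate the transcripts so the model can quote from them.
transcripts = "\n\n".join(p.read_text() for p in Path("transcripts").glob("*.txt"))

system_prompt = (
    "Write in my style, with my voice, in first person. "
    "Answer questions in my own words, using quotes from my recordings. "
    "You can combine multiple quotes. Edit the quotes for length and clarity. "
    "Fix speech disfluencies and remove fillers. "
    "Do not put quotation marks around the quotes. "
    "Do not use an ellipsis to indicate omitted words in quotes.\n\n"
    "My recordings:\n" + transcripts
)

reply = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model works here
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "What am I actually trying to say in this piece?"},
    ],
)
print(reply.choices[0].message.content)
```

You lose NotebookLM's automatic footnotes back to the raw transcripts this way, but the source-grounding idea is the same.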
I also like brainstorming by generating Audio Overviews, Slide Decks, and Reports in NotebookLM. The Audio Overviews don't sound like AI writing. The Slide Decks and Reports do sound like AI writing, if you use the defaults, but you can use custom prompts.
This workflow may not save me time, but it helps me get started, or get unstuck. It helps me stop procrastinating and manage my emotions. I consider it assistive technology for ADHD.
My initial reaction to the first half of this sentence was "Uhh, no?", but then I realized it's on Substack, so it's probably more typical for that particular type of writer (writing to post, not writing to be read). I don't even let it write documentation or other technical things anymore because it kept getting small details wrong or subtly injecting meaning that isn't there.
The main problem for me isn't even the eye-roll-inducing phrases from the article (though they don't help), it's that LLMs tend to subtly but meaningfully alter content, so the effect of the text ends up (at best slightly) misaligned with the effect I intended. It's sort of an uncanny valley for text.
Along with the problems above, manual writing also serves as a sort of "proof of work" that establishes an article's credibility and meaning: if you didn't bother taking the time to write it, why should I spend my time reading it?
I'm sure it's great for pumping out SEO corporate blogposts. How many articles are already out there on the "hidden costs of micromanagement", to take an example from this post, and how many people actually read them? For original writing, if you don't have enough to say or can't be bothered to put your thoughts into coherent language, that's not something AI can truly help with, in my experience. The result will be vague, wordy, and inconsistent. No amount of patching over, the kind of "deslopification" this post proposes, will salvage something that minimal work has been put into.
The elephant in the room is that AI is allowing developers who previously half-assed their work to now quarter-ass it.
I think many of the criticisms of LLMs come from shallow use of them. People just say "write some documentation" and then aren't happy with the result. But in many cases, you can fix the things you don't like with more precise prompting. You can also iterate a few rounds to improve the output instead of just accepting the first answer. I'm not saying LLMs are flawless, just that there's a middle ground between "the documentation it produced was terrible" and "the documentation it produced was exactly how I would have written it".
Honestly, I just don't read the documentation three of my coworkers put out anymore (33% of my team). I already spend way too much time fixing the small coding issues I find in their PRs to also read their tests and docs. It's not their fault: some of them are pretty new, the others always took time to understand stuff, and their output was always below average in quality in general (their people/soft skills are great, and they have other qualities that balance the team).
Most people drop a one-line prompt like "write amazing article on climate change. make no mistakes" and wonder why the result is unreadable.
Just like writing manually, it's an iterative process, and you're not gonna get it right the first, second, or third time. But over time you'll get a feel for how the model thinks.
The irony is that people talk about being lazy for using LLMs but they're too lazy to even write a detailed prompt.
That's without even mentioning the personal benefits of distilling notes, structuring, and writing things yourself, benefits you get even if nobody ever reads what you write.
Reminds me of a quote from St. Augustine's autobiography, "Confessions":
"I have known many men who wished to deceive, but none who wished to be deceived."
What would you say are the top 2 red flags missing from the piece? Would love to know
This makes me sick.
And even if the style is (or isn't) LLM-ish, that tells you nothing about whether the (even filtered) content makes sense, is correct, or is BS.
Style does matter, sure...
https://hbr.org/1982/05/what-do-you-mean-you-dont-like-my-st...
Getting the writing from your model then following up with “here’s what you wrote, here’re some samples of how I wrote, can you redo that to match?” makes its writing much less slop-y.
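As a rough sketch of that follow-up step (same assumptions as before: the OpenAI Python client, a placeholder model name, and hypothetical `draft.txt` / `my_samples.txt` files, none of which the parent comment specifies):

```python
# Second-pass "make it sound like me" step: hand the model its own draft
# plus samples of your writing and ask it to redo the draft in that voice.
# Assumptions: draft.txt holds the model's first draft, my_samples.txt holds
# a few paragraphs you actually wrote yourself.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

draft = Path("draft.txt").read_text()
samples = Path("my_samples.txt").read_text()

followup = (
    "Here's what you wrote:\n\n" + draft +
    "\n\nHere are some samples of how I write:\n\n" + samples +
    "\n\nCan you redo your draft to match my samples? Keep the content, "
    "change only the voice."
)

redo = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable chat model
    messages=[{"role": "user", "content": followup}],
)
print(redo.choices[0].message.content)
```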
AI can copy 90% of your tone of voice but still use em dashes and corrective antithesis.
Ideally you'll have both /deslop and /soundlikeme (coming soon)