So I get the frustration that "ai;dr" captures. On the other hand, I've also seen human writing incorrectly labeled AI. I wrote (using AI!) https://seeitwritten.com as a bit of an experiment on that front. It basically is a little keylogger that records your composition of the comment, so someone can replay it and see that it was written by a human (or a very sophisticated agent!). I've found it to be a little unsettling, though, having your rewrites and false starts available for all to see, so I'm not sure if I like it.
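For the curious, the core mechanism can be tiny. Here's a toy sketch of the record-and-replay idea in Python (the structure and names are mine, not the site's actual browser-side implementation):

```python
# Toy record/replay of a composition: store timestamped edit events,
# then play them back with the original pacing, false starts included.
import time
from dataclasses import dataclass

@dataclass
class Edit:
    t: float      # seconds since composition started
    pos: int      # caret position of the edit
    insert: str   # text inserted ("" for a pure deletion)
    delete: int   # number of characters removed at pos

def replay(edits: list[Edit], speed: float = 1.0) -> str:
    """Rebuild the text, sleeping between events to mimic the writer."""
    text, last_t = "", 0.0
    for e in edits:
        time.sleep(max(0.0, (e.t - last_t) / speed))
        text = text[:e.pos] + e.insert + text[e.pos + e.delete:]
        last_t = e.t
        print(repr(text))  # every rewrite is visible to the viewer
    return text

# Example: "helo" typed, the typo fixed, then " world" appended.
events = [
    Edit(0.0, 0, "helo", 0),
    Edit(1.2, 3, "l", 0),
    Edit(2.5, 5, " world", 0),
]
replay(events, speed=5.0)  # sped-up playback
```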
Wrote about this before [0] but my 2c: you shouldn't pause and you should keep using them because fuck these companies and their AI tools. We should not give them the power to dictate how we write.
LLMs have a bias towards expertise and confidence due to the proportion of books in their training set. They also lean towards an academic writing style for the same reason.
All this to say, if LLMs write like you were already writing, it means you have very good foundations. It's fine to avoid them out of fear, but you have this Internet stranger's permission to use your em-dash pause to think, "Oh yeah, I'm the reference for writing style."
Long story short: I think emoji in headings and lists, em dashes, and the vile TED Talk paragraph structure of "long sentence with lots of words asking a question or introducing a possibility. followed by. short sentences. rebutting. or affirming." are here to stay. My money is that it gets normalized and embraced as "well of course that's how you best communicate because I see it everywhere."
You'll get over it.
Now you can ask for outlandish things at work knowing your boss won’t read it and his summariser will ignore it as slop — win.
\s
It’s literal content expansion, the opposite of gzip’ing a file.
It's like a kid with a 500-word essay due tomorrow, padding their actual message up to spec.
I agree that reading an LLM-produced essay is a waste of time and (human) attention. But in the case of overly-verbose human writing, it's the human that's wasting my time[1], and the LLM is gzip'ing the spew.
[1] Looking at you, New Yorker magazine.
And it should still be worth their while to listen if you don't suck at presenting and don't just read the text from the slides.
Anyway, it's at https://www.jimkleiber.com/p35/ if you wanna check it out, all sessions posted as blog posts, I think there's a link to the ebook (pay-what-you-want) and there may be audio (I recorded myself reading the writing right after each session).
If you check it out, please let me know :-)
The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it. (Brandolini)
Fun, I'd make playback speed something like 5x or whatever feels appropriate, I think nobody truly wants to watch those at 1x.
There are a lot of people like me in software. I’m tempted to say we are “shouted down”, but honestly it’s hard to be shouted down when you can talk circles around some people. But we are definitely in a minority. There are actually a lot of parallels between creative writing and software and a few things that are more than parallel. Like refactoring.
If you’re actually present when writing docs instead of monologuing in your head about how you hate doing “this shit”, then there’s a lot of rubber ducking that can be done while writing documentation. And while I can’t say that “let the AI do it” will wipe out 100% of this value, because the AI will document what you wrote instead of what you meant to write, I do think you will lose at least 80% of that value by skipping out on these steps.
They want all this artisanal hand-written prose by candlelight with the moon in the background. And you are a horrible person for using AI, blablabla.
But ask for feedback? And you get Inky, Blinky, Pinky, and Clyde. Aka ghosted. But boy, do they tell a good story. Just ain't fucking true.
Counter: companies deserve the same amount of time invested in their application as they spend on your response.
I've noticed that attitude a lot. Everyone thinks their own use of AI is perfectly justified while the others are generating slop. In gamedev it's especially prominent - artists think generating code is perfectly OK but have an acute stress response when someone suggests generating art assets.
[1] Code as design, essays by Jack Reeves: https://www.developerdotstar.com/mag/articles/reeves_design_...
Communication is for humans. It's our superpower. Delegating it loses all the context, all the trust-building potential of effort signals, and all the back-and-forth discussion in which ideas and bonds are formed.
from the preface of SICP.
I don’t think either is inherently bad because it’s AI, but it can definitely be bad if the AI is less good at encoding those ideas into their respective formats.
Some code I cobbled together to pass a badly written assignment at school. Other code I curated to be beautiful for my own benefit or someone else’s.
I think the better analogy in writing would be… using an LLM to draft a reply to a hawkish car dealer you’re trying to not get screwed by is absolutely fine. Using it to write a birthday card for someone you care about is terrible.
Yesterday I left a code review comment that someone asked if AI wrote it. The investigation and reasoning were 100% me. I spent over an hour chasing a nuanced timezone/DST edge case, iterating until I was sure the explanation was correct. I did use Codex CLI along the way, but as a power tool, not a ghostwriter.
The comment was good, but it was also “too polished” in a way that felt inorganic. If you know a domain well (code, art, etc.), you start to notice the tells even when the output is high quality.
Now I’m trying to keep my writing conspicuously human, even when a tool can phrase it perfectly. If it doesn’t feel human, it triggers the whole ai;dr reaction.
I would have been more okay with AI-generated code; it would likely have been more objective and less verbose. But I refused to review something he obviously hadn't put enough effort into himself to even do a POC. When I asked for his own opinion on the different solutions evaluated, he didn't have one.
It's not about the document per se, but the actual value of this verbose AI-generated slop. Code is different: even if poorly reviewed, it's still executable and likely to produce output that satisfies the functional requirements.
Our PM is now evaluating tools to generate documentation for our platform by interpreting source code. The output includes descriptions of things like what the title is and what the back button is for, but won't tell you the valid inputs for creating a new artefact. This AI-generated doc is in addition to our human-made Confluence docs, so it will likely add to the spam and reduce the quality of search results for useful information.
1. Most programmers are better than SOTA LLMs, while most artists can't match the rendering quality of a SOTA image model. Artists rightfully see image models as a way bigger threat than we see LLMs.
2. While it's true that LLMs are trained on unlicensed code and image models are trained on unlicensed art, a lot of publicly available code was essentially released under a license of "here's some code, you can use it for whatever".
3. Code is seen as a means to an end, while art is an end in itself. Few people, even among professional programmers, love programming or program recreationally (this forum obviously has disproportionately many). Most artists love the process of making art.
Also, is a hand modeled final asset built based on AI-generated concept art still "AI"?
Who cares if a bush or a tree is fully AI-generated anyway? These "no AI whatsoever in any game" people virtue-signal too much to make a fair argument for whatever they're preaching. Sure, I agree with the value of human creativity, but I also want people to be able to use whatever tools they like.
People are happy to shovel shit if they can get away with it.
In addition, I feel like there has been an overall drop in software quality along with the rise of AI driven code development. Perhaps there are other driving factors (socioeconomic, psychological, etc) and perhaps I am misattributing it to AI. Then again, could also just be all the slop.
It's worth pointing out that AI is not a monolith. It might be better at writing code than making art assets. I don't work with gaming, but I've worked with Veo 3, and I can tell you, AI is not replacing Vince Gilligan and Rhea Seehorn. That statement has nothing to do with Claude though...
No doubt, but I think there's a bit of a difference between AI generating something utilitarian vs. something expected to have at least some taste/flavor.
AI generated code may not be the best compared to what you could hand craft, along almost any axis you could suggest, but sometimes you just want to get the job done. If it works, it works, and maybe (at least sometimes) that's all the measure of success/progress you need.
Writing articles and posts is a bit different: it's not just about the content, it's about how it's expressed, whether someone bothered to make it interesting to read, and whether they put some of their own personality into it. Writing is part communication, part art, and even the utilitarian communication part works better if it keeps the reader engaged and displays good theory of mind about where the average reader may be coming from.
So, yeah, getting AI to do your grunt work programming is progress, and a post that reads like a washing machine manual can fairly be judged as slop in a context where you might have hoped for/expected better.
E.g. music artists would happily post their music with unattributed cover art. I've seen graphic artists post video slideshows with unattributed music. Authors (books, blog posts) who think cover art or header images are a necessary evil.
I was talking with a lawyer who said that AI legal drafting would never happen because legal work requires high level reasoning and quality is critical, then told me that AI written software would be fine if you just sandbox it.
Edit: I think it's true, there is some amount of slop/coasting in every field, and there's nothing wrong with wanting to avoid that. But people take that too far and decide that everything in (adjacent field) is trivial when actually many fields today are just very complex.
The context in which both the code or art is used matters more than whether or not what you're AI-generating is "art".
I no longer feel joy in reading, as most writing now seems the same and pale to me, as if everyone is putting their thoughts down in the same way.
Having your own way of writing always felt personal; it was how you expressed your feelings much of the time.
The saddest part for me is that I can no longer understand someone's true feelings (which were hard to express in writing anyway, since articulation is hard).
We see it used by our favourite sportsperson in their retirement post, by someone who has lost a loved one, by someone who just got their first job, and it's just sad that we can never have those old pre-AI days back.
However, I agree that ordinary people filtering and flattening their communication into a single style is a great loss.
So when someone wants to know something about the topic that my website is focused on, chances are it will not be the material from the website they see directly, but a summary of what the LLM learned from my website.
Ergo, if I want to get my message across, I have to write for the LLM. It's the only reader that really matters, and it is going to have its stylistic preferences (I suspect bland, corporate, factual, authoritative, and controversy-averse; this will be the new SEO).
We meatbags are not the audience.
A simple query like "Ford Focus wheel nut torque" gives pages with blah blah like:
> Overview Of Lug Nut Torque For Ford Focus
> The Ford Focus uses specific lug nut torque to keep wheels secure while allowing safe driving dynamics. Correct torque helps prevent rotor distortion, brake heat transfer issues, and wheel detachment. While exact values can vary by model year, wheel size, and nut type, applying the proper torque is essential for all Ford Focus owners.
And the site probably has this text for each car model.
Somehow the ad industry found very varied ways to destroy the Internet...
Then I tried it again with Google and DDG and all three gave me the exact same page as their top result: https://www.puretyre.co.uk/tyre-information/tyre-pressures/f...
Well, we're here. HN manages to call me back to the community nearly every day with articles I would not have otherwise found by search or AI.
No doubt web search feels inadequate next to chat-initiated search. But the results of chat can lead to websites and books and all manner of media consumed by the ChatAi. The pathways to find those pages are still there.
What I think you're talking about are business statistics in the consumer industry. Maybe you're thinking of a person who drinks Bud Lite and eats Lays potato chips (highly ranked sales) and envisioning that they'll use ChatAi apps and never use a search engine again. If that's the demographic you want, then writing a website is prolly not going to reach them anyway. If you want to reach that audience, then you need to become part of that mass consumer media ecosystem. You need to create/invent virality, the one shared element consumer society revolves around and industry can't get enough of.
And I know it's different, but I'm surprised the overall sentiment is so pessimistic on HN. So maybe we will communicate through yet another black box on top of the hundreds of existing ones, but probably mostly when seeking specific information and wanting to get it efficiently. Yes, this one is different; it makes human contact over text much more difficult. But a big part of all of this was already happening for years, and now it's just widely available.
When posting on HN you don't see the other person typing, like you would with the talk command on Unix, but it is still meaningful.
Ideally we would like to preserve what we have untouched and only have new stuff as an option, but it has never been like that. Did we all enjoy Win 3.11? I mean, it was interesting... but clicking... so inefficient (and of course there are tons of people who will likely scream from their GUIs that it still is and Windows sucks; I'd gladly join them, but we have our keyboard bindings and other operating systems, and we get by somehow).
See it as code review, reflection, getting a birds eye view.
When I document my code, I often stop midway and think: that implementation detail doesn't make sense / is overly convoluted / can be simplified / seems to be lacking a sanity check, etc.
There is also the art of subtly injecting humor into it, e.g. via the code examples.
Absolutely disagree. A lot of the best docs I've read feel more personal, and have little extra touches like telling the reader which sections to skip or to spend more time in depending on what your background is.
Formatting and layout matters too. Docs sites with messy navigation and sidenotes all over the place might be "easy to read" if you can focus on only looking at one thing, but when you try to read the whole thing, you just get a bunch of extra noise that could've been left out.
Mind you, this person is an excellent writer; they had great success with ghostwriting and running a small news website where they wrote and curated articles. But for some reason the opportunity to have Claude write the stuff they could never find time for is too great for them to ignore.
I don't care if you used AI for 99.99% of the research behind your content, but when I read your content it should be written by you. It's why I never took any article on LinkedIn seriously, even before AI; they all lack any personalization.
I do because you should be manually verifying anything you get out of the stochastic parrot before you present it to others - and that's going to be more than 0.01% of the work.
Doesn't ai;dr kind of contradict AI-generated documentation? If I want to know what Claude thinks about your code, I can just ask it. IMO documentation is the least amenable thing to AI. As the article itself says, I want to read some intention and see how you shape whatever you're documenting.
(AI adding tests seems like a good use, not sure what's meant by scaffolding)
> Why should I bother to read something someone else couldn't be bothered to write?
and
> I can't imagine writing code by myself again, especially documentation, tests and most scaffolding.
So they expect nobody to read their documentation.
Yes, exactly. Because AI will read it and learn from it, it's not for humans.
I'll want to communicate something to my team. I'll write 4 bullet points, plug it into an LLM, which will produce a flowing, multi paragraph e-mail. I'll distribute it to my co-workers. They will each open the e-mail, see the size, and immediately plug it into an LLM asking it to make a 4 bullet summary of what I've sent. Somewhere off in the distance a lake will dry up.
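A minimal sketch of that round trip, assuming the OpenAI Python SDK (the model name and prompts are placeholders), just to make the waste concrete:

```python
# Four bullets -> LLM inflation -> long email -> LLM deflation -> four
# bullets again. Every token in between is paid for and read by no one.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

bullets = (
    "- deploy Friday\n"
    "- freeze main on Thursday\n"
    "- update the runbook\n"
    "- ping on-call before starting"
)

email = ask("Expand these notes into a polished multi-paragraph email:\n" + bullets)
summary = ask("Summarize this email as exactly 4 bullet points:\n" + email)

print(summary)  # with luck, roughly the bullets we started with
```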
All while both sides were charged per token for processing. This is _the dream_ of these AI firms.
I believe it's already in place, making the internet a bit more wasteful.
a large part of the business models of these systems is going to consist of dealing with these systems... it's a wonderful scheme
This is the root cause of the problem: labeling all things as just "content". "Content" entering the lexicon marks a mind shift in people. People are not looking for information, or art, just content. If all you want is content, then AI is acceptable. If you want art, it falls short.
> Why should I bother to read something someone else couldn't be bothered to write?
Interesting mix of sentiments. Is this code you're generating primarily as part of a solo operation? If not, how do coworkers/code reviewers feel about it?
It's a problem to use a blender to polish your jewelry. However, it's perfectly alright to use a blender to make a smoothie. It's not cognitive dissonance to write a blog post imploring people to stop polishing jewelry using a blender while also making a daily smoothie using the same tool.
I cry every time somebody tries to frame it one dimensionally.
I can take the other person's prompt and run it through an LLM myself and proceed from there.
I don't have any solutions though. Sometimes I don't call out an article - like the Hashline post today - because it genuinely contains some interesting content. There is no doubt in my mind that I would have greatly preferred the post if it were just whatever the author prompted the LLM with rather than the LLM output; it would have better communicated their thoughts to me. But it also would have died on /new and I never would have seen it.
For me too, and for writing it has the upside that it's sooo relaxing to just type away and not worry much about the small errors anymore.
Shouldn’t we bother to write these things?
A blog post is for communicating (primarily, these days) to humans.
They’re not the same audience (yet).
> I can't imagine writing code by myself again
After that, you say that you need to know the intention for "content".
I think it's pretty inconsistent. You have a strict rule in one direction for code and a strict rule in the opposite direction for "content".
I don't think that writing code unassisted should be taken for granted. Addy Osmani covered that in this talk: https://www.youtube.com/watch?v=FoXHScf1mjA I also don't think all "content" is the sort of content where you need to know the intention. I'll grant that some of it is, for sure.
Edit: I do like intentional writing. However, when AI is generating something high quality, it often seems like it has developed an intention for what it's building, whether one that was conceived and communicated clearly by the person working with the AI or one that emerged unexpectedly through the interaction. And this applies not just to prose but to code.
AI bloats text and every other task it does into convoluted redundant cliches. This is true for text and code. Whether it was written by an AI or not, it's not worth my time. If you wrote it 100% by hand and it still sounds like AI, it's still bad writing and still not worth my time.
I know it’s just modern writing style to preempt all responses. But can’t you just plainly state your business without professing your appreciation?
People who waste others' time with bullshit are aholes. I don't care if it's My Great Friend And Partner in Crime, Anthropic's LLM, or a tedious template written in PHP with just enough substitutions and variations to waste five sentences on it before closing it.
Actually, saying that it’s the same thing is a bit like saying “guns don’t shoot people”. At least you had to copy-paste that PHP template from somewhere and adapt it to your spam. Back in the day.
I don't understand how they can think it's a good idea; I instantly classify them as lazy and inauthentic. I'd rather get texts full of mistakes coming straight out of their head than this slop.
Also, I know a lot of non-native English speakers who use AI tools to "correct things". Because of the language barrier, these people especially are less likely to ever recognize the specific LLM tone that precipitates out.
If someone wants me to read a giant text generated from a small, poor prompt, I don't wanna read it.
If someone wants to fix that by putting in more effort and writing a better prompt that expresses the ideas better, I'd rather read that prompt than the LLM output.
Chicken.
Seriously, the degree to which supposed engineering professionals have jumped on a tool that lets them outsource their work and their thinking to a bot astounds me. Have they no shame?
Personally I find it super helpful to discuss stuff back and forth: It takes a view, explores the code and brings some insight. I take a view and steer the analysis. And together we arrive at a conclusion.
By that point the AI’s got so much context it typically does a great job summarising the thought process for wider discussion so I can tweak and polish and share.
I am the first person to respect craft in many domains, and will continue to do so.
I respect it when an actor does their own stunts or when directors choose not to use CGI.
But I will still watch the Matrix and think "holy shit that was cool".
It's all about the quality of the output.
If you care about your voice, don't let LLMs write your words. But that doesn't mean you can't use AI to think, critique and draft lots of words for you. It depends on what purpose you're writing it for. If you're writing an impersonal document, like a design document, briefing, etc then who cares. In some cases you already have to write them in a voice that is not your own. Go ahead and write these in AI. But if you're trying to say something more personal then the words should be your own, AI will always try to 'smooth' out your voice, and if you care about it, you gotta write it yourself.
Now, how do you use AI effectively and still retain your voice? Here's one technique that works well: start with a voice memo. Record yourself, maybe during a walk, talking about the subject you want, free form; skip around, jump between sentences, just get it all out of your brain.

Then open up a chat, add the recording or transcript, clearly state your intent in one sentence, and ask the AI to consider your thoughts and your intent and to ask clarifying questions. What does the AI not understand about how your thoughts support the clearly stated intent of what you want to say? That'll produce a first draft, which will be bad.

Then tell the AI all the things that don't make sense to you, that you don't like; comment on the whole doc, and get a second draft. Ask the AI if it has more questions for you; you can use live chat to make this conversation go smoother, answering freely by voice when the AI asks. Repeat this one or two more times, and a much finer draft will take shape that is closer to what you want to say. During this drafting stage, the AI will always try to smooth or average out your ideas, so it is important to keep pointing out all the ways in which it is wrong.
This process helps by making more of the thinking up-front. Once you've read and critiqued several drafts, all your ideas will be much clearer and sort of 'cached', ready to be used in your head. Then sit down and write your own words from scratch; they will come much more easily after all your thoughts have been exercised during the drafting process.
> I need to know there was intention behind it. [...] That someone needed to articulate the chaos in their head, and wrestle it into shape.
If forced to choose, I'd rather use coherence as evidence of care than as a refutation of humanity.
I haven't even really tried to use LLMs to write anything from a work context because of the ideas you talk about here.
IMO it’s lazy and bad for expressive writing, but for certain things it’s totally fine.
This is an easy but not very insightful framing.
I want to read intelligent, thoughtful text that is useful in some way: to me, to society, to humanity. Ceteris paribus, the source of the information does not necessarily matter; it only matters as a matter of association. To put it another way, “human” vs “machine” is not the core driving factor for me.
All other things equal, I would rather read A than B:
A. high quality AI content, even if it is “only” the result of 6 minutes of human question framing and light editing [1]
B. low quality purely human content, even if it was the result of 60 minutes of effort.
It is increasingly difficult to distinguish "human" writing from "AI" writing. Some people fool themselves about their AI-detection prowess.
To be direct: I want meaningful and satisfying lives for humans. If we want to reward humans for writing more, we better reflect on why, and if we still really want that, we better find ways that work. I don’t think “buy local” as a PR campaign will be easily transferred to a “read human” movement.
[1]: Of course AI training data is drawn from humans, so I do not discount the human factor. My point is that quantifying the effort put into it is not simple.
These blanket binary takes are tiresome. There is nuance and rough edges.
In that sense, they are essentially systems that mimic online content.
Therefore, what an AI generates often reflects the perspectives of the people who originally created the training data, rather than the true thoughts of the person prompting it.
Because writing is a dirty, scratched window with liquid between the frames and an LLM can be the microfiber cloth and degreaser that makes it just a bit clearer.
Outsourcing thinking is bad. Using an LLM to assist in communicating thought is or at least can be good.
The real problem I think the author has here is that it can be difficult to tell the difference, and therefore difficult to judge whether it is worth your time. However, I think author/publisher reputation is a far better signal than looking for AI tells.
If you use an LLM to generate the ideas and justification and formatting and etc etc, you're just delegating your part in the convo to a bot.
Homogenization is good for milk, but not for writing.
I keep seeing this and I don't think I agree. We outsource thinking every day. Companies do this every day. I don't study the weather myself; I check an app and bring an umbrella if it says it's gonna rain. My team trusts each other to do some thinking in their own areas and present bits sideways/upwards. We delegate lots of things. We collaborate on lots of things.
What needs to be clear is who owns what. I never send something I wouldn't stand by. Not in a correctness sense (I have, am and likely will be wrong on any number of things) but more in a "yeah, that is my output, and I stand by it now" kind of way. Tomorrow it might change.
Also remember that google quip "it's hard to edit an empty file". We have always used tools to help us. From scripts saved here and there, to shortcuts, to macros, IDE setups, extensions and so on. We "think once" and then try not to "think" on every little detail. We'd go nowhere with that approach.
There's a strong overlap between the things which are bad (unwise, reckless, unethical, fraudulent, etc.) in both cases.
> We outsource thinking everyday. [...] What needs to be clear is who owns what.
Also once you have clarity, there's another layer where some owning/approval/delegation is not permissible.
For example, a student ordering "make me a 3 page report on the Renaissance." Whether the order went to another human or an LLM, it is still cheating, and that wouldn't change even if they carefully reviewed it and gave it a stamp of careful approval.
However, if I had an idea and just fobbed the idea off to an LLM who fleshed it out and posted it to my blog, would you want to read the result? Do you want to argue against that idea if I never even put any thought into it and maybe don’t even care?
I’m like you in this regard. If I used an LLM to write something I still “own” the publishing of that thing. However, not everyone is like this.
I think using AI for writing feedback is fine, but if you're going to have it write for you, don't call it your writing.
Example (minus the final review): https://chatgpt.com/share/698e417a-4448-8011-9c29-12c9b91318...
I still think that the final review written by ChatGPT is a bit off. But at least, it asked mostly the right questions.
How can we tell that this wasn't written by an LLM?
At this point, I'm not sure whether you're a clawdbot running amok..
Like always we have to lean on evaluating based on quality. You can produce quality using an LLM, but it's much easier to produce slop, which is why there's so much of it now.
Also, for models that supported it, you could long use "logit_bias" in the API to ban the em dash, ban the word "not", ban semicolons, and ban the "fancy quotes" that were clearly added by "those who need to watch" to make sure they can clearly figure out whether you used an LLM or not.
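A minimal sketch of that trick, assuming the OpenAI Python SDK and tiktoken (the model name is a placeholder; see the caveat in the comments):

```python
# Ban "LLM tells" via logit_bias: a bias of -100 effectively forbids a
# token. Caveat: a banned string hiding inside a longer multi-character
# token will still slip through, and words need their common token forms
# (leading space, capitalized) banned separately.
import tiktoken
from openai import OpenAI

MODEL = "gpt-4o-mini"  # placeholder; any chat model with logit_bias support
client = OpenAI()
enc = tiktoken.encoding_for_model(MODEL)

# em dash, en dash, semicolon, curly quotes, and forms of "not"
banned = ["\u2014", "\u2013", ";", "\u201c", "\u201d",
          "not", " not", "Not", " Not"]
bias = {str(tok): -100 for s in banned for tok in enc.encode(s)}

resp = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Describe your writing style."}],
    logit_bias=bias,
)
print(resp.choices[0].message.content)
```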
ai;dr is what I'm going to start saying, it's just frustrating to see.
But of course, like producing code with AI, it's very easy to produce cheap slop with it if you don't put in the time. And, unlike code, the recipient of your work will be reading it word by word and line by line, so you can't just write tests and make sure "it works" - it has to pass the meaningfulness test.
https://www.thenewatlantis.com/publications/one-to-zero
Semantic information, you see, obeys a contrary calculus to that of physical bits. As it increases in determinacy, so its syntactical form increases in indeterminacy; the more exact and intentionally informed semantic information is, the more aperiodic and syntactically random its physical transmission becomes, and the more it eludes compression. I mean, the text of Anna Karenina is, from a purely quantitative vantage of its alphabetic sequences, utterly random; no algorithm could possibly be generated — at least, none that’s conceivable — that could reproduce it. And yet, at the semantic level, the richness and determinacy of the content of the book increases with each aperiodic arrangement of letters and words into coherent meaning.
Edit/add-on: In other words, it is impossible for an LLM (or monkeys at keyboards [0]) to recreate Tolstoy, because of the unique role our minds play in writing. The verb "writing" hardly seems to apply to an LLM when we consider the function it is actually performing.
And you're wrong for suggesting that's the first use of ai;dr and further assuming that the author "stole" it from that post. https://rollenspiel.social/@holothuroid/113078030925958957 - September 4, 2024
Conclusion:
Dismissing arguments solely because they are AI-generated constitutes a class of genetic fallacy, which should be called 'Argumentum ad machina'.
Premises:
1. The validity of a logical argument is determined by the truth of its premises and the soundness of its inferences, not by the identity of the entity presenting it.
2. Dismissing an argument based on its source rather than its content constitutes a genetic fallacy.
3. The phrase 'that's AI-generated' functions as a dismissal based on source rather than content.
Assumptions:
1. AI-generated arguments can have true premises and sound inferences
2. The genetic fallacy is a legitimate logical error to avoid
3. Source-based dismissals are categorically inappropriate in logical evaluation
4. AI should be treated as equivalent to any other source when evaluating arguments
It's true that AI-generated arguments can have true premises and sound inferences, some of the time. But the models are still too likely to hallucinate. Let's say at some distant future date the hallucination rate gets down to just 10%, so it's 90% likely that the argument is well constructed. (Which I personally doubt LLMs will ever be capable of as long as they are still statistically-based; I think it will take a model that is based on facts and logical reasoning, rather than on the probability of the next word being "the" or "argument" or "premise", before LLMs will be reliably able to produce logical reasoning that follows actual logic).
But here's the thing. When I'm reading an article, I'm not looking for "is this 90% likely to be true?" I'm looking for 100%. If a source has a 10% chance of being wrong, I'm going to skip reading that source in favor of a source that has a 0% chance of being wrong, or if that's not possible then a 1% chance. Yes, that's a logical fallacy... if my goal was proving the argument wrong. But my goal is different. My goal is finding reliable information as quickly as I can. And to that end, the genetic fallacy is actually useful to apply. Not as a "it's written by AI so it's wrong" argument — that would be fallacious indeed — but "it's written by AI so I'm not going to spend time on it, I'll skip to another article that is less likely to contain hallucinations" is an actually useful metric to apply.
I've had one too many cases where I asked an LLM, "Can product XYZ do ABC?", it confidently told me "Yes, you can do ABC with XYZ and here's how to do it," then I looked at the actual documentation for XYZ and it specifically said "we can't do ABC; at some future point we plan to add it, and then you will be able to do this: (example code)". And that example code was what the LLM spat out to me saying "Yes, you can do ABC" when the truth was the opposite.
The maxim falsus in uno, falsus in omnibus doesn't really apply to LLMs, because they don't have a moral component to them. It applies to people, because someone whose ethics forbid them to lie is reliable, but someone who is willing to lie about one thing is very likely to be willing to lie about other things, and is therefore unreliable as a source of information. LLMs don't have a sense of morality, and in fact when they hallucinate they're not lying, per se, since lying requires knowing the truth and willingly saying the opposite (as opposed to being mistaken, where you think you're telling the truth even though you're speaking an objective falsehood). LLMs don't know the truth, that's just not a concept programmed into them, so they're not lying, and their willingness to "lie" once does not prove a moral defect. But the fact that they do hallucinate a measurable percentage of the time makes them just as unreliable a source of information as a person who is willing to lie.
So while I do agree that AI-generated arguments can be logically correct, it is not guaranteed that they will be correct. And while it would be fallacious to say "AI-generated, therefore false", it is still useful to say "AI-generated, therefore unreliable and I'll seek out a different source of information".
Write it first, quick self edit, then have an LLM edit. Then I edit again. It's most definitely my voice, and I love it.
You don't; you feed it to an LLM and ask it to read it for you.
This! This is my feeling exactly. I wrote about encountering work slop last year: https://lambdaland.org/posts/2025-08-04_artifical_inanity/
https://noonker.github.io/posts/2024-07-25-i-respect-our-sha...
What I dislike about reading AI writing is that it's dumb but sounds smart. If it were smart I wouldn't mind reading it. Here's an example. It's always full of metaphors that make no sense, like a closet so full of junk that it topples out as soon as you open the door. (see what I mean? the metaphor has a superficial resemblance to the topic at hand but doesn't clarify the subject at all and therefore muddies the waters as you try to understand what I might have been intending to communicate with it.)
> ..and call me an AI luddite
Oh please do call me an AI luddite. It's an honor for me.
But if the post was generated through a long process of back-and-forth with the model, where significant modifications/additions were made by a human? I don't think there's anything wrong with that.
I do think there's a great deal wrong with that, and I won't read it at all.
Human can speak unto human unless there's language barrier. I am not interested in anyone's mechanically-recovered verbiage, no matter how much they massaged it.
I think it's the size of the audience that the AI-generated content is for, is what makes the difference. AI code is generally for a small team (often one person), and AI prose for one person (email) or a team (internal doc) is often fine as it's hopefully intentional and tailored. But what's even the point for AI content (prose or code) for a wide audience? If you can just give me the prompt and I can generate it myself, there's no value there.
This take is baffling to me when I see it repeated. It's like asking why anyone should use Windows if Bill Gates didn't write every line of it himself, since we won't be able to see into Bill's mind. Why should you read a book if the author "couldn't be bothered" to write it properly and had an editor come in and fix things?
The main purpose of a creative work is not seeing intimately into the creator's mind. And the idea that it is only people who don't care who use LLMs is wrong.
What? It’s nothing like that, at all. I don’t know that Gates has claimed to have written even a single line of Windows code. I’m not asking for the perfect analogy, but the analogy has to have some tie to reality or it’s not an analogy at all. I’m only half-joking when I wonder if an AI wrote this comment.