Earlier today I remembered a Supreme Court case I'd heard about 35 years ago that was relevant to an ongoing HN discussion, but I could not remember the name of the case, nor could I find it by Googling (Google kept finding later cases involving similar issues that were not relevant to what I was looking for).
I asked Perplexity, and given my recollection and when I heard about the case, it suggested a candidate and gave a summary. The summary matched my recollection, and a quick look at the decision itself verified that it had found the right case and done a good job summarizing it--probably better than I would have done.
I posted a cite to the case and a link to the decision. I normally would have also linked to the Wikipedia article on the case, since those usually have a good summary, but there was no Wikipedia article for this one.
I thought of pasting in Perplexity's summary, saying it was from Perplexity but that I had checked it and it was a good summary.
Would that be OK or would that count as an AI written comment?
I have also considered, but not yet actually tried, running some of my comments through an AI for suggested improvements. I've noticed I have a tendency to do three things that I probably should do less of:
1. Run-on sentences. (Maybe that's why, of all the people in the 11th-100th spots on the karma list, I have the highest ratio of words/karma, at 42+ words per karma point [1].)
2. Use too many commas.
3. Write "server" when I mean "serve". I think I add "r" to some other words ending in "e" too.
I was thinking those would be the kind of thing an AI might be good at catching and suggesting minimal fixes for.
If you have domain familiarity with it, have some personal insight to offer a lens through, or care about the topic deeply enough to write a summary yourself, then go ahead! I almost never post about AI given my loathing of generative ML, but I posted a critical summary in a recent “underlying shared structure” post because it was a truly exciting mathematical insight and the paper made that difficult to see for some people.
Please don’t use AI to reduce the distinctiveness of your writing style. Run-on sentences are how humans speak to each other. Excess commas are only excess when you consider neurotypicals. I’m learning French and I have already started to fuck up some English spelling because of it. None of that matters in the grand scheme of things. Just add -er suffix checks to your mental proofreading list and move on with being you.
What I do is copy the URLs for reference, and summarize the issue myself in as few sentences as possible. Anyone who wants to learn more can follow the reference.
Who cares about people with reading disabilities, let's shift burden onto the reader. My time is better spent managing my Ais.
Or the reader's AI who is able to format or translate the text to make it easier to read for the reader.
Pasting a chatGPT response into a comment, and labeling it as such, feels the same to me.
It is more, not less, insulting than trying to pass an AI response off as your own.
> Would that be OK or would that count as an AI written comment?
The rule seems written to answer this directly.
Absolutely nobody cares what Perplexity has to say about the case--summary or otherwise. If you mention what the case is, I can ask Claude myself if I’m interested.
Better yet, post a link to an authoritative source on the case (helpful but not required).
At minimum, verify your info via another source. The community deserves that much at least.
An AI-generated summary adds nothing positive and actually detracts from the conversation.
I looked at the decision itself sufficiently to see that it was the case I remembered and that my recollection of the facts and the decision was correct.
I just didn't include a summary because I didn't find a good one I could link to. Normally I'd write a brief one myself, but I found that hard to do when Perplexity's summary was sitting right there in the next window, and it was embarrassingly better than what I would have written.
The point is we don't want to read AI summaries; we can make one ourselves if we want. Personally, I definitely don't want to read one from Perplexity, on the basis that they do the AI for Trump Social. (Reverse-KYC, if you are not aware.)
For some inspiration on why this is meaningful: https://www.npr.org/2025/07/18/g-s1177-78041/what-to-do-when...
In this instance the only reason I considered using the AI summary was that there was no Wikipedia article about the case (which surprised me as it is one of the foundational cases in Commerce Clause law...although maybe all the points in it are covered in later cases that do get their own Wikipedia articles?).
Normally I'd just copy Wikipedia's summary into my comment and link to Wikipedia and to the decision itself for people who want the details.
> The point is we don't want to read Ai summaries, we can make one ourselves if we want.
How would you know if you wanted one? Someone mentioned they would like to see a case on this subject but they didn't think it would ever happen. I knew of a case on the subject, found the reference, and posted the link. At that point we are already on a tangent from what most of the thread is about and from what most people reading it care about.
The point of the summary would be to let you know if the case might actually be relevant to anything you cared about in the thread. (The answer would probably be "no" for 95+% of the people reading the comment).
All of this AI stuff is new for society, and we have a lot to work through. Here on HN, we want to err on the side of keeping as much humanity as possible. It's good to have a place like that, for fresh air and for stretching our minds differently and regularly as AI becomes more ubiquitous in our lives.
I'm not asking or advocating for using AI as a copy editor.
The post I replied to asked about using Gemini as if it were Wikipedia - that is, saying "according to Gemini" when citing a fact where one might once have written "according to Wikipedia" or even "according to Google."
This is a forum people hang out in part-time. It's nobody's job to go spend an hour researching primary sources to post a comment. Shallow searches and citations are common and often helpful in pointing someone in the right direction. As AI becomes commonplace, a lot of that is being done with AI.
"Can I have AI write a reply for me?"
is a very different question than
"Can I cite an AI search result?"
This rule change is clear about the former. There's room to clarify the latter.
Nope. (For an example of that, see any comment I posted to this discussion that starts with “Please don’t”.)
> "Can I cite an AI search result?"
Ah. An AI response is neither a primary source nor a reference source, and HN tends to strongly prefer those. Linking to a Google /search?q= isn’t any more welcome here than linking to an AI /search?q=; neither are stable over time and may vary wildly based on algorithmic changes. Wikipedia, as a curated reference source, is not classifiable as equivalent to either a search engine or an AI response at this time, and evidences much stronger stability, striving towards that of a classical print encyclopedia (but never reaching it).
Perhaps someday Britannica will release an AI that only provides fully factual replies that are derived in whole from the Britannica encyclopedia, but as of today, AI has not demonstrated the general veracity and reliability that even Wikipedia, the very worst of possible reference sources, has met over the years.
(Note that an Ask-A-Librarian response would be more credible than a Wikipedia page and much more credible than today’s AI attempts to replace that function; but linking such a response would still be quite problematic, not least because the primary value of that response is either directly quotable or consists of citations that should be incorporated into the post itself. But if that veracity differential changes someday, once the AI hallucination problem is solved at the underlying level rather than in post-filters, I’m happy to revise my position.)
I think you misspelled "convenient". Beyond the small effort it takes one person to share generated text, one has to consider the time of who knows how many humans who will read it.
If an LLM wrote something about a subject you don't know, you're not qualified to judge how accurate it is, so don't post it. If you do know the subject, you can summarize it more succinctly yourself and save your readers many man-hours.
If LLMs evolve to the point where they don't hallucinate, lie, or write verbosely, they will likely be more welcome.