After all, no one knows I'm a dog.
When someone posts:
> You could use Redis for that, sure; I've run it and it wasn't as hard as some people seem to fear. But in hindsight I'd prefer some good hardware and a Postgres server: that can scale to several million daily users with your workload, and is much easier to design around at this stage of your site.
then the reader is trusting not just the correctness of that one sentence but all of the experience and insight behind it. You can't know whether that's good advice without being the author yourself, and if it's posted by someone you trust, it has value.
An LLM could be prompted to pretend it's an experienced DBA and to comment on a thread, and it might produce that very sentence. Or, with a slightly different temperature, it might instead say you should start with Redis so you don't have to redesign your whole business when Postgres stops scaling.
This implies they know the author and can trust them. If they don't know the author, then there is no trust to break, and they are only relying on collective intelligence, which an AI could reflect just as well.
That is to say: trusting a known human author is very different from trusting any human author, and trusting any human author is not that much different from trusting an AI.
This is my point.
There is no sane endgame here that doesn't end up with each user effectively declaring whom they do and don't want to hear from, and possibly extending that relationship transitively n steps into the graph. For example, you might trust all humans vetted by the German government but distrust HN commenters.
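To make that concrete, here's a minimal sketch of what n-step transitive trust could look like as a graph traversal. Everything here (the edge structure, the `german_gov_registry` node, the distrust override) is an illustrative assumption, not any real forum's model:

```python
from collections import deque

def trusted_set(trust_edges, distrust, start, n):
    """Expand a user's declared trust n hops into the graph via BFS.

    trust_edges: dict mapping user -> set of users they declare trust in
    distrust:    users the starting user explicitly refuses to hear from
    start:       the user whose feed is being filtered
    n:           how many hops of transitive trust to allow
    """
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        user, depth = frontier.popleft()
        if depth == n:
            continue
        for neighbor in trust_edges.get(user, set()):
            # An explicit distrust declaration overrides any transitive path.
            if neighbor in distrust or neighbor in seen:
                continue
            seen.add(neighbor)
            frontier.append((neighbor, depth + 1))
    return seen - {start}

# Trust whoever a government registry vouches for, two hops out,
# while explicitly distrusting one account:
edges = {
    "me": {"german_gov_registry"},
    "german_gov_registry": {"alice", "bob"},
    "alice": {"carol"},
}
print(trusted_set(edges, distrust={"bob"}, start="me", n=2))
# -> {'german_gov_registry', 'alice'}  (carol is 3 hops away, bob is blocked)
```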
For now HN and others are free to do as they will (and the current AI situation has been intolerable). However, I suspect that in the near future governments will attempt to impose their own version of this onto ever less significant forums, and as a tech community we need to be thinking more clearly about where this goes before we lose all choice in the matter.
This already falls apart, though. There are whole categories of things that I find "incorrect" and would take up as an argument with a fellow human. But trying to change the mind of an LLM just feels like a waste of my time.
It often is with humans as well.
Look, I'll give you a loose example: it's not uncommon to see a post making an "error" I recognize from my own experience. I might take the time to pass on the lesson that got me out of that mistaken line of thought, so someone else can learn it more quickly than I did. If it's an LLM, why would I care? There are thousands of other people, even other LLMs, that I could be talking to instead.
You've set up a framework here where "mutual understanding" is the end goal, but that's just not always what's at stake.
Arguing for the sake of convincing onlookers reading the conversation is more likely to be effective, and in that case it doesn't matter if the other person is an LLM.
(naturally "birds aren't real" is a correct vs not correct thing, but the same can be applied to many less-objective things like the best mechanical keyboard or the morality of a war)