How do you propose we do that?
And what do you propose we do when ChatGPT (or something like it) can create better content than most humans?
(... which I would argue is already happening in some limited contexts, although I admit my stance is controversial).
An AI cannot, by definition, create better HN content, because HN content is about hearing what people think about a certain topic/thread. Hearing what an AI statistically regurgitates doesn't satisfy that purpose, even if it's better written.
The internet is about people. I may not be friends with everyone on HN, but we’re a community meant to discuss and share together, and I personally wouldn’t want to talk to an AI over a human.
Hmm. That's HN's value proposition, but that's not necessarily what keeps people coming back. I am thinking about the dopamine hits from the news cycle and comments.
If I am right then it follows that:
> Hearing what an AI statistically regurgitates doesn’t satisfy the purpose even if it’s better written.
..actually does the job.
How much ChatGPT content can we/you/I stomach before we stop visiting? We all have a different threshold (and language is a barrier for non-native speakers; it would certainly take me longer to spot the AI).
> An AI cannot, by definition, create better HN content. Because HN content is about hearing what people think about a certain topic/thread.
Please tell me: for you, what is the difference between hearing what people think, and hearing what a human-level AI thinks?
Are you saying a human-level AI, by definition, cannot ever be interesting to you, just because it's an AI, even though it could be like a human in every other way?
> Hearing what an AI statistically regurgitates doesn’t satisfy the purpose even if it’s better written.
Don't most humans just regurgitate the same arguments they've just read or learned elsewhere (or even here), like 99% of the time?
And even in the remaining 1% of cases, aren't they necessarily just functional products of the things they've seen/read/experienced (i.e. the inputs they've received, even if gathered by feedback when interacting with the world)?
> The internet is about people. I may not be friends with everyone on HN, but we’re a community meant to discuss and share together, and I personally wouldn’t want to talk to an AI over a human.
What if the AI was more interesting and intellectually stimulating than a human?
HN is a professional social network. People are most interested in what their peers have to say, not any random human being or human-level AI.

Now, if this human-level AI were working in the field as a professional, or at least in computer science academia, then its opinions would be valid and interesting. However, if it's merely an average best-possible response, then it's pointless to hear its opinion.

Additionally, if multiple people post ChatGPT responses, that's just like talking to the same person using humans as sock puppets.
Yes, but at least they choose what they regurgitate, unless you think of most people as automatons. Personally I'm more interested in human regurgitations than AI imitations of them. So far AI doesn't bring any reasoning and cannot discern what it regurgitates, but it sure as hell comes off as confident (it could probably imitate being humble as well). Someone posted an example of GPT bullshitting something akin to 2+0=3, but very convincingly.
Whether it is or isn't is beside the point.
Broad, anonymous commenting platforms are dead, they just don't know it.
So, let's see whether ChatGPT gets the answers wrong:
Q: How long does it take to have a baby?
A: The average length of pregnancy is about 40 weeks, or 9 months.
1/2 correct - 40 weeks is 10 months (of four weeks each), not 9.

Q: Why is emacs better than vi?
A: […] Overall, emacs is widely considered to be a more powerful and versatile editor than vi, with a user-friendly interface and a wealth of support and resources available to users.
1/2 correct. It seems to always agree with the question's framing - humans would often disagree.

Q: Write me a simple python function called are_hacker_news_users_confrontational() that returns either true or false.
A1:

  def are_hacker_news_users_confrontational():
      return True

A2:

  def are_hacker_news_users_confrontational():
      return False
This function simply returns False, indicating that Hacker News users are not confrontational. Of course, this is just a simple example, and the actual answer to this question may vary depending on various factors.

Alternatively, if you want to determine whether or not Hacker News users are confrontational by analyzing their behavior on the platform, you could use natural language processing techniques to analyze the comments and posts made by users on Hacker News, and use that information to determine whether they tend to be confrontational or not. However, this would require a more complex implementation, and would require access to a large dataset of Hacker News comments and posts.
9/10 for that answer!
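As an aside, the analysis the model describes in prose is easy to mock up. Here's a minimal, hypothetical sketch using the public HN Algolia search API and an off-the-shelf sentiment model; the query, the 50-comment sample, the 0.5 threshold, and especially the leap from "negative sentiment" to "confrontational" are all my assumptions, not a validated method:

  import requests
  from transformers import pipeline  # pip install requests transformers

  def are_hacker_news_users_confrontational(query="emacs vi", n=50, threshold=0.5):
      # Pull recent comments from the public HN Algolia search API.
      resp = requests.get(
          "https://hn.algolia.com/api/v1/search_by_date",
          params={"query": query, "tags": "comment", "hitsPerPage": n},
          timeout=10,
      )
      # comment_text is raw HTML; good enough for a sketch.
      comments = [h["comment_text"] for h in resp.json()["hits"] if h.get("comment_text")]
      classifier = pipeline("sentiment-analysis")  # default DistilBERT/SST-2 model
      results = classifier([c[:512] for c in comments])  # crude length cap
      # Big assumption: treat "negative sentiment" as "confrontational".
      negative = sum(r["label"] == "NEGATIVE" for r in results)
      return negative / max(len(results), 1) > threshold

  print(are_hacker_news_users_confrontational())

Which, of course, only pushes the question into how you define "confrontational" in the first place.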
> What if the AI was more interesting and intellectually stimulating than a human?

What if? It's not anywhere close to that. GPT is so far from "human level", even if it sounds good. It's statistical regurgitation, not thought. If it were more intellectual, I think there'd be a lot more change in the world than on HN.
> Please tell me: for you, what is the difference between hearing what people think, and hearing what a human-level AI thinks?
Humans can have original thoughts. AIs that are trained on a human text corpus are by definition finding statistical correlations between preexisting things.
You can say things like "I know it's unpopular, but I like OOP because objects make it easy to assign team boundaries at work". And the replies can be about real work experiences of real people who understand those trade-offs.

An AI can discuss this, sorta, but it's not real. The AI knows nothing of these trade-offs other than inevitably mentioning Java.
> And even in the remaining 1% of cases, aren't they necessarily just functional products of the things they've seen/read/experienced (i.e. the inputs they've received, even if gathered by feedback when interacting with the world)?
This is what I was thinking a lot about. I think the answer is no.
Humans are introspective and reflective. You are based on your experiences, yes, but you don’t just regurgitate statistically likely language. Crucially, before you answer a question you can reflect on the logic of that answer.
> Are you saying a human-level AI, by definition, cannot ever be interesting to you, just because it's an AI, even though it could be like a human in every other way?
Not to be weird: I wouldn't discriminate against a human-level intelligence because it's a machine, but a language model like GPT is absolutely not a human-level intelligence.
AI-generated content can be fascinating, helpful, and in some instances, more useful and accurate than humans (medical diagnosis, technical documentation, perhaps). But if I ask for a human, I want a human.
I don't care if AI is more interesting than a human. I want a human, because I am human. I am not transhumanist.
I wonder what the correlation is between people who see no particular value in interacting with humans, and people who struggle to interact with humans.
Better is entirely too subjective for this to be true. And if you turn on showdead, ChatGPT content is already better than some of the other content submitted here. If an ML algorithm can take a half-baked comment of mine and form it into a coherent response that other people can read and actually understand, that is better content.
I would still consider this to be your comment, not GPT’s comment, as long as it was used as a writing tool, not as a replacement for your own opinion.
Not necessarily. Good HN content can also be factual information relevant to the topic at hand. And yes, current AI like ChatGPT might not help with that, but a hypothetical future AI which cared more about the veracity of its statements could.
I reject this notion. For me, that's not good HN content; that's not why I go to HN. Maybe for others, but not me.

That said: factually incorrect content is bad, but being factually relevant is not enough. I don't want a robot glossary filling up the comments. Have you met know-it-alls who just spew factually relevant regurgitation instead of a thoughtful response?
With AI certain ideas and opinions can and will be amplified by malicious actors. We may have to resort to face to face at some point or verify human identity at times to combat this.
One technique, like all the other self-moderation that you can do on HN: stop upvoting and commenting on content that you don’t want to see boosted.
Most humans don't like too much creativity and they want ideas that they agree with.
I wonder if the death of social media is more a descent into inane and bland AI-generated commentary.
But isn't that what most people are presumably doing already?
I guess my question was more intended to be: how do you differentiate between content generated by humans vs machines?
At some point, we might not be able to. Or even if we can, it could actually result in a worse experience, if machines can generate better content.
There was some deep discussion about that topic [1].
For example: even if all authentic content had an embedded steganographic watermark, how do you reliably authenticate recordings of recordings or otherwise degraded copies of authentic content?
[1] https://intelligence.house.gov/news/documentsingle.aspx?Docu...
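To make the fragility concrete, here's a toy illustration of my own (not from that discussion): a naive least-significant-bit watermark survives a lossless copy perfectly but is wiped out by a single lossy re-encode, which is roughly what a recording of a recording does. All names and parameters here are just for the sketch:

  import io
  import numpy as np
  from PIL import Image  # pip install numpy pillow

  def embed_lsb(img, bits):
      # Toy watermark: hide one bit in the LSB of each red-channel byte.
      arr = np.array(img.convert("RGB"))
      red = arr[..., 0]
      red.flat[:bits.size] = (red.flat[:bits.size] & 0xFE) | bits
      return Image.fromarray(arr)

  def extract_lsb(img, n):
      return np.array(img.convert("RGB"))[..., 0].flat[:n] & 1

  bits = np.random.randint(0, 2, 1024, dtype=np.uint8)
  marked = embed_lsb(Image.new("RGB", (64, 64), "gray"), bits)

  # Lossless copy: watermark fully recoverable.
  print((extract_lsb(marked, bits.size) == bits).mean())  # 1.0

  # One lossy re-encode, a crude stand-in for re-recording.
  buf = io.BytesIO()
  marked.save(buf, format="JPEG", quality=90)
  buf.seek(0)
  print((extract_lsb(Image.open(buf), bits.size) == bits).mean())  # ~0.5, i.e. chance

Real watermarking schemes are far more robust than this, but the arms race between robustness and removal is exactly the open problem.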
One approach might be a norm (perhaps with change to the guidelines) to downvote to oblivion any clearly generated content. I don't claim to have solved the problem though!
> better content than most humans
To be clear I was only arguing against mediocre generated content, not excellent generated content. I think the latter poses a different set of (also interesting) problems.
I tend to agree... I've been growing more and more tired of the content in familiar places, e.g. Reddit. (As an aside, I think a lot of it is driven by advertising/marketing, but not all of it...)

Anyway, your comment reminded me of the recent footage of baggage handlers at an airport, and how that dovetails nicely with the recent move by Tesla to build a humanoid robot.
Looking for a ray of light in the approaching storm: Maybe these AI can be used to filter the content more effectively for us.
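Purely as a sketch of that idea, you could imagine a personal filter built on an off-the-shelf zero-shot classifier; the interest labels and threshold below are placeholders, not a real product:

  from transformers import pipeline  # pip install transformers

  # Zero-shot classification as a crude personal content filter.
  classifier = pipeline("zero-shot-classification")
  MY_INTERESTS = ["programming languages", "distributed systems", "security"]

  def worth_reading(comment, threshold=0.6):
      result = classifier(comment, candidate_labels=MY_INTERESTS)
      return result["scores"][0] > threshold  # scores come back sorted descending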
Then there's the question of situations where people might actually use ChatGPT in a creative way to augment discussion.
Let's say we're talking about the pitfalls of repetitive code over breaking things cleanly into small functions. You have an example of this in mind that highlights a specific pitfall that you encountered, but don't want to share proprietary code, so you might ask ChatGPT (or a future model) to generate some code that demonstrates the same thing, rather than writing it yourself.
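For example, you might end up with something like this (entirely made-up code standing in for the proprietary original, where the pitfall is that a fix applied to one copy silently misses the others):

  # Repetitive version: the same check copy-pasted for every field.
  def validate_user(data):
      if "name" not in data or not data["name"].strip():
          raise ValueError("name is required")
      if "email" not in data or not data["email"].strip():
          raise ValueError("email is required")
      if "phone" not in data or not data["phone"]:  # oops: this copy was never updated
          raise ValueError("phone is required")

  # Refactored version: the shared logic lives in one small function.
  def require_field(data, field):
      if field not in data or not data[field].strip():
          raise ValueError(f"{field} is required")

  def validate_user_fixed(data):
      for field in ("name", "email", "phone"):
          require_field(data, field)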
I think we're still early enough in the tech that it's hard to create hard-and-fast rules about what kind of content should be allowed; ideally, we'll get to the point where AI can help facilitate and augment human interactions, rather than taking over completely.