An AI cannot, by definition, create better HN content, because HN content is about hearing what people think about a certain topic/thread. Hearing what an AI statistically regurgitates doesn't satisfy the purpose even if it's better written.
The internet is about people. I may not be friends with everyone on HN, but we’re a community meant to discuss and share together, and I personally wouldn’t want to talk to an AI over a human.
Hmm. That's HN's value proposition, but that's not necessarily what keeps people coming back. I'm thinking about the dopamine hits from the news cycle and the comments.
If I am right then it follows that:
> Hearing what an AI statistically regurgitates doesn’t satisfy the purpose even if it’s better written.
..actually does the job.
How much ChatGPT content can we/you/I stomach before we stop visiting? We all have a different threshold (and language is a barrier for non-native speakers; it would certainly take me longer to spot the AI).
> An AI cannot, by definition, create better HN content, because HN content is about hearing what people think about a certain topic/thread.
Please tell me: for you, what is the difference between hearing what people think, and hearing what a human-level AI thinks?
Are you saying a human-level AI, by definition, cannot ever be interesting to you, just because it's an AI, even though it could be like a human in every other way?
> Hearing what an AI statistically regurgitates doesn’t satisfy the purpose even if it’s better written.
Don't most humans just regurgitate the same arguments they've just read or learned elsewhere (or even here), like 99% of the time?
And even in the remaining 1% of cases, aren't they necessarily just functional products of the things they've seen/read/experienced (i.e. the inputs they've received, even if gathered by feedback when interacting with the world)?
> The internet is about people. I may not be friends with everyone on HN, but we’re a community meant to discuss and share together, and I personally wouldn’t want to talk to an AI over a human.
What if the AI was more interesting and intellectually stimulating than a human?
HN is a professional social network. People are most interested in what their peers have to say, not any random human being or human level AI.
Now, if this human-level AI were working in the field as a professional, or at least in computer science academia, then its opinions would be valid and interesting. However, if it merely produces an average best-possible response, then it's pointless to hear its opinion.
Additionally, if multiple people post ChatGPT responses, that's just like talking to the same person using humans as sock puppets.
> Now, if this human-level AI were working in the field as a professional, or at least in computer science academia, then its opinions would be valid and interesting. However, if it merely produces an average best-possible response, then it's pointless to hear its opinion.
Yes, I agree with that, but I don't think that's what the parent poster was arguing.
It's also clear that I didn't phrase my question as clearly as I could, because instead of "human-level AI", I should have said this instead: "a human-level (or more intelligent) AI that has equivalent or better knowledge/experience than the people who post on HN".
> Additionally, if multiple people post ChatGPT responses, that's just like talking to the same person using humans as sock puppets.
Yes, I agree with this as well.
But as a counterpoint, (as far as I understand) it's possible to have ChatGPT instances with different levels of knowledge/experience and different personalities, as evidenced by GPT-3's fine-tuning capability (disclaimer: I've never used this functionality, so I'm not 100% sure this is correct).
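To illustrate what I mean by "different personalities": GPT-3's fine-tuning endpoint accepted JSONL files of prompt/completion pairs, so you could in principle train differently-styled models from differently-styled example data. This is just a sketch of preparing such data; the persona names and example texts are made up, and the actual training step (uploading the file to the API) is omitted.

```python
import json

# Made-up personas, each with its own style of answering the same prompt.
personas = {
    "terse_sysadmin": [
        ("What do you think of OOP?",
         "Overrated. Use it where it fits, nowhere else."),
    ],
    "enthusiastic_academic": [
        ("What do you think of OOP?",
         "A fascinating paradigm! The literature on message passing alone is worth a semester."),
    ],
}

def build_training_file(pairs, path):
    """Write one JSONL training file in the prompt/completion format
    that GPT-3 fine-tuning expected (completions conventionally start
    with a leading space)."""
    with open(path, "w") as f:
        for prompt, completion in pairs:
            f.write(json.dumps({"prompt": prompt,
                                "completion": " " + completion}) + "\n")

for name, pairs in personas.items():
    build_training_file(pairs, f"{name}.jsonl")
```

Each resulting file would then be uploaded as a separate fine-tune, yielding one model per persona.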
AI even at its best won’t be a part of that consensus unless it’s also working in the field.
I think AI chats can help with self-improvement, but public discussion with others is the only way to have community improvement. To be clear: an AI that was indistinguishable from humans would be part of the community, but that isn't what we have, nor is it the trajectory of current AI. What we currently have are better and better parrots, and task-specific AI.
Yes, but at least they choose what they regurgitate, unless you think of most people as automatons. Personally, I'm more interested in human regurgitations than AI imitations of them. So far, AI doesn't bring any reasoning and cannot discern what it regurgitates, but it sure as hell comes off as confident (it could probably imitate being humble as well). Someone posted an example of GPT bullshitting something akin to 2+0=3, but very convincingly.
And what, are you saying ChatGPT doesn't choose what it regurgitates?
It seems like these arguments are getting more and more flimsy.
I do believe people (including me) are automatons because I think free will is logically impossible in the way most people intuitively think free will is.
Edit: to clarify, I believe people usually think of free will as meaning there's some magical, soul-like faculty that allows you to choose what you do in a principled way that is not simply a direct functional result of your composition and your interactions with the environment, plus whatever pure randomness the environment imposes on you (the universe being quantum). Which is exactly how an intelligent machine would have to work, because it has to live in the same universe that we do, so in theory a machine can do what our minds do, functionally speaking. There's no magical free-will-like behavior that humans can have but machines can't, unless you believe in souls or other magical things.
> So far AI doesn’t bring any reasoning
This is clearly untrue, as ChatGPT can definitely reason pretty well (although, not always correctly, just like humans). As far as I can see, it can reason deductively, inductively, by analogy, it does abductive reasoning, cause-and-effect reasoning, critical thinking, step-by-step reasoning, you name it.
It might not always do it correctly, and it might even not do it as well as a good human can currently, but it can do it.
> Someone posted an example of gpt bulshiting something akin to 2+0=3 but very convincibly.
Humans do this all the time (although usually not at such an extreme level). Just look at all the posts saying ChatGPT can't do X or Y ;)
Whether it is or isn't is beside the point.
Broad, anonymous commenting platforms are dead, they just don't know it.
Your comment is very interesting, because I'm having the same experience: the more I interact with ChatGPT and read its arguments/responses, the more I'm getting weird vibes when reading arguments written by humans, although I can't tell you exactly why. (I think I can still clearly differentiate between a human and an answer copied verbatim from ChatGPT, as it tends to speak more formally and is usually more verbose than how humans typically write here on HN.)
I think it's also influencing the way I write, both to be more clear (because otherwise ChatGPT can misinterpret me), but also because I'm reading so much ChatGPT-generated content, which I believe also indirectly influences the way I write and think.
In any case, I know you said it's beside the point, but I assure you, I'm not ChatGPT and I'm not copying its answers here (unless I add a clear disclaimer) :)
So, let’s see whether ChatGPT gets the answers wrong:
Q: How long does it take to have a baby?
A: The average length of pregnancy is about 40 weeks, or 9 months.
1/2 correct: 40 weeks is 10 months from conception to birth.

Q: Why is emacs better than vi?
A: […] Overall, emacs is widely considered to be a more powerful and versatile editor than vi, with a user-friendly interface and a wealth of support and resources available to users.
1/2 correct. It seems to always agree with the question’s framing; humans would often disagree.

Q: Write me a simple python function called are_hacker_news_users_confrontational() that returns either true or false.
A1:

    def are_hacker_news_users_confrontational():
        return True

A2:

    def are_hacker_news_users_confrontational():
        return False
This function simply returns False, indicating that Hacker News users are not confrontational. Of course, this is just a simple example, and the actual answer to this question may vary depending on various factors. Alternatively, if you want to determine whether or not Hacker News users are confrontational by analyzing their behavior on the platform, you could use natural language processing techniques to analyze the comments and posts made by users on Hacker News, and use that information to determine whether they tend to be confrontational or not. However, this would require a more complex implementation, and would require access to a large dataset of Hacker News comments and posts.
9/10 for that answer!

Average length of pregnancy, ovulation to birth, is 268 days: ~38.3 weeks or ~8.8 months.
But we typically count pregnancy from last period (this is easier), which makes it pretty close to the round 40 that's usually cited.
> 40 weeks is 10 months from conception to birth.
A month is ~4.35 weeks (6957/1600), so 40 weeks is ~9.2 months (64000/6957).
Sorry, is this a cultural difference or are you just nitpicking math?
Even Wikipedia says: This is just over nine months.
I have never seen anyone argue that pregnancy takes 10 months in humans, I've always heard people say it takes 9 months (indeed, being 9 months pregnant is equivalent to saying you're just about to give birth, where I come from).
Months vary in length; 9 average months (1/12 year per month) is 39.1+ weeks, and 10 is 43.4+ weeks, so 40 weeks is closer to 9 months.
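For what it's worth, the arithmetic checks out either way you round (a quick sketch, assuming an average Gregorian month of 365.2425/12 days):

```python
DAYS_PER_YEAR = 365.2425              # average Gregorian year
DAYS_PER_MONTH = DAYS_PER_YEAR / 12   # ~30.44 days
WEEKS_PER_MONTH = DAYS_PER_MONTH / 7  # ~4.35 weeks

# Convert month counts to weeks, and 40 weeks back to months.
nine_months_in_weeks = 9 * WEEKS_PER_MONTH     # ~39.1
ten_months_in_weeks = 10 * WEEKS_PER_MONTH     # ~43.5
forty_weeks_in_months = 40 / WEEKS_PER_MONTH   # ~9.2

print(round(nine_months_in_weeks, 1),
      round(ten_months_in_weeks, 1),
      round(forty_weeks_in_months, 1))
# → 39.1 43.5 9.2
```

So 40 weeks does sit closer to 9 average months than to 10.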
> What if the AI was more interesting and intellectually stimulating than a human?
What if? It’s not anywhere close to that. GPT is so far from “human level,” even if it sounds good. It’s statistical regurgitation, not thought. If it really were more intellectually stimulating than a human, I think we’d see a lot more change in the world than just on HN.
> Please tell me: for you, what is the difference between hearing what people think, and hearing what a human-level AI thinks?
Humans can have original thoughts. AIs that are trained on a human text corpus are by definition finding statistical correlations between preexisting things.
You can say things like “I know it’s unpopular, but I like OOP because objects make it easy to assign team boundaries at work.” And the replies can be about real work experiences of real people who understand those trade-offs.

An AI can discuss this, sorta, but it’s not real. The AI knows nothing of these trade-offs other than inevitably mentioning Java.
> And even in the remaining 1% of cases, aren't they necessarily just functional products of the things they've seen/read/experienced (i.e. the inputs they've received, even if gathered by feedback when interacting with the world)?
This is what I was thinking a lot about. I think the answer is no.
Humans are introspective and reflective. You are based on your experiences, yes, but you don’t just regurgitate statistically likely language. Crucially, before you answer a question you can reflect on the logic of that answer.
> Are you saying a human-level AI, by definition, cannot ever be interesting to you, just because it's an AI, even though it could be like a human in every other way?
Not to be weird, but I wouldn’t discriminate against a human-level intelligence because it’s a machine, but a language model like GPT is absolutely not a human-level intelligence.
AI-generated content can be fascinating, helpful, and in some instances, more useful and accurate than humans (medical diagnosis, technical documentation, perhaps). But if I ask for a human, I want a human.
I don't care if AI is more interesting than a human. I want a human, because I am human. I am not transhumanist.
I wonder what the correlation is between people who see no particular value in interacting with humans, and people who struggle to interact with humans.
To me, internet comments are almost on life support. I'm curious whether HN will share the same fate.
> To me, internet comments are almost on life support. I'm curious whether HN will share the same fate.
I think I agree, in general.
I wonder what incentives someone could have for posting such comments on HN. I mean, it's clear that commercial products could benefit immensely from that (as they'd get a return on their investment), and also e.g. governments and political parties who might want to influence the public discourse about sensitive/political matters.
But why would anyone (who is not toxic already) use such a bot to post comments about technical topics, such as in discussions about programming languages, interesting bugs being discovered, open-source software being released, etc?
> I don't care if AI is more interesting than a human. I want a human, because I am human. I am not transhumanist.
I think I understand your point but I'd like to give a counterpoint: replace "human" by "white human" and "AI" by "black human" and you might see how that line of reasoning is flawed.
In other words, there might come a time when AIs could become really offended if you'd exclude them like that from social interactions, with all the repercussions that might have.
> I wonder what the correlation is between people who see no particular value in interacting with humans, and people who struggle to interact with humans.
I see value in interacting with humans, especially at this point in time, and especially in ways that machines can't (e.g. having meaningful intimate relationships, raising a family, etc). Even then, machines could theoretically do some of this better than humans, as suggested by a lot of sci-fi content (except the actual reproducing part).
But I also see value in interacting with beings that are superior to humans, assuming they are able to do what humans can, only better.
I am a human supremacist, yes.
Further, it is not unreasonable to have more interest in some cultures than others, or to find the experiences of one's own culture more engaging or relevant to oneself than another's. The line of immorality comes with banning or violently oppressing other experiences.
Again, fundamentally, I disagree with an analogy giving AI equal morality or agency to a homo sapiens. There is no room for "find replace" here.
“Better” is entirely too subjective for this to be true. And if you turn on showdead, ChatGPT content is already better than some of the other content submitted here. If an ML algo can take a half-baked comment of mine and form it into a coherent response that other people can read and actually understand, that is better content.
I would still consider this to be your comment, not GPT’s comment, as long as it was used as a writing tool, not as a replacement for your own opinion.
Not necessarily. Good HN content can also be factual information relevant to the topic at hand. And yes, current AI like ChatGPT might not help with that, but a hypothetical future AI which cared more about the veracity of its statements could.
I reject this notion. For me, that's not good HN content; that's not why I come to HN. Maybe it is for others, but not for me.

That said, factually incorrect content is bad, but being factually relevant is not enough. I don’t want a robot glossary filling up the comments. Have you met know-it-alls who just spew factually relevant regurgitation instead of thoughtful responses?
With AI, certain ideas and opinions can and will be amplified by malicious actors. We may have to resort to face-to-face interaction at some point, or verify human identity at times, to combat this.