> "This pretty negative post topping Hacker News last month sparked these questions, and I decided to find some answers, of course, using AI"
The pretty negative post cited is https://tomrenner.com/posts/llm-inevitabilism/. I went ahead and read it, and found it, imo, fair. It doesn't make any directly negative claims about AI, although it's clear the author has concerns. The thrust is an invitation to the reader not to fall into the trap of the current framing by AI's proponents, and to first question whether the future being peddled is actually what we want. That seems a fair question to ask if you're unsure.
I was concerned that this is framed as a "pretty negative post", and it colored my read of the rest of this author's article.
For instance, in a lot of threads on some new technology or idea, one of the top comments is "I'm amazed by the negativity here on HN. This is a cool <thing> and even though it's not perfect we should appreciate the effort the author has put in" - where the other toplevel comments are legitimate technical criticism (usually in a polite manner, no less).
I've seen this same comment, in various flavors, at the top of dozens of HN threads in the past couple of years.
Some of these people are being genuine, but others are literally just engaging in amygdala-hijacking because they want to shut down criticism of something they like, and that contributes to the "everything that isn't gushing positivity is negative" effect that you're seeing.
Not sure if it's part of a broader trend, or simply a reflection of it, but when mentoring/coaching middle and high school aged kids, I'm finding they struggle to accept feedback in any way other than as "I failed." A few years back, the same age group was more likely to accept and view feedback as an opportunity so long as you led by praising strengths. Now it's like threading a needle every time.
I get it to some extent: a lot of people looking to inject doubt and their own ideas show up with some sort of Socratic method that is really meant to drive the conversation to a specific point; it isn't honest.
But it also means actually honest questions are often voted or shouted down.
It seems like the methodology of discussion on the internet now only allows for everyone to show up with very concrete opinions, which will then be judged. No room for having no opinion, or for honest questions... citizens of the internet assume the worst if you're anything but in lockstep with them.
And most people here seem to think that's fine; but it's not in line with what I understood when I read the guidelines, and it absolutely strikes me as negativity.
So the emotional process which results in the knee-jerk reactions to even the slightest and most valid critiques of AI (and the value structure underpinning Silicon Valley's pursuit of AGI) comes from the same place that religious nuts come from when they perceive an infringement upon their own agenda (Christianity, Islam, pick your flavor -- the reactivity is the same).
Now of course I'm not including aggressive or rude posts, because they are a different category.
Though it does sort of show where the Overton window sits, when a pretty bland argument against always believing some rich dudes gets bucketed as negative, even in the sentiment-analysis sense.
I think a lot of people have like half their net worth in NVIDIA stock right now.
The only subset where HN gets overly negative is coding, where it's way more negative than it should be.
The author (Tom) tricked you. His article is flame bait. AI is a tool that we can use and discuss. It's not just a "future being peddled." The article manages to say nothing about AI, casts generic doubt on AI as a whole, and pits people against each other. It's a giant turd for any discussion about AI, a sure-fire curiosity-destruction tool.
Instead it's being shoved down our throats at every turn and is being marketed to the world as the Return of Christ. Whenever anyone says anything even slightly negative, the evangelists crawl out of the woodwork to tell you how you're using the wrong model, or not prompting well enough, or long enough, or short enough, or to post "Well I've become a 9000000x developer using 76 agents in parallel!"
Why are you complaining about that?
If you want to complain about AI and have no interest in learning more about it, go somewhere else. This site isn’t for that kind of discussion
Any number of Sam Altman quotes display this:
- "A child born today will never be smarter than an AI"
- "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence"
- "ChatGPT is already more powerful than any human who has ever lived"
- "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies."
Every bit of this is nonsense being peddled by the guy selling an AI future because it would make him one of the richest people alive if he can convince enough people that it will come true (or, much much much less likely, it does come true).
That's just from 10 minutes of looking at statements by a single one of these charlatans.
It’s certainly not the worst article I’ve read here. But that’s why I didn’t really like it.
- Positive → AI Boomerist
- Negative → AI Doomerist
Still not great, IMHO, but at the very least the referenced article is certainly not AI Boomerist, so by process of elimination... probably more ambivalent? How does one quickly characterize "not boomerist and not really doomerist either, but somewhat ambivalent on that axis, while definitely pushing against boomerism" without belaboring the point? It seems reasonable to read that as some degree of negative pressure.
https://github.com/algolia/hn-search
You can already access all your upvotes in your user page, so this might be an easy patch?
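For context, hn-search is backed by the public Algolia HN Search API, which needs no authentication. A minimal sketch of querying it (the query terms here are placeholders; note that a user's upvote list is not exposed by this API, which is presumably why it would need a patch):

```python
import json
import urllib.parse
import urllib.request

ALGOLIA_HN = "https://hn.algolia.com/api/v1/search"

def build_search_url(query: str, tags: str = "story", hits_per_page: int = 10) -> str:
    """Build a query URL for the Algolia HN Search API.

    `query` is full-text; `tags` can be e.g. "story", "comment",
    or "author_<username>" to scope results to one user's items.
    """
    params = urllib.parse.urlencode({
        "query": query,
        "tags": tags,
        "hitsPerPage": hits_per_page,
    })
    return f"{ALGOLIA_HN}?{params}"

def search(query: str, **kwargs) -> list:
    """Fetch and decode one page of results (makes a network call)."""
    with urllib.request.urlopen(build_search_url(query, **kwargs)) as resp:
        return json.load(resp)["hits"]
```

For example, `build_search_url("llm", tags="author_pg")` scopes the search to one user's submissions and comments, but there is no tag for what a user has *upvoted*.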
This is why I always think the HN reader apps that people make using the API are some of the stupidest things imaginable. They’re always self-described as “beautifully designed” and “clean” but never have any good features.
I would use one and pay for it if it had an ignore feature and the ability to filter out posts and threads based on specific keywords.
I have 0 interest in building one myself as I find the HN site good enough for me.
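For what it's worth, the ignore/keyword-filter feature described above is mostly a pure function over titles; a minimal sketch (the block list is illustrative, and the post shape assumes the `title` field the HN APIs return):

```python
def should_hide(title: str, blocked: list) -> bool:
    """True if the post title contains any blocked keyword (case-insensitive)."""
    lowered = title.lower()
    return any(kw.lower() in lowered for kw in blocked)

def filter_front_page(posts: list, blocked: list) -> list:
    """Drop posts whose titles match the block list.

    `posts` are dicts with a "title" key, as the HN APIs return them.
    """
    return [p for p in posts if not should_hide(p.get("title", ""), blocked)]
```

An ignore list for users would be the same shape, keyed on the `author` field instead of the title.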
I've paused development on it for a bit to work on something else, but let me know if you have an interest and I'll post some sample output to github.
And actually it’s funny: self-driving cars and cryptocurrency are continuing to advance dramatically in real life but there are hardly any front page HN stories about them anymore. Shows the power of AI as a topic that crowds out others. And possibly reveals the trendy nature of the HN attention span.
I was looking for a full time remote or hybrid non-AI job in New York. I'm not against working on AI, but this being a startup forum I felt like listings were dominated by shiny new thing startups, whereas I was looking for a more "boring" job.
Anyway, here's:
- a graph: https://home.davidgoffredo.com/hn-whos-hiring-stats.html
- the filtered listings: https://home.davidgoffredo.com/hn-whos-hiring.html
- the code: https://github.com/dgoffredo/hn-whos-hiring
For instance, there are now dozens of products such as cryptocurrency-backed lending via EMV cards or fixed-yield financial instruments based on cryptocurrency staking. Yet if you want to use cryptocurrencies directly the end-user tools haven't appreciably changed for years. Anecdotally, I used the MetaMask wallet software last month and if anything it's worse than it was a few years ago.
Real developments are there, but they are much more subtle. Higher-layer blockchains are really popular now, whereas they were rather niche a few years ago; these can increase efficiency but come with their own risks. Also, various zero-knowledge-proof technologies that were developed for smart contracts are starting to be used outside of cryptocurrencies too.
On the legal front, there have been some notable "wins" for cryptocurrency advocates: e.g., the U.S. lifted its sanctions against Tornado Cash (the Ethereum anonymization tool) a few months ago.
On the UX front, a mixed bag. The shape of the ecosystem has stayed remarkably unchanged. It's hard to build something new without bridging it to Bitcoin or Ethereum, because that's where the value is, but that means Bitcoin and Ethereum aren't under much pressure to improve _themselves_. Most of the improvements actually getting deployed optimize the interactions between institutions, and do less to improve the end-user experience directly.
On the privacy front, also a mixed bag. People seem content enough with Monero for most sensitive things. The appetite for stronger privacy at the cryptocurrency layer mostly isn't there yet, I think, because what newsworthy de-anonymizations we have are by now being attributed (rightly or wrongly) to components of the operation _other_ than the actual exchange of cryptocurrency.
- Stablecoins as an alternative payment rail. Most (all?) fintechs are going heavy into this
- Regulatory clarity + ability to include in 401(k)/pension plans
With blockchain/smart-contract tech you can build an app that, from the user's perspective, looks like any other web app, but that has its state fully on the blockchain and all computation done by miners as smart-contract evaluation. It is self-funding, charging users a small amount on each transaction (something that scares off most people, but crypto users are used to it, and the price can be fractions of a cent). The wallet does double duty as auth, since it's just a public/private key pair after all, and that is a big feature.
Another big thing it does for you is handle synchronization -- there is a single, canonical blockchain state, and maintaining it and keeping it consistent is someone else's job, paid for and overseen by an ecosystem that is much larger than what you are building.
A friend and I built a POC Reddit clone on top of Solana this way, as just a bunch of static html/js and a smart contract, without any servers/central nodes and without users needing to install anything or act as a node themselves. I'm not aware of any other tech that can realistically do this.
Unfortunately the blockchain is a very hostile, expensive and limited computing environment. You can farm out storage to other decentralized systems (we used IPFS) and so long as you're not a custodian of anyone's money you're not as worried about security, but the smart contract environment is still extremely restrictive and expensive per unit compute.
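The "wallet does double duty as auth" point above is just challenge-response over a key pair. A toy sketch of the flow, with the loud caveat that HMAC over a random secret stands in for the real Ed25519 signature (so this stays runnable with the standard library), and every name here is made up:

```python
import hashlib
import hmac
import os
import secrets

class ToyWallet:
    """Stand-in for a wallet. A real one holds an Ed25519 keypair and the
    verifier only ever sees the public key; HMAC is symmetric, so this is
    only a sketch of the *flow*, not of the cryptography."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # stands in for the private key
        # The address is derived from key material, as with real wallets.
        self.address = hashlib.sha256(self._key).hexdigest()[:16]

    def sign(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

    # Exposed only so the toy server can verify; a real verifier would
    # check the signature against the public key alone.
    def verify(self, challenge: bytes, signature: bytes) -> bool:
        return hmac.compare_digest(self.sign(challenge), signature)

def login(wallet: ToyWallet) -> bool:
    """Server issues a fresh nonce; the wallet signs it; the server verifies.
    No password and no account table: the address *is* the identity."""
    nonce = os.urandom(16)
    return wallet.verify(nonce, wallet.sign(nonce))
```

The appeal is that signup, login, and payment authorization all collapse into the same "sign this bytes blob" primitive the user already has.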
The integration situation is broke-ass JS/TS "breaking changes twice a week to keep them on their toes" hobby-software shit. If you precisely copy the examples from the docs, there may be an old version where it almost works. My friend also did Rust integrations, where my impression is things are somewhat better, but that's not saying much.
Decentralization is a spectrum and we were pretty radical about it back then. The motives were more about securing universal access to critical payment and communications infrastructure against generic Adversaries and the challenge of achieving bus factor absolute zero than about practicality.
AI is now a field where the claims are, in essence, that we're going to build God in 2 years, make the whole planet unemployed, and create a permanent underclass. AI researchers are being hired at $100-300m comp. I mean, it's definitely a very exciting topic, and it polarizes opinion. If things plateau, the claims disappear, and it becomes a more boring grind over diminishing returns and price adjustments, I think we'll see the same thing: fewer comments about it.
It's hard to tell how total that was compared to today. Of course the amount of money involved is way higher now, so I'd expect it not to be as large, but expanding the data set a bit could be interesting, to see whether there are waves of comments or not.
It never had a public product, but people in the private beta mentioned that they did have a product, just that it wasn't particularly good. It took forever to make websites, they were often overly formulaic, the code was terrible, etc etc.
10 years later and some of those complaints still ring true
Even 4-6 articles out of the top 10 for a single topic, consistently, seems crazy to me.
I think many here, if they're being honest with themselves, are wondering what this means for their career, their ability to provide and live, and their future, especially if they aren't financially secure yet. For tech workers, the risk/fear that they are not secure in long-term employment is a lot higher than it was 2 years ago, even if they can't predict how all of this will play out. For founders/VCs/businesses/capital owners/etc., conversely, the hype is that they will be able to do what they wanted to do at lower cost.
More than crypto, NFTs, or whatever other hype cycle, I would argue LLMs could in the long term be the first technology where tech-worker demand declines even as the amount of software grows. The AI labs' focus on coding as their "killer app" probably does not help. While we've had "hype" cycles in tech, it's rarer to see fear cycles.
Like a deer staring into oncoming headlights (i.e., I think AI is more of a fear cycle than a hype cycle for many people), people are looking for any information related to the threat, taking focus away from everything else.
TL;DR: While people are fearful/excited (depending on who you ask) about the changes coming, and as long as the rate of change remains at its current pace, IMO the craze won't stop.
I feel like that’s an increasing ratio of top posts, and they’re usually an instant skip for me. Would be interested in some data to see if that’s true.
It’s exhausting.
Eh eh
This sums up the subject this article is about.
My intuition is that we moved through the hype cycle far faster than mainstream. When execs were still peaking, we were at disillusionment.
What's so confusing about this? Thinking machines have been invented.
I am so floored that at least half of this community, usually skeptical to a fault, evangelizes LLMs so ardently. Truly blows my mind.
I’m open to them becoming more than a statistical token predictor, and I think it would be really neat to see that happen.
They’re nowhere close to anything other than a next-token-predictor.
I don't understand people who seem to have strongly motivated reasoning to dismiss the new tech as just a token predictor or stochastic parrot. It confuses the means with the result. It's like saying Deep Blue is just search, that it's not actually playing chess, that it doesn't understand the game; as if that matters to the people playing against it.
> LLMs make the easy stuff easier
I think this is the observation that's important right now. If you're an expert that isn't doing a lot of boilerplate, LLMs don't have value to you right now. But they can acceptably automate a sizeable number of entry-level jobs. If those get flushed out, that's an issue, as not everyone is going to be a high-level expert.
Long-term, the issue is we don't know where the ceiling is. Just because OpenAI is faltering doesn't mean that we've hit that ceiling yet. People talk about the scaling laws as a theoretical boundary, but it's actually the opposite. It shows that the performance curve could just keep going up even with brute force, which has never happened before in the history of statistics. We're in uncharted territory now, so there's good reason to keep an eye on it.
On the one hand, I completely agree with you. I've even said before, here on Hacker News, that AI is underhyped compared to the real world impact that it will have.
On the other, I run into people in person who seem to think dabbing a little Cursor on a project will suddenly turn everyone into 100x engineers. It just doesn't work that way at all, but good luck dealing with the hypemeisters.
Some people are terminally online and it really shows...
When will people realize that Hacker News DISCUSSIONS have been taken over by AI? 2027?
ETA: I am only partly joking. It's abundantly clear that the VC energy shifted away from crypto as people who were presenting as professional and serious turned out to be narcissists and crooks. Of course the money shifted to the technology that was being deliberately marketed as hope for humanity. A lot of crypto/NFT influencers became AI influencers at that point.
(The timings kind of line up, too. People can like this or not like this, but I think it's a real factor.)
> To aggregate overall, of the 2816 posts that were classified as AI-related, 52.13% of them had positive sentiment, 31.46% had negative sentiment, and 16.41% had neutral sentiment.
How is that reconciled with the reading that the sentiment on HN is negative?
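For what it's worth, those percentages can be sanity-checked and turned back into approximate counts; a quick calculation using only the figures from the quote:

```python
TOTAL = 2816  # AI-related posts, per the quoted aggregate
shares = {"positive": 52.13, "negative": 31.46, "neutral": 16.41}

# The three shares should account for (essentially) all posts.
assert abs(sum(shares.values()) - 100.0) < 0.01

# Approximate raw counts implied by the percentages.
counts = {label: round(TOTAL * pct / 100) for label, pct in shares.items()}

# A crude net-sentiment score: positive minus negative share, in points.
net = shares["positive"] - shares["negative"]
```

So positive outnumbers negative by roughly 20 points, which is why the headline aggregate and a "HN is negative" reading are hard to square without slicing by topic or by time.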
TL;DR: Hacker News didn't buy into AI with ChatGPT or any consumer product; interest spiked when GPT-4 was unlocked as a tool for developers.
@zachperkel: while "train" is suggestive of something growing over time, as in the "Trump Train", I'm pretty sure you meant "trend"? As in the statistical meaning of trend, a pattern in data?
AI hype is driven by financial markets, like any other financial craze since the Tulip Mania. Is this an opinion, or a historical fact? Gemini, at least, tells me via Google Search that Charles Mackay's Extraordinary Popular Delusions and the Madness of Crowds is a historical work examining various forms of collective irrationality and mass hysteria throughout history.
Discussions about the conflicts between political parties and politicians to pass or defeat legislation, and the specific advocacy or defeat of specific legislation; those were not considered political. When I would ask why discussions of politics were not considered political, but black people not getting callbacks from their resumes was, people here literally couldn't understand the question. James Damore wasn't "political" for months somehow; it was only politics from a particular perspective that made HN uncomfortable enough that they had to immediately mod it away.
At that point, the moderation became just sort of arbitrary in a predictable, almost comforting way, and everything started to conform. HN became "VH1": "MTV" without the black people. The top stories on HN are the same as on Google News, minus any pro-Trump stuff, extremely hysterical anti-Trump stuff, or anything about discrimination in or out of tech.
I'm still plowing along out of habit, annoying everybody and getting downvoted into oblivion, but I came here because of the moderation; a different sort of moderation that decided to make every story on the front page about Erlang one day.
What took over this site back then would spread beyond this site: vivid, current arguments about technology and ethics. It makes sense that after a lot of YC companies turned out to be comically unethical and spread misery, rent-seeking, and the destruction of workers' rights throughout the US and the world, the site would give up on the pretense of being on the leading edge of anything positive. We don't even talk about YC anymore, other than to notice what horrible people and companies are getting a windfall today.
The mods seem like perfectly nice people, but HN isn't even good for finding out about new hacks and vulnerabilities first anymore. It's not ahead of anybody on anything. It's not even accidentally funny; templeos would have had to find somewhere else to hang out.
Maybe this is interesting just because it's harder to get a history of Google News. You'd have to build it yourself.
Sad times...
> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes. Comments should get more thoughtful and substantive,...
Might take a long while for everyone to get on the same page about where these inference engines really work and where they don't. People are still testing things out, haven't been in the know for long, and some fear for the job market.
There is a lot of FUD to sift through.
Most of them are fairly useless. It feels like the majority of the site's comments are written by PMs at the FAANG companies running everything through the flavor-of-the-month LLM.
But let me say something serious. AI is profoundly reshaping software development and startups in ways we haven’t seen in decades:
1) So many well-paying jobs may soon become obsolete.
2) A startup could easily be run by only three people: a developer, marketing, and support.