Meanwhile, simonw and his retiree friends are having the "time of their lives", so that's good I guess :)
Radical changes bring radical opportunities too, so "having the time of their lives" is not necessarily incompatible with "adapting to profound disruption."
Consider that the traits that make them optimistic about this technology are exactly the traits required to navigate this Brave New World.
The paradigm of feeding money into the token-generating slot machine so that it can maybe produce what you want, where you only get results at scale if you have enough money, just isn't accessible to many people. In this context simonw and Karpathy are starting to look more and more like degenerate gamblers who admonish everyone else for not joining in, while telling us all that the perks the casino gives them are just fabulous and we're all missing out.
And maybe you'll say "Yeah, but things will get cheaper in the future, they're just early adopters who can afford it..." Well, will they? And will those people make it to that shining-beacon-on-the-hill future? Or will they find themselves out of a job because of the current economic calamity unfolding as a result of the election of an American Nero, supported by the ultrawealthy tech oligarchs who are bringing this technology into existence?
Do these people actually want to improve the lives of the common people -- or are they more concerned with getting a high score in the form of the amount in their bank account and clout on social media?
I'm a bit more optimistic about democratized access to AI. Even today's weaker open source/weight models are plenty powerful enough to supercharge our individual capabilities, and based on current trends, they won't be more than 3 - 6 months behind the frontier models. This may not bode well for the AI labs because their moat is always evaporating, but it's a huge boon to us plebs.
Consider that they're closer to death than birth and are unlikely to survive into the shit-hole world they're creating. Not passing on those traits to the next generation is a massive failure. These assholes aren't disrupting their own lives, just the poor slobs who haven't made it yet.
https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=tru...
https://news.ycombinator.com/item?id=44483567 is pretty much (paraphrasing) sucks to be you if you can't make it work.
Well, people who haven't yet crossed a threshold of experience are not in a position to self-assess and course-correct if their long-term learning is being affected. And even less so if there is pressure to be hyper-productive with the help of AI.
Speculating here but I think even seniors who rely on AI all the time and enjoy the enhanced output are going to end up with impostor syndrome over the things they suspect they can no longer do without AI, and FOMO about all the projects they haven't yet attempted with AI despite working as hard as they can.
One can argue, convincingly perhaps, that Anthropic isn't right and/or is just marketing, and what they're saying could be complete BS, but the very fact that there is doubt suggests most people believe that no one who can "hold it right" actually exists.
I’m quite pro-AI, but given the radical asymmetry between the upside and the downside (the upside is at best maximum bliss for all existing humans, which has a finite limit, while the downside is the end of humanity, which is essentially infinitely bad), our march forward in this area needs to be at least slightly more responsible than what we are doing now.
At most I've seen him overhype some stuff, but probably less than most in the tech-influencer sphere.
It wasn’t impressive when you wrote it by hand, it’s still not impressive when an AI does all the work for you.
Mocking the former is now culturally acceptable on HN, the latter not so much.
I have the opposite impression. In the past, I'd very often react "WTF who'd ever want to use it?" in my mind, whereas the comments were very kind and supportive.
Now, whenever someone submits their AI slop, they mostly hear some comments about this. The very fact that this whole thread is about bashing Simon speaks for itself. The HN community is split between those aggressively promoting it, those hating it, and the rest of us using it in one way or another, not yet sure about full-scale consequences for the future, and quite frankly powerless about it.
After several hundred billion dollars spent on LLMs, they can almost reproduce the capabilities of a partially deaf, visually impaired secretary with severe brain damage.
Humans are cheaper, and they can actually learn things. Even the brain-damaged secretary can learn better than an LLM can, and it doesn't cost hundreds of millions to train one.
Computers have been automating things for decades. My father had a private secretary at work, something considered normal for a mid-career executive back then (he was an engineer!). I've done very well in my career but a private secretary is quite out of reach. That doesn't mean that we had a "lost generation" on our hands.
And yesterday a friend showed me what his 11 year old was vibing up with Claude Code. A whole web app he can use to help organize some stuff with his friends related to Roblox (I dunno what it was meant to be, you had to log in for most of it). The kid is amazed that his father understands all the mysterious symbols Claude generates. And he probably always will, the same way I listen to stories about how my father could fix car engines with mild amazement as well.
There's a huge market for doom stories out there, and the NYT is a rag that was just yesterday reporting that Adam Back was Satoshi based on nothing deeper than the journalist's gut feeling. "Studies" in social science can show whatever the author wants, and the authors want clicks from their AI-hating left-wing readership. Stay skeptical!
This says more about how companies have chosen to allocate pay in the current era than anything about technology though, no?
Ban AI development?
In the USA you can't even get healthcare without a job. Meanwhile tech companies are dumping billions into the race to make humans unemployable. So yeah, until people feel like their leaders can be trusted to have their back, they're going to be anxious.
It's fast approaching, and the sooner it gets here the sooner the masses turn to a Butlerian Jihad.
People with nothing to lose will feel empowered by taking everything from the people who they feel are responsible for taking everything from them.
There's a chance this kind of thing becomes a social contagion that spreads, much like suicide or school shootings.
I'm not sure what the solution is to it once it starts. I guess people like Thiel won't be able to do antichrist talks at the Vatican anymore.
> Ban AI development?
The Butlerian Jihad will never be less appealing than it is today.
Young people were already struggling to build lives and families before the AI recession. It’s hard to fathom having any hope for raising a family or finding meaningful work in the PE slop driven economy.
-----------------
Perhaps schools need to adapt to AI use and recenter the goals of education in the minds of students. If AI use impairs your development, you are only being efficient in your evasion of education.
i.e. Students need to be taught that learning to efficiently pump out AI written essays isn't the same thing as learning to reason and express themselves. AI tools will evolve and become easier and easier to pick up and use. Using your own mind is a slower and more difficult skill to develop, but it makes the difference between going through life as a human being or a mere meat-puppet for AI. It will always be far easier for a human to pick up AI tools and learn them from scratch than it will for a meat-puppet to remedy their lack of human development.
They'll get right on it.
The answer may be to focus less on output and more on the process. e.g. Instead of sending students off to do essays at home and then merely grading what gets handed in, perhaps teachers should run workshops where students work on their essays while receiving guidance. i.e. Everybody works in the classroom on their essay and talks to each other and the teacher about what they're doing. Grades would be at least partly based on participation, and teachers would get a better sense of what students are actually able to write themselves. If Johnny sits back and picks his nose in the workshop and then hands in a paper that's suspiciously good, it's probably slop even if it isn't obviously so.
Of course, doing this sort of thing would mean taking time away from lectures and rote learning. Finding the right balance is no easy task, and it's going to take good teachers to blaze the way. That can only happen if they're backed with resources and the freedom to alter the curriculum.
Or maybe, you know, he's an introvert.
These types of surveys are pretty much useless. Just go by people's revealed preferences. They're using the technology. They don't have to. I'm sure most teachers and schools would prefer them not to.
Why do they have to use it? Have standards gotten higher in schools such that they will be left behind if they don't? Is there peer pressure to use it? Is there some social aspect I'm unaware of?
Of course not. People find the technology useful. Social media I understand as it's harder to break away because friends use it to communicate. But that's not true for AI.
And then they have some doomer media telling them they should be concerned and scapegoat the technology. Gen AI will prevent you from being an artist or poet?
Yeah, I just don't buy it.
People don’t do things only because they want to.
Do you think the existence of millions of trash pickers getting cancer combing through mounds of toxic waste across the world reveals a preference for getting cancer by combing through hazardous waste?
Everyone is clawing and crab-bucketing to escape what they believe to be the inevitable suffering of laborers in a post-labor economy.
So, if this guy I hate is using AI and AI is making the world worse then guess what - I'm using AI too. Because I'm not gonna be left behind, right?
In fact, I'm going to use AI more. I'm the most AI-ist out of all the AI-believers. I'm practically an AI apostle.
Because, when our new overlords come, I intend to be spared. Not like you losers. I, for one, welcome our new overlords.
That's what they're thinking.
When you're constantly being force-fed the narrative that you must use AI or be left behind, using it is no longer a revealed preference; it is a survival mechanism.
> Why do they have to use it? Have standards gotten higher in schools such that they will be left behind if they don't? Is there peer pressure to use it? Is there some social aspect I'm unaware of?
Did you not read the article or not read it carefully? Try again, your comment shows a massive lack of understanding and little else.
> Many respondents did acknowledge that A.I. might make them more efficient in school and the workplace, he said. But they were concerned about how the technology would affect their creativity and critical thinking skills.
So it's hurting their creativity and critical thinking skills. I wonder if the existence of cars is hurting their ability to stay in shape.
Revealed preferences from here:
> In the study, about half of young people reported using A.I. on either a daily or weekly basis, similar to the previous year. Just under 20 percent said they did not use A.I.
The rest of the article is mostly anecdotes or vague notions about social skills.
Why don't you contribute to the conversation instead of just telling me I don't understand the issue
> The percentage of respondents ages 14 to 29 who said they felt hopeful about A.I. declined sharply since last year, down to 18 percent from 27. Young adults’ excitement about artificial intelligence dropped, too, and nearly a third of respondents indicated that the technology made them feel angry. [emphasis mine]
> ...
> In interviews, young adults cited a variety of reasons for their reservations about artificial intelligence, including the threat to entry-level jobs, the replacement of human interaction and the spread of A.I.-fueled misinformation on social media.
> Sydney Gill, 19, a freshman at Rice University in Houston, said she had been optimistic about artificial intelligence as a learning tool when she was in high school. Now, as she tries to select her college major, her outlook has become less rosy.
> “I feel like anything that I’m interested in has the potential of maybe getting replaced, even in the next few years,” she said.
A young adult can totally abstain from AI and be negatively affected by all of that. And those are the kinds of things that could make people angry at the technology.
And I am not even talking here about other ethical issues: training data, fewer junior job positions, job replacement of journalists with LLM-equipped contractors, etc.
LLMs make my personal and work life so much better, but social life unbearable. Is it worth the trade-off? I guess it doesn't matter at this point.
Most inventions are a net positive: The steam engine, vaccines, chimneys.
A few are net-negative: grenades, leaded gasoline, asbestos insulation.
If we can no longer trust that a potential job candidate in a video call actually exists, they will have to be flown in. That's a cost. If we can no longer trust that an employee who wrote a document actually thought about it at all and must be questioned to make sure, that's a cost. Those costs will add up.
A written document or a video essay used to be proof-of-thought and now it's not. If we can't find new proofs of thought, and if AI doesn't get vastly better to the point where we can trust it blindly, then I think this will all be a net-negative.
One of the motivations to build data centers as fast as possible and improve tools as fast as possible may be to get to net-positive before it all gets banned. This article exists. The clock is ticking.
I don’t even think that’s actually the case - we’re in a soft recession. AI has nothing to do with it. But that’s not what kids are being told.
Great marketing campaign guys. Just wait. If you think sentiment around AI is negative now you haven’t seen shit.
"Nothing" is a stretch. Major capital now being allocated toward building AI data centres, away from what it was doing previously, is absolutely a contributing factor. Of course it's not the only one, but there is never just one reason for anything.
"Kids"? You mean people under 30 taking jobs to build their own financial lives?
Maybe there is some place left that needs young people badly enough that they are willing to open up opportunities, or someplace left ripe and weak enough that the youth will take it over by force.
I think most of us know that even if AI could do all of our jobs, it won't be to give us free products and services.
It’s trickle-down economics 2.0. The bullshit is the same.
I’m deliberately trying to understand things more deeply now to combat that. We’ll see how it goes.
How so? Colloquially, AI currently means LLMs. Why would we revere LLMs as our greatest achievement?
I think a lot of people are conflating two ongoing things: the emergence of AI and stagnant (if not recessionary) economies across the globe. It appears as if AI is producing so many more negative externalities, but in reality, if not for AI, we'd 100% be in a recession.
The social contract is being broken. Being broken just on paper, just on the hopes that it can be broken for good.
It absolutely did. Factory owners used their clout to put workers out of a job and then lobbied for military aid and capital punishment instead of negotiating with the workers. IMO, the only tactic for workers that has EVER had lasting success is solidarity through some form of unionization.
Read "Blood in the Machine" if you want to see what happened to the losers of the industrial revolution. The book does contain some fictional embellishments but that is explained up front, and noted when it comes up.
A little confused as to how exactly a handful of unprofitable companies are keeping us out of a recession? GDP is not the economy. We have been in a "recession" for a while now, not that that word even really means anything anymore.
IMO we effectively are in a recession already (and have been for a while) as far as the real job market goes, the AI boom is only stopping it from showing up in stock market valuations, which is great if you're heavily invested in the stock market, but pretty meaningless if you're a laborer without assets, with debt, and trying to find a job.
Things can certainly get worse overall than they are now (and due to bad leadership, this seems inevitable), but when they do the delta between now and when we are in an official recession will be far greater felt by people who are currently being propped up by stock and home values than it will for the many people who are already struggling.
That's fallacious thinking. Technological developments aren't instances of some repeating phenomenon; they're distinct, unique events with their own characteristics. You need to consider those characteristics instead of gesticulating at the past for a prediction of the future.
And even if you're correct, you're missing a lot. I'll explain by analogy: at the beginning of a genocide, as someone whose community is in the process of being murdered, you could totally say "genocides have happened before, some people will go away, others will survive." But that's cold comfort for someone who's about to be killed with their family. AI likely means economic death (or at least hardship) for a lot of people who don't have the needed combination of psychopathy, luck, and wealth to succeed in the new order.
Yeah. How many times have I seen people here say, oh, it's just the same as the job losses during automation and industrialization. How is that making things better? "Yeah, just more mass poverty and more wealth inequality, what are you worried about!"
Also, during automation there was a lot of work you could switch to. What are the options now? Start another vibeslop startup so that you can pay OpenAI for tokens?
The only explanation for people saying this is that they don't understand they will be on the line later, just like the people displaced now. But the dream of being the 0.1% who get to be on top and monetize everybody else is too tempting, I guess.
I doubt most people who say things like that "dream of being the 0.1%". I think it's more typical that they're just someone who thoughtlessly repeats propaganda memes without considering the implications. I think that's something software engineers are particularly prone to do, despite frequently having a self-image of being "intelligent."
Or social media, or targeted advertising, or fast food.
Yes, things look bleak for current college grads. The bitter pill to swallow is that they began college in the boom times of 2021-22, and they saw the college grads of those years walking straight off campus into high-paying jobs which don’t exist anymore. They only existed because of the obscene gobs of money whizzing around the economy post-COVID. Whether the shrinkage is due in part or in whole to AI is in the eye of the beholder. But if we had fallen into a broad-based recession, the numbers would look a lot bleaker. Plenty of companies that could automate away entry level positions with current tech haven’t done so, whether due to organizational inertia or ignorance or whatever. That organizational inertia would’ve been much more easily overcome by a market collapse.
Even in our own organization, we've almost stopped hiring juniors and interns completely. We just leverage AI more and more.
So I can understand how most Gen Zs feel threatened by AI.
There are basically 2 groups who are loving AI:
* Seniors who have deep knowledge, so AI is just there to help them accomplish their goals cheaper and faster
* Gen Zs who are starting their own businesses and have embraced AI
My advice to young people is to embrace AI as fully as you can. Learn to be extremely productive with it. Learn to use it to create businesses. Burying your head in the sand hoping AI will collapse is not going to work in their favor.
PS. You can get a pretty good idea of how young people view AI on Reddit. Reddit users tend to be younger and less affluent. Save for a few subs, most of Reddit is very anti-AI. I'd guess most of them wish AI would collapse soon so they can go back to a world where human intelligence matters more.
I don't mean wrappers around Claude or OpenAI APIs.
All of the most promising companies I know today are very small and are leveraging AI to solve physical problems in the real world that just wouldn't be possible with so few people even a few years back.
If starting a business were so easy, almost all of us who work on salary would go do it. This advice is like saying, if your local football club gets shut down, just work hard enough to make it into Manchester United.
Would we? Starting a business is easy. Building a profitable business isn't even that hard. Wanting pleasure in our work is what stops us. Running a business generally isn't much fun. We work salary because it means we can focus on the enjoyable parts of the business, letting someone else deal with the crap.
However, even if that holds true (which is a big if - right now I wouldn't want to run a business backed by vibe software), and even if there are enough such business ideas to go around, there's going to be quite a lot of turmoil in the meantime.
I'm seeing the parent's point along these lines: "me and all my friends are starting businesses being the middlemen between WordPress and (people who want websites)". It's not that it won't work, it's just a shit business model.
How could your "business" ever make money if any idiot with a $20 CC subscription can recreate it in a weekend? And no, "I can prompt better than them" is not a differentiator.
If you truly believe this, you'd invest every cent you have into Nvidia, TSMC, and energy companies.

How will this help them? If LLMs are going to replace workers and reduce the number of available jobs, how will fully embracing an LLM help an individual? To me it seems the most it could do is put them ahead of people who won't embrace LLMs... but if everyone took this advice, then the advice would do nothing.
We've always had offshoring too, and the same concerns existed there. The more corporate companies will use it, and either eventually get burned and revert, or just hold on for dear life as they circle the toilet.
Curious how these companies will fare when there are no senior-level candidates left to replace the ones that are retiring in a few years. I guess everyone's hoping AI will be good enough to just replace the entire field, as one final "fuck you" to the generations that follow, from the generations that had everything and pulled up the ladder.
I think if you want to change the world, robots that can pick strawberries and change bedpans are it. People like to gush about "more Nobel prize research" and such, but Nobel prizes are valuable because a limited number are given out, not because the research is valuable in and of itself. (e.g. Kuhn would tell you normal science is "apply for grant - write paper - repeat")
It's game theory. If you betray ASAP you get to monetize others who hold out.
It works until you yourself get ousted the same way. So the most enthusiastic people are either old enough that they can leverage their status and won't face the consequences in their lifetime, OR young enough that they don't understand the proposition, have nothing to lose, and, when they look around and see everybody doing it, have no choice but to do the same.
If everybody took a stance against corps stealing our work and reselling it to us then we would 100% prevail but what are principles against personal profit...
"We need to work more and help train the LLMs of the superrich to make the same money" became the new "we will have more free time and more money thanks to AI," but everybody is too busy trying to outrace the next guy, so no one noticed.
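The "betray ASAP" dynamic described above is essentially a prisoner's dilemma. A minimal sketch, with entirely made-up payoff numbers chosen only to illustrate the structure:

```python
# Prisoner's-dilemma sketch of "adopt early vs. hold out together".
# Payoff numbers are illustrative, not data: (my_payoff, their_payoff).
PAYOFFS = {
    ("hold_out", "hold_out"): (3, 3),  # everyone resists: collective win
    ("adopt", "hold_out"): (5, 0),     # early adopter monetizes the holdouts
    ("hold_out", "adopt"): (0, 5),     # holdout gets ousted
    ("adopt", "adopt"): (1, 1),        # everyone defects: race to the bottom
}

def best_response(their_move: str) -> str:
    """Pick the move that maximizes my payoff, given the other side's move."""
    return max(("adopt", "hold_out"),
               key=lambda me: PAYOFFS[(me, their_move)][0])

# Adopting dominates no matter what the other side does...
assert best_response("hold_out") == "adopt"
assert best_response("adopt") == "adopt"
# ...even though mutual holding out (3, 3) beats mutual adoption (1, 1).
```

That last point is the whole comment in two lines: individually rational defection produces the collectively worse outcome, which is why "what are principles against personal profit" is the equilibrium without coordination.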
If I had a genie of many wishes I'd wish for
1. No more deficit spending
2. Budgets cannot exceed prior year's intakes
3. An end to progressive taxation, but an increase in a flat tax rate to pay off all public debt. As the debt is paid a negative tax rate will replace it.
4. All politicians' pay tied to a fixed/capped multiple of the median income in the country
5. The building of a public wealth fund, built from any benefit granted to a company through the government -- want a tax break or a publicly funded stadium? Give us a 50% share in the team. Want a bailout for your bank/automaker? Sell us preferred shares at high rates (to reflect the risk). Want publicly funded power plants for your GPUs? Then we want a share of your AI company for our public wealth fund.
6. Forced public liquidity of large companies (say $1B) to ensure the public is able to participate in the overall economy, rather than just private networks of back scratchers
7. Politicians who want to invest must invest in an equal weight russell 3000 (or an even wider spread of US stocks) to ensure vested interest in the country, but divested interest in any specific company/sector.
8. Capped political spend.
9. A concerted effort to move towards known maxima rather than stepping towards local maxima with fear of going through local minima too.
10. A publicly funded opt-in national service program for building houses. If you give 4 years of your life to building houses we'll give you a 2 bed 1 bath and a salary along the way. (Obviously, details tbd, but something along that idea)