So the problem isn't robots, it's the structure of how we humans rely on jobs for income. I don't necessarily feel like it's the AI company's problem to fix either.
This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.
-- Frank Herbert, Dune
The "government" is just the set of people who hold power over others.
Those who will own machines that can match humans will have unprecedented power over others. In effect they'll grow more and more to be the government.
Even now, companies hold more and more of the power over others and are more part of the government than ever before.
So it confuses me when you say it's what the government is for? Who would that be? If we pretend it would still be a democracy then I guess you're saying it's everyone's problem?
So here we are, let's discuss the solution and vote for it?
Taking work away from people is practically the definition of technology. We invent things to reduce the effort needed to do things. Eliminating work is a good thing, that’s why inventing things is so popular!
What ends up happening is the amount of work remains relatively constant, meaning we get more done for the same amount of effort performed by the same amount of people doing the same amount of jobs. That’s why standards of living have been rising for the past few millennia instead of everybody being out of work. We took work away from humans with technology, we then used that effort saved to get more done.
Either way, without that social pattern, I'm afraid all this does is enshrine a type of futuristic serfdom that is completely insurmountable.
A total shift of human mentality. Humans have shown time and again there is only one way we ever get there. A long winding road paved with bodies.
You think that's going to change just because many more people find themselves without?
That may be where the USA ends up. We Australians (and probably a few others, like the Swiss) have gone to some effort to ensure we don't end up there: https://www.abc.net.au/listen/programs/boyerlectures/1058675...
As for the rest, you know what's going to happen.
Only if the socialists win. Capitalism operates on a completely different principle: people CREATE wealth and own everything they have created. Therefore, AI cannot reduce their wealth in any way, because AI does not impair people's ability to create wealth.
Both are fairly uncontroversial: (1) many humans not only benefit from jobs but often depend on them for their livelihoods, and (2) should be self-evident.
This can change if the socioeconomic system is quickly enough and quite substantially restructured to make humans not depend on being compensated for work that is now being done by robots (not only financially but also psychologically—feeling fulfilled—socially, etc.), but I don’t see that happening.
But a lot of jobs aren't like that. I doubt many people who work in, say, public relations, really think their job has value other than paying their bills. They can't take solace in the fact that the AI can write press releases deflecting the blame for the massive oil spill that their former employer caused.
There's no law of nature saying that a human must work 40 hours per week or starve.
The current dependence on work is a consequence, not a goal.
Neither government nor corporations are going to "save us", simply because of sheer short-termism and incompetence. But that same incompetence will make the coming dystopia ridiculous.
…we’ll add three hours to our day?
But seriously, I support what you are saying. This is why the entire consumer system needs to change, because in a world with no jobs it is by definition unsustainable.
What does the one have to do with the other?
But even then, currently plenty of people find their fun in creating - when it's not their job. And they struggle with finding the time for that. Sometimes the materials and training and machines for that also. Meanwhile a majority of current jobs involve zero personal creativity or making or creating. Driving or staffing a retail outlet or even most cooking jobs can't really be what you are looking for on your argument?
Is the road to post-scarcity more likely with or without robots?
Personal belief, but AI coming for your children is not a valid argument against AI. If AI can do a job better and/or faster, they should be the ones doing the parenting. Specialization is how we got to the future. So the problem isn't AI, it's the structure of how we humans rely on parenting for our children. I don't necessarily feel like it's the AI company's problem to fix either. This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.
…
You’re right about one thing within reason… this is what a rational government should be for… if the government were by the people and for the people.
Addendum for emphasis: …and if that government followed the very laws it purports to protect and enforce…
Procreation and progeny is our only true purpose — and one could make the argument AI would make better parents and teachers. Should we all capitulate our sole purpose in the name of efficiency?
Progress is great obviously, but progress as fast as possible with no care about the consequences is more motivated by money, not the common good.
Best summarized in this comic: https://x.com/9mmballpoint/status/1658163045502267428
They're busy selling watches whilst people can still afford them thanks to having jobs.
Not the AI company’s fault per se, but generally the US government does a very poor job of creating a safety net, whether through intent, ineptitude, or indifference.
By the way, attacks were also leveled against Chinese and Japanese workers in California who were viewed as stealing the jobs of other “Americans”. So this viewpoint and tradition of behavior and capitalism runs very long in US history.
Replace the word robot with "automation" or "industrialization" and you have the last 200 years of human history covered.
The Luddites could have won, and we would all have $1500 shirts.
Do you know any lamp lighters? How about a town crier?
We could still all be farming.
Where are all the switch board operators? Where are all the draftsmen?
How many people had programming jobs in 1900? 1950?
We have an amazing ability to "make work for ourselves", and history indicates that we're going to keep doing that regardless of how automated we make society. We also keep traditional "arts" alive... Recording didn't replace live performances, TV/film didn't replace Broadway... Photography didn't replace painting...
E.g. if I was a truck driver and autonomous trucks came along that were 2/3rds the price and reduced truck related deaths by 99% obviously I couldn't, in good faith, argue that the rest of the population should pay more and have higher risk of death even to save my job and thousands of others. Though somehow this is a serious argument in many quarters (and accounts for lots of government spending).
Rent extraction hurts them in the long run. Because working class income gets absorbed by various forms of rent, they are more expensive to employ. Thus we fail to compete with, say, China, which socializes many costs and invests in productive industry. We are left with a top heavy society that as we can see is already starting to crumble.
None of this holds if you don't have anything of value to offer, and automation is concentrating power and value; AI is the extreme end of this. At some point the charade of democracy becomes too annoying to the ones at the top, and you get there faster by trying to rein them in.
So that would mean it is in fact the responsibility of the people at robot/AI companies (and across industries). It's not something we can just delegate to role-based authorities to sort out on our behalf.
In my home country, the people building the robots and job destroying AI have captured all three branches of government, and have been saying for over 40 years that they'd like to shrink government down to a size that they could drown it in a bathtub. The government can't be relied upon to do more than move its military into our cities to violently stifle dissent.
> This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.
Many will argue that the purpose of government is not to steer or prepare society, but rather to reflect the values of society. Traditionally, the body that has steered (prepared or failed to prepare) society for impending changes was religion.

Secondly, you assume in the first place that we can somehow build a stable post-scarcity society in which people with no leverage can control the super-intelligent agents with all of the power. The idea that "government will just fix it" is totally ignorant of what the government is or how it emerges. In the long run, you cannot have a ruling class that is removed from the keys to power.
Lastly, who says we should all support this future? What if I disagree with the AI revolution and its consequences?
It is kind of amazing how your path of reasoning is so dangerously misoriented and wrong. This is what happens when people grow up watching star trek, they just assume that once we live in a post scarcity future everything will be perfect, and that this is the natural endpoint for humanity.
They're not coming for all jobs. There are many jobs that exist today that could be replaced by automation but haven't been, because people will pay a premium for the work to be done by a human. There are a lot of artisan products out there which are technically inferior to manufactured goods, but people still buy them. Separately, there are many jobs which are entirely about physical and social engagement with a flesh-and-blood human being, sex work being the most obvious, but live performances (how else has Broadway survived in an era of mass adoption of film and television?), and personal care work like home health aides, nannies, and doulas are all at least partially about providing an emotional connection on top of their actual physical labor.
And there's also a question of things that can literally only be done by human beings, because by definition they can only be done by human beings. I imagine in the future, many people will be paid full time to be part of scientific studies that can't easily be done today, such as extended, large cohort diet and exercise studies of people in metabolic chambers.
The problem as I see it is not robots coming for my job and taking away my ability to earn a salary. That can be solved by societal structures like you are saying, even though I am somewhat pessimistic of our ability to do so in our current political climate.
The problem I see is robots coming for my mind and taking away any stakes and my ability to do anything that matters. If the robot is an expert in all fields why would you bother to learn anything? The fact that it takes time and energy to learn new skills and knowledge is what makes the world interesting. And this is exactly what happened before when machines took over a lot of human labour, luckily there were still plenty of things they couldn't do and thus ways to keep the world interesting. But if the machines start to think for us, what then is left for us to do?
Well, it would start by not tax-favoring the (capital) income that remains, which would have had to grow massively relative to the overall economy for that to have occurred.
(In fact, it could start by doing that now, and the resulting tax burden shift would reduce the artificial tax incentive to shift from labor intensive to capital intensive production methods, which would, among other things, buy more time to deal with the broader transition if it is actually going to happen.)
If robots are so advanced that they can do most jobs, the cost of goods will be close to zero.
Government will produce and distribute most of the things above, and you mostly won't need any money; but if you want extra, to travel etc., there will always be a bunch of work to do, and not 8 hours per day.
Private enterprise will always have some level of corrupting influence over government. And perhaps it sees current leadership as the lesser of two evils in the grand scheme. But make no mistake, government DOES ultimately have the power, when it chooses to assert itself and use it. It's just a matter of political will, which waxes and wanes.
Going back a century, did the British aristocracy WANT to be virtually taxed out of existence, and confined to the historical dustbin of "Downton Abbey"?
What if the issue isn't government failing to prepare society to move forward, but instead, AI businesses moving us in a direction that more and more people don't consider to be forward?
1. We don’t need everyone in society to be involved in trade.
2. We made it so that if you do not take part in trade (trade labor for income), you cannot live.
3. Thus, people will fear losing their ability to trade in society.
The question is, when did we make this shift? It used to just be slavery, and you would be able to survive so long as you slaved.
The fear is coming from something odd, the reality that you won’t have to trade anymore to live. Our society has convinced us you won’t have any value otherwise.
We did not make it so, this has been the natural state for as long as humans have existed, and in fact, it’s been this way for every other life form on Earth.
Maybe with post-scarcity (if it ever happens) there could be other ways of living. We can dream. But let’s not pretend that “life requires effort” is some sort of temporary unnatural abomination made by capitalists. It’s really just a fact.
We should compare how anti-government politicians talk versus how trained, educated neoclassical economists talk. The latter readily recognize that a valid function of government is to steer, shape, and yes, regulate markets to some degree. This is why we don’t have (for example) legal, open markets for murder.
Markets do not define human values; they are a coordination mechanism given a diverse set of values.
And even if the government did institute something like universal basic income, if all jobs were replaced, that would almost certainly mean a lower standard of living for the middle class, and even less socioeconomic mobility than there is now.
The AI will belong to the parasite class, who will capture all the profits, but you can't tax them on this, because they can afford to buy the government. So there isn't really a way to fund food and shelter for the population without taking something from the billionaires. Their plans for the future do not include us [0].
[0] https://www.theguardian.com/news/2022/sep/04/super-rich-prep...
And capitalism has flourished during this time. There's no reason to believe even more automation is going to change that, on its own.
Sure, Musk and Altman can make noises and talk about the need for UBI "in the future" all they want, but their political actions clearly show which side they're actually on.
Humans have depended on their own labor for income since we stopped being hunters and gatherers or living in small tribes.
So it's not just a matter of "the gov will find a way", but it's basically destroying the way humanity as a whole has operated for the past 5000 years.
So yes, it's a huge problem. Everything done under the banner of "innovation" isn't necessarily a good thing. Slavery was pretty "innovative" as well, for those who were the slave owners.
If you are going to use my work without permission to build such a robot, then said robot shouldn’t exist.
On the other hand a jack of all trades robot is very different from all the advancements we have had so far. If the robot can do anything, in the best case scenario we have billions of people with lots of free time. And that doesn’t seem like a great thing to me. Doubt that’s ever gonna happen, but still.
This requires faith that the government will actually step in to do something, which many people lack (at least in the US, can't speak for globally).
That's the sticking point for many of the people I've talked to about it. Some are diametrically opposed to AI, but most think there's a realistic chance AI takes jobs away and an unrealistic chance the government opposes the whims of capital causing people displaced from their jobs to dip into poverty.
I can't say I have a good counter-argument either. At least in the US, the government has largely sided with capital for my entire life. I wouldn't take a bet that government does the kind of wealth redistribution required if AI really takes off, and I would eat my hat if it happens in a timely manner that doesn't require an absolute crisis of ruined lives before something happens.
See the accumulation of wealth at the top income brackets while the middle and lower classes get left behind.
TLDR this is more of a crisis of faith in the government than opposition to AI taking over crap jobs that people don't want anyways.
So while you’ve identified the real problem we need to identify a realistic solution.
Anyway there is a name for your kind of take. It is anti-humanist.
But it's not like "the government" (as if there is just one) simply doesn't want to fix things. There are many people who want to fix the way we distribute resources, but there are others who are working to stop them. The various millionaires behind these AI companies are part of the reason why the problem you identified exists in the first place.
Ngl, if someone nuked all the USA's servers and wiped out all this bullshit, I'm not convinced the world would be in a worse state right now.
Let AI be used for scientific research, development, helping people out. But if it's just to push your smelly ideas, you may even be right, but ultimately the form, intentions, and result matter more than recklessly endangering everybody.
TBH I feel like the AI discourse around human replacement smells like hard-core psychopathic behavior, or that of a drunken dude happily driving a car.
You have zero data concerning the effect this would have on society, and I definitely prefer living in a less technological world to living in a world full of people with psychosis.
So until we find out how we can solve this bottleneck, I have zero sympathy for this kind of discourse.
The government should keep its charge as the protector and upholder of justice, I don’t want it to be those things and then also become a fiat source for economic survival, that’s a terribly destructive combination because the government doesn’t care for competition or viability, and survival is the last place you want to have all your eggs in one basket, especially when the eggs are guaranteed by force of law and the basket becomes a magnet for corruption.
We as a society get to decide what is done in our society. If robots replace a few jobs but make goods cheaper for everyone that's a net positive for society.
If robots replace EVERYONE's job, where everyone has no income anymore that's clearly a huge negative for society and it should be prevented.
That's noble. The first is dystopian.
The workforce gives regular folks at least some marginal stake in civilization. Governments aren’t effective engines against AI. We failed to elect Andrew Yang in 2020, who was literally running on a platform of setting up a UBI tax on AI. Congress is completely corrupt and ineffectual. Trump is gutting the government.
You may be right about AI taking jobs eventually, if that’s what you’re saying, but you come off pretty coldly if you’re implying it’s what “should” happen because it’s Darwinian and inevitable, and just sorta “well, fuck poor people.”
The over-application of objective phrases like "valid" vs "invalid" when talking about non-formal arguments is a sickness a lot of technical people tend to share. In this case, it's dismissive of harm to humans, which is the worst thing you can be dismissive about. "Please don't make me and my family miserable" is not an "invalid argument" - that's inhuman. That person isn't arguing their thesis.
"The problem". Another common oversimplifying phrase used by us thinkers, who believe there is "the answer", as if either of those two things exist as physical objects. "The problem" is that humans are harmed. Everything else just exists within that problem domain, not as "part of the problem" or "not part of the problem".
But most importantly:
Yes, you're absolutely correct (and I hate to use this word, but I'm angry): Obviously the ideal state is that robots do all the work we don't want to do and we do whatever we want and our society is structured in a way to support that. You've omitted the part where that level of social support is very hard to make physically feasible, very hard to convince people of depending on their politics, and, most importantly: It's usually only enough to spare people from death and homelessness, not from misery and unrest. Of course it would be ridiculous to outright ban for-profit use of automation, but even more ridiculous to write a bill that enforces it, e.g. by banning any form of regulation.
Short and medium term, automating technologies are good for the profit of businesses and bad for the affected humans. Long term, automating technologies are good for everybody, but only if society actually organizes that transition in a way that doesn't make those affected miserable/angry. It isn't, and I don't think it's pessimistic to say that it probably won't.
I'd love to live in Star Trek! We don't. We won't for hundreds of years if ever. Technology isn't the limiting factor, the immutable nature of human society and resources are the limiting factors. Nothing else is interesting to even talk about until we clear the bar of simply giving a shit about what actually, in concrete reality, happens to our countrymen.
It's called capitalism
Why are people even doing the jobs?
In a huge number of cases people have jobs that largely amount to nothing other than accumulation of wealth for people higher up.
I have a feeling that automation replacement will make this fact all the more apparent.
When people realise big truths, revolutions occur.
A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly.
The leader bios are particularly priceless. "While working for 12 years as the Director of HR for a multinational, Faith realized that firing people gave her an almost-spiritual high. Out of the office, Faith coaches a little league softball team and looks after her sick mother - obligations she looks forward to being free of!"
There's some truth in all satire though. I'm just shocked YC hasn't nuked the link from the front page.
Would Sam Altman even understand the original, or would he just wander ignorantly into the kitchen and fling some salt at it (https://www.ft.com/content/b1804820-c74b-4d37-b112-1df882629...)? I'm not optimistic about our modern oligarchs.
Seems like a waste of time, but at the same time the feeling was similar to watching Hannibal Lecter in the kitchen scene.
This is a recent book about the Luddite movement and its similarities to the direction we are headed due to LLMs:
https://www.littlebrown.com/titles/brian-merchant/blood-in-t...
Enjoyed the book and learned a lot from it!
No?
Well, what's different this time?
Oh, wait, maybe they did prevail after all. I own my means of production, even though I'm by no means a powerful, filthy-rich capitalist or industrialist. So thanks, Ned -- I guess it all worked out for the best!
That's a very romantic view.
The development, production and use of machines to replace labour is driven by employers to produce more efficiently, to gain an edge and make more money.
If we build AGI, we don't have a past comparison for that. Technologies so far have always replaced a subset of what humans currently do, not everything at once.
You would think! But it's not the type of problem Americans seem to care about. If we could address it collectively, then we wouldn't have these talking-past-each-other clashes where the harmed masses get told they're somehow idiots for caring more about keeping the life and relative happiness they worked to earn for their families than achieving the maximum adoption rate of some new thing that's good for society long term, but only really helps the executives short term. There's a line where disruption becomes misery, and most people in the clear don't appreciate how near the line is to the status quo.
Analogies are almost always an excuse to oversimplify. Just defend the thing on its own properties - not the properties of a conceptually similar thing that happened in the past.
Now that information work is being automated, there will be nothing left!
This "embrace or die" strategy obviously doesn't work on a societal scale, it is an individual strategy.
Should be pretty clear that this is a different proposition to the historical trend of 2% GDP growth.
Mass unemployment is pretty hard for society to cope with, and understandably causes a lot of angst.
We either let people's creativity and knowledge be controlled and owned by a select few, OR we ensure all people benefit from humanity's creativity and own it, so that the fruits it bears advance all of humanity, with safety nets in place to ensure we are not enslaved by it but elevated to advance it.
Every time we progress with new tech and eliminate jobs, the new jobs are more complicated. Eventually people can't do them because they're not smart enough or precise enough or unique enough.
Each little step, we leave people behind. Usually we don't care much. Sure some people are destined to a life of poverty, but at least most people aren't.
Eventually though even the best of the humans can't keep up, and there's just nothing left.
I'm starting to come around to the idea that electricity was the most fundamental force that drove WW1 and WW2. We point to many other more political, social and economic reasonings, but whenever I do a kind of 5-whys on those reasons I keep coming back to electricity.
AI is kind of like electricity.
We're also at the end of a big economic/money cycle (petrodollar, gold standard, off the gold standard, maxing out leverage).
The other side will probably involve a new foundation for money. It might involve blockchain, but maybe not, I have no idea.
We don't need post-scarcity so much as we just need to rebalance everything and an upgraded system that maintains that balance for another cycle. I don't know what that system is or needs, but I suspect it will become more clear over the next 10-20 years. While many things will reach abundance (many already have) some won't, and we will need some way to deal with that. Ignoring it won't help.
Firing educated workers en masse for software that isn’t as good but is cheaper doesn’t have the same benefits to society at large.
What is the goal of replacing humans with robots? More money for the ownership class, or freeing workers from terrible jobs so they can contribute to society in a greater way?
The benefits to society will be larger. Just think about it: when you replace dirty, dangerous jobs, the workers simply have nowhere to go, and they begin to generate losses for society in one form or another, because they initially took those dirty, dangerous jobs only because they had no choice.
But when you fire educated workers en masse, society not only receives from the software all the benefits it received from those workers; all other fields also start to develop, because these educated workers take on other jobs, jobs that have never been filled by educated workers before. Jobs that are understaffed because they are too dirty or too dangerous.
This will be a huge boost even for areas not directly affected by AI.
Kinda, I guess. But what has everyone on edge these days is that humans have always used technology to build things: to build civilization and infrastructure so that life was progressing in some way. At least in the US, people stopped building and advancing civilization decades ago. Most sewage and transportation infrastructure is from 70+ years ago. Decades ago, telecom infrastructure boomed for a bit, then abruptly halted. So the "joke" is that technology these days is in no way "for the benefit of all" like it typically was for all of human history (with obvious exceptions).
Yes, until we reached the art and thinking part. A big part of the problem might be that AI reached that part first, before the chores.
At least for now, things aren't so bad, and today's Luddites aren't trashing the offices of AI companies and hanging their employees and executives from nearby poles and trees.
Second, the movement was certainly attacked first. It was mill owners who petitioned the government to use “all force necessary” against the Luddites, and the government, acting on their behalf, killed and maimed people who engaged in peaceful demonstrations before anyone associated with the Luddite movement reacted violently. Even in the face of violence, the Luddite movement was at its core non-violent.
Billions of unemployed people aren't going to just sit in poverty and watch as Sam Altman and Elon become multi-trillionaires.
(why do you think they are building the bunkers?)
I know, right? Machines have been gradually replacing humans for centuries. Will we actually get to the point where there are not enough jobs left? It doesn't seem like we're currently anywhere close to the point of not having any jobs available.
Has anyone thought about how the Federal Reserve plays a role with this? Automation puts downward pressure on inflation, because it doesn't cost as much to make stuff. The Federal Reserve will heavily incentivize job creation if inflation is low enough and there aren't enough jobs available, right?
We did figure that out. The ingenious cope we came up with is to entirely ignore said problem.
You’ll waste away for a little while in some sort of slum, and then eventually you’ll head to the Soylent Green factory, but not for a job. After that, problem solved!
Imagine if the tractor made most farm workers unnecessary but when they flocked to the cities to do factory work, the tractor was already sitting there on the assembly line doing that job too.
I don’t doubt we can come up with new jobs, but the list of jobs AGI and robotics will never be able to do is really limited to ones where the value intrinsically comes from the person doing it being a human. It’s a short list tbh.
This is not about machines. Machines are built for a purpose. Who is "building" them, and for what "purpose"?
If you look at every actual real-world human referenced on this website, they all have something in common: they're billionaires.
This is a website about billionaires and their personal agendas.
The issue is that there will be no one earning money except the owners of OpenAI.
Take outsourcing - the issue in developed nations was underemployment and the hollowing out of industrial centers. You went from factory foreman to burger flipper. However, it did uplift millions out of poverty in other nations. So net-net, we employed far more and distributed wealth.
With Automation, we simply employ fewer people, and the benefits accrue to smaller groups.
And above all - these tools were built, essentially by mass plagiarism. They train even now, on the random stuff we write on HN and Reddit.
TLDR: it's not the automation, it's the wealth concentration.
It's that the people failed to elect and wield a government that ensures all humanity benefits from it and not a select few who control it all.
And I think it will become clear that the governments that are investing in it to benefit their people who have ownership versus the ones who invest in it to benefit just a handful of the rich are the ones who will keep society stable while this happens.
The other path we are going down is mass unrest, followed by a move into a police state to control the resistance, like America is doing now: exactly what Peter Thiel, Elon Musk, and Larry Ellison want, with AI-driven surveillance and an Orwellian dystopian vision forcing people to comply or be cut out of existence by deactivating their Digital IDs.
At the bottom of this page, there is a form you can fill out. This website says they will contact your local representative on your behalf. (And forward you any reply.)
Here's the auto-generated message:
I am a constituent living in [state] with urgent concerns about the lack of guardrails surrounding advanced AI technologies. It is imperative that we act decisively to establish strong protections that safeguard families, communities, and our children from potential harms associated with these rapidly evolving systems.
As companies continue to release increasingly powerful AI systems without meaningful oversight, we cannot rely on them to police themselves, especially when the stakes are so high. While AI has the potential to do remarkable things, it also poses significant risks, including the manipulation of children, the development of bioweapons, the creation of deepfakes, and the threat of widespread unemployment.
I urge you to enact strong federal guardrails for advanced AI that protect families, communities, and children. Additionally, please do not preempt or block states from adopting strong AI protections that may be necessary for their residents.
Thank you for your time.
[name]
New York
Ideas are nice, and important, but there needs to be an action vector for those ideas to have practical value.
Here's a starter example: any company whose main business is training AI models must give up 10% of their company to a fund whose charter is establishing long-term basic care (food, water, electricity, whatever) for citizens.
I'm sure people will come at me with "well this will incentivize X instead!" in which case I'd like to hear if there are better thought out proposals.
The problem really is political systems. In most developed countries, wealth inequality has been steadily increasing, even though if you ask people if they want larger or smaller inequality, most prefer smaller. So the political systems aren't achieving what the majority wants.
It also seems to me that most elections are won on current political topics (the latest war, the latest scandal, the current state of the economy), not on long-term values such as decreasing wealth inequality.
The problem is that the longer you refrain from equitably distributing wealth, the harder it becomes to do it, because the people who have benefited from their inequitably distributed wealth will use it to oppose any more equitable distribution.
Probably because most proposals for how to "equitably distribute the wealth" of anything are badly thought out, too complex to read, or both.
For example of the former, I could easily say "have the government own the AI", which is great if you expect a government that owns AI to continue to care if their policies are supported by anyone living under them, not so much if you consider that a fully automated police force is able to stamp out any dissent etc.
For example of the latter, see all efforts to align any non-trivial AI to anything, literally even one thing, without someone messing up the reward function.
For your example of 10%, well, it depends on how broad the AI is (it's not really boolean), whether it's closer to a special-purpose system or fully general over everything any human can do:
• Special-purpose: that works but also you don't need it because it's just an assistant AI and "expands the pie" rather than displacing workers entirely.
• Fully-general: the AI company can relocate offshore, or off planet, do whatever it wants and raise a middle finger at you. It's got all the power and you don't.
For this to work at scale domestically, the fund would need to be a double-digit percentage of the market cap of the entire US economy. It would be a pretty drastic departure from the way we do things now. There would be downsides: market distortions and fraud and capital flight.
But in my mind it would be a solution to the problem of wealth pooling up in the AI economy, and probably also a balm for the "pyramid scheme" aspect of Social Security which captures economic growth through payroll taxes (more people making more money, year on year) in a century where we expect the national population to peak and decline.
Pick your poison, I guess, but I want to see more discussion of this idea in the Overton window.
Amazingly, there honestly aren't a lot of people in the middle, and most of them work at AI companies anyway. Maybe there's something about our algorithmically manipulated psyches in the modern age that draws people towards more absolutist all-or-nothing views, incapable of practical nuance when in the face of a potentially grave threat.
What government in the foreseeable future would go after them? This would tank the US economy massively, so not US. The EU will try and regulate, but won't have enough teeth. Are we counting on China as the paragon of welfare for citizens?
I propose we let the economy crash, touch some grass and try again. Source: I am not an economist.
Sure we will have the robot wrangler engineers, scientists, teachers, nurses, etc. But typically we have social unrest past like 8% unemployment. What happens when double digits of people have no jobs and all the time on their hands? Well “eat the rich” might become very literal and no amount of protection against that can really be bought.

Ultimately, the only option is either a Dune-style elimination of all AI (very unlikely) or we will have to decouple “living wage income” from “job”. If you think about it, the idea that you must have a job in order to make money is more of an implementation detail. If robots produce so much value that it is actively not even viable for humans to work, the only real logical solution is to distribute the profits from the labor of the robots in a way that isn’t by hourly rate times hours worked. In fact one possible way to do this is to tax the value produced by AI and then funnel that to a universal basic income program.

Everyone by default is an artist. If you want to be a nurse or teacher or scientist or engineer you can. Otherwise just produce art at your leisure while the robots work the fields and cook your meals.
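The tax-and-redistribute arithmetic in the comment above is simple to sketch. Every number below is a purely illustrative assumption, not a forecast:

```python
def ubi_per_person(automated_output: float, tax_rate: float, population: int) -> float:
    """Annual basic income if a flat tax on automated output is split evenly."""
    return automated_output * tax_rate / population

# Hypothetical numbers: $20T of robot-produced value, a 30% tax, 300M people.
payout = ubi_per_person(20e12, 0.30, 300_000_000)
print(payout)  # 20000.0 per person per year
```

The point of the toy formula is only that the payout scales with automated output and shrinks with population; the hard part, as the thread notes, is the politics of setting the tax rate, not the division.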
1. Massive population reduction (war is a very efficient way to achieve this)
2. Birth control, to slow down population growth to a stable rate near 0
3. Eugenics, to ensure only people with needed capabilities are born (brave new world)
In this scenario, 500,000 people (less ?) in charge of millions of robots and a minority of semi-enslaved humans would freely enjoy control over the world. The perfect mix between Asimov and Huxley.
All the agitation about "building a 1984-style world" is, at best, just a step toward this Asimov/Huxley model, and most likely, a deliberate decoy.
You don't understand. Almost nobody actually thinks about this in the right way, but it's actually basic economics.
Salt.
We used to fight wars for salt, but now it's literally given away for free (in restaurants).
If "robots produce so much value" then all that value will have approximately zero marginal cost. You won't need to distribute profits, you can simply distribute food and stuff and housing because they're cheap enough to be essentially free.
But obviously, not everything will be free. Just look at the famous "cost of education vs TV chart" [1]. Things that are mostly expensive: regulated (education, medicine, law) and positional (housing / land - everyone wants to live in good places!) goods. Things that are mostly cheap: things that are mass produced in factories (economies of scale, automation). Robots might move food & clothing into the "cheap" category but otherwise won't really move the needle, unless we radically rethink regulation.
[1] https://kottke.org/19/02/cheap-tvs-and-exorbitant-education-...
It won't if robots start driving trucks.
I understand the spirit of this, but most of this alarmism is misguided in my view.
Then you stopped needing them, a USDA census in 1959 showed the horse population had dropped to 4.5 million.
Now they're mostly used for riding, and in 2023, there were about 6.65 million horses.
(Citation: https://en.wikipedia.org/wiki/Horses_in_the_United_States#St...)
There's no law of nature that says "there's always a place for more horses", and anyone who suggested there might be would get laughed at. Well, there's also no law of nature that says "there's always a place for more humans", to butcher a line from CGP Grey a little over a decade ago.
Mechanisation
But did you ever wonder what happened to the displaced workers? I'm not an expert on the agricultural changes in the USA, but in the UK, a huge amount of tumult can be directly attributed to agricultural changes.
AI & robots will generate wealth at unprecedented scale. In the future, you won't have a job nor have any money, but you will be fabulously wealthy!
1. When such wealth is possible through autonomous means, how can the earth survive such unprecedented demands on its natural resources?
2. Should I believe that someone with more wealth (and as such, more power) than I have would not use that power to overwhelm me? Isn't my demand on resources only going to get in their way? Why would they allow me to draw on resources as well?
3. It seems like the answer to both of these concerns lies in government, but no government I'm aware of has really begun to answer these questions. Worse yet, what if governments disagree on how to implement these strategies in a global economy? Competition could become an intractable drain on the earth and humans' resources. Essentially, it opens up the possibility of war at incalculable scales.
Well in trekonomics [1], citizens are equal in terms of material wealth because scarcity has been eliminated. Wealth, in the conventional sense, does not exist; instead, the "wealth" that matters is human capital—skills, abilities, reputation, and status. The reward in this society comes not from accumulation of material goods but from intangible rewards such as honor, glory, intellectual achievement, and social esteem.
Right now the people that own those resources also depend on human labor to create wealth for them. You can't go from owning a mine and a farm to having a mega-yacht without people. You have to give at least some wealth to them to get your wealth. But if suddenly you can go from zero to yacht without people, because you're rich enough to have early access to lots of robots and advanced AI, and you still own the mine/farm, you don't need to pay people anymore.
Now you don't need to share resources at all. Human labor no longer has any leverage. To the extent most people get to benefit from the "magic machine," it seems to me like it depends almost entirely on the benevolence of the already wealthy. And it isn't zero cost for them to provide resources to everyone else either. Mining materials to give everyone a robot and a car means fewer yachts/spaceships/mansions/moon-bases for them.
Tldr: I don't think we get wealth automatically because of advanced AI/robotics. Social/economic systems also need to change.
Consumer goods have generally fallen in price (adjusted for inflation) while improving in quality relative to the 1970s, so we have become wealthier (using PG's definition of wealth):
Televisions, computers, smartphones, clothing (mass-produced apparel is cheaper due to global supply chains and automation), household appliances (items like refrigerators, washing machines, and microwaves are less expensive relative to income), air travel, telecommunications, consumer electronics, automobiles, and furniture have all fallen in price and gone up in quality.
Housing and healthcare are two items that have gone in the opposite direction. I think this is where AI and robots will make a difference. Houses can be 3D printed [1] and nursing and medical advice can be made cheaper using AI/robots as well.
So when are we going to start pivoting towards a more socialist economic system? Where are the AI leaders backing politicians with this vision?
Because that's absolutely required for what you're talking about here...
https://en.wikipedia.org/wiki/Social_credit
(It's where the excess profits from mechanisation would be fed back to the citizens so that they don't need to work as much. That failed spectacularly.)
PG's argument is a huge amount of words to miss the point. Money is a tool that reflects power. Wealth derives from power.
> AI & robots will generate wealth at unprecedented scale. In the future, you won't have a job nor have any money,
I would gently tell you that you might want to look at the living conditions of the working class in the early 20th century. You might see planned cities like Bournville or whatever the American version is; they were the 1% of working classes. The average housing was shit, horrid shit. If AI takes off and makes say 10% of the population jobless, that's what those people will get: shit.
It wasn't until those dirty socialists got into power in the UK (I don't know about other countries) that we started to see stuff like slum clearances where the dispossessed were actually re-homed, rather than yeeted to somewhere less valuable.
/s
I'm especially disgusted with Sam Altman and Dario Amodei, who for a long time were hyping up the "fear" they felt for their own creations. Of course, they weren't doing this to slow down or approach things in a more responsible way; they were talking like that because they knew creating fear would bring in more investment and more publicity. Even when they called for "regulation", it was generally misleading and mostly meant to help them create a barrier to entry in the industry.
I think now that the consensus among the experts is that AGI is probably a while off (like a decade), we have a new danger. When we do start to get systems we should actually worry about, we're going to have a major boy-who-cried-wolf problem. It's going to be hard to get these things under proper control when people have the feeling of "yeah, we heard this all before".
That’s what makes it good satire.
in the end, if synthetic super intelligence results in the end of mankind, it'll be because a human programmed it to do so. more of a computer virus than a malevolent synthetic alien entity. a digital nuclear bomb.
The reason AI won't destroy us for now is simple.
Thumbs.
Robotic technology is required to do things physically, like improve computing power.
Without advanced robotics, AI is just impotent.
~Alan Watts…
Yeeeeess, but the inverse is also true.
Thing is, we've had sufficiently advanced robotics for ages already — decades, I think — the limiting factor is the robots are brainless without some intelligence telling them what to do. Right now, the guiding intelligence for a lot of robots is a human, and there are literal guard-rails on many of those robots to keep them from causing injuries or damage by going outside their programmed parameters.
Only an AI as _dumb_ as us would want something as stupid as domination, which after all is based on competition for resources that a long time ago were distributable in a way that could feed every human on earth etc.
I'm not saying an AI would "choose" world peace, but people somehow assume that "kill everybody but me" and even "survival at all costs" are a given for a non-biological entity. Instead these concepts could look quite irrational.
Hindus believed god was the thing you describe, infinitely intelligent, able to do several things at once, etc., and they believe we’re part of that thing's dream… to literally keep things spicy. Just as an elephant is part of that dream.
I pasted an interesting quote in another comment by Alan Watts that sums it up better.
Simulation theory is another version of religion imo.
Would it want to? Would it have anything that could even be mapped to our living, organic, evolved conception of "want"?
The closest thing that it necessarily must have to a "want" is the reward function, but we have very little insight into how well that maps onto things we experience subjectively.
Most of us would resurrect at least some of the dinosaurs if we could, and the dodo. And we are just stupid hairless apes. If humans can be conservationists, I have to believe that a singular AI would be.
assuming it can be terrified
It all gets quite religious / physical philosophical very quickly. Almost like we’re creating a new techno religion by “realizing god” through machines.
1. Rationalists/EA's who moderately-strongly believe AI scaling will lead to ASI in the near future and the end of all life (Yud, Scott Alexander)
2. Populist climate-alarmists who hate AI for a combination of water use, copyright infringement, and decimation of the human creative spirit
3. Tech "nothingburgerists" who are convinced that most if not all AI companies are big scams that will fail, that LLMs are light-years from "true" intelligence, and that it's all a big slop hype cycle that will crumble within months to years. (overrepresented on this site)
Each group has a collection of "truthiness-anchors" that they use to defend their position against all criticism. They are all partially valid, in a way, but take their positions to the extreme to the point they are often unwilling to accept any nuance. As a result, conversations often lead nowhere because people defend their position to a quasi-religious degree rather than as a viewpoint predicated on pieces of evidence that may shift or be disproven over time.
Regarding the satire in OP, many people will see it as just a funny, unlikely outcome of AI, others will see it as a sobering vision into a very likely future. Both sides may "get" the point, but will fail to agree at least in public, lest they risk losing a sort of status in their alignment with their sanity-preserving viewpoint.
Finally a company that's out to do some good in the world.
However, in the US, the labor-force participation rate is 60%. There are a LOT of adults out there who don't work now. While I do think people find value in work, I would guess that number is less than 50% (potentially much lower). Which makes me think that 30% or fewer of adults are deriving significant value from their work.
On the one hand, something that affects 30% is pretty massive. But it feels less apocalyptic/overwhelming than it may seem.
Sam Altman doesn’t own AI. His investors actually own most of the actual assets.
Eventually there is going to be pressure for open ai to deliver returns to investors. Given that the majority of the US economy is consumer spending, the incentive is going to be for open ai to increase consumer spending in some way.
That’s essentially what happened to Google during the 2000s. I know everyone is negative about social media right now. But one could envision an alternative reality where Google explicitly controls and censors all information, took over roadways with their driving cars, completely merged with the government, etc. Basically a doomsday scenario.
What actually happened is Google was incentivized by capital to narrow the scope of their vision. Today, the company mainly sells ads to increase consumer spending.
I just logged onto github and saw a "My open pull requests button".
Instead of taking me to a page which quickly queried a database, it opened a conversation with copilot which then slowly thought about how to work out my open pull requests.
I closed the window before it had an answer.
Why are we replacing actual engineering with expensive guesswork?
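For contrast, the deterministic path still exists: GitHub's search API answers this with a single indexed query, no model in the loop. A minimal sketch (the endpoint and the `is:pr is:open author:` qualifiers are GitHub's documented search syntax; the helper function is my own illustration):

```python
from urllib.parse import quote_plus

GITHUB_SEARCH = "https://api.github.com/search/issues"

def open_prs_url(username: str) -> str:
    # Build the one-shot search query for a user's open pull requests.
    query = f"is:pr is:open author:{username}"
    return f"{GITHUB_SEARCH}?q={quote_plus(query)}"

print(open_prs_url("octocat"))
# https://api.github.com/search/issues?q=is%3Apr+is%3Aopen+author%3Aoctocat
```

One GET request, answered from an index in milliseconds, and the same input always yields the same result, which is exactly the speed and predictability the chatbot flow gives up.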
AI just makes it worse.
However, someone has taken a useful feature and has made it worse to shoe-horn in copilot interaction.
Clicking this button also had a side-effect of an email from Github telling me about all the things I could ask copilot about.
The silver lining is that email linked to copilot settings, where I could turn it off entirely.
https://github.com/settings/copilot/features
AI is incredibly powerful, especially for code-generation. But It's terrible ( at current speeds ) for being the main interface into an application.
Human-Computer interaction benefits hugely from two things:
- Speed - Predictability
This is why some people prefer a commandline, and why some people can produce what looks like magic with excel. These applications are predictable and fast.
A chat-bot delivers neither. There's no opportunity to build up muscle-memory with a lack of predictability, and the slowness of copilot makes interaction just feel bad.
"To be or not to be? ... Not a whit, we defy augury; there's a special providence in the fall of a sparrow. If it be now, 'tis not to come; if it be not to come, it will be now; if it be not now, yet it will come the readiness is all. Since no man knows aught of what he leaves, what is't to leave betimes? Let be." -- Hamlet
In the end it will be our humility that will redeem us as it has always been, have some faith the robots are not going to be that bad.
"Computer" used to be a job. Not anymore: https://en.wikipedia.org/wiki/Computer_(occupation)
What counts as "AI" is a moving target: https://en.wikipedia.org/wiki/AI_effect
I definitely think AI companies marketing claims deserve mockery...but this isn't even good/interesting/smart satire??
It feels like we've fully completed the transition to Reddit here, with its emotional and contradictory high school political zeal (being both progressive and anti-progress at the same time) dominating the narrative.
Something about upvote-based communities is not holding up well in the current climate.
If humans have regressed enough intellectually where they are praising this as "Brilliant social commentary" then we absolutely SHOULD be replaced by AI.
You might say: "but you'll need money!". Why would I need money? The robots can provide my every need. And if I need money for some land or resource or something, I would have my robots work until my need was satisfied, I wouldn't continue having them work forever.
And even if robots did take all of the jobs, they would have to work for free. Because humans would have no jobs, and thus no money with which to pay them. So either mankind enjoys free services from robots that demand no compensation, or we get to keep our jobs.
So I really don't get the existential worry here. Yes, at a smaller scale some jobs might be automated, forcing people to retrain or work more menial jobs. But all of humanity being replaced? It doesn't make sense.
Another way to think about it is that if all of the jobs were replaced by AI, us leftover jobless humans would create a new economy just trying to grow food and make clothes and build houses and take care of our needs. The robot masters would just split away and have their own economy. Which is the same as them not existing.
In fact, Sam Altman wrote a good piece on this:
The best-paid workers were mechanised/outsourced first. For example, the weavers used to be a huge political force, literally re-shaping countries. Their long, slow and violent descent into obscurity led to workers' rights (see the Chartist movement).
Good thing there are no resources to fight over - land, minerals, and water.
The benign forms of superintelligence will be shaken out by the non-benign forms.
>Another way to think about it is that if all of the jobs were replaced by AI, us leftover jobless humans would create a new economy just trying to grow food and make clothes and build houses and take care of our needs.
On whose land?
In any case, it will be cheaper to buy food from the AI. The remaining economy would just be the liquidation of remaining human-controlled assets into the AI-controlled economy for the stuff they need to survive like medicine and food.
> "Stupid. Smelly. Squishy."
> "While working for 12 years as the Director of HR for a multinational, Faith realized that firing people gave her an almost-spiritual high."
I love the marketing here. Top notch shit posting.
But besides that, no idea what this company does and it just comes off like another wannabe Roy Lee styled "be controversial and build an audience before you even have a product" type company.
That being said, still a good case study of shock marketing. It made it to the top link on HN after all.
Edit: its satire, I got got :(
Follow the links for support (or rather reserve space in the bunker)
There's a contact form to let representatives know the dangers of ai
Does AI even understand satire?
There is an intersection of certain industries and a particular demographic where adapting/retraining will be either very difficult or impossible.
Case in point:
- car factory town in Michigan
- factory shuts down
- nursing school opens in the town
- they open a hospital
- everyone thinks "Great! We can hire nurses from the school for the hospital"
- hospital says "Yeah, but we want experienced nurses, not recent graduates"
- people also say "So the factory workers can go to nursing school and get jobs somewhere else!"
- nursing school says "Uhm, people with 30 years of working on an assembly line are not necessarily the type of folks who make good nurses..."
Eventually, the town will adapt and market forces will balance out. But what about those folks who made rational decisions about their career path and that path suddenly gets wiped away?
The auto workers should leave town to find a suitable job, selling their homes to the incoming healthcare workers.
Diving into the game theory of a 4-player setup with executives/investors/customers/workers is tempting here but I'll take a different approach.
People who actually face consequences have trouble understanding how the "it might help, it can't hurt!" corporate strategy can justify almost any kind of madness. Especially when the leaders are morons that somehow have zero ideas, yet almost infinite power. That's how/why Volkswagen was running slave plantations in Brazil as late as 1986, and yet it takes 40 years to even try to slap them on the wrist.[1] A manufacturing company that decided to run FARMS in the Amazon? With slaves?? For a decade??? One could easily ask, what is to be gained by doing crimes against humanity for a sketchy, illegal, and unethical business plan that's not even related to their core competency? Power has its own logic, but it doesn't look like normal rationality because it has a different kind of relationship with cause and effect.
Overall it's just a really good time to re-evaluate whether corporations and leaders deserve our charitable assumptions about their intentions and ethics.
[1] https://reporterbrasil.org.br/2025/05/why-is-volkswagen-accu...
At least with a politician you can sometimes believe it, whereas capitalism's spine is infinitely flexible.
The Corpos don’t need to go mask off, that’s what they pay the politicians for. Left and right is there to keep people from looking up and down.
{ }
if (Steal) { then Lie; then Cloud; then Money; } else { Fail; then Die; }
{ }
> in the industrial revolution of the 19th century, what humanity basically learned to produce was all kinds of stuff like textiles and shoes and weapons and vehicles, and this was enough for the very few countries that underwent the revolution fast enough to subjugate everybody else.

> what we're talking about now is like a second industrial revolution, but the product this time will not be textiles or machines or vehicles or even weapons. the product this time will be humans themselves.

> we are basically learning to produce bodies and minds. bodies and minds are going to be the two main products of the next wave of all these changes, and if there is a gap between those that know how to produce bodies and minds and those that do not, then this is far greater than anything we saw before in history, and this time if you're not part of the revolution fast enough then you probably become extinct.

> once you know how to produce bodies and brains and minds, cheap labor in Africa or South Asia or wherever simply counts for nothing.

> again, I think the biggest question ... maybe in economics and politics of the coming decades will be what to do with all these useless people.

> I don't think we have an economic model for that. my best guess, which is just a guess, is that food will not be a problem; with that kind of technology you will be able to produce food to feed everybody. the problem is more important ... what to do with them and how will they find some sense of meaning in life when they are basically meaningless, worthless.

> my best guess at present is a combination of drugs and computer games.
> how to prevent stop
Comrades, we can now automate a neo KGB and auto garbage-collect counter-revolutionaries en masse with soviet efficiency!
The communist solution to everything is to roll everything into a one-world monopoly. That concentration of power is exactly what we are trying to prevent. Feudalism, Corporatism, and Communism converge on the same point in the space of politics.
AI will destroy the labor market as a means of wealth distribution but still some solution is better than nothing. Suggesting that socialism is the solution to mass automation is like suggesting the solution to a burning house is to pour gasoline on it.
If you disagree, feel free to argue your point instead of just scoffing at the idea.
A job is a decision that your boss(es) made and can be taken without your consent. You don't have the ownership of your job that you do of your marriage.
Your partner in some (most?) cases can absolutely make an executive decision that ends your marriage, with you having no options but to accept the outcome.
Your argument falls a little flat.
A job is also a mutual decision between the employee and the employer.
A marriage can also be taken without your consent through divorce (unless you are orthodox jewish or something I think?).
Note that isn’t universally true, for either case. Without mutual agreement, in the EU you can’t fire someone just because, and in Japan you can’t divorce unless you have proof of a physical affair or something equally damning.
But as a society we have to ask ourselves if replacing all jobs with AI will make for a better society. Life is not all about making as much money as possible. For a working society, citizens need meaning in their lives, and safety, and food, and health. If most people get too little of this, it may disrupt society, and cause wars and riots.
This is where government needs to step in, uncontrolled enterprise greed will destroy countries. But companies don't care, they'll just move to another country. And the ultra-rich don't care, they'll just put larger walls around their houses or move country.
HN Comment in 2125: Why would I have casual sex with a real guy, I can have a sexual partner who I can tailor perfectly to my in the moment desires, can role play anything including the guy in the romance novel I'm reading, doesn't get tired, is tall and effortlessly strong, has robotic dexterity, available 24/7, exists entirely for my pleasure letting me be as selfish as I want, and has port and starboard attachments.
What makes you think that sex is some sacred act that won't follow the same trends as jobs? You don't have to replace every aspect of a thing to have an alternative people prefer over the status quo.
You can compete, but not for long IMO. (No pun)
How would we handle regulating sex bots? Complete ban on manufacturing and import of full size humanoid bots? They are large enough that it could be partially effective I guess. I’m imagining two dudes in a shady meetup for a black market sale of sex bot which is kinda funny but also scary because the future is coming fast.
Or in this case, a husband having police investigate and apprehend the wife in the act? Crazy times.
Sure, but we're also putting aside how people do worse without a sense of purpose or contribution, and semi-forced interaction is generally good for people as practice getting along with others - doubly so as we withdraw into the internet and our smartphones
The problem is a culture that doesn't think the profit from productivity gains should be distributed to labor (or consumers), and doesn't think that wives deserve to be happy.
I see what you did there
Any company that solves this problem will be a $10T company.
Assuming the Everdrive is M and the SNES cartridge port is F, I can understand why the Everclan men are particularly attuned to this topic. Many better-quality, more feature-rich, and cheaper SNES multicarts have hit the market; the Everdrive is looking dated.
This isn't exactly news
surprised to see this so far down. if a robot can fuck better, then we would probably both have fun fucking robots together
Machines doing stuff instead of humans is great as long as it serves some kind of human purpose. If it lets humans do more human things as a result and have purpose, great. If it supplants things humans value, in the name of some kind of efficiency that isn't serving very many of us at all, that's not so great.
Besides, in this fantasy, what’s to stop you from having the perfect robot lover as well - why are you so attached to this human wife of yours in the first place?
Skill issue.
This has been nothing but a test-run for openly fascistic tech hoes to flex their disdain for everyone who isn't them or in their dumb club.
But I think it misses the bigger picture, which you hit on: Robots are helpful, but they're still just tools. An AI can crunch data, find opportunities, and trade faster than any human ever could. It's an incredible helper!
However, humans are the ones controlling them. We decide what they trade, and we build the secure systems they rely on. An AI millionaire is still relying on infrastructure that humans have to build to be fast, cheap, and totally stable. If the foundation is shaky, the AI's complex trades fall apart.
That reality is what makes me ignore the AI hype and focus entirely on infrastructure. The smart developers know that the long game is building utility assets on robust foundations.
They're moving to systems that are super fast and near-zero cost because AI requires zero friction. They're also demanding global market reach so the assets they create aren't trapped anywhere.
You could perhaps make an argument that among the flood of AI-related submissions, this one doesn't particularly move the needle on intellectual curiosity. Although satire is generally a good way to allow for some reflection on a serious topic, and I don't recall seeing AI-related satire here in a while.
Instead of facing the new reality, some people start to talk about the bubbles, AI being sloppy, etc. Which is not generally true; mostly it's the users' psychological projection of their own traits and the resulting fear-induced smear campaigns.
The phenomenon is well described in psychology books. The seminal works of Carl Jung are worth a ton nowadays.
It's also more nuanced than you seem to think. Having the work we do be replaced by machines has significant implications about human purpose, identity, and how we fit into our societies. It isn't so much a fear of being replaced or made redundant by machines specifically; it's about who we are, what we do, and what that means for other human beings. How do I belong? How do I make my community a better place? How do I build wealth for the people I love?
Who cares how good the machine is. Humans want to be good at things because it's rewarding and—up until very recently—was a uniquely human capability that allowed us to build civilization itself. When machines take that away, what's left? What should we be good at when a skill may be irrelevant today or in a decade or who knows when?
Someone with a software brain might immediately think "This is simply another abstraction; use the abstraction to build wealth just as you used other skills and abilities to do so before", and sure... That's what people will try to do, just as we have over the last several hundred years as new technologies have emerged. But these most recent technologies, and the ones on the horizon, seem to threaten a loss of autonomy and a kind of wealth disparity we've never seen before. The race to amass compute and manufacturing capacity among billionaires is a uniquely concerning threat to virtually everyone, in my opinion.
We should remember the Luddites differently, read some history, and reconsider our next steps and how we engage with and regulate autonomous systems.
What remains after is something like the social status games of the aristocratic class, which I suspect is why there's a race to accumulate as much as possible now before the means to do so evaporate.
In simple words, authenticity is the desire to work on one's mistakes and improve oneself, being flexible enough to embrace changes sooner or later. If one lacks some part of it, one tends to become a narcissist or a luddite, angrily trying to regain the ever-slipping sense of control.
To translate into human language: gold diggers who entered the industry just for money do not truly belong to said industry, while those who were driven by spirit will prosper.
It just screams fried serotonin-circuits to me. I don't like it. I looked at the site for 2-3 seconds and I want nothing to do with these guys.
Do I think we should stop this type of competitive behaviour, fueled by kids and investors both microdosed on meth? No. I just wouldn't do business with them; they don't look like a trustworthy brand to me.
Edit: They got me with the joke. Being in this field, there are people who do actually talk like that, both startups and established executives alike. E.g. Artisan's billboard ads saying STOP HIRING HUMANS, and another New York company, I think, pushing newspaper ads for complete replacement. Also, if you're up on the latest engineering in agentic scaffolding work, this type of thing is no joke.
>It just screams fried serotonin-circuits to me. I don't like it. I looked at the site for 2-3 seconds and I want nothing to do with these guys.
Enlightenment is realizing they aren't any different from those other guys.
>Edit: They got me with the joke, being in this field there are people that do actually talk like that, both startups and established executives alike.
And what's your conclusion from that?