So the problem isn't robots, it's the structure of how we humans rely on jobs for income. I don't necessarily feel like it's the AI company's problem to fix either.
This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.
"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."

-- Frank Herbert, Dune
The "government" is just the set of people who hold power over others.
Those who will own machines that can match humans will have unprecedented power over others. In effect they'll grow more and more to be the government.
Even now, companies hold more and more of the power over others and are more part of the government than ever before.
So it confuses me when you say this is what government is for. Who would that be? If we pretend it would still be a democracy, then I guess you're saying it's everyone's problem?
So here we are, let's discuss the solution and vote for it?
Taking work away from people is practically the definition of technology. We invent things to reduce the effort needed to do things. Eliminating work is a good thing, that’s why inventing things is so popular!
What ends up happening is the amount of work remains relatively constant, meaning we get more done for the same amount of effort performed by the same amount of people doing the same amount of jobs. That’s why standards of living have been rising for the past few millennia instead of everybody being out of work. We took work away from humans with technology, we then used that effort saved to get more done.
Either way, without that social pattern, I'm afraid all this does is enshrine a type of futuristic serfdom that is completely insurmountable.
Both are fairly uncontroversial: (1) many humans not only benefit from jobs but in fact often depend on jobs for their livelihoods, and (2) should be self-evident.
This can change if the socioeconomic system is quickly enough and quite substantially restructured to make humans not depend on being compensated for work that is now being done by robots (not only financially but also psychologically—feeling fulfilled—socially, etc.), but I don’t see that happening.
Neither governments nor corporations are going to “save us,” simply because of sheer short-termism and incompetence. But that same incompetence will make the coming dystopia ridiculous.
Personal belief, but AI coming for your children is not a valid argument against AI. If AI can do a job better and/or faster, they should be the ones doing the parenting. Specialization is how we got to the future. So the problem isn't AI, it's the structure of how we humans rely on parenting for their children. I don't necessarily feel like it's the AI company's problem to fix either. This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.
…
You’re right about one thing, within reason… this is what a rational government should be for… if the government was by the people and for the people.
Addendum for emphasis: …and if that government followed the very laws it purports to protect and enforce…
Progress is great obviously, but progress as fast as possible with no care about the consequences is more motivated by money, not the common good.
Best summarized in this comic: https://x.com/9mmballpoint/status/1658163045502267428
They're busy selling watches whilst people can still afford them thanks to having jobs.
Not the AI company’s fault per se, but generally the US government does a very poor job of creating a safety net, whether through intent, ineptitude, or indifference.
By the way, attacks were also leveled against Chinese and Japanese California workers who were viewed as stealing the jobs of other “Americans”. So this viewpoint and tradition of behavior and capitalism is very long in US history.
Replace the word robot with "automation" or "industrialization" and you have the last 200 years of human history covered.
The Luddites could have won, and we would all have $1,500 shirts.
Do you know any lamp lighters? How about a town crier?
We could still all be farming.
Where are all the switch board operators? Where are all the draftsmen?
How many people had programming jobs in 1900? 1950?
We have an amazing ability to "make work for ourselves", and history indicates that we're going to keep doing that regardless of how automated we make society. We also keep traditional "arts" alive... Recording didn't replace live performances, TV/film didn't replace Broadway... Photography didn't replace painting...
E.g. if I was a truck driver and autonomous trucks came along that were 2/3rds the price and reduced truck related deaths by 99% obviously I couldn't, in good faith, argue that the rest of the population should pay more and have higher risk of death even to save my job and thousands of others. Though somehow this is a serious argument in many quarters (and accounts for lots of government spending).
Rent extraction hurts them in the long run. Because working class income gets absorbed by various forms of rent, they are more expensive to employ. Thus we fail to compete with, say, China, which socializes many costs and invests in productive industry. We are left with a top heavy society that as we can see is already starting to crumble.
None of this holds if you don't have anything of value to offer, and automation is concentrating power and value; AI is the extreme end of this. At some point the charade of democracy becomes too annoying to the ones at the top, and you get there faster by trying to rein them in.
So that would mean it is in fact the responsibility of the people at robot/AI companies (and across industries). It's not something we can just delegate to role-based authorities to sort out on our behalf.
In my home country, the people building the robots and job destroying AI have captured all three branches of government, and have been saying for over 40 years that they'd like to shrink government down to a size that they could drown it in a bathtub. The government can't be relied upon to do more than move its military into our cities to violently stifle dissent.
> This is what government is for, and not to stifle innovation by banning AI but by preparing society to move forward.
Many will argue that the purpose of government is not to steer or prepare society, but rather to reflect the values of society. Traditionally, the body that has steered (prepared or failed to prepare) society for impending changes was religion.

Secondly, you assume in the first place that we can somehow build a stable post-scarcity society in which people with no leverage can control the super-intelligent agents with all of the power. The idea that "government will just fix it" is totally ignorant of what the government is or how it emerges. In the long run, you cannot have a ruling class that is removed from the keys to power.
Lastly, who says we should all support this future? What if I disagree with the AI revolution and its consequences?
It is kind of amazing how your path of reasoning is so dangerously misoriented and wrong. This is what happens when people grow up watching Star Trek: they just assume that once we live in a post-scarcity future everything will be perfect, and that this is the natural endpoint for humanity.
The problem as I see it is not robots coming for my job and taking away my ability to earn a salary. That can be solved by societal structures like you are saying, even though I am somewhat pessimistic of our ability to do so in our current political climate.
The problem I see is robots coming for my mind and taking away any stakes and my ability to do anything that matters. If the robot is an expert in all fields why would you bother to learn anything? The fact that it takes time and energy to learn new skills and knowledge is what makes the world interesting. And this is exactly what happened before when machines took over a lot of human labour, luckily there were still plenty of things they couldn't do and thus ways to keep the world interesting. But if the machines start to think for us, what then is left for us to do?
What if the issue isn't government failing to prepare society to move forward, but instead, AI businesses moving us in a direction that more and more people don't consider to be forward?
1. We don’t need everyone in society to be involved in trade.
2. We made it so that if you do not take part in trade (trade labor for income), you cannot live.
3. Thus, people will fear losing their ability to trade in society.
The question is, when did we make this shift? It used to just be slavery, and you would be able to survive so long as you slaved.
The fear is coming from something odd: the reality that you won’t have to trade anymore to live. Our society has convinced us you won’t have any value otherwise.
We should compare how anti-government politicians talk versus how trained, educated neoclassical economists talk. The latter readily recognize that a valid function of government is to steer, shape, and yes, regulate markets to some degree. This is why we don’t have (for example) legal, open markets for murder.
Markets do not define human values; they are a coordination mechanism given a diverse set of values.
And even if the government did institute something like universal basic income, if all jobs were replaced, that would almost certainly mean a lower standard of living for the middle class, and even less socioeconomic mobility than there is now.
The AI will belong to the parasite class, who will capture all the profits - but you can't tax them on this, because they can afford to buy the government. So there isn't really a way to fund food and shelter for the population without taking something from the billionaires. Their plans for the future do not include us [0].
[0] https://www.theguardian.com/news/2022/sep/04/super-rich-prep...
And capitalism has flourished during this time. There's no reason to believe even more automation is going to change that, on its own.
Sure, Musk and Altman can make noises and talk about the need for UBI "in the future" all they want, but their political actions clearly show which side they're actually on.
Humans have depended on their own labor for income since we stopped being hunters and gatherers or living in small tribes.
So it's not just a matter of "the gov will find a way", but it's basically destroying the way humanity as a whole has operated for the past 5000 years.
So yes, it's a huge problem. Everything done under the banner of "innovation" isn't necessarily a good thing. Slavery was pretty "innovative" as well, for those who were the slave owners.
If you are going to use my work without permission to build such a robot, then said robot shouldn’t exist.
On the other hand a jack of all trades robot is very different from all the advancements we have had so far. If the robot can do anything, in the best case scenario we have billions of people with lots of free time. And that doesn’t seem like a great thing to me. Doubt that’s ever gonna happen, but still.
This requires faith that the government will actually step in to do something, which many people lack (at least in the US, can't speak for globally).
That's the sticking point for many of the people I've talked to about it. Some are diametrically opposed to AI, but most think there's a realistic chance AI takes jobs away and an unrealistic chance the government opposes the whims of capital causing people displaced from their jobs to dip into poverty.
I can't say I have a good counter-argument either. At least in the US, the government has largely sided with capital for my entire life. I wouldn't take a bet that government does the kind of wealth redistribution required if AI really takes off, and I would eat my hat if it happens in a timely manner that doesn't require an absolute crisis of ruined lives before something happens.
See the accumulation of wealth at the top income brackets while the middle and lower classes get left behind.
TLDR this is more of a crisis of faith in the government than opposition to AI taking over crap jobs that people don't want anyways.
So while you’ve identified the real problem we need to identify a realistic solution.
Anyway there is a name for your kind of take. It is anti-humanist.
But it's not like "the government" (as if there is just one) simply doesn't want to fix things. There are many people who want to fix the way we distribute resources, but there are others who are working to stop them. The various millionaires behind these AI companies are part of the reason why the problem you identified exists in the first place.
Ngl, if someone nuked all the USA's servers and wiped out all this bullshit, I'm not convinced the world would be in a worse state right now.
Let AI be used for scientific research, development, and helping people out. But if it's just for pushing your half-baked ideas, you may even be right, but ultimately the form, intentions, and results matter more than recklessly endangering everybody.
TBH I feel like the AI discourse around human replacement smells like hard-core psychopathic behavior - or that of a drunk driver who's just happy to be behind the wheel.
You have 0 data concerning the effect this would have on society - and I definitely prefer to live in a less technological world than a world that is full of people with psychosis.
So until we find how we can solve this bottleneck I have 0 sympathy for this kind of discourse.
The government should keep its charge as the protector and upholder of justice, I don’t want it to be those things and then also become a fiat source for economic survival, that’s a terribly destructive combination because the government doesn’t care for competition or viability, and survival is the last place you want to have all your eggs in one basket, especially when the eggs are guaranteed by force of law and the basket becomes a magnet for corruption.
We as a society get to decide what is done in our society. If robots replace a few jobs but make goods cheaper for everyone that's a net positive for society.
If robots replace EVERYONE's job, where everyone has no income anymore that's clearly a huge negative for society and it should be prevented.
That's noble. The first is dystopian.
The workforce gives regular folks at least some marginal stake in civilization. Governments aren’t effective engines against AI. We failed to elect Andrew Yang in 2020, who was literally running on a platform of setting up a UBI tax on AI. Congress is completely corrupt and ineffectual. Trump is gutting the government.
You may be right about AI taking jobs eventually, if that’s what you’re saying, but you come off pretty coldly if you’re implying it’s what “should” happen because it’s Darwinian and inevitable, and just sorta “well, fuck poor people.”
The over-application of objective phrases like "valid" vs "invalid" when talking about non-formal arguments is a sickness a lot of technical people tend to share. In this case, it's dismissive of harm to humans, which is the worst thing you can be dismissive about. "Please don't make me and my family miserable" is not an "invalid argument" - that's inhuman. That person isn't arguing their thesis.
"The problem". Another common oversimplifying phrase used by us thinkers, who believe there is "the answer", as if either of those two things exist as physical objects. "The problem" is that humans are harmed. Everything else just exists within that problem domain, not as "part of the problem" or "not part of the problem".
But most importantly:
Yes, you're absolutely correct (and I hate to use this word, but I'm angry): Obviously the ideal state is that robots do all the work we don't want to do and we do whatever we want and our society is structured in a way to support that. You've omitted the part where that level of social support is very hard to make physically feasible, very hard to convince people of depending on their politics, and, most importantly: It's usually only enough to spare people from death and homelessness, not from misery and unrest. Of course it would be ridiculous to outright ban for-profit use of automation, but even more ridiculous to write a bill that enforces it, e.g. by banning any form of regulation.
Short and medium term, automating technologies are good for the profit of businesses and bad for the affected humans. Long term, automating technologies are good for everybody, but only if society actually organizes that transition in a way that doesn't make those affected miserable/angry. It isn't, and I don't think it's pessimistic to say that it probably won't.
I'd love to live in Star Trek! We don't. We won't for hundreds of years if ever. Technology isn't the limiting factor, the immutable nature of human society and resources are the limiting factors. Nothing else is interesting to even talk about until we clear the bar of simply giving a shit about what actually, in concrete reality, happens to our countrymen.
It's called capitalism
Why are people even doing the jobs?
In a huge number of cases people have jobs that largely amount to nothing other than accumulation of wealth for people higher up.
I have a feeling that automation replacement will make this fact all the more apparent.
When people realise big truths, revolutions occur.
A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly.
The leader bios are particularly priceless. "While working for 12 years as the Director of HR for a multinational, Faith realized that firing people gave her an almost-spiritual high. Out of the office, Faith coaches a little league softball team and looks after her sick mother - obligations she looks forward to being free of!"
There's some truth in all satire though. I'm just shocked YC hasn't nuked the link from the front page.
Would Sam Altman even understand the original, or would he just wander ignorantly into the kitchen and fling some salt at it (https://www.ft.com/content/b1804820-c74b-4d37-b112-1df882629...)? I'm not optimistic about our modern oligarchs.
If we build AGI, we don't have a past comparison for that. Technologies so far have always replaced a subset of what humans currently do, not everything at once.
You would think! But it's not the type of problem Americans seem to care about. If we could address it collectively, then we wouldn't have these talking-past-each-other clashes where the harmed masses get told they're somehow idiots for caring more about keeping the life and relative happiness they worked to earn for their families than achieving the maximum adoption rate of some new thing that's good for society long term, but only really helps the executives short term. There's a line where disruption becomes misery, and most people in the clear don't appreciate how near the line is to the status quo.
Should be pretty clear that this is a different proposition to the historical trend of 2% GDP growth.
Mass unemployment is pretty hard for society to cope with, and understandably causes a lot of angst.
Every time we progress with new tech and eliminate jobs, the new jobs are more complicated. Eventually people can't do them because they're not smart enough or precise enough or unique enough.
Each little step, we leave people behind. Usually we don't care much. Sure some people are destined to a life of poverty, but at least most people aren't.
Eventually though even the best of the humans can't keep up, and there's just nothing left.
I'm starting to come around to the idea that electricity was the most fundamental force that drove WW1 and WW2. We point to many other more political, social and economic reasonings, but whenever I do a kind of 5-whys on those reasons I keep coming back to electricity.
AI is kind of like electricity.
We're also at the end of a big economic/money cycle (petrodollar, gold standard, off gold standard, maxing out leverage).
The other side will probably involve a new foundation for money. It might involve blockchain, but maybe not, I have no idea.
We don't need post-scarcity so much as we just need to rebalance everything and an upgraded system that maintains that balance for another cycle. I don't know what that system is or needs, but I suspect it will become more clear over the next 10-20 years. While many things will reach abundance (many already have) some won't, and we will need some way to deal with that. Ignoring it won't help.
Firing educated workers en masse for software that isn’t as good but is cheaper doesn’t have the same benefits to society at large.
What is the goal of replacing humans with robots? More money for the ownership class, or freeing workers from terrible jobs so they can contribute to society in a greater way?
kinda, I guess. but what has everyone on edge these days is humans always used technology to build things. to build civilization and infrastructure so that life was progressing in some way. at least in the US, people stopped building and advancing civilization decades ago. most sewage and transportation infrastructure is from 70+ years ago. decades ago, telecom infrastructure boomed for a bit then abruptly halted. so the "joke" is that technology these days is in no way "for the benefit of all" like it typically was for all human history (with obvious exceptions)
Yes, until we reached the art and thinking part. A big part of the problem might be that AI reached that part before it reached the chores.
At least now, things aren't so bad, and today's Luddites aren't trashing the offices of AI companies and hanging their employees and executives on nearby poles and trees.
I know, right? Machines have been gradually replacing humans for centuries. Will we actually get to the point where there are not enough jobs left? It doesn't seem like we're currently anywhere close to the point of not having any jobs available.
Has anyone thought about how the Federal Reserve plays a role with this? Automation puts downward pressure on inflation, because it doesn't cost as much to make stuff. The Federal Reserve will heavily incentivize job creation if inflation is low enough and there aren't enough jobs available, right?
We did figure that out. The ingenious cope we came up with is to entirely ignore said problem.
Imagine if the tractor made most farm workers unnecessary but when they flocked to the cities to do factory work, the tractor was already sitting there on the assembly line doing that job too.
I don’t doubt we can come up with new jobs, but the list of jobs AGI and robotics will never be able to do is really limited to ones where the value intrinsically comes from the person doing it being a human. It’s a short list tbh.
this is not about machines. machines are built for a purpose. who is "building" them for what "purpose" ?
if you look at every actual real world human referenced in this website, they all have something in common. which is that they're billionaires.
this is a website about billionaires and their personal agendas.
The issue is that there will be no one earning money except the owners of OpenAI.
Take outsourcing - the issue in developed nations was underemployment and the hollowing out of industrial centers. You went from factory foreman to burger flipper. However, it did uplift millions out of poverty in other nations. So net-net, we employed far more and distributed wealth.
With Automation, we simply employ fewer people, and the benefits accrue to smaller groups.
And above all - these tools were built, essentially by mass plagiarism. They train even now, on the random stuff we write on HN and Reddit.
TLDR: it's not the automation, it's the wealth concentration.
It's that the people failed to elect and wield a government that ensures all humanity benefits from it and not a select few who control it all.
And I think it will become clear that the governments that invest in it so their people have ownership, versus the ones that invest in it to benefit just a handful of the rich, are the ones who will keep society stable while this happens.
The other path we are going down is mass unrest, a move into a police state to control the resistance, like America is doing now, ending up exactly where Peter Thiel, Elon Musk, and Larry Ellison want: AI-driven surveillance and an Orwellian dystopian vision forcing people to comply or be cut out of existence by deactivating their digital IDs.
At the bottom of this page, there is a form you can fill out. This website says they will contact your local representative on your behalf. (And forward you any reply.)
Here's the auto-generated message:
I am a constituent living in [state] with urgent concerns about the lack of guardrails surrounding advanced AI technologies. It is imperative that we act decisively to establish strong protections that safeguard families, communities, and our children from potential harms associated with these rapidly evolving systems.
As companies continue to release increasingly powerful AI systems without meaningful oversight, we cannot rely on them to police themselves, especially when the stakes are so high. While AI has the potential to do remarkable things, it also poses significant risks, including the manipulation of children, the development of bioweapons, the creation of deepfakes, and the threat of widespread unemployment.
I urge you to enact strong federal guardrails for advanced AI that protect families, communities, and children. Additionally, please do not preempt or block states from adopting strong AI protections that may be necessary for their residents.
Thank you for your time.
[name]
New York
Here's a starter example: any company whose main business is training AI models must give up 10% of its equity to a fund whose charter is establishing long-term basic care (food, water, electricity, whatever) for citizens.
I'm sure people will come at me with "well this will incentivize X instead!" in which case I'd like to hear if there are better thought out proposals.
The problem really is political systems. In most developed countries, wealth inequality has been steadily increasing, even though if you ask people if they want larger or smaller inequality, most prefer smaller. So the political systems aren't achieving what the majority wants.
It also seems to me that most elections are won on current political topics (the latest war, the latest scandal, the current state of the economy), not on long-term values such as decreasing wealth inequality.
The problem is that the longer you refrain from equitably distributing wealth, the harder it becomes to do it, because the people who have benefited from their inequitably distributed wealth will use it to oppose any more equitable distribution.
Probably because most politics about how to "equitably distribute the wealth" of anything are one or both of "badly thought out" and/or "too complex to read".
For example of the former, I could easily say "have the government own the AI", which is great if you expect a government that owns AI to continue to care if their policies are supported by anyone living under them, not so much if you consider that a fully automated police force is able to stamp out any dissent etc.
For example of the latter, see all efforts to align any non-trivial AI to anything, literally even one thing, without someone messing up the reward function.
For your example of 10%, well, there's a dichotomy on how broad the AI is, if it's more like (it's not really boolean) a special-purpose system or if it's fully-general over all that any human can do:
• Special-purpose: that works but also you don't need it because it's just an assistant AI and "expands the pie" rather than displacing workers entirely.
• Fully-general: the AI company can relocate offshore, or off planet, do whatever it wants and raise a middle finger at you. It's got all the power and you don't.
For this to work at scale domestically, the fund would need to be a double-digit percentage of the market cap of the entire US economy. It would be a pretty drastic departure from the way we do things now. There would be downsides: market distortions and fraud and capital flight.
But in my mind it would be a solution to the problem of wealth pooling up in the AI economy, and probably also a balm for the "pyramid scheme" aspect of Social Security which captures economic growth through payroll taxes (more people making more money, year on year) in a century where we expect the national population to peak and decline.
Pick your poison, I guess, but I want to see more discussion of this idea in the Overton window.
There honestly aren't a lot of people in the middle, amazingly, and most of them work at AI companies anyway. Maybe there's something about our algorithmically manipulated psyches in the modern age that draws people towards more absolutist all-or-nothing views, incapable of practical nuance when in the face of a potentially grave threat.
What government in the foreseeable future would go after them? This would tank the US economy massively, so not US. The EU will try and regulate, but won't have enough teeth. Are we counting on China as the paragon of welfare for citizens?
I propose we let the economy crash, touch some grass and try again. Source: I am not an economist.
Sure, we will have the robot wrangler engineers, scientists, teachers, nurses, etc. But typically we have social unrest past like 8% unemployment. What happens when double digits of people have no jobs and all the time on their hands? Well, “eat the rich” might become very literal, and no amount of protection against that can really be bought.

Ultimately, the only option is either a Dune-style elimination of all AI (very unlikely) or we will have to decouple “living wage income” from “job”. If you think about it, the idea that you must have a job in order to make money is more of an implementation detail. If robots produce so much value that it is actively not even viable for humans to work, the only real logical solution is to distribute the profits from the labor of the robots in a way that isn’t by hourly rate times hours worked.

In fact, one possible way to do this is to tax the value produced by AI and then funnel that to a universal basic income program. Everyone by default is an artist. If you want to be a nurse or teacher or scientist or engineer you can. Otherwise just produce art at your leisure while the robots work the fields and cook your meals.
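To make the arithmetic concrete, here's a back-of-envelope sketch of the "tax AI value, fund a UBI" idea. Every figure in it (the value AI produces, the tax rate, the population) is a made-up assumption for illustration, not a forecast:

```python
# Back-of-envelope sketch: taxing AI-produced value to fund a UBI.
# All three inputs are hypothetical assumptions, not projections.

AI_VALUE_ADDED = 5e12     # assumed annual value produced by AI/robots, USD
TAX_RATE = 0.30           # assumed tax rate on that value
ADULT_POPULATION = 260e6  # rough US adult population

revenue = AI_VALUE_ADDED * TAX_RATE
ubi_per_person_per_year = revenue / ADULT_POPULATION
ubi_per_person_per_month = ubi_per_person_per_year / 12

print(f"Annual revenue: ${revenue / 1e12:.1f}T")
print(f"UBI: ${ubi_per_person_per_year:,.0f}/year "
      f"(${ubi_per_person_per_month:,.0f}/month)")
```

Even under fairly generous assumptions the per-person amount comes out modest, which suggests the size of the taxable AI value matters far more than the exact mechanism for distributing it.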
1. Massive population reduction (war is a very efficient way to achieve this)
2. Birth control, to slow down population growth to a stable rate near 0
3. Eugenics, to ensure only people with needed capabilities are born (brave new world)
In this scenario, 500,000 people (less ?) in charge of millions of robots and a minority of semi-enslaved humans would freely enjoy control over the world. The perfect mix between Asimov and Huxley.
All the agitation about "building a 1984-style world" is, at best, just a step toward this Asimov/Huxley model, and most likely, a deliberate decoy.
You don't understand. Almost nobody actually thinks about this in the right way, but it's actually basic economics.
Salt.
We used to fight wars for salt, but now it's literally given away for free (in restaurants).
If "robots produce so much value" then all that value will have approximately zero marginal cost. You won't need to distribute profits, you can simply distribute food and stuff and housing because they're cheap enough to be essentially free.
But obviously, not everything will be free. Just look at the famous "cost of education vs TV chart" [1]. Things that are mostly expensive: regulated (education, medicine, law) and positional (housing / land - everyone wants to live in good places!) goods. Things that are mostly cheap: things that are mass produced in factories (economies of scale, automation). Robots might move food & clothing into the "cheap" category but otherwise won't really move the needle, unless we radically rethink regulation.
[1] https://kottke.org/19/02/cheap-tvs-and-exorbitant-education-...
It won't if robots start driving trucks.
I understand the spirit of this, but most of this alarmism is misguided in my view.
Then you stopped needing them; a USDA census in 1959 showed the horse population had dropped to 4.5 million.
Now they're mostly used for riding, and in 2023, there were about 6.65 million horses.
(Citation: https://en.wikipedia.org/wiki/Horses_in_the_United_States#St...)
There's no law of nature that says "there's always a place for more horses", and anyone who suggested there might be would get laughed at. Well, there's also no law of nature that says "there's always a place for more humans", to butcher a line from CGP Grey from a little over a decade ago.
Mechanisation
But did you ever wonder what happened to the displaced workers? I'm not an expert on the agricultural changes in the USA, but in the UK, a huge amount of tumult can be directly attributed to agricultural changes.
AI & robots will generate wealth at unprecedented scale. In the future, you won't have a job nor have any money, but you will be fabulously wealthy!
1. When such wealth is possible through autonomous means, how can the earth survive such unprecedented demands on its natural resources?
2. Should I believe that someone with more wealth (and as such, more power) than I have would not use that power to overwhelm me? Isn't my demand on resources only going to get in their way? Why would they allow me to draw on resources as well?
3. It seems like the answer to both of these concerns lies in government, but no government I'm aware of has really begun to answer these questions. Worse yet, what if governments disagree on how to implement these strategies in a global economy? Competition could become an intractable drain on the earth and humans' resources. Essentially, it opens up the possibility of war at incalculable scales.
Right now the people that own those resources also depend on human labor to create wealth for them. You can't go from owning a mine and a farm to having a mega-yacht without people. You have to give at least some wealth to them to get your wealth. But if suddenly you can go from zero to yacht without people, because you're rich enough to have early access to lots of robots and advanced AI, and you still own the mine/farm, you don't need to pay people anymore.
Now you don't need to share resources at all. Human labor no longer has any leverage. To the extent most people get to benefit from the "magic machine," it seems to me like it depends almost entirely on the benevolence of the already wealthy. And it isn't zero cost for them to provide resources to everyone else either. Mining materials to give everyone a robot and a car means fewer yachts/spaceships/mansions/moon-bases for them.
Tldr: I don't think we get wealth automatically because of advanced AI/robotics. Social/economic systems also need to change.
So when are we going to start pivoting towards a more socialist economic system? Where are the AI leaders backing politicians with this vision?
Because that's absolutely required for what you're talking about here...
https://en.wikipedia.org/wiki/Social_credit
(It's where the excess profits from mechanisation are fed back to the citizens so that they don't need to work as much. It failed spectacularly.)
PG's argument is a huge amount of words to miss the point. Money is a tool that reflects power. Wealth derives from power.
> AI & robots will generate wealth at unprecedented scale. In the future, you won't have a job nor have any money,
I would gently tell you that you might want to look at the living conditions of the working class in the early 20th century. You might see planned cities like Bournville, or whatever the American version is; they were the 1% of working classes. The average housing was shit, horrid shit. If AI takes off and makes, say, 10% of the population jobless, that's what those people will get: shit.
It wasn't until those dirty socialists got into power in the UK (I don't know about other countries) that we started to see stuff like slum clearances, where the dispossessed were actually re-homed rather than yeeted to somewhere less valuable.
I'm especially disgusted with Sam Altman and Dario Amodei, who for a long time were hyping up the "fear" they felt for their own creations. Of course, they weren't doing this to slow down or approach things in a more responsible way; they were talking like that because they knew creating fear would bring in more investment and more publicity. Even when they called for "regulation", it was generally misleading and mostly to help them create a barrier to entry in the industry.
I think now that the consensus among the experts is that AGI is probably a while off (like a decade), we have a new danger. When we do start to get systems we should actually worry about, we're going to have a major boy-who-cried-wolf problem. It's going to be hard to get these things under proper control when people have the feeling of "yeah, we heard this all before".
That’s what makes it good satire.
in the end, if synthetic super intelligence results in the end of mankind, it'll be because a human programmed it to do so. more of a computer virus than a malevolent synthetic alien entity. a digital nuclear bomb.
The reason AI won't destroy us for now is simple.
Thumbs.
Robotic technology is required to do things physically, like improve computing power.
Without advanced robotics, AI is just impotent.
Only an AI as _dumb_ as us would want something as stupid as domination, which after all is based on competition for resources that a long time ago were distributable in a way that could feed every human on earth etc.
I'm not saying an AI would "choose" world peace, but people somehow assume that "kill everybody but me" and even "survival at all costs" are a given for a non-biological entity. Instead these concepts could look quite irrational.
assuming it can be terrified
1. Rationalists/EAs who moderately-to-strongly believe AI scaling will lead to ASI in the near future and the end of all life (Yud, Scott Alexander)
2. Populist climate-alarmists who hate AI for a combination of water use, copyright infringement, and decimation of the human creative spirit
3. Tech "nothingburgerists" who are convinced that most if not all AI companies are big scams that will fail, that LLMs are light-years from "true" intelligence, and that it's all a big slop hype cycle that will crumble within months to years. (overrepresented on this site)
Each group has a collection of "truthiness-anchors" that they use to defend their position against all criticism. They are all partially valid, in a way, but take their positions to the extreme to the point they are often unwilling to accept any nuance. As a result, conversations often lead nowhere because people defend their position to a quasi-religious degree rather than as a viewpoint predicated on pieces of evidence that may shift or be disproven over time.
Regarding the satire in OP, many people will see it as just a funny, unlikely outcome of AI, others will see it as a sobering vision into a very likely future. Both sides may "get" the point, but will fail to agree at least in public, lest they risk losing a sort of status in their alignment with their sanity-preserving viewpoint.
Finally a company that's out to do some good in the world.
However, in the US, the labor-force participation rate is 60%. There are a LOT of adults out there who don't work now. While I do think people find value in work, I would guess that number is less than 50% (potentially much lower). Which makes me think that 30% or fewer of adults are deriving significant value from their work.
On the one hand, something that affects 30% is pretty massive. But it feels less apocalyptic/overwhelming than it may seem.
Sam Altman doesn’t own AI. His investors actually own most of the actual assets.
Eventually there is going to be pressure for OpenAI to deliver returns to investors. Given that the majority of the US economy is consumer spending, the incentive is going to be for OpenAI to increase consumer spending in some way.
That’s essentially what happened to Google during the 2000s. I know everyone is negative about social media right now. But one could envision an alternative reality where Google explicitly controlled and censored all information, took over roadways with its self-driving cars, completely merged with the government, etc. Basically a doomsday scenario.
What actually happened is Google was incentivized by capital to narrow the scope of their vision. Today, the company mainly sells ads to increase consumer spending.
I just logged onto github and saw a "My open pull requests button".
Instead of taking me to a page which quickly queried a database, it opened a conversation with copilot which then slowly thought about how to work out my open pull requests.
I closed the window before it had an answer.
Why are we replacing actual engineering with expensive guesswork?
AI just makes it worse.
"To be or not to be? ... Not a whit, we defy augury; there's a special providence in the fall of a sparrow. If it be now, 'tis not to come; if it be not to come, it will be now; if it be not now, yet it will come: the readiness is all. Since no man knows aught of what he leaves, what is't to leave betimes? Let be." -- Hamlet
In the end it will be our humility that redeems us, as it always has been. Have some faith; the robots are not going to be that bad.
"Computer" used to be a job. Not anymore: https://en.wikipedia.org/wiki/Computer_(occupation)
What counts as "AI" is a moving target: https://en.wikipedia.org/wiki/AI_effect
I definitely think AI companies' marketing claims deserve mockery... but this isn't even good/interesting/smart satire??
It feels like we've fully completed the transition to Reddit here, with its emotional and contradictory high school political zeal (being both progressive and anti-progress at the same time) dominating the narrative.
Something about upvote-based communities is not holding up well in the current climate.
If humans have regressed enough intellectually where they are praising this as "Brilliant social commentary" then we absolutely SHOULD be replaced by AI.
You might say: "but you'll need money!". Why would I need money? The robots can provide my every need. And if I need money for some land or resource or something, I would have my robots work until my need was satisfied, I wouldn't continue having them work forever.
And even if robots did take all of the jobs, they would have to work for free. Because humans would have no jobs, and thus no money with which to pay them. So either mankind enjoys free services from robots that demand no compensation, or we get to keep our jobs.
So I really don't get the existential worry here. Yes, at a smaller scale some jobs might be automated, forcing people to retrain or work more menial jobs. But all of humanity being replaced? It doesn't make sense.
Another way to think about it is that if all of the jobs were replaced by AI, us leftover jobless humans would create a new economy just trying to grow food and make clothes and build houses and take care of our needs. The robot masters would just split away and have their own economy. Which is the same as them not existing.
In fact, Sam Altman wrote a good piece on this:
The best-paid workers were mechanised/outsourced first. For example, the weavers used to be a huge political force, literally re-shaping countries. Their long, slow & violent descent into obscurity led to workers' rights (see the Chartist movement).
Good thing there are no resources to fight over - land, minerals, and water.
The benign forms of superintelligence get shaken out by the non-benign forms.
>Another way to think about it is that if all of the jobs were replaced by AI, us leftover jobless humans would create a new economy just trying to grow food and make clothes and build houses and take care of our needs.
On whose land?
In any case, it will be cheaper to buy food from the AI. The remaining economy would just be the liquidation of remaining human-controlled assets into the AI-controlled economy for the stuff they need to survive like medicine and food.
> "Stupid. Smelly. Squishy."
> "While working for 12 years as the Director of HR for a multinational, Faith realized that firing people gave her an almost-spiritual high."
I love the marketing here. Top notch shit posting.
But besides that, no idea what this company does and it just comes off like another wannabe Roy Lee styled "be controversial and build an audience before you even have a product" type company.
That being said, still a good case study of shock marketing. It made it to the top link on HN after all.
Edit: it's satire, I got got :(
There is an intersection of certain industries and a particular demographic where adapting/retraining will be either very difficult or impossible.
Case in point:
- car factory town in Michigan
- factory shuts down
- nursing school opens in the town
- they open a hospital
- everyone thinks "Great! We can hire nurses from the school for the hospital"
- hospital says "Yeah, but we want experienced nurses, not recent graduates"
- people also say "So the factory workers can go to nursing school and get jobs somewhere else!"
- nursing school says "Uhm, people with 30 years of working on an assembly line are not necessarily the type of folks who make good nurses..."
Eventually, the town will adapt and market forces will balance out. But what about those folks who made rational decisions about their career path and that path suddenly gets wiped away?
The auto workers should leave town to find a suitable job, selling their homes to the incoming healthcare workers.
{ }
if (Steal) { then Lie; then Cloud; then Money; } else { Fail; then Die; }
{ }
> in the industrial revolution of the 19th century, what humanity basically learned to produce was all kinds of stuff like textiles and shoes and weapons and vehicles, and this was enough for the very few countries that underwent the revolution fast enough to subjugate everybody else

> what we're talking about now is like a second industrial revolution, but the product this time will not be textiles or machines or vehicles or even weapons. The product this time will be humans themselves.

> we are basically learning to produce bodies and minds. Bodies and minds are going to be ... the two main products of the next wave of all these changes, and if there is a gap between those who know how to produce bodies and minds and those who do not, then this is far greater than anything we saw before in history, and this time, if you're not part of the revolution fast enough, then you will probably become extinct

> once you know how to produce bodies and brains and minds, then cheap labor in Africa or South Asia or wherever simply counts for nothing

> again, I think the biggest question ... maybe in economics and politics of the coming decades, will be what to do with all these useless people

> I don't think we have an economic model for that. My best guess, which is just a guess, is that food will not be a problem; with that kind of technology you will be able to produce enough food to feed everybody. The problem is more ... what to do with them, and how will they find some sense of meaning in life when they are basically meaningless, worthless

> my best guess at present is a combination of drugs and computer games
> how to prevent stop
Comrades, we can now automate a neo-KGB and auto-garbage-collect counter-revolutionaries en masse with Soviet efficiency!
This has been nothing but a test-run for openly fascistic tech hoes to flex their disdain for everyone who isn't them or in their dumb club.
But I think it misses the bigger picture, which you hit on: Robots are helpful, but they're still just tools. An AI can crunch data, find opportunities, and trade faster than any human ever could. It's an incredible helper!
However, humans are the ones controlling them. We decide what they trade, and we build the secure systems they rely on. An AI millionaire is still relying on infrastructure that humans have to build to be fast, cheap, and totally stable. If the foundation is shaky, the AI's complex trades fall apart.
That reality is what makes me ignore the AI hype and focus entirely on infrastructure. The smart developers know that the long game is building utility assets on robust foundations.
They're moving to systems that are super fast and near-zero cost because AI requires zero friction. They're also demanding global market reach so the assets they create aren't trapped anywhere.
Instead of facing the new reality, some people start to talk about the bubbles, AI being sloppy, etc. Which is not generally true; mostly it's the users' psychological projection of their own traits and the resulting fear-induced smear campaigns.
The phenomenon is well described in psychology books. The seminal works of Carl Jung are worth a ton nowadays.
It just screams fried serotonin-circuits to me. I don't like it. I looked at the site for 2-3 seconds and I want nothing to do with these guys.
Do I think we should stop this type of competitive behaviour, fueled by kids and investors both microdosed on meth? No. I just wouldn't do business with them; they don't look like a trustworthy brand to me.
Edit: They got me with the joke. Being in this field, there are people who actually do talk like that, both startups and established executives alike, e.g. Artisan's billboard ads saying STOP HIRING HUMANS, and another New York company, I think, pushing newspaper ads for complete replacement. Also, if you're up on the latest engineering in agentic scaffolding work, this type of thing is no joke.