When agriculture, and then fossil fuels supercharging agriculture with petro-fertilizer, allowed humanity to become 100x more productive, the owners of the land and capital captured the gains almost entirely, leaving the proletariat with nothing but their labor, still struggling to survive despite the enormous windfall in energy.
Now that AI is on the verge of becoming the major producer of wealth from the 21st century onward, the owners of capital would very much like us to talk about anything except the possibility of capturing the wealth of models trained on the public's behavior and distributing it directly as a basic income, the way Alaska and Norway redistribute oil profits to citizens. The data that has been drilled and refined into inference models belongs to humanity, to every book and letter ever penned. Why are we allowing the investors* of Facebook, Google, and Microsoft to capture the value while leaving the rest of humanity to toil?
The talk of extinction is a magician's trick to divert attention from the fact that we could likely move to 4-hour workweeks in the coming decades, but only if wealth distribution is forced onto the capital owners; they will not share unless they are made to.
* Yes, indeed, the public can be the investors, as is the case with mutual funds and so on. But the people being obsoleted will have no significant wealth tied up in these stocks, and so will receive no dividends. As an alternative to aggressive taxation, I would support a forced dilution of the stock of any company found to be replacing its workforce with a superintelligence: distribute the newly minted shares to all citizens so dividends can pay out to the rightful recipients of royalties.
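As a back-of-the-envelope sketch of how that dilution-and-distribution could work (every number here is invented for illustration, not a policy proposal):

```python
# Hypothetical sketch of the forced-dilution idea above; all figures invented.
def dilute_and_distribute(existing_shares, dilution_rate, num_citizens):
    """Mint new shares equal to `dilution_rate` of the current float and
    split them evenly among all citizens; return the key quantities."""
    new_shares = int(existing_shares * dilution_rate)
    shares_per_citizen = new_shares / num_citizens
    # Existing holders' stake shrinks from 100% to this fraction:
    remaining_ownership = existing_shares / (existing_shares + new_shares)
    return new_shares, shares_per_citizen, remaining_ownership

# Example: 10B shares outstanding, 10% dilution, ~84M citizens (Germany)
new, per_citizen, remaining = dilute_and_distribute(10_000_000_000, 0.10, 84_000_000)
print(per_citizen)   # ~11.9 shares per citizen
print(remaining)     # existing holders keep ~90.9% of the company
```

The point of the toy model is just that a modest, one-off dilution spreads a dividend stream to every citizen while leaving existing holders with the overwhelming majority of the company.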
(At least in Germany)
I don't say this is fair, just that it's easier for a society to raise the base level for everyone.
You can’t “objectively” look back and say people “have it better” now
Sure, you can look at all sorts of data and metrics, but that doesn’t mean people feel like they are better off now, which is what matters in the end
So the real question we should be trying to answer is: are people feeling better now than how they felt 200 years ago?
We could talk about it forever, but ultimately, it’s impossible to know
Are our poor better off than the financially independent aristocrats of yesteryear? The kind who could be "an artist" or something their whole life. I guess in some ways it's better now: more tech. But I'm not sure freedom and respect can be bought or replaced with tech alone.
Most importantly, any progress made by humanity as a whole rather than by "the poor", such as general progress in medicine, needs to be subtracted.
Frankly, this argument is used as a cudgel.
Only if you look at medical and technical advancements. But objectively, kings in Germany had amounts of land and real estate that the modern German can only dream of.
Sure, we have iPhones and Netflix, which kings didn't have, but they had the valuable appreciable assets that we don't have.
Dying in a coal mine to enrich a capitalist wasn't a great time.
If it were wealth centralization that made things great, you'd expect the poor to have been at their best when the robber barons were in charge.
But is AI a new source of wealth? That remains to be seen. It needs training on other people's data, which is invariably copyrighted, so it doesn't look like AI will be a new source of wealth, imo.
Japan has issued guidance that it won't use copyright law to dampen the pace of innovation; it is giving free rein to innovators to capture any wealth produced as a byproduct of publicly available (but copyrighted) data.
The USA has the strongest intellectual property protections in the world, and I hope we can do some good with them. (As it is, I see copyright preventing more art from being made than it incentivizes, and I'd rather abolish it, but not before a basic income is in place, so that artists don't have to feel their creative work is the only asset they have to hold onto and protect.)
(The question becomes: once everyone is out of a job, who is paying for goods and services to keep the economy alive? Hence, basic income.)
Only if society allows the use of AIs that have been trained on content for which they didn't have the legal rights to use for that purpose (basically theft).
(As opposed to precise-reproduction in output, covered by copyright)
We've been having the same conversation since the dawn of science fiction. If at any point there is general confusion between what is artificial and what is human, it will cause such an alarmist backlash that AI will always be kept in check, or destroyed.
No human wants to admit that they thought they were talking to a human the whole time when in fact it was a robot. For that reason alone AI will never be allowed to grow to a point where it threatens life "as we know it."
How would we tell the difference?
Humanity is doomed with or without them supplanting us; we should celebrate the immortality they offer us.
I was a bit disappointed by the lack of creativity in dreaming up an AI-infested future.
People are able to force each other into gullibly working 40 hours a week so that they can have shelter, food, and a smartphone. This is not a rational thing; it is historically grown groupthink on a massive scale. Trying to use rational arguments to forecast what the future will look like based on this chaotic process seems silly at best.
Nobody in the 1400s expected cars or democracy, let alone Facebook. So if this AI thing is as good as the printing press, then I wholly expect everyone to clone themselves a couple of thousand times, inhabit the interiors of planets, and grow 500,000 years old before entering higher education, not something as mundane as "letting an AI fill in the paperwork to set up a company".
TL;DR, you can safely skip this article.
Unless each of those clones can upload its unique experiences to my original self, so that the original me gains from those experiences, what's the point of the clone? Just build a generic robot. The only caveat would be if that clone is doing boring drone work based solely on the abilities of the original me, but please, do not upload those mundane experiences back to the original me.
A good use of a clone, for me, would be to send a clone of me to a colony on Mars. Send a fleet of clones of me on a multi-generation ship to distant stars. Leave a clone of me at work doing whatever, while the original me travels and experiences the rest of this globe.
A side note: if you only had cool and positive experiences in life, that would become the new baseline, and everything worse would be suffering. Not so smart an approach for long-term happiness.
The best-tasting food comes after some starvation, even if it's just bread... even in the literal sense, as some proper mountaineers who have come close to dying of starvation can attest.
I do not doubt that machine learning and neural networks will accelerate research, but the limiting factor is still going to be humans in the loop for the foreseeable future, given the current state of the art (e.g. LLMs with tree-of-thought reasoning, and LLM-powered agents that are easily subverted the same way one hacks a badly written PHP application). You will have people using ML to crunch large datasets or accelerate simulation work, but that's it.
Asymmetric attacks are very feasible with today's LLMs and art generators, but that harm has already come to pass. It is also not a new harm. If you don't believe me, then I've got hard evidence of Donald Trump cheating in Barack Obama's Minecraft server[3].
Also...
> These include increasing the number of researchers working on safety (including “off switches” for software threatening to run out of control) from 300 or 400 currently to hundreds of thousands; fitting DNA synthesisers with a screening system that will report any pathogenic sequences; and thrashing out a system of international treaties to restrict and regulate dangerous tech.
OK, so first off, all those AI safety researchers need to be fired if they thought "off switches" were a good thing to mention here. It's already fairly well-established AI safety canon that a "sufficiently smart AI" will reason about the off switch in a way that makes it useless[0].
Furthermore, notice how these are all obviously scary scenarios. The article fails to mention the mundane harms of AI: automation blind spots that render any human supervision useless[1]. I happen to share Suleyman's opposition to the PayPal brand of fauxbertarianism[2], but I would like to point out that they're on your side. Elon Musk talks about "AI harms" just like you do and thinks it needs to be regulated. The obvious choice of requiring a license to train AI is exactly the kind of regulatory capture that actual libertarians, right or left, would rail against. What we need are not bans or controls on training AI but bans or controls on businesses and governments using AI for safety-critical, business-development, law-enforcement, or executive functions. That is a harm that is here today, has been here for at least a decade, and could be regulated without stymieing research and creating "AI NIMBYs".
[0] https://www.youtube.com/watch?v=3TYT1QfdfsM
[1] https://pluralistic.net/2023/08/23/automation-blindness/#hum...
[2] AKA, power is only bad if the fist has the word 'GOVERNMENT' written on it. Or 'let snek step on other snek'. This is distinct from just right-libertarianism.
To me, there seems to be way more upside than downside, like with pretty much every new tech.
The tl;dr is that more jobs will be created than lost, and those jobs will be more interesting, not of the bs-job variety many report working.
Also, what's wrong with a world where most people don't have to work uninspiring tasks and can rather explore other interests or hobbies?