My hope is that a desire for authenticity prevents this from happening – whether that's a strong bias towards human content creators, towards speaking to a human on the phone for customer support (already something companies try to win customers on), or even winning customers with well-paid humans cooking their food for them (something that seems to be increasing).
Unfortunately, I suspect we will get a two-tiered system, where the "middle class" (whether that's disappearing is another question) can afford human content/human support/etc, and the working class are forced to endure poor experiences with AI generated content and so on. This may even get worse over time if, say, AI hits education and provides a worse quality education, but that's probably no different to what we already have with public school funding issues in the US/UK and many other countries.
I’m imagining a world where companies like Netflix and Spotify introduce dirt-cheap subscription tiers that are populated with AI-generated content, while they raise the prices on their existing offerings that have stuff made by humans.
If you’re poor you watch shitty, AI-generated movies on Netflix for $1.50/mo.
I suspect an even worse scenario. If you're poor you watch shitty, AI-generated content for the current price. If you're persuaded to believe that you're middle class, you cough up twice as much to watch shitty, AI-generated content called "premium" because it will be "artisanally curated by humans (tm)". If you're really rich you will go to the theatre or opera.
I'm seeing a world where kids are put in a room with an AI that "educates" them, setting the lowest possible bar for personal development for your average kid and as quickly as possible expelling kids who are deemed to be a problem.
Having real teachers will be a luxury.
All good movies/shows are elsewhere and usually “for rent”. Anything in the IMDB top 100 is for rent, not included in any subscription.
Due to the economics of information, this is unlikely for content. The cost to watch Avatar is often quite close to that of a local film with <1% of its budget. The most interesting clips made by content creators (Ted Talks, Veritasium, MrBeast, etc.) are accessible to everyone with internet access. There will be some price differences based on access points (theater vs TV vs phone) or resolution, but not necessarily the content itself.
I also suspect the best content in the future will be co-created by humans and AI.
The way I was thinking about it was, for example, newspapers going online behind paywalls, YouTubers/Podcasters creating paid content on Patreon in order to fund their activities.
On the other side you get Buzzfeed creating AI generated quizzes (rather than news), and YouTube/TikTok content farms with AI generated scripts. Both of these are ad supported, so free to the end consumer, and therefore more accessible than a Patreon/NYT/etc subscription.
Meanwhile Hollywood has so much money that they don't know what to do with it - they give it to directors like Michael Bay to create all those pointless CGI-fests.
The latter has nothing to do with CGI or production budget: it's about marketing budget, in which AI has less to offer.
The former, in my view, will not be aided by AI. The "glorified stage plays" are (not always but on aggregate) of significantly higher quality than quite a lot of Hollywood output. AI won't change that in any positive way.
It's the same sense in which the revolution in the digital distribution of games, and general improvements in the accessibility of tooling, expanded the market into more niches rather than producing more AAA games from smaller developers.
Have you dealt with human support agents recently? It's just an exercise in gaslighting. I can't wait for AI to take over.
What should be desired is a return to quality and respect for the user (or at least taking the time to understand them), which unfortunately seems unnecessary now that global reach and unimpeded manipulation and influence let companies ignore demands for improvement.
I asked it to forward my email to their devs, human or not ;)
Training these is still out of reach, but fine-tuning is getting close (LoRA) and running them is almost easy at this point.
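The core LoRA trick is simple enough to sketch: freeze the pretrained weight matrix and train only a low-rank correction on top of it. A toy illustration in NumPy (not a real training loop; the dimensions and initialization are arbitrary, chosen just to show the parameter savings):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 512, 512, 8          # full dims vs. low rank
W = rng.normal(size=(d, k))    # frozen pretrained weight

# LoRA: train only two small matrices whose product is a
# low-rank update to W. B starts at zero so the model is
# unchanged at initialization.
A = rng.normal(size=(r, k)) * 0.01
B = np.zeros((d, r))

def forward(x):
    # effective weight is W + B @ A, but we never materialize it
    return x @ W.T + x @ A.T @ B.T

# Parameter savings: d*k frozen vs. r*(d+k) trainable.
trainable = r * (d + k)
full = d * k
print(f"trainable: {trainable} vs full: {full} "
      f"({100 * trainable / full:.1f}%)")
```

With rank 8 on a 512x512 layer you train about 3% of the parameters, which is why fine-tuning fits on consumer hardware while full training doesn't.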
The products we've built so far are power-centralized, but augmentative. Where we go from there is up to people, not the nature of the technology. My hope is decentralized and augmentative, but the worst case scenario is indeed centralized and substitutive.
But also, a model running on your phone generating AI content is likely to be cheaper than, but not as good as, human-curated content in whatever form that takes.
I think compute can be decentralised, while the power is still centralised, or at least those at the low end lose out on quality.
Hatred of phone calls aside, I heard an interesting take from Alex Hormozi (paraphrased): "You're on the phone and you say, are you human? No? Oh thank God, because you know the AI has a thousand times more experience than all the humans combined."
Because that's crazy to me. Humans can be reasoned with. AI can't. My experience with the likes of ChatGPT tells me that if the AI is wrong about something (which it very very often is), there's no point trying to explain to it that it's wrong or how it's wrong, it will say something like "You are right, sorry for the confusion." and follow up with the same or a similar error again.
AI might eventually become an alright first line, but losing the option to speak with a human -- an intelligent entity which can actually be reasoned with -- seems dystopic.
1. In Italy, booking a train ride without providing your phone number, email, and tessera sanitaria proved a challenge last month.
2. COVID required us to carry our green pass everywhere (I'm not anti-vax).
3. Getting stopped at a border in some countries (like the US) and being unable to present a phone for search might get you denied entry.
The whole affair was predicted already 50-70 years ago. Jacques Ellul's La Technique is a great comprehensive read on this idiocy
Previously, I had to deal with "Junior python" or "Junior bash" crap at $work.
Finding the dangerous bugs was measured in seconds. Helping the person to be better in the future used to work.
Now the whole company is requesting code from ChatGPT. Hey, please, use my script.
I have to deal with "looks fine at first/quick view" code that needs deep analysis to understand what it's trying to do, why, and where the (100% guaranteed) hidden "break production" failures are.
It's more like "Where's Waldo"... you know there will always be at least one or two things really, really wrong, one or two (or more) catastrophic details, hidden below something "apparently nice".
And what's worse, all effort to point out and fix issues is lost, or repeated again and again.
I apologize, but as a senior sysadmin/oncall model, I cannot run your chatgpt code, until you understand how things work.
> code, that needs deep analysis to understand what it's trying to do
Because if there's one thing AI can already do really well, it's explaining what some code is doing.
But it just makes up a series of words which sound good. It's accidentally correct 75% of the time and just wrong the other 25%. (and IMO, just wrong in some way almost every time I use it).
I don't mean banal stuff like Copilot, which is a double-edged sword that might be used against junior developers. I mean world-changing benefits, one step closer to the techno-utopia.
Because on paper, the net benefits vs net negatives, for me and other AI pessimists like the author, are not worth the amount of spam, customer service bollocks and lost jobs LLMs will cause to basically make mega-corporations richer.
So please tell me, what will LLMs ever do for us?
To this day I find myself doing a double take every time I'm about to ask ChatGPT to produce a ludicrous amount of output, because it would simply not be reasonable to ask that of some random person. But with LLMs you can do that, over and over again and it will comply every time.
How great is it that if I want to learn about, say, hexagonal architecture pattern, I can ask this thing to produce 500 LOC in a language I happen to be familiar with, and then interrogate it to no end until it clicks for me?
Regular chatGPT is even more dangerous because you have no idea what it's referencing most of the time. Yet it lowers the bar to this poor information to such a degree that people will be incentivised to use it regardless.
More likely we'll replace poor people's teachers with ChatGPT, and whoever has more cash affords actual teachers. This is so real that we're experiencing it >today<, on a different scale, with private schools and distance learning in countries such as Brazil, where there are projects to reduce schools and/or move some of them to distance learning.
LLMs are literally science fiction come to life, something out of the movies.
It's early days still, they're a bit primitive and the compute substrate is woefully underpowered for what we would ideally like... but the promise is there.
It should be the goal of every human to eliminate as many jobs as possible.
Every job that is eliminated frees up that person and the people who would have done that job in the future to do other work.
Humanity gains the productivity of that other work.
If we had not forced painful job losses on people, we would still be 97% subsistence farmers, as we were at the time of the American Revolution in the 1770's.
If you oppose progress due to the job losses, please at least be consistent and become a subsistence farmer.
I mean this is exactly what Drew is disputing.
The capital class will remove the jobs, capture the value of whatever labor is saved, and the workers who lose their jobs will be left with fewer resources and no realistic path to replace their lost income. They won't glide towards some utopian vision where everybody ends up working one day a week on their passion projects.
In some kind of abstract way these ML techniques provide potentially useful tools, but workers will not be the ones to see the benefits of more "productivity" that these tools enable.
The US can't even agree that, despite its vast wealth, health care is something everybody should receive regardless of employment. This country lacks the imagination to handle this situation in a way that improves lives for workers.
I work with a guy who has been covering up his poor use of the English language and it's been quite weird to be honest.
Fluent-sounding emails full of convincing-sounding gibberish are what I've been receiving. Unfortunately the guy just isn't good at communicating his ideas using written language, and the LLM can't really fix that.
Same with dev. Does it actually make you better, or does it make you not think about learning so much?
I don't worry too much about the lack of correctness, as long as the learner is aware of the possibility - lots of teachers are sometimes wrong too, and a lot of information on the internet is incorrect.
But there indeed is a certain threshold of correctness. Below that threshold, the technology is more harmful than helpful.
The threshold is also different for different aspects of education - for example for many uses in software engineering education current LLMs are already good enough, but they are nowhere near where they would need to be for e.g. law.
Every (tech) sword is triple-edged, and it's the third one that people trip up on because they never expect it.
That code generation is "banal" to you, or a threat to junior developers, suggests to me that it's already huge.
Likewise the job replacements you fear: that's only even possible when the AI doesn't suck massively.
The current state of downloadable models implies any corporate advantage is temporary.
Same applies for a teacher too, in various other aspects. Reducing important professions into statistical models is exactly the kind of crappification that the author's talking about. The logical conclusion of perfect sensors and tensors is not here, and the lacking substitutes along the way will be profit-driven, not solution-driven.
For free meaning that it is paid by quietly slipping ads into prescriptions or lessons?
I'm sorry... I'm supposed to trust my healthcare and child's education to a piece of software whose primary feature is its ability to effectively hallucinate and tell convincing lies?
And assuming AI is at all effective, which implies valuable (which implies lucrative,) you expect services built on it to remain free?
That's not how anything works in the real world.
You clearly don't have a deep understanding of what doctors and teachers do.
Speaking from an entrepreneur's POV, AI gives me an unfair advantage with respect to the large whales. It lets us complete projects (specifically, software projects) 10 times faster and achieve things that previously I'd have had to raise millions to achieve.
I can't tell whether the net effect is positive or negative, but it's not clear-cut for sure.
I'm building a product as an experienced engineer with 17 years in the field, and the only place LLMs would be useful without slowing me down is writing copy, for which I would rather pay a copywriter, which I will probably do.
An LLM making you 10 times faster to build a product (of which coding is like 20%) needs better sources and proof, as it's a ludicrous number, and a whale with billions has access to much better tools than ChatGPT (i.e. actual paid humans).
If by building product you mean "create a basic landing page in HTML", sure. But a landing page is not a company, nor a product you can sell.
For example, the right database schema or the right architecture I could easily see saving you 10x the development effort, but these are the things copilot is least able to help you with.
Am I misunderstanding something here? Or could it be that what you are finding is that your experience is helping you build faster, and this is being misattributed to AI?
That means you're envisioning a future where LLMs are so powerful that they cause huge societal impacts, but somehow they can't detect spam, make customer service more effective, or make the general populace smarter?
You're asking for an answer that you by definition wouldn't accept. To me LLMs are like cloud computing: cloud computing technically didn't change the world in the sense that your average person knows what a load balancer is... but there was a democratization that allowed many great things to be built at a scale that was previously not possible, and the results of that are what brought value.
Imagine you could run teams of marketing/sales/support ai-agents and focus on your core business. "The next 1B, 1 Person companies".
Just 15 years ago you'd need teams of 10+ people to do basic ml tasks you could do today. Now you have an LLM or a SaaS service that outperforms those teams for the fraction of the cost.
One artist might be able to produce a complete animated movie controlling all aspects.
But that’s the only thing I could imagine and it must be local not to be a privacy nightmare. Because I have no doubt that Google and Amazon are already working on that.
For anything else, I just agree with you and the author. It will just make capitalism worse.
I'm always getting reminders about bills I've auto-paid, that Google has dug up from the bowels of my email, and yet I still find myself scrambling on things I forgot about.
I'm highly unimpressed by anything other than the incredible confidence with which these systems will lie. Online and phone support is bad enough, AI will not make it better, not because they can't make an AI that could figure out that you need a technician to come, but because corporate won't trust the AI to not deliver a pony along with them. And who could blame them? As soon as they give AI any power it will become the tool of the stainless steel rat. Instead, it will continue to be used to fend off support calls and fudge internal metrics, just like the last round of automated support.
On a similar note, I watch little TV because of the exact same reason, because I have little control over what I want to see and most of the interesting stuff that I may want to watch may be spread across different channels and broadcasted at nearly the same time. Perhaps I should take the same step towards the internet, given that it’s becoming filled with junk that I don’t want.
Maybe you do. But for most people the reason they watch little TV is because the internet is simply better.
So no, not many people would just stop using the internet... unless there is something better.
Why write this article other than to smugly say "told you so" in the cases where you turn out to be right? It's a zero-risk take.
Looking at the advances in AI (Chess, Go, Protein Folding, MidJourney, ChatGPT) and your takeaway being "Humans will use this in bad ways" shows a ferocious lack of imagination.
I notice a desperate, but failing, attempt to lump the advances in AI into the same pool as crypto greed, because that was the smug naysayers' nirvana.
I also get that feeling. It seems it is now fashionable to take down anything that might change things (I'm not saying that all change is good). Change resistance is something that always existed and we are already used to, but somehow this looks a bit different, more ideological.
When I read things like this it makes me think the author hasn't used ChatGPT in their job yet.
Here is a really simple example of how I used ChatGPT this afternoon that saved me, I would estimate, about 2 hours of work:
I had 2 CSV files, with different formats but which (supposedly) had the same functional information in them.
I had a very complicated BigQuery SQL statement that worked on the first file format by importing it as a blob to a table then combining it with a bunch of other CSV files. I wanted to know how much I might need to change my query if I started using the 2nd CSV file (which takes much less time to export from the system that produces it).
The query of course has a big complicated SELECT statement, but also several common table expressions and joins, some of which use columns from the CSV file I was looking to replace.
So I gave ChatGPT the 2 header rows, and the big complicated query. I asked it to tell me likely mapping between the 2 CSV files for similar columns, and to give me a list of columns that appeared in one but not the other. I asked it to mark with an exclamation mark those columns which appeared in the query.
It got some of the things wrong but because I'm pretty familiar with the query and the files I was able to pick up on those and it was much, much easier to browse the output and pick out the errors than it was to break down the query and do all that analysis from scratch.
The whole process using ChatGPT took me about 15 minutes.
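For what it's worth, the deterministic core of that task, fuzzy-matching one header row against the other, is a few lines of stdlib Python. A rough sketch with made-up column names (the value of ChatGPT in the anecdote was doing this plus reading the query, but the matching itself is mundane):

```python
import difflib

# Hypothetical header rows from the two CSV exports.
old_cols = ["customer_id", "order_date", "total_amount", "region"]
new_cols = ["CustomerID", "OrderDate", "Amount", "SalesRegion", "Channel"]

def norm(s):
    # crude normalization so naming conventions don't block matches
    return s.lower().replace("_", "").replace(" ", "")

def map_columns(old, new, cutoff=0.6):
    """Best-effort mapping from old column names to new ones."""
    lookup = {norm(c): c for c in new}
    mapping, unmatched = {}, []
    for col in old:
        hits = difflib.get_close_matches(norm(col), lookup, n=1, cutoff=cutoff)
        if hits:
            mapping[col] = lookup[hits[0]]
        else:
            unmatched.append(col)
    return mapping, unmatched

mapping, unmatched = map_columns(old_cols, new_cols)
print(mapping)    # old name -> closest new name
print(unmatched)  # columns with no plausible counterpart
```

Cross-referencing the mapping against the columns actually used in the query is where it gets tedious by hand, which is presumably where the 15-minutes-vs-2-hours saving came from.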
And I have wins like that I would say about once per day. I mean it: I'm saving probably about 2 hours work per day by using ChatGPT on average, on tasks just like this.
Now multiply that by all the shit that people are doing all the time and think about all the needs that will get met as a result of this increase in productivity that are not currently being met, and you have an idea of why AI is fucking awesome, ESPECIALLY given the fact that we need a decreasing working population to support an increasing retired population.
The implementation always needs massaging first, but the tests are almost always great, though it likes to produce pointless ones sometimes.
While the new AI frontier might be led by prohibitively expensive (and closed) large language models, we’re also seeing great grass-roots progress at a smaller scale with modest models trained by the developer community. I trained a baby GPT the other day using llama2.c for my own use cases.
It’s Linux vs Sun/Unix all over again.
Incidentally, I wrote about that stuff for my uni entrance exam aged 17, and it has always seemed kind of obvious and inevitable to me. It surprises me there are so many skeptics.
> Finding and setting up an appointment with a therapist can be difficult for a lot of people – it’s okay for it to feel hard.
Uff, what about crappy therapists? If an AI bot tells you to kill yourself, it's a pretty crappy AI bot. But there are a lot of crappy licensed therapists too. There's also a lot of crappy articles you can find on Google. The world is full of crappy resources. AI can and most likely will be used in all sorts of medical use cases.
I tried out one of the newer chatbots tailored for this type of interaction a couple weeks ago, and the discussion was better than what I got from probably 80% of the actual human therapists I've encountered as a patient.
Suicidal ideation shouldn’t exist but it does and you have a better chance with a therapist than without. It’s probably irresponsible right now to recommend an AI for treatment. That might change but I bet it’s further away than you’re thinking.
Yes, there are some narrow applications that will be GOOD. But will the good outweigh the bad? No, not at all. It never has.
We'd all be happier as a society if we went back in time.
https://en.wikipedia.org/wiki/Nuclear_holocaust#/media/File:...
The jumps from ~nothing general purpose/consumer-facing to GPT3, 3.5, Copilot, GPT4 all seemed enormous and pretty much back-to-back. Extending that curve points to some pretty extreme destinations (positive or negative), but now the sentiment seems to be that curve was a bit of a mirage.
I intuitively share the view that that curve was a mirage (and a byproduct of years of R&D backlog + OpenAI’s release cadence) but that isn’t coming from any rigorous analysis.
The climate change movement for example is ultimately a movement to reduce the resource usage of the working class. Reduced resource usage should in theory lead to a reduction in economic growth - but it hasn't. This is because they simply attained their growth through other means - reduced salary (adjusted for inflation), larger tax, shrink-flation (selling something smaller at the same price), etc. They will always post record profits to appease investors, and that growth will directly come from your pockets.
AI is probably the only thing slowing this down, and it likely is a bubble. When it bursts, do you think these companies will go back to employing humans for customer services? Like hell they will, you just won't get any customer support at all. You may say "fine, I'll take my money elsewhere", but you'll find yourself picking the lesser of evils [++]. Anybody who tries to offer human interaction will simply not be competitive on price, and people are relatively poorer than they were - so they have no choice. It's not as if you can go without water, gas, electric, phone, phone provider, ISP, etc.
[+] The coming economic recession will also not be enough to reset this trend, and there is no political will to address it.
[++] The government may mandate that these companies have human operators, but it won't work, they'll just maliciously comply. One human operator, a call queue of thousands of people, "our lines are unusually busy at the moment", outsource the humans to the current poorest Country and give them zero power to deal with customer queries, etc. It would be exceptionally difficult to prove they are not providing a good service, and at worst they fine them - which might still work out cheaper than dealing with customer queries.
Until that, things are just going to be as-is. Sometimes up, sometimes down, overall upwards hopefully, and sometimes advances will upset the status quo. And hopefully no tech will solidify the absurd rich-poor divide permanently.
> A reduction in the labor force for skilled creative work
I agree and disagree with this. Stable diffusion is art, but creating the art is still within the realm of artists. Also, they'll still need copyediting, refining, etc. I think creatives will transfer or complement their skills with this stuff, like some are already doing. (Example: https://m.youtube.com/watch?v=VGa1imApfdg)
I also very highly doubt that fine art will ever be 100% AI. Uniqueness drives their value.
> The complete elimination of humans in customer-support roles
Definitely not. Human customer service is key for achieving high eNPS scores. People will always want to talk to other people, even if IVR and chat can address their needs.
> More convincing spam and phishing content, more scalable scams
Definitely, but it is well documented that the most common types of scams are made to be deliberately "off" to find easy marks more quickly.
> SEO hacking content farms dominating search results > Book farms (both eBooks and paper) flooding the market
Both of these have been happening for many years. OpenAI will make it easier to stand up boilerplate hello-world starters though (as OP called out). I suppose Google will downrank sites like this to prevent incentivizing this.
> AI-generated content overwhelming social media Widespread propaganda and astroturfing, both in politics and advertising
This is the one thing I'm actually concerned about. I hope that Reddit doesn't become people talking to other people via ChatGPT assistants. That would be a cultural net loss.
There will absolutely be some great benefits provided by LLMs and the like. Alexa type devices that are more useful than a light switch. Auto spelling correction that actually works most of the time. Maybe a microwave oven that just has one ‘Heat this up’ button.
But I think the beneficial use cases will be a fraction of the overall use cases.
These technologies are going to be far more effective at enshitifying our world. Spam, scams, replacing artists and knowledge workers with tools that can produce ‘good enough’ output… and yeah, military capabilities that’ll allow humans to kill more humans more cost effectively.
I’m looking forward to the good stuff, it’ll be neat. But absolutely dreading the wave of horribleness that’ll inevitably come of this.
Again, I've said this same thing months and yet the AI bros continue to deflect with more nonsense to justify burning the planet with their snake-oil garbage.
Drew's points still stand: the deep learning industry has no efficient methods of training, fine-tuning, or inference, and continues to burn down the planet no matter the amount of greenwashing they project.
And I can run inference on my laptop, transferring capabilities to smaller models is a thing, quantization is a thing, optimization is a thing. And, DL has been feasible for barely a decade, LLMs for a few years.
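To make "quantization is a thing" concrete: a toy absmax int8 weight quantization in NumPy, the kind of scheme that lets large models run in a quarter of the memory on consumer hardware (an illustrative sketch, not any particular library's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
w = rng.normal(scale=0.02, size=(4096,)).astype(np.float32)  # fake weights

# Absmax int8 quantization: scale so the largest weight maps to 127.
scale = np.abs(w).max() / 127.0
q = np.round(w / scale).astype(np.int8)   # 1 byte per weight, was 4
w_hat = q.astype(np.float32) * scale      # dequantize on the fly

err = np.abs(w - w_hat).max()
print(f"4x smaller, max abs error {err:.2e}")  # error bounded by scale/2
```

The rounding error per weight is at most half the scale, which is why int8 (and even 4-bit) inference tends to cost little accuracy in practice.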
Are you talking about crypto by any chance?
It is an energy wasting snake-oil burning the planet. [0] [1] Especially when they confidently hallucinate without any transparent reasoning or explanation other than the regurgitation that it has been trained to do, or more accurately they are stochastic parrots.
The issue is fundamental to LLMs and deep learning and researchers still don't know why other than tweaking parameters and fine tuning / re-training it with GPUs still incinerating the planet with no viable alternative to such wasteful methods.
> And I can run inference on my laptop, transferring capabilities to smaller models is a thing, quantization is a thing, optimization is a thing.
We are talking worst case for inference not 'smaller models' which still need to be trained or fine-tuned to exist in the first place and for improvements. For the so-called 'serious' cloud-based LLMs, they need to continuously serve every inference and that requires a fleet of GPUs to serve lots of users as the parameter count of the model gets larger.
> And, DL has been feasible for barely a decade, LLMs for a few years.
Neural networks which are fundamental to LLMs have been around for decades and are still unexplainable black boxes which are incapable of transparent reasoning other than regurgitating responses that it was trained on. Unacceptable and useless for a wave of use-cases that require explainability.
> Are you talking about crypto by any chance?
Crypto already has viable alternatives to its energy wasting problem [2] available today right now. Deep Learning still does not.
[0] https://gizmodo.com/chatgpt-ai-water-185000-gallons-training...
[1] https://www.independent.co.uk/tech/chatgpt-data-centre-water...
[2] https://consensys.net/blog/press-release/ethereum-blockchain...
This type of reasoning is really getting on my nerves lately.
Predicting the future is hard, yeah. But your predictions don't become systematically more accurate just by tackling "boring" and "capitalism" to them.
A lot of technologies can change our societies in emergent, non-boring ways. Climate change is an emergent effect of fossil fuel usage that you wouldn't predict by just looking at 19th century factories and imaging how they would evolve with "boring capitalism". The internet is extremely non-boring and has had profound effects on our society. Nuclear mutually-assured destruction is an extremely non-boring existential threat.
It could be that the dangers of AI come from the military, or the police, or terrorists, or from corporations seeking to replace labor, other conventional threats we already have a reference frame for, yes. Or it could be a completely novel form of disaster, like the equivalent of a school shooter getting AlphaFold 8 to make a novel virus that kills 70% of the population before we even realize there's a pandemic going on. Just because this isn't something we're used to doesn't mean it's fundamentally unlikely to happen.
There can be nuance to our view of new things. It's important we stress our ability as humans to think this way. LLMs are not all or nothing good bad polarities. They can be just another tool in your toolset. And that's fine. No hatred required.
I think he doesn't like the way Unix containers are done, preferring other ways to get them. From that URL:
> Recall that everything really is just a file on Plan 9, unlike Unix. Access to the hardware is provided through normal files, and per-process namespaces do not require special permissions to modify mountpoints. Making a container is thus trivial: just unmount all of the hardware you don’t want the sandboxed program to have access to. Done. You don’t even have to be root.
Where does he express a dislike for the concept of containers? After all, he's working on a new microkernel OS, with a permissions model that seem designed to make containers easy to implement.
All in all it’ll probably end up being a net positive, although it’s a shame that it had to happen in exactly this way. The dawn of the internet was one of hope and optimism, and the potential value that it held was an ocean compared to the eventual drops for which it was mortgaged.
Search is becoming useless. People are becoming inoculated to social media, and its viral effect will slowly wane. After exposure to all this value-less capitalism, people will eventually wise up, because that’s what makes sense, and what will be left for us in terms of value will be the original oldies but goodies that we started with: Wikipedia, YouTube maybe, personal blogs, and commerce.
For many people this has already started. I don’t care about going online so much, and I’m much more interested in my community and what’s happening around me. My friends and I all use social but more as a tool and it’s increasingly becoming more local. When I meet younger people it’s even more extreme. They’re so cynical about tech that I’m convinced they’re going to usher in 3rd spaces and better urban planning and the likes when they grow up.
I’m not saying we’ll abandon tech just that we’ll only engage when there’s a legitimate value proposition. Ultimately that’s why there’s so much nonsense anyways, because it is legitimately hard to create actual value. On the long view though only value survives. I didn’t even mention “AI” but that will probably just hasten this process from a content perspective. In the future it’ll be around but we’ll just endure it, but we’ll also seek out meaningful interactions whenever we can.
In the US, healthcare, including therapy, is quite expensive.
Back in the day I was an indie dev, and that was really hard on me. In my lows I thought I’d seek a therapist, but at $500 per half-hour appointment that felt like a gut punch. I didn’t have any insurance then.
ChatGPT is instantly available and free. Yes, it’s not perfect but it’s better than nothing.
For a large part of the US, mental-health therapists are not really accessible when people most need them.
I'll take it.
> ChatGPT is the new techno-atheist's substitute for God
Not really, no ~~true~~ AInotKillingeveryoneIst says that ChatGPT (or GPT-like) is ASI. Please stop beating this particular strawman.
(I am not sure whether your last line is your actual opinion or sarcastic, given the "no true ..." phrasing, but for what it's worth I think it's unironically correct: no one with any brain thinks ChatGPT is anything much like a superintelligence. There are people who expect AI to become godlike, for better or for worse, but the most they're saying about the likes of ChatGPT in this connection is "progress sure seems to be pretty fast these days".)
Seriously, this goes way back.
In the movie THX 1138 the population talks to an AI Jesus.
Which experts? Yann? Not sure he counts.
We already have incredible levels of spam, phishing, junk SEO content. Created from humans and scripts driven by humans.
It seems like LLMs are destroying traditional search engines (Google) much faster than they are enabling new ones (Bing + GPT).
Are we going to enter a dark age of search where the signal is drowned out by the noise for a few years? SEO blogspam was bad enough when humans had to write it, now it's becoming impossible to avoid.
We can't stop ourselves from 'crappifying' ourselves.
We are driven by local minima/maxima in society that we can't break free from, until the system breaks.
Moloch https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
Past post on 'enshittification' from Cory Doctorow https://news.ycombinator.com/item?id=36611245
Oh come on.
downvoters: is the video game industry unethical for making billions of players expend exponentially higher compute (= emissions) for something as frivolous as slightly better graphics?
Quite literally yes. See: https://en.wikipedia.org/wiki/Boiling_frog
The reason most people don't think so is because human brains are not wired to comprehend the danger that slow buildups of a negative create.
In general, the complete waste developers have created by allowing themselves to build extremely inefficient systems, on the assumption that Moore's law would absorb the cost, has been unethical.
Imagine if the single goal of car manufacturers became to create faster and faster consumer cars with higher and higher CO2 emissions, even though those cars sit in a parking lot 95% of the time and, when in use, are never exercised to their potential.
Even before that: there aren't any efficient methods of training, inference, and fine-tuning available today that are viable alternatives in deep learning, even after a decade of the field's existence.
Drew's point still stands, unchallenged.
[0] https://gizmodo.com/chatgpt-ai-water-185000-gallons-training...
You could have said that for almost all tech improvements in history - electricity, medicine, radio, cars, trains, plumbing etc. Capitalism as in people selling stuff for money is just how things get done. At least to begin with.
The 'trump card' for all the AI negativity is education. Think of the 590 million Indian kids that live in poverty, for example. If they can get access to a computer and the internet, they will have access to on-demand, 24/7, first-class education. They can even ask questions, as they could of a real teacher.
The boon to human productivity and possibility for less suffering in our Capitalist earth can't possibly be outweighed by some boogey-man negatives which will probably never materialize anyway.
If they get close to that, the companies will split, so that they'll never be big enough.
I see a lot of people wanting to blame capitalism, but look at any other system and ultimately they all fail due to human greed. The only way to make capitalism work correctly is with regulation, because once monopolies and collusion are reached, the natural incentives (i.e. the lowest-cost service that delivers what the consumer values) disappear.
> Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots.
Agreed. You will earn less money (relative to cost of living) and tax will increase, yet people will still pretend your quality of life has increased when it hasn't. With many services you now can't reach a human at all. Emails have disappeared, phone lines have disappeared; I now have to waste five minutes speaking to a chat bot that I know cannot solve my issue before it will maybe allow me to type text to what it claims is a human.
> LLMs are a pretty good advance over Markov chains, and stable diffusion can generate images which are only somewhat uncanny with sufficient manipulation of the prompt. Mediocre programmers will use GitHub Copilot to write trivial code and boilerplate for them (trivial code is tautologically uninteresting), and ML will probably remain useful for writing cover letters for you.
In a sense, most neural networks can be modelled as some form of Markov model. What's becoming more obvious is that the structure of these models is super important, and there is still a lot to be learned.
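To make the comparison concrete, here is a minimal sketch (plain Python; the corpus and names are illustrative) of the kind of word-level Markov chain that language models improve upon. Each next word depends only on the current word, with no longer-range structure:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed immediately after it."""
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain: each step depends only on the current word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:  # dead end: word never seen mid-sentence
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

The output is locally plausible word pairs with no global coherence, which is exactly the gap the "pretty good advance" refers to.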
> Self-driving cars might show up Any Day Now™, which is going to be great for sci-fi enthusiasts and technocrats, but much worse in every respect than, say, building more trains.
Cars are a decentralised transport (as much as a transport system can be), whereas a train is a centralised transport system. The internet is also a transport system, but with packets instead of people, and this has had great success with a mixture of centralised and decentralised transport mechanisms.
The biggest problem with trains is that you create a single point of failure and an unnatural monopoly. Your bandwidth is also heavily reduced due to safety considerations (you want to travel fast over long distances, but need to increase the safety margin to do so). Unlike cars or internet packets, you can't divert a train. One can imagine a new protest group "just stop energy" (instead of "just stop oil") quite trivially bringing an entire country to a halt by placing cars on all of the tracks.
> AI companies will continue to generate waste and CO2 emissions at a huge scale as they aggressively scrape all internet content they can find, externalizing costs onto the world’s digital infrastructure, and feed their hoard into GPU farms to generate their models.
Interesting to see that none of the climate activists so far have gone for clear winners like crypto mining or AI training. Instead they would rather keep making the life of the everyday person miserable, as if it isn't miserable enough already.
> You will never trust another product review.
You find that people pay for reviews anyway. Somebody I know gets sent Amazon products to review, and they get to keep the products. The more positive reviews you give, the more you get selected for future reviews. The only way around this is reputation: I find somebody I trust who has reviewed the product. It's why Linus Tech Tips (LTT) and the recent review scandal were important - they have a reputation, and it does inform consumers about expensive computing equipment investments.
"Of course, computers do present a threat of violence, but as Randall points out, it’s not from the computers themselves, but rather from the people that employ them. The US military is testing out computer-controlled drones, which aren’t going to be self-aware but will scale up human errors (or human malice) until innocent people are killed. Computer tools are already being used to set bail and parole conditions – it can put you in jail or keep you there. Police are using computers for facial recognition and “predictive policing”. Of course, all of these models end up discriminating against minorities, depriving them of liberty and often getting them killed.
Computers are defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of computers are going to make the world worse. The computer revolution is here, and I don’t really like it."
The rest of the article.
There is a computer bubble, but the technology is here to stay. Once the bubble pops, the world will be changed by computers. But it will probably be crappier, not better.
Contrary to the doomer’s expectations, the world isn’t going to go down in flames any faster thanks to computers. Contemporary advances in computing aren’t really getting us any closer to AGI (Artificial General Intelligence), and as Randall Munroe pointed out back in 2018:
A panel from the webcomic “xkcd” showing a timeline from now into the distant future, dividing the timeline into the periods between “computers become advanced enough to control unstoppable swarms of robots” and “computers become self-aware and rebel against human control”. The period from self-awareness to the indefinite future is labelled “the part lots of people seem to worry about”; Randall is instead worried about the part between these two epochs.
What will happen to computers is boring old capitalism. Its staying power will come in the form of replacing competent, expensive humans with crappy, cheap robots. Language models are a pretty good advance over Markov chains, and stable diffusion can generate images which are only somewhat uncanny with sufficient manipulation of the prompt. Mediocre programmers will use GitHub Copilot to write trivial code and boilerplate for them (trivial code is tautologically uninteresting), and computers will probably remain useful for writing cover letters for you. Self-driving cars might show up Any Day Now™, which is going to be great for sci-fi enthusiasts and technocrats, but much worse in every respect than, say, building more trains.
The biggest lasting changes from computers will be more like the following:
- A reduction in the labor force for skilled creative work
- The complete elimination of humans in customer-support roles
- More convincing spam and phishing content, more scalable scams
- SEO hacking content farms dominating search results
- Book farms (both eBooks and paper) flooding the market
- Computer-generated content overwhelming social media
- Widespread propaganda and astroturfing, both in politics and advertising
Computer companies will continue to generate waste and CO2 emissions at a huge scale as they aggressively scrape all internet content they can find, externalizing costs onto the world’s digital infrastructure, and feed their hoard into GPU farms to generate their models. They might keep humans in the loop to help with tagging content, seeking out the cheapest markets with the weakest labor laws to build human sweatshops to feed the data monster.
You will never trust another product review. You will never speak to a human being at your ISP again. Vapid, pithy media will fill the digital world around you. Technology built for engagement farms – those computer-edited videos with the grating machine voice you’ve seen on your feeds lately – will be white-labeled and used to push products and ideologies at a massive scale with a minimum cost from social media accounts which are populated with computer content, cultivate an audience, and sold in bulk and in good standing with the Algorithm.
All of these things are already happening and will continue to get worse. The future of media is a soulless, vapid regurgitation of all media that came before the computer epoch, and the fate of all new creative media is to be subsumed into the roiling pile of math.
This will be incredibly profitable for the computer barons, and to secure their investment they are deploying an immense, expensive, world-wide propaganda campaign. To the public, the present-day and potential future capabilities of the technology are played up in breathless promises of ridiculous possibility. In closed-room meetings, much more realistic promises are made of cutting payroll budgets in half.
The propaganda also leans into the mystical sci-fi computer canon, the threat of smart computers with world-ending power, the forbidden allure of a new Manhattan project and all of its consequences, the long-prophesied singularity. The technology is nowhere near this level, a fact well-known by experts and the barons themselves, but the illusion is maintained in the interests of lobbying lawmakers to help the barons erect a moat around their new industry.
Of course, computers do present a threat of violence, but as Randall points out, it’s not from the computers themselves, but rather from the people that employ them. The US military is testing out computer-controlled drones, which aren’t going to be self-aware but will scale up human errors (or human malice) until innocent people are killed. Computer tools are already being used to set bail and parole conditions – it can put you in jail or keep you there. Police are using computers for facial recognition and “predictive policing”. Of course, all of these models end up discriminating against minorities, depriving them of liberty and often getting them killed.
Computers are defined by aggressive capitalism. The hype bubble has been engineered by investors and capitalists dumping money into it, and the returns they expect on that investment are going to come out of your pocket. The singularity is not coming, but the most realistic promises of computers are going to make the world worse. The computer revolution is here, and I don’t really like it.
The reasoning when replacing 'AI' with 'computers' is the same, and is also valid.
This doesn't make a good argument against the article: it supposes that because the claims would be wrong about 'computers', they must also be wrong about 'AI'.
It is actually valid for computers as well as for AI.
Completely unserious comment but I enjoy the Alan Fisher reference, it's funny seeing my online spheres intersect.
EDIT: serious comment.
The whole post is a bit of a doomer one. The thing is, in the maximalist bad world that Drew DeVault poses, human interaction (customer support, human-written articles and opinions) becomes a premium good, meaning the pendulum will swing as people realise the mistake. A lot of people will be hurt or even die in the medium term, which is true, but the world he posits seems to be one that leads to an unstable maximum.