ZeroHedge on twitter said the following:
"According to the market, AI will disrupt everything... except labor, which magically will be just fine after millions are laid off."
It's also worth noting that if you can create a business with an LLM, so can everyone else. And sadly everyone has the same ideas; everyone ends up working on the same things, causing competition to push margins to nothing. There's nothing special about building with LLMs, since anyone with access to the same models and the same basic thought processes can just copy you.
This is basic economics. If everyone had an oil well on their property that was affordable to operate, the price of oil would be more akin to the price of water.
EDIT: Since people are focusing on my water analogy I mean:
If everyone has easy access to the same powerful LLMs, that would drive the value you can contribute to the economy down to next to nothing. For this reason I don't even think powerful and efficient open source models, which is usually the next counter-argument people make, are necessarily a good thing. It strips people of the opportunity for social mobility through meritocratic systems. Just like how your water well isn't going to make you rich or allow you to climb a social ladder, because everyone already has water.
Yeah, this is quite thought provoking. If computer code written by LLMs is a commodity, what new businesses does that enable? What can we do cheaply we couldn't do before?
One obvious answer is we can make a lot more custom stuff. Like, why buy Windows and Office when I can just ask claude to write me my own versions instead? Why run a commodity operating system on kiosks? We can make so many more one-off pieces of software.
The fact software has been so expensive to write over the last few decades has forced software developers to think a lot about how to collaborate. We reuse code as much as we can - in shared libraries, common operating systems & APIs, cloud services (eg AWS) and so on. And these solutions all come with downsides - like supply chain attacks, subscription fees and service outages. LLMs can let every project invent its own tree of dependencies. Which is equal parts great and terrifying.
There's that old line that businesses should "commoditise their complement". If you're Amazon, you want package delivery services to be cheap and competitive. If software is the commodity, what is the bespoke value-added service that can sit on top of all that?
The difference is that 3D printing still requires someone, somewhere to do the mechanical design work. It democratises printing but it doesn't democratise invention. I can't use words to ask a 3d printer to make something. You can't really do that with claude code yet either. But every few months it gets better at this.
The question is: How good will claude get at turning open-ended problem statements into useful software? Right now a skilled human + computer combo is the most efficient way to write a lot of software. Left on its own, claude will make mistakes and suffer from a slow accumulation of bad architectural decisions. But, will that remain the case indefinitely? I'm not convinced.
This pattern has already played out in chess and go. For a few years, a skilled Go player working in collaboration with a go AI could outcompete both computers and humans at go. But that era didn't last. Now computers can play Go at superhuman levels. Our skills are no longer required. I predict programming will follow the same trajectory.
There are already some companies using fine tuned AI models for "red team" infosec audits. Apparently they're already pretty good at finding a lot of creative bugs that humans miss. (And apparently they find an extraordinary number of security bugs in code written by AI models). It seems like a pretty obvious leap to imagine claude code implementing something similar before long. Then claude will be able to do security audits on its own output. Throw that in a reinforcement learning loop, and claude will probably become better at producing secure code than I am.
The walls and plateaus that have been consistently pulled out from "comments of reassurance" have not materialized. If this pace holds for another year and a half, things are going to be very different. And the pipeline is absolutely overflowing with specialized compute coming online by the gigawatt for the foreseeable future.
So far the most accurate predictions in the AI space have been from the most optimistic forecasters.
I'm really tired and exhausted of reading simplistic takes.
Grok is a very capable LLM that can produce decent videos. Why are most garbage? Because NOT EVERYONE HAS THE SKILL NOR THE WILL TO DO IT WELL!
HN is an echo chamber of a very small subgroup. The majority of people can't utilize it and need to have this further dumbed down and specialized.
That's why marketing and conversion rate optimization work; it's not all about the technical stuff, it's about knowing what people need.
For VC-funded companies the game was often not much different; it was just part of the expenses, sometimes a large part, sometimes a smaller one. But eventually you could just buy the software you need, and that didn't guarantee success. There were dramatic failures and outstanding successes, and I wish it weren't so, but most of the time the codebase was not the deciding factor. (Sometimes it was, Airtable, Twitch etc, bless the engineers, but I don't believe AI would have solved those problems.)
Agreed. Honestly, and I hate to use the tired phrase, but some people are literally just built different. Those who'd be entrepreneurs would have been so in any time period with any technology.
1) I don’t disagree with the spirit of your argument
2) 3D printing has higher startup costs than code (you need to buy the damn printer)
3) YOU are making a distinction when it comes to vibe coding from non-tech people. The way these tools are being sold, the way investments are being made, is based on non-domain people developing domain specific taste.
This last part “reasonable” argument ends up serving as a bait and switch, shielding these investments. I might be wrong, but your comment doesn’t indicate that you believe the hype.
They would get amazing amounts done, but no one else could understand the internals because they were so uniquely shaped by the inner nuances of one mind.
Software exists as part of an ecosystem of related software, human communities, companies etc. Software benefits from network effects both at development time and at runtime.
With fully custom software, your users/customers won't be experienced with it. AI won't automatically know all about it, or be able to diagnose errors without detailed inspection. You can't name-drop it. You don't benefit from shared effort by the community/vendors. Support is more difficult.
We are also likely to see "the bar" for what constitutes good software rise over time.
All the big software companies are in a position to direct enormous token flows into their flagship products, and they have every incentive to get really good at scaling that.
Instead software development would just become a tool anybody could use in their own specific domain. For instance if a manager needs some employee scheduling software, they would simply describe their exact needs and have software customized exactly to their needs, with a UI that fits their preference, ready to go in no time, instead of finding some SaaS that probably doesn't fit exactly what they want, learning how to use it, jumping through a million hoops, dealing with updates you don't like, and then paying a perpetual rent on top of all of this.
But your hypothetical manager who needs employee scheduling software isn't paying for the coding, they're paying for someone to _figure out_ their exact needs, and with a UI that fits their preference, ready to go in no time.
I've thought a lot about this and I don't think it'll be the death of SaaS. I don't think it's the death of the software engineer either — but a major transformation of the role, and the death of your career _if you do not adapt_, and fast.
Agentic coding makes software cheap, and will commoditize a large swath of SaaS that exists primarily because software used to be expensive to build and maintain. Low-value SaaS dies. High-value SaaS survives based on domain expertise, integrations, and distribution. Regulations adapt. Internal tools proliferate.
Troubleshooting and fixing the big mess that nobody fully understands when it eventually falls over?
If that's actually the future of humans in software engineering then that sounds like a nightmare career that I want no part of. Just the same as I don't want anything to do with the gigantic mess of COBOL and Java powering legacy systems today.
And I also push back on the idea that llms can't troubleshoot and fix things, and therefore will eventually require humans again. My experience has been the opposite. I've found that llms are even better at troubleshooting and fixing an existing code base than they are at writing greenfield code from scratch.
If most software is just used by me to do a specific task, then being able to make software for me to do that task will become the norm. Following that thought, we are going to see a drastic reduction in SaaS solutions, as many people who were buying a flexible toolbox to use occasionally for one use case just get an LLM to make them the script/software to do that task as and when they need it, without any concern for things like security, longevity, or ease of use by others (for better or for worse).
I guess what I'm circling around is this: if we define engineering as building the complex tools that have to interact with many other systems, persist, and be generally useful and understandable to many people, and we consider that many people don't actually need that complexity for their use of the system (the complexity arises from it needing to serve its purpose at huge scale over time), then maybe there will be less need for engineers. But perhaps that's first and foremost because the problems engineering is required to solve shrink when focused, bespoke solutions to people's problems are available on demand.
As an engineer I have often felt threatened by LLMs and agents of late, but I find that if I reframe it from agents replacing me to agents shifting the type of problems that are even valuable to solve, it feels less threatening for some reason. I'll have to mull it over more.
Google's weird AI browser project is kind of a step in this direction. Instead of starting with a list of programs and services and customizing your work to that workflow, you start with the task you need accomplished and the operating system creates an optimized UI flow specifically for that task.
Luckily my org has a bit of a pushback attitude towards AI systems, but it will only be a matter of time before we have to compete and adapt. It's kind of depressing, and only the strong will survive.
yes, it will enable a lot of custom one-off software but I think people are forgetting the advantages of multiple copied instances, which is what enabled software to be so successful in the first place.
Mass production of the same piece of software creates standards, every word processor uses the same format and displays it the same way.
Every date library you import will calculate two months from now the same way, therefore this is code you don't have to constantly double check in your debug sessions.
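To make that concrete, here's a toy sketch (in Python, with a hypothetical helper name) of exactly the kind of edge case a shared date library settles once for everyone: what "two months from now" means when you start near the end of a month.

```python
import calendar
import datetime

def two_months_from(d):
    """Add two months, clamping to the last valid day of the target month.

    Clamping is just one of several defensible conventions; the point is
    that a shared library picks one, and everyone inherits the same answer.
    """
    month_index = d.month - 1 + 2          # zero-based month, plus two
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return datetime.date(year, month, day)

# Dec 31 + 2 months: "Feb 31" doesn't exist, so clamp to the end of Feb.
print(two_months_from(datetime.date(2024, 12, 31)))  # 2025-02-28
```

If every project vibe-codes its own version of this, some will clamp, some will roll over into March, and some will crash — and you're back to double-checking it in every debug session.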
Linux costs $0. Creating a linux clone compatible with your hardware from the hardware spec sheets with an AI for complicated hardware would cost thousands to millions of dollars in tokens, and you'd end up with something that works worse than linux (or more likely something that doesn't even boot).
Even if the price falls by a thousand fold, why would you spend thousands of dollars on tokens to develop an OS when there's already one you can use?
Even if software becomes cheaper to write, it's not free, and there's a lot of software (especially libraries) out there which is free.
> Even if the price falls by a thousand fold, why would you spend thousands of dollars on tokens to develop an OS when there's already one you can use?
Why do you assume token price will only fall a thousand fold? I'm pretty sure tokens have fallen by more than that in the last few years already - at least if we're speaking about like-for-like intelligence.
I suspect AI token costs will fall exponentially over the next decade or two. Like Dennard scaling / Moore's law has for CPUs over the last 40 years. Especially given the amount of investment being poured into LLMs at the moment. Essentially the entire computing hardware industry is retooling to manufacture AI clusters.
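As a rough back-of-the-envelope (the halving rate is my assumption, not data): if like-for-like token costs halve every year, the cumulative fall compounds surprisingly fast.

```python
import math

HALVING_PERIOD_YEARS = 1.0  # assumed: cost halves every 12 months

def years_to_fall(factor, halving_period=HALVING_PERIOD_YEARS):
    """Years until cumulative cost falls by `factor` under steady halving."""
    return math.log2(factor) * halving_period

print(round(years_to_fall(1_000), 1))      # ~10.0 years for a 1000x drop
print(round(years_to_fall(1_000_000), 1))  # ~19.9 years for a 1,000,000x drop
```

Under that (optimistic, Moore's-law-like) assumption, a thousandfold fall is only the first decade.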
If it costs you $1-$10 in tokens to get the AI to make a bespoke operating system for your embedded hardware, people will absolutely do it. Especially if it frees them up from supply chain attacks. Linux is free, but linux isn't well optimized for embedded systems. I think my electric piano runs linux internally. It takes 10 seconds to boot. Boo to that.
I think the kind of software that everybody needs (think Slack or Jira) is at the greatest risk, as everybody will want to compete in those fields, which will drive margins to 0 (and that's a good thing for customers)! However, I think small businesses pandering to specific user groups will still be viable.
The model owner can just withhold access and build all the businesses themselves.
Financial capital used to need labor capital. It doesn't anymore.
We're entering into scary territory. I would feel much better if this were all open source, but of course it isn't.
The only existential threat to the model owner is everyone being a model owner, and I suspect that's the main reason why all the world's memory supply is sitting in a warehouse, unused.
It would be cool if I can brew hardware at home by getting AI to design and 3D print circuit boards with bespoke software. Alas, we are constrained by physics. At the moment.
Aggregation. Platforms that provide visibility, influence, reach.
People will find work to do, whether that means there's tens of thousands of independent contractors, whether that means people migrate into new fields, or whether that means there's tens of multi-trillion dollar companies that would've had 200k engineers each that now only have 50k each and it's basically a net nothing.
People will be fine. There might be big bumps in the road.
Doom is definitely not certain.
If you go to the many small towns in farm country across the United States, I think the last 100 years will look a lot closer to "doom" than "bumps in the road". Same thing with Detroit when we got foreign cars. Same thing with coal country across Appalachia as we moved away from coal.
A huge source of American political tension comes from the dead industries of yester-year combined with the inability of people to transition and find new respectable work near home within a generation or two. Yes, as we get new technology the world moves on, but it's actually been extremely traumatic for many families and entire towns, for literally multiple generations.
On the one hand, it brings a greater selection, at cheaper prices, delivered faster, to communities.
On the other hand, it steamrolls any competing businesses and extracts money that previously circulated locally (to shareholders instead).
Not sure when you checked.
In the US more food is grown for sure. For example just since 2007 it has grown from $342B to $417B, adjusted for inflation[1].
But employment has shrunk massively, from 14M in 1910 to around 3M now[2] - and 1910 was well after the introduction of tractors (plows not so much... they have been around since antiquity - are mentioned extensively in the old testament Bible for example).
[1] https://fred.stlouisfed.org/series/A2000X1A020NBEA
[2] https://www.nass.usda.gov/Charts_and_Maps/Farm_Labor/fl_frmw...
You get laid off and spend 2-3 years migrating to another job type: what do you think that will do to your life or family? Those just starting out will have their lives put on pause; those 10 years from retirement are stuffed.
We do not have more jobs for horses.
In this context we are the horses.
Yes, that's how technology works in general. It's good and intended.
You can't have baristas (for all but the extremely rich), when 90%+ of people are farmers.
> ZeroHedge on twitter said the following:
Oh, ZeroHedge. I guess we can stop any discussion now..
Btw, globally equality hasn't looked better in probably more than a century by now. Especially in terms of real consumption.
Automation should, obviously, be a good thing, because more is produced with less labor. What does it say about ourselves and our politics that so many people (me included) are afraid of it?
In a sane world, we would realize that, in a post-work world, the owners of the robots have all the power, so the robots should be owned in common. The solution is political.
I certainly don't have much faith in the current political structures, they're uneducated on most subjects they're in charge of and taking the magicians at their word, the magicians have just gotten smarter and don't call it magic anymore.
I would actually call it magic though, just actually real. Imagine explaining to political strategists from 100 years ago the ability to influence politicians remotely, while they sit in a room by themselves: dictating what target politicians see on their phones and feeding them content to steer them in a certain direction. It's almost like a synthetic remote viewing. And if that doesn't work, you also have buckets of cash :|
Globally I think we need better access to quality nutrition and more affordable medicine. Generally cheaper energy.
I tend to automate too much because it's fun, but if I'm being objective, in many cases it has been more work than doing the stuff manually. Because of laziness I tend to way overestimate how much time and effort it would take to do something manually if I just rolled up my sleeves and simply did it.
Whether automating something actually produces more with less labor depends on the nuances of each specific case; it's definitely not a given. People tend to be very biased when judging actual productivity. E.g. is someone who quickly closes tickets but causes a disproportionate amount of production issues, money-losing bugs, or review work for others really that productive in the end?
Because companies want to make MORE money.
Your hypothetical company is now competing with another company that did the opposite, and now they get to market faster, fix bugs faster, add features faster, and respond to changes in the industry faster. Which results in them making more, while your company that employs fewer people is just status quo.
Also, with regards to oil: the consumption of oil increased as it became cheaper. With AI we now have a chance to do projects that simply would have cost way too much 10 years ago.
Not necessarily.
You are assuming that people can consume whatever is put in front of them. Markets get saturated fast. The "changes in the industry" mean nothing.
B) No amount of money will make people buy something that doesn’t add value to or enrich their lives. You still need ideas, for things in markets that have room for those ideas. This is where product design comes in. Despite what many developers think, there are many kinds of designers in this industry and most of them are not the software equivalent of interior decorators. Designing good products is hard, and image generators don’t make that easier.
Not sure about that, at least if we're talking about software. Software is limited by complexity, not the ability to write code. Not sure LLMs manage complexity in software any better than humans do.
This is someone telling you they have never had an idea that surprised them. Or more charitably, they've never been around people whose ideas surprised them. Their entire model of "what gets built" is "the obvious thing that anyone would build given the tools." No concept of taste, aesthetic judgment, problem selection, weird domain collisions, or the simple fact that most genuinely valuable things were built by people whose friends said "why would you do that?"
Yes, some ideas are novel. I would argue that LLMs destroy or atrophy the creative muscle in people, much like how GPS-powered apps destroyed people's mental navigation "muscles".
I would also argue that very few unique, valuable "things" built by people ever had people asking "why would you build that?" Unless we're talking about paradigm-shifting products that are hard for people to imagine, like a vacuum cleaner in the 1800s. But guess what: LLMs aren't going to help you build those things. They can create shitty images, clones of SaaS products that have been built 50x over, and all around encourage people to be mediocre and destroy their creativity as their brains atrophy from use.
I think the disconnect is that you are imagining a world where somehow LLMs are able to one-shot web businesses, but robotics and real-world tech is left untouched. Once LLMs can publish in top math/physics journals with little human assistance, it's a small step to dominating NeurIPS and getting us out of our mini-winter in robotics/RL. We're going to have Skynet or Star Trek, not the current weird situation where poor people can't afford healthy food, but can afford a smartphone.
Star Trek only got a good society after an awful war, so neither of these options are good.
I'd be more trusting of LLM companies if they were all workplace democracies, not really a big fan of the centrally planned monarchies that seem to be most US corporations.
Yes it was. Those industrialists were called "robber barons" for a reason.
So in that sense, yes, it’s the same
If that were true, LLM companies would just use it themselves to make money rather than sell and give away access to the models at a loss.
Competition may encourage companies to keep their labor. For example, in the video game industry, if the competitors of a company start shipping their games to all consoles at once, the company might want to do the same. Or if independent studios start shipping triple A games, a big studio may want to keep their labor to create quintuple A games.
On the other hand, even in an optimistic scenario where labor is still required, the skills required for the jobs might change. And since the AI tools are not mature yet, it is difficult to know which new skills will be useful in ten years from now, and it is even more difficult to start training for those new skills now.
With the help of AI tools, what would a quintuple A game look like? Maybe once we see some companies shipping quintuple A games that have commercial success, we might have some ideas on what new skills could be useful in the video game industry for example.
False. Anyone can learn about index ETFs and still yolo into 3DTE options and promptly get variation margined out of existence.
Discipline and contextual reasoning in humans is not dependent on the tools they are using, and I think the take is completely and definitively wrong.
From all my interactions with C-level people as an engineer, what I learned from their mindset is their primary focus is growing their business - market entry, bringing out new products, new revenue streams.
As an engineer I really love optimizing out current infra, bringing out tools and improved workflows, which many of my colleagues have considered a godsend, but it seems from a C-level perspective, it's just a minor nice-to-have.
While I don't necessarily agree with their world-view, some part of it is undeniable - you can easily build an IT company with very high margins - say 3x revenue/expense ratio, in this case growing the profit is a much more lucrative way of growing the company.
I work for a cash-strapped nonprofit. We have a business idea that can scale up a service we already offer. The new product is going to need coding, possibly a full-scale app. We don't have any capacity to do it in-house and don't have an easy way to find or afford vendor that can work on this somewhat niche product.
I don't have the time to help develop this product but I'm VERY confident an LLM will be able to deliver what we need faster and at a lower cost than a contractor. This will save money we couldn't afford to gamble on an untested product AND potentially create several positions that don't currently exist in our org to support the new product.
IME, you'll just get demoware if you don't have the time and attention to detail to really manage the process.
Its kind of funny to see capitalists brains all over this thread desperately try to make it make sense. It's almost like the system is broken, but that can't possibly be right everybody believes in capitalism, everybody can't be wrong. Wake the fuck up.
I don't know if LLMs would be capable of also doing that job in the future, but my org (a mission-driven non profit) can get very real value from LLMs right now, and it's not a zero-sum value that takes someone's job away.
I expect the software market will change from lots of big kitchen sink included systems and services to many smaller more specialized solutions with small agile teams behind them.
Some engineers that lose their jobs are going to create new businesses and new jobs.
The question in my mind: is there enough feature and software demand out there to keep all of the engineers employed at 3x the productivity? Maybe. Software has been limited on the supply side by how expensive it was to produce. Now it may bump into limits on the demand side instead.
Meanwhile LLMs are better than junior devs, so nobody wants to hire a junior dev. No idea how we get senior devs then. How many people will be scared away from entering this career path?
The job has changed. How many software engineers will leave the career now that the job is more of a technically minded product person and code reviewer?
I can't predict how it all plays out, but I'm along for the ride. Grieving the loss of programming and trying to get used to this new world.
Most companies have "want to do" lists much longer than what actually gets done.
I think the question for many will be is it actually useful to do that. For instance, there's only so much feature-rollout/user-interface churn that users will tolerate for software products. Or, for a non-software company that has had a backlog full of things like "investigate and find a new ERP system", how long will that backlog be able to keep being populated.
Other than a vast consolidation of what parts of the economy are "digital", what is going to have margin other than orphaned capital and "creative" efforts within 10 years?
EDIT: the top-ranked model on OpenRouter based on traffic changes almost weekly now; I can't see how any claim of "stickiness" exists in this space.
Yeah, people are going to have to come to terms with the "idea" equivalent of "there are no unique experiences". We're already seeing the bulk move toward the meta SaaS (Shovels as a Service).
This was true before LLMs. For example, anyone can open a restaurant (or a food truck). That doesn't mean that all restaurants are good or consistent or match what people want. Heck, you could do all of those things but if your prices are too low then you go out of business.
A more specific example with regards to coding:
We had books, courses, YouTube videos, coding boot camps etc but it's estimated that even at the PEAK of developer pay less than 5% of the US adult working population could write even a basic "Hello World" program in any language.
In other words, I'm skeptical of "everyone will be making the same thing" (emphasis on the "everyone").
At my company we have a huge backlog where only the top of that iceberg is pulled every iteration to keep customers happy.
If they fired 90% of the engineers assuming a 10x increase in productivity, they might be able to offer their product at half the price. But if they keep all their engineers they'd get 10x the features and could probably charge twice as much for it.
One possibility may be that we normalize making bigger, more complex things.
In pre-LLM days, if I whipped up an application in something like 8 hours, it would be a pretty safe assumption that someone else could easily copy it. If it took me more like 40 hours, I still have no serious moat, but fewer people would bother spending 40 hours to copy an existing application. If it took me 100 hours, or 200 hours, fewer and fewer people would bother trying to copy it.
Now, with LLMs... what still takes 40+ hours to build?
Why haven't Warners acquired Netflix, then, instead of the other way around? Even though they had access to the same labor market, a human "LLM replacement"?
I think real economics is a little more complex than the "basic economics" referenced in your reply.
This does not negate the possibility that enterprises will double down on replacing everyone with AI, though. But it does negate the reasoning behind the claim and the predictions made.
> If everyone had an oil well on their property that was affordable to operate the price of oil would be more akin to the price of water.
This is not necessarily even true https://en.wikipedia.org/wiki/Jevons_paradox
Will it fundamentally change or eliminate some jobs? I think yes.
But at the same time, no one knows how this will play out in the long run. We certainly shouldn't extrapolate what will happen in the job market or society by treating AI performance as an independent variable.
That is a productivity improvement, which tends to increase employment.
Anyone who lived through 90s OSS UX and MySpace would likely agree that design taste is unevenly distributed throughout the population.
I'm not sure that's true. If LLMs can help researchers implement (not find) new ideas faster, they effectively accelerate the progress of research.
Like many other technologies, LLMs will fail in areas and succeed in others. I agree with your take regarding business ideas, but the story could be different for scientific discovery.
Anecdotally it seems demand for software >> supply of software. So in engineering, I think we’ll see way more software. That’s what happened in the Industrial Revolution. Far more products, multiple orders of magnitude more, were produced.
The Industrial Revolution was deeply disruptive to labour, even whilst creating huge wealth and jobs. Retraining is the real problem. That's what we will see in software. If you can't architect and think well, you'll struggle. Being able to write boilerplate and repetitive low-level code is a thing of the past. But there are jobs - you're going to have to work hard to land them.
Now, if AGI or superintelligence somehow renders all humans obsolete, that is a very different problem but that is also the end of capitalism so will be down to governments to address.
In this way, AI coding is a bummer. I also sincerely miss writing code. Merely reading it (or being a QA and telling Claude about bugs I find) is a shell of what software engineering used to be.
I know with apps especially, all that really matters is how large your user base is, but to spend all that time and money getting the user base, only for them to jump ship next month for an even better vibe-coded solution... eh. I don't have any answers, I just agree that everyone has the same ideas and it's just going to be another form of enshittification. "My AI slop is better than your AI slop".
[1] https://www.walmart.com/ip/Aquafina-Purified-Drinking-Water-...
You found the most expensive 8-pack of water on Walmart. Anyone can put a listing on Walmart; it's the same model as Amazon. There's also a listing right below it for bottles twice the size, and a 32-pack for a dollar less.
It costs $0.001 per gallon out of your tap, and you know this.
"The 2025-26 water use price for commercial customers is now $3.365/kL (or $0.003365 per litre)"
https://www.sawater.com.au/my-account/water-and-sewerage-pri...
My household water comes from a 500 ft well on my property, requiring a submersible pump costing $5000 that gets replaced every 10-15 years or so with a rig and service that cost another $10k. Call it $1000/year... but it also requires a giant water softener, in my case a commercial one that amortizes out to $1000/year, plus a monthly expenditure of $70 for salt (admittedly I have exceptionally hard water).
And of course, I, and your municipality too, don't (usually) pay any royalties to "owners" of water that we extract.
Water is, rightly, expensive, and not even expensive enough.
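Plugging those numbers into a quick per-gallon estimate (the annual usage figure is my assumption, everything else is from above):

```python
# Amortized annual cost of the well described above.
pump_amortized = 1000        # $/yr: pump plus rig/service over 10-15 yrs
softener_amortized = 1000    # $/yr: commercial water softener
salt = 70 * 12               # $/yr: monthly salt purchases

annual_cost = pump_amortized + softener_amortized + salt  # $2840/yr

gallons_per_year = 100_000   # assumed: rough typical household usage
cost_per_gallon = annual_cost / gallons_per_year
print(f"${cost_per_gallon:.3f}/gal")  # ~$0.028/gal
```

That's an order of magnitude above the municipal tap prices quoted elsewhere in this thread, which is the point: having your own well doesn't make water cheap.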
(1) Combined water+ sewer fees. Sewer charges are based on your water consumption so it rolls into the per-gallon effective price. https://www.pgh2o.com/residential-commercial-customers/rates
If we can flatten the social hierarchy to reduce the need for social mobility then that kills two birds with one stone.
If the world needs 1/3 of the labor to sustain the ruling class's desires, they will try to reduce the number of extra humans. I'm certain of this.
My guess is during this "2nd industrial revolution" they will make young men so poor through the alienation of their labor that they beg to fight in a war. In that process they will get young men (and women) to secure resources for the ruling class and purge themselves in the process.
I guess I agree but I want to add to your point is that, this tech is inexpensive.
And unfortunately, not in the sense where it is related to the real value of a product or need for it, but as a market condition.
But, to me, it seems that it will be more expensive anyway.
I see these possibilities: 1. A few companies own all the technology. They cut out the middlemen, they have all kinds of super-apps, and they will try to force everyone into that ecosystem.
2. Or, they succeed in the substitution; they keep the man in the middle but control who will have access and how much will be charged. The goal in this case will be to make it more expensive to kickstart an engineering team than to use the product, and of course their goal will be to reach that threshold.
3. They completely fail: these businesses plateau and they can't create better conditions to subvert the current balance and take the market. This could happen if a big financial risk materializes, or if they get stuck without big advancements for a long time and investors start to demand their money back.
I think we are going down this 3rd route. We are seeing early signals of a nonsense marketing strategy selling things that are not there yet. We see all of them silencing ethics and transparency teams. The truth is that they started to stack models together and sell them as one thing, which is much different from what they sold just a year and a half ago. I am not saying this couldn't be because this is really the best model, but because they couldn't scale it up even more now, even 18 months after the previous generation of giant model releases.
The truth is that they probably need to start capitalising now because the crisis they are causing themselves might hurt them bad.
We saw this decline with every bubble popping. They need to oversell so they can shift the risk from being on top of their money to being on top of someone else's money, and this potential is resold multiple times as investors realise the improvements are not coming. Until there are only speculators dealing with this sort of business, which will ultimately make those companies take unpopular, stupid decisions, like it happened with Bitcoin, superhero movies, NFTs, and maybe much more if I could think about it.
"Meritocratic climbing on the social ladder", I'm sorry but what are you on about?? As if that was the meaning in life? As if that was even a goal in itself?
If there's one thing we need to learn in the age of AI, it's not to confuse the means to an end with the end itself!