I’d happily pay up to $2k/month for it if I were left with no choice, but I don’t think it will ever get that expensive, since you can run models locally and get much the same result.
That being said, my outputs are similar-ish in the big picture. When I get something done, I typically don’t have the energy to keep going and push it to 2x or 3x, because the cognitive load is about the same.
However, I do get a lot of time freed up, which is amazing: I’m able to play golf 3-4 times a week, which would have been impossible without AI.
Productive? Yes. Time saved? Yes. Overall outputs? Similar.
There are so many varieties, specialized for different tasks or simply differing in performance.
Maybe we’ll get to a one-size-fits-all model at some point, but for now trying out a few can pay off. It also builds a better sense of the ecosystem as a whole.
For running them: if you have an Nvidia GPU with 8GB of VRAM you can probably run a bunch of them, quantized. It gets a bit esoteric once you get into the different quantization varieties, but generally speaking you should find out which integer and float formats your GPU has optimized support for, then choose the largest quantized model that matches that support and still fits in VRAM. Most often that's what will perform best in both speed and quality, unless you need to run more than one model at a time.
To give you a reference point on model choice, performance, GPU, etc.: one of my systems runs an Nvidia 4080 with 16GB of VRAM. Using Qwen 3 Coder 30B, heavily quantized, I can get about 60 tokens per second.
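As a rough illustration of that "largest quant that fits" rule of thumb, here's a back-of-the-envelope sketch. The bits-per-weight figures and the flat 2 GB overhead allowance are my own guesses, not numbers from any particular runtime; real usage also depends on context length and KV cache size.

    # Back-of-the-envelope VRAM check: weight size plus a rough allowance
    # for KV cache / runtime overhead. All numbers are illustrative guesses.
    def model_vram_gb(params_billion, bits_per_weight, overhead_gb=2.0):
        weight_gb = params_billion * 1e9 * (bits_per_weight / 8) / 1024**3
        return weight_gb + overhead_gb

    vram_gb = 16  # e.g. a 4080

    for name, params_b, bpw in [
        ("30B @ ~3.5 bpw (heavy quant)", 30, 3.5),
        ("30B @ ~5 bpw", 30, 5.0),
        ("8B @ 8 bpw", 8, 8.0),
    ]:
        need = model_vram_gb(params_b, bpw)
        verdict = "fits" if need <= vram_gb else "too big"
        print(f"{name}: ~{need:.1f} GB -> {verdict} on a {vram_gb} GB card")

Which lines up with the 4080 example above: a 30B model only fits on a 16GB card once you quantize fairly aggressively.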
Not having to worry about token limits is surprisingly cognitively freeing. I don’t have to worry about having a perfect prompt.
Plenty of people are still ambitious and still succeeding.
They actually hire more junior developers
"Uhh .. to adopt AI better they're hiring more junior developers!"
You're then almost "locked in" to using more AI on top of it. It may also make it harder to give non-technical staff estimates of how long it'd take to make a change or implement a new feature.
But in general I agree with your point.
This is a poor metric as soon as you reach the scale where you've hired even one additional engineer, where 10% annual employee turnover means more than one person, let alone the scale where a layoff is possible.
It's also only a hope as soon as you have dependencies you don't directly manage, like community libraries.
Reminds me of my last job where the team that pushed React Native into the codebase were the ones providing the metrics for "how well" React Native was going. Ain't no chance they'd ever provide bad numbers.
Because the latter would still be indicative of AI hurting entry-level hiring: it may signal that other firms aren't really willing to hire a full-time entry-level employee whose job may be obsoleted by AI, and that paying for a consultant from IBM is a lower-risk alternative in case AI doesn't pan out.
Source: current (full time) staff consultant at a third party cloud consulting firm and former consultant (full time) at Amazon.
https://www.cohenmilstein.com/case-study/ibm-age-discriminat...
A large number of vets can now choose to reapply for their old job (or a similar one) at a fraction of the price, with their pension/benefits reduced, and the vets in low-cost centers now become the SMEs. In many places in the company they were not taken seriously, due both to internal politics and to quite a bit of performative "output" that either didn't do anything or had to be redone.
Nothing to do with AI - everything to do with Arvind Krishna. One of the reasons the market loves him, but the tech community doesn't necessarily take IBM seriously.
Sounds like business as usual to me, with a little sensationalization.
Why Replacing Developers with AI is Going Horribly Wrong https://m.youtube.com/watch?v=WfjGZCuxl-U&pp=ygUvV2h5IHJlcGx...
A bunch of big companies took big bets on this hype and got burned badly.
LLMs can be a very useful tool and will probably lead to measurable productivity increases in the future, but in their current state they are not capable of replacing most knowledge workers. Remember, even computers as a whole didn't measurably impact the economy for years after their adoption. The real world is a messy place and hard to predict!
Which measure? When folks say something is more "efficient": flying may be more time-efficient, but you trade away other kinds of efficiency. Efficiency, like productivity, needs a second word attached to actually communicate something.
What's more productive? Lines of code (a weak measure)? Features shipped? Bugs fixed? Time saved for the company? Time saved for the client? Shareholder value (lame)?
I don't know the answer, but this year (2026) I'm gonna see if an LLM is better at tax prep than my CPA of 10 years. So that test is my time vs $6k USD.
The most recent BLS figure, for the last quarter of ‘25, was an annualized rate of 5.4%.
The historic annual average is around 2%.
It’s a bit early to draw a conclusion from this. Also, it’s not an absolute measure; it’s GDP per hour worked [1]. So, to cut through any proxy factors or intermediating signals, you’d really need to know how many hours were worked, which I don’t have to hand.
That said, in a general macro sense, assuming hours worked do not decrease, productivity growth and GDP growth are two of the fundamental factors required for real-world wage gains.
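To make the GDP-per-hour-worked relationship concrete, here's a toy calculation; the numbers are made up for illustration, not actual BLS figures.

    # Labor productivity = GDP / hours worked, so productivity growth
    # combines GDP growth with the change in hours worked.
    gdp_growth = 0.030     # +3.0% real GDP (illustrative)
    hours_growth = -0.005  # -0.5% total hours worked (illustrative)

    productivity_growth = (1 + gdp_growth) / (1 + hours_growth) - 1
    print(f"productivity growth ~= {productivity_growth:.1%}")  # ~= +3.5%

Same GDP print with fewer hours worked, and measured productivity rises faster than GDP alone would suggest, which is why you need the hours figure to interpret the headline number.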
If you’re looking for signals in either direction on AI’s influence on the economy, these are numbers to watch, among others. The Federal Reserve, via the Chair’s remarks after each meeting, is (IMO) one of the most convenient places to get very fresh hard numbers combined with cogent analysis, plus usually some Q&A from the business press asking at least some of the questions I’d want to ask.
If you follow these fairly accessible post-meeting remarks, you’ll occasionally see how things in them end up as themes in stories that pop up here weeks or months later.
[1] https://www.oecd.org/en/topics/sub-issues/measuring-producti...
It's like trying to make fusion happen just by spending more money. It helps, but it doesn't fundamentally change the pace of true innovation.
I've been saying for years now that the next AI breakthrough could come from big tech, but it's just as likely to come from a smart kid with a whiteboard.
It comes from the company best equipped with capital and infra.
If some university invents a new approach, one of the nimble hyperscalers / foundation model companies will gobble it up.
This is why capital is being spent. That is the only thing that matters: positioning to take advantage of the adoption curve.
I’d argue the majority use AI this way. The minority of “10x” workers using it to churn through more tasks are the motivated ones actually driving added business value - but let’s be honest, in a soulless enterprise 9-5 those folks are few and far between.
Why are there fewer games launched on Steam this January than last?
The "limits of AI" bit is just smokescreen.
Firing seniors:
> Just a week after his comments, however, IBM announced it would cut thousands of workers by the end of the year as it shifts focus to high-growth software and AI areas. A company spokesperson told Fortune at the time that the round of layoffs would impact a relatively low single-digit percentage of the company’s global workforce, and when combined with new hiring, would leave IBM’s U.S. headcount roughly flat.
New workers will use AI:
> While she admitted that many of the responsibilities that previously defined entry-level jobs can now be automated, IBM has since rewritten its roles across sectors to account for AI fluency. For example, software engineers will spend less time on routine coding—and more on interacting with customers, and HR staffers will work more on intervening with chatbots, rather than having to answer every question.
Obviously they want new workers to use AI but I don't really see anything to suggest they're so successful with AI that they're firing all their seniors and hiring juniors to be meatbags for LLMs.
If my boss asked me a question like this my reply would be "exactly what you told me to build, check jira".
If you want to know if I'm more productive - look at the metrics. Isn't that what you pay Atlassian for? Maybe you could ask their AI...
Individuals make mistakes in air traffic control towers, but as a cumulative outcome it's a scandal if airplanes collide midair. Even in contested airspace.
The current infrastructure never gets there. There is no improvement path from MCP to air traffic control.
It's hard work and patience and math.
The job is essentially changing from "you have to know what to say, and say it" to "make sure the AI says what you know to be right."
https://www.ibm.com/careers/search?field_keyword_18[0]=Entry...
Total: 240
United States: 25
India: 29
Canada: 15
Certainly they didn’t mean 1000 junior positions were cut. So what they really want to say is that they cut senior positions as a way of saving costs/making profit in the age of AI? Totally contrary to what other companies believe? Sounds quite insane to me!
Not because it's wrong, but because it risks initiating the collapse of the AI bubble and the whole "AI is gonna replace all skilled work, any day now, just give us another billion".
Seems like IBM can no longer wait for that day.
They have their Granite family of models, but those are small language models, so surely significantly fewer resources are going into them.
> Some executives and economists argue that younger workers are a better investment for companies in the midst of technological upheaval.
The "learn to code" saga has run its course. Coder is the new factory worker job where I live, a commodity.
E.g., if you cut hiring from, say, 1,000 a year to 10 and are now 'tripling' it to 30, that's still a nothingburger.
Think about the economy and the AI children
Ahh, what could possibly go wrong!
Having had to support many of these systems for sales, automation, or video production pipelines: as soon as you dig under the covers, you realize they're a hot mess of amateur code that _barely_ functions as long as you don't breathe on it too hard.
Software engineering is in an entirely nascent stage. That the industry could even put forward ideas like "move fast and break things" is extreme evidence of this. We know how to handle this challenge of deep technical knowledge interfacing with domain specific knowledge in almost every other industry. Coders were once cowboys, now we're in the Upton Sinclair version of the industry, and soon we'll enter into regular honest professional engineering like every other new technology ultimately has.
It always baffles me when someone wants to only think about the code as if it exists in a vacuum. (Although for junior engineers it’s a bit more acceptable than for senior engineers).