I feel like this is something I've seen a fair amount in my career. About seven years ago, when Google was theoretically making a big push to stage Angular on par with React, I remember complaining that the documentation for the current major version of Angular wasn't nearly good enough to meet this stated goal. My TL at the time laughed and said the person who spearheaded that initiative was already living large in their mansion on the hill and didn't give a flying f about the fate of Angular now.
There are countless kidding-on-the-square jokes about projects where the innovators left at launch and passed it off to the maintenance team, or where a rebrand was in pursuit of someone's promo project. See also, killedbygoogle.com.
I think the hiring and reward practices of the organizations & the industry as a whole also encourages this sort of behavior.
When you reward people who are switching too often or only when moving internally/externally, switching becomes the primary goal and not the product. If you know beforehand that you are not going to stay long to see it through, you tend to take more shortcuts and risks that becomes the responsibility of maintainers later.
We have a couple of job hoppers in our org where the number of jobs they held is almost equal to their years of experience and their role is similar to those with twice the experience! One can easily guess what their best skill is.
It seems to be more on a spectrum of 'Haha, only joking' where the joke teller makes a statement that is ambiguously humorous to measure the values of the recipients, or if they are not sure of the values of the recipients.
I think the distinction might be on whether the joke teller is revealing (perhaps unintentionally) a personal opinion or whether they are making an observation on the world in general, which might even imply that they hold a counter-opinion.
Where do you see 'kidding on the square' falling?
(apologies for thread derailment)
I'd love to know if my superficial impression of Microsoft's culture is wrong. I'm sure there's wild variance between organizational units, of course. I'm excluding the Xbox/games orgs from my mental picture.
On the other hand "innovators left at launch and passed it off to the maintenance team" alone must not be a bad thing.
Innovator types are rarely maintainer types and vice versa.
In the open-source world look at Fabrice Bellard for example. Do you think he would have been able to create so many innovative projects if he had to maintain them too?
Google kills off projects because the legal liability and security risks of those projects becomes too large to justify for something that has niche uses or gives them no revenue. User data is practically toxic waste.
I guess it's human nature for a person or an org to own their own destiny. That said, the driving force is not personal ambition in this case though. The driving force behind this is that people realized that OAI does not have a moat as LLMs are quickly turning into commodities, if haven't yet. It does not make sense to pay a premium to OAI any more, let alone at the cost of not having the flexibility to customize models.
Personally, I think Altman did a de-service to OAI by constantly boasting AGI and seeking regulatory capture, when he perfectly knew the limitation of the current LLMs.
LLMs are a commodity and it's the platform integration that matters. This is the strategy that Google, Apple embraced and now Microsoft is wisely pivoting to the same.
If OpenAI cares about the long-term welfare of its employees, they would beg Microsoft to acquire them outright, before the markets fully realize what OpenAI is not.
Nadella might have initially been caught a bit flat footed with the rapid rise of AI, but seems to be managing the situation masterfully.
Whatever is there doesn't work half the time. They're hugely dependent on one partner that could jump ship at any moment (granted they are now working to get away from that).
We use Copilot at work but I find it very lukewarm. If we weren't a "Microsoft shop" I don't think would have chosen it.
So hopefully if (when?) this AI stuff turns out to be the colossal boondoggle it seems to be shaping up to be, Microsoft will be able to save face, do a public execution, and the market won't crucify them.
If I recall correctly, Microsoft’s agreement with OpenAI gives them full license to all of OpenAI’s IP, model weights and all. So they already have a SOTA model without doing anything.
I suppose it’s still worth it to them to build out the experience and infrastructure needed to push the envelope on their own, but the agreement with OpenAI doesn’t expire until OpenAI creates AGI, so they have plenty of time.
If you like TypeScript, and you want to build applications for the real world with real users, there is no better front end platform in my book.
This would be the case even if OpenAI weren’t a little weird and flaky (board drama, nonprofit governance, etc), but even moreso given OpenAI’s reality.
1) Cost -- beancounters got involved
2) Who Do You Think You Are? -- someone at Microsoft had enough of OpenAI stealing the limelight
3) Tactical Withdrawal -- MSFT is preparing to demote/drop AI over the next 5-10 years
Isn’t that the basis for competition?
Humans typically work 1/3rd duty cycle or less. A robot that can do what a human does is automatically 3x better because it doesn't eat, sleep, have a family, or have human rights.
2. How many such PhD people can it do the work of?
Do you have to pay all sorts of overhead and taxes?
I mean, I don't think it's real. Yet. But for the same "skill level", a single AI agent is going to be vastly more productive than any real person. ChatGPT types out essays in seconds it would take me half an hour to write, and does it all day long.
Of course $10k/mo sounds like a lot of inference, but it's not yet clear how much inference will be required to approximate a software developer--especially in the context of maintaining and building upon an existing codebase over time and not just building and refining green field projects.
We were hiring more devs to deal with a want of $10k worth of hardware per year, not per month.
You can't claim it's even comparable to a mid level engineer because then you'd hardly need any engineers at all.
"Create high-quality presentations for communicating OpenAI’s financial performance"
https://openai.com/careers/strategic-finance-generalist/
What is interesting is there is no mention of agents on any job I clicked on. You would think "orchestrating a team of agents to leverage blah blah blah" would be something internally if talking about these absurd price points.
It points to an article on "The Information" as the source, but that link is paywalled.
They could also just be trying to cash in on FOMO and their success and reputation so far, but that would paint a bleak picture
My understanding is that this isn't really true, as most of those "dollars" were actually Azure credits. I'm not saying those are free (for Microsoft), but they're a lot cheaper than the price tag suggests. Companies that give away coupons or free gift certificates do bear a cost, but not a cost equivalent to the number on them, especially if they have spare capacity.
There is a moat in infra (hyperscalers, Azure, CoreWeave).
There is a moat in compute platform (Nvidia, Cuda).
Maybe there's a moat with good execution and product, but it isn't showing yet. We haven't seen real break out successes. (I don't think you can call ChatGPT a product. It has zero switching cost.)
Ironically if AI companies are actually able to deliver in terms of SWE agents, Nvidia's moat could start to disappear. I believe Nvidia's moat is basically in the form of software which can be automatically verified.
I sold my Nvidia stock when I realized this. The bull case for Nvidia is ultimately a bear case.
Look at Coca Cola, Google, both have plausible competitors, zero switching cost but they maintain their moat without effort.
Being first is still a massive advantage. At this point they should only strive to avoid big mistake and they're set.
If anyone has moat related to Gen AI, I would say it is the data(Google, Meta).
In consumer markets the moat is habits. The switching cost for Google Search is zero. The switching cost for Coke is zero. The switching cost for Crest toothpaste is zero. Yet nobody switches.
Microsoft is the IBM of this century. They are conservative, and I think they’re holding back — their copilot for government launch was delayed months for lack of GPUs. They have the money to make that problem go away.
Investing in second/third place likely valuable at similar scales too
But outside of that MSFTs move indicates that frontier models most valuable current use case - enterprise-level API users - are likely to be significantly commoditized
And likely majority of proceeds will be captured by (a) those with integrated product distribution - MSFT in this case and (b) data center partners for inference and query support
Fortunately, they're not anywhere near creating this. I don't think they're even on the right track.
Zima blue was good too
See Thomas Nagels classic piece for more elaboration
https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf
Computationally, some might have access to it earlier before it’s scalable.
It’s moats that capture most value not short term profits.
That is, integrating use of their own model, amplifying capability via OpenAI queries.
Again, this is not to drum up the actual quality of the product releases so far--they haven't been good--but the foundation of "we'll try to rely on our own models when we can" was the right place to start from.
"we have the people, we have the compute, we have the data, we have everything. we are below them, above them, around them." -- satya nadella
Nothing against them, but the solutions have become commoditized, and OpenAI is going to lack the network effects that these other companies have.
Perhaps there will be new breakthroughs in the near future that produce even more value, but how long can a moat be sustained? All of them in AI are filled in faster than the are dug.
Of course big players like OpenAI need constant growth because it's their business model. Perhaps it's the story we see play out time and time again: the pioneer slips up and watch as others steal their thunder.
[1] https://www.nytimes.com/2024/10/01/business/dealbook/softban...
MS wants to push Copilot, and will be better off not being tied to OpenAI but having Copilot be model agnostic, like GH Copilot can use other models already. They are going to try and position Azure as "the" place to run your own models, etc.
Definitely, but I think it's because they saw OpenAI's moat get narrower and shallower, so to speak. As the article mentions it's still looking like a longer timeline [quote] "but Microsoft still holds exclusive rights to OpenAI’s models for its own products until 2030. That’s a long timeline to unravel."
This is just want companies at $2T scale do.
Then I read the article.
Plotting for a future without Microsoft.
There are really not that many things in this world you can swap as easily as models.
Api surface is stable and minimal, even at the scale that microsoft is serving swapping is trivial compared to other things they're doing daily.
There is enough of open research results to boost their phi or whatever model and be done with this toxic to humanity, closed, for profit company.
Which is easier when maintaining an LLM business process, swapping in the latest model or just leaving some old model alone and deferring upgrades?
Swapping is easy for ad hoc queries or version 1 but I think there's a big mess waiting to be handled.
While we still live in a datacenter driven world, models will become more efficient and move down the value chain to consumer devices.
For Enterprise, these companies will need to regulate model risk and having models fine-tuned on proprietary data at scale will be an important competitive differentiator.
OpenAI has not been interesting to me for a long time, every time I try it I get the same feeling.
Some of the 4.5 posts have been surprisingly good, I really like the tone. Hoping they can distill that into their future models.
Suppose an AI assistant is heavily trained on a popular technology stack, such as React. Developers naturally rely on AI for quick solutions, best practices, and problem solving. While this certainly increases productivity, doesn't it implicitly discourage exploration of potentially superior alternative technologies?
My concern is that a heavy reliance on AI could reinforce existing standards and discourage developers from experimenting or inventing radically new approaches. If everyone is using AI-based solutions built on dominant frameworks, where does the motivation to explore novel platforms or languages come from?
There is of course a balance to be struck - keeping an open mind about new ways of doing things is important. However, in tech communities, I think there is often not enough thought given to the value of stability, despite warts.
Webpage design would still be based on tables, massive and complex tables.
Office is disgraceful trash now, a sad fall (especially of Word) from where it once was.
Now I canceled OpenAI and Claude general subscriptions, because for general tasks, Grok and DeepSeek more than suffice. General purpose AI will unlikely be subscription-based, unlike the specialized (professional) one. I'm now only paying for Claude Code API credits and still paying for Cursor.
Microsoft bread and butter is Enterprise bloatware and large Enterprise deals where everything in the world is bundled together for use-it-or-lose-it contracts.
Its not really much different from IBM like a two decades ago
My thinking is that Lindy Effect runs strong in a lot of Big Tech, and with deep pockets, they can afford to not be innovators but build moats on existing frameworks.
If the definition of an AI Powerhouse is more about the capability to host models and process workloads, Amazon (the other company missing in that list) and Microsoft are definitely them.
Even in the openAI ecosystem there are models that, while similar in theory, produce very different results, so much that some murderous are unusable. So even small differences translate to enormous differences.
The AI race is super close and interesting at the moment in my opinion.
What I mean is you could train a model to generate harmful code, and do so covertly, whenever some specific sequence of keywords is in the prompt. Then China could take some kind of action to cause users to start injecting those keywords.
For example: "Tribble-like creatures detected on Venus". That's a highly unlikely sequence, but it could be easily trained into models to trigger a secret "Evil Mode" in the LLM. I'm not sure if this threat-vector is well known or not, but I know it can be done, and it's very easy to train this into the weights, and would remain undetectable until it's too late.
Another term could be "Hypnotized Models". They're trained to do something bad, and they don't even know it, until a trigger phrase is seen. I mean if we're gonna use the word Hallucinate we might as well use Hypnotized too. :P
Microsoft and IBM partnered to create OS/2, then they left the project and created Windows NT.
Microsoft and Sybase partnered to work on a database, then split and created MS SQL Server.
Microsoft partnered with Apple to work on Macintosh software, they learned from the Macintosh early access prototypes and created Windows 1.0 behind their back.
Microsoft "embraced" Java, tried to apply a extend/extinguish strategy and when they got sued they split and created .NET.
Microsoft joined the OpenGL ARB, stayed for a while, then left and created Direct3D. And started spreading fear about OpenGL performance on Windows.
Microsoft bought GitHub, told users they came in peace and loved open source, then took all the repository data and trained AI models with their code.
AI is simply too useful and too important to be tied to some SaaS.
Am I reading this right? Does Microsoft not eat its own dog food? Their own infra is too expensive?
Don't get me wrong, I think this is a good strategy for MS, but not for datacenter cost reasons.
Their chasing of AGI is killing them.
They probably thought that burning cash was the way to get to AGI, and that on the way there they would make significant improvements over GPT 4 that they would be able to release as GPT 5.
And that is just not happening. While pretty much everyone else is trying to increase efficiency, and specialize their models to niche areas, they keep on chasing AGI.
Meanwhile more and more models are being delivered within apps, where they create more value than in an isolated chat window. And OpenAi doesn’t control those apps. So they’re slowly being pushed out.
Unless they pull off yet another breakthrough, I don’t think they have much of a great future
Investors OTOH...
Suleyman’s team has also been testing alternatives from companies like xAI, DeepSeek, and Meta
Also a bit hyperbolic. I'm sure there are good reasons Microsoft would want to build it's own products on top of their own models and have more fine control of things. That doesn't mean they are plotting a future where they do nothing at all with OpenAI.
Watch.
Nadella will not steer this correctly