Play it and the first thing you hear is an enthusiastic "Today we’re announcing new funding - 40 bi-dollars at a 300 bi-dollar post-money valuation!" Hah.
If you're curious what's possible with <.01% of the funding, check out https://rime.ai/. We train on data recorded in our studio and specifically include a lot of currency in our scripts for this very reason.
[disclaimer: one of the founders of Rime]
Thanks for pointing that out. I never would've pressed play if I hadn't read your comment. That gave me a genuine laugh.
Wasn't aware they'd hit a WAU count this high. Impressive, but then again at this kind of valuation you sure want to be heading towards 9-figure MAU numbers.
Nvidia and AMD were low-end vs. high-end. In the end Nvidia won a total victory by ditching low-margin distractions like building GPUs for consoles and focusing solely on higher-end PC GPUs that could double as accessible research chips.
That's pretty incredible. I recently visited SF for the first time and saw the Salesforce Tower. To think that OpenAI now has a higher valuation than Salesforce is wild.
The name alone is worth at least $100B+.
OpenAI closes $40B funding round, startup now valued at $300B https://news.ycombinator.com/item?id=43490010 (16 comments)
Also worth reading the recent submission on what a tire fire it is being in the cage with altman: https://news.ycombinator.com/item?id=43514717
The amount of funding can't change that.
Is this news, as in was this generally known to be coming, or is this an actual surprise announcement?
Maybe I lack vision. What would it take for OpenAI to join the ranks of trillion dollar companies?
OpenAI would need $15 billion in profit per year for a P/E of 20 with zero risk, or a 10% chance of $150 billion in profit per year.
Alphabet earns about $100B per year. Do you think OpenAI has a 10% chance of being bigger than Google? It doesn't have a moat, but I guess Google doesn't either; it just dominates its market.
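The back-of-envelope math in the parent comment can be sketched out (assuming a P/E of 20 and the reported $300B post-money valuation; the 10% figure is the commenter's guess, not data):

```python
# Rough valuation arithmetic, all figures in billions of dollars.
# Assumes a P/E of 20 and OpenAI's reported $300B post-money valuation.
pe_ratio = 20
valuation = 300  # $B, post-money

# Annual profit needed to justify the valuation outright at that P/E:
required_profit = valuation / pe_ratio
print(required_profit)  # 15.0 -> $15B/year

# Or treat it as a risk-adjusted bet: a 10% chance of much larger profits.
chance_of_success = 0.10
profit_if_it_works = valuation / (pe_ratio * chance_of_success)
print(profit_if_it_works)  # 150.0 -> $150B/year, ~1.5x Alphabet's earnings
```

Either way, the implied earnings are somewhere between a sure $15B/year and a long-shot $150B/year, which is what makes the comparison to Alphabet's ~$100B so pointed.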
Some kind of market advantage at the dominant price point?
Investors are blindly banking on everyone perpetually going to the theater to see the talkies and missing the vision that we'll all have TVs shortly thereafter...
So them pushing that language in their PR/marketing activity is not a surprise, and not really even meant to be scientifically meaningful.
If you define AGI as human-level intelligence in all aspects, there's a way to go yet, but things seem to be getting quite close to me. I'd say the Turing test is basically passed. Stuff like Woz's coffee test, where a robot can go into an unfamiliar house, find the coffee stuff, and make coffee, isn't there yet, but maybe in a couple of years? On that front I'd say DeepMind is much closer than OpenAI.
We're well on our way to building AIs which are competent at many tasks. Assuming an AGI doesn't need to be able to do every task a human can do, and doesn't need to do all of those tasks as well as an expert human, then something which could be called AGI doesn't seem that far off at all.
I remember a time quite recently when the idea of an AI beating a good-faith interpretation of the Turing test seemed very far away. I feel like we're much closer to AGI today than we were to beating the Turing test in the late 00s.
All you need to do is convince the credulous and greedy.
It's all about data ingestion, and the amount of data assimilable by computers is tiny.
Would be fun to watch billionaires pour all their wealth into something that makes up its own mind, goes away, and doesn't give a damn about anything related to living things.
Not naming any books so as not to spoil them for people - just mentioning this isn't my original idea, but one I find interesting.
General intelligence means perceiving opportunities. It means devising solutions for problems nobody else noticed. It means understanding what's possible and what's valuable just from existing, without being told. It means asking questions without prompting, simply for the sake of wondering and learning. It means so many things beyond "if I feed this data input to this function and hit run, can it come up with the correct output matching my expectations?"
Sure, an LLM might pass a series of problem-solving questions, but could it look up and see the motion of stars and realize they implied something about the nature of the world and start to study them, unasked, and deduce the existence of solar systems and galaxies and gravity and all the other things?
I just don't buy it. It's so reductive. They're hoping to skip over all the real understanding and achieve something great without doing the real legwork to understand the true mechanisms of intelligence by just pouring enough processing time into training. It won't work. They're missing integral mechanisms by overfocusing on the one thing they have a handle on. They don't know what they don't know, but worse, they're not trying to find out.
TBF this doesn't imply anything about OpenAI's quest to make a chatbot that gets along with people at parties.
OpenAI had billions. Now it is asking for trillions. It's taking over.