If OpenAI cracks AGI, like they seem to believe they will before 2032, I wonder what "access" MSFT will have to that and what the consequences of that will be.
Some independent researcher will, and it will require ridiculously less compute than the embarrassing things these llm companies produce.
Unfortunately, these current llm companies will still be in the best position to productize the discovery: liquid capital, relationships with more capital, access to hardware.
At that time I hope these resources will be nationalized and handed to the public.
People expect general intelligence to mean an oracle of truth, yet they themselves make mistakes. People expect general intelligence to mean perfect memory, yet they themselves forget. People expect general intelligence to be infinitely moldable and adaptable, yet they themselves are unable to break habits and unable to reason about psychedelic experiences.
If we do not currently have something that can be called AGI, we do not have something that can be called GI.
If you're rich, or even a government, surely you can see why this is worth a LOT. And if you're poor ... well ... I'm sure they'll toss you a few scraps.
The real scary thing is that, fundamentally, democracy, human rights ... make sense because of labor relations. This has at least the potential to end labor relations.
> before 2032: 51%
> before 2033: 61%
I bet we won't even have fully self-driving cars by then lmao. "2 more years guys, just gib me $7 trillion, I promise I'll let you play with skynet when it's finished"
Full blown delusional egomaniacs