I hadn’t really appreciated before the connection between his chess and game-industry experience and the early reinforcement learning work that put DeepMind on the map: the Atari game-playing demos, AlphaGo, AlphaZero, and so on. There is a fascinating thread there, and it’s certainly a case of the right person, with the right mix of past experience and vision, picking exactly the right problems to focus on to move the technology forward.
The book has its flaws: it’s maybe a little too uncritical of its subject. But that’s almost a given with books of this kind, where the author gets a lot of access.
Also, I don't really care that it's a bit of a cheerleader for DeepMind and Hassabis. Substantive criticism is good, but too often with these kinds of books it feels like an editor told the author that the book needs something negative, and the author inflates an issue to meet the requirement.
Of course, I am not trying to draw a moral equivalence between Steinhardt and Hassabis. But it is worth keeping this in mind when reading anything by Mallaby: do not expect completeness or impartiality.
Out of all the heads of AI orgs out there, Dennis is the best, but the book did him a disservice by painting an unrealistically sunny picture of him as some kind of visionary figure.
Like I already said, bias is inevitable in a book where the writer gets that much access (to the point of interviewing Hassabis in a North London pub every month), but the benefit to readers is that you get a lot more insight into what makes the guy tick than you would from a book written by an outsider. I certainly learned a lot, and that doesn’t mean I’m buying into some cult of tech hero worship.
Wait, 'unrealistically sunny'? You better not be talking about Dennis from It's Always Sunny in Philadelphia, because we're all screwed if so.
Then again, the western AI landscape has become somewhat stale recently. Claude and Gemini may have cute names, but they all pale in comparison to The Golden God.
^ Educational resources for the ignorant who prefer instead to discuss the merits of the term "bro", at length.
Guys, he's just one smart guy who happened to be in the right place at the right moment in the AI technological revolution; he's definitely not the second coming of Christ.
I want to learn to think more like him. What differences between his way of thinking and mine create such a powerful gap? If I could understand those differences, I might also understand how to narrow that gap. And if we could identify the causes of that gap, perhaps humans in general could develop much further.
I truly envy his intelligence. When I read his writings, I can see fragments of knowledge that he cannot hide, and it makes me think: I want to become like that too.
Presumably sophistication ("sophisticated" as in "complex", as opposed to "naïve" as in "less mature").
> If I could understand those differences, I might also understand how to narrow that gap
You narrow that gap through the application of improved mental habits, aided by the leading examples of your good acquaintances (especially through reading). Just discipline yourself to think in a better way.
--
Edit: oh, since I had to come back over a linguistic matter: examine thought and its tools. Language is one, logic of course another, alethics a paramount one... Refine all your tools. Learn to see better and better.
Farnam Street used to be a very good blog too, though it has felt much more 'commercial' over the past 5 years.
Certainly much less powerful than just having rich or pushy parents.
> The CV (coefficient of variation) of e.g. IQ is only 15%. That's in line with other 'natural' human attributes, but not with something like family wealth or background. From a quick Google, the CV of wealth in the US is around 700%? Income is the same order of magnitude.
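To make the comparison concrete, the coefficient of variation is just standard deviation divided by mean. A minimal sketch (the wealth figures below are made up purely to illustrate how a heavy-tailed distribution inflates the CV, not real US data):

```python
import statistics

# IQ is normed to mean 100, standard deviation 15,
# so its CV is 15 / 100 = 15%.
iq_mean, iq_sd = 100, 15
cv_iq = iq_sd / iq_mean
print(f"CV of IQ: {cv_iq:.0%}")  # 15%

# A toy wealth sample with a heavy right tail (hypothetical numbers):
wealth = [10_000, 20_000, 50_000, 100_000, 5_000_000]
cv_wealth = statistics.pstdev(wealth) / statistics.mean(wealth)
print(f"CV of toy wealth sample: {cv_wealth:.0%}")
```

Even in this five-person toy sample, one outlier pushes the CV well past 100%, which is the qualitative point being made about wealth versus IQ.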
You cannot become like that, unfortunately. But hey, you are great already, and you can become even better: your own version.
I don't agree with everything he says, but he's obviously an enormously deeper thinker than the likes of Altman.
The main problem is that under capitalism, private companies' only mission is to serve their shareholders/owners.
Public institutions have the mission to serve the public.
The only real solution is to make AI a public good/utility which should be regulated on an international level and overseen by trustworthy institutions.
There is a precedent for this in nuclear weapons. It did not work. All it takes is a sufficiently resourced nation-state to defect from whatever agreements there are and the whole thing collapses. If the incentives point toward doing so, it is an inevitable outcome.