1. From Logic Trees to Latent Spaces
Symbolic AI relies on explicit rules (if X, then Y), while neural networks encode information in latent spaces—continuous, high-dimensional structures that capture relationships implicitly.
Challenge:
How do we shape latent spaces so game-like structures emerge, enabling neural networks to interact with information as if playing a game?
Instead of hand-coded strategies, we must design architectures that naturally develop game-like reasoning through optimization.
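To make the contrast concrete, here is a toy sketch (the items, the edibility task, and the hand-picked embedding values are all invented for illustration): an explicit rule table fails silently on anything unlisted, while even a crude latent space generalizes by geometric proximity.

```python
from math import sqrt

# Symbolic route: an explicit, hand-written rule table (if X, then Y).
def symbolic_edible(item):
    rules = {"apple": True, "rock": False}
    return rules.get(item, False)        # unseen items simply fall through

# Latent route: toy 3-d embeddings, hand-picked here purely for
# illustration; a real network would learn them. Relationships are
# implicit in the geometry, so nearby vectors behave alike.
embeddings = {
    "apple": [0.9, 0.1, 0.0],
    "pear":  [0.8, 0.2, 0.1],            # close to "apple" in latent space
    "rock":  [0.0, 0.1, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def latent_edible(item):
    # No rule for "pear" exists; similarity to a known example decides.
    return cosine(embeddings[item], embeddings["apple"]) > 0.5

print(symbolic_edible("pear"), latent_edible("pear"))
```

The point is not that cosine similarity is intelligence, but that the latent representation carries structure the rule table never encoded.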
2. From Rule-Based Games to Reinforcement Learning (RL)
Games involve feedback, prediction, and strategy formation, aligning with reinforcement learning (RL):
Predicting outcomes = simulating moves.
Refining strategies = adapting through trial and error.
Developing world models = optimizing future choices.
Challenge:
Can we generalize RL structures beyond reward-driven environments, making learning game-like even outside traditional RL frameworks?
Self-play, curiosity-driven exploration, and intrinsic motivation push RL beyond explicit games into general cognition.
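A minimal sketch of this idea, with an invented six-state chain world and a count-based novelty bonus standing in for curiosity-driven exploration: the only extrinsic reward sits at the far end, yet the intrinsic bonus keeps rewarding the agent for visiting states it has rarely seen until the goal is discovered.

```python
import random

random.seed(0)
N, GOAL = 6, 5                      # states 0..5 on a line, goal at 5

def step(s, a):                     # a is -1 (left) or +1 (right)
    s2 = max(0, min(N - 1, s + a))
    return s2, (1.0 if s2 == GOAL else 0.0)

Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}
visits = {s: 0 for s in range(N)}

for _ in range(300):
    s = random.randrange(N)         # episodes start anywhere
    for _ in range(20):
        a = random.choice((-1, 1)) if random.random() < 0.3 \
            else max((-1, 1), key=lambda m: Q[(s, m)])
        s2, r_ext = step(s, a)
        visits[s2] += 1
        # Curiosity: count-based bonus, largest for rarely seen states,
        # so exploration pays off even where r_ext is always zero.
        r = r_ext + 0.1 / visits[s2] ** 0.5
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, -1)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2

policy = [max((-1, 1), key=lambda m: Q[(s, m)]) for s in range(N)]
print(policy)                       # greedy policy heads toward the goal
```

The curiosity term here is the crudest possible stand-in; the structural point is that the learning loop remains game-like even before any extrinsic "win" has been observed.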
3. From Decision Trees to Continuous Prediction Loops
Symbolic AI treats cognition as discrete steps; neural networks continuously predict and update expectations. This mirrors predictive processing, where:
The brain (or AI) anticipates sensory inputs.
Errors update internal models, much like refining a game strategy.
Challenge:
Can we structure AI cognition around predictive loops rather than strict reward maximization? This aligns with active inference, where minimizing prediction error becomes the "game" itself.
4. From Hardcoded Game Rules to Emergent Learning
Symbolic AI relies on predefined mechanics (e.g., chess rules), while neural networks thrive on unstructured data. A game-like AI must:
Discover meaningful rules autonomously.
Learn exploratory behaviors without explicit incentives.
Generalize strategies across domains.
Challenge:
Can AI construct its own "games" from raw data, learning useful representations without predefined objectives? This requires self-supervised learning and meta-learning—teaching AI how to learn.
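One way to sketch this is next-symbol prediction on raw, unlabeled data (the stream below is invented): nobody supplies an objective, so the learner sets its own game, guessing each hidden next symbol and scoring itself, with its "rulebook" of bigram counts built purely from self-generated feedback.

```python
from collections import Counter, defaultdict

text = "abcabcabcabcabcabc"             # raw toy stream, no labels supplied

model = defaultdict(Counter)
correct = total = 0
for prev, nxt in zip(text, text[1:]):
    if model[prev]:                      # can the learner guess yet?
        guess = model[prev].most_common(1)[0][0]
        correct += (guess == nxt)
        total += 1
    model[prev][nxt] += 1                # refine the self-made rulebook

print(correct, total)                    # the self-set game is soon mastered
```

This is self-supervision in miniature: the "game" (predict what was hidden) is constructed from the data itself, which is the same move large language models scale up.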
5. From External Tasks to a Game-Like Cognitive Framework
Traditional AI sees games as external challenges. But human cognition is game-like by nature, constantly refining strategies.
A truly game-like AI must:
Interact with all data as an adaptive challenge.
Set its own challenges, much like a player defining objectives.
Develop game-theoretic relationships with its environment.
Challenge:
Can AI treat all interactions—perception, memory, learning—as internal "games" where it dynamically sets rules and strategies?
This suggests that game-like cognition should be a fundamental AI principle, not just an application.
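As a deliberately minimal illustration of self-set challenges (the "learnability" rule and all numbers are invented), an agent can run its own curriculum: propose the next challenge just beyond current competence, practice it, and move the frontier.

```python
# A toy automatic-curriculum sketch: the agent invents its own next
# game rather than receiving one from outside.

def practice(skill, challenge):
    # Invented assumption: a challenge is learnable only if it sits at
    # most one step beyond what the agent can already do.
    return challenge <= skill + 1

skill = 0
curriculum = []
for _ in range(5):
    challenge = skill + 1                # self-set objective at the frontier
    if practice(skill, challenge):
        skill = challenge                # mastering it raises competence
        curriculum.append(challenge)

print(curriculum)
```

The substance here is the control flow, not the arithmetic: the objective is generated inside the loop, which is exactly what distinguishes this framing from training on a pre-set game.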
Conclusion: Can AI "Play" Its Way to Intelligence?
If cognition is fundamentally game-like, AI must go beyond playing games—it must turn reality into an evolving, self-directed learning process.
Instead of being trained to win pre-set games, AI should be designed to play its way to understanding, setting its own objectives and iterating like a skilled player refining strategies.
De-externalizing Wittgenstein’s Game Concept
Instead of treating "game-ness" as something external (rules, competition, goals, play, etc.), we look at it as a way of organizing experience within cognition. That is, the brain doesn’t just passively receive Russell’s sense data; it structures, categorizes, and interacts with them in game-like ways to make sense of the world.
Game-like Properties of Cognitive Processing
What does the brain do with sense data that makes it "game-like"? We could break this down into several mechanisms:
Pattern Recognition as Rule Formation
The brain doesn’t just register data—it infers rules from repeated exposure to stimuli. E.g., a child sees an apple roll off a table and expects another apple to do the same. These inferred rules are flexible, much like the rules of games—sometimes explicit, sometimes implicit.
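This can be sketched as count-based rule formation (the observations are invented): repeated exposure hardens an expectation, while Laplace smoothing keeps the rule revisable rather than absolute, matching the flexibility described above.

```python
from collections import Counter

# Toy rule formation: nine falls and one float, all invented data.
observations = ["falls"] * 9 + ["floats"]
counts = Counter(observations)

def expectation(outcome, alpha=1.0):
    # Laplace smoothing: the inferred rule stays strong but never
    # absolute, so a surprising observation can still revise it.
    total = sum(counts.values())
    return (counts[outcome] + alpha) / (total + alpha * len(counts))

print(round(expectation("falls"), 2))    # a confident, revisable rule
```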
Categorization as Game Classification
In games, we classify things into roles: player vs. opponent, goal vs. obstacle, tool vs. useless item. In cognition, the brain does the same: safe vs. dangerous, edible vs. inedible, self vs. other. This means that categorization itself is a kind of game, where the brain tests and refines its "rulebook" based on interactions with the world.
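A toy version of this refinement loop, with invented one-dimensional features: items are classified by nearest prototype, and feedback from the world moves the prototypes, rewriting the rulebook mid-game.

```python
# Categorization as a game: classify, lose a round, revise the rulebook.
prototypes = {"safe": 2.0, "dangerous": 8.0}   # invented 1-d features

def classify(x):
    return min(prototypes, key=lambda c: abs(x - prototypes[c]))

def feedback(x, true_label, lr=0.5):
    # Move the category's prototype toward the misjudged example.
    prototypes[true_label] += lr * (x - prototypes[true_label])

first = classify(4.0)          # initially lands in "safe"
feedback(4.0, "dangerous")     # the world disagrees: refine the rule
feedback(4.5, "dangerous")
second = classify(4.0)         # the revised rulebook now says otherwise
print(first, second)
```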
Predictions as Gameplay Moves
The brain simulates outcomes based on inputs. Much like in a game where we imagine possible moves before making them, the brain predicts the consequences of action (or inaction). This is fundamental to decision-making—choosing "moves" in real life.
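A minimal sketch of such simulated moves (the state space and the goal of reaching 0 are invented): each candidate action is played out against an internal model first, and only the best predicted future is acted on.

```python
# One-step lookahead: imagine each move's consequence before committing.
def simulate(state, move):
    return state + move                  # internal model of consequences

def choose(state, moves=(-2, -1, +1)):
    # Score each imagined future by distance to the goal state 0,
    # then pick the move whose predicted outcome looks best.
    return min(moves, key=lambda m: abs(simulate(state, m)))

print(choose(5), choose(1))
```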
Feedback Loops as Game Iteration
Games involve feedback: winning, losing, scoring points, failing, retrying. Cognition operates similarly: neurons fire in response to stimuli, predictions are tested, and errors refine the system. Learning is, in a sense, playing the game of adjusting to reality with better strategies.
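The feedback loop above can be sketched as a delta-rule update (the target and learning rate are invented): each round's error directly shapes the next round's prediction, and the error shrinks with play.

```python
# Error-driven iteration: a single prediction is nudged by its own
# mistakes, the cognitive analogue of retrying with a better strategy.
target = 0.8            # the regularity hidden in the world
prediction = 0.0
learning_rate = 0.5

errors = []
for _ in range(10):
    error = target - prediction          # how wrong was the last "move"?
    errors.append(abs(error))
    prediction += learning_rate * error  # delta-rule update

print(round(prediction, 3))              # close to the target after play
```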
Memory as Game Replay & Strategy Storage
Memory is not a passive recording device but a storehouse of past “games” played with the world. It allows us to "replay" strategies, refine them, and use them in similar but novel contexts.
The Practical Cognitive Implication
If "game" means doing something with information that enables us to interact with the world practically, then cognition itself is fundamentally game-like at every level. It does not merely receive sense data—it plays with it, structures it into meaningful units, and refines its internal rules through experience.
This perspective aligns well with predictive processing models of cognition (Friston, Clark), which suggest the brain is an active "predictive engine" rather than a passive data-processing machine. It also resonates with Piaget’s constructivist view that knowledge itself emerges through active engagement with the world—much like a player learns a game by playing.
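A toy loop in that spirit (all numbers invented, and only a loose analogy to Friston-style active inference): prediction error is cancelled partly by revising the belief and partly by acting on the world, until the two meet.

```python
# Two ways to cancel prediction error at once: update the internal
# model (perception) or act so observations match the prediction (action).
world = 10.0            # the true state of affairs
belief = 0.0            # the internal model's prediction

for _ in range(20):
    error = world - belief
    belief += 0.3 * error                # perception: revise the model
    world  -= 0.1 * error                # action: nudge the world

print(round(belief, 2), round(world, 2))  # the two meet in the middle
```

The split between the two update rates is arbitrary here; the structural point is that the "predictive engine" is not only a spectator of its inputs but also a player that can change them.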
Further Implications
Could we design better cognitive models by thinking of perception, memory, and learning explicitly as game mechanics? Can we structure AI cognition around game-like principles rather than strict logic trees? Does this mean that play itself is not an addition to cognition but its fundamental mode of operation?