The correct number of legal Go positions is more than twice as large, or to be exact [1]:
208168199381979984699478633344862770286522453884530548425639456820927419612738015378525648451698519643907259916015628128546089888314427129715319317557736620397247064840935
That is indeed far larger than the ~4.8 x 10^44 legal chess positions [2], a figure which falls between the number of legal 9x9 and 10x10 Go positions.
For reference, the number of atoms in the observable universe is estimated at between a mere 10^80 and 10^83.
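To put those magnitudes side by side, here is a quick sketch (the Go count is the exact figure quoted above; the chess and atom counts are the rough estimates mentioned, not exact values):

```python
import math

# Exact count of legal 19x19 Go positions, from the figure above.
GO_POSITIONS = int(
    "208168199381979984699478633344862770286522453884530548425639"
    "456820927419612738015378525648451698519643907259916015628128"
    "546089888314427129715319317557736620397247064840935"
)

CHESS_POSITIONS = 48 * 10**43        # rough estimate: ~4.8 x 10^44
ATOMS_LOW, ATOMS_HIGH = 10**80, 10**83

print(len(str(GO_POSITIONS)))        # 171 digits, i.e. ~2.1 x 10^170
# Go exceeds chess by roughly 126 orders of magnitude:
print(math.log10(GO_POSITIONS) - math.log10(CHESS_POSITIONS))
```

Even squaring the high atom estimate (10^166) still falls short of the Go count.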
I think that using these numbers as stand-ins for difficulty is itself a form of obfuscation.
The truth is that, despite the massive number of potential board states, Chess and Go are some of the easier games to solve, thanks to their nature (perfect information, zero randomness, alternating turns where each player plays exactly one move). And trying to use board states as a proxy for complexity and complexity as a proxy for difficulty doesn't generalize to other categories of games. Compared to Go, what's the complexity of Sid Meier's Civilization? If I devise a game of Candyland with 10^180 squares, is that harder to devise an optimal strategy for than Go just because it has more board states?
The reason that we're still using board states as a proxy for difficulty is because historically our metric of "this is difficult for a computer to play" was based on the size of the decision tree and thus the feasibility of locally searching it up to a given depth. In the age of machine learning, surely we can come up with a more interesting metric?
Computers became better than humans at both Go and poker in 2016. The difference is that https://www.deepstack.ai/ was achievable by academics on normal research grants. The training for the final version of AlphaGo is estimated at about $35 million. And who knows how many other versions were created?
Yes, fully solving either Go or Civilization is computationally infeasible. But I would be shocked if playing Civilization at a human level was too hard for us to solve with current machine learning techniques.
Total game states is one measure. What does it take to solve the game?
You can also look at the branching factor: how many moves there are to choose from on average. For chess, it's about 35. For Go, it's something like 250.
You can also look at how long it takes from a move to seeing its negative consequences. A bad move in chess is frequently visible in very few moves. You lose a piece. You lose an exchange. Often these sequences are virtually forced. By contrast, the consequences of a bad move in Go are usually not visible for 50 moves or more. And there is nothing forced about the sequence that gets there.
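Branching factor and lookahead depth combine in the usual back-of-the-envelope game-tree estimate b^d. Using commonly cited rough averages (b ≈ 35, d ≈ 80 plies for chess; b ≈ 250, d ≈ 150 for Go; these are ballpark figures, not exact):

```python
import math

def tree_magnitude(branching: int, depth_plies: int) -> float:
    """Order of magnitude (base-10 exponent) of b^d, the naive game-tree size."""
    return depth_plies * math.log10(branching)

chess = tree_magnitude(35, 80)    # ~10^123, the classic Shannon-style estimate
go = tree_magnitude(250, 150)     # ~10^359
print(f"chess ~10^{int(chess)}, go ~10^{int(go)}")
```

The gap in the exponent, not just the state count, is what made brute-force-style search so much less effective for Go.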
You can also look at how likely good players are to play similar games. Many chess games have been played over and over again. People sometimes play half the game out of a memorized opening sequence. By contrast, it is plausible that no Go game has ever been played twice on a 9x9 board. It is very unlikely that any Go game has been played twice on a 13x13 or 19x19 board.
You can also look at how big the skill gaps between humans get. In my experience, a 1-stone difference in Go is roughly a similar skill gap to 200 points of Elo in chess. A rank beginner who barely knows how the pieces move may have a 400 rating. No human has ever reached a 2900 rating. That's 12.5 levels. By contrast, Go has 30 levels of amateur (kyu), another 9 for serious players (dan), and then the skill range among professionals is about another 3. That's 42 levels of fairly recognizable skill differences between humans. Which speaks to how much more there is to learn about Go than chess. (Even more so when you realize how much of advancing in chess is a matter of making fewer mistakes. By contrast, advancing in Go is much more about integrating better principles.)
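The arithmetic behind those level counts, as a sanity check (using the figures above: roughly 200 Elo per recognizable chess level; kyu/dan/pro bands for Go):

```python
# Chess: from a ~400-rated rank beginner to a ~2900 peak, at ~200 Elo per level.
chess_levels = (2900 - 400) / 200      # 12.5 levels

# Go: 30 amateur kyu grades + 9 dan grades + ~3 levels among professionals.
go_levels = 30 + 9 + 3                 # 42 levels

print(chess_levels, go_levels)
```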
No matter how you look at it, Go is much more complex than chess.
Yes, but what are the estimated number of states of all these atoms?
Wording like "the game is more complex" overall seems incorrect. A game is not complex by itself (Go's rules, for example, are extremely simple); all the difficulty and challenge depend on the skill of your opponent. The game only gives the opponent room to demonstrate that skill.
State-space complexity
Game tree size
Decision complexity
Game-tree complexity
Computational complexity
[1] https://en.wikipedia.org/wiki/Game_complexity
I mean, I am open to hearing the justification for this, but I was fairly certain that all measures of game complexity are a function of the number of legal positions. Certainly there are other factors, namely the cost of computing the transition from one legal position to another: a simple game might have a very cheap transition function while a complex game has a very expensive one. But I can't conceive of a game where the number of legal positions bears no weight on the game's complexity.
State count does give an upper bound on how complex a game can be, for sure.
1) has infinitely many legal states/positions
2) has imperfect information (another property often used to argue that a game is more complex)
And is:
3) dead simple to play optimally
It's also easy to design a board game with an arbitrary number of legal positions that is dead simple to play optimally.
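As a concrete sketch of that kind of counterexample: a subtraction game ("take 1 or 2 stones; whoever takes the last stone wins") played on a pile of 10^180 stones has more positions than atoms in the universe, yet optimal play is a one-line modulo rule:

```python
PILE = 10**180  # far more positions than atoms in the observable universe

def optimal_move(stones: int) -> int:
    """Take 1 or 2 stones; taking the last stone wins.

    Winning strategy: always leave the opponent a multiple of 3.
    """
    r = stones % 3
    return r if r else 1  # r == 0 is a lost position; any move will do

# 10**180 has digit sum 1, so 10**180 % 3 == 1: take 1 and win.
print(optimal_move(PILE))  # 1
```

So state count alone tells you nothing about how hard it is to find an optimal strategy.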
I haven't played Go in a while, but I'm kind of excited to try going back to use the KataGo-based analysis/training tools that exist now.
Truly a must watch! (just look at the video comments to be convinced)
What stuck with me is Lee Sedol's strong emotional reaction, which eventually led him to retire from professional Go.
It's understandable he didn't expect AlphaGo to be that strong. Or that (for him) losing to a machine took the 'soul' out of the game.
But come on... I've been cornered by Pac-Man ghosts many times. That doesn't make Pac-Man less fun to play.
Nor does losing to the crude 'AI' steering those ghosts. Instead, you play, aim for a high score, see how long you can survive, how many levels you can complete, or how many fruits & ghosts you can eat in a game.
And (if you care) compare how those 'metrics' stack up against other players.
If a machine with superhuman Go-playing ability isn't fun or challenging, then stick to human opponents.
Of course these are his views and choices, and I respect that. But other than providing an extremely challenging opponent, I don't see how a human-beating machine would take the fun out of a game. Rather the opposite: new tactics, new insights, a raised upper bound for a Go player's strength (human or otherwise), etc.
Open up how you frame personality types and life experiences, and you can think of possibilities beyond "I don't see how".
In the world of go, there was an obsession with finding "the perfect move". This was a significant motivation for the players.
That is now completely gone: if you want to find the perfect move, ask a computer.
"I believe that humans can partner with AI and make great progress. As long as we can set clear principles and standards for it, I am quite optimistic about the future of AI technology in our daily lives."
I hope he got paid well.
I thought it would be an easy victory
I ... ended up only winning one out of our five games
It's interesting how an expert in a field can be unaware of how AI is taking over. And a few years later, no human can compete anymore.

I think we are in a similar situation in multiple professions today. For example with self-driving.
Musk recently said that other car manufacturers are not much interested in talks about licensing FSD because they don't think it can work.
In ten years, probably no human can compete with AI drivers anymore.
That's what they said 10 years ago. Sooner or later people will say it and be right, but the last few percent of any problem is a lot harder than people give it credit for. It may not be that hard to stay in a lane or write a little code, and that may look like it's doing most of the job, but those common tasks are just the easy part.
Now it is all NNs and therefore will scale with more data and more compute. Which is increasing exponentially. So far it seems like they are not hitting diminishing returns.
My point being, we don't know where the asymptote lies. Computers have had self-improving algorithms since the '60s, and ever since then people have been making the same bold claim you're making: that because an iterative process for improvement has been discovered, we're close to superhuman AI.
My guess is industrial and home robotics will solve a lot of the “doing things around humans” problems in the next ten years.
Why the hell people decided to automate giant death machines before perfecting small things never made sense to me.
- The home is a very unstructured environment, whereas roads have at least _some structure_, and perhaps ~70% of the most useful roads even have clear lane markings and other signs.
- People already know that roads are dangerous, and there's an expectation that babies won't suddenly crawl in front of cars. This doesn't exist in the home.
- People are more comfortable being recorded on roads and highways than in their own homes, so you can get training data more easily for self driving.
- to do something useful in the home, imo you need to solve navigation _and_ complicated manipulation problems. For self driving, you only need to solve the navigation problem.
- (this is speculation on my part) Customers will happily pay 10k-20k extra for a self-driving car, and there are industries in which even more cost makes sense. Customers are less likely to pay that for a robot that does your chores
Would be very interested to hear the perspective of someone that works on self-driving
Plus there's serious questions about liability with self driving cars which are still unresolved in most of the world - if the goal is to have vehicles operate themselves with no human supervision, who goes to jail when they kill someone? Despite all of the progress that's been made with AI it's mostly been in low-stakes problems where failure isn't a big deal, so we don't have a consensus on what we're supposed to do when a neural network negligently obliterates a person because some logistics company wanted to save a few bucks on driver salaries.
It is prudent to remain cautiously optimistic that the evidence will bear out in time, but not to assert unsupported claims.
Home robotics has to solve two problems: the robot and operating the robot ~perfectly. Self-driving cars already have cars, which are waldos, if you squint. What sort of sensors should be added is up for debate but the actuation mechanism is a solved problem, and a very simple one, cars have three linear inputs and two binary ones for the turn signals. Technically a few more but none of them are any less trivial.
There's less risk of a fatality when Rosie Robot knocks over the vase you inherited from your grandmother, but people are no more tolerant of that kind of failure in home robots than they are in cars.
Which small thing puts a similar burden on mankind?
I think the interesting thing is how an expert in a field is wholly unprepared for predicting how the future will develop.
You mention what Musk has said about FSD and how it will completely take over in just ten years, but I feel compelled to point out that Musk has said that it's just right around the corner with only small challenges left, for many years.
I wouldn't place any faith in anything Musk says.
Just to get ahead of anybody claiming it was not a firm promise, in 2019: “I think we will be feature-complete full self-driving this year, meaning the car will be able to find you in a parking lot, pick you up, take you all the way to your destination without an intervention — this year. I would say that I am certain of that. That is not a question mark." [1]
See that part where he says: “I would say that I am certain of that. That is not a question mark.” That is called a firm promise.
[1] https://www.businessinsider.com/elon-musk-doubles-down-on-cl...
And we're still not really there yet. They work great during some conditions and in certain areas, but they're still nowhere close to making human drivers obsolete.
Easy to see that in hindsight, but when the game was actually played it was earlier in the development of AI and less apparent how good it had become.
That's how it usually goes with technological progress.
In any field.
Progress is minimal for a few years and then suddenly jumps.
So to predict what's coming, you can't just extrapolate the progress of recent years. You have to account for it being exponential with a very uneven distribution of sudden jumps.
Even going back to the closest analogue, chess, there were good chess engines for a long time before Kasparov lost to Deep Blue in '97. Even before Kasparov lost, chess engines were pretty good; just look at the 1996 match, which Kasparov won. A grandmaster still needed to put some thought into how he played.
In Go, however, even the best engines couldn't hold a candle to a professional player, let alone someone who was the equivalent of a chess grandmaster. Hell, even as a lowly amateur I was able to trounce some of the most powerful Go AIs of the time. Looking at some of the pro-vs-AI games from the early 2010s, it's almost painful how bad they were.
It's hard to communicate just how huge of a leap this was, and just how shocking to the whole Go community. It would be like a child one day being unable to speak and the literal next day reciting Shakespeare.
"AlphaZero was a reinforcement learning system that was able to master three different perfect information games - chess, shogi (Japanese chess), and Go - at superhuman levels by just learning from self-play, without using any human expert games or domain knowledge crafted by programmers.
Its predecessor AlphaGo, which defeated the world champion Go player in 2016, was revolutionary but relied on human expert games and domain-specific rules coded by the DeepMind researchers.
AlphaZero started from random play and used a general-purpose reinforcement learning algorithm to iteratively improve its gameplay through self-play, ending up with superior performance compared to the best human players and previous game-specific AI systems.
Many experts were stunned that a general algorithm could rediscover from scratch the millennia-old principles and strategies for these highly complex games, often discovering novel and counterintuitive moves along the way."
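A toy analogue of that self-play loop, with heavy caveats: this is nothing like AlphaZero's scale or architecture, but it has the same shape (start from random play, improve by playing yourself). Tabular Q-learning rediscovers the optimal strategy of a simple subtraction game with no strategy knowledge coded in:

```python
import random

# Subtraction game: take 1 or 2 stones; taking the last stone wins.
# The agent starts from random play and learns purely by playing itself.
N = 21
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in (1, 2) if a <= s}

def greedy(s):
    return max((a for a in (1, 2) if a <= s), key=lambda a: Q[(s, a)])

random.seed(0)
for _ in range(20000):
    s = N
    while s > 0:
        # epsilon-greedy self-play: both "players" share one Q-table
        legal = [a for a in (1, 2) if a <= s]
        a = greedy(s) if random.random() > 0.2 else random.choice(legal)
        nxt = s - a
        if nxt == 0:
            target = 1.0  # we took the last stone: win
        else:
            # negamax bootstrap: opponent's best reply, sign-flipped
            target = -max(Q[(nxt, b)] for b in (1, 2) if b <= nxt)
        Q[(s, a)] += 0.5 * (target - Q[(s, a)])
        s = nxt

# The learned policy matches the known optimal rule: leave a multiple of 3.
learned = {s: greedy(s) for s in range(1, N + 1)}
```

The point of the sketch: the modulo-3 strategy is never written anywhere; it emerges from self-play alone, which is the (vastly scaled-down) essence of the "rediscover from scratch" result described above.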
That would be pretty strange. For a trivial counterexample, you can look up the history of integrated circuits from invention to today.
When your neighbor Bob (who is still paying off his mortgage, whose wife is battling cancer, and who occasionally babysits your kids) runs over your cat, you don't sue him. But you would sue Tesla.
Mark my words, in 10 years ex-programmers will be throwing shelter cats under FSD cars just to earn a living.