DeepMind/YouTube
WATCH: Artificial intelligence beats human Go champion for the first time

Google's neural network masters this incredibly complex ancient Chinese game.

PETER DOCKRILL
28 JAN 2016

Computer scientists at Google's DeepMind division in the UK have achieved a historic first, beating a human champion at the ancient Chinese board game Go with artificial intelligence (AI) for the first time.


Go is a very old game dating back more than 2,500 years, in which players compete to surround one another's stones on a grid. While often compared to Chess in that both games offer a serious strategic challenge, Go far exceeds Chess in the sheer number of its mathematical possibilities.

DeepMind's researchers say that despite the game's seeming simplicity, the number of possible positions in the game is greater than the number of atoms in the known Universe.

According to the Shannon number, named after mathematician Claude Shannon, the same can be said of Chess. Shannon calculated the number of possible positions in Chess as 10^120, whereas the number of atoms in the observable Universe is thought to be approximately 10^80.

Depending on the board size, estimates for Go have been said to reach as high as 10^751, although DeepMind's researchers put it at 10^170. In any case, the added complexity of Go makes developing an AI to master the game all the more challenging.
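One way to get a feel for the gap is a back-of-the-envelope estimate of each game's search tree as b^d, where b is the typical number of legal moves per turn and d the typical game length. The figures below (roughly 35 moves over 80 turns for Chess, 250 moves over 150 turns for Go) are commonly cited approximations, not exact values:

```python
# Rough game-tree size estimate b^d, expressed as a power of ten.
# The branching factors and depths are commonly cited approximations.
import math

def tree_size_exponent(branching_factor, depth):
    """Return log10(b^d), i.e. the approximate exponent of the tree size."""
    return depth * math.log10(branching_factor)

chess = tree_size_exponent(35, 80)    # Chess: ~35 legal moves, ~80 turns
go = tree_size_exponent(250, 150)     # Go: ~250 legal moves, ~150 turns

print(f"Chess ~ 10^{chess:.0f}")  # Chess ~ 10^124
print(f"Go    ~ 10^{go:.0f}")     # Go    ~ 10^360
```

Even on these crude numbers, Go's search tree is hundreds of orders of magnitude larger than Chess's, which is why brute-force search alone cannot cope.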

"Traditional AI methods – which construct a search tree over all possible positions – don't have a chance in Go," writes DeepMind founder Demis Hassabis in a Google blog post. "So when we set out to crack Go, we took a different approach."

That approach involved building a system called AlphaGo that combines an advanced tree search with deep neural networks. The neural networks were trained on some 30 million moves from games played by human experts, to the point where the system could predict the move a player would make 57 percent of the time, beating the previous record of 44 percent.

The researchers then had AlphaGo play itself, with its neural networks adjusting trial-and-error strategies over the course of thousands of games, all powered by Google's servers.
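The self-play feedback loop can be illustrated with a deliberately tiny sketch (my own toy example, not DeepMind's actual implementation): a policy starts with uniform preferences over a few moves, plays many episodes, and nudges its preferences toward moves that led to wins, a bare-bones version of the trial-and-error learning AlphaGo performed at vastly greater scale:

```python
# Toy illustration of learning by trial and error: each "move" has a
# hidden win probability, and a simple policy-gradient-style update
# shifts the policy toward moves that win more often.
import math
import random

random.seed(0)

WIN_PROB = {"a": 0.2, "b": 0.5, "c": 0.8}  # hidden move quality
logits = {m: 0.0 for m in WIN_PROB}        # learned preferences

def policy():
    """Softmax over the current preferences."""
    z = sum(math.exp(v) for v in logits.values())
    return {m: math.exp(v) / z for m, v in logits.items()}

for _ in range(5000):
    probs = policy()
    move = random.choices(list(probs), weights=list(probs.values()))[0]
    won = random.random() < WIN_PROB[move]
    reward = 1.0 if won else -1.0
    # Raise the preference for moves that won, lower it for losses.
    for m in logits:
        grad = (1 - probs[m]) if m == move else -probs[m]
        logits[m] += 0.01 * reward * grad

probs = policy()
best = max(probs, key=probs.get)
print(best)  # the policy concentrates on the strongest move
```

The real system replaces this one-step toy with full games of Go and a deep network in place of the three logits, but the shape of the loop (play, observe the result, adjust) is the same.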

AlphaGo's first public challenge was a tournament held against other software-based Go-playing programs, in which it decisively trounced the competition, losing only one game in 500 played. But the real test came against reigning three-time European Go champion, Fan Hui.

As you can see in the video above, Fan fared no better against the AI, which won five games and lost none to the human champ. It's fascinating to watch highlights of the contest, seeing Fan react with a mixture of frustration and admiration as he is outsmarted by a machine.

"The problem is humans sometimes make very big mistakes, because we are human. Sometimes we are tired, sometimes we so want to win the game, we have this pressure," Fan told Elizabeth Gibney at Nature, describing the match. "The programme is not like this. It's very strong and stable, it seems like a wall. For me this is a big difference. I know AlphaGo is a computer, but if no one told me, maybe I would think the player was a little strange, but a very strong player, a real person."

The next challenge for AlphaGo is to compete against South Korea's Lee Sedol, considered the top Go player in the world over the past decade. Enthusiasts of the game remain hopeful that the human champion will win this upcoming match, scheduled to take place in March.

As for the researchers, they're already thinking about applying what they've learned to problems outside the world of games.

"We are thrilled to have mastered Go and thus achieved one of the grand challenges of AI," writes Hassabis. "However, the most significant aspect of all this for us is that AlphaGo isn't just an 'expert' system built with hand-crafted rules; instead it uses general machine learning techniques to figure out for itself how to win at Go. While games are the perfect platform for developing and testing AI algorithms quickly and efficiently, ultimately we want to apply these techniques to important real-world problems."

The research is reported in Nature.
