
A Computer Might One Day Beat Humans at All Their Own Games

Researchers develop artificial intelligence algorithm that plays against itself to master several classic games.

Image credits: DeepMind Technologies Ltd

Thursday, December 6, 2018 - 14:00

Charles Q. Choi, Contributor

(Inside Science) -- A new computer program taught itself superhuman mastery of three classic games -- chess, go and shogi -- in just a few hours, a new study reports. These findings could lead to artificial intelligence programs that learn to play and master any game, and perhaps other human tasks, researchers said.

From the first days of computing, games have served as benchmarks of how well machines perform in tasks humans also find challenging. Since the computer Deep Blue beat world chess champion Garry Kasparov in 1997, AIs have defeated humans at even more computationally difficult games. For example, in 2016, AlphaGo from the company DeepMind in London bested a master of the ancient Chinese game of go, achieving one of the Grand Challenges of AI at least a decade earlier than anyone had thought possible.

Most programs for playing classic board games are designed to play just one game, and usually rely on human assistance. However, AlphaGo showed it was possible to forgo human knowledge -- instead, the AI learned by playing against itself repeatedly, relying on a strategy known as reinforcement learning to explore through trial and error which actions were best at winning rewards.
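The idea of learning from self-play can be illustrated with a toy example. This is not DeepMind's method (AlphaZero combines deep neural networks with Monte Carlo tree search); it is a minimal sketch of tabular temporal-difference learning on tic-tac-toe, where a single value table is trained purely by the program playing against itself. All function and variable names here are illustrative assumptions, not from the study.

```python
import random

# The eight winning lines on a 3x3 board, indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for i, j, k in LINES:
        if board[i] != '.' and board[i] == board[j] == board[k]:
            return board[i]
    return None

def play_game(V, eps=0.1, alpha=0.5):
    """One self-play game. V maps board tuples to a value estimated
    from X's perspective; both sides consult the same table."""
    board = ['.'] * 9
    history = []          # afterstates visited during this game
    player = 'X'
    while True:
        moves = [i for i, c in enumerate(board) if c == '.']
        if random.random() < eps:
            move = random.choice(moves)      # explore
        else:
            def val(m):                      # value of the afterstate
                board[m] = player
                v = V.get(tuple(board), 0.0)
                board[m] = '.'
                return v
            # X maximizes the shared value; O minimizes it.
            best = max if player == 'X' else min
            move = best(moves, key=val)
        board[move] = player
        history.append(tuple(board))
        w = winner(board)
        if w or '.' not in board:
            reward = 1.0 if w == 'X' else (-1.0 if w == 'O' else 0.0)
            # Temporal-difference backup: propagate the final reward
            # backward through the states the game visited.
            target = reward
            for s in reversed(history):
                V[s] = V.get(s, 0.0) + alpha * (target - V.get(s, 0.0))
                target = V[s]
            return w
        player = 'O' if player == 'X' else 'X'

random.seed(0)
V = {}
for _ in range(5000):
    play_game(V)
```

Starting from random play, the table gradually assigns higher values to positions that tend to lead to wins, with no human game knowledge beyond the rules encoded in `winner` -- the same principle, at vastly smaller scale, that self-play reinforcement learning applies to chess, go and shogi.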

Now scientists at DeepMind have developed AlphaZero, which used reinforcement learning to master not just one challenging game, but three -- chess, go and shogi. AlphaZero started with no knowledge about any of the games beyond the rules, and from totally random play, it learned how good play looked, unconstrained by the way humans think about the game, DeepMind CEO and co-founder Demis Hassabis explained in a statement.

AlphaZero mastered the games quickly while running on a device with the computational power of a very large supercomputer. After just a few hours of learning on its own, it was able to beat state-of-the-art AI programs that specialized in those games.

In the future, AI may tackle more challenging situations, such as multiplayer video games and other contests in which players have more choices of actions and not all the information needed to make each decision. It may also impact areas such as drug design and biotech.

The scientists detail their findings in the Dec. 7 issue of the journal Science.

© American Institute of Physics

Author Bio & Story Archive

Charles Q. Choi is a science reporter who has written for Scientific American, The New York Times, Wired, Science, Nature, and National Geographic News, among others.