The AI is able to teach itself how to play Atari games. Image Credit: CC BY-SA 3.0 Georges Seguin
Google's DeepMind AI is now capable of playing several Atari 2600 games as skillfully as a human.
Typically, when a computer plays a game such as chess, it can calculate which moves to make because it has been programmed with all the information it needs about the game.
DeepMind, however, an artificial intelligence created by DeepMind Technologies, the startup Google recently acquired, knows nothing about a game when it starts playing; instead, it learns to master each game through experience as it plays.
The company has now demonstrated this technology in action by having the computer figure out, entirely on its own, how to rival the scores of human players across a selection of 49 classic Atari games.
"It's the first time that anyone has built a single general learning system that can learn directly from experience," said DeepMind co-founder Demis Hassabis.
"The ultimate goal is to build general purpose smart machines - that's many decades away. But this is going from pixels to actions, and it can work on a challenging task even humans find difficult. It's a baby step, but an important one."
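The trial-and-error learning Hassabis describes is reinforcement learning: the agent starts knowing nothing, acts, observes rewards, and gradually improves. DeepMind's system paired this idea with deep neural networks operating on raw pixels; as a simpler illustration of the underlying principle, here is a minimal tabular Q-learning sketch on a toy "game" (the corridor environment and all parameter values are illustrative assumptions, not DeepMind's actual setup).

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Learn a value table purely from interaction: the agent begins
    with no knowledge of the game and improves by trial and error."""
    rng = random.Random(seed)
    q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit what was learned, sometimes explore
            if rng.random() < epsilon:
                action = rng.randrange(n_actions)
            else:
                best = max(q[state])
                action = rng.choice(
                    [a for a in range(n_actions) if q[state][a] == best])
            next_state, reward, done = step(state, action)
            # nudge the estimate toward reward plus discounted future value
            target = reward + (0.0 if done else gamma * max(q[next_state]))
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q

# Toy game: a 5-cell corridor. Action 1 moves right, action 0 moves left;
# reaching the last cell scores 1 point and ends the episode.
def corridor(state, action):
    next_state = min(state + 1, 4) if action == 1 else max(state - 1, 0)
    done = next_state == 4
    return next_state, (1.0 if done else 0.0), done

q = q_learning(5, 2, corridor)
policy = [max(range(2), key=lambda a: q[s][a]) for s in range(4)]
print(policy)  # the learned policy should move right in every non-terminal cell
```

No rule about the corridor is ever given to the agent; like the Atari system, it discovers the winning behavior purely from the rewards it experiences.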
The AI ended up achieving more than 75 percent of the score of human players on more than half of the games it played, and it even invented strategies and exploited loopholes that human players had missed.
"The interesting and cool thing about AI tech is that it can actually teach you, as the creator, something new," said Hassabis. "I can't think of many other technologies that can do that."
Source: Wired.co.uk
DeepMind, Artificial Intelligence