When talk turns to robots, the most pressing concern is usually how quickly they're going to take our jobs - but perhaps the grand masters of chess should be the ones looking over their shoulders. A computer scientist in the UK has invented a new type of chess artificial intelligence that's able to reach International Master level after just 72 hours of tuition.
Chess-playing computers aren't new of course - IBM's Deep Blue scored the first AI triumph over a human world champion by beating Garry Kasparov in 1997 - but the approach taken by Matthew Lai, a computer scientist at Imperial College London, works differently. Rather than studying millions of possible moves simultaneously, as Deep Blue and its successors do, the new AI (nicknamed Giraffe) is able to 'think' intuitively as it goes, using a strategy of "automatic feature extraction and pattern recognition" that gives it greater autonomy. This neural network approach mimics the human brain and adapts over time.
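The article doesn't spell out Giraffe's actual network, but the core idea - a learned function that maps board features to a score - can be sketched in a few lines. Everything below (the feature names, the weights, the two-unit hidden layer) is invented for illustration, not Lai's real architecture:

```python
import math

def evaluate(features, w_hidden, w_out):
    """Toy neural evaluation: board features -> hidden layer -> scalar score.
    An engine like Giraffe learns its weights from self-play; here they are
    fixed, made-up numbers purely to show the shape of the computation."""
    hidden = [math.tanh(sum(w * f for w, f in zip(row, features)))
              for row in w_hidden]
    return sum(w * h for w, h in zip(w_out, hidden))

# Invented example: 3 board features feeding 2 hidden units.
features = [1.0, 0.5, -0.25]            # e.g. material, mobility, king safety
w_hidden = [[0.4, -0.2, 0.1],
            [0.3, 0.6, -0.5]]
w_out = [1.2, -0.8]
score = evaluate(features, w_hidden, w_out)
```

The point of the pattern-recognition approach is that training adjusts those weights automatically, rather than a programmer hand-tuning what each feature is worth.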
As the MIT Technology Review reports, modern-day chess programs essentially use 'brute force' to beat human players. Giraffe, by contrast, assesses the current state of the board, which is what makes its learning so fast. Based on his testing, Lai says his AI performs "moderately better" than the competition.
Lai calls Giraffe "a chess engine that uses self-play to discover all its domain-specific knowledge, with minimal hand-crafted knowledge given by the programmer"; he also says it's "the most successful attempt thus far at using end-to-end machine learning to play chess".
The trick is being able to narrow down the number of potential avenues that need exploring, pursuing the most promising ones and discarding the rest. That's how a human chess player approaches the game, so Giraffe could help point the way towards artificial intelligence that operates more like our own brains do.
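That narrowing-down step can be sketched simply: score every candidate move, keep the top few, discard the rest. The scoring function here is a stand-in for a learned evaluation, and the move scores are invented toy numbers:

```python
def prune_moves(moves, score_fn, keep=3):
    """Keep only the most promising candidate moves - as a human player
    (or a selective engine) would - instead of searching everything.
    `score_fn` stands in for a learned evaluation; hypothetical here."""
    ranked = sorted(moves, key=score_fn, reverse=True)
    return ranked[:keep]

# Invented toy scores: higher means more promising.
scores = {"e4": 0.8, "d4": 0.7, "a3": 0.1, "h4": 0.05, "Nf3": 0.6}
best = prune_moves(list(scores), scores.get, keep=3)
# best now holds only the three highest-scoring moves.
```

A brute-force engine would explore all five moves (and everything that follows from them); pruning to the promising three is what keeps the search tree manageable.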
The Giraffe AI weighs up a move in three stages: first it checks whose turn it is and what pieces are available; then it assesses the state of the board, figuring out which pieces are where; finally, it considers the moves its pieces can make. It still draws on millions of stored positions, but these are moves it has tried and tested itself. In effect, it's learning which moves work and which don't as it goes along, in the same way as a human player would.
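The three stages above amount to turning a position into a list of numbers an evaluation function can consume. A minimal sketch, assuming a made-up dictionary board format (this is not Giraffe's actual representation):

```python
def extract_features(board):
    """Build a flat feature list in the three stages the article describes:
    (1) side to move and material, (2) piece locations, (3) mobility.
    `board` is an invented dict format, not a real engine's data structure."""
    # Stage 1: whose turn it is, and what pieces each side has.
    stage1 = [1.0 if board["turn"] == "white" else 0.0,
              float(len(board["white_pieces"])),
              float(len(board["black_pieces"]))]
    # Stage 2: where those pieces sit (square indices 0-63).
    stage2 = [float(sq) for _, sq in board["white_pieces"]]
    stage2 += [float(sq) for _, sq in board["black_pieces"]]
    # Stage 3: how many legal moves each side has (mobility).
    stage3 = [float(board["white_mobility"]), float(board["black_mobility"])]
    return stage1 + stage2 + stage3

# Toy position: a king and a rook per side.
board = {"turn": "white",
         "white_pieces": [("K", 4), ("R", 0)],
         "black_pieces": [("K", 60), ("R", 56)],
         "white_mobility": 18, "black_mobility": 16}
features = extract_features(board)
```

Each stage contributes its own slice of the feature list, so the evaluation sees turn, material, placement and mobility all at once when judging a move.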
Right now, Giraffe's main drawback is that it takes longer than other computer players to make a move, but Lai says he has plenty of improvements planned - and the main objective is to work smarter rather than harder. Ultimately, his research may help computers get better at teaching themselves to do everything from driving a car to making an omelette, whether or not there's a flesh-and-blood human being around to help.