The debate over super-intelligent robots - and their potential dangers - is nothing new, but it's only in recent years that computing hardware and software have developed the speed and capacity to bring us to the brink of creating a machine that could one day prove smarter than we are. Stuart Armstrong, an academic at the University of Oxford in the UK, has been setting out the threat posed by next-generation AI, and says programmers must work to make it safe before it spirals out of our control.

Part of the problem is defining exactly what 'safe' means: robots instructed to prevent human suffering could theoretically decide to kill the sick and infirm, Armstrong says, or robots told to protect humans might decide to imprison them for their own good. 

"You can give AI controls, and it will be under the controls it was given," said Armstrong, as The Telegraph reports. "But these may not be the controls that were meant."

Armstrong was speaking at a debate on AI organised by technology research firm Gartner, where he argued that it will be difficult to predict which direction machine intelligence is developing in until it's too late to intervene. Although robots could be programmed with a moral code, defining what those morals are and how to apply them would be problematic: it's difficult enough getting human beings to agree on right and wrong without adding artificially intelligent robots into the mix as well.

"Plans for safe AI must be developed before the first dangerous AI is created," said Armstrong. "The software industry is worth many billions of dollars, and much effort is being devoted to new AI technologies. Plans to slow down this rate of development seem unrealistic. So we have to race toward the distant destination of safe AI and get there fast, outrunning the progress of the computer industry."

As we rush to apply computer processing power to some of the most intractable problems of our time, we're also hastening the arrival of software systems that can run independently of their creators, Armstrong says - so in trying to benefit from AI, we may be pushing it further than we should. "Humans steer the future not because we're the strongest or the fastest, but because we're the smartest," said the Oxford academic. "When machines become smarter than humans, we'll be handing them the steering wheel."

Armstrong, who works at the Future of Humanity Institute at Oxford, believes crunch time could come within the next few decades. The risk, he says, is creating a world in which we're completely dependent on software and hardware - and once we're dependent, we can quickly become redundant, with machines communicating with each other directly, without any human involvement at all.

Troubling though Armstrong's predictions are, there are many differing opinions on where AI is heading, and some of the world's best minds are working on the problem to make sure we're not overrun by robots in the future. It might be worth keeping an eye on Siri, though, just in case.