While it has its fair share of critics, the Turing test has become one of the best-known ways of measuring artificial intelligence. The test, first proposed in 1950, holds that if a human judge can't tell the difference between an AI and a real human over a text-based chat, the AI has passed.

But now scientists have discovered a loophole of sorts in the Turing test, and it involves one of the oldest tricks in the book: simply staying silent.

It turns out that silence on the part of the AI can skew the perception of the person on the other end of the conversation, who is left wondering whether they're dealing with a shy (or offended) human being or a broken AI-powered bot.

Scientists from Coventry University in the UK looked at six transcripts from earlier Turing tests and found that when the machines stopped responding, it put doubt in the minds of the judges. Often the silence wasn't intentional coyness on the part of the AI; it was simply down to technical problems.

"The technical issues entailed the failure of the computer programs to relay messages or responses to the judge's questions," one of the researchers, Huma Shah, told Dyllan Furness at Digital Trends. "The judges were unaware of the situation and hence in some cases they classified their hidden interlocutor as 'unsure'."

If the judge is unsure, the AI has succeeded.

As Shah and fellow researcher Kevin Warwick note in their study, there's still plenty of controversy over the 'rules' of the Turing test, and plenty of ambiguity about what exactly its creator Alan Turing intended the challenge to actually measure.

The interpretation used in this case is the basic "imitation game" described by Turing: the AI has to be able to pretend to be human to a reasonably convincing level.

Leaving aside the debate over the conditions of the Turing test itself, the study considers the repercussions of a bot effectively pleading the Fifth (that is, staying quiet).

If a machine can fool humans by being tight-lipped, the researchers argue, then passing the test doesn't prove the machine can think – just that it can clam up (and by that reckoning, a stone could pass just as easily). If the human judge is unsure, the AI has won; and how can any certain judgement be made about a machine that says nothing?

The team suggests that clever bots could keep quiet to avoid giving themselves away with a stupid answer, and that future Turing tests could be tweaked so silence automatically disqualifies a contestant, whether they're artificial or human.
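To see how much hinges on the scoring rule, here's a minimal sketch of the two rules in code. This is purely our own illustration, not code from the study: the function name, the verdict labels, and the silence_disqualifies flag are all hypothetical.

```python
def machine_passes(verdict: str, stayed_silent: bool,
                   silence_disqualifies: bool = False) -> bool:
    """Toy scoring rule for a single Turing test transcript (illustrative only).

    verdict: the judge's call -- 'human', 'machine', or 'unsure'.
    Under the reading the researchers object to, anything short of a
    confident 'machine' verdict counts in the machine's favour; the
    proposed tweak throws out transcripts where the contestant never spoke.
    """
    if silence_disqualifies and stayed_silent:
        return False                    # proposed rule: silence disqualifies
    return verdict != "machine"         # original rule: 'unsure' counts as a pass


# A bot (or a stone) that says nothing tends to draw an 'unsure' verdict:
print(machine_passes("unsure", stayed_silent=True))                             # True
print(machine_passes("unsure", stayed_silent=True, silence_disqualifies=True))  # False
```

Under the original rule the silent contestant passes by default; under the tweaked rule, the same transcript is thrown out, which is exactly the loophole-closing the researchers propose.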

According to Shah, Turing designed his test to encourage the development of "elaborate machines to respond in a satisfactory and sustained manner", not bots that try to fool their judges by staying schtum. In other words, silence isn't really playing fair, nor in the spirit of the test, even if it's effective.

Perhaps we need a new Turing test for the 21st century – after all, computing has come a long way since 1950. Or maybe the test is no longer as relevant as it once was, given the staggering advances AI has made in the past several decades. 

Microsoft CEO Satya Nadella recently predicted that the future of AI is "not going to be about human vs. machine", but rather about how intelligent systems can help augment and enhance what we already do best. It's a view these researchers seem to share.

"The role of AI is to augment human performance with intelligent agents," Shah told Digital Trends. "For example, a human educator using an AI to score student assignments and exam questions leaving the teacher time to innovate learning, inspiring students, encouraging more into STEM, including females, for a better life or world of cooperation."

The findings have been published in the Journal of Experimental & Theoretical Artificial Intelligence.