Humans are notoriously woeful lie detectors, even when staring liars straight in the face.

A machine learning tool trained to detect tell-tale signs of lying has been found to do better than the average person, using little more than data from wearable sensors that pick up teensy flickers in facial muscles.

Developed by researchers at Tel Aviv University, Israel, the system correctly detected when people were lying 73 percent of the time, on average, and revealed two types of liars in the process.

It's "not perfect, but much better than any existing [facial recognition] technology," says behavioral neuroscientist Dino Levy.

Wearable electrodes measured the movements of facial muscles in 40 volunteers who either fibbed or told the truth, to feed a machine learning algorithm that slowly learned to recognize 'give-away' patterns in people's facial expressions.
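The article doesn't detail the team's model, but the basic recipe – turning each trial's muscle readings into features and learning a decision boundary between lies and truths – can be sketched with a simple nearest-centroid classifier. Everything below is illustrative: the feature values are invented, not the study's data.

```python
import math

# Hypothetical training trials: each feature pair is
# (mean eyebrow-muscle activity, mean cheek-muscle activity),
# labelled True for a lie. All values are made up for illustration.
trials = [
    ((0.2, 0.9), True),   # 'cheek liar' trials
    ((0.3, 0.8), True),
    ((0.9, 0.2), True),   # 'eyebrow liar' trials
    ((0.8, 0.3), True),
    ((0.2, 0.2), False),  # truthful trials
    ((0.3, 0.1), False),
    ((0.1, 0.3), False),
]

def centroid(points):
    """Average position of a list of 2D feature points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def train(data):
    """Compute one centroid per class: lie trials and truth trials."""
    lies = centroid([x for x, label in data if label])
    truths = centroid([x for x, label in data if not label])
    return lies, truths

def predict(model, features):
    """Classify a trial as a lie if it sits closer to the lie centroid."""
    lies, truths = model
    return math.dist(features, lies) < math.dist(features, truths)

model = train(trials)
print(predict(model, (0.85, 0.25)))  # eyebrow-heavy trial -> True (lie)
print(predict(model, (0.15, 0.20)))  # low activity -> False (truthful)
```

A nearest-centroid rule is far cruder than anything a research team would deploy, but it shows the shape of the task: the two "liar types" reported in the study correspond to trials clustering in different corners of this feature space.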

Commonly used lie-detector technology, such as the polygraph, typically relies on physiological responses like heart rate, blood pressure, and breathing rate – all functions people can learn to control under pressure. Despite their ongoing use in various areas of law enforcement, polygraphs are regarded as inaccurate at best.

So the search continues for other objective ways to tell if somebody is knowingly being deceitful.

The idea that genuine emotions can 'leak' onto the face of a liar is nothing new, though. It goes back at least as far as Charles Darwin, who dabbled in psychology experiments. In 1872 he noted: "Muscles of the face which are least obedient to the will, will sometimes alone betray a slight and passing emotion."

Measuring, capturing, or even recognizing them is another matter: these involuntary, uncontrollable micro-expressions appear only for a split second, vanishing within 40 to 60 milliseconds.

Much of the research to locate precise facial muscles that contort to form expressions has been done using a technique called facial surface electromyography, or sEMG. It measures the electrical activity of facial muscles and is capable of registering expressions that are too subtle for humans to detect.
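As a rough illustration of the kind of signal processing involved (not necessarily the authors' exact pipeline), sEMG analysis commonly reduces the raw electrical signal to a moving root-mean-square (RMS) envelope, which tracks how strongly a muscle is activating even when the raw trace looks like noise:

```python
import math

def rms_envelope(signal, window=50):
    """Moving root-mean-square envelope of a raw EMG trace.

    A standard first step in sEMG analysis: the smoothed envelope
    rises and falls with muscle activation intensity.
    """
    env = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - window + 1):i + 1]
        env.append(math.sqrt(sum(x * x for x in chunk) / len(chunk)))
    return env

# Synthetic trace: a quiet baseline followed by a burst of 'muscle' activity
raw = [0.01, -0.02, 0.01, -0.01] * 25 + [0.8, -0.9, 0.85, -0.8] * 25
envelope = rms_envelope(raw, window=50)
print(envelope[-1] > envelope[50])  # envelope rises during the burst -> True
```

Real pipelines also band-pass filter the raw signal and remove mains interference first, but the envelope is the feature that makes sub-visible twitches measurable.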

The new study tested a novel type of wearable electrode designed to be more sensitive and comfortable than standard sEMG devices, along with a machine learning tool trained to read facial expressions in video footage.

"Since this was an initial study, the lie itself was very simple," Levy explains.

Two people sat facing one another, rigged up to the electrodes. One person wore headphones and either repeated the word they heard or said something different, to mislead their partner who was trying to catch them out.

The researchers recorded the activity of facial muscles between the eyebrows (the corrugator supercilii) and on the cheeks (the zygomaticus major) of participants as they were listening to the audio cues, speaking, and responding.

People didn't necessarily hesitate any more or less when lying, as you might expect.

The study did find that among the 48 participants, people displayed different 'give-away' indicators. Some people activated their cheek muscles when lying, while others twitched muscles near their eyebrows.

With the lie-detecting algorithm, "We successfully detected lies in all the participants and did so significantly better than untrained human detectors," who rightly spotted lies anywhere from 22 to 73 percent of the time, Levy and colleagues write in their paper.

But the experimental algorithm still needs a lot more work, and people's telltale muscles are prone to changing over time, the study found.

"Interestingly, individuals who were able to successfully deceive their human counterparts were also poorly detected by the machine-learning algorithm," the researchers add.

Detecting lies is obviously more challenging in real-life or high-stakes situations where repeat liars generally recount longer stories threaded with lies and half-truths.

There are also other types of deception beyond outright one-word lies – such as omission, evasion, and the use of ambiguous language to conceal the truth (known as equivocation) – all of which might complicate things.

Of course, this is still very early days, and there are many reasons why someone might be nervous but not lying. Time will tell if this technique is able to concretely tell the difference.

"Our hope is that eventually, after development and thorough testing, this could provide a serious alternative to polygraph tests," Levy told The Times of Israel.

The team plans to continue with experiments to train their algorithm to detect these flash facial expressions with greater accuracy, so that it could eventually do away with the electrodes altogether.

They expect that testing their set-up with people telling longer, more elaborate lies could reveal a whole spectrum of micro-expressions associated with lying. The image analysis tool could perhaps also be improved by integrating other emerging technologies that focus on changes in tone of voice, Levy and colleagues suggest.

"There is a host of possible manifestations of deception, and we have merely uncovered two of them," the researchers conclude.

The study was published in Brain and Behavior.