You might be familiar with the 'uncanny valley' effect, where a computer-generated character or robot looks almost human… but falls just short enough of the real thing to leave you feeling uneasy. Now neuroscientists have figured out which part of the brain might give rise to these weird feelings.

The same research suggests some of us may be more susceptible to the uncanny valley sensation than others, and the findings could prove invaluable as engineers try to make humanoid robots and artificially created avatars more relatable.

Based on functional MRI scans of 21 individuals as they took part in experiments involving images of humans and robots, researchers identified a link between the uncanny valley and two distinct parts of the medial prefrontal cortex, a section of the brain involved in judging all kinds of stimuli.

"We were surprised to see that the ventromedial prefrontal cortex responded to artificial agents precisely in the manner predicted by the uncanny valley hypothesis," says neuroscientist Fabian Grabenhorst, from the University of Cambridge in the UK.

"With stronger responses to more human-like agents but then showing a dip in activity close to the human/non-human boundary – the characteristic 'valley'."

The uncanny valley idea has been around since the 1970s, first proposed by Japanese roboticist Masahiro Mori. It suggests our emotional response to robots and CGI creations becomes more positive the more life-like they become – but then dips again right before the point where these artificial beings become indistinguishable from the real thing.

In other words, the creepy feeling isn't brought on by robots that are obviously fake, or by humans who are definitely real – it strikes somewhere near the boundary between the two.

"It implies a neural mechanism that first judges how close a given sensory input, such as the image of a robot, lies to the boundary of what we perceive as a human or non-human agent," says Grabenhorst.

"This information would then be used by a separate valuation system to determine the agent's likeability."

To investigate further, Grabenhorst and his colleagues had their volunteers look at images of humans, artificial humans, android robots, humanoid robots and mechanoid robots, and asked them to rate each one for likeability and human-likeness.

Then the participants were asked to decide which of these 'agents' they would choose to pick out a gift a human would like. In line with the uncanny valley phenomenon, people chose either real humans or the more human-like artificial agents for this task. They did not opt for the agents in between, where the human-or-not distinction was hardest to make out.

Based on the fMRI scans, the researchers think one part of the medial prefrontal cortex tries to create a clear distinction between humans and non-humans, and another part then assesses likeability.

The tests also showed that another brain region – the amygdala, which is involved in decision-making, memory, and emotional responses – was working hardest when almost-human agents were being rejected. The strength of that rejection varied between participants, hinting that some of us have a deeper uncanny valley than others.

These valuation signals could change over time, the scientists suggest, with artificial agents perhaps able to earn our trust.

If we're going to co-exist with AI and robots, though, minimising the uncanny valley effect is going to be important.

"This is the first study to show individual differences in the strength of the uncanny valley effect, meaning that some individuals react overly and others less sensitively to human-like artificial agents," says technologist Astrid Rosenthal-von der Pütten, from the RWTH Aachen University in Germany.

"This means there is no one robot design that fits – or scares – all users. In my view, smart robot behaviour is of great importance, because users will abandon robots that do not prove to be smart and useful."

The research has been published in the Journal of Neuroscience.