Biometric security is taking over from the humble password, from the fingerprint sensor you might use to unlock your phone, to advanced iris scanning - and now there's a new method in the pipeline: the sound of your skull. 

Scientists in Germany are working on a system that identifies you by the way your skull vibrates in response to an ultrasonic signal, a pattern that could be just as unique as your fingerprint. It could eventually be used to prove you are who you say you are when logging into your email, or trying to gain access to the Pentagon.

As Andrew Liszewski from Gizmodo reports, although the researchers tested the device on a small sample of just 10 people, the new system was able to identify the correct user 97 percent of the time, based on their skull sounds alone.

Of course, to measure skull vibrations you're going to need some kind of headset or accessory, and the researchers are currently working with a Google Glass-style device to log you in.

Eventually, the required tech could be incorporated into smartphones, so holding one to your head to take a call would be enough to identify you.

The name for the new system is SkullConduct, and it joins various other weird and wonderful biometric security solutions in development, including ones using vein patterns and brain waves. The idea is that these biological markers are much harder to fake, whereas if someone steals your password, you're pretty much out of luck.

"If recorded with a microphone, the changes in the audio signal reflect the specific characteristics of the user's head," the researchers report in their paper.

"Since the structure of the human head includes different parts such as the skull, tissues, cartilage, and fluids and the composition of these parts and their location differ between users, the modification of the sound wave differs between users as well."
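The principle the researchers describe, a known stimulus sound being shaped by each head's unique frequency response, can be sketched in miniature. The code below is a toy simulation, not the actual SkullConduct algorithm: the per-user band gains (`head_gains`), the band count, and the cosine-similarity matching are all invented for illustration. It plays simulated white noise "through" each user's head, averages the recorded spectrum into coarse frequency bands as a fingerprint, and matches a fresh recording against enrolled profiles.

```python
import numpy as np

rng = np.random.default_rng(0)

N_USERS = 10       # same sample size as the study
N_BANDS = 32       # coarse frequency bands forming the "skull fingerprint"
SIGNAL_LEN = 4096  # samples of white-noise stimulus per presentation
N_REPEATS = 8      # presentations averaged into one recording

# Hypothetical stand-in for real heads: each user attenuates each
# frequency band by a fixed, user-specific gain.
head_gains = rng.uniform(0.2, 1.0, size=(N_USERS, N_BANDS))

def record_response(user):
    """Play white noise 'through' a user's head; return average band energies."""
    feats = np.zeros(N_BANDS)
    for _ in range(N_REPEATS):
        spectrum = np.abs(np.fft.rfft(rng.standard_normal(SIGNAL_LEN)))
        bands = np.array([b.mean() for b in np.array_split(spectrum, N_BANDS)])
        # The head's frequency response shapes the recorded spectrum,
        # plus a little measurement noise.
        feats += head_gains[user] * bands * (1 + 0.02 * rng.standard_normal(N_BANDS))
    return feats / N_REPEATS

# Enrolment: store one reference recording per user.
profiles = np.array([record_response(u) for u in range(N_USERS)])

def identify(features):
    """Return the enrolled user whose profile is most similar (cosine)."""
    sims = profiles @ features / (
        np.linalg.norm(profiles, axis=1) * np.linalg.norm(features))
    return int(np.argmax(sims))

print(identify(record_response(3)))  # identify a fresh recording of user 3
```

In this toy setup, identification works because the user-to-user variation in band gains is much larger than the run-to-run variation in the stimulus; the real system faces the much harder problem of doing this with actual bone-conducted audio and background noise.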

There are a couple of problems to overcome before SkullConduct can become a viable proposition, as Hal Hodson from New Scientist reports.

First, the system needs to be able to cope with background noise, a factor not considered in the current prototype. Second, the device currently uses white noise as the trigger sound; further down the line, something less grating would have to be used, such as a short musical jingle.

The team from the University of Stuttgart, the University of Saarland, and the Max Planck Institute for Informatics will present SkullConduct at the ACM CHI Conference on Human Factors in Computing Systems in California in May.