MIT researchers have invented a new computer interface that's totally hands-free and voice-free, but it doesn't read your brain waves either. Instead it relies on something called subvocalisation, or silent speech - the name for what you're doing when you say words in your head.

It's called AlterEgo, and it consists of a wearable headset that wraps around the wearer's ear and jaw, and a computing system that processes and translates the signals picked up by the headset, then outputs a response.

"The motivation for this was to build an IA device - an intelligence-augmentation device," said lead researcher Arnav Kapur, from the MIT Media Lab.

"Our idea was: Could we have a computing platform that's more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?"

The system works a bit like a myoelectric prosthetic.

When you intend to act, your brain sends electrical signals into your muscles to tell them what to do.

With a myoelectric prosthetic, electromyography is used to record those electrical signals and send them to a processor, which translates them into commands that tell the robotic limb what action the user intended to perform.
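
To picture that pipeline, here is a minimal sketch in Python - purely illustrative, not code from AlterEgo or any real prosthetic, with made-up sampling rates and feature choices - of how EMG data is commonly handled: sample the electrodes, summarise each short window of raw signal, and hand the summary to a trained classifier that guesses the intended action.

```python
# Illustrative EMG pipeline (assumed names and values, not AlterEgo's code).
import numpy as np

SAMPLE_RATE_HZ = 1000   # assumed electrode sampling rate
WINDOW_SECONDS = 0.25   # classify the signal in quarter-second chunks

def extract_features(window: np.ndarray) -> np.ndarray:
    """Summarise one window of raw electrode readings (shape: channels x samples)."""
    mean_abs = np.mean(np.abs(window), axis=1)                       # average muscle activation
    zero_crossings = np.mean(np.diff(np.sign(window)) != 0, axis=1)  # how "busy" the signal is
    return np.concatenate([mean_abs, zero_crossings])

def decode_intent(window: np.ndarray, classifier) -> str:
    """Map one window of signal to an intended action, e.g. 'open hand'."""
    features = extract_features(window)
    return classifier.predict(features.reshape(1, -1))[0]
```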

Speaking is a little more complex, but the basic concept is the same.

When you think of a word, your brain sends signals to the muscles of your face and throat to shape that word for speaking. This is called subvocalisation, and many people do it when they're reading.

The AlterEgo headset consists of electrode sensors that sit on the regions of the wearer's face and jaw where those signals are strongest and most reliable, as determined by Kapur's team.

It also includes a pair of bone-conduction headphones that wrap around the outside of the wearer's ears.

These transmit sound directly through the bone of your skull, leaving your ears free to hear the world around you.

Combined, the sensors and headphones allow the wearer to silently "speak" to the computer by thinking words, and the computer to speak back via the headphones - like a Google Assistant or Siri you can talk to without having to say "OK Google" or "Hey Siri" out loud in a crowded street.

It still requires calibration for every individual user.

This is because every wearer's neuromuscular signals are slightly different, so the system has to learn each user's "accent".

For the prototype AlterEgo, the research team created tasks with limited vocabularies of about 20 words each.

One was an arithmetic task, in which the user would subvocalise large addition or multiplication problems.

Another was playing chess, in which the user would issue subvocal commands using the standard chess numbering system.

For each application, they then trained a neural network to map particular neuromuscular signals to particular words.
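
As a rough idea of what that mapping step involves - using scikit-learn and random placeholder data rather than the team's actual model, vocabulary or recordings - a small neural network can be trained to turn per-window feature vectors into one of a handful of words:

```python
# Sketch only: a small neural network mapping signal features to a tiny vocabulary.
# The vocabulary, data and model choice are placeholders, not the researchers' own.
import numpy as np
from sklearn.neural_network import MLPClassifier

VOCABULARY = ["zero", "one", "two", "three", "plus", "times", "equals"]

# X: one feature vector per recorded window of subvocalised speech
# y: the word the wearer was silently saying during that window
rng = np.random.default_rng(0)
X = rng.random((210, 16))           # placeholder features
y = np.array(VOCABULARY * 30)       # placeholder labels, 30 examples per word

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(X, y)

def decode_word(feature_vector: np.ndarray) -> str:
    """Translate one window of neuromuscular features into a word."""
    return model.predict(feature_vector.reshape(1, -1))[0]

print(decode_word(rng.random(16)))
```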

Once the basic word-signal mappings are trained into AlterEgo, it retains that information, so retraining it for new users is a much simpler process.
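
One plausible way that reuse could work - sketched below with placeholder data and standard scikit-learn tools, not the researchers' own method - is ordinary fine-tuning: keep the weights learned from earlier wearers and train only briefly on a new wearer's short calibration session instead of starting over.

```python
# Hypothetical fine-tuning sketch; placeholder data throughout, not the authors' method.
import numpy as np
from sklearn.neural_network import MLPClassifier

VOCABULARY = ["zero", "one", "two", "three", "plus", "times", "equals"]

def fake_session(n_windows: int, n_features: int = 16, seed: int = 0):
    """Stand-in for a labelled recording session (features + the words being spoken)."""
    rng = np.random.default_rng(seed)
    X = rng.random((n_windows, n_features))
    y = np.array((VOCABULARY * (n_windows // len(VOCABULARY) + 1))[:n_windows])
    return X, y

# 1. Train a base model on data pooled from earlier wearers.
base_X, base_y = fake_session(210, seed=1)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(base_X, base_y)

# 2. New wearer: a short calibration session, then a brief warm-started fit
#    that reuses the existing weights rather than training from scratch.
calib_X, calib_y = fake_session(42, seed=2)
model.set_params(warm_start=True, max_iter=100)
model.fit(calib_X, calib_y)
```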

For a usability study, the researchers had 10 users spend 15 minutes calibrating the arithmetic application to their own neurophysiology, then 90 minutes using it to perform calculations.

Its average transcription accuracy was about 92 percent - which, Kapur said, would likely improve with regular use.

The team is currently collecting data on more complex conversations to try to expand AlterEgo's capabilities.

"We're in the middle of collecting data, and the results look nice," Kapur said. "I think we'll achieve full conversation some day."

If they do, the implications will be huge - especially if they can achieve human-to-human communication.

This would be useful in noisy environments, or environments where silence is required - but it could also allow the voiceless to communicate, assuming they still had use of the muscles in their jaw and face.

The team presented their paper at the 2018 ACM Conference on Intelligent User Interfaces (IUI 2018), held in Japan on March 7-11; it appears in the conference proceedings and can be read in full online.