Machine learning is a bit of a buzz term that describes the way artificial intelligence (AI) can begin to make sense of the world around it by being exposed to massive amounts of data.

But a new algorithm developed by researchers in the US has dramatically cut the learning time AI needs to teach itself new things, producing a machine capable of recognising and drawing visual symbols that are largely indistinguishable from those drawn by people.

The research highlights how, for all our imperfections, people are actually pretty good at learning things. Whether we're learning a written character, how to operate a tool, or how to perform a dance move, humans only need a few examples before we can replicate what we've been shown.

In comparison, pattern-recognition in most machines – such as computers learning to identify particular faces, or recognise typed characters on a cheque or coupon – usually involves an extensive learning curve, which may amount to hundreds or thousands of drip-fed examples before the AI becomes accurate.

Not any more, though. Using what's called a Bayesian program learning framework, the researchers created an algorithm that effectively programs itself, constructing code to reproduce particular visual symbols.

And rather than simply pasting in the same learned character each time, the probabilistic algorithm draws the symbol slightly differently in every instance, based on a 'generative model' of how to create the character. In this respect, the AI is much like a human: we never write a letter exactly the same way, because we've learned what it's supposed to look like, not how to reproduce an identical copy.
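To get a feel for the idea, here is a toy sketch, not the researchers' actual model: a character 'concept' is stored as a small program of stroke primitives, and every rendering re-runs that program with a little random jitter, so no two drawings come out identical. The names here (CHARACTER_PROGRAM, draw_character, the jitter value) are purely illustrative assumptions.

```python
# Toy sketch of a generative, program-like character model (illustrative only).
import random

# Hypothetical character "program": an ordered list of strokes,
# each stroke a list of (x, y) control points on a unit canvas.
CHARACTER_PROGRAM = [
    [(0.2, 0.9), (0.5, 0.1), (0.8, 0.9)],   # two diagonal strokes of an 'A'-like shape
    [(0.35, 0.5), (0.65, 0.5)],             # horizontal crossbar
]

def draw_character(program, jitter=0.02):
    """Execute the stroke program, perturbing each control point slightly.

    The jitter stands in for the probabilistic 'motor noise' that makes
    every rendering of the same learned concept come out a little different.
    """
    rendering = []
    for stroke in program:
        noisy_stroke = [
            (x + random.gauss(0, jitter), y + random.gauss(0, jitter))
            for x, y in stroke
        ]
        rendering.append(noisy_stroke)
    return rendering

if __name__ == "__main__":
    # Two draws of the same concept: same structure, slightly different points.
    print(draw_character(CHARACTER_PROGRAM))
    print(draw_character(CHARACTER_PROGRAM))
```

The point of structuring the concept as a program rather than a stored image is that the same compact description can be re-executed endlessly, each time producing a plausible new variant instead of an exact copy.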

The researchers exposed their algorithm to 1,600 types of handwritten characters from 50 of the world's writing systems, including Sanskrit and Tibetan, and even invented symbols featured in the TV show Futurama.

Once the machine had learned the characters and could draw them independently, the researchers conducted a 'visual Turing test' to see if human judges could tell machine-drawn characters from ones written by human hands.

Ultimately, the machine's handwriting proved indistinguishable from human renderings of the characters, with fewer than 25 percent of judges performing much better than pure chance in telling the drawings apart. The findings are reported in Science.
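As a rough illustration of how a result like that can be scored (this is not the study's analysis code, and the accuracy figures below are made up), each judge labels a batch of drawings as machine-made or human-made, and a judge only counts as beating chance if their accuracy clears the 50 percent baseline by some margin; the margin used here is an arbitrary assumption.

```python
# Illustrative scoring of a 'visual Turing test' (not the study's own analysis).
def judges_beating_chance(judge_accuracies, chance=0.5, margin=0.05):
    """Return the fraction of judges whose accuracy is clearly above chance.

    `judge_accuracies` is a list of per-judge accuracies between 0 and 1;
    `margin` is an arbitrary illustrative threshold, not the paper's statistic.
    """
    better = [acc for acc in judge_accuracies if acc > chance + margin]
    return len(better) / len(judge_accuracies)

# Made-up sample: most judges hover around 50 percent, so few clear the bar.
accuracies = [0.48, 0.52, 0.55, 0.60, 0.49, 0.51, 0.47, 0.58]
print(judges_beating_chance(accuracies))  # -> 0.25 with this invented sample
```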

"Before they get to kindergarten, children learn to recognise new concepts from just a single example, and can even imagine new examples they haven't seen," said Joshua Tenenbaum, a researcher in cognitive sciences at the Massachusetts Institute of Technology (MIT).

"We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts – even simple visual concepts such as handwritten characters – in ways that are hard to tell apart from humans."

A machine that fools you with its handwriting may not sound like it will change the world, but the potential applications for such a 'quick study' algorithm are pretty exciting.

"Imagine if your smartphone could do this. You use a word, and your smartphone asks you what it means and is able recognise the next time you are saying that to build its repertoire," Tenenbaum told Sarah Knapton at The Telegraph.

"Improving machines' ability to quickly acquire new concepts will have a huge impact on many different artificial-intelligence-related tasks including image processing, speech recognition, facial recognition, natural language understanding and information retrieval."