Consciousness is a slippery concept to pin down, but a small group of neuroscientists just made a solid attempt at doing just that.

Their goal was to determine if we're anywhere near the holy grail of AI – artificial self-awareness.

Now, the short answer is no. Sorry. But before you weep for a bleak future free of robot companions, they do point out how we might yet build our own fully conscious minds.

Questions of consciousness can be largely placed into two folders – the easy and the hard. The easy one is still hard. We'll come back to that.

And the hard? Forget it. It involves having a way to define stuff like free will and agency, which is best left to first-year philosophy undergrads after three beers.

The best we can come up with is to say that consciousness is the thoughts and sensations we all experience personally. Which means we don't yet have a way of establishing whether it exists in something else, such as a computer.

Going back to the easy problem: assuming the consciousness we take for granted in our fellow human beings is based on the same physical laws described in our physics and chemistry textbooks, we should be able to find a way to model it. Theoretically.

This was one of the things that kept mathematical legend Alan Turing up at night. His answer was to lay the groundwork for the modern computer.

While Turing dreamed of universal computing machines that could one day play chess better than world champions, his mind would probably be blown by the level of artificial intelligence we have today in the form of DeepMind's AlphaGo.

As fantastic as these computational systems are, their extraordinary talents only barely overlap with our own cognitive abilities – they can solve problems at ridiculous speeds, but they still don't know they can solve problems.

Could we make a few tweaks in the near future to make them wake up and possibly daydream the demise of human civilisation?

To answer this, three researchers broke consciousness into three categories.

They called the lowest category C0, equating it with the problem solving our brains do without our awareness.

If you've ever driven home from work through peak hour traffic, only to realise you have no recollection of the journey and your fuel light is now blinking, you might appreciate the C0 of the human brain.

Computers can do this well enough, as reflected in the imminent driverless vehicle revolution.

But it's a stretch to call that 'consciousness' in any real sense, which brings us to the next category, C1.

"It refers to the relationship between a cognitive system and a specific object of thought, such as a mental representation of 'the fuel-tank light'," the researchers write.

In C1, that object of thought is selected for global processing, lifted out of a single, narrow circuit and made available for manipulation across different contexts.

That blinking fuel light can be modelled under C1 not only as a single problem, but a concept that can be evaluated, prioritised, and solved – or not – in a time-related fashion.
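As a loose illustration – not the researchers' actual model, and with all names and numbers invented – a C1-style global workspace can be sketched as a competition in which the most salient signal wins and is then broadcast to every subsystem at once:

```python
# Toy sketch of a C1-style "global workspace" (illustrative only;
# the salience scores and subsystem names are invented).

def select_for_broadcast(representations):
    """Pick the most salient representation to share globally."""
    return max(representations, key=lambda r: r["salience"])

# Competing unconscious (C0-style) signals, each with a salience score.
signals = [
    {"name": "radio song", "salience": 0.2},
    {"name": "fuel-tank light", "salience": 0.9},
    {"name": "billboard", "salience": 0.4},
]

# The winner enters the "workspace" and becomes available to many
# subsystems at once: planning, memory, speech, and so on.
broadcast = select_for_broadcast(signals)

for subsystem in ["planning", "memory", "speech"]:
    print(f"{subsystem} receives: {broadcast['name']}")
```

The point of the sketch is the broadcast step: once the fuel-tank light wins the competition, every subsystem can act on it, which is roughly what distinguishes C1 from an isolated C0 reflex.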

The final category of C2 is like a supervisor looking down from the mezzanine floor, aware of the tasks at hand. It covers what we call 'meta-cognition' – a sense of knowing what we know.

C1 can take place without C2, and vice versa. But according to the researchers, neither system has an equivalent in machine intelligence. Not yet at least.

The researchers speculate that C1 evolved as a way to break the modularity of unconscious processes.

Recent advances in microchips that can both store and process information, much as brain cells do, could potentially play a similar role in breaking the modularity of existing technology.

To put it to work, we'd need to learn more about how our own brains create their own global workspace – architecture that gives rise to what we think of as our awareness.

To develop C2 technology, the researchers suggest various mechanisms: processes that attach probabilities to decisions, and others that maintain some kind of meta-memory, drawing a line between what is known and what isn't.
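A minimal, purely illustrative sketch of that meta-memory idea – the paper proposes no specific algorithm, and the knowledge entries below are made up – is a system that returns not just an answer but a self-assessment of how much it trusts that answer, including admitting when it knows nothing at all:

```python
# Toy sketch of C2-style meta-cognition (illustrative only).

def answer_with_confidence(question, knowledge):
    """Return an answer plus a self-assessment of its reliability."""
    if question in knowledge:
        fact, confidence = knowledge[question]
        return fact, confidence
    # Meta-memory: the system knows that it does not know.
    return None, 0.0

knowledge = {
    "capital of France": ("Paris", 0.99),
    "weather tomorrow": ("rain", 0.55),  # known, but uncertain
}

print(answer_with_confidence("capital of France", knowledge))
print(answer_with_confidence("meaning of life", knowledge))
```

The second call is the C2-flavoured part: rather than guessing, the system reports its own ignorance, which is the "sense of knowing what we know" the researchers describe.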

While the report doesn't provide a clear blueprint for next-generation AI, it does argue that it's entirely possible to build conscious-like machines based on our own mental hardware.

We might have to wait a little longer for those murderous replicants, it seems.

But if this research is anything to go by, they're still on their way. 

This research was published in Science.