Humans have made some huge and sometimes startling strides in artificial intelligence over the past few years. 

But when it comes to the idea of these machines getting even smarter - and potentially more self-aware - we spend most of our time worrying about the threat robots pose to humanity.

And while that's not a risk we should ignore, this new video from Kurzgesagt - In a Nutshell puts forward an even more unnerving prospect - what if robots need protection from us? 

The video has already gone viral, with more than a million views in less than a day - and we think it's pretty essential viewing for anyone interested in AI.

Of course, it's not the first time this argument has been brought up. Just last year, an Oxford mathematician argued that artificial intelligence should be protected by human rights.

And there's no shortage of sci-fi films and books that show what can go wrong when we treat these machines like they have no rights (hello, A.I.).

But the reality is that we're not even close to dealing with the prospect of robot rights in the future - and that's a real problem.

As the Kurzgesagt team explains, most of the existing philosophy of rights is ill-equipped to deal with AI, because it's centred around the question of consciousness. And scientists still can't agree on what consciousness actually is.

Some researchers think it's a state of matter, like gas or liquid; some say that it's a product of our brains. Others say that it's immaterial.

With no agreed definition to work from, there are neuroscientists out there who think that any sufficiently advanced system can generate consciousness and become self-aware - and based on some tests, that might have already happened.

So does that automatically mean that any robot that achieves self-awareness deserves rights?

Well, not exactly. Right now, consciousness makes us worthy of having rights largely because it gives us the ability to suffer - not only to feel pain, but to be aware of it.

Most of our rights are set up to avoid pain and suffering according to how we experience them as humans - and it's unlikely robots will experience them in the same way unless we program them to do so. So according to that definition, a robot probably wouldn't deserve rights.

But then there are other, less tangible rights - like the right to freedom. Would a robot that can't move mind being trapped in a cage?

And would a robot mind being dismantled if it had no fear of death? Would it mind being insulted if it had no need for self-esteem?

It's pretty obvious that things are already getting murky, and we haven't even touched on the idea that AI might one day be able to create AI of its own.

But perhaps more important to consider - based on humanity's long history of abusing other people and species we think are 'less human' than us - is just how much damage we can do to artificial intelligence, and how to limit it.

You don't need to spend long in the history books to know that, throughout civilisation, certain groups have exploited others for their own benefit - and they've never struggled to come up with ideological excuses for that abuse.

Seeing as a lot of people stand to gain huge amounts of money and power by keeping robots and AI without rights, the video above argues that it's not unlikely the same thing will happen with robots.

Last year, Stephen Hawking seemed to agree when he called the history of humanity "the history of stupidity" because we can't help but make the same mistakes over and over again.

So, do we have enough time up our sleeves to properly consider robot rights and implement them before we get things really wrong? Probably not… but that doesn't mean we shouldn't try anyway.

Check out the video above to have your mind melted by the philosophical and ethical decisions we have coming up. For once, we don't envy our future selves, even if they might eventually get to ride around in flying cars.