We're on the verge of creating autonomous weapons that can kill without any help from humans. Thousands of experts are concerned about this, and the latest campaign effort against the technology is a chilling video demonstrating the kind of future we could be heading for.

In the Slaughterbots short, which we've embedded below, swarms of AI-controlled drones carry out strikes on thousands of unprepared victims with targeted precision. What makes the clip so scary is that the scenario is entirely plausible.

The video starts with a spectacular press event where the technology is unveiled for the first time. The miniature drones are able to take out "the bad guys" – whoever they happen to be – without any collateral damage, nuclear weapons, or troops on the ground.

All the drone bots need is a profile: age, sex, fitness, uniform, and ethnicity.

Despite the applause the tiny drones receive at their unveiling, the tech behind them soon falls into the wrong hands, as it tends to do. Before long an attack is launched on the United States Capitol building, targeting only politicians from one particular party.

The same bots are then used by an unknown group to take out thousands of students worldwide, all of whom shared the same human rights video on social media.

"The weapons took away the expense, the danger, and risk of waging war," says one of the talking heads in the clip, admitting that "anyone" can now carry out such a strike.

Thankfully this creepy video isn't real life – at least not yet.

It was published by the Campaign to Stop Killer Robots, an international coalition pushing for a ban on autonomous weapons, and was shown this week at a meeting of the UN Convention on Certain Conventional Weapons.

The group wants the UN to pass legislation prohibiting the development of this kind of AI technology and the large-scale manufacture of the associated hardware. Such legislation could also be used to police anyone who tried to develop these kinds of systems.

Worryingly, all of the component technologies already exist, according to one of the experts behind the video, computer scientist Stuart Russell from the University of California, Berkeley – the only remaining step is for someone to miniaturise and combine them.

"I've worked in AI for more than 35 years," says Russell in the video. "Its potential to benefit humanity is enormous, even in defence, but allowing machines to choose to kill humans will be devastating to our security and freedom."

"Thousands of my fellow researchers agree. We have that opportunity to prevent the future you just saw, but the window to act is closing fast."

Experts including Elon Musk and Stephen Hawking have also warned about the rapid development of AI and its use in weapons.

Computer systems can now pilot drones on their own and recognise faces faster than human beings can. If they were also allowed to pull the trigger on a weapon without any human approval, scientists say, wars would be fought at a speed, and with a loss of life, far greater than anything we've ever seen before.

Let's hope that this Slaughterbots video, and other initiatives to curb the development of AI-powered weaponry, prove enough to put a stop to this particular area of research.

Noel Sharkey, AI professor at the University of Sheffield in the UK and chair of the International Committee for Robot Arms Control, has been warning about the dangers of autonomous weapons for a decade.

"It will only take one major war to unleash these new weapons with tragic humanitarian consequences and destabilisation of global security," he told The Guardian.

You can find out more about efforts to support a ban on the Campaign to Stop Killer Robots website.