Elon Musk has joined more than 100 leaders and experts in artificial intelligence (AI) in urging the UN to commit to an outright ban on killer robot technology.

An open letter signed by Musk, Google DeepMind's Mustafa Suleyman, and 114 other AI and robotics specialists urges the UN to prevent "the third revolution in warfare" by banning the development of all lethal autonomous weapon systems.

The open letter, released to coincide with the world's largest conference on AI – IJCAI 2017, taking place in Melbourne, Australia this week – warns of a near future in which autonomous machines will be able to select and engage their own targets, including innocent humans as well as enemy combatants.

"Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend," the consortium writes.

"These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."

It's not the first time Musk and other like-minded figures have united to draw attention to the threat autonomous weapons pose to humanity.

The SpaceX and Tesla chief is also behind OpenAI, a nonprofit devoted to advancing ethical AI research.

But despite the concerns AI experts are voicing, ongoing delays in enacting an effective ban on autonomous weapons have led some to fear that the danger may soon outpace regulation, especially given how rapidly AI systems are developing.

"We do not have long to act," the open letter reads. "Once this Pandora's box is opened, it will be hard to close."

The "third revolution" to which the campaigners refer positions killer robots as a kind of technological successor to the historical developments of gunpowder and nuclear weaponry – innovations that haven't exactly improved the world we live in.

While the new letter isn't the first time experts have used IJCAI as a platform to make their point, it is the first time that representatives of AI and robotics companies – from some 26 countries – have taken a joint stand on the issue, joining the ranks of prominent earlier signatories such as Stephen Hawking, Noam Chomsky, and Apple co-founder Steve Wozniak.

"The number of prominent companies and individuals who have signed this letter reinforces our warning that this is not a hypothetical scenario, but a very real, very pressing concern which needs immediate action," says the founder of Clearpath Robotics, Ryan Gariepy.

"We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now."

That last point is worth emphasising. While the ultimate nightmare of autonomous weapon systems might be a Terminator-style future populated with T-800s, the reality is that AI-based killing machines are already a thing.

Autonomous or semi-autonomous capability is increasingly being built into weapons like the Samsung SGR-A1 sentry gun, the BAE Systems Taranis drone, and DARPA's submarine-hunting Sea Hunter vessel.

In other words, the technological seeds of tomorrow's killer robots already exist on land, at sea, and in the air – and effective laws to regulate these lethal machines (and the industry that's hell-bent on making them) have yet to be written.

Well, there's no time like the present.

"Nearly every technology can be used for good and bad, and artificial intelligence is no different," says AI researcher Toby Walsh from Australia's UNSW, one of the organisers of IJCAI 2017.

"It can help tackle many of the pressing problems facing society today… [h]owever, the same technology can also be used in autonomous weapons to industrialise war. We need to make decisions today choosing which of these futures we want."

UNSW Science is a sponsor of ScienceAlert.