Update 9 April 2018: Less than a week after the original letter, experts have ended their boycott of KAIST after deeming university president Sung-Chul Shin's response satisfactory.

"It goes to show the power of the scientific community when we choose to speak out – our action was an overnight success," said AI researcher and boycott organiser Toby Walsh from UNSW.

"I was very pleased that the president of KAIST has agreed not to develop lethal autonomous weapons, and to follow international norms by ensuring meaningful human control of any AI-based weapon that will be developed."

Original: While the thought of having robot companions to take out the rubbish and cut the grass is an appealing one, the same bots could also be used for more violent purposes – and an argument over where to draw that line is causing a huge schism in the AI community.

Now, more than 50 of the world's leading artificial intelligence experts are boycotting all contact with the prestigious Korea Advanced Institute of Science and Technology (KAIST) over its involvement in the development of AI-powered weaponry.

KAIST has signed an agreement with arms company Hanwha Systems, already known to be developing devastating weapons called cluster munitions, in contravention of a UN ban on these types of wide-reaching explosive devices.

With added AI, they could be even more lethal.

"At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like KAIST looks to accelerate the arms race to develop such weapons," the AI researchers write in an open letter.

"We therefore publicly declare that we will boycott all collaborations with any part of KAIST until such time as the President of KAIST provides assurances, which we have sought but not received, that the Center will not develop autonomous weapons lacking meaningful human control."

KAIST is one of the world's leading robotics and science research institutions: researchers there have developed everything from liquid batteries to disease sensors.

But it's the institute's links to the potential development of killing machines that are causing such a stir.

For its part, KAIST denies it has plans to develop autonomous weapons with Hanwha Systems.

"As an academic institution, we value human rights and ethical standards to a very high degree," KAIST president Sung-Chul Shin said in a statement, Times Higher Education reports.

"KAIST will not conduct any research activities counter to human dignity, including autonomous weapons lacking meaningful human control."

With hardware and software developing so fast – whether in the smart speaker in your home or the weapons used by the military – many experts are worried that we're going to develop something we can't control sooner rather than later.

Elon Musk, Steve Wozniak, and the late Stephen Hawking are among the high-profile figures who have called on governments and the United Nations to put restrictions on weapons controlled by AI systems.

If we get to the stage where missiles, rockets, and bombs are operating independently of human control, the consequences could be terrifying, especially in the wrong hands.

"If developed, autonomous weapons will be the third revolution in warfare," continues the open letter, signed by researchers from 30 different countries. "They will permit war to be fought faster and at a scale greater than ever before."

"They have the potential to be weapons of terror. Despots and terrorists could use them against innocent populations, removing any ethical restraints. This Pandora's box will be hard to close if it is opened."

United Nations member states are meeting this Monday, 9 April, to discuss a ban on lethal autonomous weapons that can kill without any intervention from a human operator.

It's not the first time such a session has been called, but so far agreement has been in short supply.

One of the problems is that individual countries are keen to develop this sort of tech for their own military ends, while private companies don't want to be shackled with regulations as they develop their own AI systems.

While we can all probably agree that killer robots are bad, deciding what to do about their development has proved difficult.

Perhaps the new weapons research at KAIST will push the key parties involved towards an agreement.

Many scientists now want to see guidelines laid down for how AI can be developed and deployed. Even then, though, making sure everyone sticks to the script could be tricky.

"We are locked into an arms race that no one wants to happen," says Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales (UNSW) in Australia, who organised the boycott.

"KAIST's actions will only accelerate this arms race. We cannot tolerate this."

"I am hopeful that this boycott will add urgency to the discussions at the UN that start on Monday," adds Walsh.

"It sends a clear message that the AI & Robotics community do not support the development of autonomous weapons. Any other university planning to open a lab in this space needs to think again."

Let's hope Monday's meeting results in some ground rules that can ensure artificial intelligence is used to benefit humanity, not help destroy it.

Here's the open letter in full:

As researchers and engineers working on artificial intelligence and robotics, we are greatly concerned by the opening of a "Research Center for the Convergence of National Defense and Artificial Intelligence" at KAIST in collaboration with Hanwha Systems, South Korea's leading arms company. It has been reported that the goals of this Center are to "develop artificial intelligence (AI) technologies to be applied to military weapons, joining the global competition to develop autonomous arms."

At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like KAIST looks to accelerate the arms race to develop such weapons. We therefore publicly declare that we will boycott all collaborations with any part of KAIST until such time as the President of KAIST provides assurances, which we have sought but not received, that the Center will not develop autonomous weapons lacking meaningful human control. We will, for example, not visit KAIST, host visitors from KAIST, or contribute to any research project involving KAIST.

If developed, autonomous weapons will be the third revolution in warfare. They will permit war to be fought faster and at a scale greater than ever before. They have the potential to be weapons of terror. Despots and terrorists could use them against innocent populations, removing any ethical restraints. This Pandora's box will be hard to close if it is opened. As with other technologies banned in the past like blinding lasers, we can simply decide not to develop them. We urge KAIST to follow this path, and work instead on uses of AI to improve and not harm human lives.