(Image: US Air Force)

Thousands of Google Employees Are Worried Their Company Will Help Create Autonomous Weapons

"Don't Be Evil."

CARLY CASSELLA
17 MAY 2018

In 2018, Google signed a contract for Project Maven, a partnership with the US Department of Defense that seeks to improve the analysis of drone footage using artificial intelligence (AI).

The first assignment for Project Maven, also known as the Algorithmic Warfare Cross-Functional Team (AWCFT), is to analyze the intimidating and ever-growing pile of drone footage – a collection so great that no human would be able to sift through it all.

If the project is successful, theoretically, it could pave the way for automated target recognition and autonomous weapon systems that require very little human supervision.

But while the company's board members are excited about the new partnership, many Google employees are not.

For months, over 4,000 Google employees have passionately protested the new contract, and now about a dozen workers have resigned from the leading tech company.

An internal petition at Google argues that the company "should not be in the business of war."

Diane Greene, a member of Google's board of directors, has assured employees that the technology will not "operate or fly drones" or launch weapons. Still, many employees are not convinced that changes anything.

"While this eliminates a narrow set of direct applications, the technology is being built for the military, and once it's delivered it could easily be used to assist in these tasks," the petition reads.

The protestors believe the new contract undermines Google's corporate code of conduct: "Don't Be Evil."

"We cannot outsource the moral responsibility of our technologies to third parties," the letter continues.

"Google's stated values make this clear: Every one of our users is trusting us. Never jeopardize that. Ever."

Since the terrorist attacks of 9/11, the US government has carried out drone strikes in Pakistan, Yemen, Somalia, Afghanistan and Libya, killing many innocent people along the way.

In 2014, an analysis from the human rights group Reprieve found that drone operators attempting targeted killings often kill far more people than their intended targets, and frequently have to strike multiple times.

For instance, as of November 2014, attempts to kill 41 men had resulted in the deaths of 1,147 people.

Nevertheless, government officials continue to describe these weapons as "clinical" and "precise."

The deadly technology is controversial to be sure, and it isn't just Google employees who have ethical concerns about the new partnership.

Scholars, academics and researchers from the International Committee for Robot Arms Control (ICRAC) have also expressed their solidarity with the concerned Google employees.

"With Project Maven, Google becomes implicated in the questionable practice of targeted killings," their open letter reads.

"These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage."

Importantly, the ICRAC letter also insists on the creation of industry-wide ethical standards for AI, which currently do not exist.

With the DoD just "a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control," the lack of ethical standards creates a worrying blind spot.

"If ethical action on the part of tech companies requires consideration of who might benefit from a technology and who might be harmed, then we can say with certainty that no topic deserves more sober reflection – no technology has higher stakes – than algorithms meant to target and kill at a distance and without public accountability," the letter argues.

The concerns expressed here mirror the warnings that leading tech figures like Elon Musk have been issuing for years.

"I have exposure to the very cutting edge AI, and I think people should be really concerned about it," Musk said last year.

"AI is a rare case where we need to be proactive about regulation instead of reactive," he added.

"Because I think by the time we are reactive in AI regulation, it's too late."

The recent Cambridge Analytica scandal illustrates the current inadequacy of government regulation when it comes to new breakthroughs in information technology.

Nevertheless, the Trump administration has assured companies like Google and Facebook that it will not restrict AI development.

"We didn't cut the lines before Alexander Graham Bell made the first telephone call," Michael Kratsios, Trump's deputy chief technology officer, told a room of executives from 40 leading US companies.

"We didn't regulate flight before the Wright Brothers took off at Kitty Hawk."

A Google spokesperson told Gizmodo, however, that the company is currently working "to develop policies and safeguards" around AI use.

"The technology flags images for human review, and is for non-offensive uses only," the spokesperson promised.

Without government regulation, one can only hope that Google follows through on its promise to develop AI "with everyone's benefit in mind."

Science AF is ScienceAlert's new editorial section where we explore society's most complex problems using science, sanity and humor.