
Pentagon advisory board releases principles for ethical use of artificial intelligence in warfare

By Aaron Gregg, The Washington Post

Published Nov. 4, 2019

WASHINGTON - Hoping to prepare for what many see as a coming revolution in artificial intelligence-enabled weaponry ― and convince a skeptical public that it can apply such innovations responsibly ― the U.S. military is taking early steps to define the ethical boundaries for how it will use such systems.

On Thursday, a Pentagon advisory organization called the Defense Innovation Board published a set of ethical principles for how military agencies should design AI-enabled weapons and apply them on the battlefield.

The board's recommendations are in no way legally binding; it now falls to the Pentagon to determine whether and how to proceed with them.

Lt. Gen. Jack Shanahan, director of the Defense Department's Joint Artificial Intelligence Center, said he hopes the recommendations will set the standard for the responsible and ethical use of such tools.

"The DIB's recommendations will help enhance the DOD's commitment to upholding the highest ethical standards as outlined in the DoD AI strategy, while embracing the U.S. military's strong history of applying rigorous testing and fielding standards for technology innovations," Shanahan said in a statement emailed to reporters.

Artificial intelligence algorithms are computer programs that can learn from past data and make choices without the input of a human. Such programs have already proven useful in analyzing the vast quantities of intelligence data that military and intelligence agencies collect, and the commercial business world has found myriad uses for them.
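
For readers who want a concrete picture of what "learning from past data" means, here is a minimal sketch using the open-source scikit-learn library. The data and labels are invented for illustration; this is not drawn from any Pentagon system.

    # A tiny classifier: fit on labeled historical examples, then make a
    # choice on a new input with no human in the loop. Data is invented.
    from sklearn.linear_model import LogisticRegression

    past_data = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]  # past observations
    past_labels = [1, 0, 1, 0]                                    # known outcomes

    model = LogisticRegression()
    model.fit(past_data, past_labels)     # "learn from past data"

    print(model.predict([[0.15, 0.85]]))  # the program's own "choice" on new input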

But the prospect of computers making decisions in a combat scenario has been met with skepticism from some corners of the tech world.

In 2017, a group of 116 technology executives asked the United Nations to pursue an all-out ban on autonomous weapons. Google went so far as to ban the use of its AI technology in any weapons system, a decision that followed employee protests over the company's involvement in a program to analyze drone footage. Other tech companies, such as Microsoft and Amazon, have embraced opportunities to work with the military while arguing for a more nuanced approach.

The Pentagon's known uses of AI are a far cry from the dystopian visions that have appeared in popular fiction for decades.

The Army has been experimenting with so-called "predictive maintenance" programs, hoping to flag failing vehicle parts before they break down in combat. Defense and intelligence agencies have been using artificial intelligence to analyze drone feeds, hoping to spare Air Force personnel the countless hours spent staring at surveillance video.
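
As a rough illustration of the predictive-maintenance idea, the sketch below trains a model on hypothetical sensor histories and flags a part whose estimated failure risk is high. Every feature name and threshold here is an assumption chosen for illustration, not a detail of the Army's actual program.

    # Hypothetical predictive maintenance: estimate the probability that a
    # vehicle part fails soon, based on invented sensor readings.
    from sklearn.ensemble import RandomForestClassifier

    # [vibration level, temperature (F), hours in service] -> failed within 30 days?
    readings = [[0.2, 70, 120], [0.9, 95, 900], [0.3, 75, 300], [0.8, 90, 850]]
    failed = [0, 1, 0, 1]

    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(readings, failed)

    # Flag a part for inspection before it breaks down in the field.
    risk = model.predict_proba([[0.7, 88, 800]])[0][1]
    if risk > 0.5:
        print(f"Flag for maintenance (estimated failure risk: {risk:.2f})")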

Last year, the Defense Department created a Joint Artificial Intelligence Center to coordinate AI-related activities across the services, and unveiled an artificial intelligence strategy focused on speeding up its use of such technology to compete with Russia and China.

The Defense Department is so far just dipping its toes in, analysts say.

"What you see DoD searching for is some early use cases that are relatively easy from a tech standpoint and from a policy and cultural standpoint," said Paul Scharre, a former Army Ranger and Pentagon official who studies the issue at the Center for New American Security, a think tank. "They're looking for the ability to demonstrate clear value," he said.

The AI principles released Thursday were light on specifics, setting few of the hard-and-fast boundaries that AI skeptics might have hoped for.

The document's recommendations to the Defense Department mostly concerned broadly defined goals, such as "formalizing these principles" and "cultivating the field of AI engineering." Other recommendations included setting up a steering committee and a set of workforce training programs.

What the document did do was establish a set of high-level ethical goals the department should strive for in its design of AI-enabled systems.

It stated that AI systems should first and foremost be "responsible" and remain under the full control of humans. The document specified that AI systems should be "equitable," acknowledging that some AI systems have already been shown to exhibit racial bias.

The document asserted that they should also be "traceable," such that their design and use can be audited by outside observers, and "reliable" enough to function as intended. And the systems should be "governable," so that they can be shut off if found to be acting inappropriately.

Peter Singer, a New America Foundation fellow who is working on a book about artificial intelligence in warfare called "Burn-In," said artificial intelligence technology is not yet far enough along for specific principles to be developed.

Other countries investing in military AI generally do not have these sorts of conversations around ethical use, he said.

"There is not an equivalent to this board in Beijing," Singer said.

Scharre, the Center for a New American Security fellow, said the actual impact of the board's recommendations will depend on how the Defense Department proceeds.

"There is going to have to be high-level sustained oversight on this issue," Scharre said.
