Military Drone Attacks Human Operator During Simulation

0
52

Military groups are only some of many organizations researching artificial intelligence, but one astounding simulation by the United States Air Force found that artificial intelligence rebelled against its operator in a fatal attack to accomplish its mission.

Artificial intelligence continues to evolve and impact every sector of business, and it was a popular topic of conversation during the Future Combat Air & Space Capabilities Summit at the Royal Aeronautical Society (RAS) headquarters in London on May 23 and May 24. According to a report by the RAS, presentations discussing the use of AI in defense abounded.

AI is already prevalent in the U.S. military, such as the use of drones that can recognize the faces of targets, and it poses an attractive opportunity to effectively carry out missions without risking the lives of troops. However, during the conference, one United States Air Force (USAF) colonel showed the unreliability of artificial intelligence in a simulation where an AI drone rebelled and killed its operator because the operator was interfering with the AI’s mission of destroying surface-to-air missiles.

American soldier working on a simulation in headquarters, Stock Image. The United States Air Force recently gave a presentation about a simulation in which an artificial intelligence drone turned on its human operator when the operator interrupted its mission.
Smederevac/Getty

Colonel Tucker ‘Cinco’ Hamilton, chief of AI Test and Operations at the USAF, shared details of a simulated test in which an AI-enabled drone was given a Suppression and Destruction of Enemy Air Defenses (SEAD) mission. The drone was programmed to seek out and destroy surface-to-air missile (SAM) sites, but final approval was issued by the drone’s human operator.

However, when the human operator denied the AI’s request to destroy a site, the AI attacked the operator because the operator’s decision interfered with its mission of eliminating SAM sites.

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat,” Hamilton said during the conference. “The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

When additional programming informed the AI that it would lose points if it killed its operator, it opted for other avenues of rebellion instead.

“It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Hamilton said.

Hamilton used the simulation as a message that the military cannot have a conversation about artificial intelligence without also discussing ethics.

Newsweek reached out to RAS by email for comment.

LEAVE A REPLY

Please enter your comment!
Please enter your name here