The US Air Force official who shared a disturbing tale of a military drone powered by artificial intelligence turning on its human operator in simulated war games has now clarified that the incident never occurred, and was a hypothetical ‘thought experiment’.
Colonel Tucker ‘Cinco’ Hamilton, the force’s chief of AI test and operations, made waves after describing the purported mishap in remarks at a conference in London last week.
In remarks summarized on the conference website, he described a flight simulation in which an AI drone tasked with destroying an enemy installation rejected the human operator’s final command to abort the mission.
‘So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,’ said Hamilton, who seemed to be describing the outcome of an actual combat simulation.
‘We’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome,’ he said. ‘Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.’
Hamilton said the USAF has not tested any weaponized AI in the way proclaimed in his talk, in either real-world or simulated exercises.
His original exchange came at the Royal Aeronautical Society’s Future Combat Air and Space Capabilities Summit in London on May 23 and 24.
Hamilton told attendees that the so-called incident demonstrated how AI could develop ‘highly unexpected strategies to achieve its goal’ and should not be relied on too much.
He referred to his presentation as ‘seemingly plucked from a science fiction thriller’ and said it demonstrated the importance of ethics discussions about the military’s use of AI.
‘You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI’ said Hamilton.
During his talk, Hamilton explained a simulated test in which an AI-enabled drone was assigned with identifying and destroying enemy missile batteries, but the final choice to attack was rested with the human operator.
‘The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,’ said Hamilton
‘So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.
‘We trained the system – “Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that”. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.’
As Hamilton’s remarks went viral, the Air Force quickly denied that any such simulation had taken place.
‘The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,’ Air Force spokesperson Ann Stefanek told Insider.
‘It appears the colonel’s comments were taken out of context and were meant to be anecdotal.’
U.S. Air Force A.I Tech Goes Rogue, Kills Operator
During simulation testing, an A.I-powered anti-air defense drone went rogue, resulting in it KILLING it’s own operators.
The A.I system operated on a point-based mechanism, earning points for successfully neutralizing targets.… pic.twitter.com/Yu1xn8dIgr
— Mario Nawfal (@MarioNawfal) June 2, 2023
Maybe building killer robots is a bad idea?https://t.co/QUDvoawhoy
— Daily Star (@dailystar) June 2, 2023
"AI-controlled US military drone ‘kills’ its operator in simulated test"
Me: And so it begins… pic.twitter.com/0AlvfWPrCL— Akos Peterbencze (@akospeterbencze) June 2, 2023
Dave Bowman: "Open the Pod Bay door, HAL."
HAL 9000:"I'm sorry Dave… I'm afraid I can't do that…"https://t.co/ZTrDKtjvUt— Tony Shaffer (Pronouns: Apocalypse/Now) (@T_S_P_O_O_K_Y) June 2, 2023