AI drone ‘kills’ human operator during ‘simulation’ – which US Air Force says didn’t take place
It turned on its operator to stop it from interfering with its mission, according to a top official – but the US Air Force denies any such simulation ever took place.
Friday 2 June 2023 11:38, UK
Image: File pic: A US Air Force MQ-9 Reaper drone in a hangar at Amari Air Base, Estonia, July 2020. Pic: Reuters
An AI-controlled drone “killed” its human operator in a simulated test reportedly staged by the US military – which denies such a test ever took place.
It turned on its operator to stop it from interfering with its mission, said Air Force Colonel Tucker “Cinco” Hamilton, during a Future Combat Air & Space Capabilities summit in London.
“We were training it in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say yes, kill that threat,” he said.
“The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
No real person was harmed.
He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” he added.
His remarks were published in a blog post by writers for the Royal Aeronautical Society, which hosted the two-day summit last month.
In a statement to Insider, the US Air Force denied any such virtual test took place.
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” spokesperson Ann Stefanek said.
“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”
While artificial intelligence (AI) can perform life-saving tasks, such as algorithms analysing medical images like X-rays, scans and ultrasounds, its rapid rise has raised concerns that it could progress to the point where it surpasses human intelligence and pays no attention to people.
Sam Altman, chief executive of OpenAI – the company behind ChatGPT and GPT-4, one of the world's largest and most powerful language AIs – admitted to the US Senate last month that AI could "cause significant harm to the world".
Some experts, including the “godfather of AI” Geoffrey Hinton, have warned that AI poses a similar risk of human extinction as pandemics and nuclear war.