Reports of an AI drone that ‘killed’ its operator are pure fiction


Some AI stories are so bad they’d make a robot facepalm

Corona Borealis Studio/Shutterstock

News of an AI-controlled drone “killing” its operator sped around the world this week. In a story that could have been ripped from a sci-fi thriller, the hyper-motivated AI had been trained to destroy surface-to-air missiles only with approval from a human overseer – and when denied that approval, it turned on its handler.

Only, it’s no surprise that story sounds fictional – because it is. The story emerged from a report by the Royal Aeronautical Society describing a presentation by US Air Force (USAF) colonel Tucker Hamilton at a recent conference. That report noted the incident was only a simulation, in which there was no real drone and no real risk to any human – a fact missed by many attention-grabbing headlines.

Later, it emerged that even the simulation hadn’t taken place, with the USAF issuing a denial and the original report being updated to clarify that Hamilton “mis-spoke”. The apocalyptic scenario was nothing but a hypothetical thought experiment.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to the ethical and responsible use of AI technology. It appears the colonel’s comments were taken out of context and were meant to be anecdotal,” a USAF spokesperson told Insider. The USAF didn’t respond to New Scientist‘s request for interview before publication.

This story is just the latest in a string of dramatic tales told about AI that has at points neared hysteria. In March, Time magazine ran a comment piece by researcher Eliezer Yudkowsky in which he said that the most likely result of building a superhumanly smart AI is that “literally everyone on Earth will die”. Elon Musk said in April that AI has the potential to destroy civilisation, while a recent letter from AI researchers said the risk of extinction is so high that dealing with it should be a priority alongside pandemics and nuclear war.

Why do these narratives gain so much traction, and why are we so keen to believe them? “The notion of AI as an existential threat is being promulgated by AI experts, which lends authority to it,” says Joshua Hart at Union College in New York – though it’s worth noting that not all AI researchers share this view.

Beth Singler at the University of Zurich in Switzerland says that the media has an obvious incentive to publish such claims: “fear breeds clicks and shares”. But she says that people also have an innate desire to tell and hear scary stories. “AI seems initially to be science fiction, but it is also a horror story that we want to whisper around the campfire, and horror stories are thrilling and fascinating.”

One clear factor in the spread of these stories is a lack of understanding around AI. Despite many people having used ChatGPT to write a limerick or Midjourney to conjure up an image, few know how AI works under the hood. And while AI has been a familiar concept for decades, the reality is that the current crop of advanced models display capabilities that surprise experts, let alone laypeople.

“AI is very non-transparent to the public,” says Singler. “Wider education about the limitations of AI might help, but our love for apocalyptic horror stories might still win through.”