
US is moving toward letting AI weapons autonomously decide to kill humans


The Pentagon is said to be advocating for AI drones to independently make decisions on the battlefield, as reported by The New York Times.

Concerns rise as countries including the US, China, and Israel develop AI drones, or ‘killer robots’

Countries like the US, China, and Israel are actively developing lethal autonomous weapons that can use AI to select targets. Critics express concern about the prospect of “killer robots,” arguing that allowing machines to make life-and-death decisions without human input is a troubling development.

While some governments are urging the UN to establish a binding resolution to limit the use of AI killer drones, the US, along with nations like Russia, Australia, and Israel, is resisting such a move. Instead, they prefer a non-binding resolution, according to The Times.

“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, told The Times.

“What’s the role of human beings in the use of force — it’s an absolutely fundamental security issue, a legal issue and an ethical issue.”

The Pentagon is actively pursuing the deployment of large groups of AI-enabled drones, as revealed in a notice published earlier this year, Business Insider reported.

In an August speech, US Deputy Secretary of Defense Kathleen Hicks highlighted that the use of technology like AI-controlled drone swarms could help the US counterbalance China’s People’s Liberation Army’s (PLA) numerical advantage in both weapons and personnel.

“We’ll counter the PLA’s mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat,” she said, Reuters reported.

Air Force Secretary says AI weapons must be able to make lethal decisions under human supervision

Air Force Secretary Frank Kendall told The Times that AI drones must be capable of making lethal decisions, while remaining under human supervision.

“Individual decisions versus not doing individual decisions is the difference between winning and losing — and you’re not going to lose,” he said.

“I don’t think people we would be up against would do that, and it would give them a huge advantage if we put that limitation on ourselves.”

In October, New Scientist reported that Ukraine had deployed AI-controlled drones on the battlefield in its resistance against the Russian invasion. However, it remains unclear whether these drones have carried out attacks that caused human casualties.

The Pentagon did not immediately respond to a request for comment.


About the author

Brendan Byrne

While studying economics, Brendan found himself comfortably falling down the rabbit hole of restaurant work, ultimately opening a consulting business and working as a private wine buyer. On a whim, he moved to China, and in his first week, following a triumphant pub quiz victory, he found himself bleeding on the floor thanks to his own arrogance. The same man who put him there offered him a job lecturing for the University of Wales at various sister universities throughout the Middle Kingdom. While primarily lecturing in descriptive and comparative statistics, Brendan simultaneously earned an MSc in Banking and International Finance from the University of Wales-Bangor. He's presently doing something he hates: respecting French people. Well, two of them, his wife and her mother, in the lovely town of Antigua, Guatemala.
