
Pentagon says it created more hoops to jump through on AI weapons to calm fears it was 'building killer robots in the basement'

Unmanned aerial vehicles equipped with specialized software and sensors fly during the Technical Concept Experiment at Marine Corps Base Camp Pendleton, Calif., Jan. 19, 2024.
  • The Defense Department updated a decade-old directive on AI to include stricter review policies.
  • The update was partially to assure people that DoD wasn't "building killer robots in the basement," a senior official said.
  • The US and some of its adversaries are making rapid progress on AI weapons, with lots of controversy in the mix.

With lingering fear and confusion surrounding the future of artificial intelligence in weapons, the Pentagon recently updated a decade-old policy to assure people it isn't "building killer robots in the basement," according to a senior official.

The relatively new Department of Defense directive adds a stricter review process for developing and approving most autonomous weapons, a "good governance" move, the official said recently. But with AI booming in the military space, not just in the US but also in China and elsewhere, international pressure to check the technology's use and capabilities is growing.

Last year, DoD updated its directive on autonomy in weapons systems, which was originally published in 2012. The change was attributed to "the dramatic advances in technology happening all around us," Deputy Secretary of Defense Dr. Kathleen Hicks said, reflecting vast and rapid developments in AI over the last decade.

Perhaps the biggest change in the DoD directive on AI was the inclusion of a senior review policy, which requires that "autonomous weapon systems, including weapon systems with both autonomous and semi-autonomous modes of operation" be approved by a host of senior defense officials before development starts and once again before any technology is fielded.


Air Force Master Sgt. Dominic Garcia observes Atom the robot dog as teammates operate it via remote control during training at Barksdale Air Force Base, La., Nov. 6, 2023.

The revision of the directive adds stricter rules in line with some of DoD's more cautious language on developing AI in weapons systems. It also, according to Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities Dr. Michael C. Horowitz, helps clear up some of the confusion surrounding the Pentagon's intentions for AI in military applications.

At a panel on "The State of DoD AI and Autonomy Policy" hosted earlier this month by the Center for Strategic and International Studies' (CSIS) Wadhwani Center for AI and Advanced Technologies, Horowitz said the goal of revising the directive was "frankly mostly a good governance move."

"We had ended up in a situation where outside the department, the community of interest thought that DoD was maybe building killer robots in the basement," Horowitz explained.

"And inside the department," he continued, "there was a lot of confusion about what the directive actually said, with some actually thinking the directive prohibited the development of autonomous weapons with maybe particular characteristics or even in general."

So the new directive makes clear "what is and isn't allowed" regarding DoD's policy on AI weapon development. There may not be killer robots in the basement, but there is a place for AI that the military is actively pursuing.


An unmanned aerial vehicle delivers a payload to the ballistic missile submarine USS Henry M. Jackson around the Hawaiian Islands, Oct. 19, 2020, during an event designed to test and evaluate the tactics, techniques and procedures of U.S. Strategic Command's expeditionary logistics and enhance the readiness of strategic forces.

The directive is now clear that any autonomous weapons system that doesn't fit into a specific, exempted category — such as a technology that, as Horowitz said, is "protecting a US base from lots of simultaneous missile strikes" — has to go through a senior review process on top of all other testing and evaluation requirements.

According to the policy, the process includes multiple under secretaries of defense, as well as the Vice Chairman of the Joint Chiefs of Staff.

"It's a huge bureaucratic lift," Horowitz said.

The move reflects the US's growing interest and progress in developing AI weapons systems. Much has been learned from the ongoing war in Ukraine, where drones have become a dominant factor on the battlefield and autonomy is a key pursuit.

Stories of first-person-view drones chasing soldiers down or dropping payloads into unsuspecting tanks have highlighted the growing importance of skilled drone operators. These stories have often been coupled with concerns over electronic warfare capabilities, which can easily disrupt or jam drones.

These cases have raised questions about the role of autonomy in drones, especially when the connection between an operator and their drone can be cut so easily. There are also concerns about how involved a human operator needs to be in the process. And this discussion extends far beyond drones, especially as militaries like the US are already thinking bigger, about things like unmanned fighter jets, warships, and more.


Airmen watch a test of an unmanned ground vehicle at Tyndall Air Force Base, Fla., Nov. 10, 2020.

As the US and other major powers lead the way on artificial intelligence, they are also potentially ushering in a new era of warfare, one that has prompted worries from many nations, especially given concerns about "killer robots" and incidents like one a few years ago in which a drone reportedly went after a human target without instruction.

More recently, there were reports of Israel using AI to find Hamas targets, raising a number of questions about the role of AI in targeting decisions.

Last fall, debates arose in the United Nations over imposing restrictions on how and when AI can make decisions on the battlefield. Many officials expressed concern over whether AI would be able to decide to kill a human target without human authorization.

The US, Russia, Australia, and Israel all argued there was no need for new international regulations at the time, The New York Times reported. Other nations, however, had hoped to use the UN as a platform to propose restrictions and limit how autonomous weapons operate.

"Complacency does not seem to be an option anymore," Ambassador Khalil Hashmi of Pakistan said during a meeting at UN headquarters, according to The Times. "The window of opportunity to act is rapidly diminishing as we prepare for a technological breakout."

Pentagon restrictions aside, the military is pursuing autonomy and artificial intelligence. In August, for instance, Hicks said the department wants "to field attritable autonomous systems at scale of multiple thousands, in multiple domains, within the next 18-to-24 months," a staggering push to compete with China, which is also pursuing AI weapons and systems.

Read the original article on Business Insider