The United States launched an initiative Thursday promoting international cooperation on the responsible use of artificial intelligence and autonomous weapons by militaries, seeking to impose order on an emerging technology that has the potential to change the way war is waged.

“As a rapidly changing technology, we have an obligation to create strong norms of responsible behavior concerning military uses of AI and in a way that keeps in mind that applications of AI by militaries will undoubtedly change in the coming years,” Bonnie Jenkins, the State Department’s under secretary for arms control and international security, said.

She said the U.S. political declaration, which contains non-legally binding guidelines outlining best practices for responsible military use of AI, “can be a focal point for international cooperation.”

Jenkins launched the declaration at the end of a two-day conference in The Hague that took on additional urgency as advances in drone technology amid Russia's war in Ukraine have accelerated a trend that could soon bring the world's first fully autonomous fighting robots to the battlefield.

The U.S. declaration has 12 points, including that military uses of AI be consistent with international law, and that states “maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.”

Zachary Kallenborn, a George Mason University weapons innovation analyst who attended the Hague conference, said the U.S. move to take its approach to the international stage “recognizes that there are these concerns about autonomous weapons. That is significant in and of itself.”

Kallenborn said it was also important that Washington included a call for human control over nuclear weapons “because when it comes to autonomous weapons risk, I think that is easily the highest risk you possibly have.”

Underscoring the sense of international urgency around AI and autonomous weapons, 60 nations, including the U.S. and China, issued a call for action at the Hague conference urging broad cooperation in the development and responsible military use of artificial intelligence.

“We are in time to mitigate risks and to prevent AI from spiraling out of control, and we are in time to prevent AI from taking us to a place we simply don’t want to be,” Dutch Foreign Minister Wopke Hoekstra said.

The call to action issued in the Netherlands underscored “the importance of ensuring appropriate safeguards and human oversight of the use of AI systems, bearing in mind human limitations due to constraints in time and capacities.”

The participating nations also invited countries “to develop national frameworks, strategies and principles on responsible AI in the military domain.”

Military analysts and artificial intelligence researchers say the longer the nearly year-long war in Ukraine lasts, the more likely it becomes that drones will be used to identify, select and attack targets without help from humans.

Ukraine’s digital transformation minister, Mykhailo Fedorov, told The Associated Press in a recent interview that fully autonomous killer drones are “a logical and inevitable next step” in weapons development. He said Ukraine has been doing “a lot of R&D in this direction.”

Ukraine already has semi-autonomous attack drones and AI-enabled counter-drone weapons. Russia also claims to possess AI weaponry, though the claims are unproven. But there are no confirmed instances of a nation fielding robots in combat that have killed entirely on their own.

Russia was not invited to attend the conference in The Hague.

China’s ambassador to the Netherlands, Tan Jian, did attend and said Beijing has submitted two papers to the United Nations on regulating military AI applications. The issue “concerns the common security and the well-being of mankind, which requires the united response of all countries,” he said.
