How AI Is Eroding the Norms of War

An unchecked autonomous arms race is eroding rules that distinguish civilians from combatants.

May 27, 2025
Guest Commentary

Since 2022, I have reported on Russia’s full-scale invasion of Ukraine, witnessing firsthand the rapid evolution of battlefield technology. Embedded with drone units, I have watched each side turn once-improvised tools into cutting-edge systems that dictate life and death.

In the early months of the war, Ukrainian soldiers relied on off-the-shelf drones for reconnaissance and support. As Russian forces developed countermeasures, the two sides entered a technological arms race. This cycle of innovation has transformed the battlefield, but it has also sparked a moral descent — a "race to the bottom" — in the rules of war.

In the effort to eke out an advantage, combatants are pushing ethical boundaries, eroding the norms of warfare. Troops disguise themselves in civilian clothing to evade drone detection, while autonomous targeting systems struggle to distinguish combatants from noncombatants. 

The evolution of automated drone combat in Ukraine should be a cautionary tale for the rest of the world about the future of warfare. 

The Rise of “Robots First” Warfare

In the Russo-Ukrainian War, drones now account for roughly 70%–80% of battlefield casualties.

Last month, Yaroslav, a drone pilot from Ukraine’s 110th Mechanized Brigade, told me, “Drones play a very big role on the battlefield,” adding, “More than anything else.” Their outsized combat role has ushered in the era of the cautious tank.

As both Russia and Ukraine scramble for short-term technological breakthroughs wherever they can, AI provides a path forward. One Russian commentator noted, “The airspace is not about individual drones — it is about swarm tactics, AI targeting and real-time combat.”

In response, Ukraine’s military is shifting toward a “robots first” strategy, according to Colonel Vadym Sukharevsky, the commander of Ukraine’s drone forces. This approach includes the recently announced “Avengers” AI platform, which Ukraine claims can identify targets in just 2.2 seconds.

Weapons evolution is also happening at sea, where Ukraine has created a "tech navy" of maritime drones, which have been used to shoot down Russian helicopters and fighter jets.

Roy Gardiner, a former Canadian Armed Forces officer who researches open-source weapons, told me that Ukrainian sea drones (like the Magura V5 and Sea Baby, each costing around $250,000) have sunk or damaged as much as one-third of Russia’s Black Sea Fleet.

According to Kateryna Bondar (a fellow at the Center for Strategic and International Studies’ Wadhwani AI Center), the goal of Ukraine’s strategic evolution is to remove fighters from direct combat and replace them with autonomous, crewless systems. This shift is driven by Ukraine’s “need to conserve a limited human force and overcome vulnerabilities such as fatigue, stress, and the limited capacity to process and fuse large amounts of data from various sources and sensors.”

Ukrainian General Valerii Zaluzhnyi emphasized this point: “The revolution in military technology based on unmanned systems and artificial intelligence has completely changed the nature of war and continues to evolve.” 

A Race to the Bottom in Norms

Yet even as Ukraine’s use of AI delivers tangible results, serious ethical and security concerns are emerging. Technological disruption nullifies previous constraints, creating new opportunities, and combatants facing mortal threats have powerful incentives to capitalize on them.

However, such advantages carry their own risks. While some believe AI has the potential to reduce civilian casualties by minimizing human error on the battlefield, its effectiveness is only as strong as the data it is trained on and the norms shaping how it is used. Today’s AI systems struggle to adapt to novel or ambiguous situations, and hallucinations (false or misleading AI-generated outputs) remain a persistent challenge.
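
The data-dependence problem is easy to demonstrate in miniature. The sketch below is a toy Python example using scikit-learn on synthetic data; the "civilian" and "military" labels are invented stand-ins for illustration, not a description of any fielded system. It trains a classifier on two narrow clusters, then shows the model confidently labeling an input unlike anything it has ever seen:

```python
# Toy sketch: a model's reliability is bounded by its training data.
# Synthetic 2D features stand in for sensor data; all labels and
# values are hypothetical. Requires numpy and scikit-learn.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Training data: two tight, well-separated clusters, standing in for
# "civilian vehicle" (0) and "military vehicle" (1) feature vectors.
civilian = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2))
military = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(200, 2))
X = np.vstack([civilian, military])
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression().fit(X, y)

# An out-of-distribution input, far from both training clusters
# (think: a vehicle type the model has never encountered).
novel = np.array([[10.0, -4.0]])
p_civ, p_mil = model.predict_proba(novel)[0]
print(f"P(civilian) = {p_civ:.3f}, P(military) = {p_mil:.3f}")
# The model reports high confidence for one label despite having
# no evidence about inputs in this region at all.
```

Standard classifiers have no built-in notion of "I have never seen this before"; on a battlefield full of disguises and improvisation, that gap is exactly where misidentification lives.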

Although human operators currently authorize strikes, the rapid advancement of AI technologies creates strong incentives toward a future of autonomous combat operations. 

In an interview with GZERO this spring, Bondar said, “Ukraine doesn’t put ethical questions as the first priority, and for this exact reason, they don’t have any regulation limiting defense and military applications of AI because what they currently need is something very efficient that can kill Russians.”

“There is only a white paper, released by the Minister of Digital Transformation — it doesn’t have any law power, and it’s just kind of them sharing their vision,” Bondar told GZERO. She added: “It says we aim not to limit military AI and we want to comply with international law and regulations, which also has a lot of contradictions.” 

The incentives driving Ukraine toward a “robots-first” strategy were already visible in 2023, when I spent time with a drone unit on the front lines. A pilot with the callsign Lexus, from the special unit "Kondor" of the 1st Presidential Brigade within Ukraine’s National Guard, accurately predicted a race for autonomous systems. He compared it to the Cold War, when the United States and the Soviet Union were locked in a nuclear arms race.

Lexus also predicted that the increasing use of drone warfare would result in Russian soldiers deploying “civilian vehicles to escape drone detection, putting civilians at risk of being killed.” 

Ihor, a drone pilot from Ukraine’s 23rd Mechanized Brigade, told me that Russian forces had attacked their positions by disguising themselves in Ukrainian uniforms. Russian troops have also used dozens of civilians as human shields and conducted operations using civilian vehicles or while dressed in civilian clothing. 

In an April 2025 article, Mick Ryan, a retired Australian army major-general, wrote: “In urban areas, the Russians often use civilian-clothed (and sometimes unarmed) personnel as scouts and forward reconnaissance.”

Images reported in Ukrainian media show Russian forces assaulting Ukrainian positions using civilian vehicles equipped with anti-drone defenses. Source: Euromaidan / Militarnyi

These tactics complicate AI systems’ ability to distinguish between legitimate military targets and noncombatants.

Russia has also increasingly been sending wounded soldiers, some still on crutches or in wheelchairs, back into combat, a practice confirmed by drone footage. This is likely due to a desperate personnel shortage, rigid bureaucracy requiring units to meet quotas, and attempts to conceal casualty figures through fraudulent reporting. Some wounded have reportedly been used as “meat probes,” revealing Ukrainian positions by drawing fire.  

As autonomous systems accelerate target identification and engagement, combatants will face increasing pressure to send wounded soldiers forward as sacrificial decoys: triggering strikes, exposing enemy positions, and thus enabling healthier troops to advance. 

Most recently, Serhii "Flash" Beskrestnov, a Ukrainian civilian expert in radio communications and electronic warfare (EW), highlighted that Russia is deploying AI-powered exploding drones that independently identify and strike targets without operator input, using computer vision and machine learning to improve performance over time. These drones pose a growing threat, as they do not distinguish between military and civilian targets.

Russia’s violation of international law with impunity generates huge pressure on Ukrainian soldiers to respond in kind. As part of this response, Ukraine has updated its drone-training protocols, treating relatively ambiguous targets as threats and increasing the likelihood that automated drones will target civilians, wounded soldiers, and even friendly combatants. 
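
The effect of such a protocol change can be seen in miniature. Below is a deliberately simplified Python sketch; the labels, confidence values, and thresholds are hypothetical, invented for illustration rather than drawn from any real targeting system. It shows how loosening a single engagement threshold flips the same ambiguous detection from "hold" to "strike":

```python
# Hypothetical sketch of how one engagement threshold shapes who gets
# targeted. Labels, confidences, and thresholds are invented for
# illustration and describe no real system.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # classifier output, e.g. "armored_vehicle"
    confidence: float  # model confidence in [0, 1]

# Hypothetical set of labels the system treats as legitimate targets.
MILITARY_LABELS = {"armored_vehicle", "artillery", "infantry_group"}

def would_engage(det: Detection, threshold: float) -> bool:
    """Engage only if the label is military AND confidence clears the bar."""
    return det.label in MILITARY_LABELS and det.confidence >= threshold

# The same ambiguous detection (say, a pickup truck that might be a
# disguised military vehicle), evaluated under two doctrines:
ambiguous = Detection(label="armored_vehicle", confidence=0.55)

print(would_engage(ambiguous, threshold=0.90))  # cautious doctrine: False
print(would_engage(ambiguous, threshold=0.50))  # permissive doctrine: True
```

In this toy model, the race to the bottom is a one-line change, which is precisely why battlefield pressure can erode such safeguards so quickly.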

“We must accept the enemy’s brutality and act accordingly. If they use methods that violate the norms of warfare against us, we should not tolerate it — we must respond accordingly,” said Illia, who serves in Ukraine’s 13th National Guard Brigade. 

While Illia praised the “awesome” advantages of automated target guidance technology, he noted the risks they create for civilians: “I’ve also seen cases where, for example, if we’re talking about AI guidance, it locks onto a target regardless of whether it’s a civilian vehicle or not.”


Global Spillover 

The erosion of norms on the Russo-Ukrainian battlefield has far-reaching implications. Russia’s invasion of Ukraine has accelerated global innovations in warfare, pushing AI-enabled drones into service at a pace that international norms or regulations can’t match. 

NATO, the world’s most powerful military alliance, has opened a joint training center with Ukrainian soldiers to help develop new fighting strategies built around AI, advanced analytics, and machine learning.

These new tactics and capabilities, proven and refined in Ukraine, will soon find their way onto battlefields in other areas of the world. “I always tell our American and other international partners: If your drone hasn’t been tested in Ukraine, it’s still just a toy,” said Oleksandra Ustinova, who serves as minority leader of Ukraine’s Parliament.

The UK, for example, recently unveiled its new StormShroud autonomous drone. Designed to support Typhoon and F-35 fighter jets by jamming enemy radars, this new drone incorporates lessons from Ukraine. In Germany, the startup Stark has developed the Virtus loitering munition, an AI-powered, electric Vertical Take-Off and Landing (VTOL) drone with autonomous targeting capabilities. 

Meanwhile, Finland is already training helicopter crews to shoot down drones, attempting to learn from Ukraine’s wartime experience. 

“Autonomous weapons systems will soon fill the world’s battlefields,” Austrian Foreign Minister Alexander Schallenberg told participants at a recent Vienna conference on autonomous weapons. Schallenberg has described events in Ukraine as the “Oppenheimer moment of our generation.”

Sensing this opportunity, the defense sector is leaning in. Palmer Luckey, founder of the American defense-technology company Anduril, believes that it’s time to go all in on AI weapons, arguing that Pandora's box has already been opened. 

Calls to Action

All this raises the question of what, if anything, can be done to prevent a global erosion of norms like that seen in the Russo-Ukrainian War. 

UN Secretary-General António Guterres has urged all states to ban autonomous weapons, calling them “politically unacceptable” and “morally repugnant.” Yet public debates on lethal autonomous systems remain “complicated by widespread, pre-existing misconceptions.”

As of 2020, only 30 countries had declared support for a UN treaty to ban autonomous weapons systems. Since then, some progress has been made. 

On December 2, 2024, the UN General Assembly adopted a resolution on Lethal Autonomous Weapons Systems (LAWS), with broad support: 166 in favor (including the US), 3 opposed (Belarus, North Korea, and Russia), and 15 abstaining (including Ukraine). The resolution proposes a two-tiered approach, banning some LAWS while regulating others under international law.

Religious leaders have added their voices. Last year, the late Pope Francis called on G7 leaders to ban the use of autonomous weapons in war, arguing that machines should not decide whether humans live or die. Even Patriarch Kirill, head of the Russian Orthodox Church and a close ally of Putin, warned: “The state should strictly control the development of AI in order to prevent the destruction of life and civilization.”

Soldiers I spoke with on the front lines have also called for limits, emphasizing that entrusting human lives entirely to algorithms carries unacceptable risks. Vasyl, a drone unit commander from Ukraine’s 128th Mountain Assault Brigade, told me that AI battlefield deployment should be limited to guiding munitions using preprogrammed coordinates. Danilo, a soldier in Ukraine’s 108th Territorial Defense Brigade, argued that “a human must necessarily participate in all processes.”

Yet despite these calls for restraint, there remains no real framework within international law to govern autonomous weapons. 

In the absence of regulatory intervention, Russo-Ukrainian War technologies and strategies will spark further erosion of norms. Battlefield necessity alone will continue to dictate how autonomous systems are used. The race to the bottom will continue.
