How AI speeds ‘kill chain’ in US attacks on Iran
Mission completed in just over 11 minutes, raising concerns over AI's role in nuclear decision-making
Hiroyuki AKITA, Nikkei commentator
May 2, 2026
CHANTILLY, France — Artificial intelligence may be on the cusp of fundamentally reshaping how wars are conceived, planned and fought.
Few recent conflicts have demonstrated this transformation more starkly than the U.S.-Israeli attack on Iran, where AI-driven systems compressed decision cycles and accelerated the tempo of combat to a pace that was once almost unimaginable.
The breathtaking speed with which AI is being integrated into military operations has ignited fierce debate among policymakers, commanders and scholars, many of whom worry about the direction of warfare in an age of increasingly autonomous technologies.
How far can militaries entrust the conduct of war to AI — and where must human judgment reassert itself before a critical line is crossed?
At the World Policy Conference, held in Chantilly near Paris from April 24 to April 26, the accelerating transformation of warfare through AI emerged as an urgent theme.
Gen. Wayne Eyre, Canada’s former chief of the defense staff, warned that modern conflict is speeding up the military “adaptation cycle” — the process by which forces recognize a threat, formulate a policy and execute a response.
“It’s that adaptation cycle that in peacetime has been sometimes decades. We’re now seeing [it] in weeks,” Eyre said.
In private discussions, experts further warned that AI could cause misunderstandings and suspicion between adversaries to escalate almost instantly, potentially edging the world closer to a wider global conflict.
An analysis by the Al Habtoor Research Centre, a think tank linked to the United Arab Emirates, offers a startling account of that acceleration.
According to the center, the Feb. 28 U.S. and Israeli strike that killed Iran’s supreme leader, Ayatollah Ali Khamenei, and other senior figures took just 11 minutes and 23 seconds from target acquisition to completion.
The process of identifying a target and carrying out a strike is known as the “kill chain.” The shorter that chain, the more rapid — and more ferocious — the tempo of combat.
Before the attack, U.S. and Israeli forces integrated and analyzed vast amounts of satellite intelligence and other data. The work would have taken 328 human analysts 100 days, according to the center, but was completed in only about 90 minutes. The think tank said the whole operation was made possible by data-integration and analysis technologies provided by U.S. AI companies such as Palantir Technologies and Anthropic.
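To put those figures in rough perspective, the arithmetic below is a minimal illustrative sketch based only on the center's numbers; the 8-hour workday and the two readings of "100 days" (calendar versus working days) are assumptions, not details from the report.

```python
# Back-of-envelope speedup implied by the Al Habtoor Research Centre figures.
# Assumptions (not from the report): analysts work 8-hour days, and "100 days"
# is read two ways, as calendar days and as working days.

analysts = 328
days = 100
ai_minutes = 90  # "about 90 minutes" for the AI-assisted analysis

# Total human effort implied by the figures, in analyst-hours
analyst_hours = analysts * days * 8  # 262,400 analyst-hours

# Wall-clock speedup if "100 days" means round-the-clock calendar days
calendar_speedup = (days * 24 * 60) / ai_minutes  # 1,600x

# Wall-clock speedup if "100 days" means 8-hour working days
workday_speedup = (days * 8 * 60) / ai_minutes  # ~533x

print(f"Implied human effort: {analyst_hours:,} analyst-hours")
print(f"Speedup vs. calendar days: {calendar_speedup:,.0f}x")
print(f"Speedup vs. working days:  {workday_speedup:,.0f}x")
```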
AI, however, is far from infallible. Humans are said to have given final approval for the strike, but they, too, can make mistakes.
On the first day of the strikes on Iran, an elementary school in southern Iran was bombed, killing many civilians, including children. The tragedy has since intensified scrutiny of modern targeting processes, including the growing role of semiautomated systems and the adequacy of human oversight.
If misused, AI can easily generate deepfakes — fabricated videos and information that look authentic — and spread them on a massive scale. The danger that such “pollution” of the information space could inflame conflicts should not be underestimated. That reality became apparent in the military clash between India and Pakistan in May 2025.
According to South Asian Voices, an online policy platform that publishes strategic analysis on South Asia, unverified videos and information circulated widely in both countries’ media and on social platforms during the clash. Some of the footage appeared to have been manipulated with AI.
As it becomes harder to distinguish truth from falsehood, the risk grows that each side will misread the other’s intentions. Such misperceptions could trigger a disastrous cascade of overreaction.
The consequences would be even more serious if lethal autonomous weapons systems, or LAWS, capable of carrying out attacks without waiting for human instructions, become widespread. Some analyses warn that involving AI in the operation of nuclear weapons could increase the risk of nuclear war.
King’s College London recently conducted a simulation exercise in which several AI systems were assigned the role of national leaders in a nuclear crisis. According to results released in February, in about three-quarters of the 21 scenarios, tensions escalated alarmingly to a level that could have led to full-scale nuclear war.
During the simulation, the AI systems appeared to show little inclination to defuse the crisis. Since the end of World War II, humans have developed a strong aversion to the use of nuclear weapons. By compressing decision-making timelines, AI-enabled warfare risks sidelining human judgment and increasing the danger of rapid, unintended escalation — a concern that extends even to nuclear confrontations.
The risks posed by the military use of AI are now too urgent to ignore. Neil Chauhan, director of global partnerships at Fortaegis Technologies, a Netherlands-based deep-tech startup, called for strict controls to be built into AI systems.
“To prevent judgment errors and rapid escalation, states need to keep AI under strict control,” Chauhan said. “This cannot be achieved through policy alone. Enforceable authentication and control must be built into AI systems, from hardware to software and operations, so that only authorized systems can communicate, share data and act.”
International efforts to regulate the military use of AI remain insufficient. The United Nations has been exploring rules to ban or restrict lethal autonomous weapons systems, but there is still no clear prospect of an agreement. The U.S. and China, which should play crucial roles in such efforts, acknowledge the risks but remain cautious about creating legally binding international rules. Russia, too, remains opposed to introducing new rules.
As market competition intensifies, it will also be increasingly difficult to rely solely on ethical guidelines set by AI companies themselves. In the field of national security, states tend to assert broad discretion and press businesses to cooperate.
Still, AI will not change every aspect of warfare.
“The military importance of controlling land and maintaining superiority at sea and in the air will not disappear because of AI,” said a former senior official of Japan’s Self-Defense Forces. “But without an advantage in information, it will become difficult to maintain physical superiority on land, at sea or in the air.”
While caution is warranted, treating AI with undue alarm and seeking its complete exclusion is unrealistic. What is important is to ensure that humans retain command and responsibility — and that AI is used in accordance with well-defined ethical standards.
“There is no one single piece of technology that is going to win wars,” Eyre said. “The nature of war continues to be a contest of human will.”
