On one side of the Seine, under the glass roof of the Grand Palais, heads of state and government, representatives of institutions, and executives of French and global tech companies large and small paraded on Monday, February 10, invited to Paris for the Artificial Intelligence (AI) Action Summit.
On the other bank, a stone’s throw from the Hôtel des Invalides, the small world of defense, high-ranking officers and companies, had also gathered, at the invitation of the Ministry of the Armed Forces, for an afternoon of round tables at the Military Academy.
The issue is undoubtedly one of the most serious, as the use of artificial intelligence for defense purposes is advancing at an accelerating pace – from big data analysis to artificial intelligence “embedded” in weapons systems, from target detection and recognition to human resource management, including drones and cyber defense.
In France, the Ministry of the Armed Forces set up a defense AI agency, Amiad, in mid-2024, tasked with designing and developing AI tools for the ministry and led by a former Google DeepMind employee, Bertrand Rondepierre.
In June, at the Eurosatory trade fair, it presented, for example, a system to help naval officers responsible for analyzing underwater acoustic signals, as well as a tool for detecting enemy vehicles.
On the battlefield, the conflicts in Ukraine and Gaza are seeing an intensification of AI use. Faced with a numerically superior Russian army, Kyiv is using AI to process data collected by drones and to “integrate target and object recognition into satellite images,” as well as to geolocate and analyze open-source data.
In the Gaza war, several investigative reports have documented the Israeli military’s use of artificial intelligence tools to suggest targets and optimize attack plans in a particularly deadly bombing campaign.
Politics and the military are getting closer
But because the most powerful players in artificial intelligence are private, non-military companies, developments in recent years have shuffled the deck for the defense industry. The blurring of the line between politics and the military is spectacular in the United States, especially since Donald Trump, flanked by Elon Musk, returned to the White House.
Thus OpenAI, which long assured that it would steer clear of the military sector, now openly acknowledges that it is working on “national security missions”: in December, it announced a partnership with the defense start-up Anduril, known for its drones and autonomous surveillance towers.
On February 4, Google, which had pledged in 2018 that artificial intelligence technologies would not be used for military or surveillance purposes, removed this promise from its public charter. Fifteen days earlier, the Washington Post revealed that Google had provided artificial intelligence and cloud services to the Israeli army.
This rapprochement is also under way in Europe, as demonstrated by the announcement in January by the French minister of the armed forces, Sébastien Lecornu, of a partnership between Amiad and the French company Mistral AI.
Indeed, at the Paris AI summit, Arthur Mensch’s company announced that it is collaborating with Helsing, a German-French-British defense start-up that supplies drones to Ukraine.
The risk of “complete surprise”
When it comes to defense, too, artificial intelligence is “a struggle,” Admiral Pierre Vandier, head of NATO’s Allied Command Transformation, said Monday afternoon: “If you don’t adapt quickly and at scale, you die,” he warned. But with what consequences?
“In the wars in Ukraine and Gaza, AI has not contributed to a ‘cleaner’ war that respects international law, but rather to a much more massive and rapid use of force.” In short, “it allows you to target more people, faster, at lower cost and with the appearance of rational justification.”
On Monday, while insisting on the need to “master this technology,” the French defense minister also mentioned the risk of “complete surprise” and “strategic reversals that no one would expect, among other things, if you link the nuclear issue to the issue of artificial intelligence.”
The dizzying transformation of battlefields has just begun
In the 1980s, such a scenario was the stuff of a Hollywood movie (John Badham’s WarGames). Forty years later, however, artificial intelligence is beginning to challenge international law and the law of war, and the question of regulation is becoming increasingly urgent. The debate is taking place in multilateral forums, particularly the United Nations, whose limits and pace are well known…
In France, the latest opinion of the Defense Ethics Committee, dedicated to the “use of artificial intelligence technologies by the armed forces”, was made public on the occasion of the Paris summit. It raises the need for “respect for international legality” and questions of sovereignty and responsibility in the development of artificial intelligence systems.
However, while it holds that, in an armed conflict, “the assessment and acceptance or refusal of the resulting risk must remain a human prerogative,” it acknowledges that the degree of delegation to machines varies with “the operational environment and the intensity of the conflict.” And it recommends that technologies allow “the level of automation of certain functions to be differentiated according to the judgment of the command.” The dizzying transformation of the battlefields has just begun.