UK crossing the line as it implements use of AI for lethal targeting under Project Asgard


By Chris Cole on 23/07/2025

Despite grave ethical and legal concerns about the introduction of AI into decision making around the use of lethal force, the UK is rapidly pressing ahead with a number of programmes and projects to do so, with the British Army recently trialling a new AI-enabled targeting system called ASGARD as part of a NATO exercise in Estonia in May 2025.

Last week, the Ministry of Defence (MoD) gave a briefing to selected media and industry ‘partners’ on Project ASGARD – which it describes as the UK’s programme to “double the lethality” of the British Army through the use of AI and other technology. ASGARD is not aimed at producing or procuring a particular piece of equipment but rather at developing a communications and decision-making network that uses AI and other technology to vastly increase the speed of undertaking lethal strikes.

ASGARD is part of a £1 billion ‘Digital Targeting Web’ designed to “connect sensors, shooters, and decision-makers” across the land, sea, air, and space domains. “This is the future of warfare,” Maria Eagle, Minister for Defence Procurement and Industry told the gathering.

According to one reporter present at the briefing, the prototype network “used AI-powered fire control software, low-latency tactical networks, and semi-autonomous target recommendation tools.”

Janes reported that through ASGARD, “any sensor”, whether it be an unmanned aircraft system (UAS), radar, or human eye, is enabled by AI to identify and prioritise targets and then suggest weapons for destroying them. “Before Asgard it might take hours or even days. Now it takes seconds or minutes to complete the digital targeting chain,” Sir Roly Walker, Head of the British Army told the gathering.

Drones used in conjunction with ASGARD

While the system currently has a ‘human in the loop’, officials suggested that this could change in future, with The I Paper reporting that ‘the system is technically capable of running without human oversight and insiders did not rule out allowing the AI to operate independently if ethical and legal considerations changed.’

How it works

A British Army report after the media event suggested that “Asgard has introduced three new ways of fighting designed to find, strike and blunt enemy manoeuvre:

- A dismounted data system for use at company group and below.

- The introduction of the DART 250 One Way Effector. This enables the targeting of enemy infrastructure at three times the range of current UK land-based deep fires rockets.

- A mission support network to accelerate what is called the digital targeting or ‘kill’ chain.”

According to a detailed and useful write-up of the Estonia exercise, ASGARD uses existing equipment currently in service alongside new systems including Lattice command and control software from Anduril which provides a ‘mesh network’ for communications, as well as Altra and Altra Strike software from Helsing used to identify and ‘fingerprint’ targets. The report goes on: 

“targets were passed to PRIISM which would conduct further development including legal review, collateral damage estimates, and weapon-to-target matching.”  

Helsing’s HX-2 drone was also used during the exercise, another indication that the UK is likely to acquire these one-way attack drones. DART 250, a UK-manufactured jet-powered one-way attack drone with a range of 250 km that can fly at more than 400 km/h, was also deployed as part of the exercise. The manufacturer says that it can fly accurately even when GPS signals are jammed and that it is fitted with a seeker that enables it to home in on and destroy jamming equipment.
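The chain described in this section, from sensor detection through AI prioritisation and automated review to human sign-off, can be sketched in outline. To be clear, this is a purely illustrative sketch: Lattice, Altra, and PRIISM are proprietary systems whose internals are not public, and every class name, threshold, and outcome below is a hypothetical stand-in, not a description of the real software.

```python
# Purely illustrative sketch of the targeting-chain *structure* reported from
# the Estonia exercise: sensor feed -> AI 'fingerprinting'/prioritisation ->
# automated review (legal, collateral, weapon matching) -> human sign-off.
# All names, fields, thresholds, and outcomes here are hypothetical.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str          # e.g. "UAS", "radar", "human observer"
    candidate: str       # what the model thinks it has found
    confidence: float    # model confidence, 0.0-1.0

@dataclass
class ReviewedTarget:
    detection: Detection
    legal_review_passed: bool
    collateral_estimate: str
    matched_weapon: str

def fingerprint(detections, threshold=0.8):
    """Stand-in for the AI prioritisation step: filter and rank candidates."""
    hits = [d for d in detections if d.confidence >= threshold]
    return sorted(hits, key=lambda d: d.confidence, reverse=True)

def review(detection):
    """Stand-in for the automated 'target development' step described in the
    quote: legal review, collateral damage estimate, weapon-to-target matching.
    All outcomes below are placeholders."""
    return ReviewedTarget(
        detection=detection,
        legal_review_passed=True,
        collateral_estimate="low",
        matched_weapon="one-way effector",
    )

def targeting_chain(detections, human_approves):
    """Run the chain; nothing proceeds without the human-in-the-loop gate."""
    approved = []
    for det in fingerprint(detections):
        reviewed = review(det)
        # The critical gate: a human must sign off on every machine suggestion.
        if reviewed.legal_review_passed and human_approves(reviewed):
            approved.append(reviewed)
    return approved
```

What even this toy version makes concrete is where the human sits: at the very end of a pipeline whose candidates have already been filtered, ranked, reviewed, and matched by machine, which is precisely the rubber-stamping concern raised later in this article.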

AI: speed eroding oversight and accountability

The grave dangers of introducing AI into warfare, and in particular into decisions on the use of force, are by now well known. While arguments have been made for and against these systems for more than a decade, we are increasingly moving from a theoretical, future possibility to the real world: here, now, today.

While some believe almost irrationally in the powers and benefits of AI, in the real world AI-enabled systems remain error prone and unreliable. AI is far from infallible: it relies on training data whose biases have, time and time again, led to serious mistakes.

While systems like ASGARD may be able to locate tanks on an open plain in a well-controlled training exercise environment (see video above), the real world is very different. Most armed conflicts do not take place on remote battlefields but in complex urban environments. Relying on AI to choose military targets in such a scenario is fraught with danger.

Advocates of ASGARD and similar systems argue that the ‘need’ for speed in targeting decisions means that the use of AI brings enormous benefits. And it is undoubtedly true that algorithms can process data much faster than humans. But speeding up such targeting decisions significantly erodes human oversight and accountability. Humans in such circumstances are reduced to merely rubber-stamping the output of the machine. 

Meanwhile, the Ministry of Defence confirmed that the next phase of ASGARD’s development has received government funding while at the UN, the UK continues to oppose the negotiation of a new legally binding instrument on autonomous weapons systems.

 


Drone Wars UK © 2025.

Find out more – call Caroline on 01722 321865 or email us.