Palantir has pushed back against concerns that military use of its AI platforms could lead to unforeseen risks, insisting in an exclusive interview that responsibility for how the technology is used lies with its military customers.

Experts have expressed concern over the use of Palantir's AI-powered defence platform, the Maven Smart System, during wartime, and over its reported use in US strikes on Iran.

Analysts warn that the military's use of the platform, which helps personnel plan attacks, leaves little time for meaningful verification of its output and could lead to incorrect targets being hit.

However, Louis Mosley, head of Palantir in the UK, said in a recent interview that while AI platforms such as Maven have been instrumental to the US in managing the Iran conflict, ultimate responsibility for how their output is used lies with the military.

"There's always a human in the loop, so there is always a human that makes the ultimate decision. That's the current setup," Mosley noted.

Launched by the Pentagon in 2017, the Maven Smart System is designed to speed up military targeting decisions by aggregating various types of data, including intelligence reports and satellite imagery.

The system proposes recommendations for targeting and suggests appropriate levels of force based on available resources.

Despite these benefits, scrutiny of such systems in combat has increased. The Pentagon recently decided to phase out Anthropic's Claude AI, which supports Maven, after Anthropic refused to permit its use in autonomous weapons.

Since the onset of the Iran conflict, reports indicate the US has used Maven to plan numerous strikes.

When pressed about the risk of Maven suggesting incorrect targets, including potential civilian casualties, Mosley defended the platform as a guide to aid military personnel. "You could think of it as a support tool," he said, one that allows commanders to synthesize vast amounts of information that would previously have taken considerable time to process.

However, he acknowledged the need for military leadership to maintain policies governing the use of Maven's output, emphasizing that it is the military's responsibility to determine its own decision-making framework.

Concerns persist about AI's role in mission planning, especially following a strike that reportedly killed many civilians, underscoring demands for stricter oversight of AI in military applications.

In Congress, some lawmakers are pressing for clear regulations on the use of AI in military settings, warning that over-reliance on the technology could jeopardize human oversight and accountability in life-and-death situations.

Despite the controversies and risks, the US military appears committed to further integrating AI technologies like Maven, which has been designated as a long-term program of record.