
PANAJI
The Pentagon is preparing to embed artificial intelligence at the core of its military operations. According to a recent internal memo, the Department of Defense plans to elevate Palantir’s “Maven” system—an AI platform used for analysing battlefield data and identifying targets—into a permanent, enterprise-wide capability.
The decision marks a decisive shift from experimentation to institutionalisation. For years, Project Maven functioned as a pilot programme, combining satellite imagery, drone feeds and other intelligence streams to help human analysts detect and assess threats. Now, by designating it a “programme of record”, the Pentagon is signalling that AI will no longer be an auxiliary tool but a foundational layer of modern warfare.
This transition reflects both technological progress and geopolitical urgency. Recent conflicts, particularly in the Middle East, have demonstrated the operational advantages of machine-speed analysis: AI systems can sift through vast datasets, prioritise targets and compress decision timelines from hours to minutes. In some cases, such tools have already supported thousands of strikes, underscoring their growing centrality to combat operations.
The move also highlights a broader reconfiguration of the defence-industrial base. Rather than relying solely on traditional contractors, the Pentagon is increasingly turning to Silicon Valley firms for core capabilities. Long-term contracts with companies like Palantir suggest a future in which software, not hardware, defines military advantage. This deepening partnership promises faster innovation but also raises concerns about dependency on a narrow set of private providers.
Yet the embrace of AI is not without controversy. Critics warn that integrating such systems into the “kill chain” risks diluting human oversight, especially as algorithms take on greater responsibility in target identification and operational planning. Ethical concerns are compounded by the opacity of proprietary systems and the difficulty of auditing their decisions in real time.
There are also signs of internal friction. The Pentagon’s recent dispute with Anthropic, another AI firm, over safety constraints and supply-chain risks illustrates the tension between military imperatives and corporate guardrails. As AI becomes embedded in defence infrastructure, such conflicts are likely to intensify, forcing companies to choose between their stated safety principles and government demands.
The adoption of Maven as a core system thus represents more than a procurement decision. It signals the arrival of algorithmic warfare as doctrine. In this emerging paradigm, the speed, scale and autonomy of machines may increasingly shape not just how wars are fought, but how quickly they escalate, and how difficult they become to control.