The global debate on artificial intelligence is unfolding along familiar lines. In the United States, attention centres on frontier models, semiconductor dominance and talent concentration. In China, the focus is on scale, state coordination and rapid deployment. Policymakers in India, meanwhile, often frame the challenge as one of catching up—attracting investment, building models and nurturing talent. But this framing misses a more fundamental shift. The real contest in artificial intelligence is not about who builds the smartest systems, but about who organises the most useful data.
Every modern AI system rests on three inputs: compute, talent and data. The first two have dominated headlines for good reason. Advanced chips are difficult to manufacture and tightly controlled. Elite talent clusters in a handful of global hubs. Yet both are, in principle, mobile and tradable. Data is different. It is generated locally, shaped by institutions and governed by national rules. Unlike chips or researchers, it cannot simply be imported at scale. This makes data not just an input to AI, but a reflection of how an economy is organised.
This distinction is beginning to shape national strategies. The United States retains a commanding lead in semiconductor design and frontier research, driven by firms such as NVIDIA, OpenAI and Google. But much of its most valuable data remains locked inside corporate silos, fragmented across platforms with limited interoperability. China, by contrast, is pursuing a more systemic approach. It is experimenting with government-backed data exchanges that allow datasets—from logistics flows to medical imaging—to be standardised, catalogued and traded across sectors. The aim is not merely to store information, but to turn an entire economy into a continuous training environment for machine intelligence.
Europe offers a third model. Through frameworks such as the General Data Protection Regulation, it has prioritised privacy, consent and individual rights. While normatively influential, this approach has also made large-scale data sharing slower and more complex. The result is a global landscape defined not by technological gaps alone, but by differing institutional choices about how data is generated, shared and governed.
This is where India presents a distinctive case. Over the past decade, it has built one of the world’s most extensive digital public infrastructures. Systems such as Aadhaar and the Unified Payments Interface have created a continuously updating record of how a billion-person economy functions—capturing transactions, identities and service delivery at scale. In effect, India has developed a kind of national digital nervous system.
Yet this abundance masks a deeper constraint. Much of India’s data remains fragmented across institutional silos. Financial histories sit with banks, medical records with hospitals, and telecommunications data with private operators. Very little of this information circulates in ways that allow AI systems to learn across domains. The challenge, then, is not the absence of data, but the absence of mechanisms to organise and mobilise it.
The policy problem this creates is subtle but consequential. Unlike China, India cannot rely on centralised state control to aggregate data. Unlike the United States, it cannot leave the problem entirely to private platforms. And unlike Europe, it cannot afford regulatory inertia that slows innovation. It must instead solve a more complex institutional puzzle: how to enable data to move across sectors while preserving privacy, consent and democratic accountability.
This requires a shift in how policymakers think about data. Rather than treating it as a proprietary asset to be hoarded, data must be understood as infrastructure—something that gains value when it circulates under trusted conditions. Mechanisms such as data exchanges, consent-based data sharing frameworks and standardisation protocols can help create what might be called “data liquidity”: the ability of information to flow securely between actors without losing integrity or control. The goal is not unrestricted openness, but structured interoperability.
If designed well, such a system could offer an alternative to both centralised and corporate models of AI development. It would allow startups, researchers and public institutions to access high-quality datasets without concentrating power in a few hands. More importantly, it would align technological progress with democratic norms, ensuring that the benefits of artificial intelligence are broadly distributed rather than narrowly captured.
The implications extend far beyond any single country. Artificial intelligence is no longer just a sector of the economy; it is becoming part of the architecture of national power. The countries that lead will not necessarily be those that invent the most advanced algorithms, but those that build the most effective systems for organising and learning from data. In that sense, the future of AI will be determined less in research labs than in the design of institutions.