CYBER-TECH: India AI Impact Summit and the rise of development-first AI

NAVEEN A | 19th February, 12:23 am

When global AI diplomacy began at Bletchley Park in late 2023, the prevailing mood was one of existential caution. Under the leadership of Rishi Sunak, governments and frontier labs gathered to confront the theoretical but catastrophic risks of increasingly powerful models. The focus was clear: frontier safety, red-teaming, and voluntary commitments from a small cluster of firms, most headquartered in San Francisco. Subsequent meetings, including the AI Seoul Summit in South Korea and the AI Action Summit in France, extended this logic. National AI Safety Institutes proliferated. The language of “shared risk” became embedded in communiqués. The early architecture of AI governance was built primarily around containment.

Yet that consensus is beginning to encounter resistance. The India AI Impact Summit, held this week in New Delhi, suggests that the safety-centric framing of AI governance is no longer universally accepted. A different emphasis is emerging from large developing economies—one that treats artificial intelligence less as a destabilising force to be restrained and more as infrastructure to be deployed. 

AI governance, in short, is entering a multipolar phase.

Three Paradigms

As the first wave of AI diplomacy settles, global governance appears to be fragmenting into three distinct paradigms.

The American model remains one of acceleration and market-led dominance. The state underwrites compute, defence contracts, and research funding, but the direction of frontier models remains largely in private hands.

The European Union has advanced a regulatory paradigm. Through the AI Act, it treats AI as a product safety and rights-based issue, emphasising risk categorisation, compliance, and oversight.

A third paradigm—articulated most clearly at the Delhi summit—takes a different starting point. For countries such as India, Brazil, and Indonesia, AI is not primarily a systemic risk but a productivity tool. The policy focus is on “sovereign AI”: leveraging public-sector data, open-source models, and subsidised compute to address bottlenecks in agriculture, healthcare, education, and multilingual access.

This development-first doctrine does not dismiss safety; rather, it reorders priorities. In many emerging economies, the immediate concern is not runaway superintelligence but technological exclusion—the possibility of being locked out of a transformative technological cycle.

From Safety to Deployment

The India AI Impact Summit institutionalised this shift. Its agenda—organised around the three Sutras of “People, Planet, and Progress”—centred on public-interest applications and the expansion of AI into vernacular, low-resource environments. Where earlier summits foregrounded frontier testing regimes, the New Delhi gathering emphasised implementation capacity.

The rhetorical pivot from “risk” to “impact” reflects a deeper disagreement about harm. For the EU, harm is framed in terms of rights violations. For the United States and the United Kingdom, it is often articulated as a matter of systemic or national security risk.

For large developing economies, harm is the opportunity cost of underdevelopment.

In this framing, the most dangerous AI is not the one that becomes uncontrollable, but the one that is inaccessible—priced beyond reach, trained on linguistically narrow datasets, or embedded in proprietary ecosystems that limit local modification and cost control.

Sovereignty and Its Limits

Yet the development-first paradigm faces structural constraints. Chief among them is compute sovereignty. 

While countries may possess rich public datasets, the hardware substrate of advanced AI—high-end semiconductors and hyperscale data centres—remains concentrated in a handful of American and Chinese firms. Announced plans to expand domestic GPU capacity signal ambition, but the structural gap in advanced fabrication and chip design remains wide. Without deeper control over these physical layers, sovereign AI strategies risk becoming application-layer adaptations built atop foreign-controlled infrastructure.

Geopolitics further complicates matters. As export controls tighten and technology supply chains are securitised, AI increasingly resembles other dual-use technologies. The aspiration to align with the technology rather than with any single bloc becomes harder to sustain in a world of investment screening and chip embargoes. Multipolarity at the governance level does not necessarily imply multipolarity at the hardware level.

Toward a Competitive Governance Era

The emergence of a development-first doctrine also constitutes a quiet challenge to the early safety discourse. To policymakers in New Delhi or Nairobi, calls for moratoria or deliberate slowdowns can appear misaligned with domestic economic imperatives. When frontier labs are clustered in a few wealthy economies, precaution can resemble entrenchment.

This reflects a divergence in sequencing: advanced economies debate how to regulate abundance, while developing economies debate how to secure access.

Whether this marks the beginning of a durable new AI order remains uncertain. If development-first strategies generate measurable gains—higher crop yields, improved health diagnostics, or expanded linguistic inclusion—they may offer a compelling complement to Western regulatory and market-led models. If compute dependence persists, the doctrine may remain rhetorically powerful but structurally constrained.

If artificial intelligence becomes the defining infrastructure of the 21st century, its governance will not be settled by technical consensus alone. It will be shaped by demographic weight, market scale, and political leverage. The India AI Impact Summit signals that AI governance is no longer defined solely by those who build frontier models. It is increasingly shaped by those who must deploy them at scale.

The contest, increasingly, is not only about mitigating risk. It is about defining purpose.

(The views expressed are personal)
