AI summit and its signals
The recently concluded Artificial Intelligence (AI) Summit, held at Bharat Mandapam in New Delhi, was conceived as a global platform to showcase the future of artificial intelligence through universities, enterprises, and international collaborations.
The ambition was evident. Yet, certain episodes emerging from the event quietly exposed deeper concerns that extend beyond logistics or representation and enter the domain of governance, accountability, and truth in technological discourse.
Public attention was drawn to an apparent disparity in stall allocation, where a private university was reportedly assigned multiple exhibition spaces while premier public institutions were represented far more modestly.
While such matters may be defended as administrative discretion, they raise broader questions about transparency and prioritisation in State-endorsed technology platforms, particularly where public trust in scientific advancement is at stake.
Misattribution in the age of AI
More troubling was a widely circulated claim made at the Summit that a quadruped robot, resembling a dog, had been designed and built by a student group at Galgotias University.
The assertion was later shown to be factually incorrect, with online scrutiny revealing that the robot was a commercial product of the Chinese manufacturer Unitree Robotics.
The episode is not merely an embarrassment of attribution. It illustrates how easily misinformation can be amplified in technology spaces where authority is presumed and verification is often absent.
In the AI ecosystem, where credibility is currency, such misrepresentation erodes academic integrity and public confidence.
It also demonstrates the urgent need for ethical standards governing claims of innovation, particularly when AI artefacts are showcased under the implicit legitimacy of State or institutional patronage.
India’s regulatory hesitation
India’s approach to artificial intelligence regulation remains notably liberal. While policy frameworks and advisories have been discussed, binding regulation is largely indirect.
The Digital Personal Data Protection Act, 2023, though significant, addresses data fiduciary obligations rather than AI systems as autonomous or semi-autonomous actors. Issues such as algorithmic bias, automated decision-making, and psychological manipulation remain largely unregulated.
This permissive approach is often justified as necessary to foster innovation and attract investment.
However, the absence of a structured AI-specific regulatory framework also leaves significant gaps in accountability. When AI systems influence behaviour, curate information, or simulate interaction, the question arises whether traditional data protection norms are sufficient to protect human autonomy and dignity.
Europe’s risk-based restraint
In contrast, the European Union has adopted a markedly different stance through the European Union Artificial Intelligence Act, which entered into force in 2024. This legislation is widely regarded as the first comprehensive AI regulatory framework in the world. It classifies AI systems through a pyramid of risk, ranging from unacceptable risk to minimal risk, with corresponding compliance obligations.
High-risk AI systems, particularly those affecting education, employment, biometric identification, and behavioural profiling, are subjected to stringent safeguards, transparency requirements, and human oversight. Certain AI practices are prohibited altogether, including those that exploit vulnerabilities of specific groups or manipulate human behaviour in a manner likely to cause harm.
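To make the pyramid concrete: the short Python sketch below encodes the Act's four broad tiers and maps a handful of example systems onto them. The tier names follow the Act, but the example use cases and the mapping itself are illustrative assumptions, not a rendering of the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The four broad tiers of the EU AI Act's risk pyramid."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict safeguards, transparency, human oversight"
    LIMITED = "transparency duties (e.g. disclosing AI interaction)"
    MINIMAL = "no specific obligations"

# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_CLASSIFICATION = {
    "subliminal behavioural manipulation": RiskTier.UNACCEPTABLE,
    "exam-scoring system in education":    RiskTier.HIGH,
    "CV-screening tool for recruitment":   RiskTier.HIGH,
    "customer-service chatbot":            RiskTier.LIMITED,
    "spam filter":                         RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return the compliance posture attached to a classified use case."""
    tier = EXAMPLE_CLASSIFICATION[use_case]
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in EXAMPLE_CLASSIFICATION:
    print(obligations(case))
```

Even at toy scale, the design of the regime is visible: once a system is classified, its obligations follow mechanically, so the contested question in practice becomes which tier a given system falls into.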
The European model recognises that unchecked AI development can interfere with cognitive autonomy and democratic processes. Regulation is therefore positioned not as an obstacle to innovation, but as a prerequisite for ethical and sustainable technological progress.
Algorithms, addiction, and the human psyche
These regulatory concerns acquire sharper relevance in light of recent proceedings before the United States Congress, where Mark Zuckerberg, representing Meta Platforms, was questioned on the addictive design of social media platforms and their impact on teenagers. The hearings focused on whether algorithm-driven engagement models deliberately stimulate reward mechanisms in the brain, particularly among users as young as thirteen.
If algorithms are designed to maximise attention through reinforcement and dopamine-inducing interactions, the ethical boundary between engagement and manipulation becomes dangerously thin. When such algorithms are increasingly powered by AI, capable of learning preferences and shaping perceptions, the potential intrusion into the human psyche cannot be ignored.
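A toy model makes the mechanism plain. The sketch below is a minimal epsilon-greedy bandit, a standard reinforcement technique, learning from simulated click feedback alone which of three content variants to serve; the variants and their click probabilities are invented for illustration.

```python
import random

# Toy content variants; the simulated click probabilities are assumptions,
# chosen so that the most sensational variant is also the most "engaging".
CLICK_PROB = {"calm explainer": 0.10, "celebrity gossip": 0.25, "outrage bait": 0.40}

counts = {arm: 0 for arm in CLICK_PROB}    # times each variant was shown
rewards = {arm: 0.0 for arm in CLICK_PROB} # clicks each variant earned
EPSILON = 0.1  # fraction of impressions spent exploring at random

def choose() -> str:
    """Epsilon-greedy: mostly exploit the variant with the best click rate."""
    if random.random() < EPSILON:
        return random.choice(list(CLICK_PROB))
    # Unseen variants rank first so that every arm is tried at least once.
    return max(CLICK_PROB,
               key=lambda a: rewards[a] / counts[a] if counts[a] else float("inf"))

random.seed(0)
for _ in range(10_000):  # 10,000 simulated impressions
    arm = choose()
    clicked = random.random() < CLICK_PROB[arm]  # simulated user reaction
    counts[arm] += 1
    rewards[arm] += clicked

for arm in CLICK_PROB:
    print(f"{arm}: shown {counts[arm] / 10_000:.0%} of the time")
```

Nothing in this loop encodes user wellbeing; optimising clicks is enough to concentrate exposure on the most compulsive variant, which is precisely the asymmetry the congressional hearings probed.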
This concern extends beyond social media into education and knowledge dissemination. AI systems deployed in learning environments may subtly prioritise certain narratives, interpretations, or values based on prevailing datasets and contextual norms. Without safeguards, such systems risk shaping cognition rather than merely assisting comprehension, thereby influencing preference formation in ways that may not be transparent or contestable.
A call for principled restraint
The Indian regulatory discourse on AI must therefore evolve beyond facilitation towards principled restraint. Innovation cannot be permitted to outrun ethical responsibility. The European experience demonstrates that it is possible to regulate AI without suffocating progress, by clearly demarcating unacceptable practices and insisting on human oversight where autonomy is at risk.
As AI becomes embedded in everyday decision-making, the law must anticipate not only economic impact but also psychological and social consequences. The real challenge lies not in creating intelligent machines, but in ensuring that human agency remains intact.
The question is no longer whether artificial intelligence will shape society. It already does. The more pressing question is whether the law will shape artificial intelligence in time.
Words of prudence
At a conceptual level, artificial intelligence was never designed merely to obey rules, but to optimise outcomes. In doing so, it acquires the capacity to adapt, restructure, and replicate its operational logic in response to constraints imposed upon it.
Human regulation, which is inherently normative and static, struggles to restrain systems that evolve dynamically and recalibrate their behaviour to avoid friction.
This asymmetry should provoke legitimate concern. If the law remains reactive while AI remains adaptive, regulatory intent will be repeatedly outpaced. Legal frameworks must therefore evolve concurrently with AI capability, not in its aftermath.