This article examines the evolving debate around AI sovereignty, national rights, infrastructure dependence, and governance frameworks in India’s digital future.
Prime Minister Narendra Modi discussed India’s human-centered framework for artificial intelligence (AI), known as “MANAV,” on the first day of the India AI Summit last week.
India’s Human-Centered AI Framework and Sovereignty Vision
This framework rests on national sovereignty, accountable governance, moral and ethical grounding, accessibility and inclusion, validity, and legitimacy. He invoked “Jiska data, uska adhikar” (“Your data, your right”) to justify the emphasis on national sovereignty.
This is a consequential statement for the prime minister to make. At a time when sovereignty is being weaponized by governments, sold as a service, and commodified by the private sector, tying sovereignty to individual choice and rights is a helpful turn in the conversation. The shift toward a human-rights framing is significant and energizing.
Beyond Rhetoric: Giving Sovereignty Real Substance
Beyond rhetoric, though, what does this mean? What social, political, and economic structures are necessary to give sovereignty substance if it is genuinely adhikar?
Much of the conversation about “democratizing AI” at the India AI Impact Summit, in both talks and declarations, revolved around model access: who may use models, in what languages, and how widely. The main goal of developing multilingual datasets and evaluation benchmarks is to enhance model performance. There is a push toward diffusion to ensure that use-case-led AI innovation reaches everyone. Inclusion on platforms has displaced inclusion as a call to action for social protection and entitlements. Inclusion of this kind, however, does not transfer value or power.
🇮🇳 AI Sovereignty & Infrastructure Control
- Core Idea: “Jiska data, uska adhikar”
- Key Concern: Reliance on external AI infrastructure
- Risk: Extractive global technology agreements
- Challenge: Data control vs platform inclusion
- Focus: Ownership, bargaining power & national leverage
- Goal: True AI sovereignty beyond onboarding users
Geopolitics, Infrastructure, and Global Power Imbalances
The geopolitical layer further complicates this claim to sovereignty. Even as sovereignty was invoked, the summit produced numerous agreements on computing partnerships, data-center construction, and data-sharing that further constrain India’s capacity to negotiate on sovereign terms. Under the banner of promoting innovation and inclusion, India has deepened its reliance on US technology corporations that are intimately linked to US industrial strategy.
In the guise of cooperation, agreements like Pax Silica formalize these imbalances and extractions, creating rifts among Global South nations in what could have been a space of solidarity. There is also little recognition that the AI supply chain’s value is captured at the infrastructure layer; the workers who label data and conduct the evaluations that improve models see none of it.
Infrastructure Dependency and the Fragility of Sovereignty
Sovereignty becomes vulnerable in this drive for scale and deployment, not just for the state but also for its population. The “N” in MANAV rests on shaky ground if access depends on external infrastructure, if extractive bargains shape bargaining power, and if inclusion means only integrating users into systems built elsewhere. Expanded use alone cannot sustain national sovereignty; in a highly asymmetrical technological system, leverage, ownership, and the ability to negotiate value are necessary.
If “Jiska data, uska adhikar” is to have real significance, it must do more than enable people to use AI systems. It must ask about the bargaining power of individuals and communities in the data economy. Who decides what data is worth? Who bargains over its extraction? Who benefits from downstream innovation? And who pays for mistakes, prejudice, or exclusion?
👥 Citizens, Not Just Users
- Shift in Language: From users to rights holders
- Power: Ability to negotiate data value
- Tools: Data cooperatives & collective bargaining
- Governance: Participatory audits & independent review
- Outcome: Redistribution of power in AI systems
- Vision: Community-driven sovereignty models
From Users to Rights Holders in the Data Economy
Moving forward, we must speak of “citizens,” or rights holders, rather than “users.” A rights holder can negotiate terms of service; a user merely agrees to them. The concept of rights holders also makes room to recognize bottom-up organizations that people have built, such as data cooperatives, which let people bargain collectively rather than merely adjust their individual online experiences.
Multilingual datasets are archives of community knowledge, not just technical artifacts. When they are used to build profitable systems, communities should become co-creators of value rather than mere recipients of access. Masakhane and other initiatives are exploring licenses, and Mozilla’s data collective lets contributors declare terms for how their data may be used. The indiscriminate vacuuming up of data needs exactly these frictions.
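The idea of declaring terms for data use can be made concrete with a small sketch. Everything below is hypothetical and purely illustrative: the schema, dataset name, and steward name are invented and do not reflect Mozilla’s or Masakhane’s actual licensing formats. The point is only that community data can carry machine-readable terms that downstream systems must check before use.

```python
# Hypothetical, illustrative schema for a machine-readable "terms of data use"
# declaration that a community dataset could ship alongside its files.
# This is NOT a real standard; it is a sketch of the general idea.

ILLUSTRATIVE_TERMS = {
    "dataset": "community-voice-corpus",        # hypothetical dataset name
    "steward": "example-language-cooperative",  # hypothetical community body
    "permitted_uses": ["research", "education"],
    "prohibited_uses": ["commercial-model-training"],
    "attribution_required": True,
}

def use_is_permitted(terms: dict, intended_use: str) -> bool:
    """Return True only if the intended use is explicitly permitted
    and not explicitly prohibited by the declared terms."""
    if intended_use in terms.get("prohibited_uses", []):
        return False
    return intended_use in terms.get("permitted_uses", [])

print(use_is_permitted(ILLUSTRATIVE_TERMS, "research"))                   # True
print(use_is_permitted(ILLUSTRATIVE_TERMS, "commercial-model-training"))  # False
```

The design choice worth noting is the default: any use not explicitly permitted is refused, which inverts the usual scraping logic where data is taken unless it is defended.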
Rethinking AI Governance and Structural Control
Second, a broader understanding of governance in the interest of rights is required. Platform safety has been portrayed as an issue of AI governance at conferences like the AI Summit.
But this framing has frequently narrowed the discussion to technical risk mitigation and compliance strategies, ignoring more fundamental structural issues: data control, market consolidation, the concentration of infrastructure and compute, and the politics of labor. These concerns are not incidental; they determine who controls the AI ecosystem and, consequently, who can exercise their rights in a meaningful way.
Participatory Governance and Collective Agency
Beyond corporate self-regulation and state-centric oversight, a more comprehensive understanding of governance is required, one that includes community-driven and participatory processes: public-interest audits, independent reviews, collective inspection, and even organized opposition to exploitative or harmful practices. Research suggests such mechanisms become especially important where political will or state capacity to regulate is lacking.
Governance involves contestation, accountability, and the redistribution of power, not just compliance. This broader understanding makes possible new forms of sovereignty, defined by the communities that create, use, and are affected by AI systems. Rather than being confined to state-imposed legal definitions or corporately designed operational frameworks, sovereignty can arise from situated practices of collective agency.
The prime minister’s “Your data, your right” remark, whether intentional or not, opens space to work through these tensions. Perhaps the goal was simply to land a snappy phrase. Even so, it offers a starting point for reconsidering sovereignty, even in opposition to the state.
Disclaimer: This article is for informational and analytical purposes only. It does not represent legal advice or official policy interpretation.