India IT Rules Raise Free Speech Concerns

India’s proposed IT rule changes aim to strengthen digital governance, but they are raising serious concerns about free speech, legal clarity, and the balance of power between the government and online platforms.

Proposals to modify India’s IT regulations risk giving the government undue extra-legislative control over what can and cannot be posted online. The nation’s digital economy, legal certainty, and freedom of speech are all at stake.

India’s IT Rules: Growing Concerns Over Control

On the surface, India’s proposed changes to its IT regulations could appear to be a technical tightening of platform requirements. The government describes them as “clarificatory and procedural,” intended to provide legal certainty, strengthen enforceability, and ensure more efficient oversight of content hosted by online intermediaries, particularly news and current-affairs content.

On closer inspection, however, government “advisories,” “guidelines,” and “clarifications” could become legally binding. If so, the proposed framework in its current form may not only raise platform compliance costs but also gradually shift rule-making away from Parliament toward direct executive direction, with fewer procedural safeguards.

Tightening Digital Regulations

India’s digital rules have been getting stricter. Its AI Governance Guidelines and Digital Personal Data Protection Act are examples of “techno-legal” frameworks designed to assess compliance, promote innovation, and push businesses to incorporate safety by design.

In February, updated IT rules required platforms to remove unlawful content within three hours of a government or court order, and non-consensual intimate imagery, including deepfakes, within two hours of a complaint. Platforms must also label AI-generated content, respond to user complaints within seven days, and provide safeguards against deepfakes generally as well as posts concerning explosives or child sexual abuse.

⚖️ Key IT Rule Changes

  • Content Removal: 2–3 hour deadline
  • AI Labels: Mandatory tagging
  • Complaint Response: Within 7 days
  • Deepfake Controls: Stronger safeguards
  • Platform Liability: Increased risk
  • Compliance: Stricter enforcement

Rising Stakes for Platforms

The stakes for platforms have risen further with this week’s draft revisions, which are open for public feedback until April 14. If social media applications and search engines disregard government-issued advisories, they may lose their safe-harbor protections under the IT Act and become liable for user content.

Organizations like the Internet Freedom Foundation and the Internet and Mobile Association of India have expressed concern about the draft plan to treat guidelines as “enforceable orders,” seeing it as overreach and a potential threat to free speech. The worry is that the executive could effectively make rules faster and more flexibly than formal lawmaking permits. Courts could, of course, step in.

Legal and Constitutional Concerns

In the landmark Shreya Singhal case, the Supreme Court struck down Section 66A of the IT Act, in part because its vague speech prohibitions were too easily abused. More recently, the Centre’s attempt to establish fact-checking units raised similar concerns over administrative overreach and free speech.

Rather than risk run-ins with the government, platforms typically err on the side of caution. Smaller companies tend to comply to reduce legal uncertainty, while larger ones may challenge the rules in court, as they have in the past.

⚠️ Key Risks & Concerns

  • Free Speech: Potential restrictions
  • Executive Power: Increased control
  • Legal Uncertainty: Ambiguous rules
  • Compliance Costs: Rising for platforms
  • Safe Harbor Risk: Possible loss
  • Overreach: Limited safeguards

Global Comparisons

Comparison with other jurisdictions is instructive. Under the EU’s Digital Services Act, platforms face a range of obligations, but these are enforced through a legal framework with precise definitions of unlawful content, procedural safeguards, and appeal channels. Regulators enforce only the rules written into law, not ad-hoc advisories.

In the US, the safe-harbor rule of Section 230 lets platforms moderate content in good faith while shielding them from liability for user-generated content. The executive has little influence over content decisions; judicial review, not administrative action, is meant to restrain online excesses.

Conclusion

Even as India’s executive-led approach may allow swift crackdowns on deepfakes, fraud, fake-news operations, and the like, speed must not come at the cost of legal certainty. For the sake of free speech and a healthy digital economy, it should be very clear what is and is not acceptable to post online.

Disclaimer: This content is for informational purposes only and does not constitute legal advice.

About the Author

I’m Gourav Kumar Singh, a graduate by education and a blogger by passion. Since starting my blogging journey in 2020, I have worked in digital marketing and content creation. Read more about me.