India Tightens AI Rules: Deepfake Content Must Be Removed in 3 Hours

To strengthen security and accountability in the digital ecosystem, the Indian government has notified the IT Amendment Rules, 2026, which take effect on February 20, 2026.

The new rules primarily aim to regulate artificial intelligence (AI)-generated content in order to curb deepfakes and AI-driven misinformation.

Social media platforms and other digital media companies will now have to clearly label AI-generated material and provide greater transparency around user safety.

The move marks a significant step toward preventing the misuse of advanced AI technologies and protecting citizens’ digital rights.

In order to track the origin of such information, platforms will also need to integrate permanent metadata or provenance identifiers where technically possible, according to a notification from the Ministry of Electronics and Information Technology (MeitY).
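A provenance identifier of the kind the notification describes can, in principle, be as simple as a record binding a cryptographic hash of the content to its declared origin and creation time. The following is a minimal illustrative sketch, not anything mandated by the rules; the field names and the generator label are hypothetical:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content: bytes, generator: str) -> dict:
    """Build a minimal provenance record binding a content hash
    to its declared generator and a UTC creation timestamp."""
    return {
        # SHA-256 digest uniquely identifies this exact content
        "content_sha256": hashlib.sha256(content).hexdigest(),
        # Hypothetical label for the AI system that produced the content
        "generator": generator,
        "created_at": datetime.now(timezone.utc).isoformat(),
        # Flag marking the content as synthetically generated
        "synthetic": True,
    }

record = make_provenance_record(b"example synthetic image bytes", "example-model-v1")
print(json.dumps(record, indent=2))
```

In practice, such a record would be cryptographically signed and embedded in the file's metadata (along the lines of the C2PA content-credentials approach) so that it travels with the content and tampering is detectable.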

According to the ministry, the amendments seek to balance innovation with user safety and accountability while addressing the growing threat of deepfakes and AI-driven disinformation.

Violating the rules could attract penalties under the Information Technology Act, 2000 and other applicable criminal laws.

The rules also target any artificially generated content that is obscene, pornographic, paedophilic, vulgar, indecent, or sexually explicit, that invades another person’s privacy (including bodily privacy), or that depicts non-consensual intimate imagery or sexually exploits and abuses children.

In certain cases, intermediaries are now legally required to act within three hours of receiving a takedown order, and the deadlines for acknowledging and resolving complaints have also been shortened.

About the Author

I’m Gourav Kumar Singh, a graduate by education and a blogger by passion. Since starting my blogging journey in 2020, I have worked in digital marketing and content creation.
