How India's 10% Visibility Rule And Permanent Metadata Reset AI Law

The Information Technology Amendment Rules, 2026, mandate that AI-generated content can no longer hide its origin.

New Delhi:

While visible watermarks on AI images can be cropped and audio disclaimers can be edited out, the government's latest IT rules have introduced a deeper, invisible layer of protection: Permanent Metadata. 

The Information Technology Amendment Rules, 2026, mandate that AI-generated content can no longer hide its origin. By shifting the focus from what we see to what is embedded, the law aims to make every AI creation traceable to its very roots.  

What is the "Unerasable Mark"?

Under the new Rule 3(3), any platform or tool that allows for the creation or modification of "synthetically generated information" must now embed a unique identifier and permanent metadata into the file itself.  

Think of it as a digital DNA or a "black box" for images and videos. Unlike a caption or a visible logo, this metadata is baked into the code of the file. It is designed to stay with the content even as it is shared, downloaded, or re-uploaded across different platforms.
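As an illustration of what such embedded metadata could contain, here is a minimal sketch in Python. The record format and field names are hypothetical, not prescribed by the Rules; real deployments typically rely on provenance standards such as C2PA manifests embedded in the file itself.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def make_provenance_record(content: bytes, tool_name: str) -> dict:
    """Build a minimal provenance record for a piece of generated content.

    Illustrative only: field names are assumptions, not taken from the
    2026 Rules. A real system would embed this in the file via a
    standard container (e.g. a C2PA manifest), not as a loose dict.
    """
    return {
        "unique_identifier": str(uuid.uuid4()),        # per-item ID
        "generating_tool": tool_name,                  # the "computer resource"
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds record to bytes
    }

record = make_provenance_record(b"<image bytes>", "example-ai-model")
print(json.dumps(record, indent=2))
```

Because the record carries a hash of the content bytes, changing the image without updating the record makes the mismatch detectable, which is the core idea behind "unerasable" provenance.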

Tracking the Source of the Lie

The primary goal of this "Permanent Metadata" is traceability.  

Identifying the Resource: The metadata must help identify the specific "computer resource" used to create or alter the content. If a deepfake is used for fraud, investigators won't just see the image; they will see which AI model or platform generated it.  

Technical Provenance: The law requires "technical provenance mechanisms," which essentially create a ledger of the content's history: who made it, when, and with what tools.  
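The "ledger" idea can be sketched as a hash chain: each entry records an actor and action, and locks in the hash of the entry before it, so rewriting history breaks the chain. This is a generic illustration under assumed field names, not the mechanism the Rules specify.

```python
import hashlib
import json

def append_ledger_entry(ledger: list, actor: str, action: str, tool: str) -> list:
    """Append a history entry whose hash covers the previous entry.

    Tampering with any earlier entry changes its hash, which no longer
    matches the prev_hash stored in the next entry.
    """
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {"actor": actor, "action": action, "tool": tool, "prev_hash": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    ledger.append(entry)
    return ledger

history = []
append_ledger_entry(history, "studio-a", "generated", "model-x")
append_ledger_entry(history, "user-b", "cropped", "editor-y")
# Each entry's prev_hash ties it to the one before it.
```

The chain answers exactly the questions the law poses: who made it, when it was altered, and with what tools, in an order that cannot be silently rewritten.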


The "No-Tamper" Clause

In a significant blow to bad actors, the new rules strictly prohibit intermediaries from providing tools that allow users to modify, suppress, or remove these labels and metadata.  

Previously, a user could run a "cleaner" tool to strip an image of its identifying data. Under the 2026 rules, platforms are legally barred from enabling this. If a platform is found to be facilitating the removal of these digital fingerprints, it loses its "safe harbor" protection and becomes legally liable for the content.  


A 10% Visibility Rule

For the visible part of this "mark," the government has set specific size and duration standards to ensure users aren't squinting to find the truth:

Visual Content: Labels must cover at least 10% of the visual display area.  
Audio Content: Disclosures must last for at least 10% of the total duration of the clip.  
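Both thresholds are simple percentages of the content's size or length. A quick sketch of the arithmetic (the function names are ours, for illustration):

```python
def min_label_area(width_px: int, height_px: int) -> float:
    """Minimum label area in pixels under the 10% visual rule."""
    return 0.10 * width_px * height_px

def min_disclosure_seconds(clip_seconds: float) -> float:
    """Minimum disclosure duration under the 10% audio rule."""
    return 0.10 * clip_seconds

# A 1920x1080 image needs a label covering at least 207,360 of its
# 2,073,600 pixels; a 60-second clip needs at least 6 seconds of disclosure.
print(min_label_area(1920, 1080), min_disclosure_seconds(60))
```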


Why This Matters for You

By mandating unerasable metadata, the government is building a system where "anonymity" is no longer a shield for those spreading AI-driven misinformation. If a viral video of a public figure is fake, the underlying metadata will provide the "smoking gun" needed to trigger the mandated 3-hour takedown and potential criminal proceedings under the Bharatiya Nyaya Sanhita, 2023.  

For the average user, it means the "Guessing Game" is over. Whether it is a visible tag or an invisible digital signature, the truth is now mandated to be part of the file itself.
