Centre Mandates AI-Generated Content to be Prominently Labelled


NEW DELHI, Feb 10: The Union Government has notified amendments requiring photorealistic AI-generated content to be prominently labelled and significantly shortening the timelines for taking down illegal material, including non-consensual deepfakes.

The changes, made to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, will come into force on February 20.

Under the amended rules, social media platforms will now have two to three hours to remove certain categories of unlawful content, a sharp reduction from the earlier 24-36 hour window. Content deemed illegal by a court or an “appropriate government” will have to be taken down within three hours, while sensitive content, such as non-consensual nudity and deepfakes, must be removed within two hours.

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 define synthetically generated content as “audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or a real-world event.”

A senior government official on Tuesday said the rules include a carve-out for touch-ups that smartphone cameras often perform automatically. The final definition is narrower than the one released in a draft version of these rules in October 2025.

Social media firms will be required to seek disclosures from users on whether their content is AI-generated. If such a disclosure is not received for synthetically generated content, the official said, firms would either have to proactively label the content or, in cases of non-consensual deepfakes, take it down.

The rules mandate that AI-generated imagery be labelled “prominently.” While the draft version specified that such a disclosure would have to cover 10% of any imagery, platforms have been given more leeway after pushing back on such a specific mandate, the official said.

As with the existing IT Rules, failure to comply with the rules could result in loss of safe harbour, the legal principle that sites allowing users to post content cannot automatically be held liable in the same way as a publisher of a book or a periodical can. “Provided that where [a social media] intermediary becomes aware, or it is otherwise established, that the intermediary knowingly permitted, promoted, or failed to act upon such synthetically generated information in contravention of these rules, such intermediary shall be deemed to have failed to exercise due diligence under this sub-rule,” the rules say, hinting at a loss of safe harbour.

The rules also partially roll back an amendment notified in October 2025, which had limited each State to designating a single officer authorised to issue takedown orders. States may now notify more than one such officer—an “administrative” measure to address the need of States with large populations, the official said.

(Manas Dasgupta)
