Legal Update

2026 IT Rules Amendment: AI-Generated Content and Intermediary Duties

The 2026 amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 introduces a stronger regulatory framework for synthetically generated content, including AI-generated audio, visual, and audio-visual material. The changes focus on labelling, platform accountability, faster response timelines, and stronger due diligence obligations for intermediaries.

The amendment formally defines “synthetically generated information” to cover audio, visual, or audio-visual content that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a way that appears real, authentic, or true.

In practical terms, this brings AI-generated media such as deepfake videos, manipulated voice recordings, and synthetic visual content clearly within the regulatory framework. At the same time, the amendment excludes routine and good-faith editing, formatting, technical correction, accessibility improvements, and educational or conceptual materials that do not create false documents or materially misrepresent content.

A major compliance requirement introduced by the amendment is the mandatory labelling of synthetically generated information. Platforms enabling such content must ensure that it is clearly and prominently identified so that users can immediately recognize that the material has been created, generated, modified, or altered using computer resources.

The amendment also requires permanent metadata or other suitable technical provenance mechanisms, including a unique identifier where technically feasible. For audio content, the disclosure must appear at the beginning of the recording in a way that is easily noticeable and clearly perceivable.
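Purely by way of illustration, a provenance record of the kind contemplated above might look like the following Python sketch. The Rules do not prescribe any particular format or field names; everything below (the JSON structure, the field names, the use of a UUID) is an assumption made for the example, not a regulatory requirement.

```python
import json
import uuid
from datetime import datetime, timezone

def make_provenance_record(tool_name: str, content_type: str) -> dict:
    """Build an illustrative provenance record for a synthetically
    generated file. The field names are assumptions for this sketch;
    the Rules leave the technical mechanism open."""
    return {
        "synthetic": True,                      # the disclosure itself
        "identifier": str(uuid.uuid4()),        # "unique identifier where technically feasible"
        "generator": tool_name,                 # tool that created or altered the content
        "content_type": content_type,           # audio / visual / audio-visual
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

record = make_provenance_record("example-voice-model", "audio")
print(json.dumps(record, indent=2))
```

In practice, such a record would be embedded as permanent metadata (or carried by an industry provenance standard) rather than stored as a loose JSON document; the sketch only shows the kind of information a compliant disclosure could carry.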

Intermediaries that offer tools or computer resources enabling users to create, generate, modify, alter, publish, transmit, share, or disseminate synthetically generated information must now deploy reasonable and appropriate technical measures, including automated tools or other suitable mechanisms.

These measures are intended to prevent the creation or dissemination of unlawful synthetic content, particularly content involving:

  • child sexual exploitative and abuse material,
  • non-consensual intimate imagery,
  • obscene, pornographic, or sexually explicit content,
  • false documents or false electronic records,
  • content relating to explosive materials, arms, or ammunition, and
  • false portrayals of natural persons or real-world events likely to deceive.
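In engineering terms, the "reasonable and appropriate technical measures" described above amount to a pre-publication gate over these prohibited categories. The sketch below is a minimal illustration of that idea only; the category names and the `classify` placeholder are assumptions for the example, and a real deployment would rely on purpose-built detection tooling, not this stub.

```python
# Illustrative pre-publication gate over the prohibited categories
# listed in the amendment. The classifier itself is out of scope:
# `classify` is a placeholder for whatever automated tool or other
# suitable mechanism an intermediary actually deploys.

PROHIBITED_CATEGORIES = {
    "csam",                  # child sexual exploitative and abuse material
    "ncii",                  # non-consensual intimate imagery
    "sexually_explicit",     # obscene, pornographic, or sexually explicit content
    "false_document",        # false documents or false electronic records
    "arms_explosives",       # explosive materials, arms, or ammunition
    "deceptive_portrayal",   # false portrayals likely to deceive
}

def classify(content: bytes) -> set[str]:
    # Placeholder: a real system would call detection models or
    # hash-matching services and return the categories it flags.
    return set()

def may_disseminate(content: bytes) -> bool:
    """Allow creation or dissemination only if no prohibited
    category is detected."""
    return not (classify(content) & PROHIBITED_CATEGORIES)
```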

Significant social media intermediaries that allow content to be displayed, uploaded, or published must now require users to declare whether the content is synthetically generated. The platform is also expected to deploy appropriate technical measures to verify the accuracy of that declaration.

Where the declaration or verification confirms that the content is synthetic, the intermediary must ensure that it is clearly and prominently labelled before publication. This places responsibility not only on the user, but also on the platform to take reasonable and proportionate technical steps to verify and display such disclosures.

The amendment also tightens response timelines under the Rules. In rule 3, the period for certain takedown-related action has been reduced from thirty-six hours to three hours, and other complaint-handling and reporting timelines have likewise been shortened.

This signals a stronger compliance expectation for intermediaries and reflects the government’s intention to require faster response to harmful or unlawful online content, particularly where synthetic media may create immediate harm or deception.

The amendment makes it clear that misuse of synthetically generated information may attract penalties or punishment under multiple laws, including the Information Technology Act, the Bharatiya Nyaya Sanhita, the Protection of Children from Sexual Offences (POCSO) Act, the Representation of the People Act, the Indecent Representation of Women (Prohibition) Act, and the Sexual Harassment of Women at Workplace Act.

Platforms may suspend or terminate user accounts, remove or disable access to unlawful content, preserve evidence, and in appropriate cases identify the user and disclose the identity of the violating user to victims or authorities in accordance with law.

Overall, the amendment represents a significant step in regulating AI-generated and synthetic content in India. It strengthens platform accountability, requires more visible disclosure of synthetic media, clarifies user liability, and introduces stricter due diligence obligations for intermediaries.

In effect, the 2026 rules seek to improve transparency around deepfakes and AI-generated content while creating a stronger enforcement framework for digital harms arising from deceptive synthetic media.
