The Ministry of Electronics and Information Technology (“MeitY”) has released the draft amendment to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Intermediary Rules”), seeking to regulate the creation and dissemination of ‘synthetically generated information’, including AI-generated, algorithmically generated and deepfake content (“Proposed Amendment”). The Proposed Amendment marks a significant development as it reflects India’s first definitive step towards regulating generative AI tools and synthetic media under the framework of the Information Technology Act, 2000 (“IT Act”).
The Proposed Amendment has been issued under the rule-making powers conferred by Section 87(2)(z) and (zg) of the IT Act, read with Section 79, which provides conditional safe harbour protection to intermediaries. The accompanying Explanatory Note (also released by MeitY) clarifies that the Proposed Amendment is intended to counter the growing risks of deepfakes, AI-driven misinformation, impersonation and non-consensual synthetic media. The stated purpose is to ensure that users can distinguish between authentic and AI-generated content, in line with international developments mandating provenance and transparency in AI-generated online content.
The Proposed Amendment in its current form raises several interpretive and operational concerns, particularly in relation to the definition, applicability and technical feasibility of compliance, which we have summarized below.
Overview of the Proposed Amendment and its implications
(a) New Definition of ‘Synthetically Generated Information’
The Proposed Amendment introduces a new definition under Rule 2(1)(wa), describing ‘synthetically generated information’ as any information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true.
While the Explanatory Note states that the Proposed Amendment is intended to capture harmful or deceptive content such as deepfakes, and to target misinformation, reputational harm, interference with elections and similar risks, the content proposed to be regulated is significantly overbroad, encapsulating AI-generated content whether or not it is harmful or deceptive. By focusing on how content is created or altered rather than whether it causes harm or deception, the definition adopts a content-centric rather than harm-centric approach. In doing so, it accords statutory recognition to AI-generated content as a distinct legal category under the IT Act, thereby bringing within its ambit a wide spectrum of synthetic media, including not just deepfakes, but also AI-generated imagery, voice clones and other forms of generative art.
The inclusion of the phrase ‘artificially or algorithmically created, generated, modified or altered’ without any accompanying statutory or technical explanation further widens the scope of the provision. In the absence of further clarity, the expression could encompass a variety of benign and innocuous applications, such as AI-based photo enhancement, image restoration, automated translation or colour correction. Routine use-cases like AI-processed medical imaging, animated educational visuals, movies and recreational content, or even personal photographs edited through AI filters or consumer tools may fall within the regulatory net.
(b) Expanded Due Diligence Requirements for Intermediaries
The Proposed Amendment inserts a new sub-rule (3) under Rule 3, which requires intermediaries that offer computer resources enabling, permitting or facilitating the creation, generation, modification or alteration of synthetically generated information to exercise additional due diligence in relation to such information. Under this provision, intermediaries are required to:
- ensure that any such information is prominently labelled or embedded with a permanent and unique metadata tag or identifier;
- ensure that such label, metadata or identifier is visibly displayed or made audible in a prominent manner on or within the synthetically generated information, covering at least ten percent (10%) of the surface area in the case of visual media or, for audio content, during the initial ten percent (10%) of its duration (see the illustrative sketch following this list); and
- ensure that such label or identifier enables immediate identification of the content as synthetically generated information.
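To make the visual labelling threshold concrete, the following is a minimal sketch, in Python using the Pillow imaging library, of one possible reading of the ten percent surface-area requirement: a full-width banner whose height is ten percent of the image height. The banner placement, label wording and sizing are illustrative assumptions on our part; the draft rule prescribes no particular implementation.

```python
# Illustrative sketch only: one possible reading of the 10% visual-labelling
# requirement, using the Pillow imaging library. Banner placement, wording
# and sizing are assumptions, not prescribed by the draft rule.
from PIL import Image, ImageDraw

LABEL_TEXT = "Synthetically generated information"  # hypothetical label wording

def apply_visible_label(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGB")
    w, h = img.size
    # A full-width banner with height equal to 10% of the image height
    # covers exactly 10% of the total surface area (w * 0.1h = 0.1 * w * h).
    band_h = max(1, round(h * 0.10))
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, h - band_h), (w, h)], fill=(0, 0, 0))
    draw.text((10, h - band_h + band_h // 4), LABEL_TEXT, fill=(255, 255, 255))
    img.save(dst_path)
```

Even this simple reading leaves open questions the draft does not answer, such as whether the banner may be cropped out downstream and how the threshold would apply frame-by-frame to video.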
Further, intermediaries are expressly prohibited from enabling any modification, suppression or removal of such labels, permanent unique metadata or identifiers, once embedded. This mandatory duty to embed and preserve labels brings into focus a threshold question: who does the rule actually seek to regulate, and which entities fall within the statutory definition of 'intermediary'?
Under Section 2(1)(w) of the IT Act, an intermediary is defined as any person who receives, stores, or transmits electronic records on behalf of another. It remains debatable whether AI model developers or generative platforms fall within this definition, given that such entities neither host nor transmit third-party content in the traditional sense.
The Delhi High Court in Google LLC v. DRS Logistics Pvt. Ltd. (2023 SCC Online Del 4809) underscored that intermediary status must be determined contextually, depending on whether the entity functions merely as a passive conduit or assumes an active role in content curation or modification. Applying this reasoning, most foundational AI system providers are unlikely to qualify as intermediaries; they are more aptly characterised as technology developers or service providers operating outside the IT Act's intermediary framework. The drafting of the Proposed Amendment, however, does not reflect this distinction. The Proposed Amendment is also of concern for intermediaries as commonly understood, because the inclusion of the term 'modification' in the proposed Rule 3(3) leaves it ambiguous whether minor visual alterations, such as light retouching, background blur or colour enhancement, would trigger the labelling and embedding obligations. As a result, the entities the Proposed Amendment seeks to regulate may remain unaffected, while unintended entities may be caught within the scope of the proposed law.
Further, from an implementation perspective, the requirement to display a permanent label covering at least ten percent of the visual area or duration presents significant technical challenges. Many content formats, such as textual displays, dynamic graphics and memes, are inherently unsuited to overlays of this nature. In audio content, continuous audible disclosures could compromise user experience and accessibility. While watermarking and provenance standards (such as the C2PA framework) are still evolving internationally, imposing rigid quantitative labelling thresholds across all forms of content at this stage is likely to create compliance impracticalities and inconsistent enforcement.
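The permanence problem is easy to illustrate. The sketch below, again assuming Python and the Pillow library, embeds a provenance tag as PNG text metadata; the tag names are hypothetical. Metadata of this kind is silently discarded by many re-encoding and sharing pipelines, which is why a 'permanent', non-removable identifier is difficult to guarantee without signed-manifest approaches such as C2PA.

```python
# Illustrative sketch: embedding a provenance tag as PNG text metadata with
# Pillow. Tag names and values are hypothetical. Such metadata survives only
# as long as the file is not re-encoded, so it cannot by itself satisfy a
# "permanent and non-removable" requirement.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_provenance_tag(src_path: str, dst_path: str, tool_id: str) -> None:
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("SyntheticContent", "true")      # hypothetical tag name
    meta.add_text("GeneratorIdentifier", tool_id)  # hypothetical tag name
    img.save(dst_path, pnginfo=meta)               # dst_path must be a .png
```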
(c) New Obligations for Significant Social Media Intermediaries (SSMI)
The Proposed Amendment also introduces a new Rule 4(1A), which imposes enhanced due diligence requirements on Significant Social Media Intermediaries ("SSMIs"). Under this provision, SSMIs are required to undertake specific obligations directed at synthetic or AI-generated content. They must: (i) require users to declare whether uploaded content is synthetically generated information; (ii) deploy reasonable and appropriate technical measures (including automated tools or other suitable mechanisms) to verify the accuracy of such declarations; and (iii) clearly label or otherwise disclose, prior to publication, that the content constitutes synthetically generated information, where such determination arises from user declaration or verification measures.
As per the Proposed Amendment, failure by SSMIs to comply with these obligations would amount to a breach of due diligence requirements under the Intermediary Rules, potentially resulting in loss of safe harbour protection under Section 79(1) of the IT Act. In substance, this creates a “know-your-content” regime for AI-generated material, effectively placing a provenance-verification obligation on SSMIs. This addition significantly expands the scope of intermediary due diligence under the Intermediary Rules and is likely to require substantial technical and infrastructural investment by platforms to ensure compliance.
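As an illustration of what this 'know-your-content' flow might involve in practice, the following Python sketch combines the three obligations: a user declaration, an automated verification step, and pre-publication labelling. The detection stub, threshold and output are placeholder assumptions of ours; as discussed next, the draft specifies no benchmark for what an adequate technical measure is.

```python
# Illustrative sketch of the "know-your-content" flow Rule 4(1A) appears to
# contemplate. The detector and threshold are placeholders; the draft sets
# no benchmark for an acceptable verification mechanism.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool  # obligation (i): user declaration

def detect_synthetic(content_id: str) -> float:
    """Placeholder for a platform's own detection model (obligation (ii))."""
    return 0.0  # a real system would return a model score in [0, 1]

def process_upload(upload: Upload, threshold: float = 0.8) -> Optional[str]:
    flagged = upload.user_declared_synthetic
    if not flagged:
        # Verify the accuracy of the declaration with an automated tool.
        flagged = detect_synthetic(upload.content_id) >= threshold
    # Obligation (iii): label prior to publication where content is flagged.
    return "synthetically generated information" if flagged else None
```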
The Proposed Amendment requires SSMIs to adopt ‘reasonable and appropriate technical measures’, yet neither the text of the provision nor the accompanying Explanatory Note specifies what these measures entail, how compliance will be assessed, or which watermarking or labelling techniques (as determined appropriate by each SSMI) would be considered acceptable. In the absence of clearly articulated benchmarks, intermediaries may be compelled to design their own detection and labelling systems, resulting in significant divergence in industry practices and potential inconsistency in regulatory enforcement.
This regulatory ambiguity also creates a paradox: SSMIs are legally obligated to implement effective mechanisms to detect synthetically generated information, yet risk penalties or loss of safe harbour if those mechanisms are subsequently deemed inadequate under undefined standards.
(d) Protection for Removal of Harmful Synthetic Content
The Proposed Amendment also introduces a new proviso to Rule 3(1)(b), which provides express statutory protection to intermediaries that remove or disable access to synthetically generated information on the basis of user grievances or reasonable efforts undertaken in good faith. The proviso clarifies that such removal or disabling of access shall not affect the intermediary’s exemption from liability under Section 79(2) of the IT Act. This addition effectively codifies the good faith takedown principle, assuring intermediaries that proactive moderation of harmful or deceptive synthetic content will not compromise their statutory safe harbour.
However, this provision raises structural and doctrinal questions regarding the intermediary's role under Indian law. The safe-harbour framework under Section 79 of the IT Act is premised on the intermediary functioning as a neutral, passive conduit that merely hosts or transmits third-party content without editorial control. By explicitly encouraging intermediaries to participate in the content-removal process without a supporting court order or regulatory direction, the amendment risks eroding intermediaries' neutrality.
Moreover, the proviso is broadly worded, potentially allowing intermediaries significant discretion to remove or disable access to disputed content without adequate procedural safeguards or transparency requirements.
Conclusion
The Proposed Amendment represents a pivotal step in India's evolving framework for the regulation of AI-generated and synthetic media, underscoring MeitY's intent to enhance transparency and accountability in digital content. However, its expansive definitional scope, the absence of guidance on the technical measures expected to achieve the prescribed content labelling, and its potential inconsistency with existing intermediary due diligence obligations under the Intermediary Rules raise critical questions of feasibility, proportionality and coherence within the broader statutory scheme.
As at the date of writing, the draft of the Proposed Amendment remains at the consultative stage, with comments invited from stakeholders and the public.