Mar 21, 2024

MeitY Revises AI Advisory, Does Away with Government Permission Requirement – Update

Presumably due to industry concerns, the Ministry of Electronics and Information Technology (“MeitY”) has, on March 15, 2024, issued a fresh advisory (“Revised Advisory”) in supersession of its earlier advisory dated March 1, 2024, which inter alia advised all intermediaries and platforms to obtain ‘explicit permission’ of the Government of India before using and/or making available any under-tested / unreliable Artificial Intelligence (AI) models / Large Language Models (LLM) / Generative AI models, software or algorithms to Indian users (“Erstwhile Advisory”).

Similar to the Erstwhile Advisory, the Revised Advisory has also been issued in connection with the due diligence obligations imposed upon such intermediaries / platforms under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”), framed under the Information Technology Act, 2000 (“IT Act”) and seeks to address the Government’s concerns with intermediaries and platforms implementing insufficient measures in light of the rapid development and use of AI and machine learning tools / technologies in India.

From media reports, it appears that the recommendations contained in the Erstwhile Advisory and now under the Revised Advisory emerged from ongoing discussions within MeitY in relation to formulation of the Digital India Act – a legislation expected to regulate the development and use of AI and machine learning tools / technologies in India, which has been in the works for some time and is likely to be released for consultation in July 2024. In this regard, MeitY had indicated that it may release amendments to Rule 3(1)(b) of the IT Rules that require intermediaries to implement prescribed due diligence measures in respect of AI models in order to continue to benefit from their safe harbor protections, should the finalization of the Digital India Act take longer than expected[1].

In the meantime, the MeitY recommendations have created some churn within the technology sector in India, given the permission framework contained under the Erstwhile Advisory, the successive clarifications on its intended scope by the Minister of State for Electronics & IT, Mr. Rajeev Chandrasekhar, and the Union Minister for Electronics & IT, Mr. Ashwini Vaishnaw, and the subsequent supersession and replacement of the Erstwhile Advisory with the Revised Advisory.

The key highlights of the Revised Advisory, the extent to which it liberalizes the Erstwhile Advisory and the corresponding issues for reflection, are as under:

(a) Who does it apply to? Similar to the Erstwhile Advisory, the Revised Advisory has been issued to all ‘intermediaries’ and ‘platforms’.

  • The term ‘platform’ is not defined under the IT Act or the IT Rules, and therefore it is presently unclear which entities will be regulated under this category.
  • The concept of an intermediary, however, is long established under the IT Act, wherein it is defined as any entity which receives, stores, or transmits messages, data or content on behalf of another entity. Typically, this includes search engines, web hosting service providers, internet service providers, social media platforms, e-commerce platforms, etc.
  • Immediately following the release of the Erstwhile Advisory, the Minister of State for Electronics & IT, Mr. Rajeev Chandrasekhar, clarified in a social media post that the recommendations contained in the Erstwhile Advisory are aimed only at ‘significant and large platforms’ and will not apply to ‘start-ups’.[2] This added further doubt, as neither of these terms is defined under the IT Act or the IT Rules. Start-ups can encompass a wide range of entities, including those with large user bases and/or a significant electronic presence, potentially creating confusion about which start-ups would be exempt.
  • However, the concept of a ‘significant social media intermediary’ has been defined under the IT Rules as an intermediary having a user base above a threshold that may be specified by the Government of India. On March 04, 2024, the Union Minister of Communications, Electronics and Information Technology clarified that the recommendations contained in the Erstwhile Advisory are intended to apply only to such regulated entities.[3] This position has been affirmed in subsequent public statements by MeitY officials.
  • While MeitY has released these clarifications subsequent to the issuance of the Erstwhile Advisory, none of these clarifications have been formally incorporated in the Revised Advisory.

(b) What does the Revised Advisory stipulate? The Revised Advisory recommends the following compliances for intermediaries or platforms:
  • Prior permission before launch of untested / unreliable AI models – done away with – The Erstwhile Advisory required intermediaries and platforms to obtain explicit permission of the Government of India before making any ‘untested’ or ‘unreliable’ AI models / LLMs / generative AI / software / algorithm available to Indian users. This requirement has now been done away with.

This grants relief to intermediaries and platforms that make AI models available to users in India.

  • Use of AI models / LLMs / Generative AIs / software(s) or algorithm(s) to be compliant with IT Rules and IT Act – Intermediaries / platforms must ensure that use of AI models / LLMs / generative AI / software / algorithm either on or through their computer resource does not permit their users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content as outlined under Rule 3(1)(b) of the IT Rules or violate any other provision of the IT Act and other laws in force.

As background, Rule 3(1)(b) of the IT Rules contains prescriptive guidance on the nature of content or information that will be considered violative in this context, which includes:

  • unauthorized third-party information (i.e., information belonging to another person to which the user has no right);
  • any misinformation or information which is patently false and untrue or misleading, or in respect of any business of the Central Government – that has been identified as fake or false or misleading by such fact check unit of the Central Government as may be notified in the Official Gazette;
  • information which infringes patent, trademark, copyright, or other proprietary rights;
  • information that impersonates another person;
  • information that threatens the unity, integrity, defence, security of India, friendly relations with foreign States, or public order, or prevents investigation of any offence; or
  • information that violates any law for the time being in force.

Accordingly, intermediaries are advised to ensure that the use of any AI technology – whether in the form of LLMs, generative AI models, or even rudimentary and/or less complex AI software / algorithms, and whether used by the intermediary itself or made available for use through its computer resource – restricts its users and subscribers from hosting or publishing any content that could be classified as unlawful under the aforementioned provisions of the IT Rules, the IT Act and any other law in force. This provision is broad enough to cover all kinds of AI tools beyond LLMs, and could therefore apply to text, images, or other content generated by users relying on such AI technologies; it will necessitate intermediaries / platforms examining the extent to which AI technologies are capable of being leveraged by their user base on their platforms.

This provision also creates uncertainty for intermediaries that may not themselves use such AI tools on their platforms but are unable to fully or materially restrict their users / subscribers from posting, uploading, or publishing content created with the aid of such AI technology, leaving open the question of whether such intermediaries must nonetheless remain mindful of the provisions of the Revised Advisory.

Further, while the Erstwhile Advisory advised compliance with the content-related requirements contained under Rule 3(1)(b) of the IT Rules and the IT Act, the Revised Advisory extends the scope of compliance to all other applicable provisions of the IT Rules and the IT Act and all other laws in force. The guidance is therefore expansive, to say the least.

  • Restrict any bias or discrimination – Intermediaries or platforms are advised to ensure that their computer resource, including through the use of AI models / LLMs / Generative AI / software / algorithm, does not permit any bias or discrimination, or threaten the integrity of the electoral process. It is pertinent that the kind of content that could potentially be discriminatory or biased, or that could influence or impact the electoral process, is not specifically identified as unlawful under Rule 3(1)(b) or other provisions of the IT Rules or the IT Act, or under the Erstwhile / Revised Advisory. Therefore, intermediaries, particularly social media intermediaries, will have to carefully examine whether the content visible from or published on their platforms falls within the above restriction.

While this recommendation remains unchanged from the Erstwhile Advisory, it is also pertinent to note that what qualifies as ‘bias’ or ‘discrimination’ has not been explained under the Erstwhile Advisory or the Revised Advisory and will require intermediaries to adopt their own internal assessment, which naturally could be subjective in nature.

  • Label AI models – Intermediaries and platforms are advised to label the possible inherent fallibility or unreliability of the output generated from the AI models and implement a consent mechanism that explicitly informs users that the content is derived from an AI technology. This will be applicable where an AI tool or application has been determined to be untested or unreliable, i.e., the same category of models that was previously subject to the permission requirement under the Erstwhile Advisory.
  • Create user awareness by amending terms of service – Intermediaries or platforms must inform users about the consequences of dealing with unlawful information on their platform, including disabling of access, removal of content, suspension or termination of user accounts, and punishment under applicable law. This awareness needs to be ensured by updating the terms of service and user agreements. This remains unchanged from the Erstwhile Advisory and is a reiteration of the due diligence obligation described under Rule 3(1)(c) of the IT Rules.
  • Ensure labelling or embedding unique metadata / identifier for potential misinformation or deepfakes – If any intermediary, through its software or any other computer resource, permits or facilitates the synthetic creation, generation or modification of text, audio, visual or audio-visual information in a manner that can potentially result in the creation of misinformation or deepfakes, then such content should either be labelled or embedded with a permanent unique metadata / identifier, such that it is possible to identify that the information has been created, generated or modified using the computer resource of the intermediary, or to identify the user of the software or computer resource that has effected the change. Essentially, intermediaries operating in India will need to implement watermarking and/or labelling technology to identify content that has been altered or synthetically created, whether by tools available on their platform or otherwise uploaded by users publishing or hosting content on their platforms.

It may be relevant to note that the EU AI Act, being the foremost legislation intended to regulate the offering of AI technology in the European Union, requires textual and audio-visual output from high-risk uses to be labelled. It also mandates Gen AI developers to enable tracing of the origin of such AI-generated content by using watermarking and metadata identification technology. These measures are intended to make it easier to discern a deepfake from legitimate content. The European Commission has, however, recognized that watermarking technology which enables identification of origin is at a nascent stage and therefore lacks global standards, especially as an AI watermark can easily be removed at the point of origin, making it difficult for content hosting platforms to identify its provenance with accuracy. The technology also lags for other forms of media, such as audio and audio-visual content.

The US administration is similarly working on identifying the existing standards and practices for authenticating content and tracking its origin, and techniques for labelling synthetic content, such as watermarks.

Therefore, this provision is interesting as it mandates either watermarking or labelling for all types of content generated by any kind of AI technology, even if such content is merely ‘permitted’ or ‘facilitated’ to be made available through a software or the computer resource of such platforms, where such content could potentially be a deepfake or misinformation. There is also no corresponding guidance from MeitY or other regulators on the accepted forms of labelling and/or watermarking practices that may be relied upon by intermediaries, or on whether there will be some dispensation depending on the type of AI-generated media or the role of different types of intermediaries in allowing such content to be disseminated from their platforms. For instance, labelling is mostly possible at the stage of developing or training the model, and platforms that are not Gen AI developers will not exercise oversight at that stage.

In view of the above, given its expansive scope and the developing nature of watermarking technology, it remains to be seen what threshold of compliance MeitY will expect for the purposes of the Revised Advisory. In the near future, we expect MeitY to engage with the private sector to develop or establish consensus on best practices in this area.
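By way of illustration only, the sketch below shows one rudimentary way an intermediary could embed a unique provenance identifier into AI-generated image output using PNG text metadata. The field names (“ai-generated”, “generator”, “provenance-id”) and the choice of PNG text chunks are our own assumptions for the example; neither the Revised Advisory nor MeitY prescribes any particular format, and more robust approaches (such as industry provenance standards or imperceptible watermarks) may ultimately be expected.

```python
# Illustrative sketch only: embedding a provenance identifier into an
# AI-generated PNG using Pillow. Field names and approach are assumptions
# for the example, not a prescribed or endorsed standard.
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_provenance(image: Image.Image, path: str, generator: str) -> str:
    """Save an image with embedded metadata marking it as AI-generated."""
    provenance_id = str(uuid.uuid4())            # unique identifier for this output
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")    # label indicating synthetic origin
    metadata.add_text("generator", generator)    # which model / tool produced it
    metadata.add_text("provenance-id", provenance_id)
    image.save(path, pnginfo=metadata)
    return provenance_id


def read_provenance(path: str) -> dict:
    """Read back any embedded text metadata from a PNG file."""
    return dict(Image.open(path).text)


if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), color="white")  # stand-in for model output
    save_with_provenance(img, "output.png", generator="example-model-v1")
    print(read_provenance("output.png"))
```

As the discussion above notes, metadata of this kind can be stripped simply by re-encoding or editing the file, which is precisely why the durability of watermarking and provenance techniques remains an open question.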

(c) What is the timeline for compliance? Entities covered by the Revised Advisory are advised to ensure immediate compliance. However, the requirement to submit an action taken-cum-status report to MeitY as was envisaged under the Erstwhile Advisory has now been done away with.

Please note that, absent a specific clarification from MeitY on the scope of applicability of the Revised Advisory, the compliances summarized in para (b) above continue to be of interest for all intermediaries, even if they are non-binding in nature (refer to para (d) below).

(d) Is the Revised Advisory binding? In our view, given that the Revised Advisory contains provisions that are recommendatory in nature and have not been issued under an applicable provision of the IT Act, its effect is not binding on intermediaries or platforms. This has also been confirmed by the Union Minister of Communications, Electronics and Information Technology, Ashwini Vaishnaw, in his recent media statement[4].

Recommendations and way forward:

The Revised Advisory still leaves ample room for interpretation, particularly in respect of its applicability, which could lead to potential inconsistencies in implementation across different entities, intermediaries, or platforms.

While the Revised Advisory remains recommendatory in nature, it does set the stage for most private sector entities to consider engaging with the Government of India to understand the likely trajectory of AI regulation in India, or to set out their position on voluntary compliance with the provisions of the Revised Advisory. It would be helpful for the sector to outline the categories and types of AI technologies being deployed before Indian users, so that prospective legislative intervention in this area is reflective of the nuanced applications being deployed in the name of AI technology in India.

Footnotes:

[1] Govt may amend IT Act to add new rules for AI, GenAI models: MoS IT Rajeev Chandrasekhar, Economic Times, January 04, 2024, https://economictimes.indiatimes.com/tech/technology/govt-may-amend-it-act-to-add-new-rules-for-ai-genai-models/articleshow/106524019.cms?from=mdr

[2] Union Minister explains advisory on launch of AI platforms, India Today, March 04, 2024, https://www.indiatoday.in/india/story/union-minister-explains-centres-advisory-on-launch-of-ai-platforms-2510250-2024-03-04

[3] IT Minister Ashwini Vaishnaw clarifies that advisory on AI is not binding, aimed at social media companies, CNBC-TV18, March 04, 2024, https://www.cnbctv18.com/technology/it-minister-ashwini-vaishnaw-clarifies-that-advisory-on-ai-is-not-binding-aimed-at-socialmediacompanies-19195391.htm

[4] Govt’s AI advisory only for large platforms, not for startups: MeitY, Business Standard, March 04, 2024, https://www.business-standard.com/industry/news/govt-s-ai-advisory-only-for-large-platforms-not-for-startups-meity-124030400468_1.html

DISCLAIMER

These are the views and opinions of the author(s) and do not necessarily reflect the views of the Firm. This article is intended for general information only and does not constitute legal or other advice and you acknowledge that there is no relationship (implied, legal or fiduciary) between you and the author/AZB. AZB does not claim that the article's content or information is accurate, correct or complete, and disclaims all liability for any loss or damage caused through error or omission.