What is “deepfaked” content?
Today, AI algorithms are advanced enough to apply “deep learning” techniques: complex machine learning algorithms which use multiple layers of neural networks to analyse raw data and generate highly realistic (yet artificial) graphics, text, videos, or sounds. These technologies are what drive Generative AI. In essence, AI’s true potential is realised when a machine can analyse raw data and produce a result that is as close as possible to, or indistinguishable from, real life.
One byproduct of Generative AI is the ability to “deepfake” real persons and produce content which appears convincingly real but is entirely fictional. Deepfake content can impersonate almost anything – someone’s face, popular dialogue from a movie taken out of context, a background environment, a body, a voice, and so on. For example, numerous celebrities have been deepfaked into appearing to say or do things that they never did in real life. Given the effectiveness of deep learning algorithms, deepfaked content can easily mislead people into believing that it is real.
The pros and cons of deep learning technologies
Deep learning technologies do have positive applications. For example, the ALS Association has used voice-cloning technology to assist patients with amyotrophic lateral sclerosis (ALS) by digitally recreating their voices. Having said that, it is easy to see how deepfake technology has serious potential for misuse and abuse.
Recently, videos of Infosys founder Narayana Murthy surfaced in which he appeared to endorse automated trading applications. Earlier this month, videos of industrialist Ratan Tata, in which he appeared to give investment advice, circulated on social media platforms. A deepfaked video in which the face of an Indian actress was morphed onto another person’s body also found traction on social media. Beyond the concerns of impersonating persons and violating their privacy, deepfaked content can adversely affect the public interest: the technology can incite violence, influence elections, and spread misinformation. It is imperative that such technology is regulated.
The Indian regulatory framework
Presently, there are no laws or regulations in India which specifically target deepfaked content. The closest are Sections 66D and 66E of the Information Technology Act, 2000 (“IT Act”), which penalise, with imprisonment and a fine, a person who cheats by impersonating another and/or who publishes or transmits images of a person’s private area without their consent in electronic form. Apart from this, Sections 67, 67A and 67B of the IT Act prohibit and punish the publication or transmission of obscene or sexually explicit material. These provisions, however, are not enough to address the larger problem – how to identify and prevent the circulation of abusive deepfaked content. The Union Government (“Union”) appears keen to find a solution.
On 7 November 2023, the Union issued an advisory to social media intermediaries (“SMIs”) to identify and action, inter alia, deepfaked content (“Advisory”). The Union advised SMIs to ensure that:
- due diligence is exercised and reasonable efforts are made to identify misinformation and deepfakes, and in particular, information that violates the provisions of rules and regulations and/or user agreements;
- such cases are expeditiously actioned against and access is disabled well within applicable timelines under the IT Rules, 2021;
- SMI users are prevented from hosting such content (including deepfaked content); and
- any such content, when reported, is removed within 36 hours of the report – SMIs’ failure to act in this regard would attract Rule 7 of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, by way of which they could be charged with offences under the Indian Penal Code.
The Advisory cautioned that if SMIs fail to comply with their obligations under the IT Act and the IT Rules, 2021, they risk losing the coveted immunity from liability for illegal user-generated content under Section 79 of the IT Act.
Subsequently, on 27 November 2023, the Union announced its plans to draft new regulations and amend existing laws to combat the creation and spread of deepfaked content. The Union has made it clear that the basis of these regulations will be to identify, prevent, report, and create awareness of deepfake technologies. We will update this space based on the new regulations.
Recent developments in India
What is important to note is the role played by private stakeholders (some of whom are, incidentally, large investors in AI) in implementing preventative measures. Google advocates for responsible AI development and is in talks to collaborate with the Indian government to organise a “multi-stakeholder discussion” to address the challenges in dealing with deepfaked content. Google has also partnered with the Indian Institute of Technology, Chennai, to set up a think-tank aimed at formulating policies and guidelines for responsible use of AI technologies.
Interestingly, despite the absence of a law regulating deepfakes, the Indian judiciary has paved the way for controlling their misuse. Recently, a celebrated Indian actor, Mr Anil Kapoor, sought protection of his name, image, publicity, persona, voice and other attributes of his personality against misuse on the internet. In Mr Kapoor’s case, the defendants had used AI deepfake technology to produce derogatory content – morphing his face, personality and movie dialogues onto the torsos of other celebrities to create deepfake images and fake pornographic videos, and to sell merchandise and services. The Delhi High Court observed that the actor’s case satisfied the three-prong test for granting an injunction restraining the defendants from using Mr Kapoor’s name, image, voice, personality, etc., through technological tools such as AI, machine learning, deepfakes, and face morphing, whether for monetary gain or otherwise:
- a prima facie case of unauthorised and illegal use of his persona, image, etc.;
- the balance of convenience lies in the actor’s favour, considering the violation of his copyright and personality rights, common law rights, and right to privacy; and
- the actor would suffer irreparable loss/harm – not only economic but also social – including the infringement of his right to live with dignity.
Notably, such reliefs are not novel in India. Mr Amitabh Bachchan was granted similar relief in 2022 when his public image and popularity were used to promote products and services.
The way forward
Combating the evils of disruptive technologies such as abusive deepfakes is, at present, possible only as a reactive measure, not a proactive one. India’s existing legal regime is sufficient to penalise and/or imprison impersonators and those who spread misinformation. However, this alone cannot address the issues posed by deepfaked content, given the sheer pace at which the industry undergoes technological innovation.
It is therefore imperative that industry stakeholders (such as deep learning technology developers and SMIs) have a say in the regulatory process. Such entities are best placed to provide an accurate understanding of the technology in question at any given time. This, in turn, will assist in creating solutions to identify and report abusive and illegal deepfaked content, assess the degree of scepticism with which such content needs to be approached, and take action against it.