Cybersecurity Risks Associated With Deepfakes: A Comparative Jurisprudential Analysis Between Indian Legislations and US Draft Legislations.

Introduction

In the present era, artificial intelligence has brought about an unprecedented revolution and is regarded as the technology lying at the core of the fourth industrial revolution. It has been endowed with a new capacity: the capacity "to create". Artificial intelligence, or "AI", is a field of computer science comprising machine learning, natural language processing, voice processing, expert systems, robotics, and machine vision. As a subclass of AI, machine learning enables systems to automate decision-making by learning patterns from data rather than following only pre-determined "if-then" rules. This power of automated decision-making has enabled AI to blur the lines between fact and fiction. The AI-powered website www.thispersondoesnotexist.com, created in February 2019, is a testament to AI's power to create realistic images of people who do not exist: every time one refreshes the page, a new face appears. AI can be a source of optimism and immense benefit for society when utilized in the right direction. However, it also has the potential to create menace in society upon its misuse.

Through its creative power, AI can be the origin of infinite deepfakes used for political deception, face-swap pornography and the like, which are extremely difficult to handle in democratic societies. "I Was the Victim of a Deepfake Porn Plot Intended to Silence Me" was the title of an article written by investigative journalist Rana Ayyub for HuffPost in 2018. Ayyub received a flood of online hate after accepting invitations from major news organisations to speak on the political context surrounding the rape of an eight-year-old Kashmiri child. Her life was upended by a deepfake pornographic film that was shared online alongside rape and death threats. Recently, several Bollywood actresses have also become victims of deepfake controversies, alluding to the rampant misuse of the technology. These incidents make clear how psychologically, socially and financially devastating the problem of deepfakes can be for the victim. It is an assault on the victim's right to privacy, can ground a case for criminal defamation, and constitutes a cybercrime of profound gravity.

Having noted the plethora of cybersecurity and privacy problems associated with AI-generated deepfakes, it becomes imperative to find solutions within the prevailing legal framework. The Indian legal framework provides many provisions that can be utilized for this purpose; however, there is no specific law dealing with the issue. Reforms and improvements can be suggested by taking cues particularly from US draft legislations such as the Malicious Deep Fake Prohibition Act 2018 (US) and the Deep Fakes Accountability Act 2019 (US). Both remain bills and have not been enacted into law.

Tracing the origin of the term "deepfakes": a comprehensive overview

The origin of the word "deepfakes" can be traced back to 2017 and a Reddit user by the name of "deepfakes". Reddit banned the user in February 2018 for distributing non-consensual pornography. Twitter and the pornographic website Pornhub have likewise officially prohibited the use of deepfakes for non-consensual pornography. Deepfake content has not yet been blocked on certain less responsive platforms, such as 4chan and 8chan. "DeepNude" was an app withdrawn from the market by its creator in 2019 after widespread complaints about its offensive intended use. Tellingly, the app worked solely on images of women, evidence of how vulnerable women have become to the misuse of AI.

The Power and Control Wheel, a tool intended to identify interpersonal abuse, lists eight tactics, all of which have been shown to be employed in non-consensual pornography. Deepfake porn provides a technique to exercise this control directly without ever recording the victim. Historically, harmful adaptations of new technology have primarily targeted women. Despite this, fake porn is too frequently grouped together and passed off as merely "speech", possibly disagreeable, but not harmful, rather than distinguishing between the use of deepfakes for "experimental, expressive play" and for "non-consensual objectification and harassment."

There is therefore a dire need to recognize the technology's varied effects: the dissemination of false information to the general public, the production of hoaxes and social mayhem, medical emergencies, threats of violence and civil unrest, market manipulation and commercial fraud, voter fraud and political manipulation, and the production and distribution of "revenge porn." The victim's loss of control over his or her own image is damaging in and of itself, adding to or escalating personal injury. Once a photograph has been shared on social media, it may not be possible to take it down or erase it, at least not easily. Having understood the enormity of the problem, it is pertinent to find solutions within the prevailing legal framework through a comparative jurisprudential analysis, particularly of India and the United States.

Comparative analysis

India and the USA have similar political systems, both foundationally based on democracy and secularism. However, differences in social construct and democratic form are important factors to consider when carrying out a comparative analysis. The primary objective of this analysis is to suggest a robust framework by adapting the effective provisions of the US bills on the subject to the Indian context, and thereby to enhance the effectiveness of Indian legislation on this issue.

Legal framework of USA to deal with deep fakes

In December 2018, the US Senate received the Malicious Deep Fake Prohibition Act 2018 (US) (Senate Bill 3805) for consideration. The proposed act defines a deep fake as "an audiovisual record created or altered in a manner that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual." Section 2 of the bill punishes the offence with a maximum imprisonment of two years and makes intention an essential requirement. The offence criminalizes the creation of a deep fake with the intention to distribute it, coupled with the intention that such an act would facilitate tortious or criminal conduct under the law. Apart from the creator, it also makes liable a distributor who has actual knowledge that the record is a deep fake. Notably, the bill extends to the facilitation of a "tort" in addition to crime, which broadens the ambit under which liability arises. Further, many torts do not require intention as a prerequisite; if the word "tort" is interpreted in its generic sense, intention therefore becomes immaterial. The bill thereby follows a consequentialist theory of law, which would presume intention upon the mere commission of a tort. Moreover, the scope of the constitutive elements of the principal crime would require carefully crafted guidelines to promote wise prosecution decision-making from the outset.

Another US bill is the Deep Fakes Accountability Act 2019 (US), which provides for victim redress. However, the proposed rights only call for the gathering of pertinent data on deepfake deceptions and the appointment of an administrative coordinator to facilitate prosecutions. Section 1042 of the bill provides for the appointment of a coordinator by the Attorney General to take cognizance of violations and to hear the grievances of deepfake victims. Such a coordinator is to be appointed in each United States Attorney's office to receive reports from the public regarding the commission of the offence (relating to deepfake depictions of an intimate or sexual nature). The bill also mandates that any advanced technological false personation record containing a moving visual element carry an embedded digital watermark, and that such an altered record include at least one articulated verbal statement that identifies the record as altered and specifies the extent of the alteration.

Furthermore, the bill penalises failure to disclose the aforementioned details where the violation is committed:

  • with "the intent to harass or humiliate a person by falsely impersonating him/her into committing acts of a sexual nature or in a state of nudity";
  • "to cause violence or physical harm, incite armed or diplomatic conflict, or interfere in an official proceeding, including an election, provided the advanced technological false personation record did in fact pose a credible threat of instigating or advancing such";
  • "in the course of criminal conduct related to fraud, including securities fraud and wire fraud, false personation, or identity theft";
  • "by a foreign power, or an agent thereof, with the intent of influencing a domestic public policy debate, interfering in a Federal, State, local, or territorial election, or engaging in other acts which such power may not lawfully undertake."

The penalty includes a fine and a maximum of five years' imprisonment, while merely failing to disclose an alteration invites a fine of $150,000 per record or alteration. The bill further provides rules for injunctive relief and privacy rights, and the aggrieved may also file an in rem civil action against an advanced technological false personation record.

One of the greatest challenges in imposing liability upon the creator of deep fakes is anonymity: it is extremely difficult to trace the origin of a false personation. To address this, the bill calls for the establishment of a Deep Fakes Task Force, directed to "research and develop technologies to detect, or otherwise counter and combat, deep fakes and other advanced image manipulation methods and distinguish such deep fakes or related forgeries from legitimate audio-visual recordings or visual depictions of actual events", to "encourage efforts of the United States Government to adopt such technology", and to "facilitate discussion and appropriate cooperation between the United States Government and relevant private sector technology enterprises or other nongovernmental entities, including academic and research institutions, regarding the identification of deep fakes or other advanced image manipulation methods". Section 1028(7)(a)(5), in particular, calls for private-sector collaboration in developing a technological counter to the prevailing problem.

Legal framework in India to deal with Deep fakes

  • The Information Technology Act, 2000

The Information Technology Act, 2000 was the landmark legislation passed by the Government of India to deal with cybercrimes. Its primary objective was to implement the UNCITRAL Model Law on Electronic Commerce, published in 1996. Under Section 66E of the Act, a person who knowingly or intentionally takes a picture of, records a video of, publishes, or transmits an image of another person's private area without that person's express or implied consent may be sentenced to up to three years in prison, fined up to two lakh rupees, or both.

Section 67 of the IT Act, 2000 penalises the publication of pornographic material in electronic form, while Section 67A imposes penalties for the electronic publication of sexually explicit content. Section 67B, in turn, provides consequences where the information published on an electronic platform depicts children in a sexually explicit manner. The accused will be held accountable under Section 66C if the deepfake material makes fraudulent use of any unique identification feature, such as an individual's electronic password. Additionally, Section 66D punishes impersonating another person by means of a computer resource.

  • The Digital Personal Data Protection Act, 2023

This Act deals specifically with data protection, which is indispensable for protecting individuals' privacy and preventing their exploitation. It enshrines the duties, rights and safeguards of data fiduciaries and data principals in relation to one another. The Act requires unequivocal consent from the data principal before his or her data is processed. Further, the data fiduciary must adopt adequate security measures, in accordance with Section 8(5) of the Act, to defend against data breaches, and must delete personal data as soon as its intended purpose is achieved and its preservation is no longer required for legal reasons. However, this Act confines its ambit to the data fiduciary and the data principal, and a person who generates deepfakes will not always be a data fiduciary. More often, the creator accesses personal data uploaded on social media and then alters or modifies it in order to misuse it or commit a cyber-related crime. The Act provides certain mechanisms to deal with data breaches, but these are not sufficient to address the prevailing issue of deepfake generators.

Conclusion and Suggestions

The problem of deepfakes is crucial to address in this modern era of artificial intelligence. The foregoing discussion testifies to the pernicious global consequences that follow when deepfake pornography is shared at an exponential rate. The comparative jurisprudential analysis makes it clear that the US bills propose effective measures specific to the deepfake problem and can inform a more robust legal framework to deal with the issue. India has generic legislations which provide effective measures, but these need to be reformed in consonance with the problems caused by the misuse of artificial intelligence.

Incorporating certain provisions from the US bills, such as a definition of the offence of deepfake creation, the requirement of watermarks and articulated verbal statements, the appointment of coordinators in each state to receive reports of such incidents, and the establishment of a deepfake task force abreast of the latest technological developments, would result in a robust Indian legal framework. Conclusively, taking cues from the comparative analysis and effectively implementing the suggested reforms can help solve this enormous problem of deepfakes.
