Author: Archita Bhargava, SVKM’s Narsee Monjee Institute of Management Studies, Indore
Abstract
The advent of artificial intelligence (AI) has introduced radical transformations across industrial and social domains. Among its most consequential offshoots is Deepfake technology, a form of synthetic media capable of altering or creating highly realistic audio, video, or image content. While the technology has legitimate uses in creative work and entertainment, its misuse for defamation, identity theft, misinformation, non-consensual pornography, and political propaganda has sparked fierce legal and moral debate worldwide. In India, the existing legal framework, comprising the Information Technology Act, 2000, the Indian Penal Code, 1860, and the Digital Personal Data Protection Act, 2023, has so far proved insufficient to effectively regulate Deepfake content or redress the harms it creates. This article adopts a doctrinal and qualitative legal research approach, relying purely on secondary information gathered from statutes, case law, academic writing, and policy reports. It critically examines the extant Indian legal regime and compares it with regulatory approaches in the United Kingdom, the European Union, and the United States. The study identifies regulatory gaps, discusses relevant judgments, and offers recommendations that should inform India's legislative and policy plan of action against this rising digital threat.
Keywords: Deepfake, Artificial Intelligence, Privacy, Data Protection, Legal Regulation, Consent, Cyber Law
Introduction
Deepfake technology ranks among the most disputed byproducts of artificial intelligence in the digital age. It involves the manipulation or creation of hyper-realistic synthetic content using AI software that can substitute one person's face or voice for another's in videos, images, or audio recordings. Originally designed to improve filmmaking and video games, Deepfakes have since been grossly misused to create sexually explicit content, spread political falsehoods, commit identity fraud, and incite communal conflict through staged visuals. The mass availability of AI software and editing programs has enabled amateur users to create believable Deepfake content, widening both the scope of risk and the scale of harm. India, with its large and growing digital population and relatively weak cybercrime enforcement framework, has seen an alarming spike in Deepfake-based crime. Morphed videos of public figures, politicians, women, and celebrities have surfaced, raising serious concerns about privacy, consent, and security. Although the right to privacy, among other constitutional protections, was eloquently recognized by the Supreme Court in Justice K.S. Puttaswamy (Retd.) v. Union of India, the mechanisms for enforcing such protections against AI-based synthetic content remain fragmented and uncertain. This study examines the inadequacy of the current Indian legal framework in countering Deepfakes and discusses how other countries have adapted through comprehensive AI and internet-safety legislation. It aims to offer actionable proposals to help India develop a dedicated legal framework to regulate Deepfakes and counter the risks they pose to individual rights and public order.
Literature Review
Over the years, legal theorists, policy researchers, and technology experts have paid increasing attention to the advanced and evolving risks that Deepfake technology poses. Anirudh Rastogi, writing in 2022, observed that Deepfakes are no longer mere internet novelties but have rapidly become tools of digital harassment, identity forgery, and deliberate political disinformation. The Vidhi Centre for Legal Policy, in its 2023 briefing note, identified the acute dearth of AI-specific law in India, underscoring the country's poor readiness to counter the unique problems synthetic media poses. Likewise, although NITI Aayog's 2021 report on AI ethics identified Deepfakes as a significant risk to data integrity and public trust in digital information, it refrained from recommending concrete legislative changes or penal provisions to address these dangers. From a strictly legal perspective, Gopal Sankaranarayanan, writing in the NUJS Law Review in 2023, criticized the limited scope of Section 66D of the Information Technology Act, which penalizes identity fraud but does not clearly cover offences involving AI-generated synthetic media. Other provisions, such as Section 499 of the Indian Penal Code on defamation and Section 67 of the IT Act on the distribution of obscene electronic content, are similarly antiquated: they lack the technological specificity and enforcement teeth needed to counter the particular harms Deepfakes inflict. By contrast, other jurisdictions have moved decisively to control this new threat. The United Kingdom's Online Safety Act, 2023, for example, creates explicit criminal offences for harmful Deepfake content and obliges digital platforms to take down manipulated content promptly. The European Union's AI Act, 2024 imposes strict transparency obligations on AI systems capable of generating Deepfakes, including mandatory disclosure that content has been artificially generated or manipulated, and provides severe sanctions for non-compliance. In the United States, the proposed Deepfakes Accountability Act, 2023, while yet to receive congressional passage, proposes criminal penalties for non-consensual Deepfakes, especially those involving intimate imagery or election manipulation. It further proposes mandatory labelling of AI content through watermarks and origin disclosures so that end-users can distinguish manipulated content from the original. This comparative literature review reveals a stark regulatory vacuum in India's Deepfake governance strategy and underscores the urgent need for a dedicated, modern, AI-specific law crafted to combat these new risks.
Research Methodology
This study adopts a doctrinal and qualitative legal research methodology, relying exclusively on secondary data sources. The research involves a detailed examination of existing statutory provisions, including the Information Technology Act, 2000, the Indian Penal Code, 1860, and the newly introduced Digital Personal Data Protection Act, 2023. It also incorporates relevant judicial decisions, scholarly articles, legal commentaries, and policy papers published by governmental bodies and independent legal think tanks. No fieldwork, empirical survey, or original data collection was undertaken; all material is drawn from publicly available legal documents, peer-reviewed scholarly articles, and authoritative policy reports. The method is qualitative, relying on content analysis to critically evaluate and interpret legal provisions, case law, and policy frameworks. The research also undertakes a comparative study, measuring India's existing regulations against international legal responses to Deepfake technology, namely those of the United Kingdom, the European Union, and the United States. This comparative dimension enables the study to draw lessons from international best practice and to identify gaps in the Indian system that need to be filled. Ethical compliance has been maintained by relying strictly on publicly available sources and upholding academic integrity throughout the research process. Through this qualitative doctrinal analysis, the study aims to derive actionable policy recommendations for the formulation of a dedicated Deepfake regulation framework in India.
Origins and Emergence of Deepfake Technology
Deepfake technology has its roots in early work on AI-powered image and video manipulation software, originally designed for filmmaking and video games. These tools were meant to create improved visual effects and character animation, offering filmmakers and developers new avenues of creativity. The field of synthetic media changed profoundly around 2017, however, when powerful deep learning models, especially Generative Adversarial Networks (GANs), became widely accessible. These models enabled the creation of hyper-realistic synthetic videos, images, and audio recordings that could convincingly impersonate real individuals. The term 'Deepfake' itself originated on Reddit in late 2017, when an anonymous user posted AI-created adult material that mapped celebrities' faces onto the bodies of other individuals without their permission. This brought the technology onto the public radar and raised widespread fears about its potential for misuse. Globally, the first reported Deepfake abuses centred on the unauthorized production and sharing of non-consensual explicit material targeting prominent public figures. These were quickly followed by politically motivated Deepfakes, in which fabricated speeches and doctored videos of politicians were circulated to confuse or distort election narratives. Instances of identity fraud and AI-enabled voice scams followed, highlighting the technology's criminal potential. As Deepfakes gained mainstream traction during the 2019 Indian General Elections, manipulated videos of politicians began to spread on social networks. These clips were used to spread misinformation, fuel religious and communal tensions, and destroy reputations. Despite clear evidence that the technology could threaten privacy, personal dignity, and democratic processes, India's legal and regulatory infrastructure remained ill-prepared to combat this new type of digital injury. The swift proliferation of, and rising advances in, Deepfake technology have since raised urgent legal, moral, and security concerns, not just in India but internationally. Widespread, easy access to AI editing software has further democratized Deepfake production, enabling even those without advanced technical expertise to create highly believable manipulated content, considerably increasing the capacity for harm.
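For readers unfamiliar with the underlying mechanism, the sketch below illustrates the adversarial training loop at the heart of GANs: a generator learns to produce synthetic samples while a discriminator learns to distinguish them from real ones, and each improves by competing against the other. This is a minimal, hypothetical toy example (random placeholder data, arbitrary dimensions), not a working Deepfake system and not drawn from any source cited in this article.

```python
# Minimal GAN training loop (illustrative sketch only).
# Assumptions: PyTorch is installed; "real" data is random noise standing in
# for a dataset of flattened images; all sizes are arbitrary toy values.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 64, 784, 32  # e.g., 784 = a flattened 28x28 image

# Generator: maps random noise vectors to synthetic samples.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
# Discriminator: outputs the estimated probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(100):  # real systems train far longer, on real images
    real = torch.randn(batch, data_dim)        # placeholder for genuine samples
    fake = G(torch.randn(batch, latent_dim))   # synthetic samples

    # Step 1: train the discriminator to label real as 1 and fake as 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(batch, 1)) +
              bce(D(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Step 2: train the generator so the discriminator outputs 1 on fakes.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```

It is precisely this self-improving dynamic, in which the generator is optimized until its output fools an increasingly capable discriminator, that makes mature Deepfakes so difficult for both humans and detection tools to identify.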
The Regulatory Landscape in India: Gaps and Inadequacies
India's current laws, principally the IT Act, 2000 and the IPC, 1860, are not well equipped to handle crimes perpetrated using Deepfakes. While S.66D (impersonation) and S.67 (obscenity) of the IT Act, and S.499 (defamation), S.292 (obscenity), and S.354C (voyeurism) of the IPC, cover certain kinds of offences committed in cyberspace, no statute directly addresses AI-synthesized media. These laws were never drafted with the sophistication of Deepfakes in mind. The Digital Personal Data Protection Act, 2023 is a positive step for privacy regulation, but even this Act ignores AI-based content and grants no immediate relief to victims of Deepfakes, particularly those whose pictures or voices have been morphed and abused. Enforcement is a further obstacle: police and investigating agencies lack the technical equipment, legal clarity, and cyber-forensic expertise to deal effectively with Deepfake cases. Online anonymity, coupled with poor conviction rates, deprives most victims, particularly women, of justice.
International Comparative Legal Approaches
Unlike India's outdated and fragmented legal approach, many countries have adopted structured laws to regulate Deepfakes. The UK's Online Safety Act, 2023 criminalizes harmful Deepfakes, mandates labelling of AI-generated content, and holds platforms accountable. The EU's AI Act, 2024 imposes strict transparency obligations on Deepfake-generating systems, requiring disclosure, watermarking, and risk assessments. In the US, the proposed Deepfakes Accountability Act, 2023 targets non-consensual Deepfakes and mandates watermarking and traceability. These laws reflect a global shift toward AI-specific regulation and platform accountability, highlighting India's urgent need for a proactive, rights-based legal framework to address Deepfakes effectively.
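To make the watermarking and labelling obligations discussed above concrete, the sketch below shows one simple way a generator tool might embed a machine-readable disclosure into an image's metadata, and how a platform might read it back. This is a hypothetical illustration: the tag names ('ai-generated', 'provenance') are invented for this example and are not prescribed by any of the statutes discussed; the Pillow imaging library for Python is assumed.

```python
# Hypothetical AI-content disclosure via PNG metadata (illustrative only).
# Assumes the Pillow library (pip install Pillow); tag names are invented.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src_path: str, dst_path: str, origin: str) -> None:
    """Save a copy of the image with disclosure tags marking it AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")  # invented disclosure key
    meta.add_text("provenance", origin)    # e.g., the generating tool's name
    img.save(dst_path, format="PNG", pnginfo=meta)

def read_disclosure(path: str) -> dict:
    """Read back any text tags, as a platform's moderation pipeline might."""
    img = Image.open(path)
    return dict(getattr(img, "text", {}))  # PNG text chunks, if any
```

Plain metadata of this kind is trivially stripped by re-encoding, which is why regulatory schemes and industry provenance efforts generally contemplate more robust measures, such as imperceptible watermarks embedded in the pixels themselves and cryptographically signed provenance records.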
Social and Legal Implications of Unregulated Deepfakes in India
The rampant dissemination of Deepfakes in India threatens the very foundations of democracy, dignity, and privacy. In a society already struggling with misinformation, cyber harassment, and communal conflict, Deepfakes compound the problem. Non-consensual, explicit clips are most often targeted at women, politicians, and celebrities. Victims suffer long-term harm, the content circulates too rapidly to be contained, and Deepfakes are frequently used as tools of blackmail and character assassination.
Although the right to privacy was affirmed in Puttaswamy v. Union of India, no law directly prohibits harmful synthetic media, and existing provisions do not address the sophistication of AI-based content. Deepfakes are also distorting political discourse and spreading misinformation. Outdated laws, weak enforcement, and the absence of AI regulation in India leave victims vulnerable. A special-purpose legal regime is required to prohibit malicious Deepfakes, safeguard victims, and strengthen enforcement in cyberspace.
Results
The comparative legal research confirms that India's prevailing cybercrime laws are neither clear nor actionable against AI-driven synthetic media. Although provisions such as S.66D of the IT Act and S.499 of the IPC offer limited latitude against impersonation and defamation, they cannot account for the magnitude or nature of the damage in Deepfake cases. Effective enforcement is held back by outdated laws, inadequate training, and a lack of high-tech tools. Meanwhile, the UK, the EU, and the US have set up contemporary frameworks that not only criminalise specific Deepfake offences but also embrace content labelling, rapid takedowns, and platform liability, bolstering both prevention and victim remedies. The disparity between India and these jurisdictions highlights a strong imperative for change.
Discussion
The Supreme Court in Shreya Singhal v. Union of India struck down S.66A of the IT Act to protect free speech, while insisting that any restrictions must be narrowly tailored. Deepfakes, however, involve the misuse of electronic tools for impersonation, harassment, or deception, not expression. Criminalising Deepfake abuse would thus not contravene Article 19(1)(a) but would rather protect it by promoting truth and individual dignity. Furthermore, in Ritu Bhargava v. State of Uttar Pradesh, the Allahabad High Court recognized the gravity of cyber harassment against women and the need for innovation in enforcement practices; this is all the more essential where Deepfakes are used to harass women with non-consensual and obscene content. Intermediaries also matter, as Avnish Bajaj v. State (NCT of Delhi) shows: the court there held a platform's leadership accountable under S.67 for obscene material hosted on the site. That precedent can be extended to demand heightened platform responsibility for Deepfake takedowns and cooperation with law enforcement. India must also adopt victim-centric procedures, including swift registration of FIRs, specialised cybercrime units, and safe grievance mechanisms, especially for gender-based Deepfake abuse. Public legal education, together with AI content alerts such as watermarks and trace tags, can nurture digital literacy and security.
Conclusion
Deepfake technology presents a challenge that current Indian law is not well equipped to address. The paper's findings indicate an immediate need for a special-purpose, AI-focused law. Such a law must clearly define Deepfake content in all its forms and specifically criminalize the production and dissemination of non-consensual or malicious synthetic media. It must also require transparency from platforms, including content labelling and expeditious takedown processes. Equally important is equipping law enforcement agencies with the technical knowledge and tools to investigate such crimes effectively. Victims must be offered both criminal and civil remedies that are prompt and accessible. The rights to dignity and privacy were enshrined as constitutional guarantees in Puttaswamy v. Union of India, but unless the law evolves to keep pace with technological innovation, those rights will remain ineffective. A rights-based, forward-looking regime is not merely required; it is long overdue.
References
1. Indian Penal Code, No. 45 of 1860, INDIA CODE (1860).
2. Digital Personal Data Protection Act, No. 22 of 2023, INDIA CODE (2023).
3. Justice K.S. Puttaswamy (Retd.) v. Union of India, (2017) 10 S.C.C. 1 (India).
4. Shreya Singhal v. Union of India, (2015) 5 S.C.C. 1 (India).
5. Avnish Bajaj v. State (NCT of Delhi), 116 (2005) D.L.T. 427 (Del.).
6. Ritu Bhargava v. State of Uttar Pradesh, 2022 SCC OnLine All 1285 (India).
7. Anirudh Rastogi, Deepfakes and the Legal Vacuum in India, OBSERVER RES. FOUND. (2022), https://www.orfonline.org.
8. Vidhi Centre for Legal Policy, Regulating Deepfakes in India: A Legal and Technological Overview (Briefing Note 2023), https://vidhilegalpolicy.in.
9. Gopal Sankaranarayanan, Synthetic Harms: Addressing Deepfakes through Law, 16 NUJS L. REV. 67 (2023).
10. NITI Aayog, Responsible AI: Part 1 – Principles for Responsible AI, NITI.GOV.IN (2021), https://niti.gov.in.
11. Online Safety Act 2023, c. 50 (U.K.).
12. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) (EU).
13. Deepfakes Accountability Act, H.R. 5586, 118th Cong. (2023) (U.S.).
14. Robert Chesney & Danielle Keats Citron, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, 107 CALIF. L. REV. 1753 (2019).
15. Regina Rini, Deepfakes and the Epistemic Backstop, 20 PHILOSOPHERS' IMPRINT, no. 24 (2020).
16. Jan Kietzmann et al., Deepfakes: Trick or Treat?, 63 BUS. HORIZONS 135 (2020).
17. James Grimmelmann, Fake News and the First Amendment, 102 IOWA L. REV. 1023 (2017).