Attribution of Criminal Liability in Cyber Crime: Legal and Evidentiary Challenges

Author: Nilofer Naaz, Karnataka State Law University


Cybercrime
  • Cybercrime is the use of computers, other electronic devices and the internet by criminals to commit fraud and other offences against companies, consumers and individuals. It is a broad term describing criminal activity carried out on computers or over the internet.

  • Cybercrime has grown into a serious worldwide issue affecting individuals, companies and governments, owing to the rapid development of digital technologies.

  • Cybercrimes committed against persons include offences such as the transmission of child pornography and the harassment of individuals through computers, e-mail, WhatsApp and social media platforms.

  • Cybercrime presents a unique challenge to criminal law because identifying the offender is the most difficult part: anonymity tools such as fake identities, phishing, spoofed IP addresses, VPNs and anonymous social platform accounts obscure the perpetrator.

  • These difficulties give rise to the central problem known as attribution.

  • Attribution means identifying who is legally responsible for committing a cyber offence.

  • The fundamental principle of criminal liability is that there must be a wrongful act (actus reus) combined with a wrongful intention (mens rea) before a person can be held liable.


Introduction

In the contemporary digital landscape, rapid technological advancements have significantly influenced the manner in which harm is inflicted, particularly against children. What once required physical presence can now be achieved with a single click, enabling digital abuse that can cause irreparable harm. The expansion of Artificial Intelligence, deepfake technology, and encrypted digital platforms has enabled sophisticated forms of virtual abuse, including the creation and dissemination of sexually explicit deepfake content on platforms that operate anonymously. This raises significant moral, legal and policy questions that require the intervention of the law to protect the rights and dignity of children in the digital age.


Facts of the Case

Ms. Sneha Mehra, a 16-year-old award-winning school poet from the city of Bengaluru in India, is well recognized for her literary contributions, particularly in the field of women’s rights. She frequently shares video clips of her speeches on social media platforms, including Instagram and Facebook, where she enjoys a substantial following of several million people. 


In February 2025, Sneha Mehra was shocked and distressed by a video that went viral across social media, in which her face had been superimposed onto deepfake pornography. Sneha’s friends stumbled upon the video in a group chat and promptly alerted her. Subsequently, numerous copies of the deepfake video circulated across social media platforms, causing her severe distress, trauma, and a violation of her dignity.


After this incident, Mrs. Ananya Mehra, mother of Sneha, lodged a complaint with the Cyber Crime Police Station in Bengaluru, following which a First Information Report (FIR) was registered under the provisions of the Protection of Children from Sexual Offences (POCSO) Act, 2012, the Information Technology Act, 2000, and related provisions of the Digital Personal Data Protection (DPDP) Act, 2023.


Petitioner Argument

Issue 1:

Attribution of Criminal liability

How can the actual perpetrator be identified when the offence is committed using anonymous accounts, AI-generated content, VPNs and encrypted platforms?


  1. The foremost challenge in cyber offences involving deepfakes and AI-generated content lies in the accurate attribution of criminal liability, particularly when perpetrators exploit technological anonymity through VPNs, encrypted platforms and pseudonymous identities.

  2. The inability to identify the perpetrator due to anonymity tools cannot extinguish liability, especially when intermediaries possess the technical capacity and the legal obligation to assist in attribution.

  3. Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021:

  4. Intermediaries (social media platforms, messaging apps, websites) must enable traceability of originators and are legally required to provide specific user-related information such as subscriber details, login history, browser and device IDs, and IP address logs.

  5. Intermediaries must also discharge certain legal responsibilities, such as removing illegal content upon notice, appointing a grievance officer and assisting law enforcement. These constitute the minimum standard of care expected of intermediaries.

  6. They enjoy immunity under Section 79 of the IT Act, 2000 (safe harbour).

    “Section 79 – Intermediaries are not held liable for user-generated content and are treated as neutral platforms.”

  7. If a platform is notified about deepfake child pornography and does not remove the content promptly, or does not help identify the offender, it has failed in its due diligence, loses safe harbour immunity and can be held liable for enabling the offence.

  8. The Supreme Court clarified in the Shreya Singhal case that intermediaries must act upon actual knowledge; failure to act can give rise to liability.

  9. Furthermore, intermediaries do not merely host content; they also store metadata such as IP addresses, login times, device details, location information and account activity history.

  10. This data helps trace who uploaded and circulated the content.

  11. Platforms actively control how content spreads through algorithms (what goes viral), content moderation tools and ability to remove content, block accounts and limit visibility.

  12. They are therefore not passive conduits; they determine the reach and visibility of content.

  13. Intermediaries are key facilitators because they enable creation, upload and sharing, determine how widely the content spreads and have the technical ability to stop it.

  14. Thus, platforms must exercise due diligence and act when content is illegal; if they fail to do so, they cannot claim to be neutral intermediaries.

  15. Intermediaries qualify as data fiduciaries under the Digital Personal Data Protection Act, 2023, as they determine the purpose and means of processing personal data (i.e., how user data is collected, shared and stored).

  16. In this case, the face and identity of Ms. Sneha Mehra were used without consent, which is unlawful processing and reflects a failure of data protection safeguards.

  17. The platform has therefore breached its fiduciary duty and violated its obligations under the DPDP Act.


Issue 2: 

Violation of Fundamental Rights

Whether the circulation of deepfake pornography involving a minor violates the right to privacy, right to dignity, right to life and personal liberty

  1. The circulation of deepfake pornography involving a minor constitutes a gross violation of fundamental rights, particularly under Articles 14, 19 and 21 of the Constitution of India, read with Articles 39(e) and (f) of the Directive Principles of State Policy (right to dignity).

  2. In Justice K.S. Puttaswamy v. Union of India, the Supreme Court clearly recognized privacy as a fundamental right under Article 21, encompassing informational privacy, bodily autonomy and decisional freedom.

  3. Deepfake pornography non-consensually uses a person’s identity and manipulates their bodily representation, directly violating informational and bodily privacy.

  4. The Court in Francis Coralie Mullin v. Administrator, Union Territory of Delhi held that the right to life includes the right to live with human dignity.

  5. Deepfake sexual content objectifies the victim and causes reputational and psychological harm, constituting digital sexual violence.

  6. Article 21 has been expansively interpreted so that the right to life and personal liberty includes mental well-being and protection from exploitation.

  7. Since a minor is involved in this case, the harm is aggravated by heightened vulnerability and also attracts the protections of the POCSO Act, 2012.

  8. Such violations restrict the victim’s freedom of expression under Article 19 and force withdrawal from digital spaces, creating a chilling effect.

  9. Thus, the circulation of deepfake pornography involving minors is not merely a statutory offence but a constitutional wrong, infringing the rights to privacy, dignity and liberty, and warranting strict scrutiny and heightened state protection.


Issue 3:

Adequacy of Legal framework

Whether existing Indian laws adequately address AI-generated sexual exploitation, identity manipulation and large-scale digital dissemination

  1. The current Indian legal framework is fragmented and reactive, failing to adequately address the unique challenges posed by AI-generated sexual exploitation and deepfakes.

  2. DPDP Act – limited scope:

  3. This Act focuses on consent and civil penalties.

  4. It does not criminalize AI-generated sexual abuse.

  5. It lacks provisions on deepfakes or synthetic identity manipulation.

  6. Intermediary Rules – reactive nature:

  7. These rules depend on actual knowledge; platforms can deploy AI-detection tools to identify whether content has been manipulated.

  8. Platforms can block re-uploads of previously flagged content and detect suspicious upload patterns or bot activity, but the rules do not mandate these measures, and a purely reactive system is inadequate to stop rapid viral dissemination (a hash-screening sketch after this list illustrates the re-upload blocking technique).

  9. The gap across these statutes and rules is the absence of any regulation of AI tools, of strict liability and of real-time victim protection.

  10. Thus, existing laws fail to address AI-enabled exploitation, necessitating judicial intervention and regulatory evolution.
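For illustration, the re-upload blocking mentioned above can be understood as hash screening: a platform keeps fingerprints of content already flagged as illegal and checks every new upload against them. The following is a minimal sketch under stated assumptions (the store, file names and notice identifiers are hypothetical); production systems layer perceptual hashing and AI classifiers on top, since exact hashing only catches byte-identical copies.

```python
import hashlib

# Hypothetical store of SHA-256 fingerprints of content already flagged
# and removed (e.g., under a takedown notice or court order).
flagged_hashes = {
    "9f2b...placeholder...": "takedown-notice-2025-0142",
}

def fingerprint(path: str) -> str:
    """Compute a SHA-256 fingerprint of an uploaded file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def screen_upload(path: str) -> bool:
    """Return True if the upload matches previously flagged content.

    Exact hashing catches only byte-identical re-uploads; real systems
    add perceptual hashing so that re-encoded copies still match.
    """
    return fingerprint(path) in flagged_hashes
```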


Respondent Argument

Issue 1:

Attribution of criminal liability

How can the actual perpetrator be identified when the offence is committed using anonymous accounts, AI-generated content, VPNs and encrypted platforms?

  1. Criminal liability requires proof beyond reasonable doubt which includes identification of the accused and establishment of intent.

  2. In cases involving VPNs, encrypted platforms and AI-generated content, attribution becomes speculative and unreliable.

  3. Intermediaries cannot be held liable because they are mere conduits under Section 79 of the IT Act, 2000.

  4. Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, traceability is subject to technical feasibility and privacy safeguards

  5. The DPDP Act further restricts excessive data collection and protects user privacy which prevents indiscriminate surveillance.

  6. In the absence of concrete attribution, liability cannot be imposed; to do otherwise would violate the principles of criminal jurisprudence.


Issue 2: 

Violation of Fundamental rights

Whether the circulation of deepfake pornography involving a minor violates the right to privacy, right to dignity, right to life and personal liberty


  1. Liability for rights violation cannot be extended to intermediaries or the state without direct involvement.

  2. In Shreya Singhal v. Union of India, the Supreme Court held that intermediaries are liable only upon actual knowledge, i.e., through court orders or government notifications.

  3. Thus, no obligation of proactive monitoring exists, and imposing such a duty would chill freedom of speech (Article 19(1)(a)) and lead to over-censorship.

  4. The Digital Personal Data Protection Act, 2023 strikes a balance between privacy and the legitimate use of data.


Issue 3:

Adequacy of legal framework

Whether existing Indian laws adequately address AI-generated sexual exploitation, identity manipulation and large-scale digital dissemination


  1. The existing framework is comprehensive and sufficient, covering all aspects of the alleged harm.

  2. DPDP Act: the Act ensures consent-based processing, provides penalties for misuse and protects children’s data.

  3. The IT Act and the Intermediary Rules provide a takedown mechanism, ensure platform accountability and balance innovation with regulation.

  4. The criminal laws and the POCSO Act punish offences of obscenity, defamation and cheating.

  5. The issue is not the absence of law but challenges of enforcement; hence, no new legal framework is required.

  6. Liability cannot be presumed in the shadows of anonymity; criminal law demands certainty, not conjecture.

 

Attribution of Criminal Liability
  • To establish liability, the prosecution must prove control over the device or email account, even where fake identities are used.

  • The actus reus is the creation or intentional dissemination of the deepfake with knowledge that it will harm the victim.

  • The link between the offender and the digital activity must be proved beyond reasonable doubt, which is highly complex in cybercrime due to anonymity, fake identities, encrypted networks, shared digital devices and spoofed IP addresses.

  • In such cases, courts rely on a combination of digital forensic evidence—such as IP logs, metadata, device analysis, and communication records—to connect the unlawful act to a real person. However, mere association with a device or account is insufficient; the prosecution must demonstrate control, knowledge, and intent.

  • Thus, attribution of criminal liability remains a central challenge in modern cyber law, particularly in cases involving technologies like deepfakes, where identifying the true perpetrator is both technically and legally difficult.

  • Attribution is further complicated by cross-border infrastructure, which draws in the jurisdictional processes of different countries, and by crimes executed through automation (AI/bots), i.e., without the direct presence of humans.

  • Another challenge is satisfying the beyond-reasonable-doubt standard of proof; in cybercrime, most cases are dismissed for lack of direct evidence.

In the Shreya Singhal case, the Court emphasized that mere action is not enough; there must be clear intent to justify criminal punishment.


Types of Criminal Liability in Cyber Context
  1. The person who creates the deepfake, i.e., directly commits the offence

  2. The person who provides the tools used to create the deepfake

  3. Companies and intermediary platforms that host or enable the content

  4. Joint liability, i.e., multiple persons acting together


Methods of Attribution (how courts decide)

Courts rely on cumulative evidence rather than on any single piece of proof; the sketch following the list below illustrates how such signals can be correlated.

  1. Technical evidence – IP logs, device identifiers and metadata

  2. Forensic evidence – recovery of files, software used and deleted data

  3. Behavioural evidence like search history, chat messages and prior conduct

  4. Circumstantial evidence – motive, opportunity and access
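To make the idea of cumulative attribution concrete, the following is a minimal sketch, not a real forensic tool: it cross-references hypothetical platform upload logs, ISP lease records and device-forensics findings, and flags a subscriber only when all three independent signals converge. All names, records and values are illustrative assumptions.

```python
from datetime import datetime

# Hypothetical platform log: uploads of the offending video.
upload_events = [
    {"ip": "203.0.113.7", "time": datetime(2025, 2, 10, 21, 14), "account": "anon_uploader_01"},
]

# Hypothetical ISP records: which subscriber held an IP during a window.
isp_leases = [
    {"ip": "203.0.113.7", "start": datetime(2025, 2, 10, 20, 0),
     "end": datetime(2025, 2, 10, 23, 0), "subscriber": "Subscriber-41"},
]

# Hypothetical device forensics: accounts recovered from a seized device.
device_accounts = {"Subscriber-41": {"anon_uploader_01"}}

def correlate():
    """Flag a subscriber only when IP, time window AND account linkage all agree."""
    hits = []
    for event in upload_events:
        for lease in isp_leases:
            ip_match = event["ip"] == lease["ip"]
            time_match = lease["start"] <= event["time"] <= lease["end"]
            # Independent third signal: the uploading account also
            # appears on the subscriber's seized device.
            account_match = event["account"] in device_accounts.get(lease["subscriber"], set())
            if ip_match and time_match and account_match:
                hits.append((lease["subscriber"], event["account"], event["time"]))
    return hits

print(correlate())  # [('Subscriber-41', 'anon_uploader_01', datetime(...))]
```

The design point mirrors the legal standard: any single match (an IP alone, an account name alone) is insufficient; only the convergence of independent signals supports attribution.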


Evidentiary Challenges
  • Anonymity of offenders – fake accounts and encrypted systems obscure identity

  • Lack of direct evidence – reliance falls on metadata, IP logs and circumstantial digital traces

  • The accused may plead lack of exclusive control – a shared device, for instance, allegedly misused by third parties

  • Rapid viral replication makes it difficult to identify the original source and contain the damage

  • Electronic evidence must satisfy proper certification, chain-of-custody and forensic-integrity requirements; the hashing sketch below illustrates how that integrity is typically demonstrated
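Forensic integrity and chain of custody are commonly demonstrated by hashing the evidence at seizure and re-verifying that hash at every subsequent transfer. A minimal sketch, assuming hypothetical file and custodian names:

```python
import hashlib
from datetime import datetime, timezone
from typing import Optional

def sha256_of(path: str) -> str:
    """Hash the evidence file; any later alteration changes this value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

custody_log = []  # append-only record kept alongside the seizure memo

def log_transfer(path: str, holder: str, expected_hash: Optional[str] = None) -> str:
    """Record a custody event, re-verifying integrity against the seizure hash."""
    current = sha256_of(path)
    if expected_hash is not None and current != expected_hash:
        raise ValueError("Hash mismatch: evidence may have been altered in transit")
    custody_log.append({
        "file": path,
        "holder": holder,
        "sha256": current,
        "utc_time": datetime.now(timezone.utc).isoformat(),
    })
    return current

# Hypothetical usage: hash at seizure, then verify on handover to the lab.
# seizure_hash = log_transfer("seized_video.mp4", "Investigating Officer")
# log_transfer("seized_video.mp4", "Forensic Lab", expected_hash=seizure_hash)
```

An unbroken sequence of matching hashes is what lets the prosecution show the court that the exhibit analysed is the exhibit seized.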


Investigation Process in the Case
  • The first investigative step is for police to issue notices to the social media platforms to take down the deepfake content involving the child.

  • The takedown process includes removal of the content, blocking of accounts and preservation of logs, i.e., IP addresses and login data.

  • Next, investigators trace the origin of the content: the first uploader, the original source account and the timeline of dissemination (see the timeline sketch after this list).

  • After obtaining IP logs from the platforms, investigators contact internet service providers (ISPs) to identify the users; the challenge here is that VPNs and public Wi-Fi may conceal the identity behind a specific login.

  • If investigators suspect a device, they seize it and conduct a forensic examination to recover deepfake software, edited files, browser history and deleted data.

  • Deepfake forensic examination analyses AI-generation markers, editing patterns and software signatures.

  • Social media and communication analysis examines chats, emails and sharing patterns to identify the individuals who created, uploaded or knowingly circulated the content.
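At its core, tracing the first uploader from preserved platform logs is a timeline-reconstruction exercise: merge the upload events disclosed by each platform (matched, for example, by identical file hashes) and sort them by timestamp. A minimal sketch over hypothetical log entries:

```python
from datetime import datetime

# Hypothetical upload events disclosed by two platforms for the same video,
# matched across platforms by an identical file hash.
events = [
    {"platform": "PlatformA", "account": "user_122", "time": datetime(2025, 2, 11, 8, 3)},
    {"platform": "PlatformB", "account": "anon_uploader_01", "time": datetime(2025, 2, 10, 21, 14)},
    {"platform": "PlatformA", "account": "user_587", "time": datetime(2025, 2, 11, 9, 40)},
]

# Sorting by timestamp yields the dissemination timeline; the earliest
# event points to the candidate source account for further investigation.
timeline = sorted(events, key=lambda e: e["time"])
first = timeline[0]
print(f"Earliest upload: {first['account']} on {first['platform']} at {first['time']}")

# Caveat: clock skew across platforms and VPN-masked logins mean the
# earliest logged event is an investigative lead, not conclusive proof.
```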


Legal Framework Analysis
  • The Indian legal framework addressing digital sexual exploitation, privacy violations, and misuse of electronic content operates through a multi-layered domestic and international approach, combining statutory provisions, constitutional protections, and global standards such as the Budapest Convention on Cybercrime.

  • At the domestic level, the Information Technology Act, 2000 criminalizes privacy violations and circulation of obscene or sexually explicit material. Section 66E penalizes the non-consensual capture or transmission of private images, while Sections 67, 67A, and 67B address obscene content, sexually explicit material, and child sexual abuse material (CSAM) in electronic form, with stricter punishment where children are involved.

  • The POCSO Act, 2012 provides a child-centric legal framework by criminalizing sexual harassment (Sections 11–12), use of children for pornographic purposes (Sections 13–14), and even possession or storage of child sexual content (Section 15). Crucially, it extends to digital and virtual spaces, thereby covering online grooming, threats involving fabricated or morphed images, and emerging harms such as deepfake-based exploitation.

  • The Bharatiya Nyaya Sanhita, 2023 (BNS) supplements these protections by penalizing voyeurism (Section 77) and defamation (Section 356), which are often implicated in cases involving non-consensual dissemination or manipulation of intimate content.

  • From an evidentiary perspective, Section 65B of the Indian Evidence Act, 1872 ensures that electronic records are admissible in court, provided procedural requirements are met, thereby enabling effective prosecution in cybercrime cases involving digital evidence such as screenshots, metadata, and online communications.

  • At the constitutional level, Article 21 of the Indian Constitution, as interpreted in Justice K.S. Puttaswamy v. Union of India, guarantees the right to privacy, encompassing informational autonomy, bodily integrity, and protection against non-consensual digital exposure. Further, Articles 39(e) and 39(f) mandate the State to safeguard children from exploitation and ensure their dignity and development.

  • Importantly, the Budapest Convention on Cybercrime provides an influential international framework for combating cybercrime, including offences related to child pornography (Article 9) and illegal content dissemination. It emphasizes harmonization of cyber laws, international cooperation, cross-border data access, and procedural tools for electronic evidence collection. Although India is not a signatory, the Convention serves as a guiding benchmark, especially in addressing transnational cyber offences such as online sexual exploitation, where perpetrators, servers, and victims may be located in different jurisdictions.

  • Additionally, international standards such as those articulated by UNICEF define child sexual exploitation to include both contact and non-contact (digital) abuse, reinforcing the need for robust legal responses to emerging threats like AI-generated sexual content and online grooming.

While India’s domestic laws are substantively comprehensive, the absence of formal accession to the Budapest Convention on Cybercrime creates challenges in:

  • Cross-border investigation and evidence sharing

  • Real-time cooperation with foreign law enforcement

  • Standardized cybercrime procedures

Thus, aligning domestic enforcement mechanisms with Budapest Convention standards remains crucial for effectively tackling transnational digital sexual offences.


Case References
  1. Anvar P.V. v. P.K. Basheer & Ors. (2014) 10 SCC 473

The decision aligned Indian evidentiary law with technological realities, mandating verifiable certification for digital material. It remains a cornerstone precedent guiding courts on the admissibility and reliability of electronic evidence in both civil and criminal proceedings.

Held: Electronic evidence is admissible only with a Section 65B certificate, a requirement equally critical when presenting blockchain data or deepfake material in evidence.


  2. Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal

(2020) 7 SCC 1

This ruling standardized the evidentiary protocol for electronic data such as videos, call records, and digital documents. It reinforced procedural safeguards to ensure authenticity and reliability in digital evidence, influencing practices in criminal investigations, civil trials, and election petitions. The decision remains a cornerstone reference for courts and practitioners dealing with technology-based evidence in India.


  3. Shreya Singhal v. Union of India, 2015

A landmark decision of the Supreme Court of India that struck down Section 66A of the Information Technology Act, 2000 as unconstitutional. The decision affirmed the primacy of free expression online and redefined the limits of state power over digital speech in India.

Held: Struck down Section 66A IT Act but upheld Sections 67, 67A, 67B as valid restrictions.

Relevance: Confirms that obscene and sexually explicit online content can be lawfully criminalized.


  4. Justice K.S. Puttaswamy (Retd.) v. Union of India and Others (2017) 10 SCC 1


The Court unanimously held that privacy is intrinsic to life and personal liberty under Article 21 and woven through other fundamental freedoms. It overruled prior contrary rulings and affirmed privacy as central to dignity, autonomy, and individual choice. The bench emphasized that informational privacy requires protection against both state and private intrusions in a digital society.

Three-part test for privacy limitations

To justify any state intrusion, the Court established a proportionality framework requiring:

  1. Legality – a law authorizing the action;

  2. Legitimate state aim – such as security or welfare;

  3. Proportionality – the measure must be necessary and least intrusive. 


  5. State of Tamil Nadu v. Suhas Katti was a landmark 2004 Indian legal case that marked one of the world’s first convictions for cyber harassment under the Information Technology Act. It established critical legal precedent for prosecuting online defamation and harassment in India, demonstrating the enforceability of emerging cybercrime laws.


International Case Laws
  1. R v Sharpe is a 2001 decision of the Supreme Court of Canada that examined the constitutional limits of Canada’s child pornography laws under the Canadian Charter of Rights and Freedoms. The case tested the balance between freedom of expression and the state’s interest in protecting children from sexual exploitation.

Principle: Validity of laws against child pornography

Held: Restrictions justified to protect children from exploitation.

Relevance: Supports laws like Section 67B IT Act and POCSO provisions.


  2. Ashcroft v. Free Speech Coalition (2002) is a landmark U.S. Supreme Court decision that struck down parts of the Child Pornography Prevention Act of 1996 (CPPA). The Court ruled that provisions criminalizing “virtual” child pornography—images that appear to depict minors but do not involve real children—were unconstitutional under the First Amendment to the United States Constitution.

Principle: Limits on banning virtual content

Held: Struck down ban on virtual (non-real) child pornography.

Relevance: Highlights legal gaps in AI/deepfake sexual content, especially where no real child is involved.


  3. Google Spain SL and Google Inc. v. Agencia Española de Protección de Datos (AEPD) and Mario Costeja González (Case C-131/12) is a 2014 judgment of the Court of Justice of the European Union (CJEU) that established the modern “right to be forgotten.” It clarified the application of EU data-protection law to online search engines and the balance between privacy and public access to information.

Principle: Right to be Forgotten

Held: Individuals can request removal of harmful personal data.

Relevance: Important for removal of non-consensual sexual content online.


  4. Von Hannover v. Germany was a landmark 2004 judgment of the European Court of Human Rights (ECHR) concerning the balance between privacy rights and freedom of the press. The case was brought by Princess Caroline of Monaco, who sought protection from intrusive media photographs published without her consent. It established key limits on public-interest claims in celebrity reporting under Article 8 of the European Convention on Human Rights.

Principle: Protection of private life

Held: Even public figures have privacy rights.

Relevance: Reinforces protection against unauthorized image publication.


  5. United States v. X-Citement Video, Inc. (1994) was a landmark U.S. Supreme Court case interpreting the federal child-pornography statute, 18 U.S.C. § 2252. The Court held that prosecutors must prove a defendant knew both the sexually explicit nature of the material and that it depicted minors, narrowing the scope of strict liability under the law.

Principle: Knowledge requirement (mens rea)

Held: Liability requires awareness of involvement of minors.

Relevance: Influences interpretation of intent in digital sexual offences.


Judicial precedents such as Justice K.S. Puttaswamy v. Union of India establish privacy as a fundamental right, while Shreya Singhal v. Union of India validates restrictions on obscene digital content. Cases like Anvar P.V. v. P.K. Basheer ensure evidentiary reliability in cyber prosecutions. Internationally, decisions such as Google Spain v. AEPD and R v. Sharpe reinforce data protection and child safety, though Ashcroft v. Free Speech Coalition exposes regulatory gaps in addressing AI-generated sexual content. Together, these cases demonstrate both the strengths and the evolving challenges in regulating digital sexual exploitation.


Recent Cases in India
  1. Anil Kapoor v. Simply Life India & Ors. is a 2023 Delhi High Court ruling on celebrity personality rights in the digital age. The court granted Indian actor Anil Kapoor broad protection over his name, image, voice, likeness, and iconic expressions against unauthorized use, especially through artificial intelligence and online platforms.

Issue: Unauthorized use of actor’s likeness via AI/deepfake content

Held: Delhi HC granted ex parte injunction restraining use of his image, voice, and personality

Principle Established:

  • Personality rights are enforceable rights

  • Deepfakes violate privacy + commercial rights

Relevance: Landmark Indian case directly recognizing deepfake misuse as actionable harm 


  2. Amitabh Bachchan v. Rajat Nagi is a 2022 case before the Delhi High Court concerning the protection of personality and publicity rights of Indian actor Amitabh Bachchan. The matter is notable for addressing unauthorized commercial use of a celebrity’s name, image, and voice in India’s evolving right-of-publicity jurisprudence.

Issue: Misuse of celebrity identity, images, and voice (including AI manipulation)

Held: Court granted John Doe injunction against unknown persons

Principle:

  • Protection of name, voice, image, likeness

Relevance: Frequently relied upon in deepfake-related injunctions


  3. Arijit Singh v. Codible Ventures LLP

This is a 2024 Bombay High Court decision on celebrity personality and publicity rights in the context of AI voice cloning, unauthorized merchandising, and digital exploitation. It is widely regarded as India’s first major judgment squarely addressing generative AI’s misuse of a celebrity’s persona.

Issue: Unauthorized AI use of singer’s voice and identity

Held: Recognized voice and persona as protectable rights

Relevance: Extends to voice-cloning deepfakes and audio morphing


  4. Akira Nandan v. Sambhawaami Studios

The case Akira Desai @ Akira Nandan v. Sambhawaami Studios LLP & Ors. is a 2026 decision of the Delhi High Court concerning large-scale misuse of personality rights through AI-generated deepfake content. Justice Tushar Rao Gedela granted an ex parte ad-interim injunction to protect the plaintiff’s name, image, likeness, voice and persona from unauthorized AI use.

Issue: AI-generated film using a person’s face, voice, and persona

Held:

  • Court ordered takedown of AI-generated content

  • Restrained all future use of identity via AI/deepfake

Key Principle:

  • Deepfakes violate privacy, personality rights, and reputation

  • Harm is irreparable and cannot be compensated monetarily

 Relevance: Strong precedent for AI-generated sexual/morphed content cases


Scope for Reform

The scope for reform in India is broad and urgent, given the exponential growth of AI technologies:

  • Transition from reactive judicial interpretation → proactive legislative framework

  • Development of “digital personality rights” as a statutory right

  • Integration of privacy law, cyber law and criminal law into a unified AI governance model

India has the opportunity to become a global leader in AI regulation by:

  • Balancing innovation and rights protection

  • Creating technology-neutral but future-ready laws


Conclusion

The rapid proliferation of deepfakes and AI-generated content poses a profound challenge to existing legal frameworks, particularly in the domains of privacy, dignity, and sexual exploitation. While Indian courts have played a proactive role in expanding the scope of personality rights and applying existing provisions under the IT Act, BNS, and POCSO Act to emerging harms, these efforts remain inherently reactive and fragmented.

The jurisprudence reflects a clear recognition that deepfakes are not merely technological anomalies but constitute serious violations of fundamental rights under Article 21, including informational privacy, bodily autonomy, and reputation. However, the absence of a dedicated statutory regime, coupled with enforcement and evidentiary challenges, limits the effectiveness of current legal responses.

Therefore, a comprehensive and forward-looking legal framework is imperative—one that integrates criminal liability, intermediary accountability, data protection, and international cooperation. Aligning domestic law with global standards such as the Budapest Convention on Cybercrime, while embedding technological safeguards and victim-centric remedies, will be crucial in addressing the evolving landscape of AI-enabled offences.

Ultimately, the goal must be to ensure that technological advancement does not come at the cost of human dignity, privacy, and security, and that the law remains capable of effectively regulating emerging digital harms in an increasingly AI-driven world.


References
  • Indiankanoon.org

  • https://supreme.justia.com/

  • The Copyright Act, 1957 (14 of 1957)

  • Protection of Children from Sexual Offences Act, 2012 

  • The Information Technology Act, 2000

  • Bharatiya Nyaya Sanhita (BNS), 2023

  • The Constitution of India

  • Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021

  • Digital Personal Data Protection Act, 2023









