
Criminal Liability of AI in India



Author: Yuwaraj Yadav, Chaudhary Charan Singh University



Introduction

Self-driving cars have been a trend recently, but have you ever wondered what would happen if a self-driving car ran a red light or hit a pedestrian? Who would be responsible, the car owner or the company that made it? The difficulty of answering this question underlines the growing obstacle in establishing the criminal liability of Artificial Intelligence (AI). Under traditional criminal jurisprudence, both actus reus and mens rea must co-exist to establish liability. However, these concepts do not fit well when it comes to the criminality of AI: they are human concepts, and AI, being a machine, will always lack the element of mens rea.

As technology grows at a rapid pace, it is necessary to curb its misuse. Governments around the world are grappling with the same challenge, namely how to frame a comprehensive framework to govern AI and establish criminal liability, but they have largely hit a dead end. This blog post examines traditional criminal liability jurisprudence, the existing legal framework, the reasons behind the failure to formulate a robust framework, and the way forward.


Criminal Liability Jurisprudence

The jurisprudence of criminal liability in India rests on two pillars: actus reus and mens rea. Actus reus means the actual physical act, whereas mens rea represents a guilty mind. For most offences, both elements must co-exist for criminal liability to attach. Mens rea requirements such as malice, knowledge, or negligence presume human conscience and decision-making.

Since AI systems lack consciousness, emotions, and a moral compass, they lack mens rea. Higher-level AI can simulate a thought process; however, such behaviour is the result of computer programming rather than deliberate choice. Unlike a human driver, a self-driving car acts on the data fed into the system and on real-time computations rather than deciding to run a red light or kill a pedestrian. AI's capacity to produce an actus reus without mens rea therefore makes standard rules of criminal culpability difficult to apply.


Legal Framework for AI in India

India's legal system remains undeveloped and disjointed when it comes to AI. There is no explicit law in the country addressing the criminal culpability of AI; present approaches stitch together provisions from several areas of law.

The main criminal law statute in India is the Bharatiya Nyaya Sanhita, 2023. The Sanhita upholds conventional theories of criminal responsibility grounded in human action. Section 2 gives distinctly human definitions to basic terms such as "act," "omission," and "intention." Corporations may be treated as "persons" under Section 2(26), but algorithmic entities are not. Section 3(5) of the Sanhita extends jurisdiction to offences committed outside India that target computer resources within India, which may cover malicious AI activities carried out from overseas jurisdictions.

The Information Technology Act, 2000 provides another possible legal path. Unauthorised access to, or damage to, a computer resource attracts civil liability under Section 43, and the same acts become criminal offences under Section 66 if committed dishonestly or fraudulently. Section 43A introduces a negligence standard relevant to AI scenarios. These provisions do not, however, address AI systems that act lawfully yet produce undesirable, unexpected results.

The Digital Personal Data Protection Act, 2023 significantly updated India's framework for protecting personal data, and its principles may affect AI systems that handle personal information. The Consumer Protection Act, 2019 offers another possible pathway for some AI harms: the broad definition of "product liability" in Section 2(34) covers design and information defects, which might extend to faulty AI products that harm consumers.

In 2018, NITI Aayog published the National Strategy for Artificial Intelligence, and in 2021 it released the Approach Document for India, Part 1. These policy texts do not set legally binding criteria; they express governance aspirations and acknowledge ethical concerns but carry no legal force.


Difficulty in Framing the Law

The discussion above makes it clear that none of the existing legislation is well equipped to deal with AI-related offences. There are several reasons for this, the foremost being that AI is a machine. It runs on algorithms and lacks intention, morality, and feelings. Criminal law requires mens rea to commit an offence, and AI, incapable of forming intent, cannot supply it.

Secondly, establishing accountability in the case of AI is a very complex task. Who should be blamed: the owner, the company that sold the system, or the developer who created it? Advances in machine learning mean that AI systems now learn behaviours of which even their creators are unaware, making them more unpredictable. The question remains: in such a situation, who should be held accountable?

Thirdly, the field of AI is highly complex and requires expert assistance, which makes it difficult to fix liability when an offence occurs. AI has also erased territorial boundaries between countries, yet the laws of different parts of the world vary considerably, making the situation even more complex.


Way Forward

Common penalties under criminal law include the death sentence, imprisonment, and fines. AI entities could be subjected to analogous penalties with certain adjustments. For instance, permanently erasing an AI entity's program would have much the same effect as the death penalty has for a person, while temporarily suspending the program may be equivalent to imprisonment. A form of community service could likewise be devised for an AI entity.

Furthermore, India should evaluate proposals to amend current laws to confer legal personhood on sophisticated AI, which would allow direct restitution when such systems cause harm. Sectoral frameworks should be created for key application areas, such as AI in policing and autonomous vehicles, to assign criminal culpability appropriately. Regulatory sandboxes should also be established to assess new AI systems and calibrate accountability mechanisms before broad deployment. To a certain extent, these measures would curb misuse.


Conclusion

Thus, there is no shadow of doubt that rapid technological growth poses significant challenges to traditional criminal jurisprudence, which is deeply rooted in human acts and intentions. Because AI lacks consciousness, applying these traditional concepts is difficult, and the current laws fall short of addressing liability for AI-related harm. To overcome this challenge, India must introduce comprehensive regulation, including sectoral sandbox systems, and must consider an approach that grants legal personhood to AI. If AI goes unchecked, it may threaten the basic fabric of society.


References

  1. Dr. Manjit Singh, Criminal Liability in AI-enabled Autonomous Vehicles: A Comparative Study (Apr. 11, 2025, 11:20 AM), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5167278.

  2. The Bharatiya Nyaya Sanhita, No. 45, Acts of Parliament, 2023.

  3. The Information Technology Act, No. 21, Acts of Parliament, 2000.

  4. The Digital Personal Data Protection Act, No. 22, Acts of Parliament, 2023.

  5. The Consumer Protection Act, No. 35, Acts of Parliament, 2019.

  6. NITI Aayog, National Strategy for Artificial Intelligence, https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf (Apr. 15, 2025, 3:50 PM).

  7. NITI Aayog, https://www.niti.gov.in/sites/default/files/2022-11/Ai_for_All_2022_02112022_0.pdf (Apr. 15, 2025, 4:50 PM).

  8. Teena Arora & Dr. Shailja Thakur, Criminal Liability of Artificial Intelligence: A Comprehensive Analysis of Legal Issues and Emerging Challenges, 5 IJRPR 11, 1886-1889 (2024).

  9. Akanksha Priya, Criminal Accountability for AI: Mens Rea, Actus Reus, and the Challenges of Autonomous, 3 Int'l J DLR 1, 273, 278 (2025).

  10. Amit Kumar Padhy, Criminal Liability of the Artificial Intelligence Entities, 8 Nirma University Law Journal 2, 16, 20 (2019).


