AI ERROR OR HUMAN LIABILITY: A LEGAL DILEMMA

Author: Udhayanthika Shanmuganathan, Christ Academy Institute of Law


INTRODUCTION 

Complex AI systems, though built and developed by humans, make it genuinely difficult to determine how damage has been done. The black box autonomous behaviour of these machines makes outcomes unpredictable, and their continuously evolving functionality strains conventional notions of breach and the doctrines associated with them. Developing an AI system also draws numerous parties into the picture, intertwining their roles and making it harder to pinpoint liability when the system fails to function according to its code. So who will be liable for the consequences? The manufacturer, if there is a design flaw? The programmer, if the coding is defective or ill-suited to the user? The user, who bears responsibility for the judicious use of AI systems? Or is the AI flawed in itself?

 

CLASSIFICATION ACCORDING TO EXISTING LIABILITIES
  1. NEGLIGENCE - To establish negligence, the claimant must prove that the defendant owed a duty of care, which requires a proximate relationship between the parties. Holding a person liable for a machine failure is difficult, however, since none of the parties mentioned above has full control over these machines; indeed, even establishing that a failure occurred is hard, given that AI systems learn and adapt automatically. If we narrow down liability for negligence, is the manufacturer liable for producing a flawed machine? Is the developer liable for feeding in faulty code that resulted in the damage? Or is the user liable, given their responsibility for the proper and judicious use of AI systems?

  2. CONTRACT BREACHES - A breach occurs when an AI system is not fit for purpose. It can turn on various characteristics: product quality, fitness for purpose, conformity with description, and so on. Simply put, AI systems must act according to their stated description and not of their own accord.

  3. STRICT LIABILITY - Strict liability gives the claimant a right of action when a defective product has caused damage to the person or to property. A grey area remains in the law, for courts have observed that code, unless embedded in hardware, does not constitute a ‘product’. Since AI code is intangible, what then is to be considered a product?


RISK MANAGEMENT

Proper precedents in this area remain scarce, leaving us to wait for the courts to apply their reasoning to new liability frameworks. To escape the black box paradox, liability must be framed around the risks involved. A threefold liability mechanism deserves serious consideration: vicarious liability for black box agents (the ‘actants’), user liability for human interaction with AI systems (the ‘hybrids’), and collective liability for creating the black box paradox (the ‘crowd’). Several scenarios, such as machine connectivity, the big data paradox, digital hybrids, industrial or sector-specific AI hazards, algorithmic contracts, digital breaches of contract, and tort and product liability, make us wonder whether the existing models capture these intricacies. Plainly, they remain underexplored, and liability gaps demand further scrutiny as practices evolve and digitalization advances. Clinging to the traditional idea of holding an individual liable will not work for AI systems, since the human actors are being replaced by the systems themselves. It is now clear how easily liability can be escaped, so pinning it collectively does make sense, though this still depends on the damage caused to the victims. Yet if liability is limited, whole constellations of consequences will remain unnoticed.


PRODUCT LIABILITY 

The Product Liability Directive 2024/2853, which came into force on 9 December 2024, retains the strict liability regime but with significant built-in changes. It expands the notion of ‘product’ to cover software, and with it AI systems, regardless of how they are used or supplied. Small developers may still be answerable as the coders, though not for high harms such as personal injury or death. Another model suggests a threefold liability, namely the manufacturer’s, the developer’s, and the user’s liability.


NEW ERA?

The AI Act, which takes effect from 2 August 2026, will impose safety measures and conformity assessments that must be met before an AI system can be placed on the market, mainly to mitigate risk. A breach of these safety standards would accordingly attract the appropriate liability.

The new PLD is said to give businesses clarity on the liabilities they face when supplying and deploying AI systems. In return, it requires companies to make proper disclosures to consumers, including the provision of adequate user manuals. There must be a systematic analysis of digital behaviour, its risks, and the associated liabilities. The complexity lies in whether agents, robots, and other digital actors should be understood as algorithms rather than as human actors at all. Only with a proper understanding of the communication chain between these agents can the liabilities associated with it be determined.


CONCLUSION

Thus, as AI systems continue to evolve and grow, the question of who holds the liability becomes increasingly important, especially with the integration of AI into various sectors and, in some cases, its apparent capacity to supersede humans. Yet this question can only be answered by a combination of evolved laws, proper oversight mechanisms, and continued judicial pronouncements that pave the way in this constantly evolving jurisprudence.




