Author: Udhayanthika Shanmuganathan, Christ Academy Institute of Law, Bengaluru
ABSTRACT
To meet the shifting expectations of the contemporary world, the Government of India is working towards the integration of artificial intelligence (AI) systems into the country’s existing frameworks, be it healthcare, education, or defense sectors. The government seems to understand the potential of AI; hence, NITI Aayog is formulating strategies that propel incentivization policies at a national level.
As AI technologies become more and more integrated into our society, it will soon be necessary to develop ethically sound and public-friendly legislation. There are strong reasons in favor of new regulations that address the quick development of AI while simultaneously guaranteeing accountability and equity for users.
This research examines the extent to which Indian legislation applies strict liability principles to the development and use of AI technology. It assesses the possible outcomes of AI integration from technological, ethical, and legal perspectives, and proposes controls to ensure that AI systems operate within comprehensive, citizen-focused legal frameworks. The responsible use of AI involves directing these systems so as to reduce the risk of catastrophic failures with extensive negative effects, an approach that becomes essential as India transitions to an AI-driven future.
The paper argues for a broad-based approach, emphasizing that stakeholders, including government officials, should collaborate with technology leaders to develop regulations that are flexible yet sufficiently rigid to manage the conflicting self-interested motives of diverse societal sectors.
KEYWORDS: Artificial Intelligence (AI), Regulatory Frameworks, Inclusive Legal Design, Balancing Innovation and Regulation, Ethical Imperatives, Citizen Engagement, Strict Liability of Users and Developers.
INTRODUCTION
AI is influencing our lives today, from smart assistants and automated services to tailored online suggestions. This was previously regarded as a technology of the future. In response to the developing technology, the Indian government is working to harness the potential of AI. NITI Aayog, the chief planning body of India, aims to implement AI in smart cities, healthcare, agriculture, and education, automating several processes.
As AI becomes available to the general public, it becomes increasingly important to ensure these technologies are used properly. Who can be held accountable if something goes wrong? And how do we build socially just guarantees of legal protection around technology for everyone?
This paper identifies a critical gap: the lack of a dedicated, ethical, and enforceable legal framework in India to regulate the rising impact of AI technologies. It argues for the application of the legal principle of strict liability, under which harm is ascribed to the developers and users of AI regardless of whether there was malevolent intent or mere ignorance. At the same time, it acknowledges the difficulty of applying customary laws to new technology in a fast-paced world.
Ultimately, we hope that there will be cooperation among the government, technological developers, legal professionals, and ordinary citizens for inclusive legislative frameworks that are straightforward, responsive, and participatory. And the manner in which we regulate the tools as a country transitioning to an AI-empowered future will dictate the tools’ effects on our lives and society. The core objectives of this paper are to explore the need for ethical AI laws, propose strict liability as a legal principle, and promote inclusive, citizen-focused governance.
Laws need to evolve as quickly as technology so that we can safeguard the responsible advancement and use of AI. Traditional legislation often fails to account for the sophisticated risks of intelligent systems, such as latent biases, autonomous actions, and data privacy issues. Strict liability, which holds the creators and users of AI responsible even without proof of intent to cause harm, appears to be one of the few workable frameworks.
LITERATURE REVIEW
As artificial intelligence (AI) grows into areas such as healthcare, education, transportation, and smart cities, global experts are debating how best to use AI safely and justly. This review engages with challenging questions of responsibility and accountability in AI, and with strict liability principles, as part of a citizen-centered discourse on AI law-making.
Ethical Responsibility for Artificial Intelligence Algorithms
AI raises major ethical questions concerning fairness, non-maleficence, and human rights, but establishing a legal framework for these is difficult. India has no single law on AI, and existing laws, such as the Information Technology Act, 2000, do not adequately govern modern AI systems, which typically involve autonomous decision-making and the handling of personal data. The European Union, by contrast, has adopted its AI Act, which classifies AI systems by risk level and mandates transparency. Commentators such as Om Malik have advised similar approaches for creating AI law in India.
Legal Solutions: Strict Liability
Under strict liability, a person or company can be held liable for harm caused, regardless of intent. This is proposed as a suitable approach for AI because fault-based frameworks require intent or negligence to be proven, which is difficult when AI systems act autonomously. Pagallo and Balkin note that companies should bear responsibility for harmful AI systems, as this fosters the development of safe technology. The case of M.C. Mehta v. Union of India (1987), which established strict liability in environmental law, could provide a model for AI regulation.
Engaging Citizens in Drafting AI Law
Regulating AI should involve not only the government and technology firms but also the public. This is especially important in India, where AI affects different communities in markedly different ways.
AI is evolving in leaps and bounds, yet India continues to lack effective AI laws aligned with global regulations. Strict liability is one way to hold companies responsible for their actions without definitively proving intent, but more study is needed on its application to AI. Equally, public disengagement from an unstructured governance process, especially among those who champion justice and inclusivity, is a large problem facing India's AI future.
METHODOLOGY
The study is grounded in a qualitative, doctrinal, and analytical research methodology, complemented by elements of comparative law and policy analysis. The research aim is to analyze how ethical, inclusive, and legally enforceable frameworks for AI governance, in particular strict liability, could be developed within the Indian legal system, drawing lessons from global experience to assess how such frameworks can be applied in the Indian context.
1. Doctrinal Legal Research
The initial step of the study is to examine the existing Indian laws relevant to AI, viz., the Information Technology Act, 2000, the Consumer Protection Act, 2019, and environmental law jurisprudence (M.C. Mehta v. Union of India). The aim is to identify how far these laws extend to AI. The research explores legal concepts such as strict liability and absolute liability to assess how they might address some of the harm AI could cause. It also reviews AI ethics frameworks, both domestic (such as NITI Aayog's National Strategy for Artificial Intelligence) and international (e.g., the EU AI Act and the OECD Principles on AI), to identify their tenets and constraints.
2. Comparative Legal and Policy Review
The study then compares how other jurisdictions (e.g., the European Union, the United States, and Canada) are legislating on AI. This involves reviewing international rules on risk classification, accountability, and the legal standing of AI systems, and examining to what extent these international models can be adapted to the Indian legal and cultural context, given the current state of AI regulation, enforcement, and public participation in India.
Research Tools and Sources:
Primary Legal Sources: Indian statutes, case law, government reports, white papers
Secondary Legal Sources: Journal articles, legal commentaries, AI ethics reports
Comparative Data: International treaties, AI regulations, and foreign court judgments
Analytical Tools: Legal interpretive techniques, thematic content analysis, policy mapping
FINDINGS
This paper discusses the outline of AI regulation in India and the ethical and legal framework required for the responsible incorporation of AI. It presents the following chief findings and implications:
The paper makes a case for an inclusive legal paradigm that guarantees the responsible development and operation of AI technologies through accountability. Indian laws (e.g., the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 and the Personal Data Protection Bill (PDPB)) address some AI-related problems, such as liability, privacy, and transparency, only partially.
Although international frameworks such as the European Union's General Data Protection Regulation (GDPR) of 2016 and the EU AI Act are broadly appropriate models, India is free to adapt them to its socio-economic context without sacrificing technological advancement.
The paper further calls for strict liability in AI creation. It contends that developers and users of autonomous AI systems should be liable for harm caused by malfunctions or unintended consequences, even where no calibration error can be demonstrated. This is especially significant in high-risk sectors such as healthcare (e.g., robotic surgery) and transportation (e.g., autonomous vehicles).
The paper identifies a major ethical dilemma: AI increasingly makes autonomous decisions that shape people's lives, from hiring to law enforcement and credit scoring. These systems are ethically fraught even in normal operation, and malfunctions and bugs raise further issues of accountability and transparency. Examples such as the iBOT, a mobility robot for people with disabilities, and the film Ex Machina are used to illustrate the possible consequences of autonomous AI action.
Ethics grounded in fairness, transparency, and human rights should form the foundation of any framework for AI regulation.
DISCUSSION
While investigating the necessity of an all-encompassing legal architecture to govern artificial intelligence (AI) in India, this paper has brought out both the problems and the potential of balancing technological innovation against moral concerns. Although India presently has a few legal frameworks that cover some aspects of AI, such as the Information Technology Act, 2000 and the Personal Data Protection Bill (PDPB), they are quite inadequate for AI-specific challenges. These laws do not address the nuances of AI, including liability for AI systems' autonomous decisions and the ethics of bias or data misuse. The EU's General Data Protection Regulation (GDPR) and the proposed EU AI Act serve as models for AI regulation in terms of data protection, transparency, and a principles-based approach, which could be leveraged within India's legal framework.
Some experts have called for an AI Accountability Act in India that would impose strict liability for damages caused by AI systems on developers and user-side stakeholders. This approach would enable victims to claim damages for AI-based errors, much as product liability laws do. An AI Ethics Framework has also been envisioned to provide a roadmap for the ethical deployment of AI, echoing the questions raised in films like Ex Machina, where AI autonomy leads to manipulation and destruction. Such instruments would ensure that ethical factors are considered before AI technologies are deployed and would help prevent harmful societal effects.
Creative solutions such as an AI Human Rights Charter have also been proposed to protect human rights in an AI-pervaded world, guarding against discrimination and promoting human dignity and respect for privacy by machines. A more futuristic concept is legal personhood for AI: conferring a legal status on AI systems so that they can be held liable, much as companies are today. Other ideas include algorithmic transparency laws, which would require algorithms to be explained to regulators so that AI decisions are explainable and auditable when necessary, and AI Impact Assessments (AIAs), which would subject AI systems to a deep ethical and social audit before deployment.

Implementing such rules is challenging, however, because of the fast-moving nature of AI development, the complexity of AI technology, and the lack of international cooperation. Balancing the encouragement of innovation with the need for safety is a delicate tightrope walk: tipping too far in either direction risks stifling progress or enabling misuse. Public education and engagement in the regulatory process are also essential. The public must be informed of the risks and benefits of AI and given opportunities to express opinions on the policies governing its use.
Building such transparency and engagement would foster trust that AI technologies serve the greater good while respecting human rights and freedoms.
To sum up, legal and regulatory developments in India must accommodate the burgeoning impact of AI technologies in an enabling manner, drawing on international models and adapting them into flexible, ethically grounded frameworks that bridge the divide between progress and ethics for the benefit of society, without sacrificing safety, fairness, or human dignity in technology development and use.
LIMITATIONS:
The findings of this study point to important directions for constructing ethical and fair AI laws in India, especially around the idea of strict liability, but some limitations must be acknowledged:
1. No primary data collection
The study relies on secondary data such as statutes, policy trends, and academic papers; it does not draw on new data from direct interviews, surveys, or public feedback.
As a result, it may not accurately capture the views of ordinary individuals or of those directly affected by AI.
2. Limited technical depth
The study approaches AI mainly from a legal and ethical perspective. It does not examine in depth how AI technologies actually work or are built.
Some legal recommendations may therefore be harder to put into practice due to technological constraints.
3. Limited comparison with the Global South
The study compares India mainly with the EU and the US, with few examples from countries facing challenges similar to India's.
This may reduce the transferability or practical relevance of its findings for India.
4. Partial focus on civil liability
The research concentrates on civil liability, i.e., the ability of injured parties to sue for actual damages caused.
It does not address related areas such as criminal law or intellectual property.
It therefore does not cover every legal dimension of AI usage.
5. Aspirational proposals
Given the nascent state of AI law in India, many of the suggestions in this paper are aspirational end goals rather than immediately practicable solutions under current law.
Not all of these ideas will gain acceptance or be enacted into law.
CONCLUSION
Artificial intelligence (AI) is fast becoming an inextricable part of everyday life, presenting both profound opportunities and enormous ethical, legal, and societal challenges. In the Indian setting, the immense proliferation of AI requires immediate consideration of regulatory frameworks that are theoretically sophisticated as well as normatively salient and politically transformative.
One of the most promising legal approaches to handling AI risks is strict liability, which holds people liable for harm even where there was no intent to cause it. If intent is taken out of the question and liability is based primarily on outcomes, then developers and deployers of AI systems remain liable even for accidental or unforeseen harms.
This kind of framework would provide assurance, drive responsible innovation, and deter the reckless or spurious use of our most cutting-edge technologies. But regulation must go beyond legal doctrine. To be truly just, legislation must be shaped with input from those most affected: not only legal and technical experts but all citizens, civil society, and the most vulnerable communities. No governance of technology created to serve people will succeed unless people feel they have a say in how that technology is governed and used.
REFERENCES
Books
Turner, Jacob. Robot Rules: Regulating Artificial Intelligence. Springer, 2018. ISBN 9783319962344.
Abhivardhan. Artificial Intelligence Ethics and International Law. 2nd ed., BPB Publications, 2023. ISBN 9789355516220.
Reports and Articles
NITI Aayog, National Strategy for Artificial Intelligence (2018)
Bhatt & Joshi Associates, Legal Challenges in Regulating AI and Emerging Technologies in India (Feb. 1, 2025), https://bhattandjoshiassociates.com/legal-challenges-in-regulating-ai-and-emerging-technologies-in-india/.
Agarwal, Somya & Ganatra, Kavita, Tackling AI Challenges in India Towards a Legal Framework for Responsible AI (Oct. 23, 2024), https://www.legaleraonline.com/data-protection/tackling-ai-challenges-in-india-towards-a-legal-framework-for-responsible-ai-933423.
Case Laws
M.C. Mehta v. Union of India (Oleum Gas Leak case), AIR 1987 SC 1086
Charan Lal Sahu v. Union of India, (1989) 1 SCC 674
News Articles
Reuters, India Asks Tech Firms to Seek Approval Before Releasing 'Unreliable' AI Tools, Reuters (Mar. 4, 2024), https://www.reuters.com/world/india/india-asks-tech-firms-seek-approval-before-releasing-unreliable-ai-tools-2024-03-04/
Movies
Ex Machina. Directed by Alex Garland, Universal Pictures, 2015.
I, Robot. Directed by Alex Proyas, 20th Century Fox, 2004.