Author: Anukriti Katiyar, Amity University
Abstract
Artificial Intelligence (AI) systems are transforming sectors such as healthcare, finance, governance, and public services—but their increasing autonomy, opacity, and societal impact pose profound legal challenges. These include uncertainty over liability when AI causes harm; requirements for transparency and explainability; risks of bias and discrimination; data protection and privacy; intellectual property rights for AI-generated content; cross-border jurisdictional issues; and enforcement gaps due to regulatory capacity constraints.
This paper provides a comparative legal analysis of recent regulatory efforts in the European Union and India. Under the EU AI Act, legal definitions such as “AI system,” “provider,” “deployer,” and “high-risk AI system” are clearly codified, and transparency obligations require providers to ensure traceability, explainability, and disclosure when users interact with AI systems. India’s Digital Personal Data Protection Act (DPDP Act), 2023, offers cross-sectoral data protection, including consent, purpose limitation, and the rights of data principals, but lacks explicit provisions for algorithmic transparency, oversight of AI decision-making, or protection from automated profiling.
The study formulates hypotheses regarding whether jurisdictions with risk-based regulation and clear definitions (like the EU) achieve greater legal certainty and enforcement capability compared to jurisdictions where AI regulation is fragmented. Using case law, statutory analysis, policy drafts, and literature, it finds that while the EU framework sets strong legal precedents, ambiguities remain — particularly around general-purpose AI, synthetic content, and “deepfake” labeling. India shows emerging regulatory momentum but faces substantial gaps in liability law, transparency mandates, and IP law. The paper concludes by proposing a regulatory framework emphasizing precise definitions, robust transparency mandates, liability rules, privacy and IP reforms, and international cooperation to ensure AI benefits while safeguarding rights.
Keywords: Artificial Intelligence, Regulation, Liability, Transparency, Data Protection, Fairness, India
Introduction
AI systems are no longer futuristic—they are already making decisions that materially affect human lives. For example, AI tools may decide who receives a loan, which medical treatments are offered, or even assist in criminal justice. These systems raise novel legal questions around accountability, fairness, transparency, privacy, intellectual property, and more.
Regulating AI is a dual challenge: regulation must be strong enough to protect human rights, civil liberties, and public safety, yet not so heavy-handed that it stifles innovation, especially in countries still developing their technological infrastructure. India offers an instructive case: it is rapidly adopting AI across many sectors and is now confronting the attendant legal and ethical issues.
Regulatory frameworks are emerging globally to tackle these issues, but many legal challenges persist, particularly in jurisdictions still developing AI‐specific laws. This paper investigates these challenges with special focus on India, compares them with EU regulatory models, reviews recent literature, examines policy developments and cases, sets hypotheses, and proposes a model framework.
This paper examines the legal obstacles to regulating AI, with a focus on India, and puts forward proposals to build more resilient, just regulatory systems.
Literature Review
Recent scholarship has mapped many of the legal and ethical issues in AI regulation. A review, “Analyzing AI regulation through literature and current trends,” shows strong consensus that risk-based regulation is among the more promising models, though implementation and enforcement remain difficult.
Similarly, India’s Advance on AI Regulation (Mohanty & Sahu, 2024) surveys perspectives across government, industry, and civil society in India, finding gaps in stakeholder alignment and policy roadmaps.
Other works (such as Investigating the Technological, Legal, and Ethical Difficulties of AI in India) highlight how technical complexity, data privacy, bias, and cross-border issues are particularly acute in the Indian context.
Methodology
Doctrinal/normative legal analysis of statutes, policy documents, and case law in the European Union and India, including the EU AI Act, the GDPR, India’s Information Technology Act, the DPDP Act, 2023, policy drafts, court judgments, and regulatory advisories.
Comparative approach: Compare how the EU’s regulatory framework addresses liability, transparency, fairness, IP, and enforcement versus how India is addressing (or failing to address) the same issues.
Case study sample: A sample of recent Indian controversies and cases (e.g. the lawsuit by news publishers against OpenAI over training data, and India’s ongoing review of copyright law), alongside the EU AI Act’s development and implementation.
Hypotheses
H1: Regulatory frameworks that impose mandatory transparency/explainability will reduce the number of cases of unaccountable harm from AI systems.
H2: Countries with comprehensive risk-based AI laws (e.g. EU) will have more clarity on liability and stronger enforcement than countries without AI-specific laws (e.g. India).
H3: Undefined or vague criteria (e.g. what counts as high risk, what constitutes AI system) in regulation contribute to inconsistency and legal uncertainty.
Definitions and Scope
Artificial Intelligence (AI): A broad range of technologies including machine learning, deep learning, neural networks, natural language processing, computer vision, and decision systems that often adapt or learn from data, not just hard-coded rules.
Regulation: This refers to legal rules (statutes, regulations), oversight mechanisms, enforceable standards—issued by governments or international bodies. This excludes purely voluntary codes or guidelines, though those are discussed.
AI lifecycle stages: Data collection & curation; model training & validation; deployment; monitoring; decommissioning. Challenges can emerge at each stage.
High-risk AI systems: Systems whose failure or misuse could result in serious harm to persons or groups—e.g. critical infrastructure, medical diagnosis, criminal justice, public safety.
Core Legal Challenges
Liability and Accountability
When an AI system causes harm (e.g., a wrongful medical diagnosis or an autonomous vehicle accident), it is often unclear who is liable: the developer, the data provider, the deployer, or the user.
Traditional legal doctrines (product liability, tort law) assume foreseeability, control, and human agency. AI systems, especially self-learning or opaque ones, make foreseeability difficult.
There is risk of diffusion of responsibility—if many actors contribute (data, model, deployment), each may disclaim or shift liability.
Transparency, Explainability, and the “Black-Box” Problem
Many AI methods (deep neural networks, large language models) are opaque. Their decision paths may not be intelligible to users or even to developers.
Explainability is relevant for rights such as due process, administrative law, fairness. Without explanation, it is hard for affected individuals to challenge decisions.
Tension arises with trade secrets or proprietary concerns: full transparency may conflict with IP protection or business confidentiality.
Bias, Discrimination, and Fairness
AI models can inherit biases present in training data (e.g. demographic, socio-economic, gender, or caste bias) and perpetuate or amplify them.
Fairness is not a single metric; legal systems struggle to choose among competing fairness definitions (e.g. equal false positive rates vs. equal overall accuracy).
In countries like India, data for minority / rural / underserved populations may be sparse or skewed, increasing risk of bias.
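The tension between competing fairness metrics noted above can be made concrete with a small numerical sketch. All data below is hypothetical and purely illustrative: it shows how a classifier can achieve identical overall accuracy for two groups while producing unequal false positive rates, so a legal mandate built on one metric may be satisfied while another is violated.

```python
# Illustrative sketch (hypothetical data): two fairness metrics disagreeing
# about the same classifier's treatment of two demographic groups, A and B.

def false_positive_rate(y_true, y_pred):
    """Share of true negatives that were wrongly flagged positive."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    negatives = sum(1 for t in y_true if t == 0)
    return fp / negatives if negatives else 0.0

def accuracy(y_true, y_pred):
    """Share of all predictions that were correct."""
    return sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)

# Hypothetical labels (1 = e.g. loan default) and model predictions.
y_true_a = [0, 0, 0, 1, 1, 1]
y_pred_a = [0, 0, 1, 1, 1, 1]   # group A: one false positive
y_true_b = [0, 0, 0, 1, 1, 1]
y_pred_b = [0, 0, 0, 0, 1, 1]   # group B: no false positives, one miss

print(accuracy(y_true_a, y_pred_a),
      accuracy(y_true_b, y_pred_b))            # equal: 5/6 for both groups
print(false_positive_rate(y_true_a, y_pred_a),
      false_positive_rate(y_true_b, y_pred_b))  # unequal: 1/3 vs. 0.0
```

Here equal-accuracy fairness is satisfied, yet group A faces three times the false-positive burden of group B; a regulator choosing between these definitions is making a substantive policy choice, not a technical one.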
Data Protection & Privacy
AI requires large amounts of data, often personal or sensitive. There are risks of misuse, data breaches, surveillance.
Consent regimes may be inadequate: people often do not understand what they are consenting to, particularly in AI contexts. Anonymization may be imperfect, and re-identification remains possible.
Legal rules may not cover all dimensions: combining datasets, inference, profiling, predicting sensitive attributes.
Intellectual Property and Ownership of AI Outputs
Who owns the output of an AI? If an AI generates text, images, designs, or inventions, traditional IP law that presumes human authorship faces limitations.
Copyright or patent laws may not clearly define whether AI-generated content qualifies and how ownership / rights should be allocated.
Use of copyrighted works as training data: are they licensed? Is fair use/fair dealing applicable? In India, recent cases show rising disputes.
Jurisdiction, Cross-Border Issues, and Harmonization
Many AI systems are developed in one country, deployed in others; data may be stored or processed elsewhere. Which jurisdiction’s law applies?
Conflicts between laws—data protection, privacy, IP—across borders. Enforcement across jurisdictions may be very difficult.
Regulatory fragmentation offers opportunities for regulatory arbitrage: deploying AI in places with weak oversight or fewer restrictions.
Enforcement, Oversight, and Regulatory Capacity
Regulatory bodies may lack technical expertise (in AI, ML) to evaluate systems, audit them, or monitor compliance.
Rapid technology change can leave regulation behind—laws may become outdated quickly.
Resource constraints, institutional inertia, political economy (e.g. pressure from industry) can weaken enforcement.
Regulatory and Legal Landscape: Global & Indian Comparisons
EU’s AI Act: Adopts a risk-based regulatory framework. High-risk AI systems are subject to stricter obligations: transparency, human oversight, data quality, testing, etc.
United States: Much more sectoral and fragmented; laws focus on privacy (e.g. the California Consumer Privacy Act) and anti-discrimination, but there is no uniform national law specifically for AI.
Brazil, UK, Canada, etc. have various regulatory or normative frameworks. Some emphasize risk, some rely on voluntary codes.
India lags in enacting AI-specific laws, though there are several developments:
The Digital Personal Data Protection Act (DPDP Act), 2023, which regulates personal data.
Information Technology Act, 2000, and associated rules (e.g., the Information Technology Rules, 2021) cover some aspects of digital and intermediary liability but were not designed specifically for AI.
Government and policy think tanks (e.g. NITI Aayog) have published principles and proposals (e.g. “Ethical AI”, “AI for All”) but many remain non-binding.
Comparative Perspectives: EU vs India
| Aspect | EU (AI Act etc.) | India |
|---|---|---|
| Legal framework | Specific AI Act (risk-based), GDPR, strong privacy laws, non-discrimination protections. | No dedicated AI regulation yet; DPDP Act, IT Act, Copyright Act, advisories, policy proposals. |
| Liability clarity | Better defined through high-risk system obligations and conformity assessments. | Ambiguous; ongoing lawsuits and expert panels are attempting to clarify. |
| Transparency and explainability | Mandates for high-risk systems; documentation; human oversight required. | Some policy guidance, but weak legal mandates across the board. |
| Bias & fairness | Integrated into regulation, though practical enforcement and metrics remain under debate. | Large gaps; risk of bias is high given demographic and data challenges. |
| IP issues | Actively addressed; EU IP law evolving. | Early stages; litigation beginning; existing laws ill-suited to AI-generated works. |
| Enforcement & capacity | Stronger institutions, though implementation is still a work in progress. | Weaker technical capacity, limited regulatory clarity, delayed enforcement. |
Indian Case Studies & Recent Developments
Here are some concrete examples from India that illustrate the legal challenges in practice.

