Legal Challenges in Regulating AI Systems

Author: Anukriti Katiyar, Amity University

Abstract

Artificial Intelligence (AI) systems are transforming sectors such as healthcare, finance, governance, and public services—but their increasing autonomy, opacity, and societal impact pose profound legal challenges. These include uncertainty over liability when AI causes harm; requirements for transparency and explainability; risks of bias and discrimination; data protection and privacy; intellectual property rights for AI-generated content; cross-border jurisdictional issues; and enforcement gaps due to regulatory capacity constraints.

This paper provides a comparative legal analysis of recent regulatory efforts in the European Union and India. Under the EU AI Act, legal definitions such as “AI system,” “provider,” “deployer,” and “high-risk AI system” are clearly codified, and transparency obligations require providers to ensure traceability, explainability, and disclosure when users interact with AI systems. India’s Digital Personal Data Protection Act (DPDP Act), 2023, offers cross-sectoral data protection including consent, purpose limitation, and rights of data principals, but lacks explicit provisions for algorithmic transparency, oversight of AI decision-making, or protection from automated profiling.

The study formulates hypotheses regarding whether jurisdictions with risk-based regulation and clear definitions (like the EU) achieve greater legal certainty and enforcement capability compared to jurisdictions where AI regulation is fragmented. Using case law, statutory analysis, policy drafts, and literature, it finds that while the EU framework sets strong legal precedents, ambiguities remain — particularly around general-purpose AI, synthetic content, and “deepfake” labeling. India shows emerging regulatory momentum but faces substantial gaps in liability law, transparency mandates, and IP law. The paper concludes by proposing a regulatory framework emphasizing precise definitions, robust transparency mandates, liability rules, privacy and IP reforms, and international cooperation to ensure AI benefits while safeguarding rights.


Keywords: Artificial Intelligence, Regulation, Liability, Transparency, Data Protection, Fairness, India


Introduction

AI systems are no longer futuristic—they are already making decisions that materially affect human lives. For example, AI tools may decide who receives a loan, which medical treatments are offered, or even assist in criminal justice. These systems raise novel legal questions around accountability, fairness, transparency, privacy, intellectual property, and more.

Regulating AI is a dual challenge: we need regulation that is strong enough to protect human rights, civil liberties, and public safety; but not so heavy-handed that it stifles innovation, especially in countries still developing their technological infrastructure. India offers an instructive case: it is adopting AI in many sectors rapidly, and is now confronting the legal and ethical issues.

Regulatory frameworks are emerging globally to tackle these issues, but many legal challenges persist, particularly in jurisdictions still developing AI-specific laws. This paper investigates these challenges with special focus on India, compares them with EU regulatory models, reviews recent literature, examines policy developments and cases, sets hypotheses, and proposes a model framework.



Literature Review

Recent scholarship has mapped out many of the legal and ethical issues in AI regulation. A review of “Analyzing AI regulation through literature and current trends” shows strong consensus that risk-based regulation is among the more promising models, though implementation and enforcement remain difficult.

Similarly, India’s Advance on AI Regulation (Mohanty & Sahu, 2024) surveys perspectives across government, industry, and civil society in India, finding gaps in stakeholder alignment and policy roadmaps. 

Other works (such as Investigating the Technological, Legal, and Ethical Difficulties of AI in India) highlight how technical complexity, data privacy, bias, and cross-border issues are particularly acute in the Indian context. 


Methodology
  1. Doctrinal / normative legal analysis of statutes, policy documents, case law in European Union and India, including EU AI Act, GDPR, India’s Information Technology Act, DPDP Act 2023, policy drafts, court judgments, and regulatory advisories.

  2. Comparative approach: Compare how the EU’s regulatory framework addresses liability, transparency, fairness, IP, enforcement vs how India is addressing (or failing to address) the same issues.

  3. Case study sample: A sample of recent Indian controversies and cases (e.g. the lawsuit by news publishers against OpenAI over training data, and India’s review of its copyright law), alongside the EU AI Act’s development and implementation.


Hypotheses
  • H1: Regulatory frameworks that impose mandatory transparency/explainability will reduce the number of cases of unaccountable harm from AI systems.

  • H2: Countries with comprehensive risk-based AI laws (e.g. EU) will have more clarity on liability and stronger enforcement than countries without AI-specific laws (e.g. India).

  • H3: Undefined or vague criteria (e.g. what counts as high risk, what constitutes AI system) in regulation contribute to inconsistency and legal uncertainty.


Definitions and Scope
  • Artificial Intelligence (AI): A broad range of technologies including machine learning, deep learning, neural networks, natural language processing, computer vision, and decision systems that often adapt or learn from data, not just hard-coded rules.

  • Regulation: Legal rules (statutes, regulations), oversight mechanisms, and enforceable standards issued by governments or international bodies. Purely voluntary codes or guidelines are excluded from this definition, though they are discussed.

  • AI lifecycle stages: Data collection & curation; model training & validation; deployment; monitoring; decommissioning. Challenges can emerge at each stage.

  • High-risk AI systems: Systems whose failure or misuse could result in serious harm to persons or groups—e.g. critical infrastructure, medical diagnosis, criminal justice, public safety.


Core Legal Challenges

Liability and Accountability

  • When an AI system causes harm (e.g., a wrongful medical diagnosis or an autonomous vehicle accident), it is often unclear who is liable: the developer, the data provider, the deployer, or the user.

  • Traditional legal doctrines (product liability, tort law) assume foreseeability, control, and human agency. AI systems, especially self-learning or opaque ones, make foreseeability difficult.

  • There is risk of diffusion of responsibility—if many actors contribute (data, model, deployment), each may disclaim or shift liability.

Transparency, Explainability, and the “Black-Box” Problem

  • Many AI methods (deep neural networks, large language models) are opaque. Their decision paths may not be intelligible to users or even to developers.

  • Explainability is relevant for rights such as due process, administrative law, fairness. Without explanation, it is hard for affected individuals to challenge decisions.

  • Tension arises with trade secrets or proprietary concerns: full transparency may conflict with IP protection or business confidentiality.

Bias, Discrimination, and Fairness

  • AI models can inherit biases present in training data (e.g. demographic, socio-economic, gender, caste) and perpetuate or amplify them.

  • Fairness is not a single metric; legal systems struggle to choose among competing fairness definitions (e.g. equal false positive rates vs. equal overall accuracy).

  • In countries like India, data for minority / rural / underserved populations may be sparse or skewed, increasing risk of bias.
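The competing-metrics point above can be made concrete with a toy calculation (all numbers below are invented for illustration): two groups can receive identical overall accuracy from the same model while one group bears a far higher false positive rate, so a law mandating “fairness” must specify which metric it means.

```python
# Illustrative only: invented outcomes showing that "equal accuracy" and
# "equal false positive rates" are different fairness criteria that can
# conflict. 1 = flagged as risky by the model, 0 = not flagged.

def rates(pairs):
    """Return (accuracy, false_positive_rate) for (y_true, y_pred) pairs."""
    correct = sum(t == p for t, p in pairs)
    negatives = [(t, p) for t, p in pairs if t == 0]
    false_pos = sum(p == 1 for t, p in negatives)
    return correct / len(pairs), false_pos / len(negatives)

group_a = [(0, 1), (0, 1), (0, 0), (0, 0),   # 4 negatives: 2 correct, 2 wrongly flagged
           (1, 1), (1, 1), (1, 1), (1, 1), (1, 1), (1, 1)]  # 6 positives, all caught
group_b = [(0, 0)] * 8 + [(1, 0), (1, 0)]    # 8 correct negatives, 2 missed positives

acc_a, fpr_a = rates(group_a)
acc_b, fpr_b = rates(group_b)
print(acc_a, acc_b)   # both 0.8: "equal accuracy" is satisfied
print(fpr_a, fpr_b)   # 0.5 vs 0.0: "equal false positive rates" is violated
```

Under equal-accuracy reasoning the model treats both groups alike; under equal-error-rate reasoning, members of group A are wrongly flagged at a far higher rate. A regulator must choose.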

Data Protection & Privacy

  • AI requires large amounts of data, often personal or sensitive. There are risks of misuse, data breaches, surveillance.

  • Consent regimes may be inadequate: people often do not understand what they are consenting to, particularly in AI contexts. Anonymization may be imperfect, and re-identification is possible.

  • Legal rules may not cover all dimensions: combining datasets, inference, profiling, predicting sensitive attributes.
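The re-identification risk noted above can be illustrated with a toy linkage attack (all records, fields, and names below are invented): removing direct identifiers does not anonymize a dataset if quasi-identifiers such as postal code, birth year, and gender can be joined against a separately published dataset.

```python
# Illustrative only: a toy linkage attack with invented data, showing why
# dropping names alone is not effective anonymization.

# "Anonymized" health dataset: direct identifiers removed, quasi-identifiers kept.
health = [
    {"zip": "110001", "birth_year": 1980, "gender": "F", "diagnosis": "diabetes"},
    {"zip": "400032", "birth_year": 1975, "gender": "M", "diagnosis": "asthma"},
]

# Separately published list containing the same quasi-identifiers plus names.
voters = [
    {"name": "A. Sharma", "zip": "110001", "birth_year": 1980, "gender": "F"},
    {"name": "R. Iyer",   "zip": "400032", "birth_year": 1975, "gender": "M"},
]

QUASI = ("zip", "birth_year", "gender")

def link(anon, public):
    """Join the two datasets on quasi-identifiers to re-attach names."""
    reidentified = []
    for rec in anon:
        key = tuple(rec[q] for q in QUASI)
        matches = [v["name"] for v in public
                   if tuple(v[q] for q in QUASI) == key]
        if len(matches) == 1:          # unique match => record re-identified
            reidentified.append((matches[0], rec["diagnosis"]))
    return reidentified

print(link(health, voters))  # every "anonymous" record re-linked to a name
```

In this sketch every record is uniquely re-linked, which is why legal rules that stop at “remove names” leave the inference and profiling gaps the bullet above describes.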

Intellectual Property and Ownership of AI Outputs

  • Who owns the output of an AI? If an AI generates text, images, designs, or inventions, traditional IP law that presumes human authorship faces limitations.

  • Copyright or patent laws may not clearly define whether AI-generated content qualifies and how ownership / rights should be allocated.

  • Use of copyrighted works as training data: are they licensed? Is fair use/fair dealing applicable? In India, recent cases show rising disputes.

Jurisdiction, Cross-Border Issues, and Harmonization

  • Many AI systems are developed in one country, deployed in others; data may be stored or processed elsewhere. Which jurisdiction’s law applies?

  • Conflicts between laws—data protection, privacy, IP—across borders. Enforcement across jurisdictions may be very difficult.

  • Regulatory fragmentation offers opportunities for regulatory arbitrage: deploying AI in places with weak oversight or fewer restrictions.

Enforcement, Oversight, and Regulatory Capacity

  • Regulatory bodies may lack technical expertise (in AI, ML) to evaluate systems, audit them, or monitor compliance.

  • Rapid technology change can leave regulation behind—laws may become outdated quickly.

  • Resource constraints, institutional inertia, political economy (e.g. pressure from industry) can weaken enforcement.


Regulatory and Legal Landscape: Global & Indian Comparisons
  • EU’s AI Act: Adopts a risk-based regulatory framework. High-risk AI systems are subject to stricter obligations: transparency, human oversight, data quality, testing, etc. 

  • United States: Much more sectoral and fragmented; laws focus on privacy (e.g. the California Consumer Privacy Act) and anti-discrimination, but there is no uniform national law specifically for AI.

  • Brazil, UK, Canada, etc. have various regulatory or normative frameworks. Some emphasize risk, some rely on voluntary codes. 

India lags in adopting AI-specific laws, though there have been several developments:

  • The Digital Personal Data Protection Act (DPDP Act), 2023, which regulates personal data.

  • The Information Technology Act, 2000, and associated rules (e.g., the Information Technology Rules, 2021) cover some digital and intermediary liability issues but are not designed specifically for AI.

  • Government and policy think tanks (e.g. NITI Aayog) have published principles and proposals (e.g. “Ethical AI”, “AI for All”) but many remain non-binding. 


Comparative Perspectives: EU vs India

| Aspect | EU (AI Act etc.) | India |
| --- | --- | --- |
| Legal framework | Specific AI Act (risk-based), GDPR, strong privacy laws, non-discrimination protections. | No dedicated AI regulation yet; DPDP Act, IT Act, Copyright Act, advisories, policy proposals. |
| Liability clarity | Better defined through high-risk system obligations and conformity assessments. | Ambiguous; ongoing lawsuits and panels trying to clarify. |
| Transparency and explainability | Mandated for high-risk systems; documentation; human oversight required. | Some policy guidance, but weak legal mandate across the board. |
| Bias & fairness | Integrated into regulation, but practical enforcement and metrics still under debate. | Large gaps; risk of bias high given demographic and data challenges. |
| IP issues | Actively addressed; EU IP law evolving. | Early stages; litigation starting; laws antiquated for AI-generated works. |
| Enforcement & capacity | Stronger institutions, though implementation still a work in progress. | Weaker technical capacity, shortage of regulatory clarity, delayed enforcement. |



Indian Case Studies & Recent Developments

Here are some concrete examples from India that illustrate the legal challenges in practice.

Copyright/Training Data Disputes

  • In 2025, major Indian news outlets such as Hindustan Times, The Indian Express, and NDTV filed lawsuits against OpenAI and others, alleging that their content was used to train AI models without authorization. They argue that existing copyright law is inadequate for handling AI’s use of copyrighted work.

  • In response, India set up a panel to review the Copyright Act, 1957, to examine whether and how existing law needs amendment or supplementation to address AI training and content generation. 

Risk Framework and AI Governance Debate

  • Academics like Amlan Mohanty have argued for a formal AI risk framework in India, to classify AI systems by risk and impose corresponding regulatory obligations. 

  • India has shown a “pro-innovation” bias (i.e. trying not to stifle the sector) but is under increasing pressure to formalize rules and oversight and to ensure protections.

Data Privacy Rights

  • India’s Supreme Court in Puttaswamy v. Union of India (2017) recognized privacy as a fundamental right. This decision has implications for AI, especially in data collection, inference, and profiling. 

  • The DPDP Act, 2023 is intended to provide broader data protection, but there are questions about its scope, enforcement, and whether it sufficiently covers inference, profiling, and secondary uses. 

Gaps in Regulation & Enforcement

  • The Indian legal regime currently has no dedicated AI Act; most AI regulation still operates through sectoral rules or general digital, data protection, and intermediary liability law.

  • There are loopholes in legal liability: for example, when an AI system makes an error but the cause is difficult to trace to any human actor, or when disclosure and transparency are limited.

  • Institutions lack specialized technical expertise in many cases to conduct audits, assessments, or to enforce “algorithmic accountability” in a rigorous way.


Critiques of Current Approaches
  • Vagueness in definitions: Terms like “high risk”, “explainability”, “AI system” are often loosely defined, which causes uncertainty for implementers and courts.

  • Regulation lagging innovation: By the time a law is codified, the technology may have advanced further, making rules outdated or unfit.

  • Striking the balance: Overly strict regulation could hamper AI innovation, especially for startups or in areas where resources are limited. Weak regulation, on the other hand, fails to protect rights.

  • Enforcement constraints: Laws may exist, but regulatory bodies may lack technical capacity, funding, or political will to enforce them.

  • International misalignment: Without harmonization, companies can exploit differences across jurisdictions. For example, training data from one country may violate privacy laws in another.

  • Ethical dimensions under-addressed: Laws may not capture deeper questions: what decisions should AI be allowed to make? What about human dignity, autonomy, fairness beyond statistical metrics?


Proposed Strategies for Effective Regulation

Based on the challenges and lessons, the following are recommendations for creating a more effective regulatory framework, particularly for India but applicable more broadly.

  1. Adopt a Risk-Based Regulatory Framework

    • Define categories of AI systems by risk (low, medium, high) based on potential for harm and likelihood.

    • High-risk systems should face stricter obligations: more transparency, stricter data quality controls, mandatory oversight and auditing.

    • Provide clear guidance on what counts as high risk.

  2. Clarify Liability and Accountability Laws

    • Define who is responsible for harms at each stage (data gathering, model design, deployment).

    • Possibly introduce legal presumptions in favor of affected individuals (reverse burden in certain cases) in high-risk contexts.

    • Consider compensation funds or mandatory insurance for high-risk AI actors.

  3. Mandate Transparency and Explainability

    • Require documentation of training data, model testing, decision logic insofar as possible.

    • Allow individuals affected by automated decisions to obtain explanations and challenge decisions.

    • Balance IP / proprietary concerns with rights to explanation.

  4. Strengthen Data Protection & Privacy Laws

    • Ensure laws cover not just collection of personal data but profiling, inference, secondary uses.

    • Consent models need to be richer; people should be informed about AI-based processing.

    • Enforce strong anonymization or pseudonymization and guard against re-identification.

  5. Address Intellectual Property Gaps

    • Update copyright / patent laws to consider AI-generated content: authorship, licensing of data used in training.

    • Clarify fair use / fair dealing exceptions when used in AI training; ensure content creators are fairly compensated.

  6. Institutional & Technical Capacity Building

    • Create or designate regulatory bodies or agencies with AI expertise.

    • Establish AI audit labs, technical standards bodies, testing / certification regimes.

    • Encourage multidisciplinary collaboration (law, ethics, computer science).

  7. Incident Reporting & Monitoring

    • Mandate reporting of AI failures, harms, bias incidents.

    • Maintain registries or oversight mechanisms.

    • Implement sectoral surveillance/sandboxing for new AI-use cases.

  8. International Cooperation, Harmonization of Norms

    • Engage with international AI regulatory efforts (EU, UN, etc.) to harmonize standards, definitions, and cross-border enforcement.

    • Mutual recognition of compliance in certain cases to reduce duplication.

  9. Adaptive Legal Instruments

    • Use flexible regulation: delegated rules, guidelines, regulatory sandboxes.

    • Include periodic reviews or sunset clauses to revisit regulation as technology evolves.
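As a sketch of recommendation 1, a risk-classification rule could be encoded roughly as follows. The tiers, domains, and criteria here are hypothetical illustrations, not drawn from any enacted statute (the EU AI Act, for instance, defines its high-risk categories in its annexes rather than by a simple rule like this).

```python
# A minimal sketch of how a risk-based classification rule might be encoded.
# The categories, domains, and thresholds are hypothetical illustrations.

HIGH_RISK_DOMAINS = {"medical_diagnosis", "criminal_justice",
                     "critical_infrastructure", "credit_scoring"}

def classify(domain: str, affects_individual_rights: bool,
             fully_automated: bool) -> str:
    """Map a described AI use case to a hypothetical risk tier."""
    if domain in HIGH_RISK_DOMAINS:
        return "high"      # strict duties: audits, human oversight, transparency
    if affects_individual_rights and fully_automated:
        return "medium"    # disclosure and documentation duties
    return "low"           # minimal obligations

print(classify("credit_scoring", True, True))          # high
print(classify("content_recommendation", True, True))  # medium
print(classify("spam_filtering", False, True))         # low
```

The value of codifying even a simple rule like this is predictability: developers can determine their obligations before deployment, addressing the vagueness critique raised earlier.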


Conclusion

AI has immense promise—but its regulation is fraught with legal and ethical complexity. Liability, transparency, fairness, privacy, IP, jurisdiction, and enforcement are all areas needing serious attention. India provides an example of a country at the cusp: high adoption, growing legal awareness, but with many regulatory gaps.

Effective regulation is likely to rest on risk-based approaches, clear legal responsibilities, robust data protection, and institutions with real technical capacity. Transparent, participatory law-making, together with international cooperation, will be essential.

By proactively building frameworks that anticipate harms rather than merely reacting to them, societies can harness AI’s benefits while protecting core values like justice, fairness, and rights.


References
  • Abdel Fattah, K. E. A. M. A., & Mohamed, B. (2024). The role of law in addressing the risks of using artificial intelligence. International Journal of Science, Technology and Society, 12(5), 151–158.

  • Bharati, R. (2024, July). Navigating the legal landscape of artificial intelligence: Emerging challenges and regulatory framework in India. SSRN.

  • Joshi, D. (2024). AI governance in India – law, policy and political economy.

  • Kusche, I. (2024). Possible harms of artificial intelligence and the EU AI Act.

  • Mennella, C., et al. (2024). Ethical and regulatory challenges of AI technologies in clinical practice.

  • Mohanty, A., & Sahu, S. (2024, November 21). India’s advance on AI regulation. Carnegie Endowment for International Peace.

  • Pande, S. (2025). Regulation of Artificial Intelligence in India: Legal … SSRN.

  • Rathore, S. K. (2025). Technological, legal, and ethical obstacles of regulating AI in India.

  • “Risk-Based AI Regulation: A Primer on the Artificial Intelligence Act” (2024). RAND Corporation.

  • Ruschemeier, H. (2023). AI as a challenge for legal regulation.



