AI-Driven Intellectual Property Rights Infringement: Challenges and Legal Implications

Author: Gagan Rawat, Guru Gobind Singh Indraprastha University


ABSTRACT

Artificial intelligence (AI) is a growing trend worldwide, with countries racing to develop systems that can change the way humans think, create, and innovate. By creating tools that can produce art, music, literature, and code on their own, AI is revolutionising the content creation process. This inventiveness, however, raises serious issues for intellectual property rights (IPR). Intellectual property, the creation of the human intellect, is now at risk of infringement, as highlighted by problems such as machine-generated code, deepfakes, and unauthorised replication. The liability of AI, the insufficiency of current legal frameworks, cross-jurisdictional approaches, and required reforms are the main research questions covered in this article. Through doctrinal analysis, comparative law, and selected case studies, this research identifies urgent gaps in existing legislation and suggests ways to balance technological advancement with intellectual property protection.

Keywords: Intellectual property, Infringement, Liability of AI


INTRODUCTION

AI refers to machines programmed to perform tasks typically requiring human intelligence, including visual perception, decision-making, and content generation. These actions can be carried out with varying levels of autonomy, depending on the degree of human input, interaction, or supervision involved. As AI-generated content becomes indistinguishable from human-created works, questions arise about authorship, ownership, and liability in cases of intellectual property infringement.

Concerns span across copyright (e.g., generated artworks, papers), trademarks (e.g., AI-designed logos), and patents (e.g., AI-discovered innovations). The lack of legal clarity risks both under-protection for creators and overregulation that stifles innovation. This article seeks to explore the legal challenges posed by AI to IPR, with an emphasis on liability frameworks, legislative insufficiency, and comparative approaches.


RESEARCH METHODOLOGY

This study employs a doctrinal legal research methodology, involving critical analysis of:

  • Statutes such as the Indian Copyright Act, the Trade Marks Act, and Title 17 of the United States Code.

  • Case law across jurisdictions, including the U.S., the EU, and India.

  • Secondary sources, including academic publications, policy papers, and international guidelines.

  • Comparative analysis across legal regimes to uncover shared principles and divergences.


LITERATURE REVIEW

This study relies on a range of secondary data sources, including academic journals, research papers, industry updates, and expert opinions from scholars and professionals. Technology news outlets and repositories such as Gizmodo, arXiv, and TechCrunch were consulted to access relevant literature. Additionally, academic search engines like Google Scholar, along with Q&A platforms and social media sites such as Zhihu and Xiaohongshu, were used to gather online materials, generate ideas, and follow academic discussions. All collected secondary data were organized and stored digitally on laptops and iPads.

The literature explores several findings in light of the available data:

  1. AI as a tool: AI is often treated as a tool because of its capability to easily reproduce and replicate art forms, books, voices, images, and more. Its increasingly autonomous behaviour, however, challenges human-centric liability models: beyond mere replication, AI is itself capable of producing original creations, and the central concern that follows is ownership, since it is unclear who owns such a creation: the programmer, the user, or the developer of the AI.


  2. Legal personhood: AI's lack of legal personhood also raises the question of who bears liability for an infringement, echoing the concern above: the user, the programmer, or the developer.



  3. Policy frameworks: WIPO, the EU Commission, and the USPTO have begun exploring AI-IPR interactions, but comprehensive policies remain nascent. India, moreover, has shown no recent development on the issue, which complicates extraterritorial enforcement: because AI operates globally, a person who infringes the rights of a creator in another country currently cannot be held liable under the existing, insufficient legislation.


RESEARCH GAP

The literature review above reveals the following research gaps:

  • Firstly, there is no consensus on who should bear the liability arising from these infringements: the developer, the user, or the AI itself.

  • Secondly, there are few, if any, cross-jurisdictional studies comparing enforcement mechanisms and the remedies available to plaintiffs.

  • Thirdly, there is limited ethical discourse on AI mimicking, replicating, or infringing human creativity; few studies address how AI can be deployed beneficially without such complications.


RESEARCH QUESTIONS
  1. Who should be held liable for AI-generated IPR infringements: the developer, the user, or the AI itself?

  2. Are current IPR laws sufficient to address the unique challenges posed by AI-generated works?

  3. How do different jurisdictions approach AI-related IP violations, and what can be learned from them?

  4. What legal reforms are necessary to effectively regulate AI without hindering innovation?


RESULTS

WIDESPREAD INFRINGEMENT VIA TOOLS LIKE STABLE DIFFUSION, GITHUB COPILOT, AND CHATGPT, WHICH ARE TRAINED ON COPYRIGHTED CONTENT.

AIGC (Artificial Intelligence Generated Content) platforms like Stable Diffusion have been found to cause copyright infringement through unauthorized use of copyrighted works, excessive plagiarism, and adaptation of those works without proper attribution, raising significant legal concerns for human artists and copyright owners.

ChatGPT, developed by OpenAI, is trained on vast datasets that include a wide array of publicly available text, some of which may be protected by copyright. This raises questions about whether outputs generated by ChatGPT could inadvertently reproduce or closely paraphrase copyrighted material without proper attribution or permission. For example, Aggarwal et al. (2023) discuss the implications of using ChatGPT in academic and medical settings, noting that the tool's reliance on existing published works may violate copyright laws if not used responsibly. More recently, ChatGPT has been embroiled in controversy over its use to create Ghibli-style art. OpenAI employees clarified that a refusal system had been built for such prompts, yet OpenAI's CEO himself encouraged users to join the trend. This suggests that the model was trained for that specific command, inviting the inference that the copyrighted work of Studio Ghibli was used to train the model and raising further infringement concerns.

The ethical use of ChatGPT is a subject of ongoing debate. As explored by Kaur and Kaur (2023), the ease with which users can generate large volumes of text blurs the line between original authorship and derivative works, complicating the enforcement of copyright protection. This is especially relevant in educational and research contexts, where the originality of content is paramount.


U.S. COPYRIGHT OFFICE (2023) RULED THAT WORKS SOLELY CREATED BY AI CANNOT BE COPYRIGHTED.


The landmark case regarding copyright and AI-generated works is the U.S. Copyright Office's decision on Zarya of the Dawn, a graphic novel written by Kristina Kashtanova and illustrated using the AI tool Midjourney. In February 2023, the Office partially revoked the original copyright registration after determining that the images generated by Midjourney did not qualify for copyright protection, as they lacked the required element of human authorship. However, the Office affirmed copyright for the text and the creative arrangement of images and text, as these reflected Kashtanova's own creative input.

In 2023, the U.S. Copyright Office reaffirmed that works produced entirely by artificial intelligence, without substantial human involvement, are not eligible for copyright protection. This was clearly demonstrated in a February 2023 case, where the Office revoked copyright for AI-generated artwork, although it continued to protect elements like the text and layout that reflected human creativity. The Office emphasized that copyright law applies only to works that involve meaningful human authorship. This policy reflects an effort to recognize the growing role of AI in content creation while upholding the core principle that copyright is intended for human-generated work.



CASES LIKE ANDERSEN V. STABILITY AI (2023) ILLUSTRATE ARTISTS CHALLENGING AI TRAINING DATASETS FOR UNAUTHORIZED USE.

Andersen v. Stability AI (2023) is a landmark case in which artists have taken legal action against AI companies for the unauthorized use of their copyrighted works in training datasets. In this case, several artists filed a lawsuit against Stability AI, the developer of Stable Diffusion, alleging that their images were used without permission to train generative AI models, resulting in outputs that could infringe on their copyrights.

As of this writing, Andersen v. Stability AI (2023) remains ongoing, and no definitive ruling or court holding is available. The case, however, centres on whether the unauthorized use of copyrighted images to train generative AI models constitutes infringement.




INDIA'S LEGAL FRAMEWORK LACKS AI-SPECIFIC PROVISIONS BUT APPLIES TRADITIONAL AUTHORSHIP DEFINITIONS TO HUMAN CONTRIBUTORS.

Section 2(d) of the Indian Copyright Act defines an “author” as the person who causes a computer-generated work to be created, thereby ruling out the possibility of granting authorship to machines that produce content without human input. This means that neither the AI itself, nor its developers, nor even the copyright owners of any materials in the AI’s training database, can claim ownership. Instead, the individual who inputs a prompt directing the AI to generate content—such as music—would be recognized as the author for legal purposes. Even if we attempt to liken the relationship between the AI and its user to that of an employer and employee, the analogy fails, as AI lacks the legal capacity to consent or engage in contractual relationships. Furthermore, this ambiguity may open the door to co-authorship claims by the developers of the AI, particularly due to their role in the initial creation or conceptualization of the system.


THE EU AI ACT PROPOSES RISK-BASED REGULATION BUT AVOIDS IP-SPECIFIC MANDATES.

The EU AI Act, adopted in 2024, introduces a risk-based regulatory framework for artificial intelligence systems, categorizing them according to their potential risk to health, safety, and fundamental rights. High-risk AI systems face stricter requirements, including transparency, human oversight, and robust risk management measures. The Act is notable for its comprehensive, cross-sectoral approach, aiming to balance innovation with the protection of public interests.

However, the AI Act does not include mandates or specific provisions addressing intellectual property (IP) rights or infringement. Instead, it focuses on the ethical, safety, and liability aspects of AI deployment, leaving IP-related issues, such as copyright, patent, or trade secret concerns, outside its direct regulatory scope. As a result, IP matters continue to be governed by existing EU intellectual property laws, rather than by the AI Act itself.


DISCUSSION

The legal ambiguity surrounding the liability of AI creators could lead either to a chilling effect on innovation or to unchecked infringement.

Several key challenges include:

  • Attributing intent and causation when AI generates infringing content.

  • Developers may be liable under secondary liability doctrines; users under direct infringement.

  • AI cannot be held liable under current legal frameworks due to lack of legal personhood.


RECOMMENDATIONS
  • Amend copyright laws to clarify rights in AI-generated content.

  • International cooperation through treaties and guidelines (e.g., WIPO standards).

  • Create AI-specific liability regimes, akin to product liability or intermediary frameworks.


CONCLUSION

AI challenges core tenets of IPR by autonomously creating, replicating, and distributing content. Existing laws inadequately address the complexities of AI involvement. There is an urgent need for global, harmonized reforms that preserve both innovation and the rights of original creators. Future developments must balance commercial potential with ethical and legal safeguards.


REFERENCES

RESEARCH PAPERS AND JOURNALS

  1. Xavier Oberson, Definition of AI and Robots, in Taxing Robots 4 (2019).

  2. Homero Gil de Zúñiga et al., A Scholarly Definition of Artificial Intelligence (AI): Advancing AI as a Conceptual Framework in Communication Research, 41 Political Communication (2023).

  3. Lyulin Zhuang, AIGC (Artificial Intelligence Generated Content) Infringes the Copyright of Human Artists, 4 Applied & Computational Eng'g (2024).

  4. T.B. Arif, U. Munaf & I. Ul-Haque, The Future of Medical Education and Research: Is ChatGPT a Blessing or Blight in Disguise?, 28(1) Med. Educ. Online 2181052 (Dec. 2023).

  5. F. Sovrano, E. Hine, S. Anzolut & A. Bacchelli, Simplifying Software Compliance: AI Technologies in Drafting Technical Documentation for the AI Act, 30 Empirical Software Eng'g 91, 94 (2025).

  6. Wimmy Choi, Marlies van Eck, Cécile van der Heijden, Theo Hooghiemstra & Erik Vollebregt, Legal Analysis: European Legislative Proposal Draft AI Act and MDR/IVDR (2022).

  7. Hana Tiro, Question of Knowledge in Active Usage of AI Tool "ChatGPT", in Artificial Intelligence in Music, Arts, and Theory Revisited 47 (2024).


NEWS ARTICLES AND BLOGS

  1. TOI Trending Desk, 5 Reasons Why People Are Bitterly Criticizing ChatGPT's Ghibli Studio, Times of India (Mar. 31, 2025, 10:31 PM IST), https://timesofindia.indiatimes.com/etimes/trending/5-reasons-why-people-are-bitterly-criticising-chatgpts-ghibli-studio/articleshow/119805592.cms.

  2. Ankita Jagnani, Intellectual Property & AI-Generated Works: Is India Ready?, NLIU L. Rev. Blog (Nov. 16, 2024), https://nliulawreview.nliu.ac.in/blog/intellectual-property-ai-generated-works-is-india-ready/.


CASE LAWS:

  1. Zarya of the Dawn, Registration No. VAu001480196 (2023).

  2. Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal.).


STATUTES

  1. The Copyright Act, No. 14 of 1957, § 2(d), India Code (2023).

  2. The EU Artificial Intelligence Act, 2024.








