In an era where artificial intelligence (AI) is deeply integrated into creative and decision-making processes, a key legal question arises: can AI itself be held liable for infringement or plagiarism?
As AI systems rapidly generate texts, images, and music, understanding the legal framework is crucial for businesses seeking to secure their use of AI technologies.
This article explores the allocation of liability for intellectual property (IP) infringement involving AI, the challenges under current law, and strategies businesses can adopt to mitigate risks.
Contents
- Legal liability of AI: Current framework
- Legal uncertainty for autonomous AI creations
- When AI-generated content leads to infringement: who can be held liable?
- The DeepSeek case: a cautionary tale of AI plagiarism
- Strategies to protect your business
- Conclusion: anticipating and managing legal risks
- FAQ
Legal liability of AI: Current framework
AI lacks legal personality
Currently, no legislation, including the recently adopted European Regulation 2024/1689 on Artificial Intelligence (AI Act), recognises AI systems as independent legal entities.
AI remains a tool, devoid of legal standing. Under Article 1240 of the French Civil Code and general tort law principles, only natural persons and legal entities using, developing, or commercialising AI can be held liable for harm caused.
The AI Act sets out specific obligations for high-risk systems but makes clear that compliance and accountability rest entirely with the natural and legal persons who operate them, not with the AI systems themselves.
Legal uncertainty for autonomous AI creations
Copyright law, particularly under the Berne Convention and Directive 2001/29/EC, protects works created by human authors.
AI-generated content created independently, without substantial human intervention, falls outside these traditional protections. Ownership claims and infringement actions therefore become more complex, particularly when such content unlawfully reproduces pre-existing works.
The AI Act does not create an independent IP regime for AI-generated works. However, Article 50 mandates transparency obligations, requiring users to be informed when interacting with artificially generated content. At the same time, Article 52 establishes procedures for overseeing AI models that pose systemic risks.
When AI-generated content leads to infringement: who can be held liable?
Liability of natural and legal persons operating AI
When AI-generated content infringes third-party rights, liability falls on the operator of the AI system, who is treated as the author of the infringing act.
Liability is incurred regardless of intent, as exploitation alone can trigger legal consequences if it causes harm to a rights holder.
To reduce such risks, operators must:
- Conduct systematic prior verification of AI-generated outputs;
- Implement internal compliance protocols tailored to the specific AI system (a minimal sketch of such a review step follows this list).
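By way of illustration only, here is a minimal sketch of what a pre-publication verification step might look like in practice, assuming a simple in-house pipeline. Every name in it (`ReviewDecision`, `require_human_review`, the `review_audit.jsonl` log) is hypothetical and not drawn from any statute, standard, or vendor tool.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical sketch: gate AI-generated text behind a named human
# reviewer and keep an append-only audit trail of each decision.

@dataclass
class ReviewDecision:
    content_id: str
    reviewer: str
    approved: bool
    notes: str
    timestamp: float

def require_human_review(content_id: str, text: str, reviewer: str) -> ReviewDecision:
    """Show the AI output to a reviewer and record the outcome."""
    print(f"--- Review {content_id} ---\n{text}\n")
    verdict = input("Approve for publication? [y/N] ").strip().lower()
    notes = input("Reviewer notes (e.g. sources checked): ").strip()
    return ReviewDecision(content_id, reviewer, verdict == "y", notes, time.time())

def log_decision(decision: ReviewDecision, path: str = "review_audit.jsonl") -> None:
    """Append the decision to a JSON-lines audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(decision)) + "\n")

if __name__ == "__main__":
    draft = "Example AI-generated paragraph awaiting clearance."
    decision = require_human_review("draft-001", draft, reviewer="j.dupont")
    log_decision(decision)
    if not decision.approved:
        raise SystemExit("Publication blocked pending rights clearance.")
```

The point of the audit log is evidentiary: if a dispute later arises, the operator can show that verification actually took place and who performed it.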
Liability of AI developers and suppliers
AI developers or suppliers, whether natural persons or corporate entities, can also be held liable in two key situations:
- Failure to ensure effective human oversight, including by not providing users with the necessary information for proper understanding, monitoring, and intervention, as required by Article 14 of the AI Act;
- Unlawful use of protected works during the model training process, constituting a separate IP infringement.
In accordance with the AI Act, providers of high-risk AI systems must:
- Ensure the quality, representativeness, and statistical relevance of training, validation, and testing datasets (Article 10);
- Provide detailed technical documentation, describing system characteristics, development processes, and compliance measures (Article 11);
- Inform end users about the nature and limitations of AI-generated content, ensuring transparency and protecting user expectations (Article 50); a simplified illustration of such a disclosure follows this list.
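The Regulation does not prescribe any particular disclosure format. As a purely illustrative sketch of the Article 50 transparency duty, the snippet below attaches a machine-readable notice to generated text before it reaches end users; the field names and wording are assumptions, not requirements.

```python
from datetime import datetime, timezone

# Illustrative only: wrap AI-generated text with a disclosure notice
# and basic metadata before delivery. The AI Act requires that users
# be informed; it does not mandate this (or any) specific format.

def with_ai_disclosure(text: str, model_name: str) -> dict:
    return {
        "content": text,
        "notice": "This content was generated by an AI system.",
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

payload = with_ai_disclosure("Draft product description...", model_name="in-house-llm-v2")
print(payload["notice"])
```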
The DeepSeek case: a cautionary tale of AI plagiarism
In March 2024, Chinese company DeepSeek was accused of plagiarising large portions of copyrighted content without proper attribution.
Investigations revealed that the AI chatbot reproduced entire sections word-for-word, lacking substantial transformation or originality.
DeepSeek defended itself by arguing that the sources were public and that the reuse constituted transformative use.
However, according to international copyright standards, mere aggregation or superficial rewording is insufficient to avoid infringement when the original content remains recognisable.
This case highlights the substantial risks businesses face when training datasets are not properly vetted. It stresses the importance of:
- Rigorous auditing of training datasets;
- Maintaining detailed records of dataset provenance (see the sketch after this list);
- Implementing robust transparency measures for AI-generated content.
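One practical way to keep such provenance records is a simple machine-readable manifest with one entry per training source. The sketch below, including its field names (`licence`, `rights_cleared`) and the SHA-256 fingerprint, is a hypothetical example; neither the AI Act nor case law prescribes this format.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical provenance manifest: one JSON-lines record per training
# file, capturing where it came from and on what legal basis it is used.

def record_provenance(file_path: str, source_url: str, licence: str,
                      rights_cleared: bool, manifest: str = "provenance.jsonl") -> dict:
    data = Path(file_path).read_bytes()
    entry = {
        "file": file_path,
        "sha256": hashlib.sha256(data).hexdigest(),  # tamper-evident fingerprint
        "source_url": source_url,
        "licence": licence,
        "rights_cleared": rights_cleared,
    }
    with open(manifest, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example usage, assuming corpus/article_42.txt exists locally:
# record_provenance("corpus/article_42.txt",
#                   "https://example.com/articles/42",
#                   licence="CC-BY-4.0", rights_cleared=True)
```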
Strategies to protect your business
Establish strong internal compliance policies
- Systematically review AI outputs before publication;
- Train employees on intellectual property risks associated with emerging technologies;
- Limit AI use to cases where legal risks are effectively managed.
Strengthen contractual safeguards
- Request explicit warranties regarding the legality of training data;
- Negotiate robust indemnification clauses to cover potential IP infringements;
- Reject unreasonable limitations of liability in supplier agreements.
Deploy technological and legal risk management tools
- Use advanced plagiarism detection software tailored for AI-generated content (a simplified similarity check is sketched below);
- Establish internal auditing procedures and rapid response mechanisms to address identified risks.
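Dedicated detection tools go far beyond this, but the standard-library sketch below illustrates the basic idea behind such a check: flagging AI output that reproduces a known reference passage nearly verbatim. The 0.85 threshold is an arbitrary illustration, not an established legal or technical standard.

```python
from difflib import SequenceMatcher

# Minimal illustration of a near-verbatim reproduction check. Real
# plagiarism detection relies on indexing and large corpora; this
# simply compares one output against a list of reference passages.

def flag_near_verbatim(output: str, references: list[str],
                       threshold: float = 0.85) -> list[tuple[float, str]]:
    """Return (similarity, passage) pairs exceeding the threshold."""
    hits = []
    for ref in references:
        ratio = SequenceMatcher(None, output.lower(), ref.lower()).ratio()
        if ratio >= threshold:
            hits.append((ratio, ref))
    return sorted(hits, reverse=True)

references = ["The quick brown fox jumps over the lazy dog."]
output = "The quick brown fox jumps over the lazy dog!"
for ratio, ref in flag_near_verbatim(output, references):
    print(f"Possible reproduction (similarity {ratio:.2f}): {ref!r}")
```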
Conclusion: anticipating and managing legal risks
AI systems, under current law, cannot be held liable.
Legal responsibility for infringement or plagiarism always rests with the natural or legal persons developing, operating, or commercialising AI solutions.
Given the new regulatory obligations imposed by the European AI Act, companies must adopt a proactive approach, integrating technical diligence, contractual safeguards, and ongoing legal oversight.
Dreyfus Law Firm advises clients on intellectual property and regulatory issues, helping them ensure compliance with national and European laws.
We collaborate with a global network of intellectual property attorneys.
Join us on social media!
FAQ
Can AI be sued for plagiarism or infringement?
No. Only a natural or legal person operating or developing an AI system can be held liable.
How can a company minimise its legal risks when using AI?
By implementing strict validation procedures, securing contractual protections, and regularly auditing AI-generated outputs.
Does the European AI regulation impose obligations relating to intellectual property?
Indirectly, through transparency and dataset quality requirements applicable to AI developers and deployers.