How to Manage AI Risks in 2026
Introduction
The year 2026 marks a pivotal stage in the regulation and governance of artificial intelligence systems. After several years characterised by guiding principles and national strategies, AI law has now entered a phase of binding regulation, notably with the progressive implementation of the European Artificial Intelligence Act (AI Act).
At the same time, the widespread adoption of AI by businesses, particularly generative systems and autonomous agents, significantly increases legal, technical and reputational risks.
In this context, managing AI risks in 2026 requires moving beyond experimental approaches and implementing structured, documented governance aligned with international regulatory requirements.
A transformation of the AI regulatory framework in 2026
The global regulatory landscape for AI is no longer limited to recommendations or ethical principles; it is now based on binding legal obligations that apply on a large scale. The European Union, with the AI Act, is playing a leading role, but other jurisdictions such as China, the United States, South Korea, Australia, and Vietnam are following a similar path.
The year 2026 marks a decisive turning point, as the AI Act becomes fully enforceable, particularly the provisions regarding high-risk AI systems, and as mechanisms such as regulatory sandboxes are implemented. This entry into force comes amid the growing use of AI in decisions that have a direct impact on individuals’ rights and access to essential services.
As a result, regulators consider that informal governance mechanisms are no longer sufficient to mitigate these risks.
The mandatory formalisation of risk management
Starting August 2, 2026, managing AI-related risks will become a core requirement for companies developing or using AI systems, particularly high-risk ones. These systems, deployed in sensitive areas (security, biometrics, medical devices, education), are subject to strict requirements.
The AI Act now requires companies placing on the market or developing products that incorporate high-risk AI systems to put in place:
• A prior risk assessment and a documented management system;
• CE marking and registration in the European database;
• Comprehensive technical documentation, ensuring transparency and traceability;
• Effective human oversight;
• Continuous monitoring, including logs, compliance checks, and requirements regarding robustness and cybersecurity.
These obligations require integrating risk management throughout the entire lifecycle of the systems, from design through to operation.
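To make the traceability obligations above more concrete, the sketch below shows one way a decision audit log might be structured in code. It is an illustrative example only: the field names, hashing choice, and model identifier are assumptions, not requirements drawn from the AI Act itself.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, outcome: str,
                 human_reviewed: bool) -> dict:
    """Build a minimal, machine-readable audit record for one automated
    decision: timestamp, model version, a hash of the inputs, the outcome,
    and whether a human reviewed it. Field names are illustrative."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store raw inputs, limiting personal-data retention.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "outcome": outcome,
        "human_reviewed": human_reviewed,
    }

record = log_decision("credit-model-v2", {"score": 710}, "approved", True)
```

Records of this kind, retained per decision, are one practical way to support the logging, human-oversight, and traceability requirements listed above; the exact content of a compliant log should be determined with legal counsel.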
This evolution also requires a redefinition of internal responsibilities, with closer collaboration between legal, technical and compliance teams.
Increased requirements for transparency and explainability
Transparency has become a central pillar of AI regulation. Authorities require that users be clearly informed when interacting with an AI system and that they can understand, at least broadly, how automated decisions are made.
An automated decision is a decision made, in whole or in part, by an algorithmic system without direct human intervention, based on data and predictive models. Automated decisions must be explainable, particularly in sensitive sectors. This requirement aligns with the GDPR’s principles regarding transparency and the right to information.
The AI Act also imposes obligations regarding the identification of AI-generated content. Users must be able to determine whether content has been generated by AI, particularly in the case of AI-generated text, images, or videos. In this regard, the European AI Office offers a voluntary code of best practices designed to support the implementation of these obligations.
This evolution requires companies to integrate transparency from the design phase (“transparency by design”). These obligations go beyond formal disclosures and involve a genuine effort to ensure user understanding and decision intelligibility.
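As a purely illustrative sketch of "transparency by design", the snippet below attaches a machine-readable disclosure to a piece of generated output. The label format is a hypothetical example chosen for this article, not an official standard or a format prescribed by the AI Act.

```python
def label_ai_content(text: str, generator: str) -> dict:
    """Wrap generated content with an explicit, machine-readable
    AI-generation disclosure. The schema here is illustrative only."""
    return {
        "content": text,
        "ai_generated": True,   # explicit flag downstream systems can read
        "generator": generator, # which system produced the content
        "disclosure": "This content was generated by an AI system.",
    }

labeled = label_ai_content("Quarterly summary draft...", "example-llm-v1")
```

In practice, companies may prefer established provenance standards (such as embedded content credentials) over ad hoc schemas, but the design principle is the same: the disclosure travels with the content rather than being added after the fact.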
Stronger requirements for security and user protection in AI systems
The safety of AI systems is an increasingly prominent regulatory priority. Risks related to algorithmic bias, discrimination and exposure of vulnerable users, particularly minors, are subject to heightened scrutiny.
Regulators have already demonstrated, since 2025, their willingness to intervene where AI systems generate inappropriate content or pose risks to users. This trend is reinforced in 2026 with the adoption in many jurisdictions of new regulations imposing specific obligations to prevent harmful or manipulative content.
Furthermore, risks related to deepfakes and synthetic content represent a major concern. These technologies may be used for fraud, harassment or privacy violations, prompting several jurisdictions to consider targeted regulation.
In this context, companies must integrate “safety by design” principles, including bias testing, use-case restrictions and enhanced control over generated content.
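One small, concrete example of the bias testing mentioned above is a demographic-parity check: comparing favourable-outcome rates across groups. The sketch below is a simplified illustration with made-up group labels; real fairness audits use richer metrics and legally informed thresholds.

```python
from collections import defaultdict

def parity_gap(outcomes: list[tuple[str, bool]]) -> float:
    """Given (group, favourable_outcome) pairs, return the largest
    difference in favourable-outcome rates between any two groups."""
    totals: dict[str, int] = defaultdict(int)
    favourable: dict[str, int] = defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        favourable[group] += int(ok)
    rates = [favourable[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Illustrative sample: group A approved 2 of 3, group B approved 1 of 3.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
gap = parity_gap(sample)  # about 0.33
```

A large gap does not by itself establish unlawful discrimination, but it flags the system for the closer human and legal review that "safety by design" calls for.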
Ongoing intellectual property challenges
Intellectual property remains a central issue in AI-related legal debates. Litigation concerning the use of protected content to train AI models is increasing, revealing divergent approaches across jurisdictions.
Some recent decisions have recognized that the unauthorized use of copyrighted works may constitute copyright infringement, as in the German case GEMA v. OpenAI of November 11, 2025, in which the Munich court held that training models using copyrighted song lyrics without a license was unlawful.
Conversely, other jurisdictions take a more restrictive approach: in the case of Getty Images v. Stability AI on November 4, 2025, the High Court of Justice in London held that a model that does not contain copyrighted works in a recognizable form cannot be considered an unauthorized reproduction, thereby highlighting the decisive role of technical analysis and territorial context.
For more information on the Getty Images v. Stability AI decision, we invite you to read our previously published article.
This legal uncertainty requires companies to secure their practices, particularly regarding training data, licensing strategies and control of generated outputs. Beyond litigation risks, these issues also concern the valuation of intangible assets and the protection of innovation.
Conclusion
Managing AI risks in 2026 requires a fundamental transformation of business practices, integrating legal, technical and ethical requirements into a comprehensive and structured approach.
The growing complexity of technologies, combined with increasing regulatory obligations, demands constant vigilance and proactive risk management.
Dreyfus & Associates assists its clients in managing complex intellectual property matters by providing tailored advice and comprehensive operational support to ensure the full protection of intellectual property rights.
Dreyfus Law Firm works in partnership with a global network of intellectual property attorneys.
Nathalie Dreyfus, with the assistance of the entire Dreyfus team.
Q&A
Is the AI Act mandatory for all companies?
Yes, if a company develops, markets, or uses an AI system that has an impact within the EU. However, obligations vary depending on the level of risk and are much stricter for high-risk systems.
Are AI training datasets regulated?
Yes. They must comply with GDPR principles (lawfulness, transparency, purpose limitation) and, where applicable, copyright law. Companies must be able to justify the origin and use of the data.
How can risks related to deepfakes and AI-generated content be mitigated?
By combining detection tools, traceability mechanisms, and strict internal policies to control usage and prevent the spread of misleading content.
Why should companies act now?
Because the rules are coming into force and enforcement is increasing. Acting early helps reduce legal risks and build trust.
What are the risks of non-compliance?
Companies face significant fines, product bans or withdrawals, and potential litigation. Reputational damage can also be immediate, especially in cases of controversial AI use.
This publication is intended to provide general guidance and highlight certain issues. It is not intended to apply to specific situations or to constitute legal advice.