The rapid advancement of artificial intelligence (AI) technologies has necessitated comprehensive frameworks to ensure their ethical and responsible use. In this context, the European Commission has introduced the third draft of the General-Purpose AI (GPAI) Code of Practice, which aims to guide AI model providers in aligning with the requirements of the EU AI Act (Regulation (EU) 2024/1689). This article examines the primary objectives of this draft and explores its future implications.
Contents
I – Objectives of the third draft of the General-Purpose AI Code of Practice
A – Enhancing transparency
A cornerstone of the third draft is its emphasis on transparency. All providers of general-purpose AI models are mandated to disclose pertinent information about their models, including design specifications, training data sources, and intended applications. This initiative seeks to foster trust among users and stakeholders by ensuring they are well-informed about the functionalities and potential limitations of AI systems. Notably, certain open-source models are exempted from these transparency obligations, reflecting a nuanced approach to diverse AI development paradigms.
B – Addressing copyright concerns
The draft also tackles the intricate issue of copyright in AI development. Providers are required to implement measures that respect intellectual property rights, ensuring that AI models do not infringe upon existing copyrights. This includes establishing mechanisms for rights holders to report potential violations and for providers to address such claims effectively. The draft outlines that providers may refuse to act on complaints deemed “manifestly unfounded or excessive, particularly due to their repetitive nature.”
C – Ensuring safety and security
For AI models identified as posing systemic risks, the draft delineates additional commitments focused on safety and security. Providers of these advanced models are obligated to conduct comprehensive risk assessments, implement robust mitigation strategies, and establish incident reporting protocols. These measures aim to preemptively address potential threats and ensure that AI systems operate within safe and ethical boundaries.
II – Future perspectives of the General-Purpose AI Code of Practice
A – Implementation challenges
As the AI landscape continues to evolve, implementing the GPAI Code of Practice presents several challenges. Providers must navigate the complexities of aligning their operations with the Code’s requirements, which may necessitate significant adjustments in their development and deployment processes. Ensuring compliance without stifling innovation will be a delicate balance to maintain.
B – Global influence and harmonization
The GPAI Code of Practice has the potential to set a global benchmark for AI governance. By establishing comprehensive guidelines, the European Commission aims to influence international standards, promoting harmonization across jurisdictions. This could lead to a more cohesive global approach to AI regulation, benefiting both providers and users worldwide.
Alongside the third draft, the Chairs and Vice-Chairs are also introducing a dedicated executive summary and an interactive website. These resources aim to facilitate stakeholder input, both through written comments and discussions within working groups and specialized workshops. The final version of the Code is expected in May, serving as a compliance framework for general-purpose AI model providers under the AI Act, while integrating cutting-edge best practices.
C – Continuous evolution and adaptation
Recognizing the rapid pace of AI advancements, the Code is designed to be adaptable. It emphasizes the need for continuous evolution, allowing for updates and refinements that reflect technological progress and emerging ethical considerations. This flexibility ensures that the Code remains relevant and effective in guiding AI development responsibly.
Conclusion
The third draft of the General-Purpose AI Code of Practice represents a significant step toward responsible AI governance. By focusing on transparency, respect for copyright, and safety, it lays a foundation for ethical AI development. As the Code progresses toward finalization, its successful implementation will depend on collaborative efforts among stakeholders to address challenges and seize opportunities for global harmonization.
Need expert guidance on AI and intellectual property? Dreyfus Law Firm specializes in intellectual property law, including trademark, copyright, and AI-related legal matters.
We collaborate with a global network of intellectual property attorneys.
Join us on social media!
FAQ
1 – What is the AI Act?
The AI Act is a European Union regulation, initiated by the European Commission, that governs the development and deployment of artificial intelligence systems within the EU. It is the first comprehensive legal framework in the world dedicated to AI, designed to balance innovation with the protection of fundamental rights. The AI Act classifies AI systems into four risk levels:
• Unacceptable risk (banned, such as social scoring systems or subliminal manipulation).
• High risk (subject to strict requirements, including AI systems used in critical infrastructure, recruitment, or judicial decisions).
• Limited risk (subject to transparency obligations, such as chatbots or deepfakes).
• Minimal risk (no specific obligations, such as AI-powered content recommendations).
The primary objective is to ensure that AI systems deployed in the EU comply with fundamental rights, safety, and transparency requirements while promoting responsible innovation.
2 – When will the AI Act come into force?
The AI Act was formally adopted in 2024 and entered into force on 1 August 2024, following approval by the European Parliament and the Council of the European Union. However, its application is gradual:
• The first provisions, including the prohibitions on unacceptable-risk practices, take effect six months after entry into force.
• Rules for high-risk AI systems will apply starting in 2026.
• Additional obligations, such as those for general-purpose AI models already on the market, may not be fully applicable until 2027.
This phased approach gives businesses time to adapt their operations to the new regulatory framework.
3 – What is the legal framework for AI?
The legal framework for AI currently consists of a combination of European and national laws covering various aspects:
1. The AI Act (Regulation (EU) 2024/1689), which provides specific rules for AI development and deployment, applying in phases from 2025 onward.
2. The GDPR (General Data Protection Regulation), which governs the use of personal data, a key issue for AI systems.
3. The Product Liability Directive (Directive 85/374/EEC) and the proposed AI Liability Directive (COM(2022) 495 final, 2022/0302 (COD)), which define the liability of AI developers and users in case of damages.
4. Sector-specific regulations (e.g., finance, healthcare) that impose industry-specific AI compliance requirements.
5. Copyright and intellectual property laws, which affect the training datasets of generative AI models (e.g., ensuring that AI does not infringe existing copyrights).
This legal framework is constantly evolving to protect users and promote ethical AI development.