In a major milestone for artificial intelligence governance, the European Commission has received the final version of the General-Purpose AI Code of Practice.
This comprehensive, voluntary framework is designed to guide the development and deployment of powerful AI systems in the EU.
The code is intended to bridge the gap between innovation and regulation, giving developers a practical pathway to upcoming legal requirements under the EU's AI Act.
Created by a panel of 13 independent experts and shaped by input from over 1,000 stakeholders, including AI developers, small businesses, academics, rights holders and civil society, the code is set to play a pivotal role in preparing the industry for the next phase of AI regulation.
With the enforcement deadline approaching, the code arrives as both a policy tool and a blueprint for responsible AI development in one of the world’s largest digital markets.
Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, commented:
“Co-designed by AI stakeholders, the code is tailored to their needs. Therefore, we invite all general-purpose AI model providers to adhere to the code.”
What is General-Purpose AI?
General-purpose AI refers to advanced foundation models capable of performing a wide range of tasks across a variety of sectors.
Unlike task-specific AI built for narrow purposes such as facial recognition or product recommendations, general-purpose AI models can be adapted to countless applications.
These include language processing, image generation, customer service automation, and scientific research.
These versatile systems often serve as the backbone of AI-driven tools used in healthcare, finance, education and the creative industries.
They offer great potential, but their broad capabilities pose complex challenges, including intellectual property concerns, transparency issues and potential misuse.
Code of Practice: A Roadmap for Compliance
The General-Purpose AI Code of Practice is centred on three core chapters: transparency, copyright, and safety and security. It is designed to help developers navigate their obligations under the AI Act, whose rules for general-purpose AI models take effect on August 2, 2025.
Enforcement follows in stages: the rules become enforceable for new models one year later, and for models already on the market two years later.
Transparency
To promote clarity and openness, the code introduces a Model Documentation Form, a standardized, user-friendly tool that allows AI providers to disclose key information about how their models are trained, evaluated, and intended to be used.
It aims to improve trust, promote integration into downstream systems, and enable users to understand potential limitations and risks.
Copyright
The copyright chapter explains how AI developers can align with EU copyright law during model training and deployment.
It outlines practical ways to establish a clear copyright policy and respect intellectual property, an increasingly urgent issue as generative AI becomes more widespread.
Safety and security
Some general-purpose AI models pose serious risks, such as facilitating the development of harmful substances or the spread of disinformation.
The safety and security chapter, targeting the most advanced models, guides developers on how to assess and mitigate these systemic risks using state-of-the-art practices.
Voluntary sign-on, legal clarity
Once endorsed by EU member states and the Commission, the code will be open for voluntary sign-on.
Providers that sign on benefit from a streamlined compliance process, reduced administrative burden, and greater legal certainty under the AI Act. For many, it represents a clear and efficient route to demonstrating accountability and regulatory readiness.
Additionally, the Commission is preparing further guidelines to clarify which entities fall within the scope of the AI Act's general-purpose AI rules.
These guidelines are expected to be published before the rules take effect and will complement the code of practice.
Building a safe AI future in Europe
With the EU at the forefront of ethical AI development, the General-Purpose AI Code of Practice sets a strong precedent.
It provides a balanced, positive framework that supports innovation while protecting the public interest.
For businesses and developers working with general-purpose AI, the message is clear: transparency, accountability, and preparation are no longer optional. They are essential.