The AI Regulation came into force on 1 August 2024, and the first parts of the new regulation will take effect from 2 February 2025. Becoming compliant is not easy, but you can learn from your GDPR implementation process and get started as soon as possible.
Among other things, we provide three tips on how you can start working towards compliance with the AI Regulation. It is crucial to know whether you are in fact using an AI system and, if so, which of the AI Regulation's categories the system falls under.
What is an AI system?
An AI system is defined as a machine-based system designed to operate with varying levels of autonomy. It may exhibit adaptiveness after deployment and infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions. These outputs can influence the physical or virtual environments the system is part of.
Article 3(63) defines general purpose AI models. These are models - such as those underlying ChatGPT - typically trained on large amounts of data using self-supervision, and they can be integrated into AI systems.
Risk determines the obligations - the different categories
The higher the risk, the stricter the requirements - this is the overarching principle of the AI Regulation. It takes a risk-based approach, where the rules are adapted to the expected risk.
Therefore, it is important, from the very beginning, to be aware of which category the AI system you are using falls into.
Category 1: Prohibited AI systems
Article 5 describes the AI systems that are prohibited. These are AI systems that pose a threat to the safety, fundamental rights or dignity of natural persons - for example, systems that manipulate persons, or exploit particularly vulnerable persons, into making decisions they would not otherwise have made, causing them harm. Article 5 applies from 2 February 2025, and the Commission has announced that it will issue further guidelines on the prohibitions before that date.
Category 2: High-risk AI systems
Article 6 deals with high-risk AI systems, which may jeopardise the health, safety or fundamental rights of natural persons. Annex III lists the use cases that are, as a starting point, considered high-risk. These include AI systems for determining admission to educational programmes or whether a citizen is eligible for medical treatment. High-risk AI is particularly found in specific sectors, such as critical infrastructure, areas that affect workers, and welfare services.
Category 3: AI systems subject to transparency requirements
Article 50 of the AI Regulation contains transparency requirements that apply to certain AI systems regardless of risk. The provision is intended to build trust by ensuring transparency in the use of AI. For example, it must be disclosed when content has been artificially generated or manipulated - relevant for deep fakes. Similarly, users must be made aware when they are interacting with an AI system, such as a chatbot.
Category 4: General purpose AI models
General purpose AI models - also referred to as GPAI models, and including generative AI models - are regulated in Chapter V of the AI Regulation.
AI models are essential components of AI systems but do not constitute AI systems in themselves. An AI model requires further components, such as a user interface, to become an AI system.
General purpose AI models have therefore been given their own chapter in the Regulation, with obligations to ensure transparency throughout the entire value chain of an AI system. For example, it is crucial that a provider of an AI system that integrates a general purpose AI model has access to all necessary information, including about the AI model, to ensure that the system is safe and complies with the Regulation.
Special obligations apply to general purpose AI models with systemic risk. These are particularly powerful models, such as the large language models underlying services like ChatGPT.
The rules on AI and GDPR
If you are not working with an AI system covered by the AI Regulation, you do not have to comply with it, but other rules, such as the GDPR, may still come into play.
While the scope of the AI Regulation differs from that of the GDPR, both regulations require thorough documentation, so you can draw inspiration from the workflows established under the GDPR. In addition, there can be synergies between the two areas, which makes it sensible to have both AI and GDPR expertise within the same team.
Distribution of roles
The AI Regulation distributes responsibility throughout the value chain - from the providers who develop AI and bring it to market, to the deployers (users) who utilise the AI system and integrate it into processes, products and services. In addition to the category your AI system belongs to, the requirements depend on the role you play in the value chain of the AI system in question. It is therefore important to define your role and responsibilities from the very beginning. In practice, determining the division of roles is likely to be a challenge, and, just as under the GDPR, an organisation may well hold several roles at once.
The different roles include:
- Provider
- Deployer
- Authorised representative
- Importer
- Distributor
- Operator (the Regulation's collective term covering the roles above)
Three useful tips
- Find out if your system is covered by the AI Regulation and in which category.
- Decide on roles and responsibilities from the start.
- Draw inspiration from, and reuse, existing workflows from the GDPR.
The AI Regulation in brief
The work on the AI Regulation dates back to 2019, and the Commission's 2020 white paper on artificial intelligence serves as the basis for the regulation.
It focused on two considerations:
- Consideration for citizens' fundamental rights
- Innovation and free movement considerations
The regulation reflects the desire to balance these two considerations.
With the AI Regulation, a new approach has been chosen: regulating a specific technology, namely AI systems. This contrasts with the earlier principle of technology neutrality, under which general rules applied uniformly regardless of the technology used.