Years of work in the EU to regulate artificial intelligence recently came to an end, but the road to the now finalised artificial intelligence regulation, also known as the "AI Act", was certainly not an easy one. In the very last trilogue negotiations, two topics in particular gave rise to disagreements between the negotiating parties.

The first topic was facial recognition, including whether law enforcement agencies should be allowed to use the technology in real-time in public spaces to fight crime.

Disagreement on facial recognition

The Parliament wanted a ban with few exceptions, while the Council of Ministers wanted a general possibility of using this technology under certain specified conditions.

The compromise was that police authorities are generally prohibited from using real-time facial recognition technologies in public spaces, but may use them in specific cases within narrow exceptions and in compliance with strict requirements.

The use must be limited in time and place and must be for the purpose of targeted victim searches, the prevention of specific and current terrorist threats, or the location or identification of persons suspected of having committed specific serious crimes.

Foundation models caused controversy

The second topic concerned the so-called "foundation models", which have been this year's major technological breakthrough within artificial intelligence. Foundation models are artificial intelligence models that are trained on large amounts of data, are generic in their applicability and are used as a foundation for building a wide range of AI systems, including generative AI models (algorithms that can create new content) such as GPT-4 (used in ChatGPT, among others).

With its draft regulation in June 2023, the Parliament wanted to impose a number of obligations on providers of foundation models. However, this proposal led to challenges and changes in the final regulation. Three of the EU's largest economies, Italy, France and Germany, felt that the regulation went too far in limiting innovation and competitiveness compared to countries such as China and the USA.
Instead, the three countries favoured a more voluntary approach to regulating the developers of foundation models, with guidelines for good practices, e.g. that development should be transparent.

However, the Parliament was not satisfied with this and succeeded in ensuring that foundation models must comply with the transparency requirements it had previously proposed. In addition, the Parliament managed to ensure that strict obligations apply to the so-called "high-impact foundation models", i.e. foundation models that pose a systemic risk.

"Fundamental rights impact assessment"

The Parliament also succeeded in including a mandatory "fundamental rights impact assessment" for AI systems classified as high-risk. This assessment must be carried out before the system is placed on the market.

In the coming weeks, the final technical details will be sorted out, after which the Presidency will submit the compromise text for approval by member state representatives. The entire text must then be approved by the Council and the European Parliament before the regulation can be formally adopted.
Horten has followed the creation of the regulation closely and is ready to advise developers, distributors and users of artificial intelligence on its implementation.


Emilie Loiborg

Director, Attorney