The trilogue negotiations between the European Parliament, the European Commission and the Council of the European Union on the AI Act are underway. The three institutions each have their own wishes for the new legislation, such as an expanded list of prohibited AI systems and strict transparency requirements for providers of ChatGPT and similar models.

Artificial intelligence is a technology that has taken the world by storm. Its rapid development and the many possibilities it opens up have fundamentally changed how humans interact with technology. The biggest technological milestone in many years is probably the arrival of chatbots built on advanced language models, including "ChatGPT" from OpenAI and "Bard" from Google.

The possibilities of this type of artificial intelligence seem to be limited only by human imagination. Legislation is on its way, but until the upcoming EU Artificial Intelligence Act (the "AI Act") is in place, we must rely on existing regulation and legal tools to guide us through the many issues raised by the use of AI.

Status of the AI Act

The Council of the European Union adopted its common position on the AI Act on 25 November 2022. The European Parliament adopted its negotiating position on 14 June 2023, with 499 votes in favour, 28 against and 93 abstentions. The AI Act has now moved on to trilogue negotiations between the Parliament, the European Commission and the Council, with the aim of agreeing on a final text by the end of 2023.

Expansion of the list of prohibited AI systems

The Council wishes to expand the list of prohibited AI systems to include the use of AI for social scoring by private actors. The Council also wishes to extend the existing prohibition on AI systems that exploit the vulnerabilities of a specific group of people to cover people who are vulnerable due to their social or economic situation.

The Parliament wishes to ban the use of remote biometric identification systems in publicly accessible spaces in the EU. This applies both to real-time use, which was also covered by the Commission's proposal, and to subsequent use, with a few exceptions.

In addition, the Parliament wants to ban:

  • all biometric categorisation systems using sensitive (personal) characteristics;
  • "predictive policing" systems (algorithms that predict crime) based on profiling, location or previous criminal behavior;
  • emotion recognition systems in law enforcement, border control, workplaces and educational institutions;
  • AI systems using indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.

High-risk AI systems

The Commission proposed that a number of AI systems used for the purposes listed in Annex III of the AI Act should automatically be considered high-risk systems. The Parliament wishes to add the requirement that a system must also pose a "significant risk to health, safety and fundamental rights" to qualify as high-risk. The Council, for its part, wishes to exclude AI systems that are unlikely to cause serious violations of fundamental rights or other significant risks from the high-risk category.

In addition, both the Parliament and the Council have deleted entries from, added entries to and clarified the list of high-risk systems in Annex III.

Requirements for providers of foundation models and generative AI

The Parliament's draft is the only one to include a definition of and rules on foundation models (models trained on large amounts of data and used as a foundation for building AI systems), including generative models (algorithms that can create new content) such as GPT-4 (the model behind ChatGPT).

Foundation models are defined as "an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks". The use of these models comes with a number of new obligations for providers, including identifying and managing the risks posed by the model, complying with certain specific design, information and environmental requirements, and registering the model in an EU database.

Providers of generative foundation models such as ChatGPT will be subject to strict transparency requirements. They must disclose that content is generated by AI and not by a human, train and design the model to prevent the generation of illegal content, and document and publish a sufficiently detailed summary of the use of training data protected under copyright law.

National authorities and enforcement

The Parliament wishes to empower national authorities to request access to both AI systems and the models they use, including foundation models. The Parliament also proposes to establish a so-called "AI Office" whose purpose is to support the harmonised application of the AI Act, provide guidance and coordinate joint cross-border investigations.

While we wait: three tips

Below we have listed three tips for anyone who is already working with artificial intelligence, including foundation models, or is about to start.

  • Prepare guidelines for the use of artificial intelligence. Prepare internal guidelines for the use of AI that help ensure that employees do not unintentionally process personal data in violation of the GDPR, infringe copyrights, inadvertently disclose trade secrets or breach confidentiality obligations.
  • Use tools from the Danish Data Ethics Council. Data ethics is fundamentally about making well-considered decisions when using data and technology, taking into account the consequences of such use. To help clarify data ethics dilemmas, the Council has developed an assessment form, an impact analysis and other tools, all available on the Council's website. Until a more specific legislative framework is in place, data ethics can serve as a compass in the jungle of technology and data, guiding you through the many issues raised by the use of artificial intelligence.
  • Prepare relevant documentation. Document which artificial intelligence tools and solutions you use and what challenges may arise from their use. Investigate whether, and to what extent, the supplier of the AI systems in question has documented the development and use of the system. Revisit or prepare an Article 30 record under the GDPR, conduct risk assessments and consider preparing an impact assessment. Creating this documentation will make the use of AI less risky.

The definition of "artificial intelligence system"

Both the Parliament's and the Council's drafts simplify the Commission's definition of "artificial intelligence system" (AI system). The Council narrows the definition to systems developed through machine learning and logic- and knowledge-based approaches, while the Parliament's definition reads:
"A machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments."

The Parliament's definition corresponds to the OECD's definition of an AI system.

Contact

Emilie Loiborg

Director, Attorney