AI in the EU: Artificial Intelligence Regulation
Like other technologies, Artificial Intelligence has two sides: it can be used to support and help people, for example with precise medical diagnostics, but also to manipulate or discriminate against individuals or social groups. The development and use of AI thus concern not only aspects such as productivity or the quality of work results, but also ethical questions, such as whether AI systems and their use by companies, organizations, or institutions are compatible with fundamental rights and societal values.

What are the objectives of the regulation?

The European Union is addressing AI's position as a key technology with the potential to threaten fundamental values through a regulation on artificial intelligence. The regulation aims to prevent the rights of citizens in the EU from being impaired by the use of AI. At the same time, it aims to ensure that the EU remains an open market for AI and that AI research can continue.

What exactly is the regulation intended to regulate?

The European Commission’s proposal to regulate AI contains rules for the placing on the market, putting into service, and use of AI systems. These are divided into four groups depending on the risk of harm to health, safety, or fundamental rights. No requirements are envisaged for solutions with low or minimal risk, although it has not yet been specified exactly which solutions fall into this group. High-risk solutions must meet certain requirements, systems posing specific risks are subject to transparency obligations, and solutions with an unacceptably high risk of violating the EU’s fundamental values are to be banned. The latter includes, for example, monitoring systems used by public authorities to assess citizens’ social behavior (social scoring).

  • High-risk systems

High-risk systems include, for example, systems “intended to be used as safety components of products subject to third-party conformity assessment” and “stand-alone AI systems explicitly listed in Annex III that have a primary impact on fundamental rights.” These include systems for biometric identification, for the management and operation of critical infrastructure, and systems used to select individuals for access to educational institutions, for recruiting employees and associates in companies, or for access to government services.

To reduce risks to fundamental values, these systems must meet certain requirements. Providers and users are obliged, among other things, to establish and apply a risk management system and to create and maintain technical documentation. In addition, the systems must be “capable of being effectively supervised by natural persons.”

  • “Certain” AI systems

This group covers AI systems capable of manipulating individuals, regardless of their risk classification: for example, chatbots that interact with people, systems for emotion recognition and biometric categorization, or solutions for generating deepfakes. Providers and users of such systems are obliged to inform data subjects that they are dealing with an AI system.
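The four-tier risk structure described above can be sketched as a simple lookup. This is an illustrative sketch only: the tier names and obligation strings paraphrase the draft regulation and are not legal or official terminology.

```python
# Illustrative sketch of the draft AI Act's risk tiers.
# Tier names and obligations are paraphrased assumptions, not legal text.
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"            # no specific requirements envisaged
    LIMITED = "limited"            # transparency obligations (e.g. chatbots, deepfakes)
    HIGH = "high"                  # requirements for high-risk systems
    UNACCEPTABLE = "unacceptable"  # prohibited (e.g. social scoring by authorities)


# Hypothetical mapping from tier to the draft's main obligations.
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: [
        "inform data subjects that they are dealing with an AI system",
    ],
    RiskTier.HIGH: [
        "establish and apply a risk management system",
        "create and maintain technical documentation",
        "ensure effective supervision by natural persons",
    ],
    RiskTier.UNACCEPTABLE: [
        "placing on the market, putting into service, and use are banned",
    ],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the sketched obligations for a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of the sketch is only to make the structure visible: each system falls into exactly one tier, and the obligations grow with the tier's risk level.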

Who should the regulation apply to?

The AI Regulation is not intended to keep providers from outside the EU out of the EU market, but to protect citizens in the EU from having their rights and security compromised. Therefore, the regulation is intended to apply to all providers “who place AI systems on the market or put them into operation in the Union, regardless of whether these providers are established in the Union or in a third country.” Further affected are “users of AI systems located in the Union” as well as providers and users from anywhere who want to use the “result produced by the system in the Union.”

When will the regulation apply?

The Commission’s proposal must be adopted by the European Parliament and the Council of the European Union. The roadmap is for that to happen by the end of 2022. The AI Regulation is then expected to enter into force twenty days after publication in the Official Journal of the EU, i.e., late 2022 or early 2023, and is expected to apply 24 months after that, with some parts earlier.


Reactions to the draft EU regulation cover the entire spectrum from “too lax” to “too restrictive.” The summary of the hearing of the Committee on Digital Affairs in the Bundestag in September 2022 is a good example of this; assessments in the other EU member states are likely to be similarly wide-ranging. It remains to be seen what form this will take in the final version.

One thing is clear: anyone who wants to enter the market in the EU must comply with the rules that apply there – we already know this from the GDPR. And many want to enter the EU market, with its nearly 450 million inhabitants, great economic power, and highly qualified workforce. Perhaps the AI Regulation will set an international example, just as the GDPR did.