Like other technologies, Artificial Intelligence has two sides: it can be used to support and help people, for example with precise medical diagnostics, but also to manipulate or discriminate against individuals or social groups. The development and use of AI thus concern not only aspects such as productivity or the quality of work results, but also ethical aspects, such as whether AI systems and their use by companies, organizations, or institutions are consistent with fundamental rights and societal values.
What are the objectives of the regulation?

The European Union is addressing the position of AI as a key technology with the potential to threaten fundamental values with a regulation on artificial intelligence. The regulation aims to prevent the rights of citizens in the EU from being impaired by the use of AI. At the same time, it aims to ensure that the EU remains an open market for AI and that AI research can continue.
What exactly is the regulation intended to regulate?

The European Commission’s proposal to regulate AI contains rules for the placing on the market, putting into service, and use of AI systems. These are divided into four groups depending on their risk of harm to health, safety, or fundamental rights. No regulations are envisaged for solutions with low or minimal risk, although it has not yet been specified exactly which solutions fall into this group. High-risk systems must meet certain requirements, systems posing specific risks are subject to transparency obligations, and solutions with an unacceptably high risk of violating the EU’s fundamental values are to be banned. The latter include, for example, monitoring systems used by public authorities to assess citizens’ social behavior (social scoring).
- High-risk systems
- “Certain” AI systems