The EU’s AI Act differentiates AI systems based on the risks they pose to safety, livelihoods and the rights of people. The AI Act defines four levels of risk for AI systems.
Unacceptable risk
The EU considers some AI practices an unacceptable threat to people’s safety and fundamental rights and therefore bans them outright:
- harmful AI-based manipulation and deception
- harmful AI-based exploitation of vulnerabilities
- social scoring
- individual criminal offence risk assessment or prediction
- untargeted scraping of the internet or CCTV material to create or expand facial recognition databases
- emotion recognition in workplaces and educational institutions
- biometric categorisation to deduce certain protected characteristics
- real-time remote biometric identification for law enforcement purposes in publicly accessible spaces
High risk
AI systems that can pose serious risks to health, safety or fundamental rights are classified as high-risk. This category includes, among others:
- AI safety components in critical infrastructure
- AI systems used in educational institutions that may determine access to education and the course of a person’s professional life
- AI use cases in law enforcement that may interfere with people’s fundamental rights
Example: A train management system used to ensure safety throughout the entire rail infrastructure will be considered high risk.
Example: Automatic examination of visa applications will be considered high risk, as it has a high impact on fundamental rights in the context of migration.
High-risk AI systems are subject to strict obligations before they can be placed on the market, including risk assessment and mitigation systems, high-quality datasets, and appropriate human oversight measures.
Transparency risk
The AI Act introduces specific disclosure obligations to ensure that humans are informed when necessary to preserve trust. For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so that they can make an informed decision.
Minimal or no risk
The AI Act does not introduce rules for AI systems deemed to pose minimal or no risk. This includes applications such as AI-enabled video games or spam filters.