- where the AI is part of a safety system, and where that safety system already needs to undergo conformity assessment.
OR
- where the AI system poses a significant risk of harm to the health, safety or fundamental rights of natural persons - for very specific use-cases (page 122-125 https://www.europarl.europa.eu/resources/library/media/20230...).
Title III Chapter 1 CLASSIFICATION OF AI SYSTEMS AS HIGH-RISK
[AI systems shall be considered high-risk if they pose]
- a significant risk of harm to the health, safety or fundamental rights of natural persons
- a significant risk of harm to the environment
Also Annex III page 122
[use cases considered to be high-risk]
1. Biometric and biometrics-based systems
2. Management and operation of critical infrastructure
3. Education and vocational training
4. Employment, workers management and access to self-employment
5. Access to and enjoyment of essential private services and public services and benefits
6. Law enforcement
7. Migration, asylum and border control management
8. Administration of justice and democratic processes
If an AI system affects people and their rights, national security, or the environment, it is subject to regulation. Fair enough.