Four Risk Tiers
Unacceptable (Prohibited): Social scoring, real-time remote biometric ID in publicly accessible spaces, subliminal manipulation, emotion recognition in workplace/education (except medical/safety reasons), predicting criminal risk based solely on profiling or personality traits, untargeted scraping of facial images.
High Risk: Biometric ID/categorisation, critical infrastructure, education, employment, essential services, law enforcement, migration, administration of justice (Annex I/III of the final Act).
Limited Risk: Transparency obligations (Art. 50), which apply regardless of risk tier: disclosing AI interaction to users, disclosing emotion recognition, labeling AI-generated content.
Minimal Risk: No specific obligations. Most AI systems fall here.
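The four tiers form a decision procedure: check prohibitions first, then high-risk use cases, then transparency duties, defaulting to minimal risk. A minimal Python sketch; the boolean flags are illustrative simplifications of what are, in the Act itself, detailed legal tests:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical flags standing in for legal analysis of the Act's
# definitions and Annexes; real classification is not boolean lookup.
def classify(prohibited_practice: bool,
             annex_use_case: bool,
             transparency_duty: bool) -> RiskTier:
    if prohibited_practice:   # Art. 5 practices, e.g. social scoring
        return RiskTier.UNACCEPTABLE
    if annex_use_case:        # Annex I/III high-risk use cases
        return RiskTier.HIGH
    if transparency_duty:     # Art. 50 transparency obligations
        return RiskTier.LIMITED
    return RiskTier.MINIMAL   # default: no specific obligations
```

Note the ordering matters: a prohibited practice is banned outright even if it would also match a high-risk use case.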
Roles
Provider: Creates the compliance artifacts: CE marking, conformity assessment, technical documentation (Annex IV), risk management system, human oversight mechanisms, post-market monitoring.
Deployer: Verifies proper use: FRIA before deployment (public bodies, providers of public services, and certain credit/insurance uses), logs/documentation retained at least 6 months, human oversight during operation.
Importer: Verifies artifacts exist and are valid. Gatekeepers for non-EU providers.
A deployer, importer, or distributor that substantially modifies a high-risk system (or markets it under its own name) becomes a provider and assumes all provider obligations.
GPAI & Systemic Risk
Tier 1 (all GPAI): Technical documentation, publicly available training data summary, copyright compliance policy incl. text/data-mining opt-outs.
Tier 2 (systemic risk): Model evaluation and adversarial testing, cybersecurity protections, serious-incident reporting to the EU AI Office, energy consumption reporting.
Systemic risk threshold: >10²⁵ FLOPs of cumulative training compute OR Commission designation. GPAI obligations are parallel to, not overlapping with, high-risk system obligations.
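The threshold is a plain cumulative-compute comparison with a designation override; a sketch (function and variable names are illustrative, not from the Act):

```python
SYSTEMIC_RISK_FLOPS = 1e25  # presumption threshold for systemic risk

def presumed_systemic_risk(cumulative_training_flops: float,
                           commission_designated: bool = False) -> bool:
    # Either route triggers the Tier 2 obligations: exceeding the
    # compute threshold, or explicit Commission designation.
    return (cumulative_training_flops > SYSTEMIC_RISK_FLOPS
            or commission_designated)
```

A model trained with ~5e25 FLOPs is presumed systemic-risk even without designation; a smaller model can still be designated by the Commission.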
Penalties
Prohibited practices: up to €35M or 7% of global annual turnover, whichever is higher. Provider breaches (high-risk): €15M or 3%. Supplying misleading information: €7.5M or 1%.