EU Artificial Intelligence Act
The world's first comprehensive AI regulation establishing harmonized rules for AI systems in the EU
Overview
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence, establishing harmonized rules for the development, deployment and use of AI systems in the European Union.
Key Facts
- Risk-based approach categorizing AI systems into unacceptable, high, limited, and minimal risk
- Strict requirements for high-risk AI systems including human oversight and transparency
- Prohibition of certain AI practices deemed unacceptable risk
- Extraterritorial scope: applies to providers and deployers outside the EU whose AI systems are placed on the EU market or whose outputs are used in the EU
- Significant penalties for non-compliance, up to €35M or 7% of total worldwide annual turnover, whichever is higher
Prohibited Practices
Social Scoring
AI systems that evaluate or classify natural persons based on their social behavior or personal characteristics, leading to detrimental or unfavourable treatment
Examples:
- Mass surveillance systems
- Behavior prediction for social ranking
- Automated social credit systems
Exploitation of Vulnerabilities
AI systems that exploit vulnerabilities of specific groups of persons due to their age, disability, social or economic situation
Examples:
- Targeted manipulation of elderly people
- Exploitation of children's behavior
- Discriminatory targeting of vulnerable groups
Biometric Identification
Real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes
Examples:
- Real-time facial recognition in public spaces
- Live biometric tracking systems
- Automated public surveillance systems
Exceptions:
- Search for victims of crime
- Prevention of imminent terrorist threats
- Detection of serious criminal offenses
Subliminal Manipulation
AI systems deploying subliminal techniques to materially distort behavior in a manner causing physical or psychological harm
Examples:
- Unconscious behavior manipulation
- Harmful psychological targeting
- Covert influence systems
Risk Categories
Unacceptable Risk
Prohibited: AI systems posing unacceptable risks to fundamental rights
Examples:
- Social scoring by governments
- Exploitation of vulnerabilities of specific groups
- Real-time remote biometric identification in public spaces
- Subliminal manipulation causing physical or psychological harm
High Risk
Strict requirements: AI systems with significant potential impact on health, safety, or fundamental rights
Examples:
- Critical infrastructure (transport, water, gas)
- Educational/vocational training
- Employment, workers management, access to self-employment
- Essential private/public services
- Law enforcement
- Migration, asylum, border control
- Administration of justice and democratic processes
Limited Risk
Transparency requirements: AI systems with specific transparency obligations
Examples:
- Chatbots
- Emotion recognition systems
- Biometric categorization systems
- Deep fakes
Minimal Risk
Voluntary codes: all other AI systems posing minimal risk
Examples:
- AI-enabled video games
- Spam filters
- Inventory management systems
- Manufacturing optimization tools
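As an illustration only, and not a legal determination, the four tiers above can be sketched as a lookup from an example use case to its obligation level. All of the use-case names and the default tier below are hypothetical; classifying a real system requires legal analysis of the Act's annexes.

```python
# Hypothetical sketch: mapping example AI use cases to the Act's four
# risk tiers and the obligation level each tier carries.
RISK_TIERS = {
    "social_scoring": "unacceptable",  # prohibited practice
    "cv_screening": "high",            # employment is a high-risk area
    "chatbot": "limited",              # transparency obligations apply
    "spam_filter": "minimal",          # voluntary codes of conduct
}

OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "strict requirements (risk management, oversight, documentation)",
    "limited": "transparency obligations",
    "minimal": "voluntary codes of conduct",
}

def obligations_for(use_case: str) -> str:
    """Return the tier and obligation level for an example use case."""
    tier = RISK_TIERS.get(use_case, "minimal")  # default is illustrative only
    return f"{tier}: {OBLIGATIONS[tier]}"
```

For instance, `obligations_for("chatbot")` yields the limited-risk transparency obligations, while a social-scoring system falls into the prohibited tier.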
Compliance Requirements
Risk Management System
Establish and maintain a risk management system for the entire lifecycle of the AI system
- Risk identification and analysis
- Risk evaluation methods
- Risk mitigation measures
- Documentation of risk assessment
- Regular monitoring and updates
Data Governance
Implement data quality management and governance practices
- Data quality criteria
- Relevant data properties
- Data preparation protocols
- Data examination for biases
- Data security measures
Technical Documentation
Maintain detailed technical documentation demonstrating compliance
- System architecture
- Development process
- Training methodologies
- Validation procedures
- Performance metrics
Record-Keeping
Maintain automatically generated logs of system activity throughout the system's lifetime
- System operations logs
- Error logs
- Access records
- Training data changes
- System modifications
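A minimal sketch of what such automated record-keeping might look like in practice, assuming a JSON-lines audit log; the function name and record fields are illustrative, not prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical sketch: each system event (operation, error, access,
# data change, modification) is written as one timestamped JSON record.
logger = logging.getLogger("ai_system_audit")
logger.setLevel(logging.INFO)

def record_event(event_type: str, detail: dict) -> str:
    """Serialize and log one audit record; returns the logged line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g. "operation", "error", "access"
        "detail": detail,
    }
    line = json.dumps(entry, sort_keys=True)
    logger.info(line)
    return line
```

Structured, append-only records like these make it straightforward to reconstruct system behaviour for a conformity assessment or incident investigation.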
Transparency
Ensure transparency and provide information to users
- System capabilities
- Intended purpose
- Performance limitations
- Human oversight measures
- Expected lifetime
Human Oversight
Implement appropriate human oversight measures
- Oversight procedures
- Training for human overseers
- Authority to override
- Monitoring protocols
- Incident response plans
Enforcement & Penalties
Administrative Fines
Up to €35M or 7% of total worldwide annual turnover, whichever is higher
For violations of the prohibited AI practices
Up to €15M or 3% of total worldwide annual turnover, whichever is higher
For non-compliance with other obligations
Up to €7.5M or 1% of total worldwide annual turnover, whichever is higher
For the supply of incorrect, incomplete, or misleading information
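Because each tier is "whichever is higher" of a fixed cap and a turnover percentage, the effective ceiling can be computed as a simple maximum. The function below is an illustrative sketch of that arithmetic, not legal advice; the example turnover figure is invented.

```python
def max_fine(cap_eur: float, pct: float, worldwide_turnover_eur: float) -> float:
    """Upper bound of an administrative fine: the fixed cap or the
    percentage of total worldwide annual turnover, whichever is higher."""
    return max(cap_eur, pct * worldwide_turnover_eur)

# For a company with €2bn worldwide turnover, a prohibited-practice
# violation is capped at max(€35M, 7% of €2bn) = €140M.
max_fine(35_000_000, 0.07, 2_000_000_000)  # 140_000_000.0
```

For smaller companies the fixed cap dominates: with €100M turnover, 7% is only €7M, so the ceiling stays at €35M.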