Algorithm Recommendation Regulations
China's comprehensive framework for regulating algorithmic recommendation systems in internet services, set out in the Internet Information Service Algorithmic Recommendation Management Provisions, ensuring transparency, user control, and responsible deployment of AI technologies.
Key Points
- Effective from March 1, 2022
- Applies to all algorithm-based recommendation services in China
- Enforced by the Cyberspace Administration of China (CAC)
- Focuses on transparency, user rights, and content moderation
Algorithm Requirements
Algorithm Transparency
Service providers must disclose basic information about how their recommendation algorithms operate (a minimal sketch of such a disclosure follows the list below).
- Clear disclosure of algorithmic recommendation principles
- User-friendly opt-out mechanisms
- Explanation of key parameters affecting recommendations
- Regular algorithm audits
- Documentation of changes
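By way of illustration, the sketch below shows one way a provider might represent the disclosed information and a per-user opt-out switch. All names (`AlgorithmDisclosure`, `UserAlgorithmSettings`, the example service and parameters) are hypothetical and not taken from the regulation.

```python
# Minimal sketch (hypothetical names): a disclosure record for a recommendation
# algorithm plus a per-user opt-out flag, illustrating the kind of information
# a provider might publish and version over time.
from dataclasses import dataclass
from datetime import date


@dataclass
class AlgorithmDisclosure:
    """Public-facing summary of how a recommendation algorithm works."""
    service_name: str
    principles: str            # plain-language description of ranking logic
    key_parameters: list[str]  # e.g. ["watch time", "follows", "recency"]
    last_audited: date
    version: str


@dataclass
class UserAlgorithmSettings:
    """Per-user controls required by the transparency rules."""
    user_id: str
    personalization_enabled: bool = True  # opt-out switch

    def opt_out(self) -> None:
        # Switch the user to non-personalized (e.g. chronological) delivery.
        self.personalization_enabled = False


disclosure = AlgorithmDisclosure(
    service_name="short-video feed",
    principles="Ranks candidate videos by predicted interest and recency.",
    key_parameters=["watch time", "likes", "follows", "upload recency"],
    last_audited=date(2024, 1, 15),
    version="2.3",
)

settings = UserAlgorithmSettings(user_id="u-1001")
settings.opt_out()
print(disclosure.service_name, settings.personalization_enabled)
```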
User Control
Users must be given direct control over how algorithmic recommendations are applied to them (a sketch of such controls follows the list below).
- Tag management interface
- Preference settings
- History deletion options
- Personalization controls
- Data access rights
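A minimal sketch of such a control surface, assuming a hypothetical `UserControls` class; the field names and export format are illustrative only.

```python
# Minimal sketch (hypothetical names): a user-control surface covering tag
# management, preference toggles, history deletion, and data access. A real
# service would persist these choices and feed them back into ranking.
class UserControls:
    def __init__(self, user_id: str):
        self.user_id = user_id
        self.interest_tags: set[str] = set()   # tags the algorithm may use
        self.personalization_on = True
        self.history: list[str] = []           # item ids used for recommendations

    def remove_tag(self, tag: str) -> None:
        # Tag management: user deletes an interest tag inferred about them.
        self.interest_tags.discard(tag)

    def set_personalization(self, enabled: bool) -> None:
        # Preference setting: toggle personalized recommendations.
        self.personalization_on = enabled

    def clear_history(self) -> None:
        # History deletion: wipe the behavioral record used for ranking.
        self.history.clear()

    def export_data(self) -> dict:
        # Data access: return what the service holds about the user.
        return {
            "user_id": self.user_id,
            "interest_tags": sorted(self.interest_tags),
            "personalization_on": self.personalization_on,
            "history": list(self.history),
        }


controls = UserControls("u-1001")
controls.interest_tags.update({"travel", "cooking"})
controls.remove_tag("cooking")
controls.clear_history()
print(controls.export_data())
```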
Content Moderation
Platforms must prevent algorithmic discrimination and manipulation (a diversity re-ranking sketch follows the list below).
- Anti-discrimination measures
- Content diversity mechanisms
- Bias detection systems
- Fair recommendation practices
- Regular impact assessments
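One possible content diversity mechanism is a re-rank that caps how many consecutive items may share a category. The sketch below illustrates that idea only; the `diversify` function and the sample categories are hypothetical, not a prescribed method.

```python
# Minimal sketch (hypothetical data): a simple diversity re-rank that caps how
# many consecutive items can come from the same category, one way to implement
# a content diversity mechanism on top of a relevance-sorted candidate list.
from collections import deque


def diversify(ranked_items: list[dict], max_run: int = 2) -> list[dict]:
    """Reorder items so no more than `max_run` consecutive items share a category."""
    pending = deque(ranked_items)
    result: list[dict] = []
    deferred: list[dict] = []
    while pending:
        item = pending.popleft()
        recent = [it["category"] for it in result[-max_run:]]
        if len(recent) == max_run and all(c == item["category"] for c in recent):
            deferred.append(item)  # too many in a row; hold for a later slot
        else:
            result.append(item)
            # Retry deferred items now that the run is broken.
            pending = deque(deferred) + pending
            deferred = []
    return result + deferred  # append leftovers if no slot was found


candidates = [
    {"id": 1, "category": "gossip"}, {"id": 2, "category": "gossip"},
    {"id": 3, "category": "gossip"}, {"id": 4, "category": "news"},
    {"id": 5, "category": "science"},
]
print([it["id"] for it in diversify(candidates)])  # breaks up the gossip run
```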
Compliance Requirements
Risk Management System
Establish and maintain a risk management system covering the entire lifecycle of the AI system (a risk-register sketch follows the list below).
- Risk identification and analysis
- Risk evaluation methods
- Risk mitigation measures
- Documentation of risk assessment
- Regular monitoring and updates
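As one way to document identified risks and keep them under review, the sketch below models a risk register entry with a simple likelihood-times-severity score. The `RiskEntry` structure, the scoring scale, and the example risk are hypothetical.

```python
# Minimal sketch (hypothetical names): a risk register entry with a simple
# likelihood x severity score and a review date, combining identification,
# evaluation, mitigation, and documentation in one record.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RiskEntry:
    risk_id: str
    description: str
    likelihood: int            # 1 (rare) .. 5 (almost certain)
    severity: int              # 1 (negligible) .. 5 (critical)
    mitigations: list[str] = field(default_factory=list)
    next_review: date | None = None

    @property
    def score(self) -> int:
        # Simple evaluation method: likelihood multiplied by severity.
        return self.likelihood * self.severity


register = [
    RiskEntry(
        risk_id="R-001",
        description="Feedback loop narrows content exposure for new users",
        likelihood=4,
        severity=3,
        mitigations=["diversity re-rank", "cold-start exploration quota"],
        next_review=date(2024, 6, 1),
    ),
]
# Monitoring: surface the highest-scoring risks for the next review cycle.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(risk.risk_id, risk.score, risk.next_review)
```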
Data Governance
Implement data quality management and governance practices (a data-examination sketch follows the list below).
- Data quality criteria
- Relevant data properties
- Data preparation protocols
- Data examination for biases
- Data security measures
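One concrete form of examining data for biases is checking group representation before training. The sketch below does this for a single categorical field; the `representation_report` function, the default 5% floor, and the sample rows are all illustrative assumptions.

```python
# Minimal sketch (hypothetical data): a basic data-examination step that flags
# groups that are badly under-represented in a training set.
from collections import Counter


def representation_report(records: list[dict], field_name: str, floor: float = 0.05) -> dict:
    """Return each group's share of the data and flag groups below `floor`."""
    counts = Counter(rec[field_name] for rec in records)
    total = sum(counts.values())
    return {
        group: {"share": n / total, "under_represented": n / total < floor}
        for group, n in counts.items()
    }


training_rows = [
    {"user_region": "urban"}, {"user_region": "urban"},
    {"user_region": "urban"}, {"user_region": "rural"},
]
print(representation_report(training_rows, "user_region", floor=0.30))
```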
Technical Documentation
Maintain detailed technical documentation demonstrating compliance.
- System architecture
- Development process
- Training methodologies
- Validation procedures
- Performance metrics
Record-Keeping
Maintain automatically generated logs of system activity and keep records of events throughout operation (a logging sketch follows the list below).
- System operations logs
- Error logs
- Access records
- Training data changes
- System modifications
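A minimal sketch of automated record-keeping as an append-only JSON-lines log; the file path, event types, and field names are assumptions, and a production system would add integrity protection and retention controls.

```python
# Minimal sketch (hypothetical names): an append-only JSON-lines audit log for
# operations, errors, access events, and model/data changes. Each record carries
# a timestamp and an event type so it can be filtered later.
import json
import time
from pathlib import Path

LOG_PATH = Path("audit_log.jsonl")  # assumed local file; real systems use tamper-evident storage


def record_event(event_type: str, detail: dict) -> None:
    """Append one structured record; event_type is e.g. 'operation', 'error',
    'access', 'training_data_change', or 'system_modification'."""
    entry = {"ts": time.time(), "type": event_type, **detail}
    with LOG_PATH.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry, ensure_ascii=False) + "\n")


record_event("system_modification", {"component": "ranker", "version": "2.4"})
record_event("access", {"operator": "auditor-7", "resource": "model-weights"})
```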
Transparency
Ensure transparency by providing users with clear information about the system (an example information record follows the list below).
- System capabilities
- Intended purpose
- Performance limitations
- Human oversight measures
- Expected lifetime
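One way to package this information for users is a simple, versioned record rendered on a settings or help page. The fields and wording below are illustrative, not prescribed by the regulation.

```python
# Minimal sketch (hypothetical fields): a user-facing system information record
# covering the items listed above; how it is rendered is left to the provider.
system_info = {
    "capabilities": "Ranks and recommends short videos from followed and suggested creators.",
    "intended_purpose": "Content discovery within the app; not for credit, hiring, or legal decisions.",
    "performance_limitations": "Cold-start users receive less relevant results in their first sessions.",
    "human_oversight": "Trust-and-safety reviewers can demote or remove any recommended item.",
    "expected_lifetime": "Reviewed and re-validated every 12 months.",
}

for key, value in system_info.items():
    print(f"{key}: {value}")
```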
Human Oversight
Implement appropriate human oversight measures (an override-hook sketch follows the list below).
- Oversight procedures
- Training for human overseers
- Authority to override
- Monitoring protocols
- Incident response plans
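To illustrate override authority and monitoring, the sketch below wraps an automated decision so that a human reviewer's verdict always replaces the machine's and every final decision is logged. The `with_human_override` helper and the sample reviewer rule are hypothetical.

```python
# Minimal sketch (hypothetical names): a human-override gate in front of an
# automated decision. The overseer's verdict always wins and the intervention
# is recorded, illustrating "authority to override" plus basic monitoring.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Decision:
    item_id: str
    recommended: bool
    reason: str


def with_human_override(
    automated: Callable[[str], Decision],
    reviewer: Callable[[Decision], Optional[Decision]],
    audit: list,
) -> Callable[[str], Decision]:
    """Wrap an automated decision function so a human reviewer can replace its output."""
    def decide(item_id: str) -> Decision:
        machine = automated(item_id)
        human = reviewer(machine)          # None means "no intervention"
        final = human if human is not None else machine
        audit.append(final)                # monitoring: every final decision is logged
        return final
    return decide


audit_trail: list = []
auto = lambda item_id: Decision(item_id, recommended=True, reason="high predicted engagement")
review = lambda d: Decision(d.item_id, False, "manually demoted") if d.item_id == "v-99" else None

decide = with_human_override(auto, review, audit_trail)
print(decide("v-99").recommended)  # False: human override applied
```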
Prohibited Practices
Social Scoring
AI systems used by public authorities for evaluating or classifying the trustworthiness of natural persons based on their social behavior or personality characteristics
Examples:
- Mass surveillance systems
- Behavior prediction for social ranking
- Automated social credit systems
Exploitation of Vulnerabilities
AI systems that exploit vulnerabilities of specific groups of persons due to their age, disability, social or economic situation
Examples:
- Targeted manipulation of elderly people
- Exploitation of children's behavior
- Discriminatory targeting of vulnerable groups
Biometric Identification
Real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes
Examples:
- Real-time facial recognition in public spaces
- Live biometric tracking systems
- Automated public surveillance systems
Exceptions:
- Search for victims of crime
- Prevention of imminent terrorist threats
- Detection of serious criminal offenses
Subliminal Manipulation
AI systems deploying subliminal techniques to materially distort behavior in a manner causing physical or psychological harm
Examples:
- Unconscious behavior manipulation
- Harmful psychological targeting
- Covert influence systems