
Deep Synthesis Technology Regulations

China's regulatory framework for deep synthesis technology, formally the Provisions on the Administration of Deep Synthesis of Internet Information Services, aimed at ensuring responsible development and protecting against misuse of AI-generated content.

Key Points

  • Effective from January 10, 2023
  • Applies to providers of deep synthesis services
  • Enforced by the Cyberspace Administration of China (CAC)
  • Focuses on content authenticity and user protection

Deep Synthesis Requirements

Content Labeling

Clear identification of AI-generated or altered content; a labeling sketch follows the list below.

  • Watermarking requirements
  • Clear AI content indicators
  • Metadata tagging standards
  • Source attribution
  • Modification history
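
A minimal sketch of how such a label could be attached in practice, assuming a JSON metadata record stored alongside each generated asset; the field names (is_synthetic, generator_id, modification_history) are illustrative, not taken from the regulation or any particular metadata standard.

    # Illustrative provenance record for a generated asset; the schema is assumed.
    import hashlib
    import json
    from datetime import datetime, timezone

    def build_synthesis_label(asset_bytes: bytes, generator_id: str,
                              modifications: list[str]) -> dict:
        """Return a metadata record marking content as AI-generated."""
        return {
            "is_synthetic": True,                     # explicit AI-content indicator
            "generator_id": generator_id,             # source attribution
            "created_at": datetime.now(timezone.utc).isoformat(),
            "content_sha256": hashlib.sha256(asset_bytes).hexdigest(),
            "modification_history": modifications,    # what was altered, in order
            "visible_label": "AI-generated content",  # text rendered on screen
        }

    if __name__ == "__main__":
        record = build_synthesis_label(b"<rendered image bytes>", "demo-model-v1",
                                       ["face swap", "background replacement"])
        print(json.dumps(record, indent=2))  # e.g. stored as a sidecar or embedded tag

In a real pipeline the same record would typically be embedded in the file's native metadata or a watermark payload rather than printed.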

Security Controls

Robust security measures to prevent misuse; an encryption sketch follows the list below.

  • Data encryption
  • Access controls
  • Security audits
  • Incident response plans
  • Vulnerability assessments
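
A minimal at-rest encryption sketch, assuming the third-party Python cryptography package (Fernet, i.e. AES with HMAC); key handling is deliberately simplified here and would normally go through a key-management service rather than a locally generated key.

    # Encrypt records before storage; requires `pip install cryptography`.
    from cryptography.fernet import Fernet

    def encrypt_record(plaintext: bytes, key: bytes) -> bytes:
        """Encrypt a single record before writing it to storage."""
        return Fernet(key).encrypt(plaintext)

    def decrypt_record(token: bytes, key: bytes) -> bytes:
        """Decrypt a record for an authorized caller."""
        return Fernet(key).decrypt(token)

    if __name__ == "__main__":
        key = Fernet.generate_key()          # in practice, fetched from a key store
        token = encrypt_record(b"training sample #42", key)
        assert decrypt_record(token, key) == b"training sample #42"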

User Protection

Safeguards for personal information and user rights; a consent-handling sketch follows the list below.

  • Consent management
  • Privacy controls
  • Data subject rights
  • Opt-out mechanisms
  • Transparency requirements
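
A sketch of consent management with an immediate opt-out path, assuming an in-memory ledger keyed by user and purpose; the class and field names are hypothetical, not prescribed by the regulation.

    # Toy consent ledger: latest decision per (user, purpose) wins.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ConsentRecord:
        user_id: str
        purpose: str                 # e.g. "face editing", "voice cloning"
        granted: bool
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    class ConsentLedger:
        def __init__(self) -> None:
            self._records: dict[tuple[str, str], ConsentRecord] = {}

        def record(self, rec: ConsentRecord) -> None:
            self._records[(rec.user_id, rec.purpose)] = rec

        def has_consent(self, user_id: str, purpose: str) -> bool:
            rec = self._records.get((user_id, purpose))
            return bool(rec and rec.granted)

        def opt_out(self, user_id: str, purpose: str) -> None:
            """Withdrawal takes effect immediately for new processing."""
            self.record(ConsentRecord(user_id, purpose, granted=False))

    if __name__ == "__main__":
        ledger = ConsentLedger()
        ledger.record(ConsentRecord("u123", "voice cloning", granted=True))
        ledger.opt_out("u123", "voice cloning")
        print(ledger.has_consent("u123", "voice cloning"))  # False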

Compliance Requirements

Risk Management System

Establish and maintain a risk management system covering the entire lifecycle of the AI system.

  • Risk identification and analysis
  • Risk evaluation methods
  • Risk mitigation measures
  • Documentation of risk assessment
  • Regular monitoring and updates

Data Governance

Implement data quality management and governance practices; a bias-check sketch follows the list below.

  • Data quality criteria
  • Relevant data properties
  • Data preparation protocols
  • Data examination for biases
  • Data security measures
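
A toy example of examining a dataset for representation skew, which is one narrow slice of bias examination; the group labels and the 10% floor are assumptions chosen for illustration.

    # Flag groups that are underrepresented in a labeled dataset.
    from collections import Counter

    def group_shares(records: list[dict], group_key: str) -> dict[str, float]:
        """Share of each group in the dataset, as a fraction of all records."""
        counts = Counter(r[group_key] for r in records)
        total = sum(counts.values())
        return {g: n / total for g, n in counts.items()}

    def flag_underrepresented(shares: dict[str, float], floor: float = 0.10) -> list[str]:
        return [g for g, s in shares.items() if s < floor]

    if __name__ == "__main__":
        data = ([{"age_band": "18-30"}] * 70 + [{"age_band": "31-60"}] * 27
                + [{"age_band": "60+"}] * 3)
        shares = group_shares(data, "age_band")
        print(shares)                         # {'18-30': 0.7, '31-60': 0.27, '60+': 0.03}
        print(flag_underrepresented(shares))  # ['60+']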

Technical Documentation

Maintain detailed technical documentation demonstrating compliance.

  • System architecture
  • Development process
  • Training methodologies
  • Validation procedures
  • Performance metrics

Record-Keeping

Maintain automated record-keeping and logs of system activity; an audit-log sketch follows the list below.

  • System operations logs
  • Error logs
  • Access records
  • Training data changes
  • System modifications
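
A minimal append-only audit log in JSON Lines form; the event schema is an assumption chosen for readability, not a format mandated by the requirements above.

    # Append one immutable audit record per system event.
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def log_event(logfile: Path, actor: str, action: str, detail: dict) -> None:
        event = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": actor,          # user or service account
            "action": action,        # e.g. "model.update", "access.read"
            "detail": detail,
        }
        with logfile.open("a", encoding="utf-8") as fh:
            fh.write(json.dumps(event) + "\n")

    if __name__ == "__main__":
        log_event(Path("audit.log"), "ops-bot", "training_data.change",
                  {"dataset": "faces_v2", "added": 1200, "removed": 15})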

Transparency

Ensure transparency and provide information to users; a disclosure sketch follows the list below.

  • System capabilities
  • Intended purpose
  • Performance limitations
  • Human oversight measures
  • Expected lifetime
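
One way to make these disclosures machine-readable is a simple system card; the structure and example values below are hypothetical, and in practice the same content would be rendered in the product UI or documentation shown to end users.

    # Hypothetical user-facing disclosure record mirroring the items above.
    import json

    SYSTEM_DISCLOSURE = {
        "system_capabilities": "Synthesizes and edits human voices from text prompts.",
        "intended_purpose": "Accessibility narration and dubbing with speaker consent.",
        "performance_limitations": "Accuracy degrades for low-resource languages.",
        "human_oversight": "All published outputs pass a human review queue.",
        "expected_lifetime": "Supported until 2027-12-31, subject to re-assessment.",
    }

    if __name__ == "__main__":
        print(json.dumps(SYSTEM_DISCLOSURE, indent=2))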

Human Oversight

Implement appropriate human oversight measures; a review-gate sketch follows the list below.

  • Oversight procedures
  • Training for human overseers
  • Authority to override
  • Monitoring protocols
  • Incident response plans
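
A sketch of a human-in-the-loop gate with override authority, assuming an upstream automated risk score and a reviewer whose decision is final; the names and the 0.7 threshold are illustrative only.

    # Route risky outputs to a human reviewer who can approve or block them.
    from dataclasses import dataclass
    from enum import Enum

    class Decision(Enum):
        APPROVE = "approve"
        OVERRIDE = "override"   # human blocks or replaces the system output

    @dataclass
    class ReviewItem:
        output_id: str
        payload: str
        risk_score: float       # produced upstream by automated checks

    def route(item: ReviewItem, threshold: float = 0.7) -> str:
        """Send risky outputs to human review; release the rest automatically."""
        return "human_review" if item.risk_score >= threshold else "auto_release"

    def apply_human_decision(item: ReviewItem, decision: Decision) -> str:
        # The reviewer's call is final and captured by the record-keeping layer.
        return "published" if decision is Decision.APPROVE else "blocked"

    if __name__ == "__main__":
        item = ReviewItem("out-001", "synthetic news clip", risk_score=0.91)
        print(route(item))                                    # human_review
        print(apply_human_decision(item, Decision.OVERRIDE))  # blocked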

Prohibited Practices

Social Scoring

AI systems used by public authorities to evaluate or classify the trustworthiness of natural persons based on their social behavior or personality characteristics.

Examples:

  • Mass surveillance systems
  • Behavior prediction for social ranking
  • Automated social credit systems

Exploitation of Vulnerabilities

AI systems that exploit the vulnerabilities of specific groups of persons due to their age, disability, or social or economic situation.

Examples:

  • Targeted manipulation of elderly people
  • Exploitation of children's behavior
  • Discriminatory targeting of vulnerable groups

Biometric Identification

Real-time remote biometric identification systems used in publicly accessible spaces for law enforcement purposes.

Examples:

  • Real-time facial recognition in public spaces
  • Live biometric tracking systems
  • Automated public surveillance systems

Exceptions:

  • Search for victims of crime
  • Prevention of imminent terrorist threats
  • Detection of serious criminal offenses

Subliminal Manipulation

AI systems that deploy subliminal techniques to materially distort a person's behavior in a manner that causes physical or psychological harm.

Examples:

  • Unconscious behavior manipulation
  • Harmful psychological targeting
  • Covert influence systems