How to comply with the EU AI Act
The EU AI Act is the first comprehensive legal framework regulating the use of AI across EU member states. It was published in the Official Journal of the European Union on the 12th of July 2024 and entered into force on 1 August 2024. Its obligations phase in over a transition period: prohibitions on unacceptable-risk systems apply from February 2025, and most remaining provisions apply from August 2026. The following article outlines the steps member states and organisations can take to demonstrate compliance with the Act.
1. Understand your AI system and risk level
- Conduct an AI Audit:
- Inventory all AI systems in use or development.
- Identify their purpose, functionality, and criticality.
- Classify Risk Levels:
- Assess whether your AI system falls under unacceptable risk, high risk, limited risk, or minimal risk categories.
- Examples: recruitment tools may be high-risk, while chatbots may be limited-risk (see the classification sketch after this list).
- Evaluate Potential Impact:
- Consider how your AI systems interact with users and their implications for safety, rights, and privacy.
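To make the audit and classification steps concrete, here is a minimal inventory sketch in Python. Everything in it is an assumption for illustration: the `AISystemRecord` fields, the `USE_CASE_RISK` mapping, and the default-to-high rule are not taken from the Act and do not replace a legal risk assessment.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in an organisation-wide AI inventory."""
    name: str
    purpose: str          # what the system does and for whom
    in_production: bool   # deployed, or still in development
    use_case: str         # e.g. "recruitment", "chatbot"
    risk_level: RiskLevel

# Hypothetical mapping from use case to risk tier, loosely inspired by the
# Act's categories; an actual classification requires legal review.
USE_CASE_RISK = {
    "recruitment": RiskLevel.HIGH,
    "credit_scoring": RiskLevel.HIGH,
    "social_scoring": RiskLevel.UNACCEPTABLE,
    "chatbot": RiskLevel.LIMITED,
    "spam_filter": RiskLevel.MINIMAL,
}

def classify(use_case: str) -> RiskLevel:
    # Default to HIGH so unrecognised systems get reviewed, not ignored.
    return USE_CASE_RISK.get(use_case, RiskLevel.HIGH)

inventory = [
    AISystemRecord("cv-screener", "Rank job applicants", True,
                   "recruitment", classify("recruitment")),
    AISystemRecord("support-bot", "Answer customer questions", True,
                   "chatbot", classify("chatbot")),
]

for record in inventory:
    print(f"{record.name}: {record.risk_level.value}")
```

Defaulting unknown use cases to high-risk is a deliberately conservative choice here: it forces a human review rather than silently letting an unclassified system slip through the inventory.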
2. Implement risk management processes
- Establish a Risk Management System:
- Regularly monitor and evaluate risks associated with your AI systems (a minimal risk-register sketch follows this list).
- Conformity Assessment:
- For high-risk AI, conduct pre-market conformity assessments to ensure compliance with the Act’s standards.
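A risk management system needs a concrete artefact behind it. The sketch below shows a hypothetical risk register with periodic review checks; the fields and the 90-day cadence are assumptions, not requirements drawn from the Act.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RiskEntry:
    """One row in a lightweight risk register (fields are illustrative)."""
    system: str
    description: str
    severity: str                   # e.g. "low", "medium", "high"
    mitigation: str
    last_reviewed: date
    review_interval_days: int = 90  # assumed cadence, not mandated by the Act

    def review_due(self, today: date) -> bool:
        # A risk is due for review once its interval has elapsed.
        return today >= self.last_reviewed + timedelta(days=self.review_interval_days)

register = [
    RiskEntry("cv-screener", "Possible gender bias in candidate ranking",
              "high", "Quarterly fairness audit", date(2025, 1, 15)),
]

today = date(2025, 6, 1)
for entry in register:
    if entry.review_due(today):
        print(f"Review overdue: {entry.system} - {entry.description}")
```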
3. Focus on transparency and explainability
- Disclose AI Use:
- Inform users when they are interacting with an AI system.
- Label AI-generated content, such as deepfakes or text outputs from generative AI models (see the labelling sketch after this list).
- Provide Clear Documentation:
- Maintain detailed records of system architecture, training data, and decision-making processes.
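As a simple illustration of both disclosure duties, here is a sketch that labels generated text and announces a chatbot as automated. The label wording, the `support-bot` name, and the placement are assumptions, not formats prescribed by the Act.

```python
def label_ai_output(text: str, model_name: str) -> str:
    """Prepend a disclosure to AI-generated text.

    The wording and placement are illustrative: the Act requires disclosure
    of AI-generated content, but does not prescribe this exact format.
    """
    disclosure = f"[AI-generated content - produced by {model_name}]"
    return f"{disclosure}\n{text}"

def chat_greeting(bot_name: str) -> str:
    # Tell users up front that they are talking to an AI system.
    return f"Hi, I am {bot_name}, an automated assistant. How can I help?"

print(chat_greeting("support-bot"))
print(label_ai_output("Your parcel will arrive on Tuesday.", "support-bot"))
```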
4. Ensure robust data governance
- Data Quality:
- Use high-quality, representative training data and take documented steps to detect and mitigate bias (see the representativeness check sketched after this list).
- Compliance with GDPR:
- Align data practices with GDPR principles to protect user privacy and ensure lawful data usage.
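Data representativeness can be spot-checked in code. The sketch below compares group shares in a training set against assumed reference shares and flags large gaps; the threshold, groups, and reference proportions are all illustrative, and a real bias audit goes far beyond proportion checks.

```python
from collections import Counter

def representation_gaps(samples, reference, threshold=0.05):
    """Flag groups whose share of the training data deviates from an
    assumed reference population by more than `threshold`."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > threshold:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Illustrative data: group labels for 1,000 training examples.
training_labels = ["A"] * 800 + ["B"] * 150 + ["C"] * 50
reference_shares = {"A": 0.60, "B": 0.30, "C": 0.10}  # assumed population shares

print(representation_gaps(training_labels, reference_shares))
# -> {'A': {'observed': 0.8, 'expected': 0.6}, 'B': {'observed': 0.15, 'expected': 0.3}}
```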
5. Establish human oversight mechanisms
- Embed Human Control:
- Ensure that humans can intervene in or override AI decisions in critical scenarios (an oversight pattern is sketched after this list).
- Train Staff:
- Provide training for employees responsible for monitoring and managing AI systems.
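One common oversight pattern is routing uncertain cases to a person and letting a reviewer override any automated outcome. The sketch below illustrates that pattern; the score thresholds and decision labels are assumptions, not rules from the Act.

```python
from typing import Optional

def automated_decision(score: float) -> str:
    # Hypothetical thresholds: confident cases are automated,
    # everything in between is routed to a person.
    if score >= 0.8:
        return "approve"
    if score <= 0.2:
        return "reject"
    return "refer_to_human"

def decide(score: float, reviewer_decision: Optional[str] = None) -> str:
    """Issue a decision only when a human is in the loop for unclear cases."""
    if reviewer_decision is not None:
        return reviewer_decision  # a human decision always takes precedence
    decision = automated_decision(score)
    if decision == "refer_to_human":
        raise RuntimeError("Human review required before a decision is issued.")
    return decision

print(decide(0.9))             # clear case: handled automatically
print(decide(0.5, "approve"))  # borderline case: decided by a reviewer
```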
6. Leverage technical standards and best practices
- Adopt European Standards:
- Use harmonised technical standards developed by European Standardisation Organisations (ESOs) to guide system development and deployment.
- Use AI Regulatory Sandboxes:
- Test AI systems in regulatory sandboxes to refine functionality and compliance before full deployment.
7. Designate compliance and governance roles
- Create a Compliance Team:
- Assign roles for overseeing AI compliance, risk management, and documentation.
- Engage Legal and Ethical Experts:
- Consult with experts to interpret regulatory requirements and ethical considerations.
8. Stay updated on regulatory changes
- Monitor Updates:
- Track developments under the AI Act, particularly for emerging areas such as general-purpose AI (GPAI) and generative models.
- Participate in Industry Groups:
- Join associations or networks focused on AI governance to stay informed and influence standards.
9. Prepare for enforcement and reporting
- Audit-Ready Documentation:
- Maintain up-to-date and accessible documentation for regulators.
- Incident Reporting:
- Develop a process for promptly reporting serious incidents, as required for high-risk AI systems (a minimal logging sketch follows).
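To keep incident records consistent and audit-ready, it can help to capture them in a fixed structure from the start. Below is a minimal, hypothetical logging sketch; the fields, the file name, and the JSON-lines format are assumptions, not the Act's prescribed reporting channel.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class IncidentReport:
    """Record of a serious incident involving a high-risk system.

    The fields here are illustrative; the content and deadlines for real
    reports are set by the Act and the relevant market surveillance authority.
    """
    system: str
    occurred_at: str
    description: str
    affected_users: int
    corrective_action: str

def file_report(report: IncidentReport, path: str) -> None:
    # Append each report as one JSON line so the log stays audit-ready.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(report)) + "\n")

file_report(
    IncidentReport(
        system="cv-screener",
        occurred_at=datetime.now(timezone.utc).isoformat(),
        description="Systematic down-ranking of applicants over 50",
        affected_users=120,
        corrective_action="Model rolled back; fairness review scheduled",
    ),
    "incident_log.jsonl",
)
```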