A Breakdown of the EU AI Act

On 12 July 2024, the EU AI Act was published in the Official Journal of the European Union, and it officially came into force on 1 August 2024.
The EU AI Act has acted as a catalyst in the digital world of AI and technology, urging other nations to follow in its footsteps and take AI regulation seriously. Because it has direct effect in each EU member state, the Act also ensures that every member state takes proactive steps to regulate AI.
Let's begin this article by breaking down the key components of the EU AI Act:
Key Components
- Risk-Based Classification
- High-Risk AI Requirements
- Prohibited AI Practices
- Regulatory Sandboxes
- Governance and Oversight
- Penalties for Non-Compliance
- Implementation Timeline
Risk-Based Classification
The risk classification system in the EU AI Act categorizes AI systems based on the potential risk they pose to safety, privacy, and fundamental rights. The Act uses this classification to determine the level of regulation and oversight needed for each type of AI system; the prohibited practices sit in Chapter II, and the high-risk regime in Chapter III. The system is divided into four levels of risk (a small illustrative sketch follows the four tiers below):
Minimal Risk:
- AI systems that pose little to no risk to individuals or society.
- These are generally exempt from the Act's regulatory requirements.
- Examples: AI used in video games or spam filters.
Limited Risk:
- AI systems that have limited potential for harm but are subject to transparency obligations.
- These systems must inform users that AI is being used (e.g., chatbots must disclose that users are interacting with AI) and, where relevant, explain its capabilities and limitations.
- Examples: chatbots, or AI-driven recommendations in online shopping and social media platforms.
High Risk:
- AI systems that pose significant risks to people’s health, safety, or fundamental rights.
- These systems are subject to strict regulatory requirements, including risk management, transparency, documentation, human oversight, and compliance assessments.
- Examples: AI in critical infrastructure, law enforcement, healthcare, and autonomous vehicles.
Unacceptable Risk:
- AI systems deemed to present unacceptable risks that are prohibited under the Act.
- These systems are banned entirely due to their potential to harm individuals or society, such as AI systems that manipulate human behavior, exploit vulnerable groups, or conduct social scoring.
- Examples: real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions) or social scoring systems.
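To make the tiered structure concrete, here is a minimal illustrative sketch in Python. The four tier names follow the Act, but the RiskTier enum, the obligation summaries, and the helper function are my own shorthand for illustration; actual classification turns on the legal tests in the Act, not on a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical mapping from tier to the broad regulatory consequence.
# The real obligations are spread across Chapters II-IV of the Act.
TIER_OBLIGATIONS = {
    RiskTier.MINIMAL: "Largely exempt from the Act's requirements",
    RiskTier.LIMITED: "Transparency obligations (e.g., disclose AI use)",
    RiskTier.HIGH: "Full Chapter III requirements (risk management, "
                   "documentation, human oversight, conformity assessment)",
    RiskTier.UNACCEPTABLE: "Prohibited outright under Article 5",
}

def obligations_for(tier: RiskTier) -> str:
    """Return the broad consequence attached to a risk tier."""
    return TIER_OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value:>12}: {obligations_for(tier)}")
```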
High-Risk AI Systems
High-risk AI systems are governed by Chapter III of the EU AI Act. This chapter is structured into five sections, as follows:
Section 1 - Classification of High-Risk AI Systems: A system is classified as high risk where (1) it is intended to be used as a safety component of a product, and (2) that product is required to undergo a third-party conformity assessment with a view to being placed on the market; AI systems falling within the use cases listed in Annex III (such as law enforcement, education, and employment) are also classified as high risk. A system shall not be considered high risk where it does not pose a significant risk of harm to the health, safety, or fundamental rights of a person, including by not materially influencing the outcome of decision-making.
Section 2 - Requirements for High-Risk AI Systems: If a system is identified as high risk, there are seven strict requirements it must meet (a rough compliance-tracking sketch in Python follows the list):
1- Risk Management System: an RMS must be established, implemented, documented, and maintained throughout the lifecycle of the AI system.
2- Data Governance: systems that involve training data must comply with quality criteria.
3- Technical Documentation: drawn up before deployment and kept up to date.
4- Record Keeping: high-risk systems shall allow for the automatic recording of key events (logs) over their lifetime.
5- Transparency: systems must be accompanied by instructions for use in an appropriate digital format, or otherwise include instructions that are clear to deployers.
6- Human Oversight: systems must be designed so that they can be effectively overseen by natural persons.
7- Accuracy, Robustness, and Cybersecurity: systems must achieve appropriate levels of accuracy, robustness, and cybersecurity, and perform consistently in those respects throughout their lifecycle.
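To illustrate how a provider might track these seven requirements internally, here is a minimal Python sketch. The field names mirror the Section 2 headings (Articles 9-15), but the HighRiskChecklist class itself is hypothetical and is not something the Act prescribes.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskChecklist:
    """Hypothetical internal tracker for the seven Chapter III,
    Section 2 requirements. Field names mirror the Act's headings;
    the class itself is illustrative, not part of the Act."""
    risk_management_system: bool = False        # Article 9
    data_governance: bool = False               # Article 10
    technical_documentation: bool = False       # Article 11
    record_keeping: bool = False                # Article 12
    transparency: bool = False                  # Article 13
    human_oversight: bool = False               # Article 14
    accuracy_robustness_security: bool = False  # Article 15

    def outstanding(self) -> list[str]:
        """List the requirements not yet marked as satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

checklist = HighRiskChecklist(risk_management_system=True, data_governance=True)
print("Outstanding requirements:", checklist.outstanding())
```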
Section 3 - Obligations of Providers and Deployers: Deployers are assigned 12 obligations, which ensure standards are met. Providers are required to establish a quality management system, and the Act outlines the aspects that it must include.
Section 4 - Notifying Authorities and Notified Bodies: Each member state shall designate or establish a notifying authority responsible for the procedures for assessing, designating, and notifying conformity assessment bodies.
Section 5 - Standards, Conformity Assessment, Certificates, Registration: This section covers harmonised standards, the conformity assessment procedures high-risk systems must undergo, the certificates issued by notified bodies, and the registration of high-risk systems in the EU database.
Prohibited AI Practices
Chapter II, Article 5 of the EU AI Act covers the prohibited AI practices. The article prohibits the following:
1- Systems that deploy subliminal techniques beyond a person's consciousness, or manipulative or deceptive techniques, to materially distort a person's behaviour.
2- Systems that exploit the vulnerabilities of a person due to their age, disability, or specific social or economic situation.
3- Social scoring that could lead to unjust discrimination or unfair treatment of individuals based on their behaviour or characteristics.
4- AI systems that make risk assessments of individuals to assess or predict their risk of committing a criminal offence, based solely on profiling or personality traits.
5- AI systems that create facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
6- AI systems that infer emotions in the workplace or in educational institutions (except for medical or safety reasons).
7- AI systems for biometric categorisation that infer sensitive attributes such as race, political opinions, or sexual orientation.
8- Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions.
Regulatory Sandboxes
The purpose of regulatory sandboxes is to allow developers to test their AI systems in a controlled environment. This allows providers to test their products under real-world conditions whilst ensuring compliance with the Act's provisions. The requirements surrounding regulatory sandboxes are set out in Chapter VI of the EU AI Act, principally Article 57. There are certain eligibility criteria which must be met in order to use a regulatory sandbox; a toy sketch of how such criteria might be screened follows the list below.
- The eligibility criteria are as follows: sandboxes are typically aimed at innovative AI systems, particularly high-risk ones, that may not yet comply with every provision of the EU AI Act.
- The entities involved must meet certain requirements, such as demonstrating a commitment to compliance and having a clearly defined use case for the testing.
- The testing can be limited to specific use cases or applications that are considered particularly high-risk or novel.
- National authorities or designated bodies will oversee these sandboxes to ensure that the trials adhere to ethical guidelines and regulatory requirements and address privacy concerns.
- Sandboxes might have dedicated teams from regulatory bodies who work closely with the innovators to monitor and advise on compliance throughout the testing period.
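Purely as an illustration of how an intake process might screen these criteria, here is a toy Python sketch. The SandboxApplication fields and the is_eligible check are invented for illustration; the Act does not define any such data structure, and real admission decisions rest with the national authorities.

```python
from dataclasses import dataclass

@dataclass
class SandboxApplication:
    """Hypothetical intake record for a sandbox applicant.
    Fields are illustrative, not drawn from the Act."""
    is_innovative: bool            # novel system or use case
    has_defined_use_case: bool     # clearly scoped testing plan
    committed_to_compliance: bool  # demonstrated compliance commitment

def is_eligible(app: SandboxApplication) -> bool:
    """Rough screen mirroring the criteria listed above."""
    return (app.is_innovative
            and app.has_defined_use_case
            and app.committed_to_compliance)

print(is_eligible(SandboxApplication(True, True, True)))  # True
```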
Governance and Oversight
Chapter VII of the EU AI Act establishes the governance and oversight mechanisms for AI systems.
Article 64 covers the creation of the AI Office, through which the Commission will develop the expertise and capabilities required. In addition, a European Artificial Intelligence Board must be established; this is governed under Article 65. The Board is composed of one representative from each member state. It has an array of tasks, but its role is specifically to advise and assist the Commission and member states in order to facilitate the proper application of the Regulation.
Under Article 67 an advisory forum must also be established, tasked with providing technical expertise and advising the Commission and the Board. The forum will prepare an annual report, which will be made publicly available.
Under Article 68, the Commission must establish a scientific panel of independent experts, known as the Scientific Panel, to support enforcement activities under the Regulation. The Scientific Panel shall also advise the AI Office on four tasks specified in the EU AI Act, in particular supporting the implementation and enforcement of the Regulation with regard to general-purpose AI models.
Finally, each member state must designate a national competent authority, comprising at least one notifying authority and one market surveillance authority; these authorities must exercise their powers independently.
These are all of the governance bodies required under the EU AI Act.
Penalties for Non-Compliance
There are a number of specific penalties laid down in the EU AI Act, under Chapter XII. I will now list the specific penalties:
- Non-compliance with the prohibited AI practices is subject to a fine of up to EUR 35,000,000 or 7% of total worldwide annual turnover, whichever is higher.
- Non-compliance with other obligations, including those related to notified bodies, is subject to fines of up to EUR 15,000,000 or 3% of total worldwide annual turnover, whichever is higher.
- Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities is subject to a fine of up to EUR 7,500,000 or 1% of total worldwide annual turnover, whichever is higher.
In addition to these penalties, the EU AI Act also lays down administrative fines on Union institutions, bodies, offices, and agencies. These are set out as follows:
- The European Data Protection Supervisor may impose administrative fines on Union institutions, bodies, offices, and agencies falling within the scope of the Regulation.
- Non-compliance with any of the prohibited AI practices is subject to fines of up to EUR 1,500,000.
- Non-compliance with other obligations under the Regulation is subject to fines of up to EUR 750,000.
There are also specific penalties relating to general-purpose AI models: the Commission may fine their providers up to EUR 15,000,000 or 3% of total worldwide annual turnover, whichever is higher.
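Because each corporate fine is expressed as the higher of a fixed cap and a share of worldwide annual turnover, the applicable ceiling reduces to a simple max() calculation. Here is a minimal sketch, assuming turnover is known in euros; the tier table simply mirrors the figures listed above.

```python
# Maximum administrative fines under Chapter XII, expressed as
# (fixed cap in EUR, share of total worldwide annual turnover).
# The applicable maximum is whichever of the two is higher.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),  # Article 5 violations
    "other_obligations":    (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a violation category,
    given a company's total worldwide annual turnover in euros."""
    fixed_cap, turnover_share = FINE_TIERS[violation]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# A company with EUR 2 billion turnover: 7% = EUR 140m exceeds the
# EUR 35m cap, so the higher figure applies.
print(f"{max_fine('prohibited_practices', 2_000_000_000):,.0f}")  # 140,000,000
```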
Implementation Timeline
Phase 1: Pre-Adoption and Legislative Approval
April 2021: The European Commission proposed the AI Act, the first regulatory framework focusing on AI in the EU, aiming to create a risk-based classification system and ensure AI systems comply with fundamental rights.
2021 – 2023: Negotiations and revisions took place within the European Parliament and the Council of the European Union. Member states and industry stakeholders provided feedback, and draft amendments were made.
Late 2023 – Early 2024: Final approval of the EU AI Act by the European Parliament and the Council of the EU. This marked the point when the legislation was formally adopted, and the countdown to its implementation began.
Phase 2: Preparation for Enforcement
August 2024: The EU AI Act enters into force. Its provisions apply in stages, giving businesses and stakeholders a grace period to adapt to the new requirements.
2024 – 2025: The European Commission will work on drafting secondary regulations and guidelines to support the implementation of the AI Act. This includes defining technical standards for high-risk AI systems, setting up the AI Regulatory Sandbox for testing, and providing guidance for national competent authorities.
2024: The European Artificial Intelligence Board (AI Board) is established, overseeing the AI Act’s enforcement, providing expertise on cross-border AI issues, and supporting collaboration between member states.
Phase 3: Compliance and Enforcement
February 2025:
- The prohibitions on unacceptable-risk AI practices begin to apply.
August 2025:
- Obligations for providers of general-purpose AI models, along with the governance rules, take effect.
August 2026:
- Full enforcement begins for high-risk AI systems and most other specified AI technologies.
- Companies are required to comply with the transparency, safety, and accountability obligations specified in the Act. This includes risk assessments, data management, and ensuring the AI systems are auditable.
- Businesses that offer high-risk AI systems must ensure that their systems comply with the AI Act's requirements (e.g., data governance, risk management, human oversight). These businesses may also need to register their AI systems in the EU database of high-risk AI applications.
2026 – 2027:
- Ongoing compliance audits and reporting by companies, overseen by national regulators. This includes checks for compliance with data privacy, transparency, fairness, and accountability requirements.
- Regulatory bodies will begin conducting periodic inspections and audits of high-risk AI systems in various sectors (e.g., healthcare, finance, autonomous vehicles).