On 12 July 2024, the EU AI Act (Regulation (EU) 2024/1689) was officially published in the Official Journal of the European Union, and on 1 August 2024 it entered into force. The Act establishes the first comprehensive horizontal legal framework for the regulation of AI in the EU and has direct effect in each Member State.
A crucial aspect of this Regulation is its risk-based approach, which categorises each AI system by the level of risk it poses: unacceptable risk (prohibited practices), high risk, limited risk (subject to transparency obligations), and minimal risk. The category an AI system falls into determines the specific set of requirements it must comply with.
Table of Contents
Preamble
- Containing recitals 1-180
Enacting Terms:
- Chapter I – General Provisions
- Chapter II – Prohibited AI Practices
- Chapter III – High-Risk AI Systems
- Chapter IV – Transparency
- Chapter V – General-Purpose AI Models
- Chapter VI – Measures in Support of Innovation
- Chapter VII – Governance
- Chapter VIII – EU Database for High-Risk AI Systems
- Chapter IX – Post-Market Monitoring, Information Sharing and Market Surveillance
- Chapter X – Codes of Conduct and Guidelines
- Chapter XI – Delegation of Powers and Committee Procedure
- Chapter XII – Penalties
- Chapter XIII – Final Provisions
Annexes I–XIII
Recitals
The EU AI Act contains 180 recitals, which set out the context and guiding principles of the framework (the numbers in parentheses below refer to individual recitals). Some of the key points outlined in the recitals are as follows:
Need for a Legal Framework: There is a need for a Union legal framework establishing harmonized rules on AI, in order to foster its development and ensure a high level of protection of public interests such as health, safety, and fundamental rights (8).
Risk-Based Approach: The regulation employs a risk-based approach to categorize AI systems, particularly focusing on high-risk systems, which require stringent compliance measures (27).
Protection of Fundamental Rights: The regulation emphasizes the protection of fundamental rights, including privacy, data protection, and non-discrimination, particularly in the context of high-risk AI systems (10, 13).
Transparency and Accountability: There is a strong focus on ensuring transparency in AI systems, especially those that interact with citizens or affect their rights (35).
Prohibition of Certain Practices: The regulation outlines specific prohibitions, including the use of AI systems for social scoring and real-time remote biometric identification in publicly accessible spaces for law enforcement, except under narrowly defined circumstances (31, 33).
Post-Market Monitoring: Providers of high-risk AI systems are required to implement post-market monitoring systems to manage risks and ensure compliance after the systems are deployed (39).
Role of Member States: Member States are tasked with enforcing the regulation and must designate competent authorities to oversee compliance (39).
Evaluation and Review: The regulation mandates regular evaluations and reviews to assess its effectiveness and adapt to technological developments (174).
Chapter I – General Provisions
Chapter I contains Articles 1–4. The purpose of this chapter is to set out the foundational objectives of the Regulation.
Article 1 – Subject Matter: This article states that the Regulation's purpose is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, and fundamental rights.
Article 2 – Scope: This article sets out when the Regulation applies. It covers various entities, including providers placing AI systems on the market in the Union, deployers of AI systems established in the Union, and providers and deployers located in third countries where the output produced by the AI system is used in the Union.
Article 3 – Definitions: This article sets out the key definitions for specific terms used throughout the Regulation, such as 'AI system', 'provider', and 'deployer'.
Article 4 – AI Literacy: This article requires providers and deployers to take measures to ensure a sufficient level of AI literacy among their staff and other persons operating AI systems on their behalf.
Chapter II – Prohibited AI Practices
Chapter II of the EU AI Act outlines the prohibited AI practices, detailing specific types of AI systems that are banned due to their potential harm to individuals or society. Key areas include:
Subliminal Techniques: AI systems that deploy subliminal techniques beyond a person’s consciousness, or manipulative techniques that distort behavior, are prohibited. Such systems impair an individual’s ability to make informed decisions, leading to significant harm.
Exploitation of Vulnerabilities: The use of AI systems that exploit the vulnerabilities of individuals due to their age, disability, or a specific social or economic situation in order to distort their behavior is not allowed.
Social Scoring: AI systems that evaluate or classify individuals based on social behavior or personal characteristics, leading to detrimental treatment in unrelated contexts, are banned.
Risk Assessment for Criminal Offenses: AI systems used to assess the risk of individuals committing crimes based solely on profiling or personality traits are prohibited. Exceptions exist for systems that support human assessments based on objective facts.
Facial Recognition: The creation or expansion of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage is not permitted.
Emotion Recognition: The use of AI systems to infer the emotions of individuals in the workplace or in education institutions is banned, except where intended for medical or safety reasons.
Chapter III – High-Risk AI Systems
Chapter III of the EU AI Act outlines the regulations regarding high-risk AI systems. It is structured into several sections, detailing the classification, requirements, and obligations related to these systems.
Article 6 – Classification of High-Risk AI Systems: This article defines what constitutes a high-risk AI system, emphasizing that such systems must either be intended for use as safety components of products covered by Union harmonization legislation or fall into specific categories listed in Annex III. There are also provisions for AI systems that may not initially appear high-risk but can be classified as such under certain conditions.
Annex III – Lists the specific use cases that make an AI system high-risk, covering areas such as biometrics, critical infrastructure, education and vocational training, employment and worker management, access to essential private and public services, law enforcement, migration and border control, and the administration of justice and democratic processes.
Articles 8–15 – Requirements for High-Risk AI Systems: High-risk AI systems must comply with specific requirements, including the establishment of a risk management system (Article 9) that is documented and maintained throughout the system's lifecycle. These articles emphasize risk assessment, transparency, and the need for technical documentation to demonstrate compliance.
Article 10 – Data Governance: Providers must ensure that data used in high-risk AI systems meets quality standards, is representative, and is managed appropriately throughout the system's lifecycle.
Article 11 – Technical Documentation: Providers are required to create and maintain technical documentation that demonstrates compliance with the relevant regulations. This documentation must include detailed descriptions of the AI system, its development processes, and validation results.
Articles 16–27 – Obligations of Providers and Deployers: Providers are held accountable for ensuring their systems meet all regulatory requirements and must have quality management systems in place. Deployers, on the other hand, must use the systems according to the provided instructions and monitor them for compliance.
Specifically, Article 17 requires providers of high-risk AI systems to put a quality management system in place, and Article 43 requires high-risk AI systems to undergo a conformity assessment procedure before being placed on the market.
Chapter IV – Transparency
Chapter IV of the EU AI Act consists of Article 50, which focuses on the transparency obligations for providers and deployers of certain AI systems. Here's a summary of its key points:
Transparency Obligations: Providers must ensure that AI systems intended for direct interaction with natural persons inform them that they are engaging with an AI system, unless it’s obvious. This is aimed at enhancing awareness and understanding of AI system interactions.
Synthetic Content Disclosure: Providers of AI systems that generate synthetic content (audio, images, video, or text) must ensure that outputs are marked as artificially generated or manipulated. This includes ensuring the effectiveness and reliability of technical solutions for marking content.
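The Act does not mandate a single marking technique (its recitals mention watermarks, metadata identification and other methods, and robustness standards are still evolving). Purely as a toy sketch of what a machine-readable marker can look like, the following Python snippet embeds a provenance note in PNG metadata using Pillow; the key names and values are hypothetical, and a production system would rely on robust watermarking or a provenance standard such as C2PA rather than editable metadata alone.

```python
# Toy illustration only: embeds a machine-readable "AI generated" marker in
# PNG metadata. Editable metadata alone would likely not meet the Act's bar
# for effective, robust and reliable marking; see watermarking/C2PA instead.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_marker(img: Image.Image, path: str) -> None:
    """Save a generated image with tEXt chunks flagging it as AI output."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")           # hypothetical key
    meta.add_text("generator", "example-model-v1")  # hypothetical identifier
    img.save(path, pnginfo=meta)

# Usage sketch: mark a (here, blank) generated image before distribution.
save_with_ai_marker(Image.new("RGB", (64, 64)), "output.png")
print(Image.open("output.png").text)  # {'ai_generated': 'true', ...}
```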
Deep Fakes and Public Information: Deployers of AI systems that create or manipulate deep fakes must disclose this information unless the use is authorized by law for specific legal purposes. For text published on public interest matters, similar transparency is required.
Information Accessibility: The information regarding AI interactions and generated content must be provided to the natural persons concerned in a clear and distinguishable manner, at the latest at the time of the first interaction or exposure.
Compliance with Other Laws: The chapter emphasizes that these obligations do not override existing transparency requirements under Union or national law.
Encouragement for Codes of Practice: The AI Office is tasked with promoting and facilitating the development of codes of practice at the Union level to aid in the effective implementation of transparency obligations.
Chapter V – General-Purpose AI Models
Chapter V of the EU AI Act focuses on General Purpose AI Models and outlines the classification, obligations, and procedures related to these models, particularly in the context of systemic risk.
Article 51 – Classification Rules: General-purpose AI models are classified as having systemic risk if they possess high-impact capabilities, evaluated using appropriate technical tools and methodologies. A model is presumed to have high-impact capabilities when the cumulative amount of computation used for its training exceeds 10^25 floating-point operations (FLOPs).
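To make the compute-based presumption concrete, here is a minimal Python sketch; only the 10^25 FLOP threshold comes from Article 51(2), while the function name and example values are illustrative.

```python
# Article 51(2): a general-purpose AI model is presumed to have high-impact
# capabilities when the cumulative compute used for its training exceeds
# 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD: float = 1e25

def presumed_high_impact(training_flops: float) -> bool:
    """Return True if the compute-based presumption of Article 51(2) applies.

    Note this is only a presumption: the Commission may also designate a
    model as posing systemic risk on other criteria, and a provider may
    present arguments against the classification (Article 52).
    """
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical examples.
print(presumed_high_impact(4e25))  # True: above the threshold
print(presumed_high_impact(9e24))  # False: below the threshold
```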
Article 52 – Procedure: Providers must notify the Commission if their model meets the criteria for systemic risk. They can also present arguments if they believe their model should not be classified as posing systemic risks. The Commission can designate a model as presenting systemic risks based on established criteria.
Article 53 – Obligations for Providers: Providers of general-purpose AI models must maintain detailed technical documentation that includes information about the model's capabilities, limitations, and training processes. They are required to ensure compliance with Union law regarding copyright and provide summaries about the training content.
Article 55 – Obligations for Providers of Models with Systemic Risk: Providers of general-purpose AI models classified as having systemic risk must perform model evaluations using standardized protocols, document adversarial testing, and ensure cybersecurity protections are in place.
Article 56 – Codes of Practice: The AI Office will encourage the development of codes of practice at the Union level to ensure compliance with the obligations laid out in this chapter. These codes will address specific requirements and foster voluntary application.
Chapter VI – Measures in Support of Innovation
Chapter VI of the EU AI Act outlines “Measures in Support of Innovation,” focusing on the establishment of AI regulatory sandboxes by Member States. Here are the key points:
Establishment of Sandboxes (Article 57): Member States are required to ensure that their competent authorities establish at least one AI regulatory sandbox at the national level by August 2, 2026. This sandbox may also be created in collaboration with other Member States’ authorities. Existing sandboxes can be utilized if they provide equivalent national coverage.
Additional Sandboxes: Member States may also create regional or local AI regulatory sandboxes, or participate in joint initiatives with other Member States.
Role of the European Data Protection Supervisor: This supervisor can establish a separate AI regulatory sandbox for Union institutions, bodies, offices, and agencies and will exercise roles similar to national competent authorities in this context.
Resource Allocation: Member States must allocate sufficient resources to their competent authorities to effectively establish and operate these sandboxes. They are encouraged to collaborate with other relevant authorities and involve various actors within the AI ecosystem.
Assistance from the Commission: The Commission may provide technical support, advice, and tools to assist in the establishment and functioning of these AI regulatory sandboxes.
Chapter VII – Governance
Chapter VII of the EU AI Act focuses on the governance of AI within the European Union. It establishes the framework for the oversight and management of AI systems to ensure compliance with the regulations set forth.
AI Office: The Commission is tasked with developing expertise and capabilities in the field of AI through the establishment of the AI Office. Member States are required to support the AI Office in its responsibilities.
European Artificial Intelligence Board: A Board is created, comprising representatives from each Member State, with the European Data Protection Supervisor participating as an observer. The Board’s role includes coordinating the implementation of the regulation, sharing best practices, and providing advice to the Commission and Member States.
Tasks and Responsibilities: The Board is tasked with facilitating consistent application of the regulation, coordinating authorities, collecting technical expertise, advising on best practices, and supporting the development of common criteria. It will also monitor the implementation of AI regulatory sandboxes and cooperate with other Union bodies and relevant third-country authorities.
Advisory Forum: An advisory forum will be established to provide technical expertise and advice to the Board and the Commission, ensuring balanced representation from various stakeholders, including industry, academia, and civil society.
Resources and Support: Member States are expected to report on the adequacy of their resources for implementing the regulation. The Commission will facilitate the exchange of experiences and support national authorities in their tasks.
Chapter VIII – EU Database for High-Risk AI Systems
Chapter VIII of the EU AI Act outlines the establishment and maintenance of an EU database for high-risk AI systems. Here’s a summary of the key points:
Database Establishment: The Commission, in collaboration with Member States, is tasked with setting up and maintaining an EU database that will contain information about high-risk AI systems as identified in Article 6(2).
Information Requirements: The database will include data that providers and deployers are required to submit. This data will cover details such as the identity of the provider, the AI system’s intended purpose, and its operational status.
Public Accessibility: Except for certain sensitive information related to law enforcement and immigration, the information in the database will be publicly accessible and made available in a user-friendly, machine-readable format.
Data Entry Responsibilities: Providers are responsible for entering specific information into the database, while public authorities or agencies are responsible for entering information related to their deployments.
Data Protection: The database may contain personal data, but only as necessary for compliance with the regulation, ensuring that the privacy of individuals is respected.
Commission as Controller: The Commission will serve as the controller of this database, offering technical and administrative support to users and ensuring compliance with accessibility requirements.
Chapter IX – Post-Market Monitoring, Information Sharing and Market Surveillance
Post-Market Monitoring (Article 72): Providers of high-risk AI systems are required to establish a post-market monitoring system that actively collects and analyzes data on the performance of their systems throughout their lifecycle. This system must include a post-market monitoring plan, which outlines how the monitoring will be conducted.
Reporting Serious Incidents (Article 73): Providers must report any serious incidents involving their high-risk AI systems to the relevant market surveillance authorities within a specified timeframe. Initial reports may be incomplete, but follow-up reports must provide complete information. The providers are also responsible for investigating these incidents and taking corrective action.
Market Surveillance (Article 74): Market surveillance authorities are empowered to monitor compliance with the regulation. They can assess AI systems for risks and require corrective actions if non-compliance is found. The authorities must collaborate with each other and share information regarding any identified risks or non-compliance.
Information Sharing (Article 75): The chapter emphasizes cooperation among market surveillance authorities, including sharing technical documentation and data related to AI systems. This cooperation is essential for effective market surveillance and ensuring high standards of AI safety and compliance across member states.
Enforcement for General-Purpose AI (Article 88): The Commission has exclusive powers to supervise and enforce compliance with the obligations for providers of general-purpose AI models laid down in Chapter V, and it entrusts the implementation of this task to the AI Office.
Chapter X – Codes of Conduct and Guidelines
Chapter X of the EU AI Act is focused on the development and implementation of codes of conduct and guidelines related to artificial intelligence (AI) systems, particularly those that are not classified as high-risk. Here’s a summary of its key points:
Encouragement of Codes of Conduct: The AI Office and Member States are tasked with encouraging and facilitating the creation of codes of conduct that promote voluntary adherence to specific requirements applicable to AI systems, excluding high-risk ones. These codes aim to foster best practices within the industry.
Content of Codes of Conduct: The codes should include clear objectives and key performance indicators to measure their effectiveness. They should address various aspects, such as ethical guidelines, environmental sustainability, AI literacy, inclusive design, and the impact of AI on vulnerable groups.
Involvement of Stakeholders: The development of these codes may involve individual providers, organizations representing them, civil society organizations, and academia. The aim is to ensure that the codes are practical and tailored to the needs of different stakeholders, including small and medium-sized enterprises (SMEs).
Guidelines from the Commission: The Commission is responsible for creating guidelines to aid in the practical implementation of the regulation. These guidelines will cover various topics, including transparency obligations, prohibited practices, and the relationship of the regulation with existing Union laws.
Review and Adaptation: There is a provision for the review and adaptation of these codes of conduct to align with emerging standards and practices in AI.
Chapter XI – Delegation of Powers and Committee Procedure
Chapter XI of the EU AI Act outlines the delegation of power and committee procedures related to the regulation on artificial intelligence. The key points are as follows:
Exercise of Delegation (Article 97): The power to adopt delegated acts is conferred on the Commission for a period of five years starting from August 1, 2024. The delegation can be extended automatically unless opposed by the European Parliament or the Council. The Commission must consult experts designated by each Member State before adopting these acts.
Notification and Objection (Article 97): Once a delegated act is adopted, the Commission must notify the European Parliament and the Council. Such acts will only enter into force if neither body objects within three months of notification.
Committee Procedure (Article 98): The Commission will be assisted by a committee as defined by Regulation (EU) No 182/2011. This committee will aid in the implementation of the regulation and ensure that the process is transparent and accountable.
Overall, Chapter XI establishes a framework for the delegation of powers to the Commission, ensuring that relevant stakeholders are consulted and that there are checks and balances in the adoption of delegated acts related to AI regulation.
Chapter XII – Penalties
Chapter XII outlines a structured approach to penalizing non-compliance with AI regulations, emphasizing the need for penalties to be appropriate to the severity of the infringement while considering the operational context of the entities involved.
Establishment of Penalty Rules: Member States are required to establish rules on penalties and enforcement measures for violations of the regulation, ensuring that these penalties are effective, proportionate, and dissuasive. The rules should also consider the interests of small and medium-sized enterprises (SMEs).
Types of Penalties (a worked sketch follows this list):
- Non-compliance with the prohibition of certain AI practices can result in administrative fines of up to EUR 35 million or 7% of the total worldwide annual turnover for the preceding financial year, whichever is higher.
- Non-compliance with other operator or notified-body obligations can lead to fines of up to EUR 15 million or 3% of the total worldwide annual turnover, whichever is higher.
- Supplying incorrect or misleading information to authorities can lead to fines of up to EUR 7.5 million or 1% of the total worldwide annual turnover, whichever is higher.
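To illustrate how these caps interact, here is a minimal Python sketch; the tier labels are hypothetical shorthand, and note that for SMEs, including start-ups, Article 99(6) applies whichever of the two amounts is lower rather than higher.

```python
# Illustrative computation of the administrative fine ceilings in Article 99.
# Each tier caps the fine at a fixed amount or a percentage of total worldwide
# annual turnover for the preceding financial year, whichever is higher.
FINE_CAPS = {
    "prohibited_practices":   (35_000_000, 0.07),  # Article 99(3)
    "other_obligations":      (15_000_000, 0.03),  # Article 99(4)
    "misleading_information": (7_500_000,  0.01),  # Article 99(5)
}

def max_fine_eur(tier: str, worldwide_annual_turnover_eur: float,
                 is_sme: bool = False) -> float:
    """Return the fine ceiling for a given tier and turnover.

    For SMEs, including start-ups, the ceiling is whichever amount is
    LOWER (Article 99(6)); otherwise it is whichever is higher.
    """
    fixed_cap, pct = FINE_CAPS[tier]
    turnover_cap = pct * worldwide_annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Hypothetical example: a provider with EUR 1 billion turnover that breaches
# a prohibition faces a ceiling of max(35M, 70M) = EUR 70 million.
print(max_fine_eur("prohibited_practices", 1_000_000_000))  # 70000000.0
```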
Consideration of Circumstances: When determining penalties, authorities must consider various factors, including the nature and gravity of the infringement, the size and economic status of the offender, and any mitigating actions taken to address the violation.
Reporting and Accountability: Member States must report to the Commission annually regarding the administrative fines imposed and related legal proceedings.
Fines for Union Institutions: The European Data Protection Supervisor can impose fines on Union institutions for non-compliance, with specific caps set at EUR 1.5 million for violations of prohibited practices and EUR 750,000 for other infringements.
Chapter XIII – Final Provisions
Chapter XIII of the EU AI Act emphasizes the importance of aligning existing legislation with the new AI regulation, establishes a framework for ongoing evaluation and review, and details the procedural steps for the regulation’s implementation and enforcement across the EU.
- Amendment of Existing Regulations: The chapter includes provisions amending various existing regulations and directives to align them with the new rules established by the AI Regulation (Regulation (EU) 2024/1689), including references to specific articles in other regulations concerning AI systems that are safety components.
- Evaluation and Review: The Commission is tasked with assessing the need for amendments to the list of high-risk AI systems and the list of prohibited AI practices at least once a year after the Regulation enters into force. In addition, every four years the Commission must evaluate the functioning of the AI Regulation, including the effectiveness of its enforcement and supervision mechanisms.
- Guidelines Development: The Commission is responsible for developing guidelines to facilitate the practical implementation of the Regulation, ensuring these guidelines consider the needs of small and medium-sized enterprises (SMEs) and the specific circumstances of different sectors.
- Entry into Force and Application: The Regulation entered into force on the twentieth day after its publication in the Official Journal of the European Union, with a staggered application timeline. Notably, Chapters I and II apply from February 2, 2025, the governance and general-purpose AI provisions apply from August 2, 2025, and most remaining provisions apply from August 2, 2026.
- Binding Nature: The Regulation is binding in its entirety and directly applicable in all Member States, ensuring uniform application of the AI rules across the EU.