On 12 July 2024, the EU AI Act (Regulation (EU) 2024/1689) was published in the Official Journal of the European Union, and on 1 August 2024 it entered into force. The Act establishes the first comprehensive horizontal legal framework for the regulation of AI in the EU and has direct effect in each Member State.
A crucial aspect of the Regulation is its risk-based approach, which categorises AI systems by risk level: unacceptable risk (prohibited practices), high risk, and low risk. Each category carries a specific set of requirements with which the system must comply. The prohibited practices are set out in Chapter II, and the high-risk regime in Chapter III.
The main purpose of the Act is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy AI.
High-Risk AI Systems
Chapter III, Article 6 sets out the classification rules for high-risk AI systems. An AI system shall be classified as high-risk where both of the following conditions are fulfilled:
- The AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;
- The product whose safety component pursuant to point (a) is the AI system, or the AI system as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.
Annex I provides an in-depth description of the Union harmonisation legislation. The two-limb test is sketched below.
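For illustration only, the two cumulative conditions of Article 6(1) can be expressed as a simple boolean check. The data structure and field names below are hypothetical, invented for this sketch; the legal test turns on Annex I and the applicable conformity assessment procedures, not on any software flag.

```python
# Hypothetical sketch of the two cumulative Article 6(1) conditions.
from dataclasses import dataclass

@dataclass
class AISystem:
    is_safety_component: bool               # used as a safety component of a product
    is_product_itself: bool                 # the AI system is itself the product
    covered_by_annex_i: bool                # product covered by Annex I harmonisation legislation
    requires_third_party_assessment: bool   # third-party conformity assessment required

def is_high_risk(system: AISystem) -> bool:
    """Both Article 6(1) conditions must hold for the classification to apply."""
    condition_a = (system.is_safety_component or system.is_product_itself) \
        and system.covered_by_annex_i
    condition_b = system.requires_third_party_assessment
    return condition_a and condition_b

# e.g. an AI braking component in a machine covered by Annex I legislation
print(is_high_risk(AISystem(True, False, True, True)))  # True
```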
Section 2, Article 8 states that any AI system falling within the high-risk category must comply with the requirements laid down in that Section, and requires providers to ensure that their product is fully compliant with all applicable requirements under the relevant Union harmonisation legislation.
In addition, Article 9(1) requires a risk management system to be established, implemented, documented and maintained throughout the entire life cycle of the high-risk AI system. This must be subject to regular systematic review.
Providers of high-risk AI systems are also subject to a number of obligations under Article 16. One notable requirement, discussed further in Article 17, is the quality management system, which ensures compliance with the above requirements.
Transparency
Article 50 provides that AI systems which pose a risk of impersonation or deception are subject to information and transparency requirements. The primary aim of these transparency obligations is to ensure that users are aware when content is artificially generated. Providers of AI systems that generate synthetic content at scale are required to implement reliable, effective and robust techniques so that users can detect that the output is AI-generated rather than human-made.
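For illustration only, the sketch below shows one very simplified way a provider might attach a machine-readable provenance record to generated output. The schema, function name and comment-style marker are invented for this example; the Regulation does not prescribe a specific format, and real deployments would rely on robust techniques such as watermarking or signed provenance manifests.

```python
# Minimal, hypothetical sketch of machine-readable labelling of synthetic content.
import json
from datetime import datetime, timezone

def label_synthetic_content(content: str, model_id: str) -> str:
    """Attach an invented machine-readable provenance record to generated content."""
    provenance = {
        "ai_generated": True,
        "generator": model_id,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    # Real deployments would use robust techniques such as watermarking or
    # C2PA-style signed manifests rather than a plain-text header.
    return f"<!-- provenance: {json.dumps(provenance)} -->\n{content}"

print(label_synthetic_content("Example article text...", "example-model-v1"))
```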
General Purpose AI
A general-purpose AI model is classified as a general-purpose AI model with systemic risk if it meets any of the following criteria (Article 51):
- It has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks;
- Based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (a) having regard to the criteria set out in Annex XIII.
Annex XIII outlines the criteria which the Commission must take into account when determining whether a model has capabilities or an impact equivalent to those set out in Article 51. Article 51(2) further provides that a model is presumed to have high impact capabilities where the cumulative amount of computation used for its training, measured in floating point operations, is greater than 10^25.
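As a rough illustration of the compute presumption, the sketch below estimates cumulative training compute with the common 6 × parameters × tokens heuristic from the scaling literature (an assumption for this example, not part of the Regulation) and compares it against the 10^25 FLOP threshold.

```python
# Rough sketch of the Article 51(2) presumption: a model is presumed to have
# high impact capabilities when cumulative training compute exceeds 1e25 FLOPs.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # Article 51(2)

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate cumulative training compute with the ~6ND heuristic (assumption)."""
    return 6 * n_parameters * n_training_tokens

def presumed_high_impact(n_parameters: float, n_training_tokens: float) -> bool:
    return estimated_training_flops(n_parameters, n_training_tokens) > SYSTEMIC_RISK_FLOP_THRESHOLD

# e.g. a 1e12-parameter model trained on 2e12 tokens: ~1.2e25 FLOPs -> presumed
print(presumed_high_impact(1e12, 2e12))  # True
```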
Providers of general-purpose AI models must:
- Draw up and keep up to date the technical documentation of the model, including its training and testing process and the results of its evaluation, which shall contain the information set out in Annex XI, for the purpose of providing it to the AI Office and the national competent authorities;
- Draw up, keep up to date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets in accordance with Union and national law, the information shall:
(I) enable providers of AI systems to have a good understanding of the capabilities and limitations of the general-purpose AI model and to comply with their obligations pursuant to this Regulation; and
(II) contain, at a minimum, the elements set out in Annex XII. Annex XII provides details on the information which should be included in the general-purpose AI documentation;
- Put in place a policy to comply with Union law on copyright and related rights, and in particular to identify and comply with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Article 4(3) of Directive (EU) 2019/790;
- Draw up and make publicly available a sufficiently detailed summary about the content used for training of the general-purpose AI model, according to a template provided by the AI Office.
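For illustration only, the sketch below shows how a provider might organise this documentation as machine-readable metadata. The field names are hypothetical and only loosely track the obligations above; the binding lists of required elements are Annexes XI and XII themselves.

```python
# Hypothetical sketch of Annex XI/XII-style technical documentation as metadata.
from dataclasses import dataclass, asdict
import json

@dataclass
class GPAIModelDocumentation:
    model_name: str
    provider: str
    training_process: str               # description of training and testing
    evaluation_results: dict            # results of model evaluation
    capabilities_and_limitations: str   # for downstream AI system providers
    training_data_summary: str          # public summary of training content
    copyright_policy_url: str           # policy under Directive (EU) 2019/790

doc = GPAIModelDocumentation(
    model_name="example-model-v1",
    provider="Example Provider Ltd",
    training_process="Pre-trained on ...; fine-tuned on ...",
    evaluation_results={"benchmark_x": 0.87},
    capabilities_and_limitations="Text generation; not suitable for ...",
    training_data_summary="See published summary (AI Office template).",
    copyright_policy_url="https://example.com/copyright-policy",
)
print(json.dumps(asdict(doc), indent=2))
```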
Codes of Practice
The AI Office must facilitate the drawing up of codes of practice. These codes must contain specific objectives and measures, and include specific performance indicators. The codes are expected to be ready by 2 May 2025.
Governance
Chapter VII outlines the governing bodies at Union level. Article 64 concerns the AI Office: the Commission shall develop Union expertise and capabilities in the field of AI through the AI Office.
Article 65 then establishes the European Artificial Intelligence Board. The Board is composed of one representative per Member State, and the European Data Protection Supervisor participates as an observer. The Board shall advise and assist the Commission and the Member States in order to facilitate the consistent and effective application of this Regulation.
Prohibited Practices
Chapter II, Article 5 outlines the prohibited AI practices. The following practices shall be prohibited:
A. the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm;
B. the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;
C. the placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following:
(I) detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts that are unrelated to the contexts in which the data was originally generated or collected;
(II) detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or disproportionate to their social behaviour or its gravity;
D. the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity;
E. the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
F. the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer the emotions of a natural person in the areas of the workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;
G. the placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data, or categorising of biometric data in the area of law enforcement;
H. the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives:
(I) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons;
(II) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;
(III) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.
Annex II provides the list of criminal offences referred to in point (III).
Enforcement
Section 3, Article 74 covers market surveillance and the control of AI systems in the Union market.
Regulation (EU) 2019/1020 shall apply to AI systems covered by this Regulation. For the purposes of the effective enforcement of this Regulation:
(a) any reference to an economic operator under Regulation (EU) 2019/1020 shall be understood as including all operators identified in Article 2(1) of this Regulation;
(b) any reference to a product under Regulation (EU) 2019/1020 shall be understood as including all AI systems falling within the scope of this Regulation.