On May 3, 2023, Brazil introduced the Bill on the use of Artificial Intelligence [2338/2023] (‘the Bill’). The Bill is currently before the Brazilian Senate and has been subject to a number of amendments.
The aim of the Bill is to provide foundations, principles and guidelines for the development and application of artificial intelligence in Brazil. It also includes other measures governing the use of artificial intelligence.
A key part of the Bill is the categorization of risks deriving from artificial intelligence. The Bill establishes the requirement of a preliminary assessment and defines the applications classed as excessive risk and high-risk, which are subject to stricter control standards.
Right to Information
Article 7 of the Bill provides that people affected by artificial intelligence systems have the right to receive, prior to contracting or using the system, clear and adequate information on a number of aspects, including the categories of personal data used in the operation of the artificial intelligence system and the humans involved in the decision-making, forecasting or recommendation process.
Article 8 of the Bill provides that a person affected by an artificial intelligence system may request an explanation of a decision, forecast or recommendation, with information about the criteria and procedures used, as well as the main factors affecting a particular prediction or decision.
Importantly, Article 10 of the Bill provides a safeguard: where a decision, prediction or recommendation of an artificial intelligence system produces relevant legal effects or meaningfully impacts the interests of a person, that person may request human intervention or review (unless human intervention or review is proven impossible).
Preliminary Assessment
Article 13 requires a preliminary assessment to be carried out for every artificial intelligence system prior to its placing on the market or its use in service, in order to classify its degree of risk. If, as a result, the artificial intelligence system is classed as high risk, the documentation of the preliminary assessment must be registered and further controls are then put in place.
Article 17 of the Bill provides that artificial intelligence systems are considered to be high risk if they are used for the following purposes:
· application as safety devices in the management and operation of critical infrastructure, such as traffic control and water and electricity supply networks;
· vocational education and training, including systems for determining access to educational or vocational training institutions or for evaluating and monitoring students;
· recruitment, screening, filtering and evaluation of candidates; decisions on promotions or the termination of contractual employment relationships; the division of tasks; and the control and evaluation of the performance and behaviour of persons affected by such applications in the areas of employment, worker management and access to self-employment;
· evaluation of criteria for access to, eligibility for, or the concession, review, reduction or revocation of private and public services considered essential, including systems used to assess the eligibility of natural persons for public assistance and security services;
· assessment of the indebtedness of natural persons or the establishment of their credit rating;
· dispatching, or establishing priorities for, emergency response services, including firefighters and medical assistance;
· administration of justice, including systems that assist judicial authorities in the investigation of facts and the application of the law;
· autonomous vehicles, where their use may pose risks to the physical integrity of people;
· applications in the health area, including those intended to assist medical diagnoses and procedures;
· biometric identification systems;
· criminal investigation and public security, especially for individual risk assessments by the competent authorities to determine the risk of a person committing offences or reoffending, or the risk to potential victims of criminal offences, or to assess the personality traits and characteristics or past criminal behaviour of natural persons or groups;
· analytical study of crimes involving natural persons, allowing law enforcement to search large, complex data sets, related or unrelated, available from different sources or in different data formats, in order to identify patterns or discover hidden relationships in the data;
· investigation by administrative authorities to assess the credibility of evidence in the course of the investigation or prosecution of infringements, including to predict the occurrence or recurrence of an actual or potential infringement based on the profiling of natural persons; or
· migration management and border control.
If the system is classed as high risk as a result of the preliminary assessment, Article 22 of the Bill provides that an algorithmic impact assessment must be undertaken which must be notified to the regulatory authority.
The algorithmic impact assessment must be carried out by a professional, or team of professionals, with the technical and scientific knowledge and the legal expertise necessary to produce the report, and with functional independence.
Article 20 of the Bill also sets out specific governance measures and internal processes that operators of high-risk systems are required to implement. This includes additional documentation and testing requirements.
Excessive Risk Systems
Article 14 of the Bill defines as excessive risk the implementation and use of artificial intelligence systems:
· that employ subliminal techniques with the objective or effect of inducing natural persons to behave in a manner harmful or dangerous to their health or safety, or contrary to the foundations of the Bill; or
· that exploit any vulnerabilities of specific groups of natural persons, such as those associated with their age or physical or mental disability, so as to induce them to behave in a manner harmful to their health or safety, or contrary to the foundations of the Bill.
Article 16 of the Bill delegates the regulation of excessive risk artificial intelligence systems to the regulatory authority, which will also be required to maintain a database of systems identified as high risk or excessive risk based on the criteria in Article 18 of the Bill.
Article 31 of the Bill requires operators of artificial intelligence systems to notify the regulatory authority of any serious security incident, including where there is a risk to the life or physical integrity of persons, disruption of critical infrastructure operations, severe damage to property or the environment, or serious violations of fundamental rights, under the terms of the Bill. The notification must be made within a reasonable period of time.
Administrative Sanctions
Article 36 of the Bill sets out a number of sanctions for infringements of its provisions. These range from a simple fine, capped at fifty million reais per infraction, up to a total prohibition on the processing of certain databases.