On November 15, 2023, the Artificial Intelligence Research, Innovation, and Accountability Act of 2023 was introduced in the United States Senate. On July 31, 2024, the Committee on Commerce, Science, and Transportation ordered the bill to be reported with an amendment. The bill has not yet been enacted. It is bipartisan legislation that establishes a framework to enhance innovation in, and accountability for, artificial intelligence systems.
Contents
TITLE I—ARTIFICIAL INTELLIGENCE RESEARCH AND INNOVATION
Sec. 101 - Open Data Policy Amendments
Sec. 102 - Online Content Authenticity and Provenance Standards Research and Development
Sec. 103 - Standards for Detection of Emergent and Anomalous Behavior and AI-Generated Media
Sec. 104 - Comptroller General Study on Barriers and Best Practices to Usage of AI in Government
TITLE II—ARTIFICIAL INTELLIGENCE ACCOUNTABILITY
Sec. 201 - Definitions
Sec. 202 - Generative Artificial Intelligence Transparency
Sec. 203 - Transparency Reports for High-Impact Artificial Intelligence Systems
Sec. 204 - Recommendations to Federal Agencies for Risk Management of High-Impact Artificial Intelligence Systems
Sec. 205 - Office of Management and Budget Oversight of Recommendations to Agencies
Sec. 206 - Risk Management Assessment for Critical-Impact Artificial Intelligence Systems
Sec. 207 - Certification of Critical-Impact Artificial Intelligence Systems
Sec. 208 - Enforcement
Sec. 209 - Artificial Intelligence Consumer Education
Title I - Artificial Intelligence Research and Innovation
Sec. 101 - Open Data Policy Amendments
Section 101 of the “Artificial Intelligence Research, Innovation, and Accountability Act of 2023” focuses on amendments to the open data policy as it relates to artificial intelligence. It amends section 3502 of title 44, United States Code, by adding new definitions related to data assets, including:
- Data Model: Defined as a mathematical, economic, or statistical representation of a system or process used for making calculations and predictions through algorithms or AI systems.
- Artificial Intelligence System: Described as an engineered system that generates outputs such as predictions or decisions, designed to operate with varying levels of adaptability and autonomy using both machine and human inputs.
These amendments aim to clarify the terminology and framework surrounding data and AI systems, thereby facilitating better understanding and implementation of AI technologies in the context of open data policy.
Sec. 102 - Online Content Authenticity and Provenance Standards Research and Development
Section 102 of the “Artificial Intelligence Research, Innovation, and Accountability Act of 2023” focuses on the research and development of standards for online content authenticity and provenance. The Under Secretary of Commerce for Standards and Technology is tasked with carrying out research, beginning within 180 days of the Act’s enactment, to establish methods for verifying the authenticity and provenance of content generated by both humans and artificial intelligence systems.
Key elements of this research include:
- Secure Methods: Developing secure ways for human authors to append provenance statements to their content using unique credentials, watermarking, or other data-based approaches.
- Verification Methods: Creating methods for verifying authenticity and provenance statements, such as watermarking or classifiers that can distinguish AI-generated media.
- Display of Provenance: Establishing clear methods for displaying provenance information to end-users.
- Facilitating Technologies: Identifying necessary technologies or applications to create and verify content provenance information.
- Minimizing Burden: Ensuring that the developed technologies and methods are not unduly burdensome on content producers.
- Attribution for Creators: Exploring the use of provenance technology to enable proper attribution for content creators.
Additionally, it outlines the need to develop standards for methodologies and applications deemed ready for standardization, and it includes provisions for a pilot program to assess the feasibility of these technologies in federal agencies. The Under Secretary is also required to report to Congress on the progress and recommendations related to these initiatives within specified timeframes.
Sec. 103 - Standards for Detection of Emergent and Anomalous Behavior and AI-Generated Media
Section 103 of the “Artificial Intelligence Research, Innovation, and Accountability Act of 2023” focuses on establishing standards for the detection of emergent and anomalous behavior, as well as AI-generated media. It amends Section 22A(b)(1) of the National Institute of Standards and Technology Act (15 U.S.C. 278h–1(b)(1)) to include new responsibilities for NIST. Specifically, it adds provisions for best practices in detecting outputs generated by artificial intelligence systems, which include various forms of content such as text, audio, images, and videos. Additionally, it addresses methods for understanding and mitigating anomalous behavior in AI systems, ensuring that safeguards are in place against potentially adversarial or compromising situations.
Sec. 104 - Comptroller General Study on Barriers and Best Practices to Usage of AI in Government
Section 104 of the “Artificial Intelligence Research, Innovation, and Accountability Act of 2023” requires the Comptroller General of the United States to conduct a review of the legislative and regulatory barriers to the use of artificial intelligence (AI) systems in the Federal Government. This review is to be completed within one year of the Act’s enactment. The goals include identifying best practices for adopting AI systems in government operations, ensuring that the use of AI aligns with governmental needs, and establishing safety measures to manage risks associated with these systems.
The section outlines specific areas to be examined, including ensuring that AI systems are proportional to governmental needs, managing access based on the systems’ capabilities and risks, and ensuring appropriate data handling practices. Following the review, a report must be submitted to relevant congressional committees within two years, summarizing the findings and providing recommendations to eliminate barriers to the effective use of AI in government functions.
Title II - Artificial Intelligence Accountability
Sec. 201 - Definitions
Section 201 of the bill provides definitions relevant to AI accountability. It defines key terms such as “appropriate congressional committees,” “artificial intelligence system,” “covered agency,” “covered internet platform,” “critical-impact AI organization,” “critical-impact artificial intelligence system,” “deployer,” “developer,” “generative artificial intelligence system,” “high-impact artificial intelligence system,” “NIST recommendation,” “Secretary,” “significant risk,” “TEVV,” and “Under Secretary.”
These definitions set the groundwork for understanding the requirements and regulations that follow in the subsequent sections of the bill.
Sec. 202 - Generative Artificial Intelligence Transparency
Section 202 of the “Artificial Intelligence Research, Innovation, and Accountability Act of 2023” addresses the transparency requirements for generative artificial intelligence systems used on covered internet platforms.
- Prohibition and Disclosure: It establishes that it is unlawful to operate a covered internet platform that uses a generative artificial intelligence system unless users are informed of that use. The notice must be clear and conspicuous and provided before a user interacts with the generated content. Platforms may also give users the option to see this notice only upon their first interaction with such content.
- Enforcement Actions: If a covered internet platform fails to comply with these requirements, the Secretary of Commerce must notify the platform and order remedial actions. If the platform does not address the noncompliance within 15 days, the Secretary may take further enforcement actions under section 208.
- Effective Date: This section is set to take effect 180 days after the enactment of the Act.
Sec. 203 - Transparency Reports for High-Impact Artificial Intelligence Systems
Section 203 focuses on the requirements for transparency reporting related to high-impact AI systems. Here’s a summary of its key components:
- Transparency Reporting Requirement: Each deployer of a high-impact artificial intelligence system must submit an initial report to the Secretary before deploying the system and then annually thereafter. This report must detail the design and safety plans for the AI system.
- Updated Reports: If there are material changes in the purpose of use or the type of data processed by the AI system, the deployer is required to submit an updated report.
- Contents of the Reports: The reports must include information such as:
  - The AI system’s purpose and intended use cases.
  - Benefits and deployment context.
  - Description of data processed as inputs.
  - Metrics for evaluating performance and known limitations.
  - Processes and testing performed prior to deployment to ensure safety and effectiveness.
  - Any third-party AI systems or datasets relied upon for training or operating the system.
  - Post-deployment monitoring and user safeguards.
- Developer Obligations: Developers of high-impact AI systems are subject to transparency and reporting obligations similar to those of deployers.
- Considerations for Reporting: When preparing the reports, deployers and developers are encouraged to consider best practices from the risk management framework developed by the National Institute of Standards and Technology (NIST).
This section emphasizes the importance of transparency and accountability in the deployment of high-impact AI systems, ensuring that stakeholders are informed about their capabilities and risks.
Sec. 204 - Recommendations to Federal Agencies for Risk Management of High-Impact Artificial Intelligence Systems
Section 204 of the “Artificial Intelligence Research, Innovation, and Accountability Act of 2023” focuses on recommendations for Federal agencies regarding the risk management of high-impact artificial intelligence (AI) systems.
It defines a high-impact AI system as one that is deployed for purposes other than use by the Department of Defense or intelligence agencies and that significantly affects individuals’ access to critical services such as housing, employment, or healthcare, posing risks to rights and safety.
The Director of the National Institute of Standards and Technology (NIST) is required to develop sector-specific recommendations for federal agencies to ensure safe and responsible use of high-impact AI systems. These recommendations must be updated biennially to reflect changes in technology and use cases.
The section emphasizes the use of a voluntary risk management framework to provide guidance on establishing regulations, standards, and best practices to mitigate risks associated with high-impact AI systems.
Sec. 205 - Office of Management and Budget Oversight of Recommendations to Agencies
Section 205 outlines the oversight responsibilities of the Office of Management and Budget (OMB) regarding recommendations made by the National Institute of Standards and Technology (NIST) to federal agencies for managing artificial intelligence (AI) systems.
Key points include:
- Submission of Recommendations: The Under Secretary shall submit each NIST recommendation to the Director of the OMB, heads of covered agencies, and appropriate congressional committees within one year of the Act’s enactment.
- Agency Responses: Covered agency heads must respond within 90 days, indicating whether they intend to adopt the recommendations fully, partially, or reject them. Their response must also include a proposed timeline for implementation or reasons for refusal.
- Public Availability: The Director is responsible for making the NIST recommendations and agency responses available to the public at a reasonable cost.
- Annual Reporting: Each covered agency must provide an annual regulatory status report to the Director regarding compliance with NIST recommendations, which will be reviewed and commented on by the Director.
- Technical Assistance: The Under Secretary will assist agencies in implementing NIST recommendations.
- Regulation Review and Improvement: The OMB Administrator will develop and periodically revise performance indicators and measures for regulating AI systems.
Sec. 206 - Risk Management Assessment for Critical-Impact Artificial Intelligence Systems
Section 206 outlines the requirements for risk management assessments that critical-impact AI organizations must perform. Here are the key points:
- Assessment Requirement: Each critical-impact AI organization must conduct a risk management assessment not later than 30 days before making a critical-impact artificial intelligence system publicly available. They must also conduct updated assessments at least biennially while the system is available.
- Reporting: After completing a risk management assessment, the organization must submit a report to the Secretary within 90 days. This report must outline the assessment and be in a consistent format.
- Assessment Focus: The assessments should address various categories, including the organization’s policies for managing AI risks, the structure and capabilities of the AI system, and how the organization uses methodologies to analyze and monitor risks.
- Developer Obligations: Developers of critical-impact AI systems must provide necessary information to deployers to comply with these assessment requirements.
- Limitations on Secretary: The Secretary cannot prohibit a critical-impact AI organization from making a system available based solely on the review of its risk management assessment.
Overall, Section 206 emphasizes a structured approach to risk management for critical-impact AI systems, including regular assessments and transparency in reporting to ensure safety and compliance.
Sec. 207 - Certification of Critical-Impact Artificial Intelligence Systems
Section 207 focuses on the certification of critical-impact artificial intelligence systems. Here are the key points:
- Establishment of an Advisory Committee: The Secretary of Commerce is required to establish an advisory committee within 180 days of the Act’s enactment. This committee will provide advice on testing, evaluation, validation, and verification (TEVV) standards for critical-impact AI systems.
- Duties of the Advisory Committee: The committee’s responsibilities include recommending TEVV standards, reviewing prospective standards, and advising on the implementation plan for certification.
- Certification Plan: Within one year, the Secretary must create a 3-year implementation plan for the certification of critical-impact AI systems, which will include methodologies for gathering information and processes for establishing TEVV standards.
- TEVV Standards: The Secretary is responsible for issuing TEVV standards that ensure safe, secure, and transparent operations of critical-impact AI systems, and these standards must be updated regularly to reflect advancements in technology.
- Exemptions and Self-Certification: The Secretary has the authority to temporarily exempt systems from TEVV standards. Critical-impact AI organizations must certify their compliance with these standards and may not submit a certification they know to be misleading.
- Noncompliance Findings and Enforcement: If a critical-impact AI system is found to be noncompliant, the Secretary will notify the organization and may require remedial actions. If the organization fails to comply, further enforcement actions may be taken.
Overall, Section 207 aims to establish a framework for the certification and oversight of critical-impact artificial intelligence systems to ensure their safe and effective deployment.
Sec. 208 - Enforcement
Section 208 outlines the enforcement mechanisms for ensuring compliance with the provisions of the Act.
- General Enforcement Actions: If the Secretary discovers noncompliance by a deployer of a high-impact artificial intelligence system or a critical-impact AI organization, and determines that their remedial actions are insufficient, the Secretary is authorized to take further actions.
- Civil Penalties: The Secretary can impose civil penalties on entities that violate the Act or its regulations. The penalty is capped at the greater of $300,000 or twice the value of the transaction that led to the violation.
- Intentional Violations: If a violation is determined to be intentional, the Secretary may prohibit the organization from deploying a critical-impact artificial intelligence system, in addition to any civil penalties.
- Factors for Civil Penalties: The Secretary may establish standards for civil penalties based on the severity of the violation, culpability of the violator, and mitigating factors like cooperation with the Secretary.
- Civil Actions: The Attorney General may bring a civil action in a U.S. district court to enforce compliance, seeking to enjoin violations or collect penalties.
- Protection of Sensitive Information: The section clarifies that developers of critical-impact artificial intelligence systems are not required to disclose trade secrets or other protected information in the enforcement process.
Sec. 209 - Artificial Intelligence Consumer Education
Section 209 establishes a working group focused on responsible education efforts regarding AI systems. Here are the key points:
- Establishment: The Secretary of Commerce is required to establish the working group within 180 days of the enactment of the Act.
- Membership: The working group will consist of up to 15 individuals with expertise in AI, including representatives from various sectors such as education, consumer advocacy, public health, marketing, and technology.
- Duties: The group will identify recommended education programs that the industry can voluntarily implement to inform consumers and stakeholders about AI systems as they become available or widely used. Additionally, they will submit a report to Congress and make it available to the public.
- Considerations: The working group will consider various topics, including the intent, capabilities, and limitations of AI systems; use cases that improve government efficiency; consumer interaction methods; and safety features.
- Consultation: The Secretary will consult with the Chair of the Federal Trade Commission regarding the group’s recommendations.
- Termination: The working group will dissolve two years after the enactment of the Act.
This section emphasizes the importance of consumer education in the context of AI advancements and aims to facilitate informed public engagement with these technologies.