In June 2023, the Australian Government published its consultation paper on “Safe and Responsible AI in Australia.”
The purpose of the paper was to explore the governance mechanisms surrounding AI in Australia and outline potential options for future regulation. With this in mind, the paper seeks feedback and contributions to aid the Government in its future plans for development.
Section 1.1 – Scope of this paper
This section states that the paper does the following:
- provide an overview of existing domestic governance and Australia’s broader regulatory framework
- provide an overview of recent (and ongoing) international developments
- seek feedback on whether further governance and regulatory responses are needed in Australia.
In addition, the paper aims to identify potential gaps in the domestic governance landscape to assist the development of AI.
It does not seek to catalogue every issue relating to AI; instead, it focuses on governance mechanisms.
Section 2 – Opportunities and Challenges
Section 2.1 comments on the opportunities for the deployment of AI to improve economic and social outcomes, while Section 2.2 focuses on the challenges that accompany the increased application of AI. Section 2.2 places a specific focus on algorithmic bias as one of the major risks of AI: to avoid unwanted bias, the paper outlines the need for developers to design, test and validate their systems to correct for it. The paper then moves on to discuss further risks that may be associated with AI.
Section 3 – Domestic and International Landscape
Section 3.1 covers the domestic environment and the current regulatory landscape for AI. The paper notes the general regulations that currently apply to AI, such as data protection and privacy law. Regulators will consider how their existing frameworks mitigate the risks of AI and may issue further guidance to narrow any gaps, as has already occurred under the Online Safety Act 2021.
The paper notes that it is not the intention of the consultation to consolidate sector-specific regulations: rules suitable for one sector may not be suitable for another. Portfolios will therefore continue to develop governance specific to their areas.
This chapter also highlights the introduction of the Responsible AI Network (RAIN) by the National AI Centre, which is centred on six pillars: law, standards, principles, governance, leadership and technology.
Section 3.2 looks at international developments in AI governance, surveying the approaches taken by jurisdictions including the EU, USA, UK, China, New Zealand, Singapore, Thailand, Italy and Indonesia.
Section 4 – Managing the potential risks of AI
Australia’s current approach includes a combination of the following:
- a broad set of general regulations that are mainly technology neutral (for example, consumer protection, online safety, privacy and criminal law)
- sector-specific regulation (for example, therapeutic goods, financial services, food safety and motor vehicle safety)
- voluntary or self-regulation initiatives such as ethical principles for AI that provide guidance to businesses and governments to responsibly design, develop and implement AI.
The hope is that the consultation paper and the responses to it will help Australia avoid a piecemeal regulatory environment, which would only restrict development.
Any governance measures adopted by Australia will be guided by the following:
- ensure there are appropriate safeguards, especially for high-risk applications of AI and ADM
- provide greater certainty and make it easier for businesses to confidently invest in AI-enabled innovations and ADM activities and engage in these activities responsibly.
The consultation paper lists the following possible governance measures:
- Regulations – new AI-specific laws could create binding obligations that can be legally enforced and provide certainty.
- Industry self-regulation – industry formulates its own rules through codes of conduct or voluntary schemes; often quick to implement.
- Regulatory principles – outline when and how policymakers should regulate.
- Regulator collaboration – greater collaboration and information sharing among regulators can reduce the compliance burden that different regulators place on the same regulated entities.
- Governance/advisory bodies – bodies and platforms are being established to support AI governance outcomes, with roles such as providing advice; for example, Australia’s Responsible AI Network.
- Enabling regulatory levers – regulations can be designed to facilitate emerging technologies rather than hinder innovation, for example through regulatory sandboxes.
- Technical standards – mandatory technical requirements, led by experts, to improve consistency and international trade.
- Assurance infrastructure – measures to test and verify that an AI system meets certain standards.
- Policies guiding the operations of government – these can increase awareness of government expectations.
- Transparency/consumer information requirements – initiatives such as AI impact statements can help inform the public about the risks associated with AI.
- Bans – the government can prohibit an activity by law.
- Public education – non-regulatory options that influence and encourage certain behaviour by increasing awareness.
- By-design considerations – becoming popular as preventative mechanisms to ensure the appropriate design of AI and other digital systems.
- Risk-management approach – this could guide the implementation of any of the above options; many jurisdictions, such as the EU, have adopted this approach.
Section 5 – How to get involved
This section welcomes contributions from the public to help the Australian Government consider regulatory responses to AI.