Australia’s interim response to the consultation paper
On 17 January 2024, the Australian Government published its interim response to the consultation paper.
The consultation conducted in 2023 found that the current regulatory framework does not adequately address the risks presented by AI. The government is working to strengthen laws in areas that will help address these risks; examples include the implementation of privacy law reforms and the review of the Online Safety Act 2021.
In considering the right regulatory approach to implementing safeguards, the government's main aims will be to ensure that the development of AI is legitimate, that high-risk systems are safe, and that low-risk systems can continue to flourish. The government will also pursue an AI investment plan to support the development of AI technologies across the economy.
Submissions to the 2023 consultation raised the following concerns: low trust, a lack of skills, inadequate IT infrastructure, financial barriers and regulatory uncertainty.
Despite these concerns, Australians also see the opportunities of AI. AI systems are helping to analyse medical images, optimise engineering designs, and better forecast and manage natural emergencies. Submissions identified that AI has the potential to bring transformative benefits to the way Australians live and work. AI can create new jobs and benefit consumers; it can change the way we learn, power new industries, boost productivity, uplift healthcare and facilitate a smooth transition to net zero.
Risks identified by the submissions
Many of the risks identified are not new; they include risks that existing safeguards have not properly mitigated, such as bias. Among them, however, are new risks for which no regulatory safeguards are in place to limit harm. The risks identified were:
- Technical risks: the outputs of AI systems can carry technical limitations that result in unfair outcomes.
- Unpredictability and opacity: opaque systems can make it difficult to identify harms.
- Domain-specific risks: new risks can arise when AI interacts with harms that already exist in specific domains.
- Systemic risks: systemic risks may arise from emerging AI developments; submissions cited frontier models, which can produce unpredictable harms.
- Unforeseen risks: AI is evolving at a pace that will pose unforeseen risks, and submissions called for regulatory approaches able to respond to them.
What Australians want done
Nearly all submissions called on the government to act to prevent the harms of AI, supporting the establishment of regulatory safeguards. Pathways include improving existing laws as well as establishing new ones. Respondents also felt that low-risk AI systems should not be subject to strict regulation.
Regulatory Action
1. Ex-ante laws:
Ex-ante regulation intervenes to limit harms before they occur. A range of preventive interventions was proposed, such as:
- testing, including internal and external testing before the release of an AI system and ongoing auditing requirements
- transparency, including requiring generative AI systems to incorporate digital labelling or ‘watermarks’ so that AI-generated content is identifiable (see the sketch after this list)
- accountability, including mandating ‘human-in-the-loop’ requirements where critical points of AI decision-making have human oversight to mitigate AI misalignment with human objectives, or introducing licensing schemes for high-risk AI development.
- introducing outright bans of AI uses that present unacceptable risks – suggestions in submissions included behavioural manipulation, social scoring and real-time widescale facial recognition.
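To make the digital labelling idea concrete, the following is a minimal, hypothetical sketch of attaching a machine-readable provenance label to generated content. The function names and metadata fields (label_output, ai_generated, content_hash and so on) are illustrative assumptions, not part of the interim response or any standard it references.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_output(text: str, model_name: str) -> dict:
    """Wrap generated text in a simple, machine-readable provenance record."""
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "generator": model_name,
            "created_at": datetime.now(timezone.utc).isoformat(),
            # The hash lets a verifier detect whether the content was
            # altered after the label was attached.
            "content_hash": hashlib.sha256(text.encode()).hexdigest(),
        },
    }

def is_labelled_ai_content(record: dict) -> bool:
    """Check the label and confirm the content still matches its hash."""
    prov = record.get("provenance", {})
    expected = hashlib.sha256(record.get("content", "").encode()).hexdigest()
    return bool(prov.get("ai_generated")) and prov.get("content_hash") == expected

record = label_output("A generated paragraph.", "example-model-v1")
print(json.dumps(record["provenance"], indent=2))
print(is_labelled_ai_content(record))  # True
```

In practice, labelling schemes range from visible notices to cryptographic watermarks embedded in the content itself; this sketch only illustrates the simpler metadata-tagging end of that spectrum.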
2. Risk-based approach:
Under this approach, AI development and applications are subject to regulatory requirements according to the level of risk they pose, allowing low-risk systems to operate freely while targeting regulation at high-risk AI systems (a minimal sketch of this tiering appears after the lists below).
Benefits of a risk-based approach are that it would:
- give regulatory certainty by categorising risks and obligations
- minimise compliance costs for businesses that do not develop or use high-risk AI
- incorporate a well-defined ‘menu’ of risk-management options that could be imposed
- balance the costs of regulatory burden with the value of risk reduction
- be flexible and responsive as technology develops
Limitations of a risk-based approach:
- frameworks will not accurately and reliably predict and quantify risks
- context-specific risks will not be well captured by categorisation
- unpredictable risks will not be considered, particularly for frontier models designed for general purpose application
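As a concrete illustration of the tiering idea, below is a hypothetical sketch mapping risk tiers to a ‘menu’ of obligations. The tier names, obligations and mapping are assumptions for illustration only; they are not the government's actual framework, which is still being developed.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

# Hypothetical 'menu' of obligations per tier; the tiers and obligation
# names are illustrative, not drawn from the interim response itself.
OBLIGATIONS = {
    RiskTier.LOW: [],  # low-risk systems continue largely unimpeded
    RiskTier.MEDIUM: ["transparency labelling"],
    RiskTier.HIGH: [
        "internal and external pre-release testing",
        "transparency labelling",
        "human-in-the-loop oversight",
        "ongoing auditing",
    ],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the regulatory obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.HIGH))
```

The design point is the one submissions highlight: obligations scale with assessed risk, so low-risk systems face little or no added compliance burden.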
3. Updating existing laws
A number of sector-specific laws are already in place that regulate the use of AI. Submissions identified 10 existing legislative frameworks that need updating to remain relevant in the context of AI. Examples include:
- uncertainties relating to whether individuals who use AI to generate deepfakes can be liable for misleading and deceptive conduct (competition and consumer law)
- whether AI models used by health and care organisations and practitioners could lead to clinical safety risks (health and privacy laws)
- the way creative content may be used to train generative AI models, including potential remedies for any infringement (copyright law).
4. Targeted but technology-neutral approach
There is a risk that regulatory frameworks tied too rigidly to the current state of technology will not apply as intended once AI technology advances. Submissions therefore called for regulatory action to be technology-neutral so that it keeps pace with those advances.
5. Security
Submissions called for greater security measures within AI systems. The 2023-2030 Australian Cyber Security Strategy will support this work.
6. Non-regulatory actions
Submissions also supported non-regulatory actions, including:
- establishing an expert AI advisory body
- considering regulatory sandboxes
- investing in domestic AI capability
- adopting and promoting international standards for AI development and deployment
- for the government to lead by example in its own safe and responsible use of AI.
Principles guiding the interim response
- Risk-based approach – the government will take a risk-based approach to support the use of AI.
- Balanced and proportionate – the government will avoid unnecessary burdens and will balance the need for innovation against the need to protect the community.
- Collaborative and transparent – the government will work with experts to develop a strong response.
- A trusted international partner – Australia will act consistently with the Bletchley Declaration and leverage its strong foundations.
- Community first – people and the community will be at the centre of any regulatory approach.
Preventing harms through testing, transparency and accountability
AI safety standard
The National AI Centre will work with industry to draw existing frameworks together into a risk-based AI safety standard. This will create a single source of guidance for businesses wanting to develop AI.
Temporary expert advisory group
The Department of Industry, Science and Resources established this group to support the government in the development of options for AI safeguards.
Clarifying and strengthening laws to safeguard citizens
Work is already underway to clarify and strengthen regulatory frameworks, including:
- developing new laws that will provide the Australian Communications and Media Authority with powers to combat online misinformation and disinformation
- an independent statutory review of the Online Safety Act 2021 to ensure that the legislative framework remains responsive to online harms
- working with the state and territory governments, industry, and the research community to develop a regulatory framework for automated vehicles in Australia, including interactions with work health and safety laws
- ongoing research and consultation by the Attorney-General’s Department and IP Australia, including through the AI Working Group of the IP Policy Group, on the implications of AI on copyright and broader IP law
- implementing the privacy law reforms
- strengthening Australia’s competition and consumer laws to address issues posed by digital platforms
- agreeing an Australian Framework for Generative AI in Schools by education ministers to guide the responsible and ethical use of generative AI tools in ways that benefit students, schools and society while protecting privacy, security and safety
- ensuring the security of AI tools, such as using principles like security by design, through the government’s work on the Cyber Security Strategy.
Global AI Safety Summit
In November 2023, Australia, alongside 27 other countries, signed the Bletchley Declaration at the first global AI Safety Summit. The declaration highlights that proactive, risk-based international collaboration is required to help ensure the safety of frontier AI, and it underscores Australia's commitment to working with international partners.