On 23 March 2023, the Department for Science, Innovation and Technology published a White Paper, “A pro-innovation approach to AI regulation”. This sets out the UK Government’s intended approach to regulating Artificial Intelligence (AI).
Aims of this framework
The White Paper proposes a pro-innovation approach to AI regulation. It sets out three objectives which the Government seeks to achieve through this framework, outlined within Part 3 (1) as:
- Drive growth and prosperity
- Increase public trust in AI
- Strengthen the UK’s position as a global leader in AI
Proposed regulatory framework
Part 3 (2) of the UK approach explains that the paper adopts a principles-based framework. It sets out five cross-sectoral principles which existing regulators will be expected to implement:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
Part 3, Section 50 states that regulators will be expected to apply the principles proportionately to address the risks posed by AI within their remits.
Regulating the use of AI
Part 3.2.2 states that this framework is context-specific, meaning the regime will regulate based on the outcomes AI is likely to generate in particular applications. Classifying all AI applications within a certain sector as “high risk” could be detrimental, because some uses in that sector may in fact be low risk and would therefore be incorrectly categorised. This has the potential to limit innovation.
The UK framework will be built around four key elements. These are:
- Defining AI based on its unique characteristics to support regulator coordination
- Adopting a context-specific approach
- Providing a set of cross-sectoral principles to guide regulator responses to AI risks and opportunities. The principles clarify the government’s expectations for responsible AI and describe good governance at all stages of the AI life cycle.
- Delivering new central functions to support regulators to deliver the AI regulatory framework, maximising the benefits of an iterative approach and ensuring that the framework is coherent
Model for applying the principles
Although the Government has currently issued these principles on a non-statutory basis, it notes clearly under section 3.2.4 that, when parliamentary time allows, it will want to introduce legislation imposing a duty on regulators to apply the principles. However, if the assessment of the current non-statutory framework suggests that a statutory duty is unnecessary, one will not be implemented.
Individual regulators applying the principles
Part 3.2.5 of the UK pro-innovation approach recognises that in some sectors AI principles will already exist. The framework therefore gives sectors the ability to develop specific principles which suit their domain. Regulators will need to monitor the implementation of the framework within their domain.
Implementation Principles
Annex A sets out the proposed implementation approaches which regulators may wish to take. These are set out in line with the five principles proposed in Part 3 (2).