On 6 February 2024, the then Conservative government published the policy paper "Implementing the UK's AI Regulatory Principles", which aimed to provide guidance on the considerations regulators may wish to take into account when implementing the UK's approach to AI. This builds on the White Paper published in 2023.

The previous White Paper established a set of principles outlining the key outcomes AI should be aligned with. To support this, the framework gives regulators the flexibility to interpret and apply these principles to the AI use cases that fall within their remit.

A phased approach is being taken in implementing this guidance:

Phase 1 - This initial guidance supports regulators by enabling them to properly consider the principles and to begin developing tools and guidance for their regulatory remits, and it provides considerations for regulators as they develop regulatory activities. This document is Phase 1.

Phase 2 - Will expand on this guidance to provide further detail, based on information from regulators about the gaps. This is expected in Summer 2024.

Phase 3 - Will involve collaborative working with regulators to identify areas for potential joint tools and guidance across regulatory remits.

Guidance on interpreting/applying the regulatory framework

This section outlines a number of ways regulators can consider the framework. Some of these include:

  1. Putting information in the public domain on the actions regulators are taking, e.g. AI plans.
  2. Drawing on guidance provided by other regulators.
  3. Maintaining an understanding of how the principles are being interpreted, which can help identify gaps and inform policy.

Additional recommendations are outlined within the chapter.

Applying Individual Principles:

This section outlines each principle and lists considerations for regulators.

Safety, Security and Robustness - six considerations are provided:

A. Communicate the level of safety-related risk - provide clear definitions of safety, security and robustness, and identify and act upon risks.

B. Provide tools/guidance for undertaking AI-related safety risk assessments and implementing appropriate mitigations.

C. Enable AI deployers and end users to make informed decisions about the safety of AI products and services.

D. Consider how the associated actors in the AI supply chain can regularly test or carry out due diligence - encourage actors to supply information on the functioning, resilience and security of a system throughout the supply chain.

E. Encourage AI developers and deployers to mitigate and build resilience to cybersecurity related risks throughout the AI life cycle.

F. Encourage AI developers and deployers to consider and mitigate potential malicious or criminal use of AI products and services.

Appropriate Transparency and explainability:

A. Emphasise that transparency and explainability help to foster trust in AI and can increase appropriate innovation and adoption - the degree of transparency could be responsive to risk.

B. Encourage AI developers and deployers to implement appropriate transparency and explainability measures – could include notifying end users they are engaging with AI.

C. AI developers could also be encouraged to provide appropriate transparency and explainability measures to AI deployers about the system being used to deliver a product or service - provide clear information on how the AI system works.

D. Consider asking or requiring AI developers and deployers to provide information to show how they are adhering to this principle, e.g. labelling AI-generated content.

E. Note that this principle is necessary for the proper implementation of the other four principles.

Fairness:

A. Provide descriptions of fairness that can be applied to the outcomes of AI systems used within the sector(s) they regulate - this requires regulators to provide a context-specific definition of fairness.

B. Tools and guidance could also consider relevant law, regulation, technical standards and assurance techniques.

C. Consider how AI systems in their remit are designed, developed, deployed and used in light of such descriptions of fairness - this can help mitigate the impact of biases and requires an inspection of when decisions are made by humans.

Accountability and Governance:

A. Consider whether regulators' regulatory powers allow them to place legal responsibility on actors, and issue guidance to deployers and developers to communicate these laws and who is held to account.

B. Where legal responsibility cannot be assigned to an actor in the supply chain that operates within a regulatory remit, encourage AI actors to ensure good governance over who they outsource to - further guidance is expected in this area.

C. Be clear that 'accountability' refers to the expectation that AI developers adopt appropriate governance measures to ensure the proper functioning of AI systems throughout the life cycle.

D. Place clear expectations for compliance, good practice and internal governance structures on AI developers and deployers within regulators’ remits.

E. Clarify the responsibilities of AI developers and deployers within regulators’ remits to demonstrate proper accountability and governance- helps to create legal certainty.

F. Foster accountability by promoting appropriate transparency and explainability - make it clear when AI systems are being used.

Contestability and Redress:

A. Ensure that AI developers and deployers act consistently with their statutory objectives, to provide clarity to users on which existing routes to contestability apply.

B. Highlight that appropriate transparency and explainability are relevant to the good implementation of this principle - transparency is key to making redress routes clear.

How to communicate progress on engagement with AI principles:

This framework is established to support regulators in complying with the five principles in a way which works best within their regulatory remits. Regulators are encouraged to publish an update outlining their approach and the steps they are taking to meet these principles. The framework then outlines aspects the update could include:

  1. Their current assessment of how AI applies within the scope of their regulatory responsibilities
  2. The steps they are already taking to adopt the AI principles set out in the White Paper, including concrete examples.
  3. A summary of guidance they have issued or plan to issue on how the principles interact with existing legislation and the steps industry should take in line with the principles.

These are just three of the eight recommendations provided.