Code of Practice Principles

On 31 January 2025, the Department for Science, Innovation and Technology (DSIT) published the policy paper “Code of Practice for the Cyber Security of AI”.

At the core of this paper are the voluntary Code of Practice principles. The principles outlined in the report are as follows.

Principle 1: Raise awareness of AI security threats and risks

This principle is centred on training programmes. Every organisation’s cyber security training programme shall include AI security content, which must be provided to staff members and tailored to their specific roles. All staff members must stay up to date with the latest security threats and vulnerabilities, which may be notified through a range of sources.

For Developers, training must be provided in secure coding and system design techniques specific to AI development, with a focus on minimising AI vulnerabilities.

Principle 2: Design your AI system for security as well as functionality and performance

When considering whether to develop an AI system, a System Operator or Developer must conduct a thorough assessment. If a Data Custodian is part of the organisation, they must be included in the assessment and in discussions of the data needs of the AI system.

If an AI system is created, special consideration must be given to ensuring it can withstand AI-specific attacks and security risks.

Principle 3: Evaluate the threats and manage the risks to your AI system

System Operators and Developers must ensure that all threats to their systems are thoroughly analysed. This involves threat modelling and risk management to assess any security risks that arise whenever a new setting is implemented or an update is applied to the AI system.

Any risks identified should be resolved through the implementation of controls, chosen with considerations such as cost in mind.

Any identified threats which cannot be resolved by Developers must be communicated to the System Operators so that they can threat model their AI systems. This should then be communicated to End-users.

Developers and System Operators should also continuously monitor and review their system infrastructure according to their risk appetite.

Principle 4: Enable human responsibility for AI systems

This includes sub-points 4.1-4.5. The principle is centred on ensuring humans take responsibility for the AI system.

When developing an AI system, Developers and System Operators must incorporate and enable human oversight. This includes making it easy for humans to assess the outputs. 

However, where human oversight is relied on as a risk control, Developers shall develop and maintain technical measures to reduce the residual risk.

In addition, Developers and System Operators must ensure that any security controls specified by the Data Custodian have been built into the AI system.

Principle 5: Identify, track and protect your assets

This includes sub-points 5.1-5.4.1. 

All parties involved (Developers, System Operators, Data Custodians) must maintain a comprehensive inventory of their assets. As part of broader software security practices, each party must have tools in place to manage version control and secure their assets.

Any disaster recovery plans must be tailored to account for attacks specific to the AI system.

In the context of data, each party must protect sensitive data, including training and test data, against unauthorised access.

When designing the AI model, checks must be applied to data and inputs; whenever revisions are made, these checks must be repeated.
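
By way of illustration only (the Code does not prescribe any tooling, and the schema and field names below are hypothetical), a repeatable data check of this kind might be sketched in Python as follows:

from typing import Any

# Hypothetical schema: each training record must carry a string "text" and an int "label".
EXPECTED_FIELDS = {"text": str, "label": int}

def validate_record(record: dict[str, Any]) -> bool:
    """Check a single record against the expected schema."""
    return all(
        field in record and isinstance(record[field], expected)
        for field, expected in EXPECTED_FIELDS.items()
    )

def validate_dataset(records: list[dict[str, Any]]) -> list[int]:
    """Return the indices of failing records; rerun this after every revision."""
    return [i for i, rec in enumerate(records) if not validate_record(rec)]

# Hypothetical usage: the second record fails because its fields have the wrong types.
bad = validate_dataset([{"text": "hello", "label": 1}, {"text": 42, "label": "x"}])
assert bad == [1]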

Principle 6: Secure your infrastructure

This involves points 6.1-6.6. 

There should be a thorough evaluation of the organisation’s access controls to identify appropriate measures for securing APIs, models and data.

Developers must create dedicated environments for development and model tuning activities. These must be backed by technical controls to ensure separation. 

A clear vulnerability disclosure plan must also be published, alongside an AI system incident management plan and an AI system recovery plan.

Principle 7: Secure your supply chain

This includes sub-points 7.1-7.4.

Developers must ensure they follow secure software supply chain processes for their AI models.

System Operators may choose to adopt models which are not well documented or secured. They must justify this decision in documentation, which must be accessible to End-users.

If they wish to update any AI systems, their intention must be communicated to End-users in an accessible way before any updates are applied.

Principle 8: Document your data, models and prompts

Developers must maintain a clear audit trail of their system’s design and post-deployment maintenance plans. This must include security-relevant information, such as sources and intended scope.

Any model components made available to other stakeholders must be provided with a cryptographic hash to allow them to verify the components’ authenticity.
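
As a sketch of what such verification could look like (the file name and published digest below are hypothetical examples, not anything specified by the Code), a recipient might recompute a SHA-256 hash and compare it with the one supplied:

import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical: digest published by the Developer alongside the component.
published = "3f2a9c..."
actual = sha256_of(Path("model.safetensors"))
if actual != published:
    raise ValueError("Component failed the integrity check; do not load it.")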

Any poisoned data must be documented, along with how it was obtained. More generally, all training data must be documented.

Principle 9: Conduct appropriate testing and evaluation

This includes sub-points 9.1-9.4.1. All AI models released to End-users must undergo testing as part of a security assessment process. This testing shall be conducted prior to any interaction with End-users.

To ensure unbiased results, Developers must ensure that any security testing is conducted by an independent tester with the relevant technical skills. Results must then be shared with System Operators to support their own testing procedures and evaluations.

Developers must evaluate model outputs to ensure System Operators and End-users cannot reverse engineer training data or non-public aspects of the AI model.

Principle 10: Communication and processes associated with End-users and Affected Entities

This includes sub-points 10.1-10.3. System Operators will be required to inform End-users where and how their data will be used, accessed and stored, and to provide sufficient support.

System Operators must also provide the necessary information about the AI model, including guidance on its use and the limitations of the AI system.

End-users must also be informed about any potential security threats to the AI models in a clear and accessible way. 

Following any cyber security attack, Developers and System Operators must support End-users in mitigating the effects of the incident.

Principle 11: Maintain regular security updates, patches and mitigations

This includes sub-points 11.1-11.3. All Developers must provide security updates to System Operators, who will then deliver them to End-users.

Developers will ensure they have mechanisms and contingency plans to mitigate security risks. 

AI system updates must be treated as though a new version of the model has been released, and must therefore undergo thorough security testing and evaluation. Developers must assist System Operators in evaluating the model changes.

Principle 12: Monitor your system’s behaviour

This includes sub-points 12.1-12.4. System Operators must log all user actions to support security compliance and incident investigations. Using these logs, operators must analyse the data to confirm the AI model continues to produce the desired outputs. The same applies to long-term analysis: output should remain steady, with no gradual degradation in the AI system’s behaviour.
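
As a minimal sketch of such action logging (the logger name, field names and example values are assumptions, not requirements of the Code), structured, timestamped records could be written as JSON Lines:

import json
import logging
from datetime import datetime, timezone

# Illustrative log destination; real deployments would pick their own sink.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_audit")

def log_user_action(user_id: str, action: str, model_version: str, detail: dict) -> None:
    """Append one structured audit record per user action."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "action": action,
        "model_version": model_version,
        "detail": detail,
    }
    logger.info(json.dumps(record))

# Hypothetical usage: record a prompt submitted to the model.
log_user_action("user-42", "prompt_submitted", "v1.3.0", {"prompt_length": 128})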

Principle 13: Ensure proper data and model disposal

If ownership of the training data or of the AI model itself is transferred, certain procedures must be followed, including the involvement of a Data Custodian. This should ensure that no security issues arise when transferring from one AI system instantiation to another.

The same procedure must be followed when an AI model is decommissioned.