Correspondence

Letter from DSIT and DfE Secretaries of State to the Office for Standards in Education, Children's Services and Skills

Published 15 February 2024


Rt Hon Michelle Donelan MP
Secretary of State for Science, Innovation and Technology
100 Parliament Street
London SW1A 2BQ

Rt Hon Gillian Keegan MP
Secretary of State for Education
20 Great Smith St
London SW1P 3BT

1 February 2024

Dear Sir Martyn,

Delivering a pro-innovation approach to AI regulation

Last year, the government published the AI Regulation White Paper outlining our framework for governing AI in order to drive safe, responsible innovation. The White Paper proposed that we will leverage the expertise of our regulators to implement the five principles underpinning the framework. We indicated that we would expect the UK’s regulators to interpret and apply these cross-cutting principles to AI use cases within their remits, allowing AI to be regulated in a targeted, context-specific and coherent manner across the economy. The proposed framework is non-statutory for the time being - although, as signalled in the White Paper, it may become necessary to introduce a statutory duty for regulators to have due regard to the principles at a later point.

Our approach was broadly welcomed by stakeholders across industry, academia and civil society. One of the key areas of feedback, however, was the need to ensure that regulators were taking the risks and opportunities of AI within their remits seriously. Stakeholders wanted more detail on how the principles-based framework would be interpreted and applied by regulators, so that industry could prepare, government could coordinate, and civil society could scrutinise for gaps effectively. As the use of AI becomes more widespread across sectors such as education, we need greater transparency regarding the steps regulators are taking to understand the extent of both the opportunities and risks this creates, and the actions they are taking in response.

We are therefore asking key regulators to publish an update by 30 April 2024, outlining their strategic approach to AI and the steps they are taking in line with the expectations in the White Paper. Your organisation is one of the regulators from whom we would particularly value an update, given how significantly AI risks - such as the misuse of AI in education and children’s social care, along with the other risks outlined in the annex to this letter - impact the sectors you regulate.

This is an opportunity for you to set out the work you are doing to understand, assess and manage the current and emerging risks and opportunities posed by AI as relevant to your remit, and how your regulatory, supportive and enforcement approaches will seek to tackle them. You may also want to consider interactions and overlaps between your area of responsibility and those of other regulators. We have set out the information we would encourage you to cover in this update in the annex to this letter - although, as an independent regulator, you are best placed to determine its form and substance.

We want to thank you for your collaboration and initiative so far – we are determined to unlock the full benefits of AI by establishing a regulatory approach which drives safe, responsible innovation. This will require a sustained effort from both the UK government and the independent regulators at the front line of implementing the framework.

Please note that this request will be published alongside the White Paper consultation response to ensure transparency around the steps we are taking collectively to develop the AI regulatory framework.


Rt Hon Michelle Donelan MP
Secretary of State for Science, Innovation and Technology

Rt Hon Gillian Keegan MP
Secretary of State for Education


Annex

Regulators are best placed to determine the detailed form and substance of this update. However, they may want to consider including information on the specific areas of interest detailed below. Where suitable, we are happy for regulators to signpost pre-existing material.

  • Their current assessment of how AI applies within the scope of their regulatory responsibilities, including an explanation of their enabling legislation and its relevance in the context of AI.

  • The steps they are already taking to adopt the AI principles set out in the White Paper – where possible this should include concrete examples of the actions they have taken.

  • A summary of guidance they have issued or plan to issue on how the principles interact with existing legislation and the steps organisations they regulate should take in line with the principles.

  • The work they are doing to understand, assess and manage the current and emerging risks posed by AI as relevant to their sector and remit. This could range from social harms, such as bias and discrimination, to broader harms such as cyber security and privacy risks, and the potential for AI misuse by bad actors (to be informed in due course by the government’s central AI risk assessment).

  • Interactions and overlaps between their area of responsibility and those of other regulators. They could also cover any assessments of AI risks and opportunities that they have made, and how their regulatory, supportive and enforcement approaches will seek to tackle them.

  • The steps they have taken to collaborate with other regulators to identify and tackle AI-related issues that cut across regulatory remits.

  • An explanation of their current capability to address AI risks within their regulatory remit - and how this compares with their assessment of the capabilities they need. This should set out the structures and resources they currently have in place, including an assessment – quantified where possible – of: a) the number of people working partly or fully on AI-related issues, b) the budget they have allocated to AI-related issues, and c) the specific skills and expertise they require in order to regulate AI effectively within their sector.

  • A forward look at their plans and activities over the coming 12 months. This should include the actions they are taking to address any capability gaps identified above and could also include – but need not be limited to – risk assessment work they plan to undertake, tools and/or guidance they are preparing, planned stakeholder engagement activity, and international engagement. It would be useful to understand how they may prioritise their organisation’s resources to support the work within this forward look.