Open consultation

Guidance for using the AI Management Essentials tool

Published 6 November 2024

AI Management Essentials (AIME) is a self-assessment tool designed to help businesses establish robust management practices for the development and use of AI systems.[footnote 1] The tool is not designed to evaluate AI products or services themselves, but rather the organisational processes in place to enable the responsible development and use of these products.

Who AIME is designed for

AIME can be used by any organisation that develops, provides or uses services that utilise AI systems as part of its standard business operations. AIME is sector agnostic and may be used by organisations of different sizes. However, it is primarily intended for small and medium-sized enterprises (SMEs) and start-ups that encounter barriers when navigating the evolving landscape of AI management standards and frameworks. For larger organisations, AIME can be used to assess AI management systems for individual business divisions, operational departments or subsidiaries.[footnote 2]

Why DSIT developed AIME

Over the last few years, there has been a proliferation of standards and frameworks designed to help organisations effectively manage AI systems. While these resources offer important guidance, our engagement with industry suggests that many organisations find it challenging to navigate this landscape and engage with these resources. To address this, the Department for Science, Innovation and Technology (DSIT) has developed AIME to deliver practical support and greater clarity for businesses. AIME distils key principles from existing AI regulations, standards and frameworks to provide an accessible resource for organisations to assess – and improve – their AI management systems and practices.

We conducted a literature review of key frameworks and standards and based the tool on 3 prominent frameworks:

We prioritised these international frameworks, in part, to ensure the interoperability of the tool. It is worth noting that AIME does not seek to replace these frameworks, nor does completing the AIME self-assessment represent compliance, but it provides a starting point for implementing commonly regarded best practices in AI management.

AIME will also complement and support other existing international efforts to identify and mitigate risks posed by AI systems, such as the OECD reporting framework for the G7 Hiroshima process code of conduct for organisations developing advanced AI systems, which is currently under development. In particular, the OECD’s G7 reporting framework will primarily seek to facilitate effective action and greater transparency among companies developing the most advanced AI systems, while AIME will provide an accessible starting point for organisations of any size to assess and improve their AI management systems.

We then conducted a thematic analysis to identify common themes and principles across these documents, and distilled the key information into a series of questions that organisations can use to self-assess their AI management systems and identify actions for improvement.

Over the past year, we have iterated and tested a prototype of AIME in 3 targeted pilots with industry organisations. These pilots were followed by 3 workshops, where we presented and iterated the tool with regulators and policy makers; government departments; and SMEs via techUK. This feedback has informed the ongoing development of the tool, and this consultation seeks to gather information to refine it further.

How AIME can help businesses

The tool will not be mandatory to use but will help organisations to embed baseline good practice within their AI management systems. It is designed to provide clarity on what is needed to demonstrate responsible AI management systems and will help organisations to identify the strengths and weaknesses of their internal processes. The tool also provides practical actions for improving management systems.

AIME does not provide formal certification. However, working through the tool will help organisations to assess and improve their AI management processes, and become better positioned for a foundational level of compliance with the standards and frameworks that inform it.

In the future, there may be opportunities to explore embedding AIME into public sector procurement frameworks for AI products and services.

What the AIME tool will look like

We expect the final version of AIME to include 3 components:

  • a self-assessment questionnaire
  • a rating for each section of the self-assessment, calculated from self-assessment answers, to provide users with a concise view of the health of their AI management system
  • a set of action points and recommendations for improvement, generated from self-assessment answers

Only the self-assessment questionnaire is included in this consultation. The ratings and recommendations will be developed by DSIT following this consultation. The outputs will be made available alongside the final version of the AIME tool.

The self-assessment questionnaire is organised into 3 thematic areas:

  • Internal processes: these questions assess the overarching structures and principles underlying your AI management system.
  • Managing risks: these questions assess the processes through which you prevent, manage, and mitigate risks.
  • Communication: these questions assess your communication with employees, external users and interested parties.

Each section begins with a motivating statement representing good practice, which the questions that follow are designed to interrogate.

Completing the AIME self-assessment

The assessment should be completed by an individual or individuals who have wide-reaching knowledge of an organisation’s governance practices. For example, a chief technology officer (CTO) or software engineer may have relevant expertise for answering more technical questions. You may also find it helpful to involve an AI ethics officer or HR business manager, if you have colleagues with these roles or similar in your organisation.

How to complete the AIME self-assessment

Please note, you do not need to complete the self-assessment in order to respond to this consultation. We welcome general feedback on the design, content and usability of this tool.

If you would like to complete the self-assessment, please work your way through the questions in order, starting from section 1. The self-assessment tool is multiple-choice. It can be printed and completed by hand or digitally using a PDF mark-up tool. The time to complete will vary, depending on your expertise and your organisation’s existing governance structures.

Depending on your answer to a given question, you may not be required to respond to all subsequent questions in that section. Where this is the case, this will be clearly stated beside the relevant answer box. If no option to skip is provided, please proceed to the next question as usual. Questions that are conditional on a previous answer are indented.

For questions containing technical or specialised terminology, a short explanation of these terms is provided in-line in a grey text box. A glossary of key terms used throughout the self-assessment and guidance is available in Annex A.

See the AIME tool.

  1. AI systems: products, tools, applications or devices that utilise AI models to help solve problems. AI systems are the operational interfaces to AI models – they incorporate technical structures and processes that allow models to be used by non-technologists. More information on how AI systems relate to AI models and data can be found in DSIT’s Introduction to AI Assurance.

  2. AI management system: the set of governance elements and activities within an organisation that support decision making and the delivery of outcomes relating to the development and use of AI systems. This includes organisational policies, objectives and processes, among other things. More information on assuring AI governance practices can be found in DSIT’s Introduction to AI Assurance.