Kainos: Kainos and Dstl partner to implement AI ethical principles in Defence

Kainos's work, alongside our Dstl colleagues, has been instrumental in developing an understanding of how to implement AI ethical principles in the delivery of AI-enabled products and services for Defence.

Background & Description

At Kainos, we are delighted to have been engaged by the Defence Science and Technology Laboratory (Dstl) as the delivery partner of the Defence AI Centre (DAIC) programme of advanced rapid AI experimentation.

In 2022, the Ministry of Defence (MoD) announced its position on AI ethics and governance in its policy paper ‘Ambitious, Safe, Responsible’ and shared its AI ethical principles outlined in the Department’s Defence AI Strategy. These principles focus on human-centricity, responsibility, understanding, bias and harm mitigation, and reliability.

Our work, alongside our Dstl colleagues, has been instrumental in developing an understanding of how to implement these principles in the delivery of AI-enabled products and services for Defence.

The nature of AI gives rise to risks and concerns relating to the potential impact on people, which are particularly acute within a Defence context. Finding a way to safely and responsibly use AI is not just desirable, it is essential.

One key component of our approach within the DAIC was to conduct ethics and harm workshops (inspired by Microsoft Harms Modelling), which informed both impact assessments and design choices.

In these sessions, the project team, consisting of data scientists, delivery managers, user researchers, quality assurance experts, solution architects, application engineers and business analysts, came together with subject matter experts from the MoD, taking a ‘one team approach’.

With guidance from a data ethicist, the teams defined the potential benefits, harms and harm mitigations of the AI-enabled system or service. Subsequently, harm mitigations identified in the session were reviewed and selected mitigations were progressed as part of the project plan, informing product and service design.

The ethics and harm workshops were structured around the MoD AI ethical principles. This ensured that the principles were seriously considered for each project, that any potential tensions or trade-offs between them were explored in one holistic discussion, and that they became fundamental to the core delivery.

Conducting ethics and harm workshops at the very start of the agile delivery cycle, and then revisiting them at subsequent stages, ensured not only an ethics-by-design approach, but also that the mitigations were supplemented and updated where necessary as a greater understanding of the project parameters developed. Ethics and harm workshops were an integral part of a wider delivery framework guiding structural checkpoints, safety and legal considerations, and testing.
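To illustrate how workshop outputs can be carried into delivery, the sketch below shows one possible way of recording identified harms, the principles they relate to, and the status of their mitigations as a lightweight register that can be revisited at each stage. It is a minimal, hypothetical Python example; the field names, statuses and the single register entry are illustrative assumptions, not the tooling used within the DAIC.

```python
from dataclasses import dataclass, field
from enum import Enum


class Principle(Enum):
    """The MoD Defence AI ethical principles referenced in the workshops."""
    HUMAN_CENTRICITY = "human-centricity"
    RESPONSIBILITY = "responsibility"
    UNDERSTANDING = "understanding"
    BIAS_AND_HARM_MITIGATION = "bias and harm mitigation"
    RELIABILITY = "reliability"


@dataclass
class Mitigation:
    description: str
    selected_for_plan: bool = False   # progressed into the project plan?
    status: str = "proposed"          # e.g. proposed / in progress / implemented


@dataclass
class HarmEntry:
    """One potential harm identified in an ethics and harm workshop."""
    harm: str
    principles: list[Principle]
    benefits_at_stake: list[str] = field(default_factory=list)
    mitigations: list[Mitigation] = field(default_factory=list)


# Hypothetical register entry, logged at discovery and revisited at later stages.
register = [
    HarmEntry(
        harm="System reused outside the context it was designed for",
        principles=[Principle.RELIABILITY, Principle.HUMAN_CENTRICITY],
        mitigations=[
            Mitigation("Document the intended operating context in a model card",
                       selected_for_plan=True),
        ],
    ),
]
```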

How this technique applies to the AI White Paper Regulatory Principles

Safety, Security and Robustness

When designing for military scenarios where ethics and trust are of critical importance, it is not enough to only explore and mitigate potential harms for an AI-enabled system being used in the way it was originally intended and foreseen.

The ethics and harm workshops allowed us to consider wider vulnerabilities. For example, it is important to explore the potential impact of using AI in a different context to that for which it was originally designed, the risk of harm from unintentional misuse or abuse, and the potential consequences of deploying an AI system in strategically sensitive environments.
These safety, security and robustness-related challenges were explicitly explored in the ethics and harm workshops, prompted by the MoD AI ethical principles of reliability and human-centricity. The output typically informed testing strategies for the AI system in focus.

Appropriate Transparency and Explainability

The MoD AI principle of ‘Understanding’ encouraged teams to explore the risk of opacity in the ethics and harm workshops. This led to specific harm mitigations like the drafting of data explanations, model cards and trust statements, ensuring these could also be understood by non-AI experts.
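As an illustration of what such transparency artefacts might contain, the sketch below shows a minimal, hypothetical model card fragment written in plain language. The model name, fields and wording are assumptions made for illustration only, not an artefact produced on the programme.

```python
# Minimal, hypothetical model card fragment written in plain language.
# All names, fields and wording are illustrative only.
model_card = {
    "model_name": "example-report-triage-classifier",
    "intended_use": "Prioritise incoming reports for human review.",
    "out_of_scope_use": "Not for fully automated decisions about individuals.",
    "training_data": "Historical reports from 2018-2022; known gaps are documented.",
    "known_limitations": "Lower accuracy on rare report categories.",
    "human_oversight": "All high-impact outputs are reviewed by an analyst.",
}

# Render the card as a readable statement for non-AI experts.
for field_name, statement in model_card.items():
    print(f"{field_name.replace('_', ' ').title()}: {statement}")
```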

Fairness

Addressing the MoD ethical principle of bias and harm mitigation led to the articulation of mitigations directed at ensuring fairness. For example, informed by the ethics and harm workshops, ethicists, data scientists and the research team worked closely together to examine the data sources and to identify and minimise any potential bias in data quality.
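A minimal sketch of the kind of descriptive check that can support such a data review is shown below, assuming a tabular dataset with a protected ‘group’ column and a binary ‘label’ column. The column names, data values and thresholds implied are hypothetical and are not drawn from the projects described.

```python
import pandas as pd

# Hypothetical tabular dataset with a protected 'group' column and a binary 'label'.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "C"],
    "label": [1, 0, 1, 0, 0, 1],
})

# How well is each group represented, and how does the outcome vary by group?
representation = df["group"].value_counts(normalize=True)
positive_rate = df.groupby("group")["label"].mean()

print("Share of records per group:")
print(representation)
print("Positive label rate per group:")
print(positive_rate)

# Large gaps in either measure would be raised with the ethicists and subject
# matter experts for discussion, rather than being acted on automatically.
```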

Accountability and Governance

Ethics and harm workshops equally focused on the benefits, risks and required mitigations related to accountability and governance, inspired by the MoD AI ethical principle of responsibility. This informed, among other considerations, where and when a human was brought into the loop of the AI system or service.
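One common way of expressing such a human-in-the-loop decision point is sketched below: low-confidence or high-impact outputs are routed to a person rather than acted on automatically. The threshold, routing labels and function are illustrative assumptions about a generic pattern, not the design used in the DAIC systems.

```python
# Illustrative confidence-threshold gate: route low-confidence or high-impact
# outputs to a person for review. The threshold and labels are assumptions.
REVIEW_THRESHOLD = 0.80


def route_output(confidence: float, high_impact: bool) -> str:
    """Decide whether a model output can proceed or needs human review."""
    if high_impact or confidence < REVIEW_THRESHOLD:
        return "human_review"    # accountability rests with the reviewer
    return "automated_path"      # still logged for audit and governance


print(route_output(confidence=0.65, high_impact=False))  # human_review
print(route_output(confidence=0.95, high_impact=False))  # automated_path
```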

Why we took this approach

Perhaps more than any other part of government, the Ministry of Defence needs to be able to responsibly field robust AI systems that are trusted. To succeed in this goal, the Defence sector must overcome many challenges to ensure the integrity of AI outputs, and clearly demonstrate how it is putting its ethical principles into practice.

Ethics and harm workshops, embedded in a wider AI good practice framework for Defence (currently under development), provided a collaborative platform that brought different skill sets and perspectives together. They enabled the level of detail and consideration required to identify and begin to address potential risks associated with Defence AI systems, implementing the published ethical principles for Defence AI projects.

Benefits to the organisation

  • The ability to anticipate risks and harms early on and embed mitigations in project design from the very beginning;
  • A collaborative approach, allowing for domain expertise to be combined with a diverse skill set from across the project team;
  • A practical application of the MoD AI ethical principles;
  • A platform to explore scenarios, ensuring that not only the desired state is considered in the design, but also unintended yet foreseeable consequences;
  • Supporting a culture of good practice for Defence.

Limitations of the approach

While the technique described lends itself well to managing the risks of specific use cases, a limitation is that the work must be revisited as more use cases develop and are added. This includes situations where an AI-enabled system or service becomes embedded in a wider product or system not previously foreseen.

Published 26 September 2024