DNV: Recommended Practice: Assurance of AI-enabled systems 

Case study from DNV.

Background & Description

DNV, the risk management and assurance company, has published a suite of recommended practices (RPs) that will enable companies operating critical devices, assets, and infrastructure to safely apply artificial intelligence (AI). Assurance of AI-enabled systems (DNV-RP-0671) is available free of charge at:

DNV-RP-0671 Assurance of AI-enabled systems - edition September 2023

DNV-RP-0671 is a new recommended practice that provides guidance on how to assure that AI-enabled systems are trustworthy and managed responsibly throughout their entire lifecycle.

In this RP, ‘trustworthy AI’ refers to 1) the technical characteristics of the AI-enabled system, and 2) social aspects in the assurance process.

The social aspects include involving stakeholders in the assurance, prioritizing between system characteristics, and negotiating trade-offs between actors responsible for different parts of the system. ‘Responsible management of AI’, on the other hand, refers to the ethical and societal considerations that are taken into account during the entire lifecycle (e.g., design, development, and deployment) of AI-enabled systems. ‘Responsible management of AI’ entails that the use of AI promotes and safeguards the values of society. For the purpose of this recommended practice, the values of society are represented by a set of core ethical principles (beneficence, non-maleficence, autonomy, justice, and explicability).

DNV-RP-0671 includes:

How to develop case-specific requirements for AI and the system containing AI, to foster warranted trust in the capabilities of the AI-enabled system and to support responsible management of AI.

How to show compliance with the case-specific requirements and applicable standards throughout the lifecycle.

How to strengthen knowledge of the system's properties and behaviour, and thereby increase confidence that the system meets expectations.

Methodological details that support the assurance process.

Relevant Cross-Sectoral Regulatory Principles

Safety, Security & Robustness

The assurance framework described in DNV-RP-0671 provides a process to strengthen knowledge of the system's properties and behaviour, thereby increasing the level of confidence that the system meets expectations. The RP defines the process to develop case-specific system requirements for assuring trustworthy and responsibly managed AI-enabled systems.

Accountability & Governance

DNV-RP-0671 provides methods to show compliance with the case-specific requirements and applicable standards throughout the lifecycle of the AI-enabled system. This methodology can also be applied to demonstrate compliance with regulation when required.

Why we took this approach

AI technology is rapidly gaining in importance in many industries and will likely prove to be a major driver of economic growth. Any organization that develops AI-enabled products and systems will need to build trust with its customers.

Assurance is a way to show compliance with regulations and standards. It will become a “ticket to trade” with the rollout of new regulations (e.g., the EU AI Act) and increasing standardization across and within industries (e.g., the ISO/IEC JTC 1/SC 42 series of standards on AI).

Additionally, DNV believes that tools which improve assurance services for AI-enabled systems to a level acceptable to stakeholders and regulators will almost certainly open up opportunities for such services, and for customers, in a wide range of industries.

Benefits to the organisation using the technique

AI-enabled systems can suffer from negative user perception, so establishing a framework to ensure and demonstrate that such systems are safe, competent, and able to perform as required is key to securing acceptance.

DNV-RP-0671 can be used by any actor to assure that an AI-enabled system is trustworthy and managed responsibly. This includes any organization that develops, sells, integrates, uses, operates, interacts with, depends on, or is affected by AI components or AI-enabled systems.

It can also be used to demonstrate compliance with applicable laws and regulations.

Limitations

This recommended practice does not cover the assurance of organizations involved in AI-enabled systems, such as the organizations’ management systems or work processes. This topic is covered by other DNV recommended practices for digital trust (see further links).

Artificial intelligence - DNV

Recommended Practices for Successful Digital Transformation - DNV

Further AI Assurance Information

Updates to this page

Published 12 December 2023