Anekanta AI: AI Risk Intelligence System for biometric and high-risk AI

Case study from Anekanta AI.

Background & Description

Anekanta® AI provides research and risk management services for high-risk AI to global organisations that develop and use the technology. We identified a problem that both developers and users experience when attempting to classify and assess the risks posed by an AI system's features, especially those which process biometric and biometric-based data, while also considering the impact and risks to stakeholders.

Anekanta® AI’s AI Risk Intelligence System™ is a specialised discovery and analytical framework built around a questionnaire that challenges developers and users to consider a range of detailed questions about the AI system they have developed, procured or plan to integrate into their operations.

The questions relate to transparency and explainability and encompass the level of autonomy, the origin of the inputs, the expected (and sometimes unexpected) outputs, and the effects and impacts of the AI system. The questionnaire, soon to become an online service, is currently completed in-house by Anekanta® AI’s team in collaboration with the developer or user. When complete, the impact and risk data set generated from the questionnaire is analysed by Anekanta® AI, leading to a thorough, wide-reaching and consistent impact and risk assessment together with risk-mitigating recommendations. The system and its outputs may readily be aligned with the UK’s AI Regulation white paper, GDPR and the pending EU AI Act. The EU AI Act is driving users and developers towards both legal and voluntary compliance obligations which set a high bar for feature and use case discovery, and which are transferable to all other regions developing AI legislation and standards.
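To make the questionnaire-to-assessment flow concrete, the sketch below shows one way such a structured answer set could be represented and aggregated into a per-principle risk profile. This is purely illustrative: the question IDs, principle names, risk levels and worst-case aggregation rule are assumptions for the example, not Anekanta® AI's actual schema or methodology.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    # Hypothetical ordering loosely inspired by EU AI Act risk tiers
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

@dataclass
class Answer:
    question_id: str     # e.g. "Q47" (illustrative)
    principle: str       # e.g. "transparency", "fairness"
    risk_level: RiskLevel
    note: str = ""

def risk_profile(answers):
    """Aggregate answers into a per-principle profile, keeping the
    worst (highest) risk level recorded for each principle."""
    profile = {}
    for a in answers:
        prev = profile.get(a.principle)
        if prev is None or a.risk_level.value > prev.value:
            profile[a.principle] = a.risk_level
    return profile

answers = [
    Answer("Q12", "transparency", RiskLevel.LIMITED),
    Answer("Q47", "fairness", RiskLevel.HIGH, "biometric categorisation"),
    Answer("Q48", "fairness", RiskLevel.LIMITED),
]
print(risk_profile(answers))
```

The point of such a structure is that answers from different systems become directly comparable, which is the "consistent impact and risk assessment" property the text describes.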

How this technique applies to the AI White Paper Regulatory Principles

More information on the AI White Paper Regulatory Principles.

Safety, Security and Robustness

The AI Risk Intelligence System™ helps organisations, whether they are developing or using biometric and high-risk AI products, to consider the impact of good cyber security practices on stakeholders and to understand the stability and repeatability of the AI technology.

Appropriate Transparency and Explainability

The system helps organisations, whether they are developing or using biometric and high-risk AI products, to learn about the features and outputs of AI technology in a way which encourages transparency and supports the steps needed to achieve an appropriate level of explainability, expressed in real-world terms understandable by the user.

Fairness

Our system helps organisations, whether they are developing or using biometric and high-risk AI products, to consider the fairness of the technology by examining the type of data collected and testing whether, through the processing of such data, individuals or groups of individuals may be discriminated against. The tool stimulates the questioning process with the goal of mitigating, or avoiding altogether, bias-inducing techniques.

Accountability and Governance

Anekanta® AI’s AI Risk Intelligence System™ helps organisations, whether they are developing or using biometric and high-risk AI products, to elevate the AI risk landscape to the board and senior management agenda. This is achieved by supporting organisations to ensure that there are clear decision-making and escalation routes which lead to effective governance measures and allow the activation of an ‘emergency stop button’ if harm is discovered.

Contestability and Redress

Our system helps organisations, whether they are developing or using biometric and high-risk AI products, to consider the impact of the technology on their stakeholders and, as a result, to create robust policies available to affected parties so that they may contest the use of the technology and seek redress from the correct party identified in the discovery process.

Why we took this approach

There is little or no consistency in the way biometrics-based AI technologies and their features are described and understood by developers and users. This poses a problem which, left unaddressed, may stifle adoption or lead to misuse. Anekanta’s system asks detailed questions about the outputs and effects of the software rather than making assumptions about its functionality. The resulting information may be further analysed and decisions made regarding the impact and risk associated with the use of the software, including whether further compliance with standards or the prevailing legislation/regulation is needed.

Benefits to the organisation

The Anekanta® AI, AI Risk Intelligence System™ for biometric and high-risk AI helps organisations build consistency into their governance processes. The system leads to the creation of a new, comparable data set about the AI, which can be analysed to reach conclusions about the impact and risks associated with using certain types of software. The framework is based on the requirements of the EU AI Act regarding biometric and biometric-based AI technologies and their potential impacts on the health, safety and fundamental rights of people. By using the framework, users and developers take a step towards meeting their obligations under the Act, align with the UK’s AI regulatory and principles-based governance frameworks, and in doing so complete vital groundwork for compliance with emerging standards such as ISO/IEC 23894 and ISO/IEC 42001. Implementation of a range of AI standards may also provide developers and users with evidence of their fulfilment of certain legal aspects of the EU AI Act.

Limitations of the approach

The framework is a 150+ point questionnaire which requires a deep understanding of how AI systems are developed and operate. This knowledge and understanding is combined with the interpretation skills of Anekanta® team members, or of trained employees within the organisation using the system. While the process may appear onerous, the data collected is useful for informing the further business analysis required to meet the requirements of the nascent UK AI Regulation and the EU AI Act, which will become a legal requirement in early 2024.

Further AI Assurance Information

Updates to this page

Published 19 September 2023