Anekanta® AI: Facial Recognition Privacy Impact Risk Assessment System™ (for verification and remote biometric identification)

Anekanta® AI’s Facial Recognition Privacy Impact Risk Assessment System™ helps organisations identify and address the risks associated with facial recognition technology, ensuring compliance with applicable laws and regulations.

Background and Description

As facial recognition technology becomes more sophisticated and more widely used to address difficult problems in health and safety, security and operational efficiency, it is essential for organisations to have a plan in place to mitigate the risks associated with its use. Anekanta® AI’s Facial Recognition Privacy Impact Risk Assessment System™ helps organisations identify and address these risks, ensuring compliance with all applicable laws and regulations and that this powerful technology is used in a responsible and ethical manner which minimises the impact on fundamental rights.

Anekanta® specialises in de-risking high-risk AI, from biometrics through to generative AI, to help the market develop safely. The company contributes to the development of best practice and standards in these areas around the world, including providing input and guidance to the new British Standard, “Facial recognition technology – Ethical use and deployment in video surveillance-based systems – Code of practice” (BS 9347:2024).

Anekanta®’s system is built on recognised UK and global regulations, principles and standards for AI technology, including the EU AI Act. It considers the specific requirements of the region, nation and locality of the deployment in the context of the intended purpose.

By using our contextualised proprietary regulation database, the system automatically provides an independent pre-mitigation report and a range of specific recommendations to help steer towards compliance and minimise the impact on the rights and freedoms of individuals. Our independent reports are also tailored to the specific use case and provide actionable insights into the following (illustrated by the sketch after this list):

  1. Potential risk level pre-implementation
  2. Legislation and regulation governing the use of facial recognition software
  3. EU AI Act requirements and local ordinances with specific requirements for the city or state (if development or deployment is outside the UK)
  4. Recommended mitigations that consider all risks, including AI governance, human rights, union law, employment rights, privacy rights, impact assessments, prohibitions, voluntary and harmonised standards, and good practice
  5. Potential risk level if mitigations are implemented, together with residual risk which requires ongoing monitoring and management
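
To make the shape of these outputs concrete, the following is a minimal illustrative sketch of how a report covering the five areas above might be represented in code. This is not Anekanta®’s implementation; every type and field name here is hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class Mitigation:
    description: str      # e.g. "appoint a trained operator to review every match"
    addresses: list[str]  # risk areas covered, e.g. ["human rights", "privacy rights"]

@dataclass
class AssessmentReport:
    """Hypothetical structure for a pre-/post-mitigation risk report."""
    pre_mitigation_risk: RiskLevel             # 1. potential risk level pre-implementation
    applicable_legislation: list[str]          # 2. legislation/regulation governing the use
    regional_requirements: list[str]           # 3. EU AI Act and city/state ordinances
    recommended_mitigations: list[Mitigation]  # 4. mitigations across governance and rights
    post_mitigation_risk: RiskLevel            # 5. risk level if mitigations are implemented
    residual_risks: list[str] = field(default_factory=list)  # need ongoing monitoring
```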

How this technique applies to the AI White Paper Regulatory Principles

Safety, Security & Robustness

The Facial Recognition Privacy Impact Risk Assessment System™ helps organisations, whether they are developing or using facial recognition/remote biometric high-risk AI products, to recognise their cyber security obligations under GDPR, to consider the impact of good cyber security practices on stakeholders, and to understand the stability and repeatability of the AI technology. The reports generated by the system lead the decision maker/risk holder to consider the relevant de facto tests which provide assurance that the software has been tested for bias and is able to detect faces across a range of demographics. Additionally, the human-in-the-loop relationship is highlighted to stress that, for the identification use case, a trained operator should always make the final decision before any action is taken following a face match.
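
As a sketch of the human-in-the-loop principle described above (illustrative only; the function names and console interaction are invented and not part of the system), a deployment might gate every face match behind an explicit operator decision:

```python
from dataclasses import dataclass

@dataclass
class FaceMatch:
    candidate_id: str
    similarity: float  # score from the matching engine, 0.0 to 1.0

def operator_confirms(match: FaceMatch) -> bool:
    """Stand-in for a trained operator's review via a proper UI."""
    answer = input(f"Confirm match with {match.candidate_id} "
                   f"(score {match.similarity:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def act_on_match(match: FaceMatch) -> None:
    # No action is ever taken on the algorithmic match alone:
    # a trained operator always makes the final decision.
    if operator_confirms(match):
        print("Match confirmed by operator; proceed according to policy.")
    else:
        print("Match not confirmed; no action taken.")
```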

Appropriate Transparency & Explainability

The reports help organisations, whether they are developing or using facial recognition or remote biometric high-risk AI products, to learn about the risks and impacts of AI technology in a way which encourages transparency and supports the steps needed towards achieving an appropriate level of explainability, in real-world terms which are understandable by the user.

Fairness

Our reports help organisations, whether they are developing or using facial recognition or remote biometric AI products, to consider the fairness of the technology by examining the type of data collected and testing whether, through processing such data, individuals or groups of individuals may be discriminated against. The tool stimulates the questioning process with the goal of leading to mitigations, or the avoidance of bias-inducing techniques altogether.

Accountability & Governance

The Anekanta® AI Facial Recognition Privacy Impact Risk Assessment System™ helps organisations planning to develop or use facial recognition or remote biometric high-risk AI products to elevate the AI risk landscape to the board and senior management agenda. This is achieved through the reports, which support organisations in ensuring that there are clear decision-making and escalation routes which lead to effective governance measures and allow the activation of an ‘emergency stop button’ if harm is discovered.

Contestability & Redress

Our reports help organisations, whether they are developing or using facial recognition or remote biometric high-risk AI products, to consider the impact of the technology on their stakeholders and, as a result, to create robust policies which are available to impacted parties so that they may contest the use of the technology and seek redress or removal from the database.

Why we took this approach

There is no single law in the UK which regulates facial recognition software and remote biometric high-risk AI technologies. Users must navigate a range of different pieces of legislation depending on the use case scenario. This legislation is not always clear to organisations, and as a result they may make decisions without knowing with any degree of certainty that their approach is correct. Left unaddressed, this problem may stifle adoption or lead to misuse. Anekanta®’s system asks detailed questions about the use case and the effects of the software rather than making assumptions about how, when and where it will be used. The resulting report may be further analysed, and decisions made regarding the impact and risk associated with the use of the software, which may include decisions about the need to achieve further compliance through standards or the prevailing legislation/regulation.
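
A minimal sketch of this question-driven approach is below. It is illustrative only: the proprietary regulation database and question set are not public, so the questions and rules here are invented stand-ins drawn from the instruments named elsewhere in this case study.

```python
# Illustrative only: hand-written rules standing in for a
# contextualised regulation database keyed on use-case answers.
def applicable_instruments(answers: dict) -> list[str]:
    instruments = ["UK GDPR / Data Protection Act 2018"]  # baseline for biometric data
    if answers.get("region") == "EU":
        instruments.append("EU AI Act high-risk requirements for remote biometric identification")
    if answers.get("real_time_public_space"):
        instruments.append("Check prohibitions on real-time remote biometric identification")
    if answers.get("workplace_use"):
        instruments.append("Employment rights and union law considerations")
    return instruments

answers = {"region": "EU", "real_time_public_space": True, "workplace_use": False}
for instrument in applicable_instruments(answers):
    print("-", instrument)
```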

Benefits to the organisation using the technique

The Anekanta® AI Facial Recognition Privacy Impact Risk Assessment System™ for facial recognition software and remote biometric high-risk AI technologies helps organisations build consistency into their governance processes. The system leads to the creation of a report which can be analysed to reach conclusions about the impact and risks associated with the use of the software.

The framework is based on the requirements of the EU AI Act regarding remote biometric high-risk AI technology and its potential impacts on the health, safety and fundamental rights of people. It also aligns with the UK’s AI regulatory and principle-based governance frameworks, and in doing so completes vital groundwork for the preparations needed to comply with emerging standards such as ISO/IEC 23894 and ISO/IEC 42001.

Implementation of a range of AI standards may also provide developers and users with evidence of their fulfilment of certain legal aspects of the EU AI Act. Additionally, the system signposts the operationalised trustworthy AI framework set out in the new BS 9347 standard. In summary, the system provides risk reports for the use of facial recognition software and remote biometric identification high-risk AI in the UK, EU, USA and APAC regions, leading the developer and user to their responsibilities and accountabilities in accordance with the law and good practice recommendations.

Limitations of the approach

The ethics and legality of the use of the technology can vary depending on the competency of the deployer. The assurance system does not evaluate competency, other than advising that human oversight is required for certain scenarios and must be undertaken by a trained, competent person. The assurance system cannot address these issues directly in a consistent way, as there is no legal requirement in the UK for the technology to be certified to a given standard, nor are deployers licensed or legally required to institute a measurable level of training. These issues are outside the control of Anekanta®, therefore the assurance system cannot set out consistent, measurable mitigations in this regard.

For further information visit Anekanta® AI’s website or contact the company at ai-risk@anekanta.co.uk.

Updates to this page

Published 26 September 2024