Advai: Advanced Evaluation of AI-Powered Identity Verification Systems
Case study from Advai.
Background & Description
The project introduces an innovative method for evaluating the AI systems of identity verification vendors, which goes beyond traditional testing on sample image datasets. As verification tools, and the methods used to deceive them, grow more complex, it is vital to assess claims about advanced machine learning accurately. Advai offers a service that cross-evaluates providers against the vulnerabilities most critical to an organisation, increasing resilience to online threats, adversarial activity, and fraud.
How this technique applies to the AI White Paper Regulatory Principles
More information on the AI White Paper Regulatory Principles.
Safety, Security & Robustness
The approach prioritises the safety and robustness of ID verification systems by introducing advanced testing to identify and mitigate vulnerabilities, ensuring the systems can withstand sophisticated adversarial attacks.
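As an illustration of the kind of robustness probing described above, the sketch below stress-tests a similarity-based matcher by adding progressively larger random noise to a genuine image until the accept decision flips. Both `toy_verifier` and `robustness_under_noise` are hypothetical stand-ins, not Advai's actual tooling, and random noise is only a crude proxy for a real adversarial attack.

```python
import numpy as np

def toy_verifier(image: np.ndarray, reference: np.ndarray,
                 threshold: float = 0.9) -> bool:
    """Hypothetical matcher: accept if cosine similarity of the
    flattened inputs meets the threshold."""
    a, b = image.ravel(), reference.ravel()
    sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return sim >= threshold

def robustness_under_noise(image: np.ndarray, reference: np.ndarray,
                           max_eps: float = 0.5, steps: int = 10,
                           trials: int = 20, seed: int = 0):
    """Return the smallest noise magnitude (if any) at which a random
    perturbation flips an accept into a reject; None if none found."""
    rng = np.random.default_rng(seed)
    for eps in np.linspace(max_eps / steps, max_eps, steps):
        for _ in range(trials):
            noisy = image + rng.normal(0.0, eps, size=image.shape)
            if not toy_verifier(noisy, reference):
                return float(eps)
    return None

# Synthetic example: a genuine capture is the reference plus mild sensor noise.
rng = np.random.default_rng(1)
ref = rng.normal(size=(8, 8))
genuine = ref + rng.normal(0.0, 0.05, size=ref.shape)
eps = robustness_under_noise(genuine, ref)
```

A lower flipping magnitude indicates a more fragile system; comparing this figure across vendors is one way such an evaluation can be made quantitative.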
Appropriate Transparency & Explainability
Advai’s method allows for informed comparisons and insights into the verification systems, providing clarity and understanding of AI model robustness, which is especially important for complex claims of advanced machine learning.
Fairness
Testing for bias in ID verification systems helps mitigate ethical and regulatory concerns, ensuring that features such as eye, hair, and skin colour, or the presence of accessories like beards and glasses, do not unduly influence verification outcomes.
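The bias testing described above can be sketched as a comparison of verification pass rates across attribute groups. The group labels and counts below are synthetic illustrations chosen for this example, not real measurements of any vendor's system.

```python
from collections import defaultdict

def pass_rates_by_group(results):
    """results: iterable of (group_label, passed) pairs.
    Returns the pass rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [passes, total]
    for group, passed in results:
        counts[group][0] += int(passed)
        counts[group][1] += 1
    return {g: p / t for g, (p, t) in counts.items()}

def max_disparity(rates):
    """Largest gap in pass rate between any two groups."""
    vals = list(rates.values())
    return max(vals) - min(vals)

# Synthetic outcomes for a hypothetical "glasses" attribute:
# 90/100 pass with glasses, 97/100 without.
results = ([("glasses", True)] * 90 + [("glasses", False)] * 10
           + [("no_glasses", True)] * 97 + [("no_glasses", False)] * 3)
rates = pass_rates_by_group(results)
gap = max_disparity(rates)
```

A non-trivial gap between groups would flag the attribute for further investigation; the same computation extends to eye, hair, and skin colour or any other labelled feature.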
Accountability & Governance
By working with vendors to suggest system improvements, Advai’s approach encourages responsible AI development and governance that aligns with an organisation’s unique risk profile and industry requirements.
Contestability & Redress
The comprehensive reports and user-friendly dashboard may support mechanisms for contestability and redress by making it easier to identify and address issues.
Why we took this approach
Our advanced, adversarial-driven approach is taken to combat the increasingly sophisticated fraud landscape and to provide organisations with the confidence that their chosen ID verification system is robust, unbiased, and optimised for real-world challenges.
There are many different components to identity verification, and we do not claim to address the full scope of vulnerabilities that these systems may have. However, Advai holds a market-leading library of adversarial techniques and tools and believes it can successfully tackle a handful of these vulnerabilities.
Benefits to the organisation using the technique
- Increased resilience and robustness of ID verification systems against fraud and adversarial attacks.
- Enhanced understanding and control over potential biases and ethical concerns in ID verification.
- Superior assessment capabilities leading to better-informed vendor selection, based on comprehensive, comparative benchmarks.
- Real-world testing for vulnerabilities ensures the system’s effectiveness across various conditions and user demographics.
- Streamlined vendor evaluation process through user-friendly tools and comprehensive reporting.
Limitations of the approach
- May require sophisticated understanding and collaboration from vendors to implement suggested improvements.
- Continuous evolution of adversarial techniques means that robustness assessments may need to be frequently updated.
- The approach could potentially lengthen the vendor selection process due to the depth and breadth of testing.
- There might be a trade-off between enhanced security and user convenience or system performance.
Further Links (including relevant standards)
Further AI Assurance Information
- For more information about other techniques visit the CDEI Portfolio of AI Assurance Tools: https://www.gov.uk/ai-assurance-techniques
- For more information on relevant standards visit the AI Standards Hub: https://aistandardshub.org