Sheffield Hallam University: Accountability Principles for Artificial Intelligence (AP4AI)  

Case study from Sheffield Hallam University.

Although this case study does not relate directly to UK regulatory principles, it provides a useful example of how AI assurance techniques can be applied in a law enforcement context.

Background & Description

Accountability Principles for Artificial Intelligence (AP4AI) is a framework designed specifically for Law Enforcement Agencies (LEAs) deploying Artificial Intelligence systems. The framework is built around 12 core principles; when an agency demonstrates through the AP4AI Self-Assessment Tool that these principles are met, it receives an AI Accountability Agreement that can be used to evidence the compliance work undertaken by that agency.

The twelve principles with which LEAs must align to achieve this are: Universality, Legality, Explainability, Transparency, Independence, Compellability, Commitment to Robust Evidence, Pluralism, Conduct, Enforceability and Redress, Learning Organisation, and Constructiveness.
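The actual AP4AI Self-Assessment Tool is a purpose-built application whose internal workings are not described in this case study. As a purely hypothetical illustration, the Python sketch below shows one way a per-principle self-assessment and a summary of its outcome could be represented; the names PrincipleAssessment and accountability_summary are invented for this example and are not part of AP4AI.

```python
# Hypothetical sketch only: AP4AI's real tool and scoring logic are not
# public. This illustrates the idea of assessing an AI deployment against
# each of the 12 principles and summarising whether all are met.

from dataclasses import dataclass

PRINCIPLES = [
    "Universality", "Legality", "Explainability", "Transparency",
    "Independence", "Compellability", "Commitment to Robust Evidence",
    "Pluralism", "Conduct", "Enforceability and Redress",
    "Learning Organisation", "Constructiveness",
]

@dataclass
class PrincipleAssessment:
    principle: str
    met: bool
    evidence: str  # e.g. a reference to policy documents or audit records

def accountability_summary(assessments: list[PrincipleAssessment]) -> dict:
    """Summarise which principles are met and whether all 12 are satisfied."""
    unmet = [a.principle for a in assessments if not a.met]
    return {
        "principles_assessed": len(assessments),
        "all_met": not unmet and len(assessments) == len(PRINCIPLES),
        "unmet_principles": unmet,
    }

# Example usage with two of the twelve principles:
sample = [
    PrincipleAssessment("Legality", True, "Reviewed against national law"),
    PrincipleAssessment("Transparency", False, "Public reporting not yet in place"),
]
print(accountability_summary(sample))
```

In the real framework, an agency that evidences all twelve principles through the tool would receive its AI Accountability Agreement; the sketch simply mirrors that pass/fail structure.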

The principles were developed by CENTRIC and Europol, who combined outputs from consultations with more than 6,000 citizens across 30 countries: the 27 EU Member States, the USA, Australia, and the UK.

These consultations, alongside consultation with subject-matter experts (SMEs), consolidated the framework into a citizen-centric, expert-validated approach. LEAs that use AP4AI during the deployment stage of their AI systems can offer greater transparency around, and discussion of, their operations, supporting alignment with regulatory requirements and standards while also promoting fundamental rights.

How this technique applies to the AI White Paper Regulatory Principles

More information on the AI White Paper Regulatory Principles.

Safety, Security & Robustness

Any AI implementation carries a core requirement to ensure that AI systems function in a robust, secure, and safe manner. At the heart of the AP4AI Self-Assessment Tool is the ability for end users to see how their AI lifecycle has been assessed and managed against the 12 core principles, which were produced through SME and citizen consultations. AP4AI thereby gives end users the means to conduct their own due diligence into the security and safety of their AI practice.

Appropriate Transparency & Explainability

AP4AI provides a transparent view of Artificial Intelligence that can be translated across a range of disciplines. Although AP4AI focuses on policing, its outputs encourage informed conversations and enable citizens to understand how an AI system will affect them. This highlights the balance of risk against benefit for the AI system, from which it can be determined whether the system is safe to implement.

Fairness

In the current AI ecosystem, where dedicated regulation is absent, AP4AI builds legal considerations into its principles to ensure that users identify and apply relevant regulation, technical standards, and national law when developing their AI systems. This also includes protecting human rights and privacy.

Accountability & Governance

Accountability is the central product of the AP4AI principles: for any issue that arises, there should be an identifiable individual, function, or requirement that can be held accountable. This can in turn improve how courts view the enforceability of good practice in AI systems.

Contestability & Redress

Redress, meaning the provision of an effective remedy to those wronged by AI systems and their outputs, is widely agreed to be a necessity for any real accountability. The successful application of AP4AI requires public bodies to be accountable for their decisions and to put in place measures that enable people to challenge decisions and seek redress through procedures that are independent and transparent.

Why we took this approach

AP4AI was undertaken in this manner because, at its core, the framework was designed for the citizen. Achieving this goal means that citizens must be able to express their views of, and expectations for, Law Enforcement deploying AI systems. AP4AI uses these perspectives to ensure that Law Enforcement functions neither infringe upon citizens' rights nor affect them in an intrusive way. Using the Self-Assessment Tool gives an LEA a more transparent outlook: it can see where it needs to improve and develop its AI system before that system becomes operational.

Benefits to the organisation using the technique

AP4AI is a multi-stage framework whose primary aim is to produce safe AI systems in Law Enforcement, but which also educates Law Enforcement about Artificial Intelligence. It is a robust and agile framework that engages all areas of AI governance, including both technical and non-technical subject-matter experts. Alongside the Fundamental Rights Agency, AP4AI will provide a step change in the way LEAs approach AI, and it aims to remove the stigma around AI policing by providing an audit trail for the adoption of innovative practice. The AP4AI tools are technologically separate from the framework itself; they are adaptive and can grow to meet the requirements of whatever regulations are in place.

Limitations of the approach

The main current limitation is that there is, at present, no regulatory standard for AI in the UK. AP4AI, however, can be adapted as required to any regulatory standard internationally.

https://www.ap4ai.eu/

Further AI Assurance Information

Updates to this page

Published 12 December 2023