Holistic AI: NYC Bias Audits

Case study from Holistic AI.

Although this example does not directly relate to the UK regulatory principles, it usefully illustrates how generic AI assurance techniques may need to be tailored to a particular regulatory context.

Background & Description

The Holistic AI Bias Audits platform is a complete solution for auditing automated employment decision tools (AEDTs) used to assist hiring or promotion decisions. Created in response to New York City Local Law 144 (the Bias Audit Law), the platform is used to conduct independent, impartial audits of AEDTs, calculating the required metrics to compare the tool’s outputs for different subgroups defined by protected characteristics such as race/ethnicity and sex/gender.

The Bias Audit platform calculates metrics for both standalone and intersectional groups and provides a full assessment of the AEDT at the system and deployment level (across jobs, companies, and industries). The findings are reported in a dashboard for ongoing monitoring, and the platform produces both a comprehensive internal report and the required Summary of Results, which is hosted by Holistic AI for sharing on the employer or employment agency’s website. Ongoing insight supplements the audit to prevent, detect, and mitigate any AI bias issues that develop as the tool is used or modified.

How this technique applies to the AI White Paper Regulatory Principles

More information on the AI White Paper Regulatory Principles.

Appropriate Transparency & Explainability

New York City Local Law 144 requires employers or employment agencies subject to the regulation to publish a summary of the results of the bias audit on their website. The Holistic AI Bias Audit Platform is used to produce this Summary of Results, which contains information about the type and source of data, distribution date, missing information, the presence of small sample sizes, and more, to be published on the website of the employer or employment agency using a hyperlink. The Summary of Results provides greater transparency about the tool’s capabilities and how its outputs are used, as well as about how particular individuals may fare with the tool, giving candidates and employees the opportunity to make more informed decisions about their interaction with it.

Local Law 144 also requires employers and employment agencies to provide notice of the use of an AEDT to assess candidates for employment or employees for promotion if they reside in New York City. Holistic AI’s Bias Audit package provides support for drafting the required notices, which further increases transparency.

Fairness

The Bias Audit platform calculates impact ratios based on the metrics provided by the Department of Consumer and Worker Protection (DCWP) in their rules to support the enforcement of Local Law 144. For categorical systems with binary outcomes, the metric is:

Impact Ratio = selection rate for a category / selection rate of the most selected category

where the selection rate represents the proportion of applicants in that group designated to the positive outcome.
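As a minimal sketch of this calculation (not the platform’s own implementation; the function name and toy data are hypothetical), the binary-outcome impact ratio can be computed as:

```python
from collections import Counter

def impact_ratios_binary(groups, selected):
    """Per-group selection rates divided by the highest group's
    selection rate, following the DCWP definition for binary outcomes."""
    totals = Counter(groups)
    positives = Counter(g for g, s in zip(groups, selected) if s)
    rates = {g: positives[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Toy data: group label and whether each candidate was selected.
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
selected = [1,   1,   1,   0,   1,   0,   0,   0]
ratios = impact_ratios_binary(groups, selected)
# Group A selection rate: 0.75 (highest); Group B: 0.25,
# so Group B's impact ratio is 0.25 / 0.75 ≈ 0.33.
```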

For regression systems that result in a continuous output such as a score or ranking, outputs must first be binarized according to whether they fall above or below the median score for the sample; the proportion of a group scoring above the median is known as the scoring rate. This is then used to calculate the impact ratio in accordance with the following metric:

Impact Ratio = scoring rate for a category / scoring rate of the highest scoring category

Here, categories are defined in terms of race/ethnicity (Hispanic or Latino, White, Black or African American, Native Hawaiian or Pacific Islander, Asian, Native American or Alaska Native, and two or more races) and sex/gender (male/female and optionally ‘Other’).
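The continuous-output case can be sketched in the same way (again a hypothetical illustration, not the platform’s code): binarize at the sample median, compute each group’s scoring rate, and divide by the highest scoring rate.

```python
import statistics

def impact_ratios_continuous(groups, scores):
    """Binarize continuous AEDT scores at the sample median, compute
    each group's scoring rate (share above the median), and divide by
    the highest group's scoring rate."""
    median = statistics.median(scores)
    above = [s > median for s in scores]
    rates = {}
    for g in set(groups):
        members = [a for grp, a in zip(groups, above) if grp == g]
        rates[g] = sum(members) / len(members)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Toy data: median score is 65, so Group A's scoring rate is 2/3
# and Group B's is 1/3, giving Group B an impact ratio of 0.5.
ratios = impact_ratios_continuous(
    ["A", "A", "A", "B", "B", "B"],
    [90, 80, 40, 70, 60, 50],
)
```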

The calculation of these impact ratios helps to examine the fairness of the system in terms of whether it results in equal outcomes for different groups. Based on these ratios, groups falling below the four-fifths threshold can be highlighted and used to inform recommendations and next steps for the employer or vendor.
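Flagging groups against the four-fifths threshold is then a simple filter over the computed ratios (a sketch with a hypothetical function name):

```python
def flag_four_fifths(impact_ratios, threshold=0.8):
    """Return the groups whose impact ratio falls below the
    four-fifths (80%) threshold, the conventional rule of thumb
    for potential adverse impact."""
    return {g: r for g, r in impact_ratios.items() if r < threshold}

# Using impact ratios from a prior calculation:
flagged = flag_four_fifths({"A": 1.0, "B": 0.5})
# → {"B": 0.5}
```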

Accountability & Governance

As an optional addition to the core requirements of New York City Local Law 144, the Bias Audit can also examine the internal controls associated with the AEDT in terms of efforts to prevent, detect, and mitigate bias. This establishes greater accountability for the tool and examines governance practices, allowing recommendations to be made to strengthen the fairness of the tool. The publication of the Summary of Results also creates greater accountability for the outputs of the tool.

Why we took this approach

Having a dedicated solution for compliance with New York City Local Law 144, as well as other laws with similar requirements, ensures that the requirements of the law can be fully met without compromise or complication. The platform provides an interface that brings together the information supplied by clients and the expertise of the multidisciplinary audit team, enabling audits to be conducted swiftly, independently, and impartially.

Benefits to the organisation using the technique

Holistic AI’s Bias Audit platform is an easy-to-use solution that ensures the bias audit requirements are fully met, with all of the deliverables required from employers and employment agencies provided directly to the client. Clients are able to provide information about their system and shape how it is presented in the report and summary of results, ensuring that product descriptions are accurate while protecting the integrity of the AEDT.

Limitations of the approach

Local Law 144 is a landmark piece of legislation that is the first in the world to require independent, impartial audits of AEDTs. Unsurprisingly, there are still a number of concerns about exactly who is in scope and how best to navigate and enforce the novel regulation, with complete clarity only likely to come some time after the law is enforced.

Updates to this page

Published 19 September 2023