Credo AI Policy Packs: Human Resources Startup compliance with NYC LL-144
Credo AI's Policy Pack for NYC LL-144 encodes the law’s principles into actionable requirements and adds a layer of reliability and credibility to compliance efforts.
Background & Description
In December 2021, the New York City Council passed Local Law No. 144 (LL-144), mandating that AI and algorithm-based technologies used for recruiting, hiring, or promotion be audited for bias before use.
The law also requires employers to conduct independent bias audits annually and to publicly post the results, assessing the statistical fairness of their hiring processes across race and gender.
An AI-powered HR talent-matching startup (“HR Startup”) used Credo AI’s Responsible AI Governance Platform and LL-144 Policy Pack to address this and other emerging AI regulations.
This approach allows organisations to govern automated decision-making tools used in hiring beyond NYC’s LL-144. Organisations using the Platform can map and measure bias in their systems and apply different policy packs, including custom policy packs that allow them to align with internal policies and meet regulatory requirements in different jurisdictions.
Relevant Cross-Sectoral Regulatory Principles
Safety, Security & Robustness
Under NYC LL-144, employers and employment agencies using automated employment decision tools must obtain a bias audit. The HR talent-matching startup used Credo AI’s Platform to perform a bias assessment for a tool that helps identify high-potential candidates for apprenticeship-based training and employment. This approach included defining context-driven governance requirements for the AI system, conducting technical assessments of data and models, generating governance artefacts, and providing human-in-the-loop reviews to measure performance and robustness effectively.
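As a simplified, hypothetical sketch of what such a disaggregated technical assessment can look like in practice (this is not Credo AI’s tooling, and the column names and the flagging threshold are assumptions), performance can be computed separately for each demographic group and any gaps flagged for human review:

```python
# Illustrative sketch only: disaggregated performance check for a candidate-screening model.
# Column names ("race", "label", "prediction") and the 5-point gap threshold are assumptions,
# not part of Credo AI's Platform or of NYC LL-144 itself.
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score

def disaggregated_performance(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compute accuracy and recall for each demographic group."""
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(sub),
            "accuracy": accuracy_score(sub["label"], sub["prediction"]),
            "recall": recall_score(sub["label"], sub["prediction"], zero_division=0),
        })
    return pd.DataFrame(rows)

# Hypothetical scored-candidate data
candidates = pd.DataFrame({
    "race":       ["A", "A", "A", "B", "B", "B"],
    "label":      [1, 0, 1, 1, 1, 0],
    "prediction": [1, 0, 1, 0, 1, 0],
})
report = disaggregated_performance(candidates, "race")
overall_accuracy = accuracy_score(candidates["label"], candidates["prediction"])
# Flag groups whose accuracy falls more than 5 percentage points below the overall rate
print(report[report["accuracy"] < overall_accuracy - 0.05])
```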
Appropriate Transparency & Explainability
NYC LL-144 requires organisations to publicly report their use of artificial intelligence and their compliance with the regulations, which can be a complex and time-consuming task. Credo AI’s Platform Reports enabled the HR Startup to generate a standardised LL-144 report, together with the additional custom bias results it wanted to include beyond the legal requirements.
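As a rough, hypothetical illustration of the kind of publicly postable summary such a report feeds into (the field names and values below are assumptions for illustration, not the Platform’s report schema or the statutory template), a structured summary might be assembled as follows:

```python
# Hypothetical structure for a public LL-144 bias audit summary.
# Field names and values are illustrative assumptions, not Credo AI's
# report schema or the statutory format.
import json
from datetime import date

summary = {
    "tool_name": "Candidate matching model",                  # assumed name
    "bias_audit_date": date(2023, 6, 1).isoformat(),          # assumed date
    "tool_distribution_date": date(2022, 1, 15).isoformat(),  # assumed date
    "categories_assessed": ["race/ethnicity", "sex"],
    "results": [
        # One entry per category/group with its selection rate and impact ratio
        {"category": "sex", "group": "Female", "selection_rate": 0.50, "impact_ratio": 0.63},
        {"category": "sex", "group": "Male",   "selection_rate": 0.80, "impact_ratio": 1.00},
    ],
}
print(json.dumps(summary, indent=2))
```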
Fairness
The HR Startup evaluated its models for fairness against the requirements outlined in our Policy Pack, using the recommended fairness tools provided by Credo AI Control Docs. They then uploaded the results back to the Platform, which helped them see whether their results fell within the bounds of the “four-fifths rule”.
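For context, the “four-fifths rule” compares each group’s selection rate with that of the most-selected group: an impact ratio below 0.8 is treated as a signal of potential adverse impact. The sketch below is a minimal, hypothetical illustration of that calculation (it is not Credo AI Lens or an official audit method, and the column names and data are assumed):

```python
# Minimal sketch of a "four-fifths rule" check on selection rates, assuming a
# dataframe with a protected-category column and a binary "selected" outcome.
# Illustrative only; not Credo AI's tooling and not a complete LL-144 bias audit.
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, outcome_col: str = "selected") -> pd.DataFrame:
    result = df.groupby(group_col)[outcome_col].mean().rename("selection_rate").to_frame()
    # Impact ratio = group selection rate / selection rate of the most-selected group
    result["impact_ratio"] = result["selection_rate"] / result["selection_rate"].max()
    # Four-fifths rule: an impact ratio below 0.8 signals potential adverse impact
    result["meets_four_fifths"] = result["impact_ratio"] >= 0.8
    return result

applicants = pd.DataFrame({
    "sex":      ["F", "F", "F", "F", "M", "M", "M", "M", "M"],
    "selected": [1, 0, 0, 1, 1, 1, 1, 0, 1],
})
print(impact_ratios(applicants, "sex"))
# F: selection rate 0.50, impact ratio 0.625 -> below the 0.8 threshold
# M: selection rate 0.80, impact ratio 1.000
```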
Accountability & Governance
Reporting requirements, like those stipulated in NYC LL-144, enable accountability for AI system behaviours and can help establish standards and benchmarks for what “good” looks like. Many organisations are wary of sharing results about the behaviour of their AI systems externally, because they do not know how their results might compare with those of competitors, or whether external stakeholders will judge them “good” or “bad”.
Why we took this approach
The HR Startup was able to produce a bias audit in compliance with New York City’s algorithmic hiring law by using Credo AI’s Platform and LL-144 Policy Pack. By using the Credo AI Platform to perform bias assessments and engage in third-party reviews, the talent-matching startup met NYC LL-144’s requirements and improved customer trust.
Beyond assessing organisations’ systems for LL-144 compliance, Credo AI’s human review of the assessment report identifies assessment gaps and opportunities, increasing the report’s reliability and providing additional assurance to stakeholders. This third-party review provided the HR Startup with insights and recommendations for bias mitigation and improved compliance.
Beyond NYC’s LL-144, this approach can be applied to other regulatory regimes that aim to prevent discrimination by algorithm-based or automated decision tools. For example, enterprises looking to map and measure bias across protected characteristics under the UK’s Equality Act, or to produce bias audits as part of the risk management system required under the EU AI Act, can leverage Credo AI’s Platform with custom policy packs or the EU AI Act high-risk AI system policy pack.
Benefits to the organisation using the technique
Utilising Credo AI’s Platform and the NYC LL-144 Policy Pack allowed the HR Startup to streamline the implementation of technical evaluations of their data and models, while also facilitating the creation of compliance reports with human-in-the-loop review. This process enabled the HR Startup to demonstrate their commitment to responsible AI practices to both clients and regulatory bodies, achieving full compliance with LL-144 within two months.
Furthermore, by establishing an AI Governance process, the HR Startup is able to apply additional Policy Packs to comply with other emerging regulations.
Limitations of the approach
Demographic data, such as gender, race, and disability status, is necessary for the bias assessment and mitigation of algorithms. It helps discover potential biases, identify their sources, develop strategies to address them, and evaluate the effectiveness of those strategies. However, “ground-truth” demographic data is not always available, for a variety of reasons. Many organisations do not have access to such data, leading to partial fairness evaluations; the HR Startup, however, did have access to self-reported data. While self-reported demographic data directly reflects the individual’s own perspective and self-identification, has high accuracy, is explainable, and does not require proxy data, it also has limitations. These include incomplete or unrepresentative datasets due to privacy concerns or fear of discrimination, delays in availability, and potential errors arising from social desirability bias and misinterpretation.
It is important to remember that other demographic data collection approaches, such as human annotation and algorithmic inference, also have limitations. Human-annotated demographic data relies on a human annotator’s best perception of an individual’s demographic attributes and is subject to observer bias, while algorithmically inferred demographic data can further propagate biases in training data and models and has limited explainability.
Bias and fairness assessments of algorithm-based technologies used for recruiting, hiring, or promotion can only be as good as the data that is available.
Further Links (including relevant standards)
Further AI Assurance Information
- For more information about other techniques visit the CDEI Portfolio of AI Assurance Techniques: https://www.gov.uk/ai-assurance-techniques
- For more information on relevant standards visit the AI Standards Hub: https://aistandardshub.org