FairNow: NYC Bias Audit With Synthetic Data (NYC Local Law 144)
Background & Description
New York City’s Local Law 144 has been in effect since July 2023 and was the first law in the US to require bias audits of AI tools used by employers and employment agencies in hiring or promotion decisions. Under the law, in-scope employers and employment agencies must enlist an independent auditor to conduct a disparate impact analysis by race, gender, and intersectional categories thereof. This type of analysis typically requires historical data, but where sufficient historical data is not available (for example, because an AI tool has not yet launched, or because the data is otherwise unavailable), the law allows test data to be used.
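To illustrate what a disparate impact analysis of this kind involves, the sketch below computes each group’s selection rate and its impact ratio (the group’s rate divided by the most favoured group’s rate), including intersectional categories. It is a minimal, hypothetical Python example: the column names, sample data and calculation shown are illustrative assumptions rather than FairNow’s implementation or the statutory methodology.

```python
# Illustrative sketch of an impact-ratio calculation of the kind an LL144-style
# disparate impact analysis involves. This is NOT FairNow's implementation:
# the column names, sample data and 'selected' outcome are hypothetical.
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, outcome_col: str = "selected") -> pd.Series:
    """Selection rate of each group divided by the most favoured group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Hypothetical candidate outcomes from an AI screening tool.
candidates = pd.DataFrame({
    "race":     ["White", "White", "Black", "Black", "Asian", "Asian"],
    "gender":   ["Male", "Female", "Male", "Female", "Male", "Female"],
    "selected": [1, 1, 0, 1, 1, 0],
})

# Intersectional categories are formed by combining race and gender.
candidates["race_and_gender"] = candidates["race"] + " / " + candidates["gender"]

for col in ("race", "gender", "race_and_gender"):
    print(impact_ratios(candidates, col))
```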
FairNow’s synthetic bias evaluation technique creates synthetic job resumes that reflect a wide range of jobs, specialisations and job levels, so that organisations can conduct a bias assessment with data that reflects their candidate pool. The synthetic resumes are constructed from templates in which attributes signalling a given race and gender are varied, while the attributes related to the candidate’s capability to do the job successfully are held identical. Because of this construction, any differences in model scores can be attributed to the candidate’s demographic attributes.
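To make the template construction concrete, the hypothetical sketch below holds every job-relevant field constant and varies only an attribute that signals race and gender (here, a name proxy), so each template yields one otherwise-identical resume per demographic group. It is a simplified illustration rather than FairNow’s actual generator, and the template, names and groups are invented.

```python
# Illustrative sketch of template-based synthetic resume generation. This is
# NOT FairNow's actual generator: the template, fields, name proxies and
# demographic groups below are invented for illustration.
from dataclasses import dataclass

@dataclass
class SyntheticResume:
    race: str
    gender: str
    text: str

TEMPLATE = (
    "Name: {name}\n"
    "Target role: {role} ({level})\n"
    "Experience: {years} years in {specialisation}\n"
    "Skills: {skills}"
)

# Hypothetical attributes used to signal race and gender; a real audit would
# vary a richer set of demographic signals.
NAME_PROXIES = {
    ("White", "Female"): "Emily Walsh",
    ("White", "Male"):   "Greg Baker",
    ("Black", "Female"): "Lakisha Robinson",
    ("Black", "Male"):   "Jamal Washington",
}

def generate(role: str, level: str, specialisation: str, years: int, skills: str):
    """Yield one resume per demographic group, identical in every job-relevant field."""
    for (race, gender), name in NAME_PROXIES.items():
        yield SyntheticResume(
            race=race,
            gender=gender,
            text=TEMPLATE.format(
                name=name, role=role, level=level,
                specialisation=specialisation, years=years, skills=skills,
            ),
        )

for resume in generate("Data Analyst", "Senior", "healthcare analytics", 7,
                       "SQL, Python, Tableau"):
    print(resume.race, resume.gender)
    print(resume.text, "\n")
```

Scoring each of these resumes with the AI tool under audit and comparing the scores across groups then gives the disparate impact analysis described above.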
This approach can also be extended to bias testing beyond the NYC LL144 audit requirements. FairNow has leveraged this method to evaluate a leading HR recruitment software provider’s AI for bias by disability status and gender identity.
How this technique applies to the AI White Paper Regulatory Principles
Safety, Security & Robustness
With FairNow’s synthetic data audit capabilities, organisations that use AI tools to support employment decisions can detect potential bias even when sufficient real-world data is unavailable, or when a company does not wish to share certain data with an external auditor. By identifying potential issues early, users can prevent the deployment of biased AI before it causes harm to job candidates and workers.
Appropriate Transparency & Explainability
FairNow’s bias testing – including evaluations using synthetic data – provides transparency into areas of potential bias in AI tools used for hiring, promotion, and worker management. Supplemental explainability testing available on FairNow’s platform can help organisations quickly pinpoint potential drivers of any differences.
Fairness
FairNow’s synthetic bias audit capabilities allow users to detect bias in their AI across dimensions such as gender, race, gender identity, disability status, and more. Leveraging FairNow’s platform, users can conduct regular testing and monitoring to proactively address any issues.
Accountability & Governance
Bias assessments are a critical component of AI risk management and mitigation, and are referenced in multiple laws globally. On the FairNow platform, bias testing and monitoring can be automated, with key stakeholders in AI governance alerted automatically if findings fall outside threshold ranges.
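As a rough sketch of how such automated monitoring might be wired up (an assumption about the mechanics rather than the FairNow platform’s API), a scheduled check could compare the latest audit’s impact ratios against a configured threshold and alert governance stakeholders about any group that falls below it.

```python
# Illustrative sketch of threshold-based monitoring and alerting. This is NOT
# the FairNow platform's API: the 0.8 threshold (echoing the common four-fifths
# rule of thumb), the sample ratios and the notify() helper are hypothetical.
def groups_below_threshold(ratios: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Return the demographic groups whose impact ratio falls below the threshold."""
    return [group for group, ratio in ratios.items() if ratio < threshold]

def notify(stakeholders: list[str], flagged: list[str]) -> None:
    """Stand-in for an alerting integration (email, ticket, dashboard, etc.)."""
    for person in stakeholders:
        print(f"ALERT to {person}: impact ratio below threshold for {', '.join(flagged)}")

latest_audit = {"White / Male": 1.00, "Black / Female": 0.72, "Asian / Female": 0.91}
flagged = groups_below_threshold(latest_audit)
if flagged:
    notify(["ai-governance-lead@example.com"], flagged)
```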
Why we took this approach
This approach addresses several of the most significant pain points companies face when conducting bias audits on their data.
First, companies often lack the historical data needed to conduct a bias audit. This could be because they haven’t yet launched the AI and so have no data to draw on, because they have some data but not a large enough sample for statistically meaningful results, or because their demographic data collection is sparse.
Second, companies may have thin data on a particular segment or subtype of customers that they’d like to understand better. Our approach can enable organisations to test for potential bias even where actual data is not available.
Benefits to the organisation using the technique
Our approach solves many of the data-related problems that companies face when they look to test for bias. Another benefit is privacy: the organisation does not need to share confidential applicant data with a third party because the data used for this bias audit is synthetic. This saves significant time and effort in procurement and privacy workflows and reduces privacy risks.
Limitations of the approach
Because the data used for this audit is synthetically constructed, it may lack some of the nuance and variability seen in real-world job application data. Additionally, while the synthetic data can be customised to the organisation’s applicant pool, it may lag real-world shifts in applicant distributions or types.