IFOW: Good Work Algorithmic Impact Assessment

Case study from IFOW.

Background & Description

Artificial intelligence (AI) and algorithmic systems are increasingly used in workplace settings. They are being designed, developed and deployed in ways which can transform people’s access to work, the conditions under which they work and the quality of the work they are employed to do.

When well designed, these technologies offer new opportunities to increase efficiency, augment capacity and drive growth. But this transformation is also driving a wide range of social, psychological and material impacts. Whether the question is how their rights are respected, how their working conditions are likely to change, or how their interests are balanced with those of the business, workers need confidence that these systems are being used fairly and transparently.

Developed by IFOW and supported by the UK Information Commissioner’s Office (ICO) Grants Programme, the Good Work Algorithmic Impact Assessment provided here is designed to help employers and engineers to involve workers and their representatives in the design, development and deployment of algorithmic systems. Doing so will mean that risks are anticipated and managed, ‘good work’ is promoted, the law is complied with, innovative approaches are unlocked and trust in technology is built.

As a complement to this guidance, the Institute for the Future of Work has produced two resources to help improve accessibility and understanding of the ways in which algorithmic systems can impact work.

First, the Good Work Charter Toolkit identifies ten dimensions of ‘good work’ and outlines the main legal and ethical frameworks that apply.

Second, Understanding AI at Work provides accessible explanations of how the impacts of AI at work are shaped by human choices in its design, development and deployment.

Together with the Good Work Algorithmic Impact Assessment, these resources will help employers assess the wide range of impacts that AI and other algorithmic systems may have on Good Work.

Relevant Cross-Sectoral Regulatory Principles

Safety, Security and Robustness

The Good Work Algorithmic Impact Assessment (GWAIA) builds in a feedback loop at the point of deployment to ensure that impacts on safety, both material and psycho-social, are identified throughout the lifetime of a system. Through a socio-technical approach, the tool also helps to identify risks before deployment and draws in relevant expertise to aid risk forecasting.

Appropriate Transparency and Explainability

The GWAIA provides a set of informed questions for employers, as accountable agents, to ask during procurement, ensuring they hold adequate information to make robust and accountable choices about the use of automated decision-making in human resource management. This set of disclosures from a provider (or recorded design choices by an employer creating the tool internally) is documented alongside socio-technical considerations such as the approach being taken to deployment.
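To illustrate what such documentation might look like in practice, the sketch below records provider disclosures as a simple structured object. This is not part of the published GWAIA: the field names and example values are hypothetical, chosen only to show how procurement answers could be captured alongside deployment context.

```python
from dataclasses import dataclass, field

@dataclass
class ProviderDisclosure:
    """Hypothetical record of a provider's answers to GWAIA-style
    procurement questions (fields are illustrative, not taken from
    the published assessment)."""
    system_name: str
    provider: str                    # or "internal" if built in-house
    decision_scope: str              # e.g. "shift allocation", "performance scoring"
    training_data_summary: str       # what data the model was built on
    fully_automated: bool            # does it decide without human review?
    human_oversight: str             # how and where a human can intervene
    known_limitations: list[str] = field(default_factory=list)
    deployment_notes: str = ""       # socio-technical context of the roll-out

# Example: documenting a rota-planning tool ahead of procurement
record = ProviderDisclosure(
    system_name="RotaPlanner",
    provider="ExampleVendor Ltd",
    decision_scope="shift allocation",
    training_data_summary="Two years of anonymised rota and availability data",
    fully_automated=False,
    human_oversight="Line manager reviews and can override every proposed rota",
    known_limitations=["No fairness testing across part-time contracts"],
)
print(record)
```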

The ability of employers to do this in practice is shaped by the disclosure practices of the Software as a Service (SaaS) market. Conversely, as an adopted process, the GWAIA can itself act to 'shape' markets via the procurement practices of responsible employers.

The process also requires technical audits to be produced with sufficient description and analysis that they can then be scrutinised by workers. (We understand that practical implementation of this framework will deepen our future understanding of meaningful explainability.)

In this process, decisions about the design, development and deployment of a system go on to inform a proportionality assessment regarding the completion of the wider Algorithmic Impact Assessment. If proceeding to a full assessment, the GWAIA itself requires transparent documentation about the process of workforce involvement, ex-ante risk assessment, forms of mitigation and the establishment of adequate ongoing monitoring infrastructures.
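Again as a purely illustrative sketch (the assessment itself is a prose process, and these field names are our own invention), the documentation that a full assessment calls for could be gathered into a single record like this:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """One ex-ante risk identified during the assessment."""
    description: str          # e.g. "scoring may disadvantage part-time staff"
    good_work_dimension: str  # Good Work Charter dimension affected
    mitigation: str           # agreed safeguard or design change
    monitoring: str           # how the risk is tracked after deployment

@dataclass
class AssessmentRecord:
    """Hypothetical bundle of the documentation a full GWAIA requires."""
    proportionality_rationale: str    # why a full assessment was (or was not) triggered
    workforce_involvement: list[str]  # who was consulted, when and how
    ex_ante_risks: list[Risk] = field(default_factory=list)
    monitoring_plan: str = "reviewed at the quarterly workplace forum"
```

Keeping the proportionality rationale, workforce involvement, risks, mitigations and monitoring plan in one place mirrors the transparency requirement described above, whatever concrete format an organisation chooses.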

Fairness

IFOW has produced significant early research and analysis into the risks to equality presented by the use of machine learning in employment decisions spanning recruitment, hiring and employment, as well as limited forms of technical auditing. This work, captured in the [Mind the Gap report of the Equality Task Force](https://www.ifow.org/publications/mind-the-gap-the-final-report-of-the-equality-task-force) and shared directly with the CDEI steering group to support the CDEI Bias Review (on which IFOW was represented), led us to the model of Algorithmic Impact Assessment.

Our continued research into the application of these tools within the workplace then led us to the view that, as these systems are deployed, fairness should be considered across a wider range of principles than discrimination alone.

For this reason, our Good Work Charter extends to consider all aspects of Good Work as a set of ethico-legal principles in advance of the deployment of an algorithmic system.

We welcome the proposal to consider what types of fairness apply within a given regulatory domain, but also highlight that work is a specific context: it spans sectors and is governed by a common set of laws, not all of which are covered by regulators.

Accountability and Governance

The GWAIA supports organisations to think through accountability in various ways:

  • Proposing relevant ‘accountable agents’ who could form an internal oversight body to think about the prospective impacts of algorithmic systems on the workforce.

  • Documenting the range of hard and soft regulatory requirements relating to work, via our Good Work Charter Toolkit.

  • Ensuring that choices made in the design, development and deployment of algorithmic systems are documented.

  • Proposing methodologies through which accountability can be paired with the development of trust, via the involvement of impacted groups.

  • Ensuring that there is an ongoing architecture for contestability and redress, as well as for the continued evolution of the system's design.

  • Creating a framework for documenting the ways in which systems have been reviewed, and where appropriate redesigned to mitigate risks.

Contestability and Redress

The GWAIA encourages firms to establish and resource the creation of a dedicated workplace forum for responsible AI.

This should include both accountable agents within an organisation who hold formal accountability for compliance with the law, and representatives of the workforce who are using these new technologies, have engaged in the process of a GWAIA, and/or the co-design of new tools.

These dedicated workplace forums support the ongoing monitoring of dynamic systems in practice and create arenas for contestability and redress.

Why we took this approach

The Government’s AI White Paper invites a context-specific approach which, rather than assigning rules or risk levels to entire sectors or technologies, instead seeks to regulate based on the outcomes AI is likely to generate in particular applications. This requires a granular, localised, and socio-technical approach, which is well served by the design of the Good Work Algorithmic Impact Assessment.

Our model, which sees employers undertake a responsible innovation process in advance of deploying new algorithmic systems in the service of human resource management, aligns with the Government's ambition to support rather than stifle innovation, and to avoid placing disproportionate burdens on business and regulators by advancing more conscious innovation within firms. As we support the implementation and iteration of this work through the Sandbox, we also practically demonstrate the principle of advancing a collaborative approach, one in which government, regulators and industry work together to facilitate AI innovation, build trust and ensure that the voice of the public is heard and considered.

Benefits to the organisation using the technique

This tool serves two functions.

The first is to support organisations to use the introduction of new technology as a mechanism to deliver better work. Poor job quality is associated with higher absenteeism, increased health problems, more health-related early retirement, and elevated turnover rates. All of these impede productivity and, in turn, firm performance. A good working environment is not only welfare-enhancing but also economically efficient. The introduction of new technologies presents a unique challenge, but also an opportunity for a redesign of work that is better for employees and firms.

The second function is to support organisations that design, procure or deploy an algorithmic system which makes (or informs) decisions about access to work, or about terms and conditions of work (including pay, promotion, work allocation, evaluation of performance, or discipline), to do so in accordance with the Government's AI Principles and the wider cohort of workplace-specific law.

Limitations of the approach

This socio-technical process incorporates technical auditing; however, specifications for selecting a good technical auditor are not provided in the report.

Toolkit: The Good Work Charter

Toolkit: Understanding AI at Work


Updates to this page

Published 12 December 2023