Kairoi: Responsible AI Interview Questions
Case study from Kairoi.
Background & Description
It is key that staff who develop artificial intelligence (AI) tools, make decisions about tech procurement, and/or are exposed to AI in the workplace all have a keen sense of both the effective use cases and the limitations of AI systems and research. Kairoi has developed a series of interview questions to ensure that job candidates are aware of the limitations of the systems they develop, implement and/or use.
The questions are a tool for enabling systemic shifts in future workforces and organisational cultures. On the one hand, integrating questions about responsible AI at the recruitment stage ensures that staff can meaningfully engage with technological innovations. The questions assess whether, given the role they are applying for, job candidates can demonstrate their skills and knowledge of best practices in AI. On the other hand, organisations have the chance to show prospective staff that they value critical thinking with regard to AI. Prospective staff see at the interview stage that their analytical skills are welcome, encouraging a more responsible tech culture wherein staff can voice their reflections on the design, development, deployment and/or use of AI tools.
How this technique applies to the AI White Paper Regulatory Principles
More information on the AI White Paper Regulatory Principles.
Safety, Security & Robustness
The interview questions help foster responsible AI cultures by ensuring that successful job candidates are aware of – or interested in learning about – both the effective use cases and the limitations of AI systems and research.
By testing job candidates’ responsible AI readiness, employers can mitigate the risks posed by the misuse of AI tools. Amongst other things, this increases the security of proprietary, supplier and customer data, as successful job candidates will be more cautious in their use of such tools. Furthermore, the questions help select job candidates who will develop, implement and/or use AI tools in the most effective manner. This enables more rigorous approaches to AI-related practices.
Fairness
Fairness is promoted by bringing diverse perspectives into decision-making processes about AI. The interview questions have been adapted to different roles, ensuring that all job candidates have the opportunity to be assessed against relevant criteria. This is enabled by a tripartite categorisation of criteria and questions according to the role a job candidate is interested in:
- Innovators include those developing the latest AI, such as engineers, data scientists and product managers.
- Buyers are those who make decisions about procuring and implementing technologies that impact fellow staff, including HR, finance, operations and IT departments.
- End users are those who use AI technologies; this group potentially includes all employees.
By encouraging critical thinking about AI at the interview stage, organisations can empower successful job candidates to voice their perspectives when designing, developing, deploying and/or using AI tools. In turn, fairer outcomes can be achieved through multidisciplinary and cross-departmental collaboration.
Accountability & Governance
The interview questions can straightforwardly be adapted for interviewing candidates for C-suite and non-executive director (NED) roles, whether they are involved with innovation or procurement, or are simply end users. Testing the C-suite and NEDs against the relevant criteria ensures that an organisation’s strategy is future-proof, as it will be informed by a thoughtful approach to the adoption of AI technologies and their organisational governance. Moreover, this demonstrates an organisation’s leadership in responsible AI – both to external parties and to internal staff – thus furthering the impact on the AI ecosystem.
Why we took this approach
The rise of generative AI tools has led to a surge in AI-related initiatives in the workplace, whether to increase productivity or to improve the products and services on offer. As a result, recruiting staff who are ready to engage in such initiatives has become crucial. However, it is difficult to develop interview questions to assess job candidates’ AI readiness when an organisation is only beginning to think about AI. This approach ensures that recruiters and hiring managers have well-informed questions to draw from whilst also following human resources best practices.
The interview questions also introduce prospective employees to an organisation’s responsible AI culture. Successful job candidates may be trained on responsible AI during the onboarding process, they may have to adhere to internal AI-related policies, and they may be expected to contribute to the AI-related work the organisation conducts. Thus, the interview questions establish clear expectations about the organisation’s culture and allow job candidates to demonstrate their willingness to engage authentically with that culture.
The interview questions are also open to continuous improvement, as they are hosted publicly on GitHub.
Benefits to the organisation using the technique
- Recruitment criteria tailored to different departments, promoting responsible AI behaviours across the organisation
- Future-proof workforces, capable of safely adopting the latest AI technologies
- Standardised questions paired with relevant assessment criteria to ensure fair recruitment practices
- Well-informed interview questions to assess the responsible AI readiness of job candidates
- Multidisciplinary engagement in recruitment processes, as curating the questions requires the involvement of at least HR and hiring managers
- A signal of the organisation’s responsible AI readiness
Limitations of the approach
The questions are deliberately general so that they apply to many contexts. As a result, adopters need to curate them further, which may require specialist knowledge.
Further Links (including relevant standards)
Further AI Assurance Information
- For more information about other techniques visit the CDEI Portfolio of AI Assurance Tools: https://www.gov.uk/ai-assurance-techniques
- For more information on relevant standards visit the AI Standards Hub: https://aistandardshub.org/