Government Response to the House of Lords Select Committee on Artificial Intelligence
Published 22 February 2021
Presented to Parliament by the Minister for Digital and Culture by Command of Her Majesty on 22 February 2021
Command Paper Number: 390
Crown Copyright 2021
ISBN 978-1-5286-2427-5
Dear Lord Clement-Jones,
The Government is grateful for the recent report by the Lords Select Committee on Artificial Intelligence, which follows up on progress made against its recommendations and outlines the need to double down on our ambitions.
Government welcomes the positive reflections made in the report, but also takes seriously its message that there is no room for complacency. The committee will have noted that its own messaging is echoed in the AI Council’s Roadmap, the recommendations of which the Government is now considering.
As this response demonstrates, this Government is strongly committed to delivering on the power and promise of AI, including working with the AI Council to embed the recommendations of their AI Roadmap to ensure the UK retains a global leadership position in AI. But there is still much to do.
As you highlight in your report, the Government’s approach needs to focus on establishing the right arrangements between institutions: across Government and the public sector, between regulators, and with academia and industry. This approach will ensure that momentum gained over the past few years is not lost, but instead reinvigorated to drive economic recovery and prosperity across the union, and allow us to use our lead in AI to solve global challenges.
We hope you are reassured by this response.
Yours sincerely,
Caroline Dinenage MP
Minister of State for Digital and Culture
Department for Digital, Culture, Media & Sport
Amanda Solloway MP
Minister for Science, Research and Innovation
Department for Business, Energy & Industrial Strategy
Chapter 2: Living with artificial intelligence
‘Public understanding and data’ (8–18; 19, 20, 21)
19. Artificial intelligence is a complicated and emotive subject. The increase in reliance on technology caused by the COVID-19 pandemic has highlighted the opportunities and risks associated with the use of technology, and in particular, data. It is no longer enough to expect the general public to learn passively about both AI and how their data is used. Active steps must be taken to explain to the general public how AI uses their personal data. Greater public understanding is essential for the wider adoption of AI, and also to enable challenge to any organisation deploying AI in an ethically unsound manner.
20. The Government must lead the way on actively explaining how data is being used. Being passive in this regard is no longer an option. The general public are more sophisticated in their understanding of where data can and should be used and shared, and where it should not, than the Government gives them credit for. The development of policy to safeguard the use of data, such as data trusts, must pick up pace, otherwise it risks being left behind by technological developments. This work should be reflected in the National Data Strategy.
21. The AI Council, as part of its brief from Government to focus on exploring how to develop and deploy safe, fair, legal and ethical data sharing frameworks, must make sure it is informing such policy development in a timely manner, and the Government must make sure it is listening to the Council’s advice. The AI Council should take into account the importance of public trust in AI systems, and ensure that developers are developing systems in a trustworthy manner. Furthermore, the Government needs to build upon the recommendations of the Hall-Pesenti Review, as well as the work done by the Open Data Institute in conjunction with the Office for AI and Innovate UK, to develop and deploy data trusts as envisaged in the Hall-Pesenti Review.
Response:
Government fully recognises the critical importance of building public trust in AI and data technologies. This requires both creating the right environment for developing and deploying AI from a legal and regulatory standpoint – including embedding ethical principles against a consensus normative framework – and ensuring the whole of society is informed and able to take active decisions regarding its relationship to AI technologies, including how data about individuals and those around them is used to provide services. This is demonstrated by the active stance the National Data Strategy takes towards ensuring members of the public can become ‘responsible data citizens’. It is important to highlight here the work conducted by the Open Data Institute, in collaboration with the Royal Society of Arts and Luminate, on their project ‘Data About Us’ – an ethnographic study which found that people lack the vocabulary to express either concern or relative comfort regarding the use of data about them, and which added ‘behavioural’ (web browsing history; social media ‘likes’) and ‘societal’ (e.g. census data) categories to the more familiar ‘personal’ and ‘sensitive’ ones.
The AI Council has also recently published its AI Roadmap, making recommendations around improving access to data, in line with the present Liaison Committee’s recommendations:
The government should focus its plans to make more public sector data safely and securely available, being clear about which data will be increasingly available, under what conditions and on what timescale. In the private sector, while regulators have begun good work to audit AI for personal data protection compliance, more work is needed to help businesses seeking to use data for AI by creating the conditions for the deployment of suitable privacy enhancing technologies.
This should be furthered by accelerating work on translating the intent of a data sharing agreement into an actionable legal framework, and establishing guidelines for legal frameworks around different data sharing structures such as trusts, cooperatives and contracts.
Regarding data trusts, the Open Data Institute have continued their work to test principles of data stewardship through their data institutions work, revisiting their initial definition of a data trust as a legal structure that provides independent fiduciary stewardship of data, and reporting on subsequent pilot projects in September 2020. The AI Council data working group has since led a ‘Legal Mechanisms of Data Stewardship’ project, working with the Ada Lovelace Institute towards a repeatable legal framework for data governance; the project’s report is due to publish imminently.
As Secretariat to the AI Council and as part of the cohort taking this forward, the Office for AI continues to provide join-up between the AI Council and the National Data Strategy, supporting both research aimed at addressing issues of data competition (as recommended by the Furman Review of digital markets) and the recommendations made by the present committee on data trusts for public health.
There is huge untapped potential in how data is collected, managed and shared. The NDS launched a work programme to better understand and create incentives for data availability in the economy. We have completed initial research to develop our evidence base on the barriers to data availability, and the opportunities and rationale for government intervention. This will be published shortly, and key findings, as well as responses to the NDS consultation, will inform a data availability policy framework.
DCMS is also delivering a new £2.6m project to address current barriers to data sharing and improve data interoperability in order to support innovation and competition in the detection of online harms. This project will model how data interoperability and the use of data trusts can address issues of data sharing to address online harms.
‘Ethics’ (22–31; 32, 33, 34)
32. Since the Committee’s report was published, the conversation around ethics and AI has evolved. There is a clear consensus that ethical AI is the only sustainable way forward. Now is the time to move that conversation on from what the ethics are to how to instil them in the development and deployment of AI systems.
33. The Government must lead the way on the operationalisation of ethical AI. There is a clear role for the CDEI in leading those conversations both nationally and internationally. The CDEI, and the Government with them, should not be afraid to challenge the unethical use of AI by other governments or organisations.
34. The CDEI should establish and publish national standards for the ethical development and deployment of AI. National standards will provide an ingrained approach to ethical AI, and ensure consistency and clarity on the practical standards expected for the companies developing AI, the businesses applying AI, and the consumers using AI. These standards should consist of two frameworks, one for the ethical development of AI, including issues of prejudice and bias, and the other for the ethical use of AI by policymakers and businesses. These two frameworks should reflect the different risks and considerations at each stage of AI use.
Response:
Tools such as the Data Ethics Framework and the ‘Guide to Using AI in the Public Sector’, alongside other guidance, are currently available on GOV.UK to support the ethical and safe use of algorithms in the public sector. Building on this existing work on algorithmic and data ethics, the Government Digital Service will explore, in collaboration with leading organisations in the field, the development of an appropriate and effective mechanism to deliver more transparency on the use of algorithm-assisted decision making within the public sector.
Government is considering what the future functions of the Centre for Data Ethics and Innovation (CDEI) should be. As set out in the National Data Strategy, the planned future functions of the CDEI are: AI monitoring; partnership working; and piloting and testing potential interventions in the tech landscape. These are being considered as part of the NDS consultation process, alongside the recommendations made in this report and internal policy work.
The CDEI recently published a report on Bias in Algorithmic Decision Making. The report focused on the use of algorithms in significant decisions about individuals. Building on the sectoral findings, the report identifies a number of recommendations and concrete steps for government, regulators and industry to take to support responsible innovation and mitigate algorithmic bias. The Government will respond to these recommendations in due course.
‘Jobs’ (35–44; 45, 46)
45. There is no clear sense of the impact AI will have on jobs. It is however clear that there will be a change, and that complacency risks people finding themselves underequipped to participate in the employment market of the near future.
46. As and when the COVID-19 pandemic recedes and the Government has to address the economic impact of it, the nature of work will change and there will be a need for different jobs. This will be complemented by opportunities for AI, and the Government and industry must be ready to ensure that retraining opportunities take account of this. In particular the AI Council should identify the industries most at risk, and the skills gaps in those industries. A specific training scheme should be designed to support people to work alongside AI and automation, and to be able to maximise its potential.
Response:
Ensuring the population is equipped for the labour market of the future is critically important, and that is why the Government has invested in this area. The “Future of Work” is a key policy area for a number of departments across Government, and responsibility for anticipating and planning for the effects of automation sits across multiple departments (including DCMS, BEIS, DfE and MHCLG). Automation will have an impact on the shape of the labour market, with research by the Office for National Statistics suggesting around 7.5% of jobs could be at high risk of automation.[footnote 1] But the bigger picture is that AI and automation are predicted to create many more jobs than they displace: the World Economic Forum has estimated that robotics, automation and artificial intelligence (AI) will displace 75 million jobs globally between 2018 and 2022 but create 133 million new ones – a “net positive” of 58 million jobs. Automation also presents new opportunities for workers, such as remote working and supportive technology for disabled people and those with long-term health conditions, who would otherwise be excluded from the workforce.
The Government aims to support workers with the changes that come about as a result of the “fourth industrial revolution” by, for example, helping people to reskill and adapt (as we do for any other challenge that changes the labour market landscape, such as the COVID-19 crisis). The Government is also working to ensure the welfare system is adaptable enough to support people when they most need it, such as when transitioning from one job to another. The Government is clear that every person, no matter what their background, must have the opportunity to progress in the workplace; this could include changing careers, perhaps several times, during their working life.
To this effect, the Committee will be aware that in September 2020, the Prime Minister announced a major expansion of post-18 education and training to level up and prepare workers for the post-COVID economy. This includes a Lifetime Skills Guarantee to give adults the chance to take free college courses valued by employers, and a new entitlement to flexible loans to allow courses to be taken in segments, boosting opportunities to retrain and enhancing the nation’s technical skills. As part of this, the Government is committed to making higher education more flexible to facilitate lifelong learning, and to make it easy for adults and young people to break up their study into segments, transfer credits between colleges and universities, and enable more part-time study. There is also an increased emphasis on higher technical qualifications.
The challenges of responding to COVID-19 have accelerated a move towards digital ways of working, including automation and AI, that was already under way in the workplace. Talking of the impact of the pandemic on the economy, the Chancellor has said that ‘we…cannot save every job. What we can do is give people the skills to find and create new and better jobs.’
Earlier this year the Government launched its free online Skills Toolkit, helping people train in digital and numeracy skills; this is now being expanded to include 62 additional courses. £2.5 billion is also being made available through the National Skills Fund to help get people working again after COVID-19, as well as giving those in work the chance to train for higher-skilled, better-paid jobs. These interventions complement broad measures developed to manage disruptions to the labour market, including but not limited to the Plan for Jobs initiatives, the National Skills Fund (DfE), and the regional devolution of labour market decisions in England (to Mayoral Combined Authorities).
In digital skills, DCMS is currently delivering the Fast Track Digital Workforce Fund, a £3 million programme within the Greater Manchester Combined Authority and Lancashire LEP areas to boost digital skills training. The Fund encourages employers and training providers to form partnerships to co-design and co-deliver short, bespoke skills courses that match employers’ needs, and is supporting skills such as cyber security, data science, software development and digital marketing. The Government has in addition already committed £8 million for digital skills boot camps, expanding successful pilots in Greater Manchester and the West Midlands and introducing programmes in four new locations.
In 2020, the Government also announced AI apprenticeships, ensuring everyone – whether a young person leaving school or someone who wants to retrain or change career – can gain the training and qualifications they need to enter the job market, and that employers can access the skills they need to make the country economically strong and globally competitive. Apprenticeships are already helping people from all walks of life to progress in their careers: after finishing an apprenticeship, 90% of apprentices go on to employment or further training, with 88% finding sustained employment.
The AI Council’s Roadmap also makes Skills and Diversity a central pillar of its recommendations. Following up on these recommendations would build on the work conducted by the Government through the Office for AI in conjunction with the Office for Students, the Institute of Coding and universities across all four regions. This is delivering 1,000 more PhDs at 16 Centres for Doctoral Training, 100 industry-funded Masters courses, and 2,500 AI conversion courses with 1,000 scholarships for people from underrepresented groups. The Government is considering the recommendations in the AI Council’s Roadmap.
‘Public Trust and regulation’ (47–59; 60, 61)
60. The challenges posed by the development and deployment of AI cannot currently be tackled by cross-cutting regulation. The understanding by users and policymakers needs to be developed through a better understanding of risk and how it can be assessed and mitigated. Sector-specific regulators are better placed to identify gaps in regulation, and to learn about AI and apply it to their sectors. The CDEI and Office for AI can play a cross-cutting role, along with the ICO, to provide that understanding of risk and the necessary training and upskilling for sector specific regulators.
61. The ICO must develop a training course for use by regulators to ensure that their staff have a grounding in the ethical and appropriate use of public data and AI systems, and its opportunities and risks. It will be essential for sector specific regulators to be in a position to evaluate those risks, to assess ethical compliance, and to advise their sectors accordingly. Such training should be prepared with input from the CDEI, Office for AI and Alan Turing Institute. The uptake by regulators should be monitored by the Office for AI. The training should be prepared and rolled out by July 2021.
Response:
Government is considering what the future functions of the Centre for Data Ethics and Innovation (CDEI) should be. These are being considered as part of the NDS consultation process, alongside the recommendations made in this report and internal policy work.
The CMA, the ICO and Ofcom have together formed a Digital Regulation Cooperation Forum (DRCF) to support regulatory coordination in digital markets and cooperation on areas of mutual importance, and enable coherent, informed and responsive regulation of the UK digital economy which serves citizens and consumers and enhances the global impact and position of the UK.
The Office for AI, the CDEI, the ICO and other regulators also sit on a larger Regulators and AI working group, comprising 32 regulators and other organisations. This forum will be used to discuss how to take forward the recommendations made in your report, forming a special sub-group chaired by the ICO with active membership from the CDEI, the Office for AI, the Alan Turing Institute and key regulators. The sub-group will identify gaps, consider training needs and make recommendations.
This may result in the ICO developing a training course as envisaged by recommendation 61, but the course of action will only be determined following consideration of, and consultation on, regulators’ needs in this area. Once such a course is created, its assurance and uptake will be tracked by regulators and reported back to the working group.
In regulatory spaces where misuse of AI is of concern, the Government is pressing ahead with legislation to establish a new online harms regulatory framework to make the UK the safest place to be online. As the Online Harms White Paper outlines, we intend to establish in law a new duty of care on companies towards their users, backed by an independent regulator. The regulatory framework will apply to all services that host user-generated content or enable user interaction, such as liking and sharing. Online media and digital literacy can equip users to spot dangers online, and the White Paper sets out our ambition towards an online media literacy strategy.
The strategy will ensure a coordinated and strategic approach to online media literacy education and awareness for children, young people and adults. It will complement existing initiatives, including the work DfE is leading on ensuring that schools are equipped to teach online safety and digital literacy.
The strategy will support users in managing their privacy settings and their ‘online footprint’. And it will help them think critically about things they might come across online, like disinformation or catfishing, and how terms of service and moderating processes can be used to address harmful content.
Chapter 3: Leading on artificial intelligence
71. We commend the Government for its work to date in establishing a considered range of bodies to advise it on AI over the long term.
72. However we caution against complacency. There must be more and better coordination, and it must start at the top. A Cabinet Committee must be established whose terms of reference include the strategic direction of Government AI policy and the use of data and technology by national and local government.
73. The first task of the Committee should be to commission and approve a five year strategy for AI. Such a strategy should include a reflection on whether the existing bodies and their remits are sufficient, and the work required to prepare society to take advantage of AI rather than be taken advantage of by it.
Response:
The Government recognises the significant potential implications of Artificial Intelligence (AI) for society and the economy; the need to address the social, ethical and legal questions the technology raises; and to protect the public and to build confidence in UK developments in this sector.
The responsibility for AI policy and for driving uptake across the economy is split across ministers in the Department for Digital, Culture, Media & Sport (DCMS) and the Department for Business, Energy and Industrial Strategy (BEIS). The responsibility for uptake across Government lies with the Government Digital Service, answering to the Minister for Implementation in the Cabinet Office. Ultimately, the Government wants all departments to understand the benefits of AI for their own work and the sectors they work with, as has happened with NHSX, the digital innovation team in the NHS. This shared responsibility across the whole of Government ensures that the benefits of AI can be realised across wider government and agencies, from upskilling our workforce to the uptake of AI innovation.
The AI Council’s Roadmap includes a recommendation that the UK should have a written national AI strategy, which the Government is currently considering. Such a strategy would include considerations of governance, including at Government department and Cabinet committee level.
‘Chief Data Officer’ (74–78; 79)
79. The Government must take immediate steps to appoint a Chief Data Officer, whose responsibilities should include acting as a champion for the opportunities presented by AI in the public service, and ensuring that understanding and use of AI, and the safe and principled use of public data, are embedded across the public service.
Response:
On 12 January, Alex Chisholm, Chief Operating Officer for the Civil Service and Permanent Secretary for the Cabinet Office, announced the Government’s appointment of three senior Digital, Data and Technology (DDaT) leaders: Paul Willmott will chair a new Central Digital and Data Office (CDDO) for the Government; Joanna Davinson has been appointed Executive Director of the CDDO; and Tom Read has been appointed CEO of the Government Digital Service.
The new leadership officially join in February 2021 and are reviewing the overall digital and data programme for government. Refreshed DDaT governance structures will be considered as part of this, to ensure appropriate leadership and accountability of the government’s work on data.
‘Autonomy Development Centre’ (80–82; 83)
83. We believe that the work of the Autonomy Development Centre will be inhibited by the failure to align the UK’s definition of autonomous weapons with international partners: doing so must be a first priority for the Centre once established.
Response:
We agree that the UK must be able to participate in international debates on autonomous weapons, taking an active role as moral and ethical leader on the global stage, and we further agree the importance of ensuring that official definitions do not undermine our arguments or diverge from our allies.
The Committee is aware that definitions relating to autonomous systems – and to so-called Lethal Autonomous Weapon Systems (LAWS) in particular – are challenging. In recent years the MOD has subscribed to a number of definitions of autonomous systems, principally to distinguish them from unmanned or automated systems, and not specifically as the foundation for an ethical framework. On this aspect, we are aligned with our key allies. Most recently, the UK accepted NATO’s latest definitions of “autonomous” and “autonomy”, which are now in working use within the Alliance. The Committee should note that these definitions refer to broad categories of autonomous systems, and not specifically to LAWS. To assist the Committee we have provided a table setting out UK and some international definitions of key terms.
The term LAWS itself is not used consistently: some parties use it to refer to weapon systems that operate without meaningful human control; others use it to refer to weapons which operate with some degree of autonomy. The definition of such a system is therefore both technically complicated and highly subjective. The MOD does not have an operative definition of LAWS and there is similarly no international agreement on the definition or characteristics of LAWS. The UK considers that the provisions of international law – particularly International Humanitarian Law (IHL) – and the robust extant regulatory frameworks which apply to the development of weapons systems in Defence are appropriate to govern emerging technologies in this area.
We are, however, increasingly aligned with like-minded allies in terms of the outcomes we seek in our approach to responsible AI for Defence. The UK is a founder member of the US-led AI Partnership for Defence, created to “provide values-based global leadership in defense for policies and approaches in adopting AI.” The UK is also a prominent voice at discussions of this issue at the UN Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) on LAWS, an international forum which brings together expertise from states, industry, academia and civil society. The GGE is yet to achieve consensus on an internationally accepted definition and there is therefore no common standard against which to align.
The MOD is preparing to publish a new Defence AI Strategy and will continue to review definitions as part of ongoing policy development in this area.
An AI centre within Defence will accelerate the research, development, testing, integration and deployment of world-leading AI – much of which will be used in information, logistics, ISTAR or otherwise ‘non-weapon’ systems. Nevertheless, as part of this work, Defence will continue to be proactive in addressing ethical issues surrounding the development and use of AI for military purposes.
| Organisation | Automated system definition | Autonomous system definition | Autonomous weapon system definition |
| --- | --- | --- | --- |
| UK | See NATO definition. | See NATO definition. | No operative definition. |
| NATO | A system that, in response to inputs, follows a predetermined set of rules to provide a predictable outcome. | A system that decides and acts to accomplish desired goals, within defined parameters, based on acquired knowledge and an evolving situational awareness, following an optimal but potentially unpredictable course of action. | |
| USA | See NATO definition. | See NATO definition. | A weapon system that, once activated, can select and engage targets without further intervention by a human operator. |
| France | See NATO definition. | See NATO definition. | [Systems with] a total absence of human supervision, meaning there is absolutely no link (communication or control) with the military chain of command…targeting and firing a lethal effector without any kind of human intervention or validation. |
| The Non-Aligned Movement (a forum of non-aligned states which speaks collectively at UN discussions on LAWS) | No definition. | No definition. | Weapons that can autonomously select and engage a target, also known as its critical functions, without the direct control or supervision of a human. |
| International Committee of the Red Cross | No definition. | No definition. | Any weapon system with autonomy in its critical function – that is, a weapon system that can select (search for, detect, identify, track or select) and attack (use force against, neutralise, damage or destroy) targets without human intervention. |
‘The United Kingdom as a world leader’ (95, 96)
95. The UK remains an attractive place to learn, develop, and deploy AI. It has a strong legal system, coupled with world-leading academic institutions, and industry ready and willing to take advantage of the opportunities presented by AI. We also welcome the development of the Global Partnership on Artificial Intelligence and the UK’s role as a founder member.
96. It will however be a cause for great concern if the UK is, or is seen to be, less welcoming to top researchers, and less supportive of them. The Government must ensure that the UK offers a welcoming environment for students and researchers, and helps businesses to maintain a presence here. Changes to the immigration rules must promote rather than obstruct the study, research and development of AI.
Response:
Attracting and retaining top international talent is of paramount importance. On 27 January, a new fast-track visa scheme to attract the world’s top scientists, researchers and mathematicians was announced, opening on 20 February. This followed a commitment by the Prime Minister last summer to put science, research and innovation at the top of the government’s agenda. The bespoke Global Talent route has no cap on the number of people able to come to the UK, demonstrating the government’s commitment to supporting top talent.
The Global Talent route replaces the Tier 1 (Exceptional Talent) route and, for the first time, UK Research and Innovation (UKRI) will endorse applicants from the scientific and research community. The route provides a brand new fast-track scheme, managed by UKRI, which enables UK-based research projects that have received recognised prestigious grants and awards – including from the European Space Agency and the Japan Science and Technology Agency – to recruit top global talent, benefiting higher education institutions, research institutes and eligible public sector research establishments, with endorsed individuals fast-tracked to the visa application stage. The route also doubles the number of eligible fellowships, such as Marie Skłodowska-Curie Actions, European Research Council grants and Human Frontier Science Programme fellowships, which likewise enable individuals to be fast-tracked.
The new scheme will continue to ensure dependants have full access to the labour market, preserve the route’s flexibility by not requiring an individual to hold an offer of employment before arriving or tying them to one specific job, and provide an accelerated path to settlement for all scientists and researchers who are endorsed on the route. It will provide an exemption from our absences rules for researchers, and their dependants, where they are required overseas for work-related purposes, ensuring they are not penalised when they apply for settlement.
The Government has announced that the UK will associate to Horizon Europe, the EU research and innovation programme that will run from 2021 to 2027.
Association will give UK scientists, researchers and businesses access to funding under the programme on equivalent terms as organisations in EU countries.
The US and UK governments signed the ‘Declaration of the United States of America and the United Kingdom Cooperation in Artificial Intelligence’, also known as the ‘Statement of Intent’, on 25 September 2020 during a meeting of the US-UK Special Relationship Economic Working Group.
The Statement provides a solid basis for future collaboration between the US and the UK in the important area of Artificial Intelligence: a field where the US and the UK are joint pioneers at the frontier of research. It will drive further academic collaboration, such as student exchanges, between research institutions on both sides of the Atlantic.
In Autumn 2019, the Alan Turing Institute announced the appointment of the first five new and highly talented Turing AI Fellows.
The second phase of the investment is being led by UKRI, though the Alan Turing Institute remains a key partner. This UKRI-led phase of the Turing AI Fellowships consists of two separate programmes:
- Turing AI Acceleration Fellowships – intended to accelerate the careers of high-potential researchers towards a world-leading position by the end of their fellowship.
- Turing AI World-Leading Researcher Fellowships – focused on recruiting and retaining world-leading AI researchers in order to establish centres of excellence and build critical mass in AI research in the UK.
On 27 November 2020, the Government announced a £20 million government investment to deliver Turing AI Acceleration Fellowships. These were described as giving ‘15 of the UK’s top AI innovators the resources to drive forward their ground-breaking research from speeding up medical diagnosis to increasing workplace productivity. These pioneering projects could enable the UK to meet some of today’s most pressing challenges, such as reducing carbon emissions, while helping to transform industries across the UK economy, including healthcare, energy and transport.’
UKRI is currently working with the Alan Turing Institute and the Office for AI to deliver funding to a new cohort through the ‘Turing AI World-Leading Researcher Fellowship’ programme. We know that we cannot rest on our laurels, but there are indications that the Government’s approach, and the health of the UK AI ecosystem overall, are working to attract top talent. A recent (2021) publication from Oxford’s Future of Humanity Institute indicated that:
- The UK is considered the second most likely destination for AI researchers to work in over the next three years, with 35% choosing the UK; the US ranks first with 58%.
- The two most important motivating factors for AI researchers’ choice of work destination are “professional environment and work opportunities” (over 90%) and “lifestyle and culture” (79%).
- Nearly 70% of AI researchers based in the US said that legal and visa immigration issues were a serious problem for working in the country. 44% said the same of the UK.[footnote 2]
1. As of March 2019, prior to the impact of the COVID-19 pandemic.
2. The Immigration Preferences of Top AI Researchers: New Survey Evidence, University of Oxford Future of Humanity Institute and Perry World House, University of Pennsylvania.