Safety of advanced AI under the spotlight in first ever independent, international scientific report
The first iteration of the International Scientific Report on the Safety of Advanced AI has been published today.
- International AI Safety Report brings together latest science on capabilities and risks of advanced AI
- interim report to inform discussions as UK and Republic of Korea prepare to co-host AI Seoul Summit next week
- evidence and input from range of contributors to inform final report, expected by the end of 2024
New research supported by over 30 nations, as well as representatives from the EU and the UN, shows the impact AI could have if governments and wider society fail to deepen their collaboration on AI safety, as the first iteration of the International Scientific Report on the Safety of Advanced AI is published today. Commissioned at the AI Safety Summit, the report was one of the key commitments to emerge from the Bletchley Park discussions, forming part of the landmark Bletchley Declaration.
Initially launched as the State of Science report last November, the report unites a diverse global team of AI experts – including an Expert Advisory Panel drawn from 30 leading AI nations around the world, as well as representatives of the UN and the EU – to bring together the best existing scientific research on AI capabilities and risks. The report aims to give policymakers across the globe a single source of information to inform their approaches to AI safety.
Today’s report recognises that advanced AI can boost wellbeing and prosperity and drive new scientific breakthroughs – many of which have already been seen in fields including healthcare, drug discovery, and tackling climate change. But it notes that, like all powerful technologies, current and future developments could result in harm. For example, malicious actors can use AI to spark large-scale disinformation campaigns, fraud, and scams. Future advances could also pose wider risks, including labour market disruption, economic power imbalances, and inequalities.
However, the report highlights a lack of universal agreement among AI experts on a range of topics, including both the state of current AI capabilities and how these could evolve over time. It also explores differing opinions on the likelihood of extreme risks to society, such as large-scale unemployment, AI-enabled terrorism, and a loss of control over the technology. Experts broadly agree that improving our understanding must be a priority, and the future decisions of societies and governments will ultimately have an enormous impact.
Secretary of State for Science, Innovation, and Technology, Michelle Donelan said:
AI is the defining technology challenge of our time, but I have always been clear that ensuring its safe development is a shared global issue. When I commissioned Professor Bengio to produce this report last year, I was clear it had to reflect the enormous importance of international cooperation to build a scientific evidence-based understanding of advanced AI risks. This is exactly what the report does.
Building on the momentum we created with our historic talks at Bletchley Park, this report will ensure we can capture AI’s incredible opportunities safely and responsibly for decades to come.
The work of Yoshua Bengio and his team will play a substantial role in informing our discussions at the AI Seoul Summit next week, as we continue to build on the legacy of Bletchley Park by bringing the best available scientific evidence to bear in advancing the global conversation on AI safety.
This interim publication focuses on advanced ‘general-purpose’ AI. This includes state-of-the-art AI systems which can produce text and images and make automated decisions. The final report is expected to be published in time for the AI Action Summit, which is due to be hosted by France, and will take on evidence from industry, civil society, and a wide range of representatives from the AI community. This feedback will ensure the report keeps pace with the technology’s development, being updated to reflect the latest research and expanded into a range of other areas to provide a comprehensive view of advanced AI risks.
International Scientific Report on the Safety of Advanced AI Chair, Professor Yoshua Bengio, said:
This report summarizes the existing scientific evidence on AI safety to date, and the work led by a broad swath of scientists and panel members from 30 nations, the EU and the UN over the past six months will now help inform the next chapter of discussions of policy makers at the AI Seoul Summit and beyond.
When used, developed and regulated responsibly, AI has incredible potential to be a force for positive transformative change in almost every aspect of our lives. However, because of the magnitude of impacts, the dual use and the uncertainty of future trajectories, it is incumbent on all of us to work together to mitigate the associated risks in order to be able to fully reap these benefits.
Governments, academia, and the wider society need to continue to advance the AI safety agenda to ensure we can all harness AI safely, responsibly, and successfully.
Prof. Andrew Yao, Institute for Interdisciplinary Information Sciences, Tsinghua University, said:
A timely and authoritative account on the vital issue of AI safety.
Marietje Schaake, International Policy Director, Stanford University Cyber Policy Center, said:
Democratic governance of AI is urgently needed, on the basis of independent research, beyond hype. The Interim International Scientific Report catalyses expert views about the evolution of general-purpose AI, its risks, and what its future implications are.
While much remains unclear, action by public leaders is needed to keep society informed about AI, and to mitigate present day harms such as bias, disinformation and national security risks, while preparing for future consequences of more powerful general purpose AI systems.
Nuria Oliver, PhD, Director of ELLIS Alicante, the Institute of Humanity-centric AI, said:
This must-read report – which is the result of a collaborative effort of 30 countries - provides the most comprehensive and balanced view to date of the risks posed by general purpose AI systems and showcases a global commitment to ensuring their safety, such that together we create secure and beneficial AI-based technology for all.
This year promises to be an important one for the technology, as increasingly capable AI models are expected to hit the market. The speed of AI’s development is one of several areas of focus for today’s report, which notes that while recent progress has been rapid, there is still considerable disagreement around current capabilities and uncertainty surrounding the long-term sustainability of this pace.
The UK has rapidly established a reputation as a trailblazer in AI safety, underpinned by the establishment of the AI Safety Institute. Backed by an initial £100 million in funding, the Institute represents the world’s first state-backed body dedicated to AI safety research. It has already agreed an historic alliance with the United States on AI safety and published its world-first approach to model safety evaluations earlier this year.
This month’s AI Seoul Summit represents an important opportunity to once again cement AI safety’s place on the international agenda. Attendees will be able to use the interim International AI Safety Report to further the discussions which were kickstarted at November’s AI Safety Summit. A final edition of the report is expected to be released ahead of the next round of discussions on AI safety, which will be hosted by France.
Further Information
Today’s report: International Scientific Report on the Safety of Advanced AI.
Additional supporting quotes
Prof. Nick Jennings CB FREng FRS, Vice-Chancellor and President, Loughborough University, said:
I’m delighted to have been part of this international group that has looked at the opportunities and challenges associated with advanced AI. Only by bringing together diverse perspectives from around the globe can we truly highlight the potential and the uncertainties for the development of AI technologies and their applications that impact all societies.
Alice Oh, Professor at the KAIST School of Computer Science, said:
There is heightened global attention to the extremely fast-paced development of AI. This report provides an important reference for global discourse on managing the risks of AI such that it will be used globally to benefit humanity and society.
Prof. Bronwyn Fox, Chief Scientist, Commonwealth Scientific and Industrial Research Organisation (CSIRO), said:
The Interim International AI Safety Report is brave and insightful. It sets a clear direction for future AI safety research, acknowledging where experts disagree. I commend the report to policymakers seeking to inform their responses to AI with rigorous, evidence-based science. I look forward to Australia elevating the voices and experience of First Nations peoples in future reports.
Dawn Song, Professor in the Department of Electrical Engineering and Computer Science at University of California – Berkeley, said:
Ensuring AI safety is of paramount importance for humanity to benefit from advanced AI in a safe manner. From an international collaborative effort from over 30 countries, this report provides the first comprehensive overview and important foundation for a collective understanding and exploration on risks and risk mitigations for general-purpose AI, towards ensuring AI safety.
Prof. John A McDermid OBE FREng, Director, Centre for Assuring Autonomy, University of York, said:
The report provides a thought-provoking review of the potential risks associated with general purpose AI, e.g. large language models such as ChatGPT, which will be particularly relevant to those wishing to apply the precautionary principle to this fast-moving area of technology.
Yejin Choi, Professor at the Paul G. Allen School of Computer Science & Engineering at University of Washington said:
This report provides a much needed survey on the general-purpose AI — rapidly increasing capabilities, fundamental limitations and challenges, a wide range of risk factors, early mitigation efforts, and open research questions.
This report aims to provide an accurate and balanced view, grounded in scientific literature, including where experts agree and disagree, and vetted by a diverse group of senior advisors.
Lee Tiedrich, Distinguished Faculty Fellow in Law & Responsible Technology, Duke University, said:
By drawing together global experts across disciplines to holistically address AI safety, this preliminary report creates an outstanding foundation for responsibly harnessing AI’s benefits while safeguarding against its risks. Thank you to the UK government for commissioning this work and to Yoshua Bengio for your amazing leadership.
Oleksii Molchanovskyi, Chair, Expert Committee on the Development of Artificial intelligence in Ukraine, said:
The International AI Safety Report is a solid foundation for governments, businesses, and public organizations to build strategies for applying and integrating artificial intelligence.
Ziv Katzir, Head of the National Plan for Artificial Intelligence Infrastructure, Israel Innovation Authority, said:
As was noted by the legendary baseball player Yogi Berra, “it is difficult to make predictions, especially about the future”. This statement seems especially true in the case of AI, given the rapid technological and societal revolution we are all a part of.
The International Scientific Report on the Safety of Advanced AI is perhaps the most methodical, comprehensive, balanced, and nuanced attempt made so far to map the factors that influence the future trajectory of AI development. By highlighting current consensus as well as points of open debate, it provides an invaluable basis for facilitating AI policy discussions.
Prof. Yi Zeng, Professor at the Institute of Automation, Chinese Academy of Sciences, said:
The report starts with a long-term vision towards general-purpose AI and focuses on risks and potential solutions in a comprehensive way. I am glad to see that my major suggestions as an expert advisory panel member have been incorporated into the scientific report at different levels, from the beginning to the final version.
This report shows where we are and what efforts must be made. Will we be able to prevent all the risks discussed in this report and beyond? No. But this report is definitely one of the best attempts so far and a good starting point, and most importantly, we – the writers, senior advisors, expert advisory panel and secretariat across many countries – contributed together, as a whole, for the world.