Emerging processes for frontier AI safety: companies' AI safety policies
Published 27 October 2023
The UK recognises the enormous opportunities that AI can unlock across our economy and our society. However, without appropriate guardrails, such technologies can pose significant risks. Frontier AI companies play an important role in addressing these risks and promoting the safe development and deployment of frontier AI.
Ahead of the AI Safety Summit 2023, we asked several leading AI companies to outline their safety policies across nine areas of AI safety:
- responsible capability scaling provides a framework for managing risk as companies scale the capability of frontier AI systems, enabling companies to prepare for potential future, more dangerous AI risks before they occur
- model evaluations and red teaming can help assess the risks AI models pose and inform better decisions about training, securing, and deploying them
- model reporting and information sharing increases government visibility into frontier AI development and deployment and enables users to make well-informed choices about whether and how to use AI systems
- security controls, including securing model weights, are key underpinnings of the safety of an AI system
- a reporting structure for vulnerabilities enables outsiders to report safety and security issues they identify in an AI system
- identifiers of AI-generated material provide additional information about whether content has been AI generated or modified, helping to prevent the creation and distribution of deceptive AI-generated content
- prioritising research on risks posed by AI will help identify and address the emerging risks posed by frontier AI
- preventing and monitoring model misuse is important as, once deployed, AI systems can be intentionally misused for harmful outcomes
- data input controls and audits can help identify and remove training data likely to increase the dangerous capabilities of frontier AI systems, and the risks they pose
The publication of companies’ AI safety policies will help drive transparency regarding how companies are putting their AI safety commitments into practice and enable the sharing of good safety practice within the AI ecosystem.
The government’s Emerging processes for frontier AI safety complements companies’ safety policies by providing a potential list of safety practices for companies developing frontier AI. This is intended as an early contribution to the discussion and will need regular updating given the emerging nature of this technology.
| Company | Policy |
|---|---|
| Amazon | Amazon policy |
| Anthropic | Anthropic policy |
| Google DeepMind | Google DeepMind policy |
| Inflection | Inflection policy |
| Meta | Meta policy |
| Microsoft | Microsoft policy |
| OpenAI | OpenAI policy |