Generative artificial intelligence (AI) in education
Updated 22 January 2025
Applies to England
Introduction
The Department for Education (DfE) is committed to supporting the AI Opportunities Action Plan. Generative artificial intelligence (AI) presents exciting opportunities to improve people’s lives, including by making our public services better. AI will support the delivery of the Plan for Change and our opportunity mission.
If used safely, effectively and with the right infrastructure in place, AI can help ensure that every child and young person, regardless of their background, is able to achieve at school or college and develop the knowledge and skills they need for life.
AI has the power to transform education by helping teachers focus on what they do best: teaching. This marks a shift in how we use technology to enhance lives and tap into the vast potential of AI in our classrooms.
To make the opportunity a reality, we will continue to explore this technology safely to encourage innovation and maximise the benefits for education.
Generative AI has demonstrated that it can help the education workforce by reducing some of the administrative burdens that hard-working teachers, staff and school leaders face in their day-to-day roles.
Research suggests that generative AI could also be used for tasks such as providing feedback and tailored support in schools.
Evidence is still emerging on the benefits and risks of pupils and students using generative AI themselves. We will continue to work with the education sector to develop understanding of effective and safe use cases.
We will:
- consider the risks and challenges alongside the opportunities and benefits
- continue to work to ensure the safety and reliability of technology, including AI tools, to support teachers and learners
- address the fundamental barriers to effective use, such as connectivity
This statement is informed by:
- Generative artificial intelligence in education: call for evidence
- Generative AI in education: educator and expert views
- Use cases for generative AI in education: user research and technical report
- Research on parent and pupil attitudes towards the use of AI in education
What is generative AI
Generative AI is one type of AI. It refers to technology that can be used to create new content based on large volumes of data, from a variety of sources, that models have been trained on.
ChatGPT, Microsoft Copilot and Google Gemini are generative AI tools, built on large language models (LLMs). LLMs are a category of foundation models trained on large amounts of data, enabling them to understand and generate human-like content.
Tools such as ChatGPT, Microsoft Copilot and Google Gemini can:
- answer questions
- complete written tasks
- generate images, text or code
- respond to prompts in a human-like way
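As a simple illustration of this prompt-and-response behaviour, the minimal sketch below sends a prompt to one of these tools and prints the reply. It assumes the OpenAI Python client with an API key set in the environment; this is not an endorsement of any particular tool, and other providers offer similar interfaces.

```python
# Minimal sketch: sending a prompt to an LLM-based tool and printing
# the reply. Assumes the OpenAI Python client (pip install openai)
# and an OPENAI_API_KEY environment variable; other providers are similar.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {
            "role": "user",
            "content": "Explain photosynthesis to a 10-year-old in two sentences.",
        }
    ],
)

print(response.choices[0].message.content)
```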
Other forms of generative AI can produce:
- audio
- simulations
- videos
AI is the defining technology of our age, and it is evolving at incredible speed. This technology has the potential to benefit the economy and meet societal challenges. AI itself is not new – we already use it in everyday life for:
- email spam filtering
- media recommendation systems
- navigation apps
- online chatbots
Advances in technology mean that we can now use these tools to produce AI-generated content. This creates opportunities and challenges for the education sector.
Opportunities and challenges for the education sector
We have limited evidence on the impact of AI use in education on learners’ development, the relationship between AI use and educational outcomes, and the safety implications of children and young people using this technology in the classroom.
We are working with the education sector, educational technology (edtech) industry, experts and academics to build evidence and support the education sector to use AI safely, responsibly and effectively.
From our research and engagement with the sector, we have learned that generative AI could be used for:
- creating educational resources
- lesson and curriculum planning
- tailored feedback and revision activities
- administrative tasks
- supporting personalised learning
When used appropriately, generative AI has the potential to:
- reduce workload across the education sector
- free up teachers’ time, allowing them to focus on delivering excellent teaching
However, the content produced by generative AI could be:
- inaccurate
- inappropriate or unsafe
- biased
- taken out of context
- taken without permission (intellectual property infringement)
- out of date or unreliable
- low quality
This is because generative AI:
- returns results based on its training dataset, which may not be specific to our curriculum
- stores and learns from input data – any data entered should not contain information that could allow an individual to be identified
- may not provide results that are comparable with a human-designed resource developed in the context of our curriculum
- can generate believable content, including credible scam emails
- can provide instructions for illegal or harmful activities
- can produce nonsensical, inaccurate or false information presented as fact, known as hallucination
We see more immediate benefits and fewer risks from teacher-facing use of generative AI.
If schools and colleges choose to use pupil-facing generative AI, they must take great care to ensure they are abiding by their legal responsibilities, including those related to:
- data protection
- keeping children safe
- intellectual property
They should also consider possible impacts on learning, the importance of the teacher-learner relationship, and the risks of bias and misinformation.
Teachers, leaders and staff must use their professional judgement when using these tools. Any content produced requires critical judgement to check for appropriateness and accuracy. The quality and content of any final documents remain the responsibility of the professional who produced them and the organisation they belong to, regardless of the tools or resources used.
Generative AI tools can make certain written tasks quicker and easier, but they cannot replace the judgement and deep subject knowledge of a human expert.
The education sector should:
- make the most of the opportunities that technology provides
- use technology safely and effectively to deliver excellent education that prepares pupils and students to contribute to society and the future workplace
- be aware of the limitations and risks of this technology
Technology, including generative AI, should not replace the valuable relationship between teachers and pupils.
Using AI safely and effectively
Safety should be the top priority when deciding whether to use generative AI in your education setting.
Any use of generative AI by staff, students and pupils should be carefully considered and assessed, weighing the benefits and risks of use in the education setting.
The intended use should be specified and have clear benefits that outweigh the risks. Different considerations will apply depending on whether it is staff or pupils (especially those under 18) using AI tools.
Safety should not be compromised. Schools and colleges should also consider that there may be uses of generative AI by staff or pupils that have not been explicitly approved or adopted in their setting.
Risk assessments should include plans for mitigating unauthorised use. For example, students may use generative AI to create realistic-looking emails that appear to come from the school to parents.
Schools and colleges are free to make their own choices about the most suitable use cases for generative AI tools in their settings, as long as they comply with their wider statutory obligations such as Keeping children safe in education.
For example, schools and colleges may choose to only use AI tools with teachers, or only on administrative tasks. Others may choose to use AI tools with students, but only in particular subjects, year groups or key stages.
Pupils should only use generative AI in education settings with appropriate safeguards in place, such as close supervision and the use of tools with safety, filtering and monitoring features.
For any use of AI, schools and colleges should:
- comply with age restrictions set by AI tools and open access LLMs
- consider online safety, including AI, when creating and implementing their school or college approach to safeguarding and related policies and procedures
- consult Keeping children safe in education
- refer to our generative AI product safety expectations
- refer to the filtering and monitoring standards to make sure they have the appropriate systems in place, including filtering and monitoring approaches that cover generative AI
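As a rough illustration of the last point, the sketch below shows the shape a filtering and monitoring layer around a pupil-facing AI tool might take. It is a hypothetical Python example: the blocklist, function names and logging approach are placeholders, and real systems that meet the filtering and monitoring standards are considerably more sophisticated.

```python
# Illustrative sketch only: a hypothetical filtering and monitoring
# wrapper for a pupil-facing AI chat tool. Not a substitute for
# systems that meet the filtering and monitoring standards.
import logging

logging.basicConfig(filename="ai_chat_audit.log", level=logging.INFO)

BLOCKED_TERMS = {"example-blocked-term"}  # placeholder blocklist


def is_safe(text: str) -> bool:
    """Naive keyword check, standing in for a proper safety filter."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def monitored_chat(pupil_prompt: str, generate) -> str:
    """Filter the prompt and reply, and log every exchange for review.

    `generate` is any callable that takes a prompt string and returns
    the AI tool's reply (hypothetical here).
    """
    if not is_safe(pupil_prompt):
        logging.warning("Blocked prompt: %r", pupil_prompt)
        return "This request was blocked and a member of staff notified."
    reply = generate(pupil_prompt)
    if not is_safe(reply):
        logging.warning("Blocked reply to prompt: %r", pupil_prompt)
        return "The response was withheld and a member of staff notified."
    logging.info("Prompt: %r | Reply: %r", pupil_prompt, reply)
    return reply
```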
Keeping children safe in education provides schools and colleges with information on:
- what they need to do to protect pupils and students online
- their responsibilities with regards to limiting children’s exposure to risks from the school’s or college’s IT system
- how to review and strengthen their cyber security by referring to our guidance on cyber security standards for schools and colleges – generative AI could be used to increase the sophistication and credibility of attacks
Schools and colleges may wish to review policies on homework and other types of unsupervised study to account for the availability of generative AI. This may include developing guidance for educators, students and pupils on when it is acceptable or appropriate to use generative AI tools. Schools and colleges may also wish to consider how they engage with parents around the use of AI tools.
Articles from the National Cyber Security Centre have more information on security and generative AI.
Using AI responsibly
Data privacy
It is important to be aware of the data privacy implications when using generative AI tools.
Personal data must be protected in accordance with data protection legislation. It is recommended that personal data is not used in generative AI tools.
If it is strictly necessary to use personal data in generative AI tools within a setting, the school or college must ensure that all steps are taken to protect the data and that the products and procedures used comply with:
- data protection legislation
- their data privacy policies
Schools and colleges should:
- be open and transparent where consideration is being given to the use of automated decision-making and profiling – this includes when they are developing their own in-house AI tools, such as AI chatbots or AI digital assistants
- ensure the data subjects (pupils and parents or legal guardians) understand that their personal data is being processed using AI tools
- seek agreement to use data in an AI tool
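As a rough illustration of the recommendation above that personal data is not entered into generative AI tools, the sketch below redacts email addresses and UK-style phone numbers from text before it is sent anywhere. It is a hypothetical example: pattern matching alone cannot reliably find all personal data, so it is no substitute for the steps listed above.

```python
# Illustrative sketch only: redacting obvious personal data from text
# before it is entered into a generative AI tool. Regular expressions
# cannot catch all personal data; this is not a compliance mechanism.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b0\d{4}\s?\d{3}\s?\d{3}\b")  # e.g. 01632 960 001


def redact(text: str) -> str:
    """Replace email addresses and UK-style phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    text = PHONE.sub("[PHONE REDACTED]", text)
    return text


note = "Contact Sam's parent on 01632 960 001 or parent@example.com."
print(redact(note))
# Contact Sam's parent on [PHONE REDACTED] or [EMAIL REDACTED].
```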
Use the guidance on Data protection in schools to find out more about:
- AI and data protection in schools
- personal and special category data
The Information Commissioner’s Office (ICO) has guidance on automated decision-making: What if we want to profile children or make automated decisions about them?
Intellectual property
It is important to be aware of the intellectual property (IP) implications when using generative AI tools.
Materials protected by copyright can only be used to train AI if there is permission from the copyright holder, or a statutory exception applies.
Materials created by pupils and teachers may well be copyright material, assuming the statutory standard for what constitutes copyright material is satisfied. This standard is generally considered to be low and does not factor in the quality of the work produced.
Copyright law is distinct from data protection law, so any consents or data processing agreements for personal data are separate from issues of compliance with copyright law.
Many free-to-access generative AI tools will use the inputs submitted by users to further train and refine their models. Some tools, largely paid tools, allow users to opt out of inputs being used to train the models.
Examples of what may be deemed original creative work include:
- essays, homework or any other materials written or drawn by a student – it is unlikely that multiple-choice question responses will constitute copyright work
- lesson plans created by a teacher
- prompts entered into generative AI tools
Permission to use
Schools and colleges must not allow or cause students’ original work to be used to train generative AI models unless they have permission, or an exception to copyright applies.
Permission would need to be from the:
- student, as the copyright owner
- student’s parent or legal guardian, if the copyright owner is unable to consent because they are a minor
Exceptions to copyright are limited, and settings may wish to take legal advice to ensure they are acting within the law.
Secondary infringement
Schools and colleges should also be aware of the risk of secondary infringement. This could happen if AI products are trained on unlicensed material and their outputs are then used in educational settings or published more widely – for example, on a school or college website.
Examples of this may include:
- publishing a policy that has been created by an AI tool that used input taken from another school or college’s policy without that setting’s permission
- using an image on a website that has been created by an AI tool using input taken from the copyright holder without their permission
Higher-education settings may wish to review the intellectual asset management guide in regard to:
- developing student policies on the intellectual property they create
- how they interact with and use the intellectual property of others in light of generative AI
Find out more about:
- How copyright protects your work
- Intellectual property: copyright
- The government’s code of practice on copyright and AI
The ICO has guidance on AI and data protection and what is personal data.
Formal assessments
Schools, colleges and awarding organisations need to continue taking reasonable steps, where applicable, to prevent malpractice involving the use of generative AI.
The Joint Council for Qualifications has published guidance on AI use in assessments. This guidance provides teachers and exam centres with information to help them prevent and identify potential malpractice involving the misuse of AI. It includes information on:
- what counts as AI misuse and real-life examples of malpractice
- the requirements for teachers and exam centres to help prevent and detect malpractice
- AI use and marking
- an expanded list of AI tools, including AI detection tools
Ofsted’s and Ofqual’s approach
Ofsted’s policy paper sets out its approach to AI. Ofsted supports the use of AI by providers where it improves the care and education of children and learners. It considers providers’ use of AI through the effect it has on the criteria set out in existing inspection frameworks and regulations.
The paper also explains where AI could have the biggest benefits to Ofsted’s own work, including:
- in the risk assessment of good schools
- to automate tasks
- to generate new insights
Ofsted will ensure these applications align with the regulatory principles of safety, transparency, fairness, accountability and contestability.
Ofqual’s policy paper outlines its approach to regulating the use of artificial intelligence in the qualifications sector.
Ofqual’s priority is to ensure that, where AI is used by awarding organisations, it is applied in a safe and appropriate way that does not threaten the fairness and standards of, or public confidence in, qualifications. This publication includes information on Ofqual’s regulatory position on:
- managing malpractice risks
- using AI to mark pupils’ and students’ work
- using AI in remote invigilation
The future for generative AI in education
Investing in AI
We are investing in resources to facilitate the safe, responsible and effective use of generative AI in the education sector.
We have funded Oak National Academy to develop AI tools for teachers that will help to speed up lesson planning and reduce workloads. Oak recently launched Aila, an AI-powered lesson assistant.
The ‘content store’ is a £3 million pilot funded by the Department for Science, Innovation and Technology (DSIT), which aims to make available the underpinning content and data needed for great AI tools.
DfE has awarded innovation funding through the AI tools for education competition to support innovators to develop tools based on this content. This aims to help reduce the burden of feedback and marking on teachers.
Following the call for evidence, the main request from educators was further training and guidance on the safe use of AI. In response, we are developing a package of online training resources and a toolkit, created collaboratively with users.
We are piloting an edtech evidence board. This will bring together a group of experts to assess, against set criteria, the quality of the evidence that edtech tools have a positive impact on teaching and learning. The findings could then be shared with the sector to support and inform its technology choices.
We are funding Ofsted to conduct a study to gather insights from early-adopter schools and further education colleges on the use of AI and the role leaders are playing. The aim of this research is to provide an up-to-date assessment of what emerging practice is developing, including:
- how providers are governing the use of AI
- how providers are monitoring the impact of AI on the provision of education and training
- providers’ understanding of the risks to children, learners and staff
We are continuing to seek opportunities to engage young people and parents directly on policy problems and policy co-design.
We will continue to work with teachers, school leaders, support staff and experts to:
- consider and respond to the implications of generative AI and other emerging technologies
- support primary and secondary schools to teach a knowledge-rich computing curriculum to children up to 16 years old
Further information
Plan technology for your school is a digital service to help schools benchmark themselves against the digital standards and receive actionable next steps on how to meet them.
Find out about AI: meeting the Public Sector Equality Duty (PSED) on the Equality and Human Rights Commission’s website.
Support is available via the Intellectual Property Office’s online tools.
Find out about the work of the Incubator for Artificial Intelligence (i.AI).