Research on public attitudes towards the use of AI in education
Published 28 August 2024
1. Executive Summary
1.1 Background
The Responsible Technology Adoption Unit (RTA) within the Department for Science, Innovation and Technology (DSIT) commissioned this research in partnership with the Department for Education (DfE) to understand how parents and pupils feel about the use of AI tools in education.
As AI tools such as large language models (LLMs) become more advanced, there are opportunities for such tools to support both teachers and pupils by creating tailored content and support, as well as streamlining processes. However, there are many questions that need to be answered before AI is implemented widely.
1.2 Objectives
The project sought to answer the following research questions:
- Under what circumstances, if any, are parents and pupils comfortable with AI tools being used in education?
- Under what circumstances, if any, are parents and pupils comfortable with pupils’ work being used to optimise the performance of AI tools?
Through deliberative dialogue with parents and pupils, Thinks Insight and Strategy (Thinks) explored their concerns, hopes, and expectations, as well as the conditions for use of AI in this context.
1.3 Methodology
Thinks engaged a total of 108 parents and pupils across three locations in England in a mix of face-to-face and online activities. Each participant took part in four to six hours of engagement, following the structure below:
- Inform: Participants were provided with information about the purpose of the research, as well as key principles such as machine learning, data protection, intellectual property, and current and potential AI applications in education.
- Debate: Participants explored the potential social, ethical, legal, financial, and cultural issues associated with use of AI in education, and were provided with a range of views from experts and officials.
- Decide: At the end of each session, each group of participants articulated their preferred conditions for use and explored areas of consensus and difference.
1.4 Summary of key findings
1. Parents and pupils frequently share personal information online, often without considering the implications. The benefits and convenience of using online services, especially those that provide a tailored experience, tend to outweigh any privacy concerns.
2. While awareness of AI among both parents and pupils was high, understanding did not run deep. AI is often associated with robots or machines, and fictional dystopian futures. Only some – those with more knowledge of or exposure to AI – thought of specific applications such as LLM-powered tools.
3. As a result, views on the use of AI in education were initially sceptical – but there was openness to learning more. Initial concerns about AI in education were often based on a limited understanding of how it could be used, or difficulty imagining its potential applications.
4. Parents and pupils agreed that there are clear opportunities for teachers to use AI to support them in their jobs. They were largely comfortable with AI being used by teachers, though more hesitant about pupils interacting with it directly.
5. By the end of the sessions, participants understood that pupil work and data is needed to optimise AI tools. They were more comfortable with this when data is anonymised or pseudonymised, and when there are clear rules for data sharing both with private companies and across government.
6. The main concerns regarding AI use centred on overreliance – both by teachers and pupils. Participants were worried about the loss of key social and technical skills, and about reduced human contact time leading to unintended adverse outcomes.
7. The research showed that opinions on AI tools are not yet fixed. Parents’ and pupils’ views of and trust in AI tools fluctuated throughout the sessions, as they reacted to new information and diverging opinions. This suggests that it will be important to build trust and continue engagement with different audiences as the technology becomes more established.
Participants agreed on some key conditions for the use of AI in education and the use of pupil work and data to optimise AI tools:
- Human oversight: Human involvement in AI use to ensure teacher roles are not displaced, to correct for error and unfair bias, and to provide safeguarding.
- Parent and pupil permissions: Providing parents and pupils with the necessary information to make informed decisions about the use of their data.
- Standardisation and regulation: Ensuring that tools introduced at schools are of a uniform standard to avoid exacerbation of inequalities, with strict oversight of tech companies providing the tools.
- Age and subject restrictions: Using AI tools only where appropriate and where they add value, with strict age restrictions on direct interaction with AI.
- Profit sharing: Ensuring that tech companies share some of their profits so that these can be reinvested into the education system and benefit schools and pupils – while recognising that private companies will need to be incentivised to develop better tools.
While this report describes the views of the parents and pupils who participated in the research, the suggestions contained within would require further research, discussion and consultation (and use of other types of evidence) prior to translation into policy and practice.
2. Background and methodology
2.1 Project background
The use of AI in education has the potential to support pupils’ learning and help reduce teacher workload. But as with any new or emerging technology, there is a range of issues which need to be considered before AI is implemented widely.
The Department for Education (DfE) and the Responsible Technology Adoption Unit (RTA) within the Department for Science, Innovation and Technology (DSIT) wanted to understand how parents and pupils feel about AI tools being used in education, as well as what they think about pupils’ work (e.g. schoolwork, homework, exam scripts) being used to improve AI tools.
This research aimed to create a space for pupils and parents to learn about and discuss the issues, consider their preferences for the use of AI in education, and inform DfE’s approach to implementing AI within the education system.
2.2 Project objectives
The overall objectives of this project were to understand:
1. In which circumstances, if any, are parents and pupils comfortable with AI tools being used in education?
a. Which kinds of use cases are acceptable?
b. How much human oversight do parents and pupils want to see?
c. What concerns need to be addressed?
d. What wider factors affect acceptability?
2. In which circumstances, if any, are parents and pupils comfortable with pupils’ work being used to optimise the performance of AI tools?
a. Should parental agreement be required? If so, would parents give permission, and under which conditions?
b. Who should control how the work produced by pupils is used and accessed?
c. Who, if anyone, should profit from AI which is optimised with pupils’ work?
2.3 Methodology and sample
Sample
Thinks Insight & Strategy (Thinks) recruited six cohorts of parents across three locations in England. Three cohorts took part in an in-person workshop, while the other three took part in online workshops:
- Parents of children with special educational needs and/or disabilities (SEND)
- Parents of children of pre-school age
- Parents of primary school pupils
- Parents of pre-GCSE pupils
- Parents of GCSE pupils
- Parents of post-GCSE pupils (aged 17-18)
We also recruited three cohorts of pupils across the three locations for face-to-face workshops, all attending state-funded schools:
- Pre-GCSE pupils
- GCSE pupils
- Post-GCSE pupils (aged 17-18)
Table 1 below shows the breakdown of parent and pupil cohorts across the three fieldwork locations, by mode (in-person or online). A demographic sample breakdown can be found in the Appendix.
Table 1: Breakdown of participant cohort by fieldwork location
In-person fieldwork
| Cohort | Participants across Birmingham, Bristol and Newcastle |
| --- | --- |
| Parents of pre-GCSE pupils | 12 (6 in each of two locations) |
| Parents of GCSE pupils | 12 (6 in each of two locations) |
| Parents of post-GCSE pupils | 12 (6 in each of two locations) |
| Total parents | 36 (12 per location) |
| Pre-GCSE pupils | 12 (6 in each of two locations) |
| GCSE pupils | 12 (6 in each of two locations) |
| Post-GCSE pupils | 12 (6 in each of two locations) |
| Total children | 36 (12 per location) |
Online fieldwork
| Cohort | Participants across Birmingham, Bristol and Newcastle |
| --- | --- |
| Parents of children of pre-school age | 12 (6 in each of two locations) |
| Parents of primary school pupils | 12 (6 in each of two locations) |
| Parents of pupils with SEND | 12 (6 in each of two locations) |
| Total parents | 36 (12 per location) |
Methodology
In-person workshops
We engaged a total of 36 parents/carers (referred to as “parents” throughout) and their children aged 11-18 (36 in total) in six-hour workshops. Workshops took place in three locations across England on 24 February, 25 February and 2 March 2024. In these workshops, we used the following structure:
- Inform: First, we established the purpose of the dialogue and the reason for involving parents and pupils, providing contextual information about data, foundation models and potential applications. This included showing videos from those involved in the development of AI educational tools and a participant-led demo of some educational AI products.
- Debate: After a short break, we explored the potential social, ethical, legal, financial, and cultural issues associated with use of AI in education. This included watching videos from government ministers, officials and specialists in education who explained some of the potential benefits and risks of AI in education.
- Decide: After lunch, we brought participants together in their groups to compare views and explore areas of consensus, conditions for use and preferences. This involved the groups discussing different AI use case suggestions and constructing their ideal future scenario.
Online workshops
We engaged a further 36 parents in two online workshops, on 21 February and 28 February 2024, each lasting two hours. We followed the same deliberative research structure, divided across the two sessions:
- Inform: In the first workshop, we focused primarily on informing the participants and providing contextual information. We showed videos from those involved in the development of AI educational tools and used voting tools to interact with participants. This workshop ended by asking participants to reflect on any concerns or needs for reassurance they might have.
- Debate and Decide: In the second workshop, participants were shown videos from government ministers, officials and specialists in education who explained the benefits and risks of AI in education. Following discussion on these topics, participants ranked potential uses of data and pupil work according to levels of comfort, before offering their thoughts and recommendations.
3. Baseline views on AI and its uses
3.1 Summary
- While awareness of AI is relatively high, understanding does not run deep. Most participants had heard of and used AI-powered tools, although not necessarily on purpose.
- With increasing use of AI, many accept it as part of modern life, but remain uneasy about the perceived invasiveness of the technology.
- However, this generally did not stop participants from using and sharing their data with services that offer an improved experience based on machine learning, such as tailored recommendations or GPS. Expressed concerns about privacy were therefore at odds with actual behaviour.
- Most parents had not previously considered the application of AI tools in education beyond the risk of pupils using LLMs to plagiarise. However, for many children, the use of technology is already a big part of their everyday lives at school, meaning they viewed this as a natural extension, or a continuation, of what is already happening.
3.2 Awareness and understanding of AI and its use
At the start of each workshop, participants were asked to list their first associations with the terms “AI” or “Artificial Intelligence”. Although awareness of AI as a “hot topic” was high, understanding of the technology did not run deep. Both pupils and parents were likely to associate AI with robots or machines, but also with social media, streaming and shopping platforms, apps, and websites. In particular, they thought of chatbots, targeted advertising, and algorithms recommending products. Despite some awareness, only a handful of participants across the parent and pupil samples had purposely interacted with LLM-powered tools or proactively used them regularly. When prompted with some other less obvious examples (such as GPS, AutoCorrect and predictive text) however, most discovered that they had much more exposure to AI than they had originally thought.
Parent of primary school pupil, Newcastle:
[An online video streaming platform] has tracked who I view and what kinds of people I have viewed and followed and brings up related videos.
3.3 Perceptions of AI
Most participants accepted the use of AI in various settings, products, or services as an inevitability of modern life. However, many expressed unease about the technology due to its perceived invasiveness, both in terms of its increasing ubiquity and its reliance on personal data sharing. Generally, participants found it easier to think of the risks of AI than its benefits, even where they acknowledged that it could improve efficiency or convenience. These concerns often centred on humans being replaced by machines, resulting in job displacement, but also on machines not being an adequate replacement for humans because they are perceived to lack more nuanced understanding – for example, in customer service settings.
Younger children were generally the least worried about AI, often because they had not given much thought to it, were less concerned about data security, and more used to technology playing a role in their lives. Older children, and particularly those aged 17-18, were more likely to have used AI as well as to have a general awareness of its use. Some had used LLMs and found them useful, though only to an extent, as they had quickly found limitations of the technology. Even among children and young people, some aspects of AI were seen as “creepy” or going too far, particularly AI features used by social media platforms that mimicked human conversation too closely or felt overly friendly in tone to users.
Post-GCSE Pupil, Birmingham:
I use [LLM-powered tool] to help with my essays; it’s quicker.
Post-GCSE Pupil, Birmingham:
Sometimes it asks really random questions and you think do you need to know that?
The use of personal data in relation to AI was also a concern for both parents and children. In particular, concerns involved the sale of data to third parties by companies developing AI tools and misuse of data by other humans (for example, in the creation of deepfakes). Despite these concerns, parents and pupils reported frequently sharing their personal data online. They noted that personal information is shared to create accounts, verify their identity, and receive relevant and tailored information or experiences. They also acknowledged that the benefit and convenience of sharing this data largely outweighed their concerns. Participants noted that they had little understanding of, or gave little consideration to, what happens to their data once it has been shared, beyond a general assumption that companies store and sell it to third parties to make a profit. In part, this may be because the benefits of sharing personal data were felt to be more immediate and tangible than the risks (such as a hypothetical data breach), which can feel more abstract and far-removed as a possibility.
Parent of primary school pupil, Newcastle:
I’m not actually sure what [the app] does with my data, other than checking that I’m old enough to view the videos and the content is suitable.
Post-GCSE pupil, Newcastle:
[What does [a video streaming service] do with your information?] Stores it, recommends you shows, brings new things in, sells your information.
Compared with their children, parents demonstrated higher awareness of the risks of data sharing, both in relation to their own data, and that of their children. They were concerned about the information that was being put “out there”, but also felt resigned to it. A handful of parents with higher levels of knowledge of technology were excited about the opportunities offered by AI, though still wary.
Parent of a child with SEND, Bristol:
Helping overcome barriers is good, but thinking about, for example, language, research, literature, it might take away from that. Create an overreliance on tech and developing social skills. What would data mining implications be? Would teachers lose jobs?
3.4 Initial views on AI in education
Stimulus provided:
Before exploring views of AI in an education context, participants were shown a video explaining what AI is and why it is important to understand and engage with it.
In the context of limited understanding of AI, initial views of the use of AI in education were mostly sceptical. Most had not considered the use of AI in education before and found it difficult to imagine how it might be used within schools. Initially, participants were more likely to imagine pupils interacting directly with AI, rather than teachers using it to support them in their roles. Many participants immediately thought of the replacement of teachers with machines, in line with their initial concerns about human job displacement. This was rejected by participants, as they felt it was important for pupils to interact with human teachers. In addition, underlying assumptions about AI (and technology in general) making people lazy, particularly held by parents, also coloured spontaneous perceptions.
Parent of pre-GCSE pupil, Newcastle:
As long as the humans are not replaced, if it streamlines and allows for more personal time [with teachers], that’s got to be a benefit.
As a result of this relatively limited prior knowledge and understanding of AI, it was initially unclear to both parents and children what the potential benefits of AI might be for teaching quality or pupil attainment. There was also uncertainty about what the use(s) of AI in various educational settings could be in practice. However, with scepticism largely grounded in a lack of experience or understanding, participants expressed an openness to hearing more. This was particularly true of pupils, many of whom felt more comfortable sharing their data and using technology relative to parents. Some pupils had already used AI in an educational context or knew that their teachers did. Even those who had not actively used AI in a school setting were familiar with the idea of existing software supporting pupils and teachers. As a result, most pupils felt that AI use in schools was already becoming the norm and further use would be a natural progression of technology application, even if they had not fully considered the implications.
4. Using AI in education
4.1 Summary
- Both parents and pupils thought the main advantage of AI use in education was its potential to support teachers and, by extension, improve pupils’ learning experience.
- Parents, and to a lesser extent pupils, were much less certain about pupils interacting directly with AI, especially unsupervised – even though they could see benefits in AI providing highly tailored support.
- Both parents’ and pupils’ main concerns revolved around the quality of teaching, overreliance on AI resulting in loss of key social and technical skills, as well as the suitability of AI to address certain subjects and pupil needs.
- Across the board, participants were more comfortable with use cases where AI supports teachers (for example, preparing a lesson) or is used for “lower stakes” tasks (for example, marking a class test rather than an exam).
- There was a sense that AI use should always be optional, both for teachers and pupils, and that parents should have a say in whether and how AI is used – though there was little acknowledgement of the practical issues that could arise in introducing AI on an opt-in/opt-out basis.
Stimulus provided:
Before exploring more detailed uses of AI in education, participants were provided with stimulus in the form of demonstrations of AI tools, currently available or in development, to support learning, and videos explaining:
- Machine learning and its potential uses in education
- Current and potential benefits of AI for teachers and pupils
- Current and potential risks of AI use, including data protection, privacy, intellectual property, and bias
- The strategic benefits and risks of AI use in education from a policy perspective, and how they can be managed
4.2 Participants’ views regarding opportunities for AI use in education
Supporting teachers
The biggest perceived opportunity for AI use in education was to support teachers in generating classroom materials and managing feedback in more efficient ways. The perception was that this could reduce administration tasks and increase the attractiveness of teaching as a profession.
Across the workshops, parents and pupils felt most comfortable with teachers using AI as a tool to support lesson delivery (for example, by helping to plan lessons). They were less comfortable with the idea of pupils engaging directly with AI tools, as they wanted to ensure some level of human oversight and pupil-teacher interaction.
Pre-GCSE Pupil, Bristol:
It can help teachers making slides, like information slides, and answer questions about stuff.
Parent of post-GCSE Pupil, Birmingham:
It’s less stressful for teachers to sort the homework, lesson plans… and [gives them] more time to be present and support the kids.
Using AI as a support to teachers was felt to enable better learning experiences.
There was a higher level of comfort with AI when it was seen as enabling teachers to redirect their time and energy into delivering high quality education. For parents in particular, the terms “helping” and “assisting” the teacher reassured them that AI would play a supporting role rather than taking over the teacher’s role, and alleviated parents’ concerns about the risks of potential overreliance on AI (see Lower quality of learning in section 4.3). Interestingly, some parents and pupils assumed that the introduction of AI tools would lead to more contact time between teachers and pupils – though they were not clear on whether they would expect pupils to spend more time in school.
Parent of pre-GCSE pupil, Newcastle:
I think it’s great. I’m impressed by it. I think if teachers have got that kind of tool to help with the administrative side, they have more time in the classroom for actual teaching rather than having to go home and mark and make lesson plans.
The potential for AI tools to support teachers to provide detailed, timely, high-quality feedback was considered a key benefit. Parents felt that better quality feedback would help them understand their child’s progress and identify areas where they need more support. As a result, parents were supportive of using AI tools to help teachers provide more frequent and personalised feedback.
Parent of pre-school pupil, Bristol:
It would be more targeted to my child; it would collect so much information on my child that it would support and help their learning. To show [what their] focus area [is], what subjects, might show me what might be good extra learning.
Participants’ views on the use of AI to enhance learning experiences
Both parents and pupils recognised that some AI tools could make learning more fun and engaging for pupils by generating visually engaging and creative resources that teachers might not currently have the time to create. During the in-person workshops, participants were encouraged to explore an LLM-powered tool using tablets and some suggested prompts. Many were impressed by the ways in which the tool could help quickly bring topics to life in the classroom, such as when assuming the character of a historical or literary figure and answering questions asked by pupils from the perspective of that character.
Some pupils saw an opportunity for LLM-powered tools to inspire them to be more creative in their work, either by expanding on pupils’ own ideas, or by providing a starting point on which pupils could then layer their own thinking and creativity, such as when writing an essay or story.
Using AI in these ways was felt to be exciting and engaging, bringing topics to life and helping pupils develop their own ideas. Participants, particularly pupils, were more positive about AI tools creating a more interactive learning environment where they could input ideas and receive interesting new feedback generated by the AI. This use of AI in education was seen by some as more acceptable than AI auto-correcting pupils’ work, or providing ready-made answers to copy and paste in response to an assignment question. Some pupils felt more positive about AI being used interactively to gain ideas and enrich understanding, rather than simply inputting questions and extracting answers.
Parent of pre-GCSE pupil, Newcastle:
[Future vision of AI] To generate interactive lesson plans and deliver lessons that are more engaging.
While there was some interest in the opportunities for AI to provide personalised learning, most parents – and pupils – had concerns about the quality AI could achieve as a personal tutor.
Across the workshops, most participants emphasised the value of one-to-one support and feedback in education but acknowledged that it can be hard to attain for some, and is dependent on teachers’ availability. AI potentially providing the same support as a one-to-one personal tutor, immediately available and tailored to pupils’ needs, was seen as a clear opportunity to improve the quality of pupils’ education. We also heard from pupils that some felt personalised AI tools could “make learning more interactive” and be able to assess and identify areas pupils might need support in.
Parent of post-GCSE pupil, Birmingham:
It [AI tutor] might challenge them [the child] when the class isn’t ready to go on, but they could.
Participants recognised the potential for AI to offer more tailored and targeted support calibrated to the specific needs of individual pupils. Some pupils felt that personalised AI tools could help them improve by providing support with subjects they struggle with (such as via extension activities or summary sheets). Some parents of children with SEND saw an opportunity for AI tools to provide individualised support for their child, ranging from supporting speech or writing, tracking their progress, or even using AI as a tool for early identification of potential SEND.
Upon closer consideration of AI providing personalised learning experiences, parents and pupils raised concerns about the amount of data the AI would need in order to provide personalised experiences. Parents were also concerned about pupils using AI unsupervised – which they expected would be the case if AI was used in this way. One barrier to using AI in this way was the association some made with unsophisticated customer service “chatbots”, which most had found to lack nuance and understanding of individual situations. Despite some perceived benefits, parents of children with SEND in particular were hesitant about their child using these tools unsupervised, due to concerns about unfair bias, lack of sensitivity, or access to harmful content.
As a result, whilst many saw an opportunity for AI to fill a gap in personalised learning, parents and pupils were unconvinced that the quality of the personalised learning that AI could deliver would be sufficient.
Parent of GCSE pupil, Birmingham:
The potential is phenomenal, it’s like the child would have its own teaching assistant, there has to be a big buy-in from the kids, parents and teachers themselves. Thinking about the implementation though, you’re looking at farm size data storage, how is that funded, and the upkeep of that as well, that’s a big cost.
Parent of post-GCSE pupil, Newcastle:
It would need a lot of data about your child to support your child in each area that they’re struggling in.
4.3 Concerns about AI use in education
Lower quality of learning
Concerns about overreliance on AI were prevalent among participants, particularly the perception that AI could reduce quality of education and socialisation through decreased human contact hours.
Human-to-human learning was seen as critical to providing children with a good education. We heard that there would need to be clear boundaries for the use of AI to ensure pupils benefit from social interaction with their teachers. This concern was particularly pronounced among parents of children with SEND.
This worry also compounded an overall concern about the amount of time children spend on screens. Some parents associated AI use in education with yet another chunk of their child’s time being spent on a screen rather than having human contact. There was uncertainty about what the long-term effects of prolonged screen time might be on a child’s physical and mental wellbeing. Some parents suggested placing a time limit on the use of AI in the classroom and at home. Without this, there was felt to be a risk that, when combined with use of personal devices during their leisure time, children would never have a break from screens.
Following the experience of the pandemic, participating pupils were particularly keen to maximise face-to-face learning experiences and were consequently less positive regarding uses of AI which could result in increased screen time to the detriment of face-to-face activities.
GCSE pupil, Birmingham:
I missed the social interaction of being in school [during the lockdowns implemented in response to Covid-19].
GCSE pupil, Birmingham:
I feel restricted [when learning] online.
Parent of primary school pupil, Birmingham:
Too much screen time isn’t good for their head, it affects their sleep.
Impact on teachers
Related to their spontaneous concerns about AI’s potential impact on the labour market, participants worried about job losses caused by the displacement of teachers by AI.
In participants’ initial reactions, prior to guided discussion, we heard concerns about AI being used to make up for teacher shortages, effectively making human teachers redundant in the process. Participants balanced this concern against what they saw as the key opportunity: AI freeing up teachers’ time to do what they do best.
Parent of pre-GCSE pupil, Bristol:
What will the teacher be doing with the saved time? And how do you know the tasks being given will be relevant?
Loss of key skills
Both parents and pupils were concerned that the use of AI in education could result in pupils failing to develop key skills.
In the context of overreliance on AI, there was concern that pupils could use AI to complete tasks such as maths problems or creative writing with little of their own input. There was also unanimous concern about AI leading to plagiarism. Such overreliance could leave pupils unable to perform key skills without AI.
GCSE pupil, Bristol:
It feels really detrimental to use a lot of AI, because in the long-term you won’t know anything. You wouldn’t want to go to the dentist and they’ve done their homework with [LLM-powered tool] and they know nothing.
Pre-GCSE pupil, Bristol:
You need to be able to do it yourself and then get the feedback.
Parent of pre-GCSE pupil, Newcastle:
Older kids might use it to write assignments so they’re not actually learning. Instead of researching and learning about it, they just put it into [LLM-powered tool] to get the answer.
Some parents of children with SEND were concerned that their child could become overreliant on AI tools, particularly AI that personalised learning to their specific needs. Whilst this was seen to support them to some degree (as discussed in section 4.2), it was also felt to risk a loss of key skills.
Parent of pre-GCSE pupil, Newcastle:
As a parent, my son has dyslexia, so he has to programme in text, and the computer processes it and helps him type it. So it’d be useful for that […] But you don’t want him to rely on that.
AI accuracy and risk of unfair bias
Data quality – specifically whether AI could misinform pupils – was a concern for many. Some felt that AI had the potential to reinforce unfair biases.
Throughout the workshops, many participants expressed uncertainty over whether, at its current stage of development, AI was good enough to be used in an educational context. As participants became more informed about machine learning and how it works, more of them questioned the quality of the data being used to train AI and whether there would be sufficient human oversight to quality-check AI outputs.
Expectations of where and how interactive AI tools would use data, such as when marking class tests or providing feedback, were not consistent among parents and pupils. Some were concerned about AI processing and learning from incorrect answers. This was seen as potentially damaging to the educational process if it led to pupils receiving incorrect feedback from AI. Uncertainty about how AI learns and generates information for different uses drove concern about AI being used in education, where accurate data feels more important than in other settings. As a result, parents and pupils felt it was imperative that AI tools were carefully assured, and that appropriate training was provided, before AI is used widely in schools.
Parent of post-GCSE pupil, Birmingham:
Inaccurate information being fed through the software could be really concerning.
After showing participants a video about machine learning and an animation about bias (see Appendix), some expressed concern about the potential for AI to reinforce harmful biases and reproduce inaccurate information. This raised questions about how quickly AI can “unlearn” biases and how these unfair biases would be picked up. Unfair bias in AI was perceived as a potential risk; however, many parents acknowledged that this risk already exists in humans. The majority of participants wanted reassurance that AI would be monitored by a human to ensure the information given to pupils was accurate.
Parent of post-GCSE pupil, Birmingham:
The fact that AI is always learning, and it learns from the data the kids are putting in, so if they aren’t getting it right, it could take it off course.
Harmful content
Lack of safeguarding and the risk of encountering harmful content when pupils interact directly with AI were concerns for parents.
We heard concerns, particularly from parents of younger pupils, about children being exposed to harmful content at school when using AI, as it didn’t feel clear whether there would be robust safeguards in place. This built on an existing worry about how children interact with technology and what they are exposed to online. Some parents therefore suggested they would want to limit this risk where possible by reducing unsupervised technology use, rather than introducing a further opportunity for their child to encounter harmful content. At the same time, many parents felt that they already had little control over their child’s consumption of online content, and educational tools were likely to be safer than unregulated access to the internet.
Parent of post-GCSE pupil, Newcastle:
Like on [social media platform], and it learns from what you’re watching, if you’re watching suicidal content it’ll keep showing you suicidal content.
Parent of primary school pupil with SEND, Birmingham:
She’s already talking to [voice assistant] all the time, it’s a different world for them.
Clarification on whether pupils could be exposed to harmful content through their use of AI, and the steps to prevent this, was essential for all participants – but particularly parents. We consistently heard that parents would like a clear understanding of how AI will be used by their child and reassurance that steps are in place to protect their child from any harms. Additionally, both parents and pupils mentioned that they would expect there to be systems in place that would flag if a pupil was trying to access harmful content, or asked questions or mentioned real events in their personal or school lives that suggest a safeguarding issue.
Overall, most parents felt more comfortable with their child using AI in schools with supervision from a teacher or member of staff. If it was to be used at home, some said they would want to oversee its use. This was particularly important for parents of pre-school and primary school pupils, who were at times worried about whether there would be any security controls to prevent pupils accidentally seeing harmful content.
Unequal access to AI
Parents and pupils were concerned that AI use would exacerbate existing inequalities in society.
Almost all participants felt that if AI could indeed support children’s learning and potentially give them a head start, there should be equal access to it for all schools. Within the current education system, they assumed that the best AI tools would only be accessible to the schools that could afford it. They felt this would exacerbate existing inequalities, add to the unfair advantage of those who are better off, and lead to further stratification – of the education system, but also of the labour market and society as a whole. Parents of pupils who attend schools that are struggling or in disadvantaged areas felt resigned to inequality getting worse, with AI tools just another resource their child could miss out on.
There was also some concern about variation in teachers’ abilities to use AI to its full potential, at least at first. Both parents and pupils worried that if training and support were not provided to ensure all teachers meet a minimum level of proficiency with AI tools, some pupils would benefit less from AI use than others.
As a result, many felt that the introduction of AI tools in schools should be centrally coordinated and funded, with tools standardised and quality assured, and profits from selling pupils’ work and data reinvested into the school system.
Parent of pre-GCSE pupil, Bristol:
What about schools that don’t have the facilities? It was hard enough before all this.
Parent of post-GCSE pupil, Birmingham:
It will just make the wealth divide worse.
Parent of post-GCSE pupil, Birmingham:
Poor and working class [areas] might not have access to computers, affluent areas will have the best access.
Data assurances
In order to give permission for their child’s data to be used, parents need more clarity and reassurance about how data will be collected, stored, used and shared.
Concerns about privacy and data breaches were prevalent among parents, many of whom had questions about how and where their child’s data would be stored and shared. They were also concerned about the potential longevity of data, and the extent to which it could “follow their child through life” and affect their employment and further education opportunities. There were also concerns about potential data sharing between government departments. Parents of pupils with SEND in particular were concerned that the data could affect their child’s eligibility for state-funded benefits.
Parent of post-GCSE pupil, Newcastle:
Where does it go, where does it stop? Will it always be tagged to you? What about applying to university?
Given these concerns, the majority of participants wanted to see data protection rules adhered to, and reassurances that data generated from pupils’ interactions with AI would not be used for wider, non-education related purposes. Alongside this, they needed clear information about why data is being collected, who will have access and how long it will be stored. For any use of AI in education, pupils’ personal data being accessed or hacked was a key concern which led to some participants feeling uncomfortable with pupil data being used to train educational AI tools.
Parent of GCSE pupil, Birmingham:
There is a sense of big brother about it all. Infant school, they’ve got your whole life in a data bank, how is that information going to be utilised.
4.4 Acceptable and unacceptable use cases
Table of AI use case acceptability
Acceptable
Acceptable uses of AI were felt to be those that help rather than replace teachers:
- Creating a lesson plan
- Generating class tests
- Generating class materials
AI was also felt to be acceptable if being used by teachers as a tool to provide additional academic support:
- Generating feedback on pupils’ work
- Marking classwork
- Marking class tests
Unsure
Participants, especially parents, were hesitant to say it was acceptable to use AI to personalise learning:
- Helping teachers decide what support a pupil needs
- Personal tutor chatbot for pupils
There was some positive sentiment towards personalised learning and the potential benefits to the quality of education. When it was considered acceptable, specific conditions were required:
- The personalised AI tool is monitored and ‘signed off’ by a teacher
- Clear information is provided about what pupil data will be used and how it will be stored
- Parents’ permission is obtained before personalised AI tools are used
- Pseudonymised or anonymised data is used, with robust data protection
Unacceptable
Use cases felt less acceptable where an AI error, such as getting an exam mark wrong, could negatively affect educational outcomes (and therefore children’s future prospects):
- Marking exams
5. Using pupil work and data to optimise AI tools
5.1 Summary
- Parents and pupils were generally comfortable with pupil work being used to optimise AI tools, with very few concerns about intellectual property.
- However, there was much more uncertainty about work being personally identifiable and personal data being shared outside of schools and DfE.
- Both parents and pupils needed reassurance about the de-identification or anonymisation of data, especially special category data, which was seen as requiring more protection, and about links to other information, such as patient records (for example, for children with SEND).
- Although neither parents nor pupils thought that they should be directly compensated for providing their work or data to tech companies, they strongly felt that private companies should be required to share at least some of the profit with schools (via DfE).
Stimulus provided:
After receiving an explanation of machine learning, participants were provided with examples of different forms of pupil work (such as homework, class work, mock exams, exams) and data (such as name, age, SEND status) that could be used to optimise AI tools.
5.2 Pupil work
Pupil work that can be used to optimise AI tools
Parents and pupils were comfortable with pupil work being used for AI tool development in the vast majority of cases.
Most participants understood that the greater the breadth and volume of data provided to optimise AI tools, the better those tools can understand what constitutes ‘good’ and ‘bad’ work and provide constructive feedback. Most grasped the need for AI tools to be optimised with work spanning higher to lower grades, and some specifically pointed out that without examples of ‘bad’ work, and the ability to identify what makes work stronger or poorer, AI tools would not be able to assess work as needed.
In particular, participants felt that AI tools would need to be optimised with as many different styles of work as possible, in order to fairly and accurately assess and support pupils with differing abilities and needs, especially children with SEND. They noted the particular importance of this in more subjective cases, such as in creative writing.
Parent of primary school pupil, Newcastle:
For me it would be that what is put into the system is enough to get a positive outcome for the children.
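The sketch below is purely illustrative of the point participants grasped: a model can only learn to tell stronger from weaker work if its training data spans both. It is not any tool shown in the workshops; the essays, grade labels and choice of a simple open-source text-classification pipeline are all assumptions made for illustration.

```python
# Illustrative sketch only: a toy grading model, not any tool evaluated in the research.
# It shows why optimisation data needs to span 'good' and 'bad' work: a model trained
# only on high-grade examples has nothing to contrast against when judging new work.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, anonymised training examples spanning the grade range.
essays = [
    "The experiment was repeated three times and the averages were compared.",  # stronger
    "Evidence from both sources is weighed before reaching a conclusion.",      # stronger
    "It happened because it did and that is why.",                              # weaker
    "Stuff was mixed together and something changed.",                          # weaker
]
grades = ["high", "high", "low", "low"]

# A simple text-classification pipeline: bag-of-words features plus logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(essays, grades)

# The fitted model can now offer a (crude) judgement on unseen work.
print(model.predict(["The results are compared against the prediction made earlier."]))

# If `grades` contained only "high" examples, there would be nothing to contrast
# against, so the model could never flag weaker work - the point made above.
```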
Although there was confusion about how exactly AI tools would learn from pupils’ work, parents and pupils still felt it was acceptable to share pupil work. By the end of the engagement, both parents and pupils understood that providing a wide range and quality of work would improve AI outcomes in the long run. As a result, they accepted data sharing as a necessity.
Concerns about the use of pupil work to optimise AI tools
While most types of work are fine to be used, usage needs to be clearly communicated to avoid concerns about plagiarism or penalisation.
The foremost concern about sharing work with AI tools was of more substantial pieces of work (such as coursework) being plagiarised by other pupils. Parents’ and particularly pupils’ first assumption was that AI tools could be used by other pupils to generate work that draws heavily from their own, leading to their efforts being co-opted. Some understood AI ‘learning’ from pupils’ work to mean that AI would then use it to create new pieces of work for other pupils.
Post-GCSE pupil, Birmingham:
Not okay to share [Homework] – because your schoolwork is your intellectual property, it’s you and you did that. If the AI takes that then you can’t copyright it.
GCSE pupil, Bristol:
It can’t use everyone’s homework so it can be copied and plagiarised.
Despite this assumption, this concern was only notable for larger pieces of work that pupils spent considerable time on, with little concern about other more routine work produced by pupils (such as class test answers).
There was also concern from some about pupil work being shared more widely by AI tools, with pupils in particular worrying that this would mean that examples of ‘bad’ work they produced would be circulated among or accessible to other people and cause embarrassment or judgement.
Further explanation of how work would be used to optimise AI tools, rather than being regurgitated or circulated, provided reassurance to uncertain pupils and parents. Emphasis on the volume of data required to optimise AI tools, and reiterating that an individual piece of work would be one among millions of pieces of pupil work, also reassured some parents and pupils.
Additionally, some parents noted that examples of high-scoring essays or exam answers were already shared more widely, and did not feel sharing work with AI tools would be cause for more concern.
However, pupils and parents maintained some doubts about the limitations of AI optimisation, especially in relation to more creative or subjective pieces of work.
Some parents and pupils were unconvinced by the ability of AI tools to assess work for subjective subjects requiring more nuanced interpretation such as PSHE, or creative subjects like Art and English. They did not feel that pupil work would optimise tools to the extent needed for them to achieve a human level of expertise and understanding, making the use of pupil work feel futile.
GCSE pupil, Birmingham:
I think it makes sense with the factual subjects, because with science and maths most of the time there is a definitive answer. But like English there is a main answer but there are other right answers too.
Concerns about plagiarism were also heightened for creative work such as artwork or longer essays, which pupils felt was more obviously valuable intellectual property and could hold more personal significance than routine written work. As above, they struggled to understand how AI tools could be optimised using this work, or to believe that a sufficient level of optimisation could be achieved.
Parent of post-GCSE pupil, Birmingham:
It wasn’t very clear about the copyright situation, I think that’s a huge thing to know, for all children, some children have been designing logos and stuff from like 13/14.
Acceptability of the use of different types of pupil work
Acceptable
Acceptable pieces of work were those felt to be less ‘valuable’, with fewer concerns about them being plagiarised or misinterpreted by the AI:
- Classwork
- Homework
Unsure
Participants were less sure about the use of work that more effort had gone into or that felt more subjective or creative:
- Coursework
- Artwork
There was more reluctance about the use of more ‘serious’ pieces of work with higher stakes, and more reassurance needed for their use:
- Mock exams
- Exam answers
5.3 Types of data
Parents and pupils were most comfortable with anonymised demographic data being used and shared.
In almost all cases, participants were comfortable with anonymous demographic data being used to optimise AI tools. They particularly recognised the importance of providing AI tools with information on pupils’ ages or year groups, in order to accurately gauge the progress and performance of pupils at a given level.
While there was some confusion about the need for data like gender, most participants were nevertheless fine with it being provided as it was not a threat to pupils’ anonymity. A few parents expressed concern that this data could contribute to unfair bias or discrimination, and some parents and pupils stressed the need for data about gender in particular to be inclusive, reflecting pupils’ own gender identities rather than erasing them.
Parent of pre-GCSE pupil, Bristol:
You’ve got bias in AI but its already there, probably easier to correct than it is in a person.
More conditions were attached to the use of pseudonymised and special category data which was seen as requiring more protection, despite recognition of its necessity and openness to its use.
Post-GCSE pupil, Birmingham:
[On including gender] It depends what it’s being used to train it for. It doesn’t really bother me but bias can happen.
Parents and pupils understood that in order for AI tools to provide personalised, lifelong support for pupils that is tailored to their educational needs and learning styles, data linkage is necessary via pupil identifiers. There was openness to this due to the potential benefits for pupils and the perception that this tailored support would lead to better outcomes than generic AI use.
However, participants were deeply concerned about the security of this data, especially special category data, fearing that any breaches would result in comprehensive datasets about individual pupils’ demographics, abilities, and weaknesses being shared more widely and exploited. This was a particular worry for parents of children with SEND, for whom concerns centred around their children’s future opportunities. They were particularly concerned that their child’s SEND status could be shared between government departments which could impact the benefits their child might be entitled to, or about future employers accessing their child’s data via the companies developing AI, impacting their child’s future.
Both parents and pupils strongly felt if data is pseudonymised, identifiers should be held at a school level and ought not to be shared with tech companies or the government. There should also be stringent restrictions and safeguards in place to ensure the security of this data, with assurances communicated to parents and pupils of how the data is stored, who has access to it, and when and where it will be used.
Parent of GCSE pupil, Birmingham:
Data should only be shared with schools, parents and education department.
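As a purely illustrative sketch of the arrangement participants described above (not a design proposed by DfE, the researchers or participants), the example below shows how a school-held key could keep pupils’ identities within the school while only pseudonymised records are shared for AI optimisation. The field names, the use of random identifiers and the record structure are all assumptions made for illustration.

```python
# Illustrative sketch only: one way the arrangement participants described could work.
# The school keeps the key linking pupils to identifiers; only pseudonymised records
# (identifier plus the agreed data fields) would ever leave the school.
import uuid

# Held by the school only - not shared with tech companies or government.
school_key = {}  # maps pupil name -> pseudonymous identifier

def pseudonymise(record: dict) -> dict:
    """Replace directly identifying fields with a school-held identifier."""
    name = record["name"]
    if name not in school_key:
        school_key[name] = str(uuid.uuid4())
    return {
        "pupil_id": school_key[name],        # pseudonymous identifier
        "year_group": record["year_group"],  # demographic data participants accepted
        "work": record["work"],              # the piece of work used for optimisation
        # name and date of birth are deliberately dropped before sharing
    }

shared = pseudonymise({
    "name": "A. Pupil",
    "date_of_birth": "2012-05-01",
    "year_group": 8,
    "work": "My essay on the water cycle...",
})
print(shared)
```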
Parents and pupils felt strongly that personally identifiable data should not be used in any circumstance.
Participants emphasised that data that allows individual pupils to be identified, such as name or date of birth, should not be used. This data was seen as unnecessary for AI optimisation in an educational context, and was deemed to carry too many risks for pupils when linked with the other data being collected, particularly special category data. While parents were more resistant to the use of this data, citing the concerns about future opportunities covered above, pupils also strongly preferred the use of their data in an anonymised or pseudonymised form.
Acceptability of the use of different types of pupil data
Acceptable
Use of data that could easily be anonymised and was felt to be relevant to AI understanding of pupils’ work was widely accepted:
- Age
- Gender
Unsure
Assurances were needed about data perceived as more sensitive or pseudonymised, particularly to address concerns about data security and storage:
- Pupil identifier
- Information about SEND (or any health conditions)
Unacceptable
Data identifying pupils was unacceptable and felt to be unnecessary:
- Name
- Date of birth
5.4 Control of pupil work and data
Parents and pupils
All participants expected to be involved in decisions made around the use of pupils’ work and data, with parents and pupils having the final say.
While parents and pupils didn’t expect to make specific decisions about AI optimisation, they did expect to be consulted on whether and by whom pupil work and data can be used. There was widespread consensus that work and data should not be used without parents’ and/or pupils’ explicit agreement. Parents, in particular, stressed the need for clear and comprehensive information about pupil work and data use and any potential risks relating to data security and privacy breaches.
Pupils also felt that knowing how their work and data would be used would be important, and that they should have a say alongside their parents, especially if they were old enough (see section 7.2, Parent and pupil permissions, for further discussion of the age at which pupils should have a say). However, they were less likely to require extensive detail about its intended use, reflecting their higher level of comfort with data sharing and acceptance of its necessity in order to benefit from the tools using it. With the understanding that pupil work is their intellectual property, some pupils were more concerned about the use of their work than of their data (see section 5.2 for concerns about the use of pupil work).
Parent of GCSE pupil, Birmingham:
If child’s work is going to be used/processed in AI the parents should be advised.
Schools
Schools were most trusted to make decisions about the use of pupils’ work and data, as well as to hold data that was seen as more sensitive (such as SEND data or pupil identifiers). Where concerns about school involvement existed, they centred on inequitable AI use and access.
Parents and pupils felt that schools could be relied on to make decisions in the best interest of pupils and to prioritise educational outcomes and safety over other considerations like AI development and profit. Central to this trust was the widely held perception that schools are not primarily profit-motivated and are already trusted to safeguard pupils, which led to the assumption that schools can be relied upon to continue doing this when it comes to AI. As a result, participants wanted schools to have the final say in how pupil work and data is used, with the ability to approve or reject uses suggested by the government or tech companies if they are felt to harm pupils or jeopardise their privacy and safety.
Schools were also trusted to hold pupil data, with many who were uncertain about special category data being shared and used feeling reassured about this data being collected if schools could control its use and guarantee that it would not be shared beyond the school.
GCSE pupil, Bristol:
The ID number has to stay within the school and be really safe.
Parent of pre-school pupil, Bristol:
I would want to feel the school (teachers especially) have all the info and are confident the AI is safe.
A few parents noted that schools may not all choose to use AI, or that there could be disparities within schools if it were left up to teachers’ discretion and some refused to integrate AI into their teaching. Some worried that schools with fewer resources would be left behind as other schools (such as private schools) adopted AI use to their advantage. There was also a minor question about the impact schools’ teaching philosophies might have on the decision to use AI or not, for example whether religious schools might choose not to use a standardised AI tool in order to have control over what exactly students learn.
However, there was little real concern about schools’ oversight of AI tools and pupil work or data, with most participants feeling the more control schools have, the better.
Parent of GCSE pupil, Bristol:
What about schools that don’t have the facilities? It was hard enough before all this.
Parent of pre-GCSE pupil, Bristol:
Access is a concern, ensure there’s a level playing field across the board.
Government
Parents and pupils saw a role for DfE in setting rules around AI use and (to a lesser extent) pupil work and data, recognising the need for a central authority. However, many participants were worried about potential negative impacts from the use of AI tools and pupil work and data by government.
Most felt DfE should have a say in how AI is used in schools, believing that central guidelines would make AI use more consistent. DfE was also generally trusted to make decisions in the best interest of pupils and with education rather than profit in mind. This was seen to necessitate its involvement in any decisions made by tech companies.
However, trust in DfE to set rules was predicated on school involvement in decisions made, particularly those around the use of pupil work and data to optimise AI tools. While there was a need for DfE to provide central oversight, parents and pupils were still hesitant to hand over complete control of pupil data. In this, participants’ preferences reflected pre-established views that schools, being closer to pupils and in close communication with them and their parents, were more familiar with pupils’ needs and parents’ concerns, and were therefore more likely to make decisions accordingly.
Parent of GCSE pupil, Bristol:
Pupil-centric at every stage, profits should be distributed to the schools and [for] development not just led by tech companies, with the education [sector] as well.
There was a notable tension between the desire and perceived need for robust government oversight, and concern around government involvement. Many parents and pupils worried that other government departments might not make decisions in the best interest of pupils, or might not have the ability to direct efficient, effective, and beneficial use of AI.
Parent of pre-GCSE pupil, Bristol:
My initial thought is an independent regulatory body so they’re a step away from it but I don’t know what that looks like.
Parents also worried about how pupils’ performance and special category data (such as SEND status) could be used by government if held in a central database accessible beyond DfE. There were also concerns around how particular agendas might determine the content used to optimise AI and therefore how and what AI tools teach pupils.
This was a particular concern for parents of children with SEND, who worried that their children’s future could be affected if pseudonymised or personally identifiable data is held and accessed by government beyond their time at school. They required reassurance that data showing their children’s level of ability and any SEND would not be used in future, for example to affect their entitlement to government assistance.
Many parents also worried more generally about increased surveillance if government were provided with data on children throughout their formative years, particularly if AI use becomes standard and most or all of the population’s data in this context is held and used by a limited number of central organisations.
Parent of pre-school pupil, Bristol:
Thinking about the work…How long will it be kept there - who will it be shared with and how much of my child’s personal info is attached to it?
Participants feared that particular viewpoints or biases, including those within the curriculum, could become more entrenched in AI and harder to correct. For these participants, involvement of independent experts within the field of AI and education could mitigate some of this risk by providing a check for decisions and ensuring a balance of views.
Parent of post-GCSE pupil, Birmingham:
I feel like they’re trying to push the kids in a certain direction, and then the government gets to know everything [decision] they make.
Tech companies
Trust in tech companies was extremely limited and there was little to no support for them to be granted control over AI and pupil work and data use.
Profit was almost universally assumed to be the primary or sole motivation of tech companies, rather than the desire to improve education and pupil outcomes. Reflecting starting views of tech companies as non-transparent and assumptions that data is sold on to third parties, participants did not trust them to protect or use data responsibly. Parents and pupils assumed that given free rein and with no oversight, tech companies would choose to sell data on to other companies with little concern for pupil privacy or wellbeing.
Parent of GCSE pupil, Bristol:
I think yes, the company is going to benefit, that’s economics, but I think it would be good to give it back to schools.
GCSE pupil, Birmingham:
Yeah, you kind of want to know what type of people are developing [it], if the people running it are doing it for the wrong reasons, it could get out of hand, you want to know they’re doing it for the right reasons.
Participants did note that close partnership with schools or DfE, backed by clear oversight and regulation, would provide some assurance that tech companies would use pupil work and data responsibly and to the benefit of pupils.
6. Conditions for use
6.1 Summary
Participants identified the following conditions for the use of AI in education and the use of pupils’ work and data to optimise AI tools:
-
Human oversight: Human involvement in AI use to correct for error and unfair bias, as well as providing safeguarding.
-
Parent and pupil permissions: Providing parents and pupils with the necessary information and the opportunity to make informed decisions about the use of their data.
-
Standardisation and regulation: Ensuring that AI tools used within schools are of a uniform standard to avoid exacerbation of inequalities, with strict oversight of any tech companies providing the tools.
-
Age and subject restrictions: Using AI tools only where appropriate and where they add value. Strict age restrictions on direct interaction with AI.
-
Profit sharing: Ensuring that tech companies that benefit from accessing data share some of their profits so that this can be reinvested into the education system and benefit schools and pupils – while recognising that private companies will need to be incentivised to develop better tools.
6.2 Human oversight
Participants stressed the importance of human involvement in AI use at every step of the process.
Given the recent developments in AI, and the need to continue to optimise it, the use of any tools in the classroom or at home was seen as risky if not overseen by humans, at least to begin with. This concern was particularly pronounced after participants heard about the risks of bias and about AI only being as good as the data it learns from. Many noted that AI can make mistakes or ‘hallucinate’ inaccurate responses, and would need humans to ensure nothing was being taught or assessed incorrectly. There was also an assumption that errors made by AI would be harder to correct than those made by a teacher, which can often be addressed directly by parents or pupils in conversation. This means AI tools should always be checked, with any resources created looked over by teachers, any marking or feedback generated by AI tools reviewed by teachers, and any tests or exams marked by AI being assessed by teachers or external markers.
Parents were particularly keen that pupils’ AI use is supervised or at least controlled, and that AI tools are never used as a substitute for a teacher. Pupils similarly stressed that learning should not be solely delivered by an AI tool operating independently, as teacher-pupil interaction is highly valued and most felt some level of human subjectivity is always needed. Pupils also worried that AI use without human oversight might mean errors made by AI are overlooked, leading to them not learning the skills they need or being taught incorrectly. They felt any potential errors could and should be picked up by human review of AI outputs at an earlier stage.
Parent of pre-GCSE pupil, Newcastle:
The [AI] tool should supplement the teacher, not replace or undermine [the teacher]. A pupil-teacher relationship is still very important for [the pupil’s] development.
6.3 Parent and pupil agreement for use of work and data
Both parents and pupils felt they should be enabled to make free and informed decisions about how pupil work and data is used.
This means understanding when and why AI tools will be used, and how and why pupil work and data will be used to optimise them. Almost all participants felt that agreement should be a pre-condition of AI use.
Despite consensus that agreement should be required, views around the details of agreement differed:
-
Parents emphasised their responsibility to make informed decisions for their children’s wellbeing. They therefore felt their permission ought to be required, particularly for younger pupils (generally those aged under 16). Many were resistant to the idea that their children could make these decisions for themselves, wanting to have a say in all aspects of their children’s education.
-
Pupils tended to attach more importance to their own comfort with AI and work and data use, particularly with the understanding that the work they create is their intellectual property. Most pupils we spoke to had experience of permitting data sharing for themselves when signing up to and/or using apps and websites, and most did not view agreeing to work and data use for AI optimisation purposes any differently. While many were happy for their parents to also have a say, some felt this should not supersede their own wishes, and that pupils should have final say over the use of their work and data above a certain age (13 or 16).
Parent of GCSE pupil, Birmingham:
Up to 16, it’s definitely a parental choice, but as they start to make their own choices this would be included.
Post-GCSE pupil, Birmingham:
Might be good to trial with older kids, because we can already consent ourselves and then you could show the parents the positive data.
Expectations for how permission would be provided varied, but most parents described an “opt-in” model and expected to be given the chance to understand and agree to all potential uses of their child’s data and work. Parents suggested that this agreement could be “staggered” as understanding of AI tools and comfort with their use grows, and that schools and DfE could make decisions about AI use within the parameters of the permission provided. Generally, the expectation was that even completely anonymised data and work would require some level of permission to be shared and/or used, though most participants indicated they would agree to its use. However, there was little consideration of how this would work in practice, especially alongside equitable access to AI for all pupils and schools, which was seen as an important condition for its use.
Generally, pupils expressed higher levels of comfort with sharing their data than parents, many of whom had serious concerns about data privacy, security and storage. A few pupils assumed their parents would lack understanding and would be reluctant to allow them to share their data as a result, in contrast to their own willingness to share it. Many parents noted that widespread AI use and normalisation of data-sharing would make them feel more positively about it and more likely to easily provide permission, assuming that once AI use becomes “tried and tested”, concerns are likely to be alleviated.
6.4 Standardisation and regulation
AI use in schools should only be through standardised and strictly regulated tools to ensure quality control and equity of access.
Parents and pupils stressed that all schools should have access to the same quality-assured AI tools. Many suggested this could be achieved through certification processes sanctioned by schools and the government, with only AI tools that are officially tested and meet a minimum performance standard being approved for use in education. For many, this would alleviate concerns about some pupils or schools benefitting over others by accessing more developed AI tools.
Concerns about the quality of AI tools also led to worries that pupils could be penalised for, or disadvantaged by, poor teaching or support provided by low-performing AI tools. Pupils worried that they would be held accountable for any errors committed as a result of incorrect AI teaching or support. Parents also wanted guarantees that, in cases where low-performing AI tools led to poor pupil performance, the pupil would not be penalised, and emphasised a need for regulations ensuring clear accountability in case of AI error or misuse. In particular, parents of primary and pre-school children wanted guarantees of accountability in the case of malicious or inappropriate content being propagated by AI tools, along with strong and appropriate content safeguards to ensure they are safe for children to use.
Parent of post-GCSE pupil, Newcastle:
If used in marking exams, make sure its accurate so pupils are not disadvantaged.
While there was no overall consensus on who should ultimately be held accountable for any issues that arise, many suggested DfE and schools both have a responsibility to ensure AI tools are fit for use, and to minimise and rectify any errors or misuse. Others felt that this responsibility should lie with tech companies, and that, as the developers of these tools, they should be held to account if their use harms pupils.
Regulation was also felt to be crucial for ensuring stringent data collection, privacy, and security.
DfE and the wider government were generally seen as responsible for setting, communicating, and maintaining these standards. Parents in particular expected clear rules to be established for:
-
How pupil data can be collected;
-
For what purpose it can be collected;
-
How it will be stored;
-
How long it will be stored for; and
-
Who can access it.
Parents emphasised the importance of these regulations being put in place and communicated as a pre-condition for widespread AI use in education.
6.5 Age and subject restrictions
Parents and pupils were in agreement that the use of AI tools should be restricted, with the most accepted uses involving older pupils and subjects seen as “objective”.
There was a general consensus that AI tools would be best used directly by pupils in secondary education, at which point both parents and pupils felt that pupils would be able to confidently and safely interact with the technology. There was less concern about pupils not developing necessary social skills at this point (due to interacting with AI tools alongside teachers), and less concern about the use of pupils’ data and work. Overall, both parents and pupils felt most comfortable with AI tools being directly used by pupils old enough to understand the tools and agree to their use. Parents’ estimation of this age tended to be higher than pupils’: pupils were more likely to set the minimum age at 11 or 13, while many parents felt that pupils would only be able to meaningfully agree at age 16.
GCSE pupil, Birmingham:
Maybe it’s not appropriate for young kids, you should have restricted access, and it might not simplify it enough.
Parents of primary and pre-school pupils were least comfortable with the potential use of AI tools, citing concerns around unintentional exposure to harmful content and children not picking up the skills they need to develop. At this age, the importance of play and socialisation was emphasised, and parents worried these elements of young children’s day-to-day education would be lost or minimised through reliance on AI.
Both parents and pupils were most comfortable with AI being used to support learning (and particularly to mark work and/or provide feedback) in subjects seen to have more concrete, and therefore more easily assessed, answers, such as Science or Maths. These subjects, which often involve questions with simple answers (for example, multiple choice), were seen as less likely to confuse AI tools or to be assessed incorrectly due to bias or a lack of understanding. Participants broadly felt reassured that AI tools could be sufficiently optimised to assess these forms of work correctly, and would trust their use when overseen by a teacher.
There was considerably less openness to AI being used to support marking or to assess more creative or subjective subjects like Art, English, Religious Studies or Social Studies. Participants deeply doubted that AI could engage with pupils’ schoolwork in these subjects in the same way as a human, or grasp their nuances as a teacher would. They also broadly felt that these forms of schoolwork are more personal to pupils, or involve more effort to create, making the stakes of any AI error feel higher.
Parent of primary school pupil, Bristol:
You lose being creative, the students being creative, relying on an AI to educate them, and then using AI to do their homework, they’re going to lose that creativity.
6.6 Profit sharing
There was widespread consensus that, if profit were to be generated through the use of pupils’ work to enhance AI in education, schools should be the beneficiaries, and there was resistance to the idea of tech companies being the sole profiteers.
Generally, parents and pupils acknowledged that pupils profiting individually from the use of their work and data would not be feasible, but almost all strongly believed that any profits derived from this use should be distributed among schools so that pupils benefit. This belief was reinforced by the understanding of intellectual property and pupils’ ownership of their work and data. Participants suggested that a minimum share of the profits should be handed back to schools, though views on how to distribute it varied: many felt it should be used to maximise equality of access to AI (funding AI tools and resources for schools that cannot afford them themselves), while others felt profits should be shared equally. Few thought profits should correspond to each school’s level of data sharing and AI use, and participants were especially positive about profits being used to level the playing field for schools.
While participants did want schools to profit from AI use, some felt this could happen through profits being used by local authorities or regional bodies to improve education in the area, or by DfE to improve the education sector at a national level, rather than being distributed to individual schools. Most were comfortable with profits being shared between schools and DfE; however, the general assumption was that pupils would benefit most directly if profits were distributed to individual schools.
Participants accepted that tech companies would profit in some way from the use of pupil work and data, but the consensus was that they should not be the sole beneficiaries. Parents of children with SEND were particularly negative about AI tool development becoming a money-making exercise. Understanding of how exactly tech companies could profit was limited, with most assuming that they would make money by selling pupils’ data to third parties. There was a lack of awareness of other ways in which they might benefit from this data use such as by developing other AI tools for commercial use. On prompting, this form of benefit was generally seen as acceptable if used to develop educational tools for use outside the education sector, but unacceptable if used to develop tools for other purposes. This possibility was seen as misusing data for something other than its intended use, reflecting existing discomfort and concerns about data being sold by tech companies without participants’ knowledge or agreement.
7. Reflections and implications for future research
7.1 Methodological reflections
Due to time pressures, the in-person fieldwork was carried out as a single six-hour session per location. Sitting still and processing information for this length of time can be challenging for adults’ attention spans and energy, but it was particularly difficult for pupils. We knew we would need to share large volumes of information, and aimed to make the sessions as engaging as possible by:
-
Using different types of stimulus (including animations, videos from experts, worksheets, hands-on demonstrations of AI tools);
-
Providing written summaries of all videos; and
-
Including activities that would require participants to stand up and move around (including voting exercises).
However, in the end, we had to adapt our approach in several ways to counteract participant fatigue:
-
In the first workshop, we asked participants to compare three different future scenarios, with detailed information about the different use cases of AI in education, the types of data and work that would be used to optimise it, and the conditions in place to regulate its use. This activity took place towards the end of the workshop, and participants found it very challenging to compare such abstract, yet detailed, scenarios. In subsequent workshops, we focussed instead on asking participants to describe the future they would like to see, rather than testing potential scenarios first.
-
We gave pupils additional break time after lunch. By this point they had grasped the basic principles of machine learning, and the extra break meant they were more refreshed for the final activity, where we discussed conditions for use.
Some lessons for future engagement workshops:
-
Including more interactive tools can help to bring concepts to life and keep participants engaged. Participants who had not previously used LLM tools benefited from being able to see how they work in practice. For future engagements, it may be worth thinking carefully about how devices and applications can be used in sessions.
-
There are some practical implications for running joint sessions for parent and pupil groups, as they have different needs. We adapted discussion guides for parents and pupils and, as much as possible, made all stimulus suitable for the youngest sample members. However, it may be worth considering splitting groups, so their agendas are decoupled from one another, allowing more flexibility and further adaptation to suit participants’ age.
-
Shorter sessions over several weeks, as well as a mix of in-person and online fieldwork, may be more suitable for complex topics such as this. Online participants, who had a week between workshops, returned to the second session refreshed. In addition, many had used the interim to think about or discuss what they had learnt with friends or family, which meant they brought more nuanced perceptions and opinions to the final session.
7.2 Areas for future research
The research showed that awareness, understanding, and opinions of AI are all still evolving. As the technology becomes more established, the public will be further exposed to its applications and form opinions based on those experiences. However, we also know how important the commentary and opinions of others - both experts and laypeople - are in shaping views and trust. For parents in particular, other parents are powerful influencers, so it will be important to continue engaging with this audience to understand how they feel about the use of AI in education.
There are also a number of specific questions surfaced by the research, which we feel warrant further exploration:
-
The relationship between private interest and public good: How comfortable are parents and pupils with private companies profiting and how are they held to account and incentivised to ensure they put public good first?
-
Oversight and coordination of data sharing: To what extent is there support for the central management and facilitation of data access across government and with researchers and private companies? Would parents and pupils be comfortable with an “EDR UK” organisation, similar to HDR UK, ADR UK, or SDR UK?
-
Equal access and opting out: What happens if you want to opt out? And how can we ensure nobody is left behind?
8. Appendix
8.1 Demographic sample breakdown
| Category | Criteria | Total |
| --- | --- | --- |
| Location | Bristol | 36 |
| | Birmingham | 36 |
| | Newcastle | 36 |
| Location Type | City/Urban | 48 |
| | Suburban/Small town/Large village | 32 |
| | Rural | 26 |
| | Unknown | 2 |
| Gender | Male | 43 |
| | Female | 65 |
| Age | 18 and under | 36 |
| | 19-24 | 1 |
| | 25-39 | 22 |
| | 40-59 | 47 |
| | 60+ | 2 |
| Ethnicity | White | 79 |
| | Black, Black British, Caribbean or African | 16 |
| | Asian or Asian British | 10 |
| | Mixed or Multiple ethnic groups | 2 |
| | Other | 1 |
| Feeling about technological developments and uses of AI (parents only) | Excited | 36 |
| | Sceptical/Worried | 36 |
| Total | | 108 |
8.2 Expert video breakdown
| Role and organisation | Name | Topic |
| --- | --- | --- |
| Head of Government Practice at Faculty | Tom Nixon | What is AI and why is it important? |
| Data Scientist at 10 Downing Street | Andreas Varotsis | What is machine learning? |
| Head of Digital Education at Bourne Educational Trust | Chris Goodall | Potential benefits of using AI for teachers and pupils |
| Head of Digital Learning at Basingstoke College of Technology | Scott Hayden | Potential benefits of using AI for teachers and pupils |
| Digital Strategy at the Department for Education | Fay Skevington | Potential risks of using AI around data protection, privacy, and IP |
| Parliamentary Under-Secretary of State at the Department for Education | Baroness Barran | The bigger picture: wider risks and benefits of AI use and how to manage them |