Guidance

Online disinformation and AI threat guidance for electoral candidates and officials

Updated 17 June 2024

This guidance outlines mitigations to disrupt the impact of disinformation campaigns, which are increasingly being created using generative AI. It should be read alongside the NCSC's Defending Democracy guidance, which provides more detailed advice for all those working within political parties, local authorities, central government and devolved administrations.

In recent years, the emergence of generative artificial intelligence (AI) has provided attackers with further tools that can be used to disrupt the security of elections in the UK, to influence the result, or to undermine citizens’ trust in the electoral process itself.

Anyone involved in the election process could be targeted by online disinformation. This includes high-profile candidates and local party offices, as well as officials required to run the election and IT staff who provide technical support to candidates, local and central government, and political parties. 

What should you do?

Reducing the likelihood of an attack

  • Personal information can be used to make scams or fake content more convincing.  Consider how you use social media (both personally and professionally), what you share, and your account privacy settings.
  • When creating and sharing official documents and communications, always use official devices and/or communication channels, rather than your own. Similarly, avoid using official devices and accounts for personal purposes.
  • Use strong passwords and set up two-step verification (2SV) to make it harder for an attacker to access your devices. Your support staff will be able to help with this, as well as ensuring your devices and communication channels are secure.
  • Familiarise yourself with social media platforms’ policies and processes regarding disinformation and AI-generated media to ensure you know how to report content in advance (there are links from the social media page on gov.uk).

If you are affected by disinformation or generative AI content

  • Report details to the relevant platform – there are details on the gov.uk page above on how to contact X (formerly Twitter), Meta (Facebook, Instagram, WhatsApp, and Threads), Google (YouTube), TikTok, and Microsoft. 
  • Report this to your political party, which should be able to offer support and will have relevant comms channels in place to escalate cases to platforms or the police. Independent candidates should contact platforms/police directly, in the absence of a central party.
  • Think before you respond to any reports of disinformation. Responding may inadvertently amplify the suspected disinformation and could make the matter worse. If an official response is required, use official channels and avoid referencing the disinformation.
  • If you feel a threat or danger is immediate, you should call 999.

Following an attack

  • Election officials should report details of deepfake incidents, or any instances of false information relating to the administration of the election (such as when, where and how people can vote, and who can vote), to the Returning Officer (RO) for their area. Addresses and telephone numbers of all elections offices can be found on the Electoral Commission website. Local authorities should ensure that staff, including those employed through an agency, know how to report concerns.
  • Returning Officers should liaise with the Electoral Commission and their local police Elections Special Point of Contact (SPoC) if they are made aware of a deepfake incident involving a candidate or false information.
  • Candidates may wish to report details of deepfake incidents to the Returning Officer (RO) for their area, in addition to reporting the content directly to the platform and their party.

Note: Where material is thought to constitute a criminal offence, you should report it to the police as soon as possible. We encourage candidates to keep in touch with their Operation Bridger SPoC to find out about the support available to them. Further guidance for candidates on online harassment and abuse can be found in the Joint Policy Guidance (PDF, 278KB) for candidates in elections.

What is the risk from generative AI? 

Generative AI is software that can create high-quality fake content, including text, images and video. It has been possible to create or doctor images for a long time; what has changed is the ease with which fake content can now be created (and how quickly it can be shared online), allowing attackers to spread disinformation at scale. The most prevalent types of content created by generative AI tools (and some high-profile examples) are included below.