Guidance

Live streaming: improve the safety of your online platform

Practical steps to manage the risk of online harm if your online platform allows people to live stream, or to view live streams created by others.

Live streaming is when audio or video is broadcast live over the internet. This page will help you understand how live streaming on a platform can create risks to users’ safety, and how to manage those risks.

New online safety legislation is coming that will aim to reduce online harms. If you own or manage an online platform in scope of the forthcoming legislation, you will have a legal duty to protect users against illegal content. You will also have to put in place measures to protect children if they are likely to use your service.

How to design safer live streaming

Because live broadcasts happen in real time, they are harder to monitor for harmful activity, and harms are more difficult to prevent.

Vulnerable users are at greater risk of being pressured into risky behaviour during live streams. A user who is live streaming can also expose their location, which leaves them open to being tracked or targeted.

Harms that can happen during live streaming include (but are not limited to):

  • cyberbullying and cyberstalking

  • child sexual exploitation and abuse

  • terrorist content

  • hate crime

  • self-harm and suicide content

How to manage risks and harms if you offer live streaming

1. Know your users

If you allow your users to create accounts, you could:

  • have users verify their accounts during account creation - for example, using two-factor authentication (2FA)

  • establish how old your users are, using age assurance technology such as age verification

Find out more about safety technology providers
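
For illustration only, here is a minimal sketch in Python of how a platform could gate live streaming on account verification and age assurance. The field names (is_verified, assured_age) and the decision logic are assumptions made for the example, not requirements from this guidance.

```python
# A minimal sketch of an account gate for live streaming features.
# The field names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Account:
    is_verified: bool            # for example, confirmed via 2FA at sign-up
    assured_age: Optional[int]   # result of an age assurance check, if any


def can_use_live_streaming(account: Account) -> bool:
    """Allow live streaming only for verified accounts with an assured age."""
    if not account.is_verified:
        return False
    if account.assured_age is None:
        # No age assurance result: treat the user as potentially a child
        # and withhold live streaming until their age is established.
        return False
    return True
```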

2. Protect children with appropriate functionality

Do this by preventing users under the age of 18 from live streaming:

  • without parental consent

  • to strangers or unverified accounts

You should also:

  • consider the risk of allowing children to live stream

  • limit audience sizes

  • prevent unverified users from messaging child users or commenting on their content

  • consider whether end-to-end encryption is necessary, and whether you can manage the risks it might pose to users

  • moderate the stream and any associated messaging while it is happening
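
As an illustration of the checks above, here is a minimal Python sketch of what a platform might run before letting a user under 18 start a broadcast. The names (has_parental_consent, is_verified_contact) and the audience cap are illustrative assumptions, not prescribed values.

```python
# A minimal sketch of child-protection checks run before a broadcast starts.
# All names and the audience cap are illustrative assumptions.
from dataclasses import dataclass
from typing import List


MAX_CHILD_AUDIENCE = 50  # example audience cap for streamers under 18


@dataclass
class Viewer:
    is_verified_contact: bool  # a known, verified friend rather than a stranger


@dataclass
class Streamer:
    age: int
    has_parental_consent: bool


def may_start_stream(streamer: Streamer, audience: List[Viewer]) -> bool:
    """Apply child-protection checks before a live stream can begin."""
    if streamer.age >= 18:
        return True
    if not streamer.has_parental_consent:
        return False
    # Only verified contacts may watch, and the audience size is capped.
    if any(not viewer.is_verified_contact for viewer in audience):
        return False
    return len(audience) <= MAX_CHILD_AUDIENCE
```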

3. Set safety settings to high by default

Do this for all users. The highest safety level you offer should make sure that:

  • users’ live streams, content, contacts and activity are only visible to friends

  • users cannot share their location with strangers

  • automatic face recognition is turned off

  • unverified users cannot watch or interact with users under the age of 18

For users under the age of 18, you may want to do one of the following:

  • stop them from reducing their safety settings

  • require additional authorisation before they can reduce their safety settings - for example, from a verified parent or guardian using parental controls
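
For illustration, here is a minimal Python sketch of a high-by-default safety settings record, with an extra authorisation step before an under-18 user’s settings can be lowered. The field names and the PermissionError behaviour are assumptions made for the example, not part of this guidance.

```python
# A minimal sketch of high-by-default safety settings per account.
# Field names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class SafetySettings:
    streams_visible_to_friends_only: bool = True
    location_sharing_with_strangers: bool = False
    automatic_face_recognition: bool = False
    unverified_users_can_contact_minors: bool = False


def reduce_safety(settings: SafetySettings, user_age: int,
                  guardian_authorised: bool) -> SafetySettings:
    """Only relax settings for under-18s with additional authorisation,
    for example from a verified parent or guardian."""
    if user_age < 18 and not guardian_authorised:
        raise PermissionError("Guardian authorisation is required to lower safety settings")
    # Example of relaxing a single setting once authorised.
    settings.streams_visible_to_friends_only = False
    return settings
```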

4. Make it easier for users to report harmful content or behaviour

Do this by making sure your reporting processes are:

  • available at relevant locations and times - for example, prompting users with age-appropriate reporting tools at the start and end of all broadcasts, or when suspicious activity is identified

  • easy for users of all ages and abilities to use and understand

You should also make sure users can access appropriate support and resources for the type of harm they may have encountered. For example, you could direct users to charities that work to tackle specific harms and may have appropriate resources to share.
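
As a rough illustration, the following Python sketch shows reporting tools being surfaced at the start and end of a broadcast, or when suspicious activity is flagged, with different wording for younger users. The event names and prompt text are assumptions, not part of this guidance.

```python
# A minimal sketch of surfacing reporting tools at relevant moments in a
# broadcast. The event names and helper are illustrative assumptions.
REPORT_PROMPT_EVENTS = {"broadcast_started", "broadcast_ended", "suspicious_activity"}


def reporting_prompt(viewer_age: int) -> str:
    """Return an age-appropriate reporting prompt for the viewer."""
    if viewer_age < 18:
        return "Something wrong? Tap here to tell us or get help."
    return "Report this broadcast or get support."


def on_broadcast_event(event: str, viewer_age: int) -> None:
    """Show reporting tools at the start and end of a broadcast,
    or when suspicious activity is flagged."""
    if event in REPORT_PROMPT_EVENTS:
        print(reporting_prompt(viewer_age))
```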

5. Consider using automated technology

Automated safety technology can scan for, identify and remove known illegal or potentially harmful content. You can use it to detect, flag, and remove or block broadcasts that contain illegal content. Automated tools should be supported by human moderators.
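
For illustration, here is a minimal Python sketch of automated scanning that compares sampled frames of a broadcast against hashes of known illegal content and escalates matches to human moderators. Real systems typically use perceptual hashing and specialist hash databases; the helper names here are assumptions made for the example.

```python
# A minimal sketch of automated scanning of a broadcast against a list of
# hashes of known illegal content. The hashing scheme is illustrative only;
# production systems generally use perceptual hashing.
import hashlib
from typing import Iterable, Set


def frame_hash(frame_bytes: bytes) -> str:
    """Hash a video frame (a perceptual hash would be used in practice)."""
    return hashlib.sha256(frame_bytes).hexdigest()


def scan_broadcast(frames: Iterable[bytes], known_hashes: Set[str]) -> bool:
    """Return True if the broadcast should be blocked and queued for human review."""
    for frame in frames:
        if frame_hash(frame) in known_hashes:
            # Block the stream and escalate to a human moderator to confirm.
            return True
    return False
```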

Take care not to limit your users’ freedom of expression. You do not need to remove all harmful content, only content that violates your terms of service. Having clear and accessible terms of service will help users understand what is and is not allowed on your platform.



Updates to this page

Published 29 June 2021
