Guidance

Private and public channels: improve the safety of your online platform

Practical steps to manage the risk of online harm if your online platform allows people to interact, and to share text and other content.

Public channels are areas of services where content is visible to the general public or any other user. Private channels are areas of services where users can expect more privacy, such as private messaging or closed social media groups.

Both private and public channels can be used by people to interact, and to share text and other content. This page will help you understand how public and private channels on a platform can create risks to users’ safety, and how to manage those risks.

New online safety legislation is coming which will aim to reduce online harms. If you own or manage an online platform in scope of the forthcoming legislation, you will have a legal duty to protect users against illegal content. You will also have to put in place measures to protect children if they are likely to use your service.

Harms that can happen on private channels

Example of harm on a private channel

A 13-year-old child enjoys using a social networking website that allows users to interact privately. The platform has a safety setting that stops users from messaging accounts they do not know, but this is not turned on by default.

The child starts to get inappropriate messages from a stranger. They tell their parents, who report the account to the platform owner and change the child’s settings to private. If the settings had been set to private by default, the child would not have been sent the messages in the first place.

How harms can happen on private channels

When user activity is hidden, online harms are more difficult to identify and prevent. Some users may also take advantage of this privacy for illegal or harmful purposes. This increases the risk of serious and illegal harms such as grooming and child sexual exploitation and abuse.

Harms that can happen on public channels

Public channels allow users to share content that can be viewed by large numbers of people. This makes it easier for harmful or illegal content to be viewed by or shared with a large number of users very quickly. It can also increase the risk of harms such as cyberbullying, where large numbers of users, including strangers, may direct abuse towards a user.

Harms that can occur on both private and public channels include (but are not limited to):

  • cyberbullying and cyberstalking

  • child sexual exploitation and abuse

  • terrorist content

  • hate crime

  • self-harm and suicide content

How to improve safety by reducing harms on your channels

1. Know your users

If you allow your users to create accounts, you should:

  • make users verify their accounts during account creation - for example, using two-factor authentication (2FA)

  • establish how old your users are, using age assurance technology

Find out more about safety technology providers.
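As an illustration only, the short Python sketch below shows one way a platform might gate account activation on verification steps and treat accounts without a reliable age estimate as child accounts. The class and field names (SignupState, estimated_age and so on) are assumptions made for this example, not a required design or a real provider’s API.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SignupState:
        email_verified: bool            # for example, via a confirmation link
        second_factor_enrolled: bool    # for example, a TOTP or SMS code (2FA)
        estimated_age: Optional[int]    # from an age assurance check, if available

    def can_activate_account(state: SignupState) -> bool:
        """Only activate accounts that have completed the verification steps."""
        return state.email_verified and state.second_factor_enrolled

    def is_child_account(state: SignupState) -> bool:
        """Treat accounts without a reliable age estimate as child accounts,
        so the highest default protections apply (a cautious assumption)."""
        return state.estimated_age is None or state.estimated_age < 18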

2. Set safety settings to high by default

Consider this for all users. Your privacy and safety options should be clear and accessible to everyone. The highest safety level you offer should make sure that:

  • users’ content, contacts and activity are only visible to friends

  • users’ location is not shared with strangers

  • automatic face recognition for images and videos is turned off

  • users must confirm they understand the risks of uploading or sharing personal information before they proceed

For users under the age of 18, you may want to do one or more of the following:

  • stop them from changing their safety levels to low or turning them off

  • require additional authorisation before they can reduce their safety levels - for example, from a verified parent or guardian using parental controls

  • provide clear, age-appropriate information on the consequences of changing their default safety and privacy settings
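The sketch below illustrates one possible way to represent high-safety defaults and to require parental authorisation before an under-18 user can lower them. It is a minimal example; the SafetySettings fields and the parental_authorisation flag are illustrative assumptions, not a prescribed schema.

    from dataclasses import dataclass

    @dataclass
    class SafetySettings:
        # Highest safety level applied by default to every new account
        content_visible_to: str = "friends"            # not "public"
        share_location_with_strangers: bool = False
        automatic_face_recognition: bool = False
        warn_before_sharing_personal_info: bool = True

    def apply_settings_change(age: int, requested: SafetySettings,
                              parental_authorisation: bool) -> SafetySettings:
        """Keep the protective defaults for under-18 users unless a verified
        parent or guardian has authorised the change (one possible approach)."""
        if age < 18 and not parental_authorisation:
            return SafetySettings()  # keep the high-safety defaults
        return requested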

3. Protect children by limiting functionality

On public and private channels, you can do this by stopping unconnected and unverified users from messaging child accounts or interacting with their content.

For private channels only, you can also prevent end-to-end encryption for child accounts.

End-to-end encryption makes it more difficult for you to identify illegal and harmful content occurring on private channels. You should consider the risks this might pose to your users.
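As a minimal sketch, the example below shows a message-permission check that blocks unconnected or unverified senders from messaging child accounts. It assumes the platform already knows whether a sender is verified and whether the two accounts are connected; the Account data model is illustrative only.

    from dataclasses import dataclass, field

    @dataclass
    class Account:
        user_id: str
        is_verified: bool
        is_child: bool
        connections: set = field(default_factory=set)  # user_ids of accepted contacts

    def may_message(sender: Account, recipient: Account) -> bool:
        """Block unconnected or unverified senders from messaging child accounts."""
        if recipient.is_child:
            return sender.is_verified and sender.user_id in recipient.connections
        return True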

4. Make it easier for individuals to report harmful content or behaviour

Do this by making sure your reporting processes are:

  • available at relevant locations and times, such as when a user is sending a private message

  • easy to use and understand for users of all ages and abilities

  • prompted to a user when suspicious activity has been detected

You should also make sure users can access appropriate support and resources for the type of harm they may have encountered. For example, you could direct users to charities that work to tackle specific harms and may have appropriate resources to share.
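The sketch below shows one way a report could be recorded and the reporter pointed towards support resources for the type of harm involved. The harm categories and resource entries are placeholders, not recommendations of specific organisations.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    # Placeholder mapping from harm type to support resources
    SUPPORT_RESOURCES = {
        "bullying": "placeholder link to an anti-bullying organisation",
        "self_harm": "placeholder link to a mental health support service",
        "child_safety": "placeholder link to a child protection organisation",
    }

    @dataclass
    class HarmReport:
        reporter_id: str
        content_id: str
        harm_type: str      # for example, "bullying"
        context: str        # for example, "private_message"
        created_at: datetime

    def submit_report(reporter_id: str, content_id: str,
                      harm_type: str, context: str):
        """Record the report and return support resources for the harm type."""
        report = HarmReport(reporter_id, content_id, harm_type, context,
                            datetime.now(timezone.utc))
        resources = SUPPORT_RESOURCES.get(harm_type,
                                          "placeholder link to general support")
        return report, resources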

5. Consider using automated safety technology

Automated safety technology can scan for, identify and remove both known and new content that may be harmful to your users. If you choose to use automated technology to remove content before it is posted, you should consider using human moderators to support this process. Many harms require a human moderator to assess the context and decide whether content is illegal or violates your terms of service.

Find out more about safety technology providers.

If you identify that a user may be about to share or access content that is likely to be illegal or against your terms of service, you can make them aware of this risk. You can also direct them to resources where they can get support.

Take care not to infringe on your users’ right to privacy or limit their freedom of expression. Not all harmful content needs to be removed, but content that violates your terms of service should be.

Once the new online safety legislation becomes law, harmful content should be removed if it violates your terms of service or poses a risk to child users.

Having clear and accessible terms of service will help users to understand what is and is not allowed on your platform.

Example of harm on a public channel

A forum website allows users with accounts to start discussion threads with multiple users. The website has automated technology built into it that flags high-risk terms, such as abusive and threatening language.

When one user uses a term associated with hate speech, the post is immediately flagged to a human moderator who is able to prioritise it for review and removal.
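As a simplified illustration of that flow, the sketch below flags posts containing high-risk terms and places them in a prioritised queue for human review. Real moderation systems use more sophisticated classifiers and context; the term list here is a placeholder.

    import heapq

    # Placeholder list of high-risk terms; a real system would use a maintained
    # classifier or term set, not hard-coded examples
    HIGH_RISK_TERMS = {"placeholder_abusive_term", "placeholder_threatening_term"}

    review_queue = []  # (priority, post_id, text); lower number = higher priority

    def flag_if_high_risk(post_id: str, text: str) -> bool:
        """Flag posts containing high-risk terms for priority human review."""
        if any(term in text.lower() for term in HIGH_RISK_TERMS):
            heapq.heappush(review_queue, (0, post_id, text))  # top priority
            return True
        return False

    def next_post_for_moderator():
        """Human moderators review the highest-priority flagged post first."""
        return heapq.heappop(review_queue) if review_queue else None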



Updates to this page

Published 29 June 2021
