Interim report: Review into online targeting
Updated 25 July 2019
1. About the CDEI
The adoption of data-driven technology affects every aspect of our society and its use is creating opportunities as well as new ethical challenges.
The Centre for Data Ethics and Innovation (CDEI) is an independent advisory body, led by a board of experts, set up and tasked by the UK Government to investigate and advise on how we maximise the benefits of these technologies.
The CDEI has a unique mandate to make recommendations to government on these issues, drawing on expertise and perspectives from across society, as well as to provide advice for regulators and industry that supports responsible innovation and helps build a strong, trustworthy system of governance. The Government is required to consider and respond publicly to these recommendations.
We convene and build on the UK’s vast expertise in governing complex technology and innovation-friendly regulation, and on our global strength in research and academia. We aim to give the public a voice in how new technologies are governed, promoting the trust that’s crucial for the UK to enjoy the full benefits of data-driven technology. The CDEI analyses and anticipates the opportunities and risks posed by data-driven technology and puts forward practical and evidence-based advice to address them. We do this by taking a broad view of the landscape while also completing policy reviews of particular topics.
In the October 2018 Budget, it was announced that the CDEI would be exploring the use of data in shaping people’s online experiences and the potential for bias in decisions made using algorithms. These two large-scale reviews form a key part of the CDEI’s 2019/2020 Work Programme. More information about the CDEI can be found here.
2. Foreword from Roger Taylor, Chair, Centre for Data Ethics and Innovation
New data-driven technology is transforming our society. Whether it is deciding what video to recommend or diagnosing a serious illness, the ability of machines to process vast amounts of data and make decisions is powering the economy and industry, reshaping public services and opening up new areas of research and discovery.
Artificial intelligence and algorithmic systems can now operate vehicles, decide on loan applications and screen candidates for jobs. The technology has the potential to improve lives and benefit society, but it also brings ethical challenges which need to be carefully navigated if we are to make full use of it. It is an issue that governments worldwide are now grappling with – including the UK through the establishment of the CDEI. There are significant rewards for societies that can find the right combination of market-driven innovation and regulation to maximise the benefits of data-driven technology and minimise the harms. The UK, with its robust legal and regulatory systems, its thriving technology industry, and its leading academic institutions, is well placed to achieve this.
Our goal is an environment in which the public are confident their values are reflected in the way data-driven technology is developed and deployed, where we can trust that decisions informed by algorithms are fair, and one where risks posed by innovation are identified and addressed. It is in such an environment that ethical innovation will flourish.
The CDEI’s role is to set out what needs to be done to address the challenges posed by data-driven technology and to realise its benefits. We will do this by providing high quality and robust advice to the Government. Our role is in part to identify gaps in current governance frameworks, whether that is in legislation, regulation or other mechanisms. But equally important is our duty to identify how public policy can support the development of technologies and industries which allow us to benefit safely from automated decision systems, robotics and artificial intelligence.
While data-driven technology continues to develop at great speed, there is no shortage of predictions about its future impact. But some of the challenges are with us now and are not merely theoretical. In these Reviews, the first for the CDEI, we are focusing on two of the more urgent issues: Online Targeting and Bias in Algorithmic Decision-Making. Both are topics which cut across a range of applications of data-driven technology and force us to confront different ethical questions.
I am delighted to publish our interim reports setting out the progress we have made in our first two Reviews. The reports outline our analysis to date and our emerging insights as we develop our recommendations for the Government.
I would like to thank all those who have contributed to the CDEI’s work so far. These contributions have been invaluable in helping us explore the complicated issues we need to understand. I look forward to continuing to work together as we develop our final recommendations.
3. Executive summary
Online targeting is a critical part of modern internet societies. Algorithms designed by technology companies and the ‘ad-tech ecosystem’ influence much of what we see online. They suggest videos for us to watch, products to buy, and people we might connect with.
The technology that underpins online targeting has been enormously beneficial in enabling us to navigate the internet and find relevant content and products. It does this by predicting what we will find interesting or useful. It is being used for social benefit: for example, helping to identify education needs, recommend jobs and careers, or advise on health and diet. But as targeting has become more accurate in its predictions and more powerful in its ability to influence our behaviour, concerns have grown about the extent to which we understand the way it influences our individual decisions and the impact it is having on our society.
Online targeting differs from more traditional types of targeting in three clear ways. First, it involves the collection of an unprecedented amount of data about people, and uses increasingly sophisticated analyses to yield powerful insights about people and groups. Second, it enables the targeting of content, products and services to different individuals at scale at relatively low cost, and often without the individual knowing that they are being targeted or on what basis. Third, targeting algorithms learn from our behaviour by monitoring outcomes and improving on targeting approaches in real-time.
This Review seeks to answer three sets of questions:
- Public attitudes: Where is the use of technology out of line with public values, and what is the right balance of responsibility between individuals, companies, and government?
- Regulation and governance: Are current regulatory mechanisms able to deliver their intended outcomes? How well do they align with public expectations? Is the use of targeting online consistent with principles applied through legislation and regulation offline?
- Solutions: What technical, legal or other mechanisms could help ensure that the use of online targeting is consistent with the law and public values? What combination of individual capabilities, market incentives and regulatory powers would best support this?
Our work to date has led to some emerging insights that respond to these three sets of questions and will guide our subsequent work.
3.1 Public attitudes
We are undertaking an extensive public dialogue exercise to better understand public attitudes towards targeting. Our early findings suggest that people’s attitudes towards targeting change when they understand more of how it works and how pervasive it is. While people recognise the benefits of online targeting, most seem to agree that there are some forms of targeting which make them uncomfortable, and that changes are needed to the way targeting is practised and overseen.
3.2 Regulation and governance
We are also reviewing the oversight and governance of online targeting. There are a number of features of online services which may impact the effectiveness of the market and the ability of individuals to adequately protect their own interests, including limited transparency and competition, and the potential for consumer harm. But any changes to oversight mechanisms need to take into account how responsibility should be split between different actors; how to make the most of market incentives, voluntary regulation and empowering users; and how to enable effective monitoring and enforcement.
3.3 Solutions
We are exploring a wide range of solutions that could address concerns or enable more beneficial uses of targeting. In some areas stronger regulations may be needed. In other areas, greater transparency and visibility of how targeting operates may be more useful. Giving individuals stronger controls or rights over how data about them is used may provide both protection from harm as well as opportunities for innovation.
While there is an understandable desire to see action taken swiftly, it is equally important to identify how longer term changes could help build a trusted and trustworthy online environment.
4. Explaining the issue
Online targeting is a critical part of modern internet societies. Algorithms designed by technology companies and the ‘ad-tech ecosystem’ shape what people see online. They determine which content (news stories, posts, videos) to recommend to us, who to suggest we could connect with, which groups we could join, and which products and services we see advertised. They are used by social media, online retailers, search engines, and across the internet.
Much of the value of today’s internet economy rests on the ability of algorithms to predict what we want. These predictions are based on detailed information about individuals, their behaviours and inferences made about them based on similarities between groups of people. The ability of algorithms to identify what we want from the vast array of content available online is a remarkable and valuable technical achievement.
However, there are concerns about how online targeting operates. The ability of algorithms to rapidly disseminate information outside of traditional regulatory structures, with very little public visibility, has raised concerns about their impact on various issues including democracy and public health.
There are also concerns about more direct harm to individuals. An algorithm builds its understanding of what we need from only a limited set of data. The results can at times be inappropriate or even harmful, serving up images of self-harm to distressed teenagers or gambling adverts to people struggling with addiction.
Beyond all of these concerns, there is a fear that these systems have become so pervasive that they no longer simply predict our existing beliefs and desires, they are starting to shape them – using personalised recommendations, alerts and notifications to encourage us to spend more time online, capturing our attention by targeting us with sensationalised content, and encouraging us to succumb to anxieties and impulses.
The landscape summary of the academic, policy and other literature relating to online targeting has informed our understanding and analysis of the issue.
4.1 What has changed?
Personalisation and targeting of products and services is a longstanding practice on and offline. However, the advent of data-driven technologies and the internet mark a clear change from most traditional forms of targeting. They provide the tools to:
- Collect an unprecedented breadth and depth of data about people and their online behaviours, and analyse it in more sophisticated ways than ever before. Previously, organisations might have relied on carrying out target audience research to find out basic information about groups they wanted to target. Now vast digital profiles, made up of detailed, and sometimes sensitive, data about who people are, what their preferences are, and much more, are generated and maintained by a wide range of different organisations.
- Target content, products and services to different individuals at scale and at relatively low cost, often without transparency about the fact that something has been targeted or on what basis. Organisations have long been able to tailor the content they show people based on broad assumptions such as the demographics of people in a given locality or of people who consume a particular type of media. Now they can reach more precise target audiences online relatively cheaply and simply.
- Learn from our behaviour by monitoring outcomes and improving on targeting approaches in real-time. In the past, organisations might have had to spend considerable time testing, for instance through target audience research, how people respond to different items of content and how they are framed. Now they can employ machine learning algorithms and tracking technologies which monitor how people respond to options they are shown online and learn from that in real-time to optimise future targeting approaches.
These tools distinguish online targeting from the broader targeting of populations that has been practised for years. They enable more, and more precise, targeting to take place, and allow it to be constantly improved through algorithmic feedback loops.
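To make the third of these tools concrete, the sketch below shows one simple way such real-time optimisation can work, using a basic “epsilon-greedy” rule: show a content variant, observe the response, and immediately update the statistics that drive the next decision. It is purely illustrative; the class, variant names and response rates are invented for this example and do not describe any particular platform’s system.

```python
import random

class FeedbackLoopTargeter:
    """Illustrative epsilon-greedy feedback loop (hypothetical names/numbers).
    Real targeting systems are far more complex."""

    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon                  # how often to explore at random
        self.shows = {v: 0 for v in variants}   # impressions per variant
        self.clicks = {v: 0 for v in variants}  # positive responses per variant

    def choose(self):
        # Mostly exploit the variant with the best observed response rate,
        # but occasionally explore so the estimates keep improving.
        if random.random() < self.epsilon:
            return random.choice(list(self.shows))
        return max(self.shows,
                   key=lambda v: self.clicks[v] / self.shows[v] if self.shows[v] else 0.0)

    def record(self, variant, clicked):
        # The feedback loop: every impression immediately updates the
        # statistics that drive the next targeting decision.
        self.shows[variant] += 1
        if clicked:
            self.clicks[variant] += 1

# Simulated usage with invented underlying response rates.
true_rates = {"variant_a": 0.02, "variant_b": 0.08}
targeter = FeedbackLoopTargeter(list(true_rates))
for _ in range(10_000):
    v = targeter.choose()
    targeter.record(v, random.random() < true_rates[v])
print(targeter.shows)  # the better-performing variant quickly dominates
```

Even this toy loop illustrates the dynamic described above: no human intervenes between observation and optimisation, and the system’s behaviour shifts continuously in response to what people click.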
Online targeting is difficult to control for various reasons. By its nature, online targeting means an individual may not know whether what they see on their screen is the same as what another individual sees on theirs. It is complicated, fast-moving and multi-jurisdictional and it tends to lack transparency. Independent research into how online targeting operates is difficult to conduct, while monitoring and enforcing compliance with regulations presents new challenges.
Online targeting systems pose difficult and complex trade-offs. The benefits to individuals are often greatest if they share more data about themselves, but then the risks are greater too. Platforms can deliver the greatest benefits when operating at scale, but this has resulted in a concentration of power in a small number of technology giants who operate globally and can be hard to hold to account.
4.2 Benefits and harms
Online targeting offers real benefits to individuals, organisations, and wider society. In helping people to navigate the internet and by providing relevant and engaging content on a personalised basis, it makes people’s lives easier. Online targeting supports businesses searching for the right customers and sustains the UK’s world-leading digital advertising sector. It also offers wider benefits to society, for example by enabling more efficient public service delivery.
However, online targeting also poses risks to us as individuals and as a society. At a high level, online targeting can undermine autonomy and social cohesion. It can do this directly, but also indirectly, by undermining trust in information, trust in markets, the protection of vulnerable people, and protection against harmful discrimination.
These concerns are arising in related but distinct guises across many fields, including potential impacts on fair elections, mental health and competition policy. A range of organisations are working within each of these areas.
The role of the CDEI is to look across industry, government and society to identify the nature of any problems and to recommend how government can respond in a way that is consistent with public values and ethical principles across these varied domains.
5. Scope of the Review
This Review is focused on the introduction of data-driven technology, in combination with the use of the internet, to create online targeting as we know it today. More traditional offline targeting practices are not in scope.
We are using a broad definition of online targeting that includes any technology used to analyse information about people and then customise their online experience automatically. This includes targeted online advertising, recommendation engines and content ranking systems.
We are primarily looking at the online targeting of individuals rather than businesses. We are focusing on the sectors and uses of targeting most closely identified with potential risks, primarily the targeting of news and information, media, user generated content, advertising (including political advertising), retail and public services.
Our work addresses problems for which the appropriate response is unclear – either because they fall outside existing regulatory structures, or because online targeting has meant that regulations no longer have the intended effect or are difficult to enforce. We are not concerned with problems where the regulatory response is clear, for example unambiguous breaches of GDPR.
5.1 Wider context
There are a number of other projects and initiatives being carried out across government, regulators and elsewhere which link to our work.
By looking across existing policy and regulation, we aim to draw out potential tensions as well as identify gaps in the current and potential future oversight regimes and make recommendations that apply across the system as a whole. This includes identifying potential solutions with applicability across sectors and domains.
5.2 Selected organisations’ work programmes relating to online targeting
Online harms agenda
Earlier this year the UK Government published its Online Harms White Paper, setting out its plans for online safety measures that also support innovation and a thriving digital economy. This package comprises legislative and non-legislative measures and will make companies more responsible for their users’ safety online, especially children and other vulnerable groups. It proposes establishing in law a new duty of care towards users, which will be overseen by an independent regulator. Companies will be held to account for tackling a comprehensive set of online harms, ranging from illegal activity and content to behaviours which are harmful but not necessarily illegal.
Online advertising regulation review
In February 2019, the Government announced a review into how online advertising is regulated in the UK. The review will assess the impact of the online advertising sector on both society and the economy, and will consider the extent to which the current regulatory regime is equipped to tackle the challenges posed by rapid technological developments seen in online advertising.
GDPR and e-Privacy/PECR
The implementation of GDPR in the UK is likely to have profound impacts on online targeting, which is underpinned by data collection, analysis and sharing. The Information Commissioner’s Office (ICO) is working on an age-appropriate design code for online services, as well as on issues such as ad-tech and real-time bidding and the use of data analytics in political advertising, and it has recently updated its guidance on the use of cookies. Meanwhile, the European Union continues to discuss future e-Privacy regulations that are likely to involve an update of the Privacy and Electronic Communications Regulations (PECR) in the UK.
Competition and platform-to-business issues
As part of its Digital Markets Strategy, the Competition and Markets Authority (CMA) has launched a market study into online platforms and the digital advertising market in the UK. This will build on many of the recommendations from Professor Jason Furman’s Unlocking Digital Competition report, published earlier this year, to which the Government has committed to responding.
The European Union has recently adopted regulations promoting fairness and transparency for business users of online intermediation services. These include measures such as heightened requirements for online marketplaces and search engines to disclose the main parameters they use to rank goods and services on their sites, to help sellers understand how to optimise their presence.
6. Our approach
Given the scope set out in the previous chapter, our approach to delivering this Review seeks to answer three sets of questions:
- Public attitudes: Where is the use of technology out of line with public values, and what is the right balance of responsibility between individuals, companies and government?
- Regulation and governance: Are current regulatory mechanisms able to deliver their intended outcomes? How well do they align with public expectations? Is the use of targeting online consistent with principles applied through legislation and regulation offline?
- Solutions: What technical, legal or other mechanisms could help ensure that the use of online targeting is consistent with the law and public values? What combination of individual capabilities, market incentives and regulatory powers would best support this?
We are approaching these questions by:
Public attitudes: Undertaking an extensive public dialogue to understand public attitudes towards online targeting and perspectives on possible solutions. This work is currently ongoing, but preliminary conclusions are included later in this report.
Regulation and governance: Conducting a review to assess the effectiveness of the regulatory environment that currently governs online targeting. This will identify gaps, areas of insufficient clarity, and opportunities for improvement.
Solutions: Engaging widely with industry, academia, civil society and beyond to establish an accurate assessment of the current and likely future benefits and harms posed by online targeting, the availability and effectiveness of potential solutions, and the trade-offs they involve.
7. Progress to date
7.1 Evidence collection
Our first phase of work has focused on building an understanding of how targeting works, the wide variety of contexts in which online targeting approaches are used, and the key opportunities and risks associated with these uses. We have also started to frame questions and hypotheses to test during the next phase of our work.
In building this foundational understanding we have conducted ongoing stakeholder engagement with individuals, groups and organisations with an interest in online targeting, including technology firms, trade bodies, government, regulators, civil society organisations, and academics. To date, we have spoken to over 80 organisations across these fields.
We commissioned a team of academics led by Professor David Beer of the University of York to conduct an assessment of the current academic, policy and other literature relating to online targeting.
We will publish a summary of the wide-ranging responses we received to our Call for Evidence asking individuals, groups and organisations to come forward with information about online targeting.
Alongside this, we continue to analyse reports and policy documents from the UK and internationally which relate to online targeting. We are also making use of additional legal and technical expertise as required.
7.2 Public engagement
One of our priorities in this Review is developing a better understanding of public attitudes towards online targeting. Despite the important role that online targeting plays in modern life, there are relatively low levels of awareness and understanding of it. The evidence suggests that a substantial proportion of the population have little or no understanding of online targeting and how it could affect them.
With the support of Sciencewise and market and opinion research specialists Ipsos MORI, we are conducting a public dialogue to understand attitudes towards online targeting. The dialogue includes 150 people who will each participate in two days of in-depth discussions three weeks apart. This comprises three workshops, made up of groups which broadly reflect the general population, and four smaller workshops which bring together particular groups, including people from minority ethnic backgrounds, people who are or have been financially vulnerable, people who have experience of mental illness, and 16–18 year olds. The workshops will take place in seven different locations across the UK.
We have recently completed the first series of workshops, which captured views on the potential benefits and harms of online targeting, as well as the appetite for change. The second round of workshops will focus more on potential solutions.
The research is ongoing, but tentative results from the first dialogue sessions are included in section 8 of this report. We aim to publish the findings of our public engagement research in the autumn.
7.3 Regulatory review
As part of this Review we are analysing whether the relevant oversight mechanisms fit together to create a coherent and enforceable framework for governing online targeting.
As the following short section sets out, online targeting approaches can interact with a wide variety of existing oversight mechanisms expressed in legislation, regulation and other governance instruments, depending on the contexts in which they are used and the impacts they have. This is reflective of the cross-cutting nature of online targeting.
Our role is to review the governance landscape as a whole and to identify where policy and regulation may need strengthening, reshaping or removing.
We have begun a programme of structured interviews to engage with specific regulators, government departments, businesses and civil society to understand in more detail the effectiveness of the application of different oversight arrangements.
7.4 Relevant areas of legislation, regulations and other governance mechanisms
The most relevant areas of oversight we are exploring, based on our initial research, include:
Human rights law – encoded in UK law through the Human Rights Act 1998, and internationally through the Universal Declaration of Human Rights and the European Convention on Human Rights. Provisions such as the right to a private and family life, the right to liberty, freedom from discrimination, freedom of thought, religion and belief, and the rights to free speech and peaceful protest all appear relevant.
Emerging online harms and online platform law and regulations – including the UK Government’s Online Harms proposals. We are also considering regulations at EU-level, such as the E-Commerce regulations, the new European regulation on fairness and transparency in online platform trading, and rules around copyright, audiovisual media services and terrorist content.
Data protection and privacy law and regulations – primarily GDPR and the Data Protection Act 2018, which govern the data collection, processing and sharing that underpins how online targeting works. The PECR, and the future e-Privacy regulation that will govern the use of cookies and other tracking technologies, may also be relevant.
Advertising law and regulations – enforced online largely through collective self-regulation under the CAP Code, overseen by the Advertising Standards Authority (ASA), and supported by a number of voluntary initiatives coordinated by various industry bodies, such as the Internet Advertising Bureau UK’s “Gold Standard”. Notably, political advertising is not within the ASA’s remit.
Competition and consumer protection law and regulations – these apply across all sectors and are normally enforced by individual sectoral regulators and by the CMA. The CMA has a wide remit to enforce consumer protection and competition legislation to tackle practices and market conditions that make it difficult for consumers to exercise choice.
Media law and regulations – enforced by Ofcom, for instance through its Broadcasting Code and underpinning legislation. There are additional voluntary initiatives such as the Independent Press Standards Organisation (IPSO). IPSO regulates the majority of UK national written news-media publications. It can investigate complaints that a publication has breached the Editors’ Code and may ensure publications uphold factual standards. Notably, the European Commission’s voluntary code of practice on disinformation, agreed in 2018, involves commitments to action from major online platforms to prevent the spread of disinformation.
Electoral law and regulations – enforced by the Electoral Commission (EC) which oversees elections and regulates political funding and spending in the UK. The EC has publicly identified that there are limited online electoral regulations in place, and that online political material whose principal function is to influence voters is currently exempt from its remit. The UK Government has committed to mandating the use of imprints on electoral advertising online saying who is behind the campaign, and to running a consultation on electoral integrity.
Anti-discrimination law and regulations – such as the Equality Act 2010, which protects people from discrimination based on protected characteristics. Anti-discrimination law is enforced in the UK by the Equality and Human Rights Commission.
Law and regulations designed to protect vulnerable people – normally designed and implemented on a sectoral basis and overseen by the relevant sectoral regulator.
Self-regulatory approaches – platforms impose their own standards, limits and guidelines on online targeting processes. These measures can go beyond the requirements of the law, for example:
- Imposing a minimum audience size for targeted adverts, to prevent micro-targeting.
- Limiting the characteristics that can be used to target specific advertising content.
- Publicly accessible “archives” or “libraries” of political adverts to facilitate public scrutiny.
- Increased transparency measures helping people to understand how and why they have been targeted with specific content.
8. Emerging insights
In section 3, we set out that this Review seeks to answer three sets of questions:
- Public attitudes: Where is the use of technology out of line with public values, and what should be the balance of responsibility between individuals, companies, and government?
- Regulation and governance: Are current oversight mechanisms and controls available to individuals able to deliver the aims of existing regulations and how well does this align with public expectations?
- Solutions: What technical or other mechanisms are available to introduce greater controls for individuals, companies and government?
Our work so far has led to some emerging insights about each of these three areas which will guide our subsequent work on the Review.
8.1 Public attitudes
While our public dialogue on online targeting is still ongoing, we have been able to identify some early themes and views.
Most participants felt that online targeting offered benefits to them as individuals, and saw it as playing an important role in creating a good customer experience. Participants recognised that online targeting can be highly beneficial in sifting and identifying the information, services and products of most interest to them online.
However, people’s attitudes towards targeting change when they understand more of how it works and how pervasive it is. With increased knowledge, most people agreed that some forms of targeting made them uncomfortable. There was a general consensus that changes are needed to the way targeting is practised and overseen.
Overall, participants found it fairly easy to identify harms to the individual caused by online targeting. They found it more challenging to relate those individual harms to the implications for society more widely. However, once such harms were better understood participants felt that these were also important issues needing to be addressed.
The areas of greatest public concern related to the potential to exploit people’s vulnerabilities, and the impact of targeting on trust in information and political debate across society. But there was an understanding that if targeting technology could exploit people based on their vulnerabilities it could also be used to protect them, for example, by bringing support services to their attention.
Many participants understood that there may be trade-offs to be made when it comes to being protected from the harms posed by online targeting. For example, limiting organisations’ access to information about them might not be satisfactory if it also reduces some of the benefits that people enjoy.
In the next phase of the public dialogue, we will explore whether participants would like to see additional protections, both for individuals and for society, and how these might work. Participants will discuss to what extent they would like more control over the content they see online and how they are targeted. They will also consider the balance of responsibilities between individuals, industry, government and regulators.
8.2 Regulation and governance
Our starting position is that law and regulation should only be strengthened where there is clear evidence that market or regulatory mechanisms are failing and that there is a workable way to reduce any resulting harms.
Our research to date has highlighted a number of areas where online targeting can have significant impact, both positive and negative, and where the effectiveness of relevant governance and oversight is particularly important. We have grouped these around concepts and values that are reflected in existing laws, regulations and cultural norms designed to encourage benefit and prevent harm.
At a high level, our emerging view is that online targeting presents particular opportunities and challenges with respect to autonomy and social cohesion. Areas of particular interest here include:
- The protection of vulnerable people: Online targeting approaches enable various people, such as children, those with poor mental health, and those with other types of vulnerability potentially unique to online exchanges, to be targeted on the basis of their vulnerabilities or other factors that correlate with them. This provides opportunities for the provision of support as well as for exploitation.
- Trust in information (particularly media, advertising and political content): Online targeting may connect people to news content that is likely to be relevant and of interest to them. But it may also allow for the amplification of fake or misleading content through recommendation systems that optimise metrics like ‘click-through rates’ (a minimal sketch of this optimisation follows this list). This may affect social cohesion through the development of “filter bubbles”, and impact people’s ability to make well-informed decisions as citizens and consumers. At a societal level, this might affect the effectiveness or integrity of democratic systems.
- Trust in markets: Online targeting may help people identify relevant products at the right price for them; but if people cannot know whether the products and prices they are offered differ from those offered to others, this may reduce overall trust in – and the effectiveness of – online markets.
- Protection against harmful discrimination: Targeting is discriminatory by its nature. This can be positive when people are targeted with content that is most relevant to them. It may also be harmful, for example, if it reflects societal biases embedded in targeting criteria, algorithms and the datasets that underpin them.
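As a purely illustrative sketch of the trust-in-information concern above, the following shows a ranking rule that orders items solely by predicted click-through rate. The item names and scores are invented for this example; real recommendation systems are far more sophisticated, but the underlying incentive is the same.

```python
def rank_by_predicted_ctr(predicted_ctr):
    # Nothing in this rule considers accuracy, reliability or diversity;
    # whatever is predicted to attract the most clicks rises to the top.
    return sorted(predicted_ctr, key=predicted_ctr.get, reverse=True)

# Hypothetical predicted click-through rates for three items.
predicted_ctr = {
    "sensational-but-misleading-story": 0.12,
    "accurate-news-report": 0.04,
    "in-depth-analysis": 0.02,
}
print(rank_by_predicted_ctr(predicted_ctr))
# ['sensational-but-misleading-story', 'accurate-news-report', 'in-depth-analysis']
```

Under this rule, a misleading but attention-grabbing item outranks reliable reporting simply because it is predicted to attract more clicks, which is the amplification dynamic the list above describes.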
The current and potential impacts of online targeting on autonomy and social cohesion will therefore be the yardsticks by which we assess the effectiveness of current governance and oversight, and the opportunities for improvement.
However, our research to date has highlighted a number of features of the online world which may undermine both the effectiveness of the market and the ability of individuals to adequately protect their own interests. Many of these are explored in more detail in Professor Jason Furman’s Unlocking Digital Competition report and the CMA’s Digital Markets Strategy.
In particular:
Limited competition: Markets are more likely to drive out bad practice when we have a choice of services and can accurately distinguish between those that serve our interests and those that do not. However, the nature of large networked online platforms and the pervasive operation of the ad-tech and data trading ecosystems mean that there is limited competition in the way in which data-driven technology is used to target people.
In some areas, such as services recommending streaming music or films, there is a range of competing services. In others, such as social media platforms, competition is limited. It is hard or impossible for customers to assess the things they are shown: they cannot tell whether targeting is serving their best interests or whether other interests are at play.
Limited transparency: One algorithm may aim to improve customer experience; another may seek to influence, manipulate or exploit people. The same algorithm can have both effects, depending on the individual, and distinguishing harmful from beneficial targeting is challenging. There is limited but growing transparency in relation to the information held about people and how it is used, but little or no transparency about the extent to which people are treated differently and the impact this has.
Risk of consumer harm: There are commercial incentives for organisations to use information about people to exploit their weaknesses by targeting them with products and services online. There are also commercial incentives for online services to encourage people to spend as much time using those services as possible. The result is algorithms that are optimised to capture our short-term attention and exploit people’s instinctive responses to what they see online. Online services designed instead to meet people’s more considered desires might be preferable.
8.3 Challenges to oversight mechanisms
There appears to be public support for the establishment of clearer standards, particularly in relation to the impact of targeting on vulnerability and on trust in information and political debate online. However, any changes to oversight systems will need to take account of the following:
Getting the burden of responsibility right: We must consider what is fair and reasonable to expect of individuals, companies, regulators and others. For example, there are particular concerns about the cognitive load for individuals in tracking and managing how their data is being collected and used, and the types of targeting they are happy to accept.
Strengthening voluntary regulation, user empowerment and market incentives: GDPR has significantly enhanced the degree of control people have over how information about them is used. In addition, several platforms have introduced new and improved user controls which help tackle some of the issues relating to online targeting, for example by allowing people to pause the collection of data about their activity online across the platform, or to personalise the adverts they are shown.
There is also a nascent data portability industry that aims to empower people to derive greater benefit from their data. Finally, there is growing interest in how competition rules might allow competitors access to the data about people held by major online services. While challenging, facilitation of competition between organisations over whose algorithms best serve people’s interests might create commercial incentives that align more closely with consumer benefit.
Working across sectors and jurisdictions: Online targeting, like online services more broadly, takes place across multiple sectors and jurisdictions. It can often fall across a number of regulatory remits, for example the targeting of media and political content online is captured by various electoral, media, telecommunications and digital services rules. Likewise, online targeting often involves cross-border transactions and flows of data.
Enabling effective monitoring and enforcement: In the context of internet communications and data-driven technologies, monitoring and enforcement are notoriously difficult. There are pertinent questions about how proactive regulation should be. Given the complex nature of machine learning and data-driven technologies, it may be most practical to focus on disclosure and reporting and on tackling harmful outcomes. However, many of the potential harms caused by targeting are things that people may not be able to identify at the time. In these contexts, relying on complaints-based monitoring schemes would be ineffective, and proactive monitoring arrangements would be needed to provide assurance.
There is also a call for more accurate and timely information about the impact of algorithms on society. Concerns about the effect of targeting on politics, extremism and mental health have come to light only gradually. In many cases the evidence of harm is uncertain. Enhancing the ability of researchers, regulators or other organisations independent of the platforms to assess the impact of targeting at an earlier stage may be beneficial.
Enacting change widely: Levers for change, such as accountability measures and clear responsibilities for organisations to undertake thorough due diligence of their supply chains and partners, are likely to encourage widespread behaviour change. We are looking at precedents developed in the oversight of the analogue world, with equivalence in mind.
8.4 Solutions
We will consider the full range of potential interventions when evaluating solutions and presenting recommendations in our final report.
In guiding our thinking, we have identified four categories of intervention that we will be exploring in the next phase of our work.
Accountability and oversight, including: mechanisms to give platforms greater responsibility for the content distributed on them and to be open about the processes used to determine the acceptability of targeting algorithms; to give individuals and society greater transparency over the adverts and content being circulated and the groups to which they are circulated; and to allow independent research and information collection about the impacts of targeting.
More restrictive regulation of content distribution, including: mechanisms to restrict the types of inferences that can be made and used in targeting processes, or the types and narrowness of targeting that can be undertaken; to introduce stronger obligations on organisations to protect against vulnerability; to introduce stronger obligations to default towards reliable or diverse sources of information; or to restrict content in response to concerns, whether raised by moderation, complaint or automated monitoring.
Strengthening individual powers and information, including: mechanisms to require stronger consent rules; to introduce greater transparency about the information held about people, how it is used in targeting processes, and where it comes from; to improve tools that enable people to prevent the exploitation of vulnerabilities and to find reliable or diverse sources of information; to improve access to recourse and redress; and to facilitate the data portability rights set out in GDPR.
Enhancing competition, for instance by identifying policies that would support the development of new business models, such as third parties managing individuals’ data on their behalf.
Many of the possible solutions could help with any of the issues identified. Each approach could have benefits for some people but downsides for others, and may exacerbate existing tensions or expose new ones. Some will be better suited to specific issues or risks than others, and there are other mechanisms, not featured on this list, which might work best in certain circumstances.
In making recommendations about these options we will:
- Consider the benefits of a growing internet economy alongside the risks of harm.
- Recognise the potential for economic growth from the provision of services that address the problems identified in this report.
- Be proportionate and pragmatic in weighing up what can be implemented effectively in a reasonable time frame to provide stronger regulatory safeguards.
- Seek to identify policy options that would have an impact over the longer term, including those with applicability across sectors and across domains of issues.
9. Next steps
As we move out of our evidence-gathering phase, the remainder of the Review will be spent undertaking analysis and testing potential changes to oversight mechanisms. Next steps over the coming five months include:
Call for Evidence: we will publish a summary of responses received later in the summer.
Regulatory review: we will continue our analysis of the strengths and shortcomings of the current oversight framework relating to online targeting.
Public engagement: we will publish a report with the full findings of our public dialogue on online targeting in the autumn.
We will submit a final report with recommendations to the Government in December 2019.