Call for Evidence Summary of Responses - Review into Online Targeting
Published 4 February 2020
1. Introduction
1.1 The CDEI’s Review into Online Targeting
As part of the CDEI’s 2019/2020 Work Programme, we are undertaking a Review focusing on Online Targeting. The purpose of the Review is to analyse the use of online targeting approaches and to make practical recommendations to Government, industry and civil society for how online targeting can be conducted and governed in a way that facilitates the benefits and minimises the risks it presents.
The Review seeks to answer three sets of questions:
Public attitudes: Where is the use of technology out of line with public values, and what is the right balance of responsibility between individuals, companies and government?
Regulation and governance: Are current regulatory mechanisms able to deliver their intended outcomes? How well do they align with public expectations? Is the use of targeting online consistent with principles applied through legislation and regulation offline?
Solutions: What technical, legal or other mechanisms could help ensure that the use of online targeting is consistent with the law and public values? What combination of individual capabilities, market incentives and regulatory powers would best support this?
More information about the review can be found in our Final Report, which sets out how we are defining the issue, our approach to the Review, our progress to date, and emerging insights.
1.2 The Call for Evidence
The Review into Online Targeting is being informed by the following evidence:
- research undertaken by our policy teams;
- stakeholder engagement with government, regulators, industry and the public;
- this call for evidence;
- a landscape summary, carried out by academics to assess the academic and policy landscape in this area.
The Call for Evidence document was published on 7 May 2019 and invited submissions on four question areas:
- What evidence is there about the harms and benefits of online targeting?
- How do organisations carry out online targeting in practice?
- Should online targeting be regulated, and if so, how should this be done in a way that maximises the benefits and minimises the risks targeting presents?
- How is online targeting evolving over time, what are the likely future developments in online targeting, and do these present novel issues?
Evidence responding to any or all of these four questions was welcomed. The deadline for responses was 14 June 2019.
This report summarises the findings and provides a general overview of the number and type of responses received, followed by a more detailed look at responses in each question area, identifying themes and gaps in the responses received.
2. Methodology
Responses to the Call for Evidence were logged and reviewed by the CDEI’s policy team leading on the Review into Online Targeting.
The following information was captured as part of this process:
- The type of organisation that responded;
- Which of the four questions were responded to;
- High-level themes covered in the response;
- Key points and arguments made.
An overview of this information is set out in the following section of this report.
The policy team analysed this information to supplement the wider research and stakeholder engagement being undertaken to inform the Review.
Where respondents recommended or included further reading, the policy team has incorporated this into their evidence base for the Review where appropriate.
Some respondents offered to meet the CDEI to discuss their responses further, and the policy team is holding additional meetings with individual stakeholders where relevant.
3. Overview of Responses Received
3.1 Type of organisation that responded
44 responses were received from a variety of different types of organisations. The table below shows the breakdown of the type of organisation that responded. We were pleased that the responses came from a range of perspectives. It is worth noting that while industry associations TechUK and the Advertising Association responded to the call for evidence, we did not receive written responses from the major online platforms.
Type of respondent | Number
---|---
Academic (institutions, individuals) | 15 |
Advertising (companies, sector bodies) | 4 |
Civil society (think tanks, academies, non-profits) | 7 |
Finance (companies, sector bodies) | 6 |
News Media (companies, sector bodies) | 3 |
Regulators | 3 |
Technology (companies, sector bodies) | 4 |
Total | 44 |
See the final section of this report for a complete list of organisations that responded.
3.2 Responses answering specific questions
Respondents could choose to answer any or all of the questions. The table below shows the number of respondents answering each question.
Question Area | Number of Respondents
---|---
What evidence is there about the harms and benefits of online targeting? | 42 |
How do organisations carry out online targeting in practice? | 34 |
Should online targeting be regulated, and if so, how should this be done in a way that maximises the benefits and minimises the risks targeting presents? | 39 |
How is online targeting evolving over time, what are the likely future developments in online targeting, and do these present novel issues? | 25 |
3.3 Themes and arguments in the evidence received
There was a broad consensus that online targeting is a powerful tool which several respondents described as a significant change to the fabric of society. There was widespread agreement that online targeting is already being used pervasively across the online space, and that its uptake has been rapid. Respondents set out that it relies on the collection of data about people, complex analysis including the creation of digital profiles and the segmentation of people into groups, and in many cases the sharing of data between multiple parties. Respondents agreed that the greater the accuracy and depth of data held about people, the more accurate targeting can be - and therefore the more powerful it becomes.
Most respondents focused their responses on personalised advertising, though some considered content recommendation systems such as social media feeds and search engine results. Respondents considered uses of online targeting across different sectors, including the targeting of news and information, media, user generated content, and commercial and political advertising content.
A large majority of responses argued that online targeting presents significant benefits and harms, both real and potential, to individuals, groups and wider society. However, a number of responses focused only on benefits or only on harms, including, at the extremes, some which stated that they were aware of no evidence of harms or of benefits. It was often suggested that the negative or positive impacts of online targeting might vary depending on the sector in question. For instance, respondents tended to suggest that the targeting of some political and media content online might have a greater impact than some commercial content.
A number of respondents mentioned that targeted advertising enables businesses to maximise the reach of their advertising and offer personalised, relevant and timely adverts to potential consumers - which could achieve better return on investment than other types of advertising. Beyond targeted advertising, respondents also discussed the role online targeting can play in building engagement with online services, which was generally seen to help maximise advertising revenues and market share.
Respondents also broadly agreed that the interaction between the benefits and harms caused by online targeting can bring about complex trade-offs. For example, through online targeting, data collected about vulnerable people might reveal highly sensitive information about them (impacting their privacy), which might then be used to support or exploit them (impacting their autonomy).
However, many respondents also cautioned that there is limited reliable evidence in the public domain about how online targeting works and the impacts it has on people, businesses and society. Respondents saw this as a serious problem for policymakers, regulators and researchers, for businesses in some cases, and for individuals.
In addition, many respondents noted that online targeting is often opaque and difficult to understand, and that many people currently feel unable to understand the effect of targeting on their lives. A large number of respondents commented that when it is not clear to people that the content they see has been targeted at them, their ability to evaluate it critically is likely to suffer. This was seen to be particularly evident in the case of children and vulnerable adults.
Respondents also agreed that online targeting is a dynamically evolving practice, but that it is generally becoming more granular, sophisticated, and pervasive - driven by organisations’ need to understand their customers and offer more attractive, personalised, and timely products and services. There was consensus that this is likely to involve new sources of data and more intensive use of technologies like machine learning. Many respondents also noted that the techniques that enable online targeting are likely to be deployed across other parts of our lives, through greater use of geolocation data, cross-device tracking, and the growth of the Internet of Things. This was seen to offer risks and opportunities.
In this context, there was broad support among respondents from a range of sectors including finance, advertising, academia, and civil society for changes to regulation to provide greater oversight of online targeting. Proposals included changes to improve societal and regulatory understanding of how online targeting works and its impacts, and changes to improve people’s awareness, understanding and control over how they are targeted online. This was seen by many as a way to maximise the benefits of online targeting and to build trust in the use of the technology across society.
Respondents proposed a variety of different changes to the regulatory environment - relevant to both personalised advertising and content recommendation systems - noting the different trade-offs that these might introduce. Many respondents advised on the need to consider evidence closely in designing regulation, and cautioned against knee-jerk responses to poorly understood issues, which might create barriers to innovation and have other harmful unintended consequences. Where there is limited evidence, a number of respondents proposed the use of precautionary principles.
Some respondents, particularly in the technology and advertising industries, were positive about industry self-regulation as an alternative to changes to formal regulation. It was also noted that there is already some regulation of online targeting in the UK (particularly through data protection regulation and the Advertising Standards Authority’s CAP Code) and that this should be taken into account alongside likely upcoming changes to regulation such as the Government’s proposed Online Harms Bill, the Information Commissioner’s Office’s Age Appropriate Design Code, the European Commission’s e-Privacy Regulation, and the Competition and Markets Authority’s market study into online platforms and digital advertising.
3.4 Summary of Responses
This section sets out a summary of the key themes in each question.
3.5 Question 1: What evidence is there about the harms and benefits of online targeting?
- What evidence is there about the harms and benefits of online targeting?
- What evidence is there concerning the impact - both positive and negative - of online targeting on individuals, organisations, and society? In particular:
- Its impact on our autonomy
- Its impact on vulnerable or potentially vulnerable people
- Its impact on our ability to discern the trustworthiness of news and advertising content online
- The impact on privacy and data protection rights of the collection, processing and sharing of data that underpins online targeting
- What opportunities are there for targeting and personalisation to further benefit individuals, organisations, and society?
Almost all respondents responded to this question, considering the impact of online targeting across each of the areas our questions referred to (autonomy, vulnerability, the trustworthiness of news and advertising content, and privacy) as well as opportunities for further benefit. Many raised significant concerns about the lack of reliable evidence about how online targeting works and the impacts it has on people, businesses and society. Several respondents highlighted the cumulative effect of the pervasive use of targeting across the online space, as multiple small impacts might accumulate into a more serious one. The lack of reliable evidence was seen as a serious problem for policymakers, regulators and researchers, for businesses in some cases, and for individuals.
Respondents from the technology, advertising, and news sectors generally referred to the benefits of users being able to find relevant content, and of aiding content discovery more generally. Academic and civil society responses were more mixed: both noted the benefits in terms of finding information, but both raised concerns about the impact on privacy and on vulnerable people, and about the potential for persuasive targeting to undermine autonomy, with civil society tending to focus on privacy and academia on the full range of concerns.
Autonomy
Many respondents noted the positive impacts of online targeting on individual autonomy. They argued that it empowers people with greater convenience and choice, and that it enables people to discover new interests and opportunities. Others argued that these benefits were outweighed by risks to autonomy. There was broad agreement that online targeting can restrict consumer choice and access to information, particularly from diverse sources or perspectives. It can be highly persuasive and have a strong influence on behaviour. Algorithms are often optimised to drive engagement, which can lead to the exploitation of cognitive biases or vulnerabilities. Beyond this, some argued that it can have a negative impact on the development of people’s identities and their ability to navigate their own interests, particularly in the case of children.
As well as focusing on individual autonomy, some respondents argued that online targeting fosters wide reaching asymmetries of power and information across society.
Vulnerability
Many respondents noted the positive impacts of online targeting on vulnerable people. They argued that it enables organisations to support vulnerable people by targeting harmful content away from them. For example, it was noted that the CAP Code already requires advertisers in the UK to make full use of online targeting tools to ensure age-restricted products are directed away from children.
On the other hand, respondents argued that vulnerable people can be particularly affected by targeted online content. For instance, representatives of civil society, the technology sector and regulators argued that children (a particular focus), the elderly, and people with diminished capacity may be less likely to understand that the content they see has been targeted to them, to understand and use any controls available to them, and to filter out targeted advertising content. Additionally, representatives of civil society, academia, and regulators highlighted that people with poor mental health could be more severely impacted by content targeted to them. Moreover, certain vulnerable groups, such as those with a predisposition to addictive behaviours, could be exploited by algorithms that seek to maximise their engagement.
Finally, a number of academic respondents set out that everyone is vulnerable to exploitative or manipulative targeting in the online world, not just particularly vulnerable groups. This was seen to be caused by organisations’ ability to infer highly personal details about people based on their online behaviours, and by the existence of broad limits to human cognition that can be exploited through online targeting and potentially addictive design.
Trust in information
Most respondents agreed that online targeting plays a critical role in the dissemination of media, advertising and political content online. This can be positive - respondents noted that online targeting supports people to discover new and relevant media, advertising and political content online. But many argued that it has also led to an unprecedented situation in which the information people see is personalised to them and is controlled to a significant extent by online platforms, which are not widely considered to be accountable for the content they host. Algorithms were seen to prioritise engaging and sensational content, play a significant role in the spread of untrue, violent or extreme content, and be prone to manipulation by automated accounts (bots) and malign actors. Limited transparency makes it difficult for regulators or the public to understand the way online platforms shape the information environment.
Respondents from a range of sectors including academia, news, civil society, technology and advertising shared concerns around two broad impacts relating to trust in information online. Firstly, online targeting was seen to exacerbate the threat of misinformation, as consistent tailored messaging might make people more likely to believe false information, and widespread misinformation online might result in broad mistrust of information on and offline. Secondly, online targeting was seen to risk fragmentation of information across society, with different groups of people living in separate realities with different versions of the truth. This might lead to increased discord between social groups. One example commonly considered in responses was the impact of targeted political content focused on polarising topics, which was seen to contribute to the polarisation of democratic discourse.
Finally, respondents raised concerns about the impact of online targeting on the sustainability of independent news media. On the one hand, online targeting was seen to enable news publishers to generate greater advertising revenues on their digital services. On the other, the control held by online platforms over flows of information online was seen to create significant disruption to news publishers.
Privacy
Respondents were strongly concerned about the impact of online targeting on privacy, with many commenting that it motivates the collection and processing of personal data for profiling purposes. They commented that the surveillance and data collection that underpins online targeting is virtually inescapable, and so complex that it is impossible for individuals to understand and control what is happening with data about them. On top of this, there are limited opportunities for people to challenge or contest the way that they have been profiled. Many commented that people - especially vulnerable people or those with protected characteristics - are concerned about their privacy online, but that they are faced with no choice but to accept the status quo. This was seen to have a potentially chilling effect on online behaviour.
Academic and civil society respondents were the most concerned about privacy. We received multiple submissions from privacy rights groups outlining their concerns about these practices. These groups noted that the nature of data transfers through real-time bidding means that effective control over private and often sensitive data is hard to guarantee. They argued that the way the ecosystem works is likely to be in violation of the GDPR, and that the complexity and omnipresence of data collection online mean that people are often either resigned to or ignorant of the loss of privacy. They also argued that vulnerable groups such as children might be less capable of managing their data online and therefore more susceptible to privacy risks.
Respondents noted the tension between privacy and the effectiveness of targeting approaches (where the more information held about someone, the more precisely they can be targeted), emphasising the need to strike a balance on this point. They pointed out that in many cases, greater levels of personal information about people might enable actors to mitigate some of the problems caused by online targeting. However, many respondents also set out that there is a strong link between privacy and trust in digital services, and that in the long run consumer privacy concerns might lead to the lower uptake of digital services.
Other impacts
Respondents also mentioned the impact of online targeting on: trust in and functioning of online markets (e.g. through price personalisation); social cohesion (e.g. through the potential isolation of individuals); and people with protected characteristics (e.g. through harmful discriminatory targeting). For instance, some respondents outlined two principal ways that discrimination in online targeting is possible - the use of explicit targeting options to exclude certain groups from targeted content, and the risk that algorithms used for targeting are themselves biased, resulting in biased targeting.
News media respondents also highlighted how the concentration of market power in a small number of firms had a distorting effect on the market and impacted the sustainability of their funding. Civil society and regulator respondents also noted that this market is highly concentrated.
Opportunities for further benefits
The opportunities for further benefits identified largely fell into two groups. Many respondents discussed opportunities to change the way online targeting works for the better, such as: aligning the goals of algorithms to users’ best interests (which they might want to disclose themselves) rather than what they tend to click on; enabling users to more easily prevent tracking or adapt their privacy preferences across the internet; introducing privacy-enhancing technologies to online targeting; and providing more transparency about how targeting works and its impacts (seen by many to be critical to enabling the benefits of online targeting). Other respondents discussed opportunities to use online targeting approaches beneficially, such as protecting vulnerable people, reaching marginalised people, and providing personalised nudges towards socially desirable outcomes; this positive nudging was endorsed by respondents from a range of sectors, including finance, academia and news.
3.6 Question 2: How do organisations carry out online targeting in practice?
- How do organisations carry out online targeting in practice?
- What do organisations use online targeting for? What are the intended outcomes?
- What are the key technical aspects of online targeting? (what data is used; how is it collected, processed, and/or shared; what customisation is carried out; on what media; and how is this monitored, evaluated, and iterated on)?
- How is this work governed within organisations (how are risks monitored and mitigated, including unintended consequences, who is accountable within organisations, how are complaints dealt with)?
Over half of the responses provided evidence in relation to this question, though there was notably less focus on the final sub-question on internal governance.
Benefits to organisations
Many respondents mentioned that personalised online advertising plays an important role for businesses. They reported that online targeting technology enables businesses to maximise the reach of their advertising and offer personalised, relevant and timely adverts to potential consumers - which could achieve better return on investment than other types of advertising. This was seen to be particularly useful to small and medium-sized enterprises, who might otherwise be priced out of other channels (e.g. broadcast or print advertising). In the field of politics in particular, online targeting was broadly seen to offer an unprecedented opportunity for actors to target people based on their interests at scale - identifying and engaging new audiences, or re-engaging known audiences with pertinent information. Respondents also highlighted the importance of online targeting to the business models of online publishers - such as the news media sector - who often rely on hosting advertising content to generate revenue for their online services.
Beyond targeted advertising, respondents also discussed the role online targeting can play in building engagement with online services, as users are seen to expect increasingly personalised and relevant content online. This was broadly seen to improve user experience and boost the discovery of new content as well as maximise businesses’ advertising revenue and market share.
Data collection and analysis
Many respondents discussed the critical role of data collection and analysis in online targeting, explaining that recent years have seen a significant change in the amount of data available to use for targeting purposes (e.g. as online platforms provide access to insights from new sources of data). Respondents described how cookies or other online identifiers are used to track people across the internet (including across multiple devices). Data collected can include demographic, behavioural, location, device, and other types of data. Multiple data sources are analysed and combined into individual profiles, which can be segmented into different audience groups for targeting purposes. Some respondents were sceptical about the accuracy of data collected and inferences made about people, and identified this as a risk. It was broadly agreed that individuals have limited control over how data about them is used.
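To illustrate the kind of pipeline respondents described, the sketch below shows, in schematic Python, how event-level tracking data keyed to a cookie identifier might be merged into an individual profile and assigned to audience segments. It is a minimal, hypothetical example: the field names, segmentation rules and data are invented for illustration, and real ad-tech systems operate at far greater scale and complexity.

```python
from collections import defaultdict

# Hypothetical, illustrative sketch: field names, segment rules and data are
# invented and do not describe any respondent's actual system.
def build_profiles(events):
    """Merge tracking events (keyed by a cookie or device ID) into profiles."""
    profiles = defaultdict(lambda: {"pages": set(), "locations": set(), "devices": set()})
    for event in events:
        profile = profiles[event["cookie_id"]]
        profile["pages"].add(event["page"])
        profile["locations"].add(event["location"])
        profile["devices"].add(event["device"])
    return profiles

def segment(profile):
    """Assign a profile to coarse audience segments using simple rules."""
    segments = []
    if any("sport" in page for page in profile["pages"]):
        segments.append("sports_enthusiast")
    if len(profile["devices"]) > 1:
        segments.append("multi_device_user")
    return segments or ["general_audience"]

events = [
    {"cookie_id": "abc123", "page": "/sport/football", "location": "Leeds", "device": "phone"},
    {"cookie_id": "abc123", "page": "/news/weather", "location": "Leeds", "device": "laptop"},
]
for cookie_id, profile in build_profiles(events).items():
    print(cookie_id, segment(profile))  # abc123 ['sports_enthusiast', 'multi_device_user']
```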
Actors in the ecosystem
Many respondents identified a number of different actors involved in online targeting. Online platforms were often described as holding large amounts of data about people, which they do not tend to share directly with others. Brands and advertisers were seen to use data they hold themselves to support their targeted advertising. Many respondents raised concerns about the large number of companies involved in the ad-tech ecosystem, the way data is shared between them, and a perceived lack of transparency and accountability. These include data brokers, ad-exchanges, supply-side and demand-side platforms, and data management platforms. Some respondents pointed out the vast scale of the operations of some of these companies, which combine information from a variety of sources into detailed profiles of hundreds of millions of people, processing hundreds of millions of data events per day. Others set out that when organisations integrate data received from elsewhere into their targeting models, and sub-contract analytical and compliance operations to external organisations, this can lead to knowledge gaps and raise questions around accountability. Finally, respondents outlined that different actors employ different governance systems (and in some cases there are a number of different systems being used within one large organisation), and achieve different levels of compliance with regulation.
How targeting works
In relation to personalised online advertising, many respondents explained how real-time bidding (RTB) processes work, referring to the ICO’s ongoing investigation. Respondents from civil society and academia also discussed the use of “custom lists” and “lookalike” targeting tools available on most online platforms, highlighting that they can lead to reduced transparency, and can be manipulated. Finally, some respondents discussed how advertising campaigns are evaluated, for instance through click-through rates and impressions.
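As a rough illustration of the auction mechanics respondents referred to, the sketch below models, in schematic Python, a single ad impression being sold in a second-price auction (one common RTB mechanism, though some exchanges now use first-price auctions), together with a basic click-through-rate calculation. The bidder names, bid logic and prices are hypothetical and are not drawn from the ICO’s investigation or any respondent’s submission.

```python
# Hypothetical sketch of a single RTB-style auction; names, bid logic and
# prices are invented for illustration only.
class SimpleBidder:
    def __init__(self, name, target_segment, max_price):
        self.name = name
        self.target_segment = target_segment
        self.max_price = max_price

    def bid(self, request):
        # Bid the full campaign price only if the user's segment matches.
        matched = self.target_segment in request["segments"]
        return {"bidder": self.name, "price": self.max_price if matched else 0.10}

def run_auction(bid_request, bidders):
    """Collect bids for one ad impression and settle a second-price auction."""
    bids = sorted((b.bid(bid_request) for b in bidders), key=lambda b: b["price"], reverse=True)
    winner, runner_up = bids[0], bids[1]
    # The winner pays just above the second-highest bid.
    return winner["bidder"], round(runner_up["price"] + 0.01, 2)

def click_through_rate(clicks, impressions):
    """One of the campaign evaluation metrics respondents mentioned."""
    return clicks / impressions if impressions else 0.0

request = {"segments": ["sports_enthusiast"]}
bidders = [
    SimpleBidder("brand_a", "sports_enthusiast", 2.50),
    SimpleBidder("brand_b", "sports_enthusiast", 1.80),
]
print(run_auction(request, bidders))   # ('brand_a', 1.81)
print(click_through_rate(30, 10_000))  # 0.003
```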
Some respondents also described how content recommendation systems work. They tend to rely on the prediction of which content users are more likely to engage with, making recommendations based on the content the user has previously consumed, or the content other, similar users have previously consumed. Respondents also noted the difference between systems that involve the dissemination of user-generated content and of curated content, arguing that “open” systems are likely to be more problematic.
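The sketch below gives a minimal Python illustration of the “similar users” logic described above: it recommends items that users with overlapping consumption histories have engaged with. The viewing histories and the similarity measure (Jaccard overlap) are chosen purely for illustration; production recommendation systems rely on far richer signals and machine-learned models.

```python
# Hypothetical sketch of "users like you also watched" recommendation logic.
def jaccard(a, b):
    """Similarity between two users based on overlap of content they consumed."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(target_user, histories, top_n=3):
    """Rank unseen items by how strongly they are associated with similar users."""
    seen = histories[target_user]
    scores = {}
    for other, items in histories.items():
        if other == target_user:
            continue
        similarity = jaccard(seen, items)
        for item in items - seen:
            scores[item] = scores.get(item, 0.0) + similarity
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

histories = {
    "user_1": {"video_a", "video_b", "video_c"},
    "user_2": {"video_a", "video_b", "video_d"},
    "user_3": {"video_x", "video_y"},
}
print(recommend("user_1", histories))  # e.g. ['video_d', ...]
```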
Across both types of online targeting discussed, some respondents pointed out that targeting often aims to take into account wider considerations like compliance with regulation and the safeguarding of children. This came across particularly strongly in responses from sectors with rigorous sectoral regulation regimes in place such as the finance sector. Others disagreed, arguing that online targeting systems take advantage of the lack of scrutiny they face and pay only lip service to regulation and their impact on people’s lives. In this respect, respondents tended to highlight what they considered to be the highly opaque, complex, and difficult to understand and control nature of online targeting, and a lack of external scrutiny over online targeting practices.
3.7 Question 3: Should online targeting be regulated, and if so, how should this be done in a way that maximises the benefits and minimises the risks targeting presents?
- Should online targeting be regulated, and if so, how should this be done in a way that maximises the benefits and minimises the risks targeting presents?
- What is the current legal and regulatory environment around online targeting in the UK? How effective is it?
- How significant are the burdens placed on organisations by this environment?
- Are there laws and regulations designed for the “analogue” world that should be applied to online targeting in the UK?
- Are there any international examples of regulation and legislation of online targeting that we can learn from?
A large number of responses provided evidence on this question, mainly focused on the current legal and regulatory environment in the UK and whether and how this should change. Notably, of the 35 respondents who focused on this question, 22 explicitly called for changes to regulation, seven were neutral, and only six called for no change. Many respondents saw updated regulation as key to building trust and enabling the benefits of targeting to be enhanced.
Current regulatory environment
Many respondents agreed that there are some relevant regulatory and legal structures already in place. Respondents tended to focus on the roles of the data protection and privacy regulation overseen by the ICO and online advertising self-regulation overseen by the ASA, but also mentioned consumer protection and competition law, anti-discrimination law, election law, and platform liability rules.
Most responses pointed out weaknesses in the current regulatory environment. Across the board, respondents raised concerns about the difficulty of monitoring and enforcing regulations in the context of online targeting. Many respondents, particularly from civil society but also academia, also raised questions about the effectiveness of GDPR in this regard, arguing that elements of online targeting are in tension with GDPR, and questioning the extent to which GDPR protects people from the use of inferences made about them. Many respondents also outlined concerns with electoral law and the powers and functions of the Electoral Commission, and pointed out that the CAP Code excludes online political advertising. Only a small number of respondents argued that the current regulatory environment is overly burdensome, though several, particularly in the finance and news sectors, said there is currently an “uneven playing field” between those that operate partly online but are heavily regulated due to their presence in other markets, and the major online platforms.
In addition to hard regulation, many respondents discussed self-regulatory initiatives across the online advertising ecosystem and the online platforms. For example, a number of responses considered recent transparency measures adopted by platforms on online political advertising. While some were positive about the impact they have had, others criticised them for not going far enough. Equally, a number of respondents highlighted concerns that the policies set by online platforms and the algorithms they use to target content have a significant impact on the dissemination of information online, but with limited public supervision or accountability.
Desired outcomes of regulatory changes
There was broad support among respondents for changes to regulation to provide greater societal oversight of online targeting - to help hold organisations accountable for their use of online targeting approaches, to better understand how it works and the impacts it has on society and individuals, and to improve people’s awareness, understanding and control over how they are targeted online. A number of respondents emphasised the perceived asymmetry of power and information between online platforms and consumers, businesses, and regulators, and thought that changes in regulation could help to level the playing field.
Changes proposed
Respondents proposed a variety of different changes to regulation - relevant to both personalised advertising and content recommendation systems. Proposals included a focus on increased transparency (for example the introduction of mandatory public archives of adverts), on increased accountability (such as changes to the e-Commerce Directive for content recommended by platforms), and changes to internal governance and documentation requirements (such as the introduction of mandatory algorithmic impact assessments).
A large number of respondents also considered how the regulation of digital markets could be most effective. A number of these respondents (primarily representing academia) argued that reform should be comprehensive and that any changes should be part of a holistic approach. In addition, changes should take into account the balance between benefits and harms, and carefully consider trade-offs around impacts on freedom of expression and other fundamental rights. Co-regulatory structures with a focus on accountability and transparency were broadly considered likely to be more effective than traditional regulatory approaches. Where there is limited evidence about harms, many respondents suggested the use of the precautionary principle as outlined by the Health and Safety Executive: it applies when there is good reason to believe that harmful effects may occur to human, animal or plant health, or to the environment, and the level of scientific uncertainty about the consequences or likelihood of the risk is such that the best available scientific advice cannot assess the risk with sufficient confidence to inform decision-making.
Risks and unintended consequences
Many respondents, especially, but not exclusively, from the news and advertising sectors, advised on the need to consider evidence closely in designing regulation, cautioning against knee-jerk responses to issues that are poorly understood and rapidly changing. These were seen to be likely to create barriers to innovation and have other harmful unintended consequences. Equally, a number of respondents mentioned likely upcoming changes to regulation, such as the Government’s proposed Online Harms Bill, the Information Commissioner’s Office’s Age Appropriate Design Code, and the Competition and Markets Authority’s market study into online platforms and digital advertising, as well as changes to European regulations such as the e-Privacy Directive, the Audio-Visual Media Services Directive, and the Regulation on Platform-to-Business Relations.
3.8 Question 4: How is online targeting evolving over time, what are the likely future developments in online targeting, and do these present novel issues?
- How is online targeting evolving over time, what are the likely future developments in online targeting, and do these present novel issues?
- What emerging technologies might change the way that online targeting is carried out? Might these present novel issues?
- How might existing and emerging governance regimes (such as the General Data Protection Regulation, European e-Privacy and e-Commerce Directives, and potential Online Harms legislation) impact online targeting practices?
- Are there examples of types of online targeting and personalisation (that might have either negative or positive impacts) that are currently possible but not taking place? If so, why are they not taking place?
There was broad agreement among those who responded to this question that online targeting is a dynamically evolving field, but that it is generally becoming more granular, sophisticated, and pervasive. Civil society respondents told us that they expect current trends to continue. This was seen to offer opportunities and risks.
New sources of data
There was broad agreement on the likelihood that new sources of data will be used in online targeting practices in the future. Respondents from a range of sectors, including civil society, academia and advertising, thought that technological changes such as the growth of the Internet of Things (IoT) and the increase of data sharing through initiatives such as Open Banking would enable the combination of data collected online with data about people’s activities offline to create more rounded user profiles.
Emerging technology
Many respondents suggested that emerging technologies provide both opportunities and risks in relation to online targeting. They noted that improvements to organisations’ analytical capabilities, combined with increased access to data, are likely to make online targeting more effective and result in more personalised online experiences. Respondents expected increased use of sentiment analysis, facial recognition technology, and psychometric analysis to inform targeting approaches. They also raised concerns about the use of AI-generated content and algorithmic optimisation processes, which they thought could make online targeting even more persuasive.
Privacy and changes to regulation
While many respondents were concerned about increasing risks to privacy, some from the technology and finance industries also noted that consumer attitudes towards privacy and changes to regulation are already leading some organisations to improve the levels of privacy they provide. Respondents also noted the potential for greater use of privacy-enhancing technologies. More broadly, a number of responses agreed that the regulation of digital markets is still evolving, and that changes to the regulatory environment have the potential to build trust and unlock the benefits of online targeting.
4. List of Organisations that Responded
- 5Rights Foundation
- Advertising Association
- ASA
- Barclays
- BBC
- Birmingham Law School
- Cambridge Muslim College
- Carnegie Trust
- Leverhulme Centre for the Future of Intelligence
- Channel 4
- Christopher Burr and Jessica Morley (Answers given in a personal capacity)
- Competition and Markets Authority
- Data & Marketing Association (DMA)
- Demos
- Direct Line Insurance Group plc
- DMG Media
- Dr Anya Skatova, Dr Philipp Lorenz-Spreen, Professor Stephan Lewandowsky, Dr Mark Leiser, Dr Stefan Herzog (Answers given in a personal capacity)
- Electoral Commission
- Gemserv Limited
- Horizon Digital Economy Research, University of Nottingham
- Human Rights Centre, University of Essex
- Huawei
- Institute for the Future of Work
- IPA
- Kate Dommett
- Market Research Society
- Moten Analytics
- Mozilla Foundation
- Nello Cristianini, Professor of AI at the University of Bristol
- News Media Association
- Open Rights Group
- Oxford University’s HCC Research Group
- Privacy International
- Rafael Calvo
- Royal Statistical Society
- School of Advanced Study, University of London
- Superawesome
- TechUK
- The Telegraph
- Trust & Technology Initiative, University of Cambridge
- UK Finance
- Visa
- Which?
- Zeynep Engin, Adriano Koshiyama, Alixe Lay