Speech

Managing risk, not avoiding it

Speech by Sir Mark Walport to the Herrenhausen conference in Germany.

Professor Sir Mark Walport

We’ve heard this afternoon about what I think is a very important concept, which is that of ‘risk competence’. If we are going to manage risks and think about them in relation to innovation, then we have to improve risk competence at every level - as I will illustrate.

A number of other important things have come out this afternoon, and one of those is the relationship between science and values. I think there are 2 different types of relationship, and one concerns the values of science itself. In the last talk, on codes of conduct, we saw that the most important elements are actually the leadership and the values of the organisation - and that applies in business, and it applies in the enterprise of science.

I think we also heard earlier in the day some of the principles of supporting the best science, which I thought was very well summarised in the concept of ‘hire well, and then trust’. It’s actually all about supporting people, supporting the brightest minds, and then giving them the tools they need - the environment, the infrastructure - to conduct the best form of science.

Role of the Government Chief Scientific Adviser

Wilhelm has kindly already introduced the role of the Chief Scientific Adviser, but I thought I would say a little more to explain why risk is such an important part of the job. My job involves looking at all aspects of science, engineering, technology and social science. Here you have a word, ‘Wissenschaft’, that I think captures this in a way that no word in the English language quite does. I view it as all of the sciences, and the challenge is to provide advice in all those domains for the whole of government policy.

So how does one think about the job to make it manageable? We think about it in terms of: what are the things the government really cares about? I would suggest that good government cares about the health, the well-being, the security and the resilience of its population. In parallel with that, it worries about the economy, because of course those 2 are interrelated. And governments that succeed in doing that are likely to be popular governments.

What is it that our well-being, our health, our security, our resilience actually depend upon? It’s something that we take for granted until it goes wrong, and that is our infrastructure. In this building, the most critical piece of infrastructure is actually the power supply, because when the power supply disappears, then we have a problem, and of course the whole city of Hanover runs into difficulties very quickly. As modern societies have become more efficient, in many ways they’ve become less resilient. One of the things that happens when the power goes out is that people go and empty the shops. And while governments may say: ‘We don’t want you to do that’, it is an entirely rational behaviour for each of us, and so shops empty very quickly. And because we now have just-in-time supply lines, supplies of food and water can run out very quickly.

You can think about infrastructure in terms of the built infrastructure, and in terms of the natural infrastructure - by which I mean human, animal and plant health, the geology of the planet (so earthquakes and volcanoes), and of course weather and climate. One of the places where those 2 come together is climate and energy, and I’ll come back to that.

The second thing I worry about - though I worry a lot about all those areas - is in particular how we can have the most resilient, secure and safe infrastructure. When it comes to knowledge translated into economic advantage, I think the challenge for government is to provide the best policy environment and the appropriate use of funding derived from taxpayers, to bring together academia and industry, and to get the right policies. Where I think having a Chief Scientific Adviser in place is very, very important is when emergencies arise. We had some discussion earlier this afternoon about Fukushima and the effects it has had on people’s attitudes to nuclear energy throughout the world.

There is a committee in the UK called SAGE, which stands for Scientific Advisory Group for Emergencies. It is chaired by the Government Chief Scientific Adviser, and when there is an emergency involving science in the UK - or indeed anywhere in the world with the potential to affect UK citizens - that committee is called together very quickly.

So after Fukushima happened, the question was what advice to give to UK citizens living in Japan. It was my predecessor John Beddington who was able to assemble, in a matter of hours, a group of experts. These experts knew first of all about radiation and nuclear energy; they knew about plumes - because any leak from the reactor was going into the atmosphere as well as into the immediate surroundings, and so you could model things; and they knew about weather, because you need to know which direction the wind is blowing to know where a plume will end up. As a result of a very fast piece of work, they were able to work out that the risk to UK citizens living in Tokyo from increased exposure to radiation was negligible, and very clear advice was given rapidly through the British Embassy that they should stay put. That was actually very helpful in calming the situation in Japan as a whole. It’s a good illustration of the importance of science in diplomacy, because we face so many common challenges in Europe and around the world, in some of the big issues of the planet, where science is very important. And we underuse science in diplomacy.

But finally, the role, as Wilhelm has said, is to try and underpin policy with the best evidence wherever one can, and to provide a leadership role for science.

Risk

Turning now to the main topic of this session, which is risk - and risk in relation to innovation, as I will explore in a minute. You’ve only got to be in the job for about a day or so to realise that almost every aspect of it, and indeed a large amount of the work of government, is actually thinking about and managing risk. Risk comes from natural events, from natural hazards, and here are 3 examples of more or less contemporary issues:

One example - and this refers to the volcano in Iceland that erupted a few years ago, although of course there’s a volcano erupting at the moment - is: is it safe to fly an aeroplane through a volcanic ash cloud, and how do you actually manage the risks associated with that? A second example is the challenge to animals from diseases such as bovine tuberculosis. And the third - and this was my first experience of the government emergency procedures, about a year ago - is the sort of risk one faces when you have floods. Of course, one of the important questions for government is: how do you balance your investment in flood defences on the one hand against mitigating the risks of, for example, bovine tuberculosis on the other?

So there are all sorts of questions if you’re in government - or indeed in any other walk of life or in industry - about how you actually use risks to prioritise your investment. And then of course there are all the human-induced threats. Here, you see the threat of terrorism, and of course science is very important in that - understanding the means that terrorists use - and the human sciences are as important as the physical and biological sciences here. Then there are the risks that we impose on the environment through the measures we have taken to adapt it, to enable more than 7 billion people to live on the planet. The reason we have been so successful is that we have modified our environment, but of course the price of that is the 10 gigatonnes or more of carbon that we’re emitting into the atmosphere each year. There are the challenges that go with feeding populations where, on the one hand, you have the use of pesticides - and I’ll talk about some of the challenges of all of this in a minute - and, on the other hand: could one reduce the use of pesticides by making plants that are genetically modified and therefore able to resist pathogens and other damage much more effectively?

Putting this all together: companies have risk registers, foundations have risk registers, and governments have risk registers, and so the UK has a national risk register. This is a publicly available document, and it works in the way any other risk register would: it looks on one axis at the impact of an event, and on the other at the likelihood of an event. You can see that near the top right of this figure are pandemic human diseases, and that is, I think, obvious to everyone. Influenza is something that we worry about, and at the moment we have a very unpleasant Ebola epidemic in West Africa. This does not pose a global threat as things stand, but it is a very, very serious problem - as the WHO and the UN have been making clear - in terms of what’s happening in West Africa, with all of the challenges to civil society, and the disruption of civil society, that come with a severe infection.

Another one, which I think is much less well known, is the effect of severe space weather - that is, a solar coronal mass ejection, which causes geomagnetic storms on the Earth. This has the potential to disrupt telecommunications and electricity supplies: power lines, transformers and communications networks. So that’s another area where, actually, there’s a great deal of collaboration between scientists around the world to try and understand how we can measure, predict and mitigate the effects of a burst of space weather. In fact, there has been a bit of space weather recently, with the aurora borealis - a sort of visual cue that charged particles are coming in from a mass ejection. And as I say, these are all important things.

There is only any point in having a risk register if you use it, and you use it for 4 purposes. First of all, to try and prevent things happening. Secondly, in the event that they do happen, to mitigate them. Thirdly, to handle them when they happen. And finally, to clear up afterwards.
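
[Editor’s illustration] To make the impact-versus-likelihood framing above concrete, here is a minimal sketch of how such a register can be represented and ranked in code. Everything in it - the `RiskEntry` structure, the 1-to-5 scales, the entries and their scores - is an illustrative assumption, not the methodology or content of the UK’s actual national risk register.

```python
# A minimal, illustrative sketch of a risk register: each entry is scored
# on the two axes described above, likelihood and impact. The scales and
# numbers are invented for the example.

from dataclasses import dataclass

@dataclass
class RiskEntry:
    name: str
    likelihood: int  # 1 (rare) to 5 (very likely) - illustrative scale
    impact: int      # 1 (minor) to 5 (catastrophic) - illustrative scale

    @property
    def score(self) -> int:
        # A simple likelihood-times-impact score used for ranking
        return self.likelihood * self.impact

# Hypothetical entries, loosely echoing the examples in the talk
register = [
    RiskEntry("Pandemic human disease", likelihood=4, impact=5),
    RiskEntry("Severe space weather", likelihood=2, impact=4),
    RiskEntry("Coastal flooding", likelihood=3, impact=4),
]

# Rank the register so that prevention, mitigation, handling and
# recovery effort can be prioritised against the biggest risks first
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: score {risk.score}")
```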

Risk and hazard

One of the challenges - and I think we’ve already come across this, this afternoon - is the challenge of risk competence. One of the real problems here is the use of language, because there are some very important questions here. I think this illustration of the bird, the plover, sitting in the jaws of the crocodile illustrates most of the issues around the terminology. The question you might ask is: what is the risk to the plover of being in that precarious position in the crocodile’s jaws? The hazard is fairly obvious: it is the teeth of the crocodile and its extremely strong jaw muscles, which could clamp down and cause irreparable damage to the plover. That’s the hazard. But the hazard is not the same as the risk. The difference between hazard and risk is exposure. So the question actually is: is that plover exposed to the hazard of the crocodile’s jaws? If it were our head in there, then the answer is that we would be highly exposed. But the reality is that that plover knows it is not at risk of exposure here, because it co-exists with the crocodile - it cleans the crocodile’s teeth. There’s a very important distinction in there. In fact, the vulnerability of the plover is very low - it’s not vulnerable - whereas a small mammal, or a large mammal in the case of a human, would be extremely vulnerable to the crocodile.

On top of all of this we suffer from uncertainty - in other words, we do not, or might not, know about those relationships - and the term ‘threat’ really applies to threats of human origin. The fundamental point here, which I think comes back again and again, is that we are sometimes in danger of thinking about, and trying to regulate and legislate for, hazards rather than risks. Our houses, for example, are full of extremely hazardous substances: our kitchens have knives in them, we have bleach, and there are electric devices that can catch fire. These are all extremely hazardous, but they pose us very little risk indeed, because we manage our exposure. What we have to distinguish between is the hazard and the risk. At some level, that was illustrated with Fukushima and the question about citizens living in Tokyo. The hazard of radioactivity is very well known, but what actually matters is your exposure to it. And if your exposure is very low, then the risk is very low. We confuse those terms all too often, and I think that unless we’re able to have that very clear conversation and understand the terms, then we’re in danger of not having good conversations about risk.
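
[Editor’s illustration] One common schematic way of capturing the hazard/exposure/vulnerability distinction - a standard formulation from the disaster-risk literature, offered here as an illustration rather than anything stated in the talk - is:

Risk = Hazard × Exposure × Vulnerability

On that reading, the plover’s risk is low not because the hazard of the crocodile’s jaws is small, but because its exposure and vulnerability are effectively nil; the same hazard with a human head in the jaws yields a very different risk.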

Innovation

Let’s look now at innovation. As I’ve already said, there are only this many humans on the planet because of innovation. We are only in this room, and able to live comfortable lives, because of innovation. And of course it was power, above all, that drove the industrial revolution and drove us into the modern societies that we are now. But just imagine the discussion we might have now if electricity had not yet been discovered and applied: you’ve got this stuff that you’re going to put in wires in our rooms, which has the potential to give us an electric shock that will make us drop dead. You can just imagine the discussion. The hazard of electricity is enormous, so how would we have been able to think about and regulate the innovation of electricity in 2014? What would different countries around the world think about it?

We’ve done an enormous amount of innovation and, indeed, much of our modern society has depended on people thinking about risk in appropriate ways - or sometimes a lot of it happened before people even started thinking about risks very clearly. But it is innovation that has got us where we are: it’s about power, about electricity supplies, about all the improvements in healthcare and the development of drugs. Would we have introduced aspirin 200 years ago if we had known all of its side effects? It’s about mass production systems, and it’s about better transport links.

So innovation has been key to our lives, and of course we’re effectively going through a second industrial revolution at the moment. This is the industrial revolution of the IT age which, again, we take for granted but which has the potential to change our lives in the most extraordinary ways. We’re really only at the beginning of that, and actually one of the things I am doing came from the British Prime Minister’s visit to the CeBIT fair here earlier in the year: he talked about the importance of the internet of things and announced a review which I and my office are leading for him in the UK.

We’re moving towards a world where, in the internet of things, almost all of the objects we live with have microchips in them, and they have the potential to be connected to wireless networks so that they can connect with one another. We’re moving into a ubiquitously sensed world, and of course the internet of things is already here, because I suspect that everyone in this room probably has a smartphone - and that of course uses GPS and is a very powerful location tool. We are an extraordinarily innovative generation; our demographics are changing, our interconnectedness is changing, and of course that brings its own opportunities and threats as well. And, frankly, we can only move forward if we continue to innovate.

Sometimes innovation is looked at in terms of causing risk, whereas actually the reality is that we need innovation to reduce risk. If we don’t innovate, the way we use energy at the moment is going to cause terrible problems for future generations of humans. We face the challenges of climate change and of resource shortage - water shortage, but shortages of other resources too - and challenges to food and agriculture. If we are going to be able to live well, and if our children and their children are going to be able to live well, then we require innovation.

There are different types of innovation, and I think it’s quite helpful to think about 4 categories of innovation in terms of how we think about them and how we think about the risks. This is because one of the mistakes that scientists sometimes make is to think that the science is the sole issue, whereas actually it’s about much more than science: it’s about values, it’s about costs. It’s actually about how science meets society.

Cost

So here’s the first form of innovation in association with risk - one where we’ve actually largely worked our way through the issues. If you take the development of new medicines, those are innovations that are very widely accepted. We’ve worked out reasonably effective means of regulation, and I’ll come back to those in a moment. From time to time there are significant challenges about who should pay - a good example of that is medicines and vaccines in the developing world. But as a form of innovation this is one that is widely accepted, and while there may from time to time be specific questions, it is a form of innovation that people don’t worry about.

Values

Here’s a second form of innovation, which causes all sorts of problems, and this is what I term ‘science meets values’. There are 3 examples here: genetically modified organisms, embryo research and nuclear energy. In each of these cases - and I’ll come back to the science and how I think we need to discuss it in a moment - there’s the issue of the science on the one hand, and then there’s the issue of the values and how people think about these technologies. Here, frankly, there are no universal values - there can’t be, actually. There are cultural values, there are religious values, and different societies take a different view.

I think that raises an important question in a confederation of countries such as the European Union where, inevitably, because of our histories, we have different views on each of these technologies. The value systems came up earlier this afternoon: embryo treatments are much more accepted in the UK than they are, for example, in Germany. I think the challenge for the European Union is how we actually manage these scientific issues where the values are contested and differ in different parts of Europe. And that’s not a question for the scientists; it’s a question ultimately for citizens as a whole and for our politicians. But - and I’m not trying to make value judgements - I think the challenge here is: do we go with the country that is most allergic to any particular technology, do we go with the other extreme, or do we find a sort of middle place? That is a challenge for a confederation or grouping of countries, and that, I think, is the second form of innovation in relation to risks.

‘My pain, your gain’

A third category, which I think everyone will recognise, and which is important in infrastructure as well as innovation, is: my pain, your gain. My risk, your benefit. For example, a high-speed train line is built adjacent to my house and I have to go 100 kilometres to get on it. How do we balance that? There are quite a lot of situations like that, where you have a piece of infrastructure - usually physical, and often associated with innovation, be it a wind turbine or a nuclear waste disposal facility - which imposes risk in one place and delivers benefits in another. What I’m trying to get across here is not that I’m giving you the answer to each of these, but that I think we’re only going to have the best discussion if we think about the taxonomy in such a way that we can really understand that the issues here are not purely scientific. It is about what you’re balancing, and what citizens and their democratically elected politicians have to balance in order to make these societal decisions.

Unintended consequences

The final example of innovation in relation to risks - there may be others, and I’m happy to discuss this afterwards - is, I think, ‘unintended consequences’: a new technology is very widely and very rapidly accepted, but then you start discovering some of the problems afterwards. Smartphones would be a very good example of that. We are all extremely wedded to our smartphones, and they give us an enormous amount of personal benefit, but then there are all the challenges that have come with this extraordinary revolution in information technology and the emerging internet of things. We can’t anticipate every consequence, but we have to think about how we manage them. And I think that if we start thinking about innovation in those 4 different ‘buckets’ - recognising that of course there’s overlap between them, that sometimes it’s actually a combination of values and ‘my pain, your gain’, and I’ll show you an example of that in a minute - then I think we’ll have the best discussion.

Communicating risk

Coming back to what has been a theme of this afternoon: we have to, I think, have a better and more effective conversation about risk. Risk is absolutely a societal issue, and I think there’s another interesting point here. I was in New Zealand just a few weeks ago. New Zealand is prone to earthquakes, and its citizens know that if there’s an earthquake they’re potentially on their own for 5 days. So there’s a discussion there which means that they understand the risks associated with a natural hazard - earthquakes, in this case. I think that certainly in some countries - in the UK, and possibly in Germany - our citizens tend to think that they are just completely safe and that government will look after them. I think there is a discussion to be had about the fact that we actually live in a world where there are significant risks, and not all of the resilience can be provided by the state on your behalf. So that, I think, is part of the discussion as well.

There’s the issue of language, which I’ve already talked about - just actually understanding the terms, because if you don’t understand them, if you think that hazard and risk are the same thing, then I think you’re going to have a very bad discussion. There’s the issue of ‘lenses’, and I’ll come back to that in a minute, and there’s the issue I’ve already raised about who gains the benefit and who carries the risk. There’s the discussion about transparency: is transparency simply the answer - just put all the information out there and people can decide? That can’t be the complete answer, because some of this material does require expert knowledge. Of course, that’s not an argument against transparency; it’s an argument for meaningful transparency. Simply putting a load of numbers out there doesn’t help anyone: it’s all about turning numbers into information, and information ultimately into knowledge. And then I would argue that widening that conversation is a democratic necessity.

What is it that science can contribute? One of the most important things science can contribute is the best possible assessment of the state of scientific knowledge, while recognising something very important: that scientific knowledge is contingent. In other words, we know what we know at a particular time, but as new facts are discovered, new observations are made and new experiments are done, we will learn more, and we may change our views about some things or have our views reinforced about others. One of the things that’s very important - and I think it’s why the IPCC is so important - is a methodology that actually came out of medicine. In medicine, which is my field, things have advanced through a series of clinical trials and relatively small experiments. Can you decide whether to use a new drug based on a single clinical trial? Very, very rarely indeed. What you need to do is look at all of the evidence and do what’s called a meta-analysis, and it’s that sort of meta-analysis, that sort of evidence summary, that is extremely useful and important for people like me. A large part of my job - while I don’t have expertise in all the sciences - is to act as the transmission mechanism between the outside world of science and the inside world of government. And there’s nothing that makes my life easier than a high-quality evidence review. So the power of the IPCC - the Intergovernmental Panel on Climate Change - is the rigour with which it has looked at all of the available evidence in relation to climate change and human emissions of carbon and other greenhouse gases.
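
[Editor’s illustration] The pooling idea behind a meta-analysis can be sketched in a few lines. This is a minimal fixed-effect, inverse-variance combination - one standard textbook approach, not a method described in the talk - and the trial figures are invented for the example, not drawn from any real study.

```python
# A minimal sketch of the meta-analysis idea: rather than relying on a
# single clinical trial, pool the estimates from several trials, weighting
# each by its precision (the inverse of its variance). All numbers below
# are hypothetical.

import math

# (effect estimate, standard error) for each hypothetical trial
trials = [(0.42, 0.20), (0.35, 0.15), (0.50, 0.25)]

weights = [1 / se**2 for _, se in trials]  # inverse-variance weights
pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# The combined estimate is more precise than any single trial provides
print(f"pooled effect: {pooled:.3f} (standard error {pooled_se:.3f})")
```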

Another, more local, example from the UK is work done by the Royal Society and the Royal Academy of Engineering. I think the learned academies play an important role as well, and my predecessor asked the Royal Society and the Royal Academy of Engineering to produce a report on hydraulic fracturing to extract shale gas - I’ll come back to that in a minute as well. These are all entirely non-controversial areas of science, as you will see! And then the third example - and this is an example of the European academies working together very effectively through a body called EASAC - is an excellent report called ‘Planting the future: opportunities and challenges for using crop genetic improvement technologies for sustainable agriculture’. These documents, produced by impeccable scientists working together, are very important as the scientific contribution to the discussion.

Another problem - and I think this is quite a common one - is that we tend to treat technologies in a generic way. It’s actually a completely ridiculous question to ask: ‘Is genetic modification a good thing?’, ‘Is nanotechnology a good or a bad thing?’, ‘Is synthetic biology a good or a bad thing?’, because any technology has the potential for both positive and negative uses. So if we want to have a sensible conversation about genetically modified organisms, it has to start being specific: what organism, what gene, for what purpose? If it’s a piece of synthetic biology, it would be fairly obvious that changing a bacterial toxin to make it more dangerous would not be a good use of synthetic biology. On the other hand, most people would think that adapting an organism so that it could, for example, sense arsenic contamination of the soil would be a good thing. We make the mistake of talking about technologies as though they were generic, when no technology is generic: they are all specific in their applications.

Policy lenses

My next point, which I think is also an important one, concerns policymakers - and the policymakers are the people we elect, our politicians. My job is to provide the scientific advice; it is for the politicians who have been elected to decide what to do with that advice. And what policymakers have to do is look at issues through a whole variety of different ‘lenses’. Let’s take one which is very well known: the energy question.

There are really 3 questions if you’re thinking about power supply. First and foremost is security of supply because, from what I’ve already said, if the power goes off then we’re in trouble. So security of supply is the first of the lenses. The second and third lenses that policymakers need to look through in relation to energy and power are sustainability - the challenges to the climate and the planet - and affordability, because it’s no use having a solution that gives you energy so expensive that no one can afford to pay for it. Any policy approach which looks through one of those lenses alone is unlikely to give you a sensible answer if you are a politician or policymaker; they have to look through all of those lenses to get to sensible policies. Again, what science has to realise is that it can contribute to one, maybe more, of those lenses, but ultimately all of these different lenses have to be looked through. In a sense I’ve already come to that with the discussion of the different forms of innovation, where you’re balancing the values, you’re balancing the ‘my pain, your gain’. Putting that together, let’s just take shale gas as an example:

There are really 3 science and engineering concerns about hydraulic fracturing (fracking). The first is: will it cause earth tremors? The second is: will you get contamination of the water table? And the third is: will there be fugitive release of the methane gas? (In other words, if the gas leaks then you lose its advantage over other fossil fuels.) What the science and the engineering tell you is that this is a drilling technology, and no drilling technology is completely risk-free. But if it is done well, if it is engineered well, if it is governed well, then it is as safe as any other form of drilling - recognising that there is no ‘free lunch’, that nothing is completely risk-free.

Those are the engineering concerns, and that’s what the Royal Academy of Engineering’s report said - and multiple other reports have all essentially said the same thing. But the public, or publics, who are protesting about fracking, at least in some parts of the world, are coming at it from a different angle. They’re coming at it from the values angle and from the ‘my pain, your gain’ angle. So there’s a group that dislikes fracking because they dislike fossil fuels; there’s another group that dislikes fracking because they just don’t like big companies; and then there’s a third group who just don’t want the inconvenience of having something industrial happening in their back yard. I think that unless we understand and analyse that set of issues, we’re in danger of having completely the wrong conversation and actually talking past each other. So analysing what is going on around this sort of public discussion of new technology is very important.

Regulation

One of the things that we have evolved over the years is regulators, as a way of solving these problems for us. There are some interesting challenges here, and I think, again, it goes back to a discussion we’ve been having before, which is: can we solve all the problems around innovation with legislation? The answer is that it is extremely difficult to do that, because legislation is very precise and, in being so, a rather blunt tool: you can’t specify every possible option in legislation. ‘All or nothing’ would be an exaggeration, but it’s actually a bluntish tool. So people use regulation, and the question then is how you regulate in the most effective ways. I think there are 3 challenges for regulation, although I believe - and I will show you in a minute - that it is possible to get regulation right if you do it in a very thoughtful fashion.

The first challenge is regulation in those industries where there isn’t a normal free market and where there are semi-monopolistic suppliers - and in the UK many of our utilities would fall into that category. We haven’t got multiple different water suppliers bringing their own competitive pipes to our front door, and so that has to be regulated. The challenge is whether the economic regulator then has the correct incentives: the incentive it does have is to get us the cheapest water we can get. But there are other challenges, because if we’re going to have the sort of innovation that we need, if we’re going to have a water system as well designed as possible, we need other things as well. You need not only a secure supply of water; it needs to have resilience against shocks. And it needs to innovate, so you need research and development and the introduction of appropriate innovation to deal with ageing infrastructure. One of the challenges for every part of the world that has had its infrastructure for a long time is that it is tending to age, and what we need to do is extend the life of our infrastructure but also ensure that we can evolve it into the infrastructure that’s going to last us the next 100 years. And of course much of our UK infrastructure - in London, for example - has until recently still been dependent on the great sewers built by Victorian engineers.

So the question is whether the economic regulator has the right incentives, or whether, by being strict on economic regulation alone, it has the potential to drive out innovation, research and development, and resilience. The second problem of regulation is something, again, that we’ve already discussed this afternoon, which is the challenge of what I call ‘asymmetric incentives’. What I mean by that is that if you are a regulator deciding whether or not to allow something to happen, you will get into terrible trouble if you allow something to happen that causes harm, but you will not get into trouble if you stop something happening that would have done good. That’s one of the challenges for regulators, and it’s not an easy one to solve. The only way you can probably do it is by finding mechanisms for making the people who take these decisions accountable for all of them - both decisions to let something happen and decisions to stop something happening.

The third challenge for regulation is encrusted or burdensome regulation - in other words, the accretion that has tended to happen over time. Drugs regulation is in some ways an example, although I think I’m potentially being a little unfair there. We’ve developed systems of regulation which have become more and more elaborate: we’re very good at adding to regulation, but we’re quite poor at subtracting things which turn out to be redundant or unnecessary. I don’t want this section of my talk to be taken in any sense as a generic criticism of regulation, because I think regulation is extremely important. The challenge is actually to make regulation work in the most effective fashion.

Precautionary principle

That takes me briefly to the precautionary principle, which is something that is discussed a great deal in Europe, and where I think there is a danger of misapplication. The first thing to say is that there is no single definition of the precautionary principle. Across lots of international agreements and organisations there have been different definitions over the years; there isn’t one, as it were, biblical interpretation of the precautionary principle. The EU communication from 2000 describes an approach ‘based on the fullest possible scientific evaluation of a hazard, a decision to act or not preceded by risk evaluation of those hazards, and the consequences of inaction’ - so this does distinguish between hazard and risk - which ‘involves transparency, and takes into account the principles of risk management, proportionate response, risk-benefit analysis, cost of action or inaction reviewed in light of new science’. Described in that way, it must be an entirely sensible thing to do. But the question is how it is actually interpreted, and I think, increasingly, there’s been a tendency to use the precautionary principle as a red light that says ‘Stop’, full stop, as opposed to ‘We need to pause; we may need to find things out’ - it’s not an absolute go/no-go. So of course the precautionary principle makes perfect sense, but the question with anything like that is: is it applied in an appropriate fashion?

‘The thoughtful regulator’

Of course, at the end of the day you can only do that if you actually understand the terminology - and that goes back to the capacity to have good risk discussions. Let me turn to an area that is contentious in this audience, but one where I think the UK has achieved a form of regulation that serves us well. This is the area of embryology and fertility - fertilisation treatments of various sorts, IVF for example. Many years ago the UK set up a body called the Human Fertilisation and Embryology Authority, and this is an example of a regulator that has worked in an area of emerging science. That is one of the challenges with innovation: innovation doesn’t come fully formed, it emerges - there are new experiments, there is new innovation, things are tried, some things work and some don’t. How do you actually legislate and regulate in an advancing field of innovation?

So the model here is a ‘thoughtful regulator’, and it basically assesses the policy issues related to fertility regulation, in an area which is undoubtedly ethically and clinically complex. It takes scrupulous care in assessing a wide range of data and evidence - it does evidence reviews, it looks at what’s going on - and it has a comprehensive programme of public engagement, so it’s part of the discussion with the public. This is a regulator that works openly, in consultation. It’s been very effective at building trust in the community, and of course it’s worked closely with legislators in Parliament, because from time to time new regulations or new legislation are needed to allow something to happen. So there’s a very good interplay between Parliament and a regulator which is, if you like, a ‘thoughtful regulator’ - and I’m sure all regulators are thoughtful, but there are certain processes it goes through that I think make it extra thoughtful.

Now, it’s absolutely clear that this is a value-laden area. I supported some of the research in this area when I was at the Wellcome Trust - for example, the treatment of mitochondrial diseases is something that’s now becoming possible through research. This is highly contentious: there are some people who, for very strong, personally held reasons, believe that this is the wrong thing to do. The challenge there is not a scientific challenge; it is actually a challenge for democracies. That’s why we have democracies, and ultimately it is our legislators who have to decide what to do. But this is a way of having the best conversation, and as a result, in relation to values, this is an area where the UK has, I think, reached a very strong position, though I recognise that some people will disagree.

Conclusion

And that takes me, in some senses, to how I got here, and I think I got here through 2 journeys really. The first was my time in medicine: I chaired a research ethics committee at Hammersmith Hospital, Imperial College, many years ago, and then spent 10 years as Director of the Wellcome Trust, having to think about some of these areas. And then I came into the world of government, where you suddenly discover that it’s all about risk: how do you think about risk, how do you manage it, how do you provide the scientific evidence, recognising that it is contingent and evolving? But policymakers can’t always wait. Scientists sometimes say: ‘Well, terribly sorry, minister, that’s the wrong question’, or they say: ‘Yes, minister, that’s a very good question; if you give me £5 million and 10 years I’ll be able to give you the answer’ - but actually the policymaker needs to make a decision rather sooner than that.

So what I’m doing, and will be publishing fairly soon, is producing an annual report. This copies an idea from my close colleague Dame Sally Davies, the Chief Medical Officer in the UK, who produces an annual report each year: she’s just produced one on mental health, and fairly recently she produced one on antimicrobial resistance, which I think is recognised around the world as one of the big scientific challenges facing us. We took antibiotics for granted, but it is the nature of Darwinian selection that a microorganism exposed to an antibiotic will evolve changes that make it resistant. I’ve taken a slightly more open topic for this year: managing risk, and not, as it were, ducking it. Hopefully that will be published sometime in the next couple of months.

But, as part of that work, I think we have to have a stronger European discussion and a strong global discussion, and I would argue that one of the challenges for Europe is for the scientific community to come up with the best advice to policymakers, both nationally and in Brussels, so that we can have the most effective and thoughtful legislation and regulation. I think one of the problems we have in science in Europe is the transmission mechanism between the scientific community and governments, the Commission and the Parliament. That is one of the challenges, and I don’t think there’s an easy answer: there isn’t a ‘one size fits all’ approach to how scientific advice gets into government. I happen to be quite attached to the UK model, for fairly obvious reasons, but it isn’t, I think, the only model. So, to finish, I think we do need a wider and more thoughtful debate and engagement on the subject of risk: risk as a whole, actually, but particularly risk in the context of innovation, where we frankly need innovation if we’re going to manage some of the grand challenges that face the global population of humans and all the other species with whom we share our planet.

I think we should improve our understanding of the different forms of innovation and how they result in risks, and so I think that discussion about taxonomy is quite important; otherwise we can’t have the best conversations. I do think we want - we need - more adaptive forms of regulation, recognising that regulating in an evolving field is very different from regulating something which is well established. And I think we need to look at the balance of incentives to work out how we’re actually going to manage the ‘my pain, your gain’ problem and all the other issues around innovation which touch on people’s values of equity and fairness. And I think that’s probably the best note to end on. So thank you for your attention.

Published 17 October 2014