The Safety of Work

Ep.46 Is risk compensation a real thing?

Episode Summary

Welcome back to the Safety of Work podcast. Today, we discuss risk compensation and decide whether or not it actually exists.

Episode Notes

We are fortunate to have a few resources we can reference for today’s topic. Please see below for links to the papers we mentioned in our conversation.





“...I think this is the sort of phenomenon that causes people to believe in risk compensation.”

“Basically, what they’re saying is, if there was a real effect, it would be robust regardless of how you crunched the data.”

“Just because someone does lots of citing of literature or quotes from scientific literature, doesn’t mean that their interpretation of that literature is rigorous and scientific.”



Bicycle Helmets and Risky Behaviour: A Systematic Review

Risk Compensation Literature - The Theory and Evidence

Driver Approach Behaviour at an Unprotected Railway Crossing Before and After Enhancement of Lateral Sight Distance

The Effects of Automobile Safety Regulation

The Theory of Risk Homeostasis

Episode Transcription

David: You're listening to the Safety of Work podcast episode 46. Today, we're asking the question, is risk compensation a real thing? Let's get started. 

Hey, everybody. My name is David Provan. I'm here with Drew Rae, and we're from the Safety Science Innovation Lab at Griffith University. In each episode of the Safety of Work podcast, we ask the important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it.

Drew, you wanted to do this question today. Do you want to start by telling us a little bit of background and what the question is?

Drew: David, we've talked before about the concept of deepity. Deepity is something that's got two meanings. One of them is trivial and true, and then there is a deeper meaning which sounds profound but is actually false or nonsensical. 

Today, we're going to be looking at a deepity that often comes up in safety called risk compensation. Sometimes it's also called risk homeostasis, and in the economics literature it got called the Peltzman Effect.

Risk compensation is the idea that when we introduce protections to make people safer, people adjust their own behavior in order to compensate. It's got a trivial, true meaning. When we perceive risk differently, we behave differently. That's fairly obvious and it's true. But that's not what the proponents of risk compensation claim. They go further and they claim that there is a fairly strong and consistent effect where people change their behavior in response to safety improvements. They change their behavior so much that it cancels out most or even all of the benefit of the safety improvement. 

It gets used for things like vaccines, bicycle helmets, or seatbelts, and claims that these safety improvements are not in fact improvements because people change their behavior to compensate. Just to be clear, this is nonsense. 

In this episode, we're going to go through the history and theory of risk compensation, and then we're going to look at the evidence. Because not everyone listens all the way through a podcast, I don't want anyone to drop out halfway through when we've explained the theory but not the evidence. That is why I'm giving the spoilers upfront and saying that the theory doesn't make sense. It doesn't even make sense theoretically, and it certainly doesn't make sense once you look at the data. But it's a good illustration of what happens if you just cherry-pick one or two studies about something.

There are definitely studies that exist that show that risk compensation works. That is why it’s really important when we’re talking about evidence-based safety to examine the body of evidence rather than just one or two papers. When you look systematically at risk compensation, the effect disappears. 

Before we go into some of the details, David, anything you’d like to say about your own experience with the idea?

David: Yeah, I thought about two examples, Drew. These might be more aligned with how behavior gets modified when risk is perceived differently, as opposed to the more general theory of risk compensation, but we're seeing a lot of new safety equipment in (let's just say) sporting endeavors like extreme sports. I was thinking of big-wave surfing: surfers now have inflatable vests, jet skis, and portable oxygen canisters for when they get held down by big waves. I think we've been progressively seeing these athletes put themselves in more and more extreme situations because of the measures that are available. I thought that would be one of those examples where the level of risk may stay quite static because of the balancing effect of taking on more risk as more controls become available.

The other one that I thought of is a conversation I had with a manager at a site once. When I asked him what he was most worried about, he told me he was worried that his people thought the site wasn't dangerous. This was a large major hazard facility, and because we talked to the workers so much about safety (so much paperwork about safety, so much safety training, so much safety messaging), he was of the view that the workers were going to be less cautious in their work because they thought things were much safer than they actually were.

Those were two examples. I don’t know if they’re in line with the broad theory of risk compensation or just about risk perception. 

Drew: Thanks for the examples, David. I think those are the sorts of things that show the difficulty in thinking around this topic, because we can have things that are very self-evidently true. If you have a site where people are complacent, that's probably more dangerous than if people are alert to the risks. We can have situations of extreme risk tolerance, where people use risk protection as justification for taking more and more risk as they push the envelope. Those are definitely real effects that happen.

One of the problems with risk compensation is the way that people make lots of arguments by analogy from some of these corner cases into claiming that there’s some sort of a universal general effect. 

Let's go through the history of the idea, and then we'll talk about a couple of the papers that weigh up the evidence. Oddly enough, like a lot of ideas about risk, this one comes from economics rather than from safety science. People may know that until relatively recently, economists liked to model humans as if we were all hyper-rational actors. The idea is that each human has a set of utility functions, adding together all the positives and negatives of any action we might take. We take an action when the positives outweigh the negatives.

Lots of modern economists disagree with that idea, but it was very, very popular in the 50s, 60s, 70s, even into the 80s a bit. My utility function for buying a chocolate bar might be the positive value of enjoying chocolate, minus my fear of gaining weight, minus the cost of the chocolate bar. The economist would say that if my fear of gaining weight increased, or if the price of the chocolate bar increased, then I'm less likely to buy the chocolate bar.
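As a side note, the rational-actor model in the chocolate bar example can be sketched in a few lines of code. This is purely a hypothetical illustration; the function name and the numbers are invented for the example, not taken from any economics paper.

```python
# In this model, a rational actor buys the chocolate bar only when the
# summed positives outweigh the summed negatives.

def net_utility(enjoyment: float, weight_worry: float, price: float) -> float:
    """Add together the positives and negatives of buying the bar."""
    return enjoyment - weight_worry - price

# Baseline: the positives outweigh the negatives, so the model says "buy".
assert net_utility(enjoyment=5.0, weight_worry=2.0, price=1.5) > 0

# If the fear of gaining weight (or the price) increases enough, net
# utility turns negative, and the model predicts we skip the chocolate bar.
assert net_utility(enjoyment=5.0, weight_worry=4.0, price=1.5) < 0
```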

A guy called Sam Peltzman applied this idea in 1975 to traffic regulations. His argument was that as car safety increased, the expected cost of dangerous driving decreased, so drivers were more likely to engage in what he called driving intensity behaviors. That covers everything from drunk driving to speeding to just carelessness. He lumped these all together and said that as the car gets safer, driving intensity increases.

There is, of course, a massive problem with this argument, which is that there was zero evidence that driving intensity had in fact increased, and there was clear evidence that the number of traffic fatalities was decreasing over time, particularly in response to a big raft of legislation that the US had put in place in the 1960s to improve traffic safety. But that didn't stop Peltzman, so he conducted a detailed statistical analysis. David, I know how much you love statistics. Did you have a look at the Peltzman paper?

David: I actually didn't. Well, I scanned it but I went very fast through the statistics. 

Drew: The paper is filled with tables, and into pretty much every table go heaps of assumptions and his justifications for why those assumptions make sense. He says things like: the private value of safety is increasing along with wage inflation, because as people get richer, they care more about dying. He claims that although we can't measure how fast drivers are actually going, we can use the speed limit as a measure of whether drivers are going faster or slower. He thinks that arrests for drunk driving are a good estimate of how much drunk driving there was, ignoring the fact that when there's a blitz against drunk driving, that's when you get all of the arrests. Hopefully, you get the picture.

If you accept that all of these assumptions are correct, then maybe you can get to the same claim that he did: even though the number of deaths is going down, safety regulation has nothing to do with it. That was the original idea, the Peltzman effect, 1975. He used this argument about traffic safety as a broad argument. He said that there's a psychological effect where, as technological safety increases, driving intensity (or whatever risk behavior we're talking about) increases, and the two cancel out. But keep in mind that there isn't evidence that dangerous behavior does actually increase. All the evidence comes from this statistical analysis purporting to show that the risk hasn't decreased on balance.

So then we come to Gerald Wilde. In 1982, he proposed an even stronger version of the theory that he called risk homeostasis. He removed all the rest of the utility function and said purely that individuals have a target level of risk that they are willing to accept. If the amount of risk they experience goes down, they'll start behaving more riskily to compensate. If the amount of risk they experience goes up, they'll start behaving more safely to compensate.
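The logical structure of that claim can be made concrete with a toy sketch. To be clear, this is our illustration, not anything from Wilde's paper, and every name and number in it is invented. Under strict homeostasis, behavior adjusts until experienced risk returns to the target, so halving the hazard simply doubles the risky behavior.

```python
# Toy model of strict risk homeostasis: experienced risk is the product
# of the environment's hazard and how aggressively the person behaves.

def experienced_risk(hazard: float, behaviour: float) -> float:
    return hazard * behaviour

def compensate(hazard: float, target_risk: float) -> float:
    # Homeostasis says behaviour adjusts so that
    # hazard * behaviour == target_risk, exactly.
    return target_risk / hazard

target = 1.0
before = compensate(hazard=2.0, target_risk=target)  # behaviour = 0.5
after = compensate(hazard=1.0, target_risk=target)   # hazard halved, behaviour doubles

# In this model, cutting the hazard changes nothing about experienced
# risk; the only lever left is target_risk itself. That is exactly the
# implication Wilde draws.
assert experienced_risk(2.0, before) == experienced_risk(1.0, after) == target
```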

The logical implication of that is that the only way to make someone safer is to change their target level of risk, not to change their actual level of risk. Now, that’s not me extrapolating. That's Wilde’s explicit claim, that nothing society does to improve safety by reducing risk will work. The only way to improve safety is to change the target level of risk that the population is willing to accept.

David: Drew, that is fairly close to some of the mainstream approaches we've taken with things like safety culture, where some of the programs over the last 20 or 30 years have actually been about trying to get people to care more about safety. Could I think of that as being what Wilde was saying, that getting people to really care more is a good thing to do?

Drew: If you're looking for a justification for why it might make sense to spend nothing on actually improving the safety of your worksite and spend everything on just blaming your workers for it and making them feel responsible for their safety, then this is the perfect theory for you because it does make that sort of behavior rational if you believe the theory. 

David: Okay, because we do talk a lot about organizations that have (in a way) tried to heighten the risk perception of the workforce, getting them not to tolerate a higher level of risk, or to tolerate lower levels of risk than they currently do, so I'm seeing pretty strong alignment between some of those conversations and this theory.

Drew: To be fair to the people who promote those theories, though, often it comes from a frustration that they've done everything they technically can do: they feel that they've already improved the physical environment, already improved physical safety, already provided the equipment. I think it's reasonable and sensible, if you feel that you've done all of that, to say that all that's left is to change attitudes and behaviors, because everything else has been changed.

Wilde is saying that everything up to that point is a waste of time. You're not improving safety by making the work happen on the ground instead of up in the air. You're not improving safety by getting people the right tools. You're not improving safety by getting them to wear a helmet. With all of those things, people just behave more dangerously to compensate. I think there is a difference between whether you see behavior change as an add-on to technical safety, or whether you see it as an alternative where you pick one or the other.

So how does Wilde get to this theory? He uses lots of arguments by analogy and very little direct evidence. I'm not going to go through and nitpick every bit of evidence that he uses, but one of the early examples is a 1964 experiment that measured skin conductivity for 20 drivers.

Now, I don't know about you, David, but I have tried to do experiments with things like skin conductivity using 2020 technology, and the number of artifacts and aberrations you get makes it near impossible to do field experiments well. We're talking 1960s technology, twenty drivers. The authors of that study basically concluded that the measurement technique had lots of unexplained variability, which doesn't surprise me at all.

Skin conductivity sensors spike when the driver moves their hands. The same driver doing the exact same course gets totally different results, or you get spikes with nothing external to the car that could possibly explain them. It's a really dodgy little study, and the authors were honest about its limitations. But Wilde picked that study up and claimed it provided strong evidence that drivers adjust their speed to keep the amount of risk they experience constant. He picked one study that doesn't show much and interpreted it as strong evidence for his theory.

David: Drew, in preparation for this episode, I sent you an article that I became very familiar with in the early 2000s. It was a Wilde and Ward paper from 1996, published in Safety Science, titled Driver Approach Behaviour at an Unprotected Railway Crossing Before and After Enhancement of Lateral Sight Distance. At the time, we were having a lot of trouble in Queensland with unprotected railway crossings, particularly around sugar cane fields with cane farmers and trucks. At certain times of the year, when the sugar cane is high, crossings don't give much visibility of the little cane trains. We were looking at the standard and the system around that and what we should do.

This study was pretty central to what we were looking at. What the researchers did was pick a particular level crossing that was obstructed by vegetation. They observed drivers going across that crossing: how fast they went on approach and how much time they spent looking for trains. Then they cut the vegetation back to provide more sighting distance for motorists.

In the results of the study, they said that people looked earlier, they didn't check as much once they thought they could see more, and they increased their speed. Wilde and Ward concluded in that paper that providing greater lateral sighting distance didn't do anything to change the actual risk level, because the drivers sped up. What do you think of that study, Drew?

Drew: I guess I'm in two minds. On the one hand, I think this is the sort of phenomenon that causes people to believe in risk compensation. People start with a protection that seems to make total sense (let's improve visibility), and then they discover, hey, the crossing didn't get safer. So they go searching for a theory that explains why the mitigation didn't work.

On the other hand, this study has lots of variables. It's got anomalies in the data that are not explained by any theory, including risk compensation. Most of the drivers kept looking to their right more than their left, and the authors just had no idea why that might be the case. If you've got a study with lots and lots of ways of interpreting the data, and lots of anomalies in it, then it's easy to claim that it supports your theory.

But if you were genuinely designing an experiment to test risk compensation properly, this is not how you would design it. It's an experiment designed to do something else; they can't explain the data any other way, so they use risk compensation as the only explanation. I think that's the problem. We've definitely got this real thing that happens: we put mitigations in place, and they don't work in the ways we think they will.

That's the same reason Peltzman started off as well. He was saying that lots of the people telling us to put seatbelts in cars, have speed limits, and enforce drunk driving laws were predicting big gains in safety, and we hadn't seen those big gains. So he went searching for a reason why. And I think that part is true: the people who were talking about seat belts were massively overestimating the effect they would have.

David: When I was reflecting on that paper again, having seen the preparation you've done for this, I thought, we're still talking about individual behavior driven by individual perceptions of risk, where, as you said right back at the start of the episode, behavior will change with the perception of risk. But that's not to say that if we put in level-crossing protection or grade separation, people aren't going to find ways to still hurt themselves, like driving through boom gates or driving up onto railway tracks.

Drew: Yeah, the history of level crossings tells us that nothing we put in place as a protection is going to work as well as we think it will, because people behave in weird ways. That's self-evidently true.

David: So that takes us through Peltzman and Wilde, and the theory in support of risk compensation and/or risk homeostasis. What is the counterargument?

Drew: The counterargument is fairly straightforward, is backed up by a lot of psychological evidence, and it comes in a few parts. The first is just that people are very, very poor at estimating risk. Any theory that says we carefully balance out our target level of risk needs to somehow take into account the fact that we don't know what our current level of risk is, so we can't possibly be regulating it accurately. The most we can do is regulate our perceived risk.

David: Yeah, I think that I read in the paper that people consistently underestimate the risk that they're exposed to while driving by as much as 40%. 

Drew: Yeah, and if we underestimate our risk, then that should work in the exact opposite way to risk compensation. It should mean that we are constantly taking more and more risks, whereas in fact we're not.

The second thing is that people's perception of risk goes up and down due to all sorts of things other than the risk itself. Among the things we do know is that regulatory measures, like asking someone to wear a seatbelt, can actually increase their perceived risk of driving. A driver wearing a seatbelt is reminded that driving is dangerous, so even though their actual risk is lower, their perceived risk has gone up instead of down.

Similar things happen with rollover bars on quad bikes: the presence of the bars reminds you of the risk that the thing might roll over. Those findings create a real problem for the theory that we regulate our risk in the right direction, because they suggest we would be regulating in the exact opposite direction.

It's fair to say that those psychological theories are much more robust than the theories presented by Peltzman and Wilde. But I don't think it's fair to use them as evidence against risk compensation, because that would ultimately be doing the same thing Peltzman and Wilde do: taking a theory and applying it to the data, when this is something we should be able to directly measure and observe.

I want to pull out a couple of papers that look at the evidence for whether risk compensation actually happens or not. The first one is by Levy and Miller. It's called Risk Compensation Literature - The Theory and Evidence, published in 2000 in a journal that, at the time, was called Crash Prevention and Injury Control. It is now called Traffic Injury Prevention. It's a fairly specialized journal, but it's definitely reputable.

As the title of the paper suggests, this is a literature review. It starts off as a sort of narrative review providing really quite a fair and balanced summary of the theory of risk compensation and the criticisms, and then it has a more systematic review style look at the evidence. Broadly speaking, it covers three types of papers. 

The first type is the stuff that directly sets out to replicate Peltzman's results. Some of these are really interesting because they used the exact same traffic data that Peltzman used. Some of them are more conceptual replications, where they look at other countries to see if the same thing can be found there. The results were really, really mixed. Some of the studies showed no compensation effect, some showed that there might be an effect, and some showed that there is probably an effect but it is small.

The conclusion, both in some of those studies and in Levy and Miller's review, is that whether there is an effect or not is highly sensitive to exactly how you build your model, exactly what data you include or exclude, and exactly what year you start or finish your analysis.

Now, alarm bells should ring when they say the existence of the effect is highly dependent on all of these things. Basically, what they're saying is, if there was a real effect, it would be robust regardless of how you crunched the data. The fact that it only shows up when you crunch the data in very specific ways is a good reason to believe that there is no effect.
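That robustness point can be illustrated with a small simulation. To be clear, this is a hypothetical sketch of ours, not anything from Levy and Miller's paper, and all the numbers are invented: we simulate a genuine downward trend in fatalities plus noise, and the estimated trend barely moves no matter which analysis window we pick.

```python
# A real effect should survive reasonable changes in how the data is
# sliced. Simulate fatalities falling by 2 per year plus noise, then
# re-estimate the trend over several different start/end years.
import random

random.seed(1)
years = list(range(1950, 1990))
fatalities = [500 - 2 * (y - 1950) + random.gauss(0, 5) for y in years]

def slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )

# However we window the data, the estimated trend stays close to the
# true value of -2. A "risk compensation effect" that appears only under
# one specific windowing would fail this kind of check.
for start, end in [(1950, 1990), (1955, 1985), (1960, 1990), (1950, 1975)]:
    xs = [y for y in years if start <= y < end]
    ys = [f for y, f in zip(years, fatalities) if start <= y < end]
    assert -2.5 < slope(xs, ys) < -1.5
```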

The next thing they did was move on to looking at the enforcement of specific rules, and they looked at some studies of individual behavior. These should be a bit less sensitive to the statistical crunching, but they still came up with very mixed results: some studies showing no effect, some showing no significant effect but maybe a small one, and some showing an effect that was small in size. But this is a 2000 paper, and we've cautioned on the show before about relying on stuff that is too old.

I have looked to see how much people have tried to follow up, and it seems that after a flurry of papers criticizing the idea and showing there is no evidence for it, no one has really done a strong follow-up study of the general idea of risk compensation. It was one of those fashionable ideas that turned out not to be true and disappeared, but it lingers around in a kind of zombie afterlife.

There have been a bunch of studies about much more specific things; a whole lot of them are in the HIV literature and in the vaccine literature. The one I'm going to pull out looks specifically at the question of bicycle helmets. This is typical of the type of thing that gets published. David, do you want to have a go at pronouncing these names? You're much better at names than I am.

David: I suppose the idea in the bicycle helmet literature on risk compensation is that if people are made to wear, or are wearing, bicycle helmets, then they'll ride their bikes more dangerously and we'll still have the same number of accidents and injuries. Is that where we're going here?

Drew: That's where we're going here.

David: Okay, thanks for the hospital pass. Mahsa Esmaeilikia, Igor Radun, Raphael Grzebieta, and Jake Olivier. They looked at this specific question of whether wearing bicycle helmets encourages people into riskier cycling behavior. 

Drew: This is much more of a systematic review. The typical process here is you search for papers using a set of keywords. You plot every paper that matches the keyword, get a couple of different authors to read the abstract, and select the ones that are relevant for your study. You then look at the methods and you classify the paper based on the quality of the methods. 

They found 23 relevant studies. Eight of them were based on surveys, so basically people self-reporting risky behavior. Eight of them were based on observational data, and seven of them were based on experiments. Fairly typically for this sort of literature, only three of the studies actually compared the same cyclists first without helmets and later with helmets, so there are very few studies that directly test the question.

So, 23 studies. Only two of them confirmed the idea of risk compensation, and both of them were from the same lab, so they shared authors. Eighteen of the studies, including all of the really good ones, didn't show risk compensation. In fact, 10 of them showed the opposite: people who wear a bike helmet exhibit safer cycling behavior than those who don't.

The authors are really careful. They say that the systematic review doesn't show risk compensation in this case, but it doesn't rule out the possibility of risk compensation as a general thing. It's interesting, though, that every time we test risk compensation in a specific case like this, it comes out with the same result. What doesn't exist in this particular case might still exist as a general thing.

I think to finish off, David, we've got an extract that I'll ask you to read (if you're willing) from a paper by Barry Pless, published in the journal Safety in 2016, called Risk Compensation: Revisited and Rebutted. This is mainly a commentary paper, but I think Pless does a good job of going through all the examples that get used to support risk compensation, and he explains how you should think about each of these individual studies: why you shouldn't just take an individual study and use it as good evidence. This is what he concludes.

David: Risk compensation theory is often used as a smokescreen by some who are opposed to tough safety measures such as helmet legislation and even voluntary helmet use. Policymakers may also use it as a crutch to avoid having to make hard decisions. If taken as proven, it could seriously inhibit prevention research because, logically, if all safety measures are offset by risk compensation behavior, then why bother with any of them?

However, this is how science works, or is supposed to work. We identify a problem, we formulate a theory or hypothesis as part of the solution, and then we test that hypothesis as best we can. The "best" means using the most powerful designs and measures available. We then publish the evidence from our studies and respond openly and fully to any criticisms. If we're forced to accept that our theory is wrong or has been disproven, then we go back to the drawing board and reformulate it.

In contrast, to the best of my knowledge, few have tried to test the risk compensation theory empirically. By empirically, I mean using a randomized trial design rather than methods that are essentially observational. Nor has anyone who is convinced of the truth of risk compensation theory offered any plausible interventions to promote safety other than somehow persuading people to reset their risk-taking thermostat at a lower level. But how this can be done is never explained. Those convinced that risk compensation theory has been proven rarely cite work that fails to support the risk homeostasis theory. 

We don’t often find discussions of the corollary, that fewer safety measures should result in less risk-taking. It is also noteworthy that so much emphasis is placed on examples from car crashes and far less on data from other injuries where the theory should work in the same way. Finally, and most damningly, the steady decline in injury death rates is either dismissed or left unexplained. 

Drew: Thanks for reading that, David. I think that's a pretty fair summary of where we're at once we look at the evidence. Do you want me to do some practical takeaways, or do you have anything to add?

David: Let’s get stuck into the practical takeaways. I’m interested to see where you’re going to go here. 

Drew: I think the first one is that risk compensation and risk homeostasis are really interesting ideas, and they're really attractive for people who want to sound like they know something about safety. But they're just not supported by evidence. So please don't propagate the misunderstanding. Don't use them as excuses or crutches when talking about risk.

The second one is: be particularly wary when people use theories like these as an argument against doing something that will improve safety. It's quite okay to ask for evidence for or against a specific control or mitigation, and we should. There's nothing wrong with saying: I don't think that would work; I think we need evidence; I think I need to know if this could be effective before we spend money on it.

But risk compensation has been used as if it is evidence against things: against vaccinations, against HIV prevention programs, recently against respiratory masks for COVID-19, and against improved safety standards for cars and quad bikes. It shouldn't be used as evidence against anything. At the very most, it should be a reason to go out and test something, to ask for evidence.

The third one is more general, and it's about how we go about reading scientific literature. Just because someone does lots of citing of literature or quotes from scientific literature, doesn't mean that their interpretation of that literature is rigorous and scientific. The whole point of pseudoscience is that people get away with it because it looks like science. Risk compensation theory basically draws all of its support from taking other studies and placing new interpretations over them, interpretations that perhaps even the original authors would not have agreed with. And it involves a lot of cherry-picking: picking out studies that support the idea and ignoring very similar studies that say the exact opposite. You have to look at the weight of evidence.

And the final one argues in the exact opposite direction. Just because the deep part of this deepity isn't real doesn't mean that there isn't a simple, true part: not all safety mitigations work as effectively as we think they will. I think that is always worth bearing in mind. It is always important to measure whether mitigations work as well as we think they do. It's always okay to ask for evidence. It's always okay to be skeptical, rather than just assume that because we've put a mitigation in place, it must work.

David: That is probably my takeaway from the level crossing paper. It is easy to think quite simply: let's just improve the lateral sighting distance so that people can see the trains earlier, and that's going to be better. But you really need to go back and check, because it may just mean that the cars speed up and the stopping time stays exactly the same.

I think, as you said, when we're making changes in our organizations that come down to how people perceive risk and then how they behave, rather than engineering controls, we're probably just seconding that point: measure the effectiveness of the mitigations we put in place, and check whether they have the outcome we intended.

Drew: Yeah. In the case of the railway, I would definitely not say let us not improve the sighting distance because people will just change their behavior to compensate. I would definitely say let's test this out on a few sites and measure the effect it has before we spend a fortune doing this at every level-crossing around the country. 

Anything else we'd like to know? Anything we should invite our listeners to do?

David: I'd really like to hear some stories from people who think they've observed this risk compensation happening with a particular risk control or safeguard they put into their organization, where they think it had the unintended consequence of changing behavior in a way they hadn't planned, and really didn't do anything to reduce the risk. I'd be really interested in those sorts of stories, Drew.

So, the question for this week was, is risk compensation a real thing? And your answer?

Drew: Well, it's definitely not real enough to use it as an argument against [...] safety. It probably is just real enough that we should ask for evidence about the efficacy of safety measures. 

David: That's it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Join in on the conversation on LinkedIn, or send any comments, questions, or ideas for future episodes directly to us at