The Safety of Work

Ep. 123: Is risk a science or a feeling?

Episode Summary

Join us for an insightful exploration of the complex world of risk perception and decision-making as we examine the foundational work of Paul Slovic, whose research has significantly shaped our understanding of how humans evaluate risk. Through the lens of Slovic's influential 2004 paper "Risk as Analysis and Risk as Feelings," we uncover the intricate interplay between analytical reasoning and emotional intuition in safety management.

Episode Notes

From the perceived control in everyday activities like driving, to the dread associated with nuclear accidents, we discuss how emotional responses can sometimes skew our rational assessments of risk. Finally, we explore the ethical and practical challenges of balancing emotional and analytical approaches in risk communication, especially in high-stakes scenarios like terrorism and public safety. The conversation touches on real-world examples, such as the aftermath of the September 11 attacks and the controversial discussions around gun ownership. We emphasize the importance of framing and narrative in conveying risk information effectively, ensuring that it resonates with and is clearly understood by diverse audiences. 
 

Discussion Points:

 

Quotes:

“Risk is analysis where we bring logic, reason, and science or data or facts, and bring it to bear on hazard management.” - David

“There may not be a perfect representation of any risk.” - Drew

“If that's the important bit, then blow it up to the entire slide and get rid of the diagram and just show us the important bit.”- Drew

“It's probably a bit unfair on humans to say that using feeling and emotion isn't a rational thing to do.” - David

“The authors are almost saying here that for some types of risks and situations, risk as a feeling is great.” - David


 

Resources:

The Paper: Risk as Analysis and Risk as Feelings: Some Thoughts about Affect, Reason, Risk, and Rationality

The Safety of Work Podcast

The Safety of Work on LinkedIn

feedback@safetyofwork.com

Episode Transcription

Drew: You're listening to the Safety of Work podcast episode 123. Today we're asking the question, is risk a science or a feeling? Let's get started.

Hey, everyone. My name's Drew Rae, and I'm here with David Provan. We're from the Safety Science Innovation Lab at Griffith University in Australia. Welcome to the Safety of Work podcast. In each episode, we ask an important question in relation to the safety of work or the work of safety, and examine the evidence surrounding it. We've talked about risk management a few times on the podcast, but not perhaps as much as it deserves in terms of how central it is to how we manage safety.

David: Yeah, and I think from my recollection, you have had quite a close interest and involvement in the risk literature, particularly in your role at York for a while. I don't know if that makes us a little bit hesitant to talk about it, because most of the risk papers that get thrown around, I don't know if you really think that they're great pieces of work, maybe, but here we have it on the podcast today.

Drew: Yeah, I think it might be accurate to say it's a topic where I've got strong opinions. We try to keep this podcast as a balanced statement about the evidence rather than a platform for my own contested theories. That might be a reason why we've shied away from it a bit.

David: Yeah, and I went back through the list though. We've talked about risk matrices, that was a very early episode. We've talked about when can we trust expert opinions on risk. We debunked a little bit of risk compensation theory. We talked about the subjectivity of technical risk assessments.

Our most downloaded episode, one of the ones in the 90s, was on your research, or the university's research, on Take 5s. We've talked about risk, but today, I think we're going to have a fun discussion about how we as human beings evaluate and make decisions around risk.

Drew: Yeah. I think it's going to be interesting, because I've heard a lot of people say, we evaluate risk all the time, but then we also have this specific activity that we call risk assessment. Those two things, humans evaluating risk constantly and doing risk assessment,  can't be totally separate. There are interesting questions about how much they're the same activity, one just more formal than the other, and how much the two influence each other when we're going about assessing risk, either in our everyday lives or as part of a business.

David: Yeah. Drew, in episode 101, which I didn't mention before, we talked about when incidents should cause us to question risk assessments. We reviewed one of the papers from John Downer in relation to the Fukushima disaster. We talked a little bit about the limitations of technical risk assessment and how it may, I don't know if misrepresent is the right word, but not tell the full story about risk.

Today, we're going to discuss a paper which expands on the role of feeling and emotion in decisions about risk. I thought we might start by introducing the paper, because it's a bit of a theory-style paper with a selective literature review component as well. Drew, do you want to introduce the paper?

Drew: Okay, the paper is called Risk as Analysis and Risk as Feelings: Some Thoughts about Affect, Reason, Risk, and Rationality. The authors are Paul Slovic, Melissa Finucane, Ellen Peters, and Donald MacGregor.

David: Drew, when we thought we'd do a paper on risk, and I think you said to me a couple of weeks or so ago that just about anything by Paul Slovic would be a good place to start, I looked him up a little bit because I wasn't overly familiar with his work; risk isn't a space where I'm very familiar with the literature. He seems to still be a professor of psychology at the University of Oregon, he's had a 65-year academic career, and he's the president of an organization called Decision Research. I'm really curious as to why he seemed to be the go-to author on this topic from your suggestion.

Drew: Yes. Slovic basically founded one of the ways of looking at risk assessment. Arguably, if you go back in history through the field, some of the worst theories about risk, the most naive, the most discredited, the most today seen as madly over simplistic, those ideas were founded by Paul Slovic. But then you look at who debunked those ideas, who moved the field on, who critiqued those ideas, and found more sophisticated ways of thinking about it, half of those papers are written by Paul Slovic too. He's been around long enough that he's grown and shaped this field of looking at how people make risk based decisions from its infancy to where it is today.

David: Yeah. He started his doctoral research in 1959. This paper we're talking about today was published in 2004 in the journal Risk Analysis. From my understanding, that's quite a credible journal. One of the things that we haven't spoken about on the podcast yet, I don't think, is that this article itself has over 5000 citations. I thought we might spend a minute or two sharing with our listeners what it might mean when a paper has a large number of citations like that.

Drew: Citations don't tell us a lot. Literally all a number of citations tells you is how many other papers have referenced it, and there could be lots of reasons for that. Some papers just become popular. They have catchy titles, or the author does a TED talk and lots of people cite them. Sometimes they're just very well framed, so they're the first thing that comes up whenever someone searches for a topic or searches for a citation to back up a particular point.

Some papers are just things that get taught in textbooks. They're the first thing everyone thinks about associated with an idea from their undergraduate studies. Once you get a paper that's got citations in the thousands, that's usually either a textbook that everyone uses, or it's a foundational idea in a field.

This particular one, the paper itself doesn't really introduce any brand new foundational ideas. I suspect that this is the paper that comes up when everyone says, Paul Slovic must've written something about this. This is the citation that people locate first. It encapsulates  a broad strand of thinking that Slovic's talked about across quite a large number of papers. This one is just very representative of that way of thinking about risk.

David: Yeah, and there's quite a lot in here. There are actually quite a lot of ideas that we'll talk about today inside this paper. It also references a lot of other work, and a lot of that referenced work is by Paul Slovic and Melissa Finucane as well. They're both drawing a lot of their own empirical research into the ideas we'll talk about today.

Drew, we'll just work our way through the paper from here. The way the paper starts off, the authors talk about risk being confronted and dealt with in three fundamental ways by people. Risk as feelings: our fast, instinctive, intuitive reactions to danger in the real world. Risk as analysis, where we bring logic, reason, and science, data, or facts if we want to use that word, to bear on hazard management: how can we understand and mitigate these risks? And the third, risk as politics, which Drew will come back to.

There's a bit of a talk through human evolution in the introduction and just saying, look, people have evolved to where they are by having both the instinctive ability to understand and evaluate risk, as well as the cognitive ability to think through, weigh up options, and make a judgment.

Drew: Yeah. David, if you're willing, it might be worthwhile to go into just a little bit of the history of this particular work as well, and how Slovic and the authors before him originally started framing this problem, and how we got to where we are now. The earliest work on risk perception was by Baruch Fischhoff and then by Paul Slovic.

At that time, they were both coming from this idea that experts are very good at assessing risk and other people are very bad at assessing risk. What we need to do is explain why lay people are bad. It was very naive work, because it assumed that the experts were right. It was actually saying, how come experts agree with experts and other people disagree? There must be something broken about other people.

Since that time, it's moved on to this much more sophisticated idea that there may not be a perfect representation of any risk. Some of the experiments that they'll talk about in this paper have a correct answer as to what the risk is.  But for most real world risk problems, there isn't a correct answer. You can't really say that some people get the answer right and some people get the answer wrong.

We've moved on to more of this idea that how people perceive risk is what the risk is. It's just a question of why some people experience different risks in situations that might be the same, and why people come to a different result when they use their intuition compared to when they use their maths. Not with the assumption that the maths is right; often it reveals that there's a gap in what the maths is including or thinking about.

We're getting away from originally trying to explain why people are biased, and now trying to understand more about how these decisions are made. We have these three framings: thinking about risk as an individual gut reaction or feeling that is highly shaped by our psychology, and presumably therefore by how that psychology evolved; a more scientific, or at least more analytical, way of looking at risk that is about collecting data and processing that data; and a more political way, which is about trying to get to the actions that we want, and having negotiations about risk as part of the way of making a decision that people want to make.

David: Yeah. You can infer the tension in the introductory discussion, like you mentioned, between what is the actual or expert-decided risk, and the variation there is between individuals for that same risk. I think that tension plays out in saying, because I want us to also be practical in this podcast for our listeners, that for most of the history of human evolution we haven't had these quantitative, reason-based methods for making decisions; they have been largely instinctual decisions.

The foundation of our modern risk management is all built on processes, calculations, and criteria. The introduction of this paper is trying to make sense of how these two things coexist, and whether we have forgotten the importance of the emotional and feeling component of risk.

Drew: Yeah. One of the threads throughout this paper is that all of those apparently scientific and analytic processes we go through, all of them include various steps where we have humans making judgments. Those humans making judgments, just because they're part of an analytical process, it doesn't mean they're not applying that same brain that has evolved with risk as a type of feeling. It doesn't mean that those judgments themselves are scientific or defendable from the analytic basis.

David: Drew, maybe we'll talk about the idea of dual processing theory and two modes of thinking, and then we might introduce what the authors propose and title the affect heuristic. On the two modes of thinking, it was really interesting reading this in the paper, because as soon as it started talking about thinking and feeling, I went in my mind straight to Kahneman and Tversky's work on thinking fast and thinking slow.

Listeners might know that Kahneman won a Nobel Prize for economics in 2002 for how he brought psychology into the field of economics and economic decision making, yet the authors didn't really cite much of their work. They cite a lot of Epstein's work, and they didn't even mention other theories like hot and cold cognition. There seems to be a number of different theories out there about this dual processing cognition. How do we process information with emotion, and how do we process information with reason or logic?

Drew: Yeah, I think part of that is just timing, when this was published versus when some of that other work became popular. Also, the dual process theories, as I understand it, are much more contested within psychology than they are in the popular literature. Don't take Kahneman and Tversky's thinking fast and thinking slow as settled science. It's a way of thinking about it, but actually, the more you dig into it, binaries like that are quite rightfully viewed with suspicion. It's always more complicated than a simple this or that.

David: A lesson for me in combining a Nobel Prize winner with a New York Times bestseller and placing a lot of credibility in it. But there is a broad and evolving literature base here about how we as humans process, store, and retrieve information.

Drew: We do a little bit of credentialing on this podcast, talking about authors, their backgrounds, and the journals they publish in. Just a reminder that if you think a Nobel Prize is an indication of critical thinking, then I have some megadoses of vitamin C to sell you, from a Nobel Prize-winning chemist who knows nothing about medicine.

David: This idea, if we break it down, is that as humans, we receive, interpret, store, and retrieve information in two ways in this dual process. One way the authors refer to is that as we go through life, we hear things and we experience things, and we store them as images and reference points in our mind, and we attach emotion or feeling to those things. We'll talk a little bit about those emotional types.

As we go forward, when we see, experience, or hear something that relates to those reference points, we retrieve them. Then, based on our experiences and the emotions we felt and attached to those experiences, we'll make judgments about what we do moving forward.

I guess the other side of the coin is that, since we haven't experienced everything, we will come across information where we have a more dissociated way of dealing with it, one that's much more logical and reason based. We'll talk about some examples of that type of thing as well. These processes are both going on at the same time, and throughout the paper today, we'll talk about how they both play a role in the way that we evaluate risk.

Drew: Yeah. The claim at its most extreme is this idea that there are almost two modes of making decisions. There's one which is highly dissociated and abstract and fits the cold, rational model of decision making, and one which is much faster and intuitive. It's not a full distinction, because human brains don't dissociate to that extent, but you can definitely look at different types of reasoning that people apply in making decisions and different ways that people make judgments. Certainly there is a way which is much more able to be articulated as a set of steps, a set of symbols, and a set of propositions that are logical in the mathematical sense, not necessarily true, that can be stated, measured, and checked.

David: The authors say that we know from psychology research that this experiential mode of thinking and this analytic mode of thinking that they're both continually active and interacting with each other. The authors of this paper characterize this as the dance of affect and reason. Do you want to present the idea that the authors propose of this affect heuristic?

Drew: Can I start just with what I think is a quote that you've got here? They're talking about this history of the development. They say, "Subsequently, analytic thinking was placed on a pedestal and portrayed as the epitome of rationality." Affect and emotions were seen as interfering with reason. That's the more naive historical view of it that they're moving away from. You could argue that a lot of what we try to do as we try to be more logical and to be more systematic is we try to strip out some of that affect and emotion.

It's one of the things that we do in scientific research. We often try to remove the possibility of human judgment and human wishful thinking affecting our experiments by designing them with blinding and such, so that the only thing that matters for the outcome is the process. The human wishes and evaluations are removed as much as possible. But a lot of our instinctive reactions to things are directly shaped by what would be an appropriate way to process information about risk.

In some senses, you could argue that a lot of human reflexes are in fact optimized for making good decisions without slowing it down and stepping it out. What is the best thing to do when a ball is thrown at your head? The best thing to do is absolutely not to calculate the trajectory of the ball and to think about the most appropriate thing to do. You're probably better just letting your reflexes take over, because they're probably pretty good at getting out of the way.

David: Yeah. I'm very cautious of the words that I choose to use in this episode, but the reality is that as humans, we're not computers just storing and filing data. The affect heuristic is that the information we receive, the way we interpret it, the way we store it, and the way we use it going forward has emotion attached to it, which I guess a computer wouldn't have. That's the way I was thinking about it: as we store things, use things, and think about things in relation to risk, both instinctively and logically, we've got a lot of emotion tied up in that.

Drew: Yeah. If you want to think of the human brain as some biological computer, it's got to have a prioritization system for the memories it brings up. We've got a lot of ways of judging probability that are really about how strongly we feel about things and how salient those things are. The more something scares us, the more we're likely to remember it, the more we're likely to see it as highly probable. The more we have directly observed something, the more we're likely to think it is highly probable compared to a story that is just told to us.

The original way information is presented to us is going to shape the way future information is processed and filtered. Someone who has experienced something really badly is going to make negative associations with anything that links into that and is going to be systematically more fearful of things associated with it. There are all of these processes that are linked to feeling, but also to memory, prioritization, and what we see as important information in guessing probabilities. They're going to affect things when we start to make judgments about how likely things are.

David: Drew, what I also liked about this paper is that, although they draw it in selectively because they were obviously talking about their specific affect heuristic, they put in a lot of empirical research. They also, within the paper, actually described what each study was and what it found, which made it a lot easier than going to the citations, finding the articles, and then reading those referenced articles.

I want to talk about some of that research as we go through here, because I personally found it quite fascinating. On this empirical support for their affect heuristic, do you want to talk a little bit about this idea of how we judge risk against benefits, and how we evaluate the risk based on the strength of the positive or negative affect associated with the decision?

Drew: One of the big things that people were trying to figure out in the late 70s, 80s, and a little bit into the 90s was just why do humans seem to be so inconsistent in which risks they're willing to accept and not accept. It's confusing because the willingness to accept a risk shapes what we say we think the size of the risk is. If we want to accept a risk, we're likely to think that the risk is smaller as a result, not as a cause of wanting to accept the risk.

If you really are afraid of having a nuclear power station, and you under no circumstances want to have a nuclear power station, you are going to tell me that the probability of a nuclear power station blowing up is higher. If for other reasons, you want to have a nuclear power station, you're going to assess that risk as lower. Whether that's something that people genuinely believe or whether it's how we communicate is quite hard to disentangle.

It's something that we were trying hard to explain. Why is nuclear power so scary to people? By anything that we can quantify, it shouldn't be. The likelihood and severity of nuclear accidents aren't nearly as bad from a purely scientific point of view as they are from a social point of view. Even in making a statement like that, I'm stepping a little bit into the experts-are-good-and-everyone-else-is-irrational mindset. Part of the problem with the research is we all have our own standpoints on that.

We try to come up with things that make people feel icky about certain risks, some of which are fairly easy to characterize. The more we feel in control and the more we feel that it's our choice, the more we're willing to accept a risk, and the lower we're going to think that risk is. That's commonly used to explain why people are willing to drive cars. People should be terrified of automobiles, and they're not. We think the reason is because people have a much higher belief that they are in control than they actually do, and it's a risk they take on much more voluntarily, or at least they think it's voluntary.

People are more likely to be scared of things that they think are unfair, which is just weird, but true. It's empirically supported. Then there's a whole category of things that can't be explained by all of those factors that get lumped into this idea of dread risks. Some types of risk give off this body horror in particular; that's used to explain things like nuclear power, that nuclear is scary. Anything that affects your DNA is scarier. Anything that seems chemical is scarier.

There are these just socially scary concepts that make some risks get driven up without any clear understanding why, except that people who are familiar and comfortable with science and see these things as normal give very different assessments of the risk to people who are unfamiliar and just see these things as the unknown. It's almost the monster you can't see is scarier than the monster that you can see, because special effects are never as good as our own imaginations.

David: Yeah. Drew, there's some work that came after this paper by David Ropeik. He steps through, I think, 14 different emotions or individual factors which influence the judgment and perception of risk. For the dread one, I think the example is something like being buried alive, just these things that we have such a visceral reaction to that it makes them feel more likely to happen than they actually are. That may not be the greatest example of a risk decision, but drowning, for example, is another one which people really fear more than other ways of being fatally injured.

We know emotion from that discussion there plays a big role, particularly whether we're avoiding a negative consequence, or we've got the opportunity to receive a positive experience or a positive outcome, is going to change the way we evaluate the likelihood of it happening or the probability.

The next section of the paper goes on to talk about this idea of probability and risk as a numerator. It's almost saying that we see the chance of something happening, and we don't worry as much as we maybe should about the whole range of things that could happen, or what they call the denominator. Do we want to talk about a few of these studies? Because these were really fascinating.

Drew: Yeah, these are fun. There are multiple interpretations for these results. Just as we go through them, I would like our audience to bear in mind that the most obvious or salient explanation for most of these is that people are really bad at math. There's also always just this underlying suspicion that there's missing information, and people have an intuition that what they're seeing right in front of them is not the whole story. That intuition is wrong in a carefully designed experiment, but in a real world risk situation may in fact be more likely to be right. These are very controlled situations where there is no hidden information. David, do you want to tell us about the picking a red jelly bean study?

David: Yeah. Like Drew said, these were designed to be carefully controlled studies. Participants were given the option of choosing a jelly bean, a small candy or lolly, out of a bowl. In one of those bowls, there were ten beans: nine of them were black, and one of them was red. In another bowl, there were a hundred beans: ninety-three of them were black, and seven of them were red. Participants were asked which bowl they would like to choose from. If they chose a red jelly bean, they would get one dollar.

People preferred to choose from the bowl with seven red jelly beans in a hundred over the one in ten, because they felt that, because there were seven red jelly beans in there, they were more likely to get one of those seven. Even when it was pointed out to them that mathematically they were giving up a 10% chance in favor of a 7% chance, they still wanted to choose from the bowl with the seven red jelly beans in it.
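As a quick sketch of the arithmetic behind the jelly bean result, using the bowl sizes and counts described above:

```python
# Probability of drawing a red jelly bean from each bowl described in the study.
small_bowl = {"red": 1, "total": 10}
large_bowl = {"red": 7, "total": 100}

p_small = small_bowl["red"] / small_bowl["total"]   # 0.10
p_large = large_bowl["red"] / large_bowl["total"]   # 0.07

print(f"Small bowl: {p_small:.0%} chance of winning the dollar")
print(f"Large bowl: {p_large:.0%} chance of winning the dollar")
# Participants tended to prefer the large bowl, trading a 10% chance for a 7% one,
# apparently because seven winning beans "feels" like more chances than one.
```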

Drew: Yeah. My immediate instinct says people are bad at math, but the real explanation is that there's got to be something else going on here. Even when the math is explained, they still would rather go with the intuition than with the math.

David: Yeah. This idea is that seven is the numerator and a hundred is the denominator. People see that the seven is bigger than the one red jelly bean, so they overweight the numerator. For another example, they went to psychiatrists and looked at decisions about discharging psychiatric patients based on whether they thought they were going to reoffend within six months.

When they framed the information for those psychiatrists as, of every 100 patients with the condition and situation of this person, 20 will reoffend, we know that's the same as a 20% chance, one in five. But presented that way, as 20 out of every 100 patients, 41% of psychiatrists would refuse to discharge a patient presenting those conditions, because that's 20 real people who have reoffended.

When they told those psychiatrists that this person had a 20% chance of reoffending, only 21% refused to discharge. It's the exact same statistic, but actually having 20 offenders thrown at people creates this real image of reoffending, these 20 violent attacks occurring within the next six months. When people are told it in a way that says there's a 20% chance, they just see a 10% or 20% chance as being quite small, and there's no real emotion attached to that percentage because it's not 20 real cases. Do you want to share your thoughts on that, Drew?

Drew: Yup. This thing crops up a lot in risk communication. There's something really interesting going on there that is difficult to understand and difficult to unpack. Basically, for this to make sense, we're telling people 20% twice, in two different ways, and as it goes into their brain, they're interpreting it as different amounts. That really drives home that things that seem very precise, like numbers on a page, are still not precise communication, because with communication it doesn't just matter how you send it. It doesn't matter what the sender intends or what the message is; it matters how it's received.

Experiments like this are proof that the number 20 is not a stable concept when it's out in the ether before it goes into someone's head. In their head, it could seem like a certainty of 20 people reoffending, or it could seem like just a very low number that's not going to crop up because it's really low.

David: Drew, based on that, I'm convinced, as someone in Victoria, Australia, who was locked down during Covid for 270-and-change days, that the way Covid case numbers were presented to us mattered. Thinking about it, it feels very different when you say that there are 700 cases of Covid in Victoria, as opposed to saying that roughly 0.01% of a population of over five million have Covid. It's not 700 people anymore, it's just a negligible percentage that we don't really fear or worry too much about.

Drew: Yeah. Often we don't consider the base rate, which is related to this when it comes to risks. How many people would have been going to hospital if not for Covid? It's very different when you say, oh, an extra 700 people or extra 1% than if you say, well, normally the emergency room would handle 50 people. This is driving it up to 500 people. Presenting the same information in different ways triggers different emotions, different images, different memories, and different analogies, all of the different ways we make sense of and react to risk.
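A minimal sketch of the two framings being contrasted here, using the rough numbers mentioned in the conversation (700 cases, a population of about five million, and an illustrative emergency department going from 50 to 500 presentations):

```python
# Same quantity, two framings: an absolute count versus a percentage of a base.
cases = 700
population = 5_000_000

print(f"Absolute framing: {cases} cases of Covid in Victoria")
print(f"Relative framing: {cases / population:.3%} of the population has Covid")

# Drew's base-rate point: an 'extra 700 people' sounds small until it is compared
# with what the system normally handles (these two figures are illustrative only).
normal_ed_load = 50
new_ed_load = 500
print(f"Emergency load: {new_ed_load / normal_ed_load:.0f}x the normal number of presentations")
```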

David: Which impacts our assessment and evaluation of the probability or likelihood of that thing happening. The authors go on to say that the way we present the numbers and the information matters for the emotion that people feel. They then say that the storytelling, the narrative we use, has an even bigger impact. I think that might make intuitive sense to people, that the way the story gets told would evoke emotion as well.

There was a bunch of studies here. One of the ones that I liked was just asking people about how frequent or likely certain causes of death were to happen, highly publicized causes like traumatic accidents, murders, natural disasters, cancer, and these types of things, as opposed to less publicized causes like diabetes, stroke, asthma, tuberculosis, and those things. One of the things that the authors were pointing out is that people have far different levels of information. The way that they receive the stories around that information are very different to what might be the real prevalence or likelihood of things happening in society.

Drew: Yeah. I don't think they mentioned this at all in this paper, but that gets used to talk about how scared people are of shark attacks versus how scared they are of tripping over in their own house and hitting their head. If you are the person who goes to the beach and has a thought, maybe there's a shark out there, or you turn on the news and shark attacks seem horrible, scary, and frequent, then you should be terrified of your bathroom, because the relative risk really says that the bath, those tiles, and that edge on the way into the bathroom are out to get you far more than the shark is. One of them is body horror and newsworthy, and the other isn't. It changes how we see the risks.

David: In this paper, they bring in a bunch of other existing theories of bias and heuristics, like the availability heuristic, where we have to rely on the information available to us, and the information is never complete. Again with risk, and we'll talk about practical takeaways at the end, what we're starting to realize as we go through this paper is that people have very different experiences with certain risks and very different emotions attached to those experiences. They've also got very different information, with different emotions attached to that information.

I think the interesting one is the example that also is in this paper where people say, I'm in favor of something like nuclear power as long as the nuclear power station isn't next to my house. The risk is low enough for it to be next to someone else's house, but the risk is different when it's next to my house.

Drew: The fun thing is they're not being hypocritical because if you get them in experiments to assess the risk, they will genuinely think, if you compare, that the power station is less likely to blow up when it's next to someone else's house than it is to their own. It's not just being selfish. Rationally, they see the risk as different. 

David: On the way you mentioned rational there, that word, the authors in this paper actually don't use the label of rational when they're contrasting feeling with something else. In this paper they change it to analytic. They do that because they say that it's probably a bit unfair on humans to say that using feeling and emotion isn't a rational thing to do.

Drew: Yes. What I was referring to there was not the analytic. What I should have said is consistent, which is something we often tie to rationality but is not the same thing, and I was misusing by labeling it as rational. People can be rationally inconsistent and consistently irrational. It's better just to refer to the process of stepping through things as analytic and consistent when there's two things that mathematically should be the same and are different. We should just call it inconsistent rather than call it irrational.

Thank you for calling me out on that. It's something that they are very careful of in writing this paper. I bet they did three drafts and picked up themselves multiple times to avoid this, because we always get into that habit of talking about rational versus irrational.

David: Yeah. In some of Kahneman's work, there's a good paper of his titled Rational Choice and the Framing of Decisions. I think the authors here were saying that rational is widely used in the literature to refer to this deliberate, objective, logical, reason-based, analytic way of thinking, and they just felt that was suggesting that using emotion and feeling wasn't a rational thing for people to do. Given that they are proposing a theory about the importance of risk as feeling, they may have wanted to claim that word back for their side as well.

Drew: A lot of what they're doing in these experiments, and we'll talk about a few more because they're fun, is that they're comparing people's choices under different situations that should be mathematically equivalent. They're noticing that inconsistency and then say, okay, because there's this inconsistency, therefore there must be something else going on in the brain other than straightforward mathematical processing of the probabilities.

David: Drew, let's talk a bit more. There's a section in this paper after what we just talked about, about probability and risk as a numerator, with another heading on proportion dominance. There's a long description of research by Slovic himself that concluded that people give the probability of something happening somewhere between five and 16 times more weight than the outcome or consequence.

If we think in a typical risk assessment about likelihood, consequence, risk, the research is saying that people might weight the likelihood component of that equation five to 16 times more than the consequence or outcome component on that. They've got some good studies which show why that might be the case. Drew, do you want to say anything before we talk about some of those studies?

Drew: Can I leap right into my favorite one of these studies, which is the 150 lives one?

David: Okay, and then I'll come back and talk about the nine-dollar one.

Drew: Yeah, and then you can talk about the jelly bean and the five cents. This is something we do in safety all the time. Have a think about this. You could have a chance of saving 150 people, or you could have a certainty of saving a percentage of those people.

Let's take as an example, you could have a 98% chance of saving 150 people, or you could have a certainty of saving 98% of the people, and a certainty that some of them would die. Using standard engineering risk mathematics, we should multiply the likelihood by the outcome and come to the exact same result each time, which is 98% of 150. People have really quite a strong preference for one rather than the other, and it's a strong preference for saving 98% of the lives rather than the 98% chance of saving all of the lives.
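A minimal worked version of the equivalence Drew is describing, where the two framings have the same expected number of lives saved:

```python
# Two framings of the same safety intervention, as discussed above.
people_at_risk = 150
probability_of_success = 0.98

# Framing A: a 98% chance of saving all 150 people.
expected_lives_a = probability_of_success * people_at_risk

# Framing B: a certainty of saving 98% of the 150 people.
expected_lives_b = 1.0 * (probability_of_success * people_at_risk)

print(f"{expected_lives_a:.1f} vs {expected_lives_b:.1f}")  # 147.0 vs 147.0
# Identical expected value, yet participants consistently preferred the
# "certain 98% of lives saved" framing over the "98% chance" framing.
```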

David: Yeah. I think the interesting thing here is that one group of people were told they could have a certainty of saving 150 people, and different groups were told they would have a 98% chance of saving the 150, and it steps down from there in a chart. The thing that was really interesting is that ultimately, people would still rate having an 85% chance of saving 150 people more highly than simply saving 150 people, because there's no reference point for whether that 150 is a good number.

Drew: Yeah, you can just keep sliding this till people get really quite inconsistent.

David: Yeah. What the authors are saying in this study is that if you just tell people you can save 150 people, people don't know if that's 150 out of 15,000, where you're only saving 1% of people and there's no point in having that equipment, as opposed to a guarantee of saving 150 out of 150 people. Without a range, a scale, or some bookends to know whether a number is good or not, people will say, well, I know that 98% is a good number, so we should definitely do something if it gives us a 98% chance of something happening.

Drew: Yeah. This is the decision that we expect people to make during risk assessments all the time. This experiment shows that exactly how we phrase things, or the order in which we offer people things during a risk assessment workshop, clearly and consistently changes the answers that people give, to the point where they would change their purchase decision about which equipment to buy, or whether or not to buy equipment, based on the order and the way the information is presented.

David: Drew, let's talk about the second study then. This is really interesting. I think the point is that to know whether something's good or bad, we need a comparison. It's better or worse; it's not necessarily just an absolute good or bad.

People were given a chance of winning $9: a seven out of 36 chance, which is roughly 20%. They were asked to rate how attractive this gamble is. If you were given a seven out of 36 chance to win $9, how attractive is that gamble, with zero being a really bad gamble to make and 20 being a fantastic gamble to make? The average rating was approximately nine out of 20 for a 20% chance of winning $9.

When they changed the study so that you've got a seven out of 36 chance of winning $9 and a 29 out of 36 chance of losing five cents, which is like saying there's an 80% chance you'll have to pay us five cents but a 20% chance that you'll win $9 from us, the attractiveness of that gamble rises to 15 out of 20. Drew, that's over a 50% increase in attractiveness when the situation is actually worse for them. They had no chance of losing in the first case, and now they've got an 80% chance of having to pay five cents, yet they're still 50% more attracted to that particular gamble.

The authors conclude this is because the $9 seems so big compared to the five cents. Suddenly that $9 has a comparison point, and we go, ooh, that's a really great opportunity, let's go after that. Interestingly, when we think about what we do in safety risk management, a consequence description like three injuries or one fatality doesn't have any outcome reference point either, and that may be changing the way that we think about likelihood quite a bit.
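As a rough check on the numbers in this study, here is the expected value of each version of the gamble; the second one is mathematically slightly worse, yet it was rated far more attractive:

```python
# Gamble 1: 7/36 chance of winning $9, otherwise nothing.
ev_gamble_1 = (7 / 36) * 9.00                      # about $1.75

# Gamble 2: 7/36 chance of winning $9, 29/36 chance of losing 5 cents.
ev_gamble_2 = (7 / 36) * 9.00 - (29 / 36) * 0.05   # about $1.71

print(f"Gamble 1 expected value: ${ev_gamble_1:.2f}")  # rated ~9 out of 20
print(f"Gamble 2 expected value: ${ev_gamble_2:.2f}")  # rated ~15 out of 20
# Adding a small, concrete loss gives the $9 a comparison point, making the
# gamble feel more attractive even though its expected value is lower.
```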

Drew: Yeah. One way to sum up this type of research is the idea that it causes people to not pay enough attention to the size of the outcomes, at least not compared to what the straightforward math says they should, and to place too much attention on the probability. There are other situations in which people seem to almost ignore the probability, going into a binary mode of thinking, almost as if every outcome were 50%.

That happens with really strong outcomes like winning the lottery or having cancer. People seem almost insensitive to changes in the probability, because the focus on just how bad or how good the outcome would be dominates their assessment of the situation. It's not that people consistently weight one thing more than the other. It's that under some circumstances people will really focus on the probability, and under other circumstances they'll be almost completely insensitive to it.

David: Drew, call me bad at maths, and I'm sure you possibly might, but I am one of those people who plays the lottery, paying $10 a week for the opportunity of retiring early, even though obviously the odds are about a one in seven million or one in ten million chance of winning it, or something like that.

Drew: Yes. I believe you're drastically overestimating the probability there, both intuitively and in your statement of the maths. One thing that these things show consistently is that you're not making that choice really based off the probability. You could rationally describe what the probability is, and it would be unlikely to change your decision making. But we would also find that if we just cold-called you out of the blue and got you to assess the probability, as a lottery player, you would give an intuitively different answer to a non lottery player about what the probabilities are.

This idea that any of us are rational or any of us are good at the maths, the evidence shows quite the contrary: it's only under very precisely defined circumstances that we can go fully into the analytical mode and make correct decisions. Even then, there are famous experiments that show that experts in maths sometimes get the maths wrong when it comes to probability, just because the intuitions are so dominant in working out these things.
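To put rough numbers on the lottery intuition, here is a sketch of the division-one odds for a generic 6-from-45 draw; this is an illustration of the combinatorics, not the specific game David plays, and larger jackpot games have far longer odds again:

```python
from math import comb

# Odds of matching all 6 winning numbers in a single 6-from-45 game line.
combinations = comb(45, 6)
print(f"1 in {combinations:,}")   # 1 in 8,145,060

# At $10 a week, the expected number of division-one wins per year is tiny.
games_per_dollar = 1     # hypothetical assumption: roughly one game line per dollar spent
weekly_spend = 10
expected_wins_per_year = 52 * weekly_spend * games_per_dollar / combinations
print(f"Expected division-one wins per year: {expected_wins_per_year:.6f}")  # ~0.000064
```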

David: Drew, what else do we want to talk about before we do some practical takeaways? We talked a little bit there about where we're insensitive to probability with some of those outcomes, like cancer on the downside and the lottery on the upside. The authors then go on to talk about failures of this experiential system, its limitations, and how, although it helps us navigate life mostly successfully, it can also misguide us around certain types of risks. The authors are almost saying here that for some types of risks and situations, risk as a feeling is great. They suggest that there are other types of risks and situations, given the limitations of our experience as individuals or the information present, where risk as an analytic process might be more useful.

Drew: There are a couple of really important things that they draw out. One of them is that the ideas they're surveying here, and remember this is 2004, so people have had a lot longer since then for these ideas to become more current, are all levers that can be deliberately pulled to skew people's risk perception. It's not just that we have these tendencies; news organizations, marketers, and even people who are concerned about our safety and trying to make us safer can use knowledge of the heuristics we use to process risk to skew people's decision making one way or the other.

Correct me if I'm reading this wrong. I think they're making a case in this paper that when it comes to public policy, in order to have fair and consistent outcomes, we do need to deliberately push ourselves away from some of the intuitive risk-making processes, particularly not making those decisions as a society of people who are all doing it to each other and all subject to these same heuristics and the same pushes and pulls. If we're going to be consistent and fair as a society in policymaking, then we need to be conscious of people's heuristics and conscious of risk as affect, but we may want to deliberately distance ourselves from it as a way of making better decisions and better policy.

David: Yeah. This paper was, I guess, written in 2002-2003 in the US by American authors, just after the September 11 terrorist attacks. There's a section in this paper about how risk as feeling can help us with evaluating the risk of terrorism and so on. The thing they mention there to steer away from is this idea that everyone in the US should go and buy a handgun to fight back against terrorists. They point out that our analytic selves should heed the evidence that a gun fired in a house is 22 times more likely to harm the owner, a friend, or a family member than to harm an unknown or hostile intruder.

We've got this really strong feeling about the risk of terrorism and the need to have a gun in every house, or for every person to have a gun, and then the analytic statistics suggest that might not be a good thing to do. Those are the types of examples where they're saying that while risk as feeling is a critical, important, and useful part of risk evaluation, we don't want to ignore the analytic side of the equation either.

Drew: To the fifty percent of our US listeners who have just unsubscribed, thank you for joining us on the journey so far.

David: I didn't say whether they should or shouldn't. I suggested that they should leverage both sides of the thinking process around that, but please don't go.

Drew: It is an important ethical decision then for risk communicators. Is it better to push and use some of these affect heuristics, like storytelling, in order to try to persuade people of our view of the risk? Or should we be studying the best ways to communicate to give people the most accurate perception of the risk as we believe it?

Different people have made that decision in different ways. For example, the Intergovernmental Panel on Climate Change has put a lot of research into how they communicate the risk of different climate change events, studying, for example, whether it is better to say more than 50%, more likely than not, or probably: which language most successfully gives people the right impression. Or do you say, we're here to scare people about climate change, let's tell some stories, because we know that'll drive up the probabilities in people's minds?

David: Drew, do we want to talk about some practical takeaways then? I think the way that we frame risk and communicate risk, and I think we can talk a little bit practically about what our listeners might be able to do in that space, is there anything you want to say before we do that?

Drew: Nope, let's move on to takeaways.

David: Starting with that idea of storytelling about risks to invoke emotion and attach affect to the information, when we think about our risk descriptions and the narratives that we use to talk about risk, I think it's really important that we pay attention to that. What's interesting, just reflecting on preparing this episode, is that we have this tension in risk evaluation between high probability, low consequence events like recordable injuries and low probability, high consequence events like major accidents. One of the things that we try to do in safety is to get attention on those low frequency, high consequence events.

This paper is suggesting that we perhaps might put more weight on the probability side of the equation. It's also maybe suggesting that we might have more experiences as individuals with people having minor incidents. We might have strong emotion tied to those things, because someone lost an arm or was never able to do something again in their lives.

It's interesting when you're involved in an organization where a leader in that business has actually been close to a fatal accident or a multiple fatality event, and how it changes, as a leader, their evaluation of the chance that something like that could happen again. In how we talk about injuries and major accident events, we need to think about the experiences that people have had, the emotions they've got attached to them, and maybe think about whether we can attach emotion to some of those risk descriptions.

Drew: Can I grab the second one, David? Our job in safety is not to make people scared about risk or to make people think that safety is more important than it should be. Our job is to clearly and accurately communicate what we think the risk is. That's not our whole job, but with respect to risk, our job is to clearly and accurately communicate what the risks actually are.

Simply thinking that writing things mathematically or expressing them in one particular way will successfully communicate that is misunderstanding a large part of the job. How we present probabilities, how we talk about risk, how we frame it, what colors we use, what shapes the tables are, all of those things are going to affect whether our listeners end up with the same understanding of the risk that we're trying to communicate. It is really important not to just think that we're being objective in what we write and say, but think about how we're describing things and how we're presenting things if we want to accurately communicate what we think the risk of various things are.

Just keep in mind that the difference between 20% and 20 out of a hundred people can sometimes be the difference between a misleading communication and a clear communication. People will act differently in those different environments.

David: Yeah. I think one scenario, and I don't know how you feel about this, is that in an organization I was involved in, we needed to bring a bit more attention to driving and the risk of motor vehicle incidents. Even just looking at the background risk, somewhere between one in every 10,000 and one in every 20,000 people per head of population, in Australia at least, is killed in a road accident every year. You could say, okay, there's a 0.01% chance of something like that happening. But when I took the number of employees, I said, look, if we don't do anything above and beyond what the general public does in terms of controls for driving, then we are going to have one employee killed every two years. That's just going to be what happens.

Going from something that's a 0.01% chance of happening, a one in 10,000 or one in 20,000, to saying we're going to have a worker killed every two years on average very much changed the way, as an executive group, we were talking about that risk and the urgency to do something about it. I wouldn't have necessarily considered it unethical, because I was just presenting information.
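A small sketch of the reframing David describes, assuming a hypothetical workforce size (the episode doesn't say how many employees were involved):

```python
# Background road fatality risk quoted in the conversation: roughly 1 in 10,000
# per person per year (an Australia-wide figure used loosely here).
annual_fatality_rate = 1 / 10_000

workforce = 5_000   # hypothetical workforce size, not taken from the episode

expected_fatalities_per_year = workforce * annual_fatality_rate   # 0.5
years_between_fatalities = 1 / expected_fatalities_per_year       # 2.0

print(f"Individual framing: a {annual_fatality_rate:.2%} chance per person per year")
print(f"Organisational framing: one employee killed roughly every "
      f"{years_between_fatalities:.0f} years, on average")
```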

Drew: Yeah, and you're just making a choice of communicating the exact same mathematical reality in two different ways. 

David: Yeah.

Drew: Another common strategy there is to do it by comparisons: to say, we're worried about this thing, and our risk of this other thing is twice the thing that we're already worried about. That's a good way to communicate. We're panicking about hand injuries, and our chance of having someone with a serious back injury is three times that, rather than just presenting the raw statistic of the likelihood of the back injury.

David: If we're worried about the sharks, we should be more worried about the bathrooms?

Drew: Panic about the bathrooms, David. The next one to talk about here is the precautionary principle. The way I read this is the idea that if our intuition is telling us something different from the technical risk assessment, then that's not a reason to override our intuition with the analytical mode of thinking. Sometimes the analytical mode is more accurate, but it could be that the intuition is alerting us to things that haven't been properly considered in the technical risk assessment.

Our intuitions can lead us astray, but they can also be very good when it comes to risk. We really shouldn't ignore them, particularly when the two things are telling us different things. That's something to explore, not a reason to override one with the other.

David: Yeah, that's how I read it in the paper. They only put a very short sentence in about the precautionary principle, but the way I interpreted it in that context was about testing everything that comes out of our analytical risk process against the question, does this feel right? To your point, it's not so much for one to override the other; it's to combine the two and ask, do we need to do more deliberation or more thinking about this?

Drew: The final takeaway, David?

David: Yeah, the final takeaway. I thought about organizational risk assessment and evaluation processes. Like you said at the start, I think a lot of organizations still try to take the human element out of the risk management process: let's have an objective, not a subjective, risk management process. I think in that process we lose the benefit of all of those experiences, feelings, emotions, and instincts that we have as people.

I suggest there are a whole bunch of questions that you could add to your risk assessment and evaluation processes. When you're talking about risk with people, ask them what experiences they've had with this risk in the past, how they felt about those experiences or situations at the time, how they feel about those situations now, and questions like, what do we feel is the right thing to do in relation to this risk?

Think about the questions that you ask in risk workshops, because one of the studies we didn't mention from the paper was about cigarette smoking. There are various studies, but generally across the board, 85% to 90% of people who are regular smokers want to stop smoking. Given the opportunity to go back and never start smoking in the first place, 85% or 90% of people say they would never have started, but that's not the feeling they had about the risk before they'd experienced it themselves. People think about risk in different ways through the experiences that they have with that risk.

If you were sitting in a room and no one had any real experience with the risk or had ever seen that type of incident occur before, you may miss an opportunity. If I were running a risk assessment workshop, I'd actually want to find people who have felt genuine emotion around that particular risk topic and get their perspective on the risk issue.

Drew: Yeah. Interesting. David, the question we asked this week was, is risk a science or a feeling?

David: I think the short answer is both, with this dual processing theory. As humans, it's likely to be, in my mind, more feeling based, while all of our safety risk management approaches in organizations generally assume that the best thing to do is the opposite of that.

Drew: Fair enough. I would not disagree. That's it for this week. We hope you found this episode thought provoking and ultimately useful in shaping the safety of work in your own organization. As always, join us on LinkedIn or send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com.