The Safety of Work

Ep.61 Is Swiss cheese helpful for understanding accident causation?

Episode Summary

Welcome back to another great episode of the Safety of Work podcast. Today, we dive into whether Swiss cheese is a helpful metaphor for understanding accident causation. We thought this would be a great topic to kick off the new year.

Episode Notes

The article we reference provides a historical account of the “Swiss Cheese Model”. Since there are many versions of this same diagram, we thought it best to look back through time and see the evolution of this particular safety model.

Quotes:

“He’s just trying to understand this broad range of errors and sort of work with the assumption that there must be different cognitive processes.”

“It was initially, sort of, only published once in a medical journal as an oversimplification of his own diagram.”

“The other critique is that the model lacks guidance.”

“‘I never intended to produce a scientific model’ is the worst excuse possible that an academic can give in defense of their own model.”

Resources:

Good and Bad Reasons: The Swiss Cheese Model and its Critics

feedback@safetyofwork.com

Episode Transcription

David: You're listening to the Safety of Work Podcast episode 61. Today we’re asking the question, is Swiss cheese helpful for understanding accident causation? Let's get started.

Hi everybody, my name's David Provan and I'm here with Drew Rae. We’re from the Safety Science Innovation Lab at Griffith University. 

Welcome to the Safety of Work Podcast. In each episode, we ask an important question in relation to the safety of work or the work of safety and we examine the evidence surrounding it. Drew, what's today’s question?

Drew: Today, we’re going to do a little bit of safety history, which seems a little bit appropriate for the start of a brand new—does it count as the start of a new decade when we’re going into 2021? We're going to be talking about an article which in turn provides a historical account of the Swiss cheese model, which is what James Reason is probably best known for in his contribution to safety research. I'd be surprised if any of our listeners don't have some idea of what we mean by the Swiss cheese model. I’ll just give a couple of quick visual metaphors for an audible medium.

The original picture is basically four slices of cheese, each of them with a number of holes, because it's Swiss cheese. There's an arrow going through all four slices where the holes have lined up, and the arrow is labeled hazards at the start and losses at the end. One of the things we're going to be talking about in the podcast, though, is that there are actually many, many versions of the same diagram, even if we just stick to ones that were produced by James Reason.

The one that some of our listeners might be more familiar with is the one that comes out of the ICAM accident investigation method, where there are four layers and they've got labels like: these are organizational factors, these are human factors, and these are technical factors. The other one that people might be familiar with is one that is explicitly labeled as defense in depth, where the slices are labeled as defenses. The accident is shown as something penetrating through the defenses from the hazard towards an accident.

We're talking about all of these variants today as the collective history of the Swiss cheese model. Before we start off, David: I'm sure a lot of our listeners have got opinions already, or they've had arguments for and against the model, but what are your own thoughts about the Swiss cheese model?

David: For most of my career as a practitioner, the Swiss cheese model was known and used in every organization that I worked in. From a practitioner's viewpoint, it makes sense. I never thought too much about it, but it looked at individual behavioral aspects, it looked at task aspects, it looked at organizational aspects. It kind of made sense, and it definitely aided conversations that I'd have about incidents in the organization.

A couple of things struck me, though, in preparing for this episode. I was surprised how widely the model was known. You could almost talk to a frontline supervisor, a contractor, or someone somewhere, and they'd just say the holes lined up in the cheese for this accident to occur. The penetration of the model into the normal safety vernacular in organizations was really, really widespread. Probably only a couple of other phrases or models are as widely known. That was my experience as a practitioner.

When I came to read more widely on safety and do more research on safety, I thought hang on a minute, there's a lot more to managing safety and to understanding accidents than what's in a model like the Swiss cheese model. They're my two opinions going around in my head, but neither really answers the question that we’re going to answer in the episode today about how useful the model is. Your thoughts, Drew?

Drew: To be honest, I can't really remember seeing the model used in an industrial context, except maybe appearing just as a sort of illustrative diagram in accident reports. The earliest explicit memory I have is seeing it as a diagram on a PowerPoint slide as part of a standard introduction to system safety course. 

What immediately struck me was this was a set of slides for a course that got presented over and over again by different people. I’d get to see what each person said when that slide came up on the screen. I was struck by how everyone had a different interpretation of why the slide was there. They’d just start off on their own story about what this said, and all of them were different.

For me, the Swiss cheese model has always represented an example of how folklore happens in safety. We have these ideas and everyone keeps reusing them, but no one really knows what the idea is. It's just a giant game of telephone, with slides copied or diagrams copied. Everyone creates their own interpretation, creates their own meaning from the diagram.

David: The Swiss cheese model, as I think many of our listeners would know, has a lot of detractors within the new view safety community, who label the Swiss cheese model as linear or nothing more than a cartoon. We'll talk about some of them in this podcast.

But like in episode 17, when I interviewed Carsten Busch under the title of "What did Heinrich really say?", in this episode we're going to situate the Swiss cheese model within the historical context of the 1980s and '90s, and talk about the substance underneath and surrounding the model, not just the model as it's represented as a figure or as an image on Google Images.

Let's start with a bit of background on the Swiss cheese model. Jim Reason was credited with developing the model, although he did benefit from the direct contribution of John Wreathall. It was a product of evolving safety thinking at the time. There were actually 10 years between Reason first publishing his organizational accident model, or OAM, and the Swiss cheese model. He published many alternative versions in between; I think at least five different versions of this model during that time.

At the same time, Jim Reason researched and published on a whole wide range of safety topics during the late '80s, throughout the '90s, and into the early 2000s: on safety culture, on human error (particularly human error taxonomies and classifications), and on risk management.

Drew, in my opinion, Jim Reason is one of the most influential safety theorists of the 20th century. What are your thoughts on Jim Reason’s contribution to safety science broadly?

Drew: I'll tell you a little bit more when we get into the history of the model. This was a really interesting time in safety. Sometimes I think back to the 1890s, when you could go to the world fair and see all of the greatest inventors you've ever heard of, all in the same place with their work. The idea that Madame Curie, H. G. Wells, and Tesla were walking around the same place and could have conversations with each other.

The 1980s were like that for theories of organizational accidents. There weren't that many people trying to explain why big accidents happened. They all closely read each other's work and knew what each other's theories were. They were all trying to wrestle with this understanding that it's going to be more complex than the simple explanations we have. It was also a time when most of the medical metaphors for accidents hadn't been used up. People could invent their own metaphors, throw them in, and it would be this brand new theory.

David: I remember we spoke about people like Charles Perrow—normal accident theory, Karlene Roberts—HRO. This was all sort of early to mid-'80s through to the end of the 1980s. It appears as though in 1987, the Swiss cheese model was born when Jim Reason and John Wreathall were actually sitting in a pub. Rumor has it that it was actually drawn on a napkin, where John was explaining his defense in depth concept by drawing these overlapping planes on the napkins, and Jim leaned over and actually drew holes on John's planes.

It's like one of those stories: an engineer and a psychologist walk into a bar, and they walk out with the Swiss cheese model. Drew, do you want to introduce the paper that we're going to talk about today?

Drew: The paper is called Good and bad reasons: The Swiss cheese model and its critics. It's published in Safety Science. The good news is that it's published as open access, which means that we can link to it in the show notes and on LinkedIn, and anyone can just read it. It's pretty easy to read. The authors are Justin Larouzée and Jean-Christophe Le Coze, both from the Center for Research on Risk and Crisis. Is it at MINES ParisTech in France? I've probably mangled the pronunciation of that one.

I don't know much about Justin, but I've got a lot of time for Jean-Christophe. He's one of the editors at Safety Science. In particular, he's done this great series of legacy articles, in which each article focuses on one or two key theorists in safety. He's done ones on Jens Rasmussen, Perrow, Hopkins, and Barry Turner. He really knows his stuff when it comes to these theories in depth and the whole range of work of the scholars, rather than just the oversimplified ideas.

David: The method for this paper is a little bit different to the research papers that we usually talk about, even a systematic literature review paper, or maybe even a theory paper where someone's trying to propose an idea. This is more of a descriptive account of an individual and a model, trying to create some critical discussion around that. The broad method for the paper was that the researchers identified and analyzed all of Reason's articles, chapters, and books published over several decades, from the '70s to the 2000s.

They then worked at describing how Reason's models, and the experiences and encounters he had during his academic career, created the conditions for the Swiss cheese model to emerge. They identify and present the various critiques of the model from other parts of the safety science literature and other safety scientists, and then actually outline conversations between Reason, Wreathall, and the first author of the article.

He actually did some interviews with Jim and John about this time and this work, and then tried to situate Reason's work—drawing on Jean-Christophe Le Coze's other work on theorists of that era—alongside other conceptual safety theories and studies of the time.

It's a really interesting research method, Drew. I've actually read a lot of these types of articles, but until I prepared this one, I hadn't actually thought of it as a research method. What are your experiences with and thoughts around this style of paper?

Drew: The closest thing I can think of is that this is the way people very often do scholarship in the humanities. In particular, you see it a lot in continental philosophy, where you'll talk to someone about what their research is about and they'll say, I'm a scholar of Derrida, or, I research Hume. The actual object of research is the thought by and around a particular person in the past. The idea is that we advance our thinking by thinking deeply about how people previously thought, and then try to move on from that.

There are two important features that we sometimes miss out on in safety that they're very good at in philosophy. The first one is that you can't really understand what an author is saying unless you think about who they're responding to. Because usually, particularly when scholars write, they assume their readers are smart and have already read all of the stuff that's out there. They don't fill in the blanks of what's already been said. They move on.

Unless you know what they're reacting to, very often they'll seem a lot more extreme than they actually are, or not seem as nuanced as they actually are. I think we particularly suffer from this in things like Safety-I and Safety-II, when we read Hollnagel without realizing that Hollnagel is responding to people rather than presenting ideas for the first time.

The second thing is that big thinkers don't just produce one work and then stop. Each thing that they write builds on and changes what they've already written. To really understand someone, you've got to look at all of their work. The dumbest criticism of Swiss cheese is that it's just a cartoon, because it is the simplest model that Reason ever produced. Unless you realize that he wrote lots of more complex versions of it, blaming it for being simple when you've deliberately picked out the simplest one is just cherry-picking criticism.

I love the style of picking an idea but then engaging deeply with all of the thoughts by that person over time, around the idea, who they're responding to, and who's responded to them.

David: When we talk about the authors, Drew, you've got to trust that the author is widely read around the ideas of the time, and is (I suppose) widely read across the domain that they're trying to make sense of. Jean-Christophe definitely has an academic reputation that allows us to look at these papers and go, actually, this is probably a fairly good representation of the model and the context of the time.

Basically, the way the paper is written is there's a history of the ideas behind the models. Going back to basics about how the model emerged, then a look at the criticisms, and then looking forward.

We're going to do a little bit of that now through this paper. We're going to talk about the ideas behind the model. We're going to talk about the criticisms and how they're discussed in the paper, and then our version of looking forward is going to be some practical takeaways and what might come next. Drew, do you want to give a bit of the history going back to before the model was developed?

Drew: This is something that I didn't know until I read this paper that I find really fascinating. I did know that Reason started off his career looking at motion sickness. What I didn't understand is that in the middle, he did this style of psychology that I personally love, and that has mostly been swallowed up now in very quantitative experiments in psychology. It's a very naturalistic or descriptive way of looking at things. 

Reason one day made a mistake and thought, that's weird. Why did I make that mistake? He then just started keeping diaries of all of the mistakes he made, and then got other people to keep diaries of the mistakes they made. He tried to align that with theories about how the brain works to understand his own mistakes and understand the cognitive processes that generate mistakes.

David: Actually, the story in the paper—I'm not sure whether it was referenced or from the conversations with Jim—is that when he decided to go down this path of human error and safety, he was actually in his kitchen boiling a pot to make tea. This was in the days before the tea kettle, when you actually put the leaves in the pot. His cat was hungry and was intimidating him, and he took a big scoop of cat food and put the cat food in the boiling pot instead of the tea leaves.

I suppose he was like, that's interesting. I didn't mean to make a mistake, and I've done this a lot of times. I wonder what happened there? He almost points to that one innocuous situation in his kitchen as setting his entire career direction.

Drew: Why did he put the cat food in the tea, but not also put the tea in the cat food? He didn't just reverse the two things. What's the cognitive process that leads you to combine the tasks in that particular way? There's all sorts of fascinating stuff in here that I think has since been misinterpreted.

One of my other favorites was this distinction between errors and violations, which very often crops up in just culture models as a way of saying errors are innocent but violations are blameworthy. It originally came from Reason trying to sort out the different cognitive processes that lead to mistakes, and saying that some acts are intentional but still don't do what you want (that's a violation), versus things that you didn't intend to do (that's an error).

He does similar things, like separating slips and lapses, because a slip is an attention failure and a lapse is a memory process failure. There was never any intention that this would be some scale of least bad to most bad, a judgment, or anything like that. He's just trying to understand this broad range of errors and sort of work with the assumption that there must be different cognitive processes. That's why he's classifying errors: to find out the different cognitive processes that lead to different types of errors.

That's the early Reason. I think it's relevant as an illustration of how this real curiosity type of work can become misinterpreted by other people and turned into much more normative models of how we should think about the world.

Reason himself shifted from this real naive curiosity much more into management consulting. His work turned toward both trying to explain organizational accidents and providing consulting services to organizations to manage human error. That's where he started coming up with these more normative diagrams that describe how bad things happen and where the points are that organizations should try to work on.

David: Drew, Reason was also exploring this other idea of latent conditions. There are the actions of the individuals and the reasons for those actions, but then also the latent conditions in the organization that make it more or less likely that accidents will occur.

Reason, early on, was concluding that it was more interesting to focus organizational safety management efforts on the detection and elimination of these latent conditions rather than any individual active errors or trying to work really hard at the individual errors on the surface. Even some of that latent condition stuff, we still have as part of our incident investigation vernacular, even now.

Drew: You can also see a lot of parallels between Reason's treatment of latent conditions and some of the new view or safety differently ideas. Reason was saying, don't focus on the moment of the accident, because the moment of the accident is just the last-minute response to the underlying problems in the organization. The problems are things that are deep-set, that have been around for a while. We don't need an accident to study those. We should be able to find them in the organization now.

He was much more about problematizing it than most of the current everyday work people are. He was using terms like pathogens and talking about organizational viruses that spread and how we need to root them out. He was definitely talking about focusing on what goes wrong normally every day rather than focusing on the errors made by humans, which he saw as just an emergent property of the organization.

David: I think for people who are newer to safety: these worlds did collide, or maybe not so much. In the early 2000s, when the Resilience Engineering community was founded (in 2004), Sidney Dekker, Erik Hollnagel, David Woods, Nancy Leveson, and Jim Reason were all there, and they all wrote chapters in the first Resilience Engineering book, published in 2006.

As little as 15 years ago, all of these people were in the same room discussing very similar ideas and the foundations of a lot of their own individual theories that have come subsequently—and obviously prior to in the case of Jim Reason and others. It's not like they're all living at opposite ends of the world developing these opposing views in isolation.

Drew: What was missing at this time was our modern notion of safety management systems. A lot of the work that they were trying to do was they were essentially trying to invent safety management systems. They were trying to explain what are the key things that an organization needs to do or have in order to be a safe organization. 

Whereas the HRO people were focusing very much on broad properties such as deference to expertise and moving things towards the frontline, Reason and Wreathall were focusing on what the political and institutional structures are that you need to have within your organization. What do you need to have as your requirements for decision-makers? What do you need to do in order to have a management chain? What are the defenses in depth that you need to have in place?

They're just sort of trying to create different subcategories, categorization systems, triangles, or systems that link all of these things together into one diagram that explains, this is what an organization needs to be, have, or do to be a safe organization.

David: For the model itself, Drew, we've mentioned that John Wreathall had the defense in depth concept. He was a nuclear engineer, and he was actually trying to understand the physical reality of how the nuclear industry was working, to design and operate a safer nuclear system. He had this defense in depth concept.

It provided Reason with his normative model, which is, like you said: organizations need to have decision-makers, they need a managerial chain of departments, and they need these organizational preconditions like training, equipment, and plant maintenance. They need their defenses or their safeguards—these technical, human, and organizational safeguards.

He had this model and, like you said, Drew (I hadn't realized it until you mentioned it), this is the early architecture of safety management systems. Reason combined these ideas with his existing work on human error and latent conditions to publish his first organizational accident model. But it was still yet to be known as the Swiss cheese model.

I wasn't aware of this, but in 2000, Reason published his model in the British Medical Journal. This was two or three years after the version of the organizational accident model that became known as the Swiss cheese model was published. He was worried that the medical audience he was writing for at the time would be less familiar with some of these more complex or modern human factors ideas than the aviation or nuclear industries would be.

He simplified the model and represented it as the cartoon you mentioned at the start of this podcast: a few slices of cheese, a few holes in them, an arrow going through, and then just discussion in the text. Most of our listeners, when they think about Swiss cheese, have probably seen that graphical representation. It was initially only published once, in a medical journal, as an oversimplification of his own diagram.

Drew: Yeah. It's fascinating that the more it's oversimplified, the more it has spread. Le Coze tries to give some sort of explanation for why this particular model was so successful, and why, out of all of these diagrams, that's the one that's been picked up and spread. I'm not sure, David, how compelling you find the argument. The idea is that it's simple, it's got a metaphor, and it's not tied to any one domain. It's got this defense in depth coming from engineering, and it's got the pathogens from the medical domain, like the holes in the cheese. The fact that it's got food in the diagram seems to be important as well for the ultimate appeal.

David: I've had a few conversations since we've been preparing this podcast. And probably for our listeners' benefit: we're not that far in front of the moment; we're not recording this a long time before you're going to be listening to it. I've been talking to people and asking, what are the models that people know? People know about the triangle. People know about the iceberg. People know about dominoes. People know about Swiss cheese, in terms of safety. There's something for us to reflect on and think about in safety, which is that some of these simple ideas, often conveying quite complex theories or perspectives, become so catchy.

This graphical representation of ideas is popular on the management side, and it seems to work in safety too, because what you have in management and what you've got in safety is the need to communicate ideas to a broad range of practitioners with a broad knowledge base. Putting a whole lot of domain-specific words on a page can make it really hard to talk across the domain in an organizational sense, with people at different levels of the organization and people with different roles in the organization.

That's why the simple ideas catch on: because they can, at least at a surface level, be talked about widely, and potentially misunderstood widely, and that's the real problem. The simpler you try to make some of these complex ideas, the more multiple and wrong interpretations you can get. That's how I thought about it, but it's funny that in safety we've got these super simple, weird things that seem to have really stuck hard.

Drew: One of the arguments that they make in the paper is that the openness to misinterpretation is a feature, not a bug. If you've got a really simple diagram without a lot of words on it, then it becomes a boundary object, where people can agree that it's true even while they're thinking different things inside their own heads. We'll get on to criticisms in a moment, but that's something that Dekker, when he's being less dismissive than he sometimes is, points out: what exactly are the holes, what exactly is the arrow? These things are not clearly specified. If you try to pin them down to one explanation, they might not even make coherent sense.

But everyone can have their own rough idea about what it means, and because it's a rough idea we’re not forced into particular words, then we can all agree that it's a good diagram or agree that it's a suitable metaphor. I found that really interesting. It says a lot perhaps about the propagation of other ideas. 

I imagine there are similar things, such as with learning teams. If we keep the concept vague enough, then we can all have slightly different ideas about why we do it or exactly how we do it. We can all agree that learning teams are a good idea because we're not forced to pin down exactly what we mean, which means we're not forced to disagree with each other.

David: I think that's really insightful, Drew. We did the episode on fads and fashions, and we talked about the cycle of innovation and commercialization and then renewal. The argument you're making here is that if you put something on the table that no one can disagree with, then it's likely to hang around. If you make something simple and vague enough that everyone can form an interpretation they can agree with, then you've probably got a good chance of getting it going.

Drew: It can't just be any idea, it has to be seen as insightful. Everyone's got to agree that it's important and says something, just without being pinned down to exactly what that importance is. David, do you want to take us through the formal arguments that people have made against the model?

David: Yeah. There are two main arguments that the paper raises about the model. The first is that there are people who have argued about the model as it relates to the representation of an accident—about how well it does or doesn't represent accidents and accident causation. And then there's the second argument, which is that the model is just too generic and too underspecified to be useful. Let's discuss each of these, Drew.

In relation to the first one, that the Swiss cheese model doesn't represent accident causation appropriately: there are three main arguments, made by three people, and we'll name and talk about each of them. Hollnagel talks about Swiss cheese being a complex linear model, when we actually need systemic models. He talks about accidents being dynamic, where causes and effects interact, and Swiss cheese doesn't explain accidents in that way.

Dekker talks about Swiss cheese representing a more static view of the organization, one that assumes we can take accidents, organizations, and events and decompose them: focus on a part of the system, fix that part, and then get the whole system to function as we intend.

Leveson is probably the most critical in some of this work, where she says that the Swiss cheese model is simply nothing more than a version of Heinrich's domino model from the 1930s. Together, Drew, these critics claim that the Swiss cheese model is no longer useful for anticipating today's accidents. Your thoughts on these criticisms?

Drew: I think this is the downside of presenting a model that’s open to multiple interpretations. If you get people who want to criticize it, all they have to do is impose their least charitable interpretation onto it, and then knock down that straw man.

In reality, Hollnagel and Reason would both agree that there are these vague underlying causes of accidents that can't be put into a linear diagram. In fact, that's what Reason has said those holes are: they're organizational pathogens, they're underlying latent causes. Hollnagel deliberately misinterprets them to say, oh no, they're specific event chains, and I disagree with that. Even though Hollnagel would agree with Reason's underlying view of what's going on.

I think Dekker’s criticism is a bit fair. They are particularly the more sophisticated versions of the model. If you look at Reason’s organizational accident model, he is trying to present a more static view of organizations, defenses, and decomposable parts. Reason is very much about putting things into categories in order to understand them. That's not going to appeal to someone like Dekker who loves to criticize the idea of putting things into categories. I don't know whether you could say that it's a fair criticism or not. I'm not trying to agree with Dekker’s criticism. I like categories myself. I love putting things into diagrams, but at least not misrepresenting them.

David: He’s probably the one that's aligned, Drew, with our current thinking in complexity science, systems of systems, and things like that. I suppose behind that criticism is a theoretical perspective on the way that complex systems work.

Drew: As for Leveson calling it just another version of Heinrich's domino model: Leveson is criticizing the way most safety engineers misinterpret the Swiss cheese model. If you want to do that, fine. You're just missing out on most of what Reason said in return for knocking down a straw man.

David: The second criticism, Drew, is that the Swiss cheese model is too generic. This was a little bit more vague for me. The authors of this paper describe a critique that recognizes the complexity and systemic nature of the model. Let's assume that the Swiss cheese model can represent complex and dynamic systems; its graphical simplicity still doesn't provide any understanding of the links between the different causal factors. There are not enough arrows, there are not enough feedback loops, there are not enough multiple pathways. It's just graphically too simple.

The other critique is that the model just lacks guidance. Because, like you said, the holes aren't defined, the rest of the cheese isn't defined, the arrow isn't defined, and even a lot of the individual slices of cheese are only generically labeled and unspecific to organizations. It's therefore left open, and practitioners need to make their own interpretations and adaptations so the model can fit. Therefore, it's got little utility in real life. People can't just pick up the model and go, great, I can use this as a tool in my day-to-day work.

Drew: I think this criticism—even though it's a bit vaguer—is probably also a bit fairer. Probably the largest single practical use of the Swiss cheese model is its translation into accident investigation tools such as ICAM, which try to take the generic picture and start dividing it up into categories and labels, and put names on the slices.

In doing so, they embrace all of the actual criticisms that people have made of the model. They turn it into something which is static, which is linear, which treats organizational causes like dominoes. From that point of view, the lack of detail in the model leaves it open to that misinterpretation, which then leads to all of these other things which are rightly criticized.

While I don't think Reason can be blamed for the misinterpretations, I think he can certainly be blamed for putting out into the world a loose metaphor, which is so easy to misinterpret.

David: Yeah, Drew. This paper then goes on to provide a critique of these criticisms. It's almost like the authors are trying to be the voice of Jim Reason, and they make the argument referring to some of Reason's own words from as late as 2006 and 2008.

As the model started getting traction in organizations, Reason felt the need to come back time and time again to clarify his model and say, no, no, no, this is not what I meant; this is what I meant. Drew, do you want to talk a little bit about how the authors critiqued some of these criticisms?

Drew: David, I think you've actually provided a pretty fair summary there. Most of the response to the criticisms is just to say that's not what Reason meant, which, as I said, is a fair response to all of the criticisms, except that Reason himself wasn't clear. The key example of that is in 2006, when Reason said that the model is weak because it doesn't really make scientific predictions; it's not a scientific model. "I never intended to produce a scientific model" is the worst excuse possible that an academic can give in defense of their own model.

I wasn't trying to do science, I was trying to be [...] the consultant, is not an excuse. I think the response that the criticisms misinterpret Reason is totally fair. But the fact is that the model lacks specificity and is open to misinterpretation. Really, I think it leads to this fundamental question of what it usefully tells us, then. If all of these different interpretations of it are not what Reason meant, then does it still offer us something appropriate and useful?

I think that's where the paper then sort of moves on to point out, what was Reason trying to communicate, and is that still actually a valid useful message?

David: I think, Drew, that's why it's good that it is open access. I'd encourage our listeners: if you are going to read one paper in response to an episode, this is a good one to read. It's an easy one to read. There are some good direct quotes from Jim Reason, from the interviews and discussions with him, about what he might have meant and what he might not have meant as well.

In the 1990s, what Reason had attempted to do, Drew, was reconcile the three different approaches to safety management at the time: the person-centric model, the engineering model, and the organizational model. He tried to create a Swiss cheese model, an organizational accident model, that reconciled these three worlds, if you like.

We've heard some of these things now when we talk in incident investigation about workers, work, workplace, and things like this. We talk sometimes about person, task, organization. This was the model that tried to bring it all together.

A word of caution, though. There was one empirical study referenced in this paper where researchers took a whole range of quality and safety professionals (all frequent users of the model, and all reported as being very conversant, very at ease, and very confident in their understanding of it) and asked them questions about some of those things you mentioned, Drew: what does the arrow mean, what does this mean, and what would you do in this situation?

There were vast differences in how those practitioners interpreted the model: how they thought of it in relation to accidents and errors, how to fix holes, how to add barriers or slices of cheese, if you like. Just be mindful (and it's probably a good segue into practical takeaways) that if you are talking to someone about the Swiss cheese model, what you mean when you say the Swiss cheese model may be very different from what the person you're talking to is thinking or saying when they talk about the Swiss cheese model.

Notwithstanding that, Drew, in my opinion, before we do practical takeaways: I agree with the authors that the Swiss cheese model significantly contributed—at the time that it was developed—to shifting the focus of accident prevention from the individual to a broader systemic view.

Drew: I've got a question for you though, David. Reason was a psychologist being called into organizations to help them out with their safety practices. Even though he was summoned because he was a psychologist who had deeply studied how people make mistakes and the cognitive processes behind them, he didn't come in with a psychological approach to fixing people and stopping them from making mistakes.

He said instead: what you're going to do for safety is reconcile these three things, all of which need to be managed. Yes, you need to spend some time understanding the people and how people work. You've also got to think of your organization in engineering terms: what are the defenses in depth, and how are your defenses established and maintained? You've also got to think of your organization as an organization: how is it run, how is it managed?

His other work asks things like, what is the culture of the organization? Now, my question for you, David: if you want to explain that to people and put it up on a slide, is a set of Swiss cheese slices really the best way to explain the need to embrace those different things?

David: Well, maybe not. It's a good question. I didn't know where you were going to end up with that question, so I was preparing a few different answers depending on where the question went. I think his models in many of these articles were much more elaborate. The Swiss cheese one was maybe a way of talking very briefly; judging by the British Medical Journal piece, it was probably a two-to-three-page article with one figure, and that became it.

He did have a more elaborate model. I think it would have also been a great time to be a safety scientist, because he could blend those worlds. He could bring his psychological world, and he could speak to Wreathall, and he could speak to Charles Perrow. It was a good time to be thinking about safety, and about how we think about safety beyond the individual.

I don't know if that answers your question. I probably wouldn't have thought to put up Swiss cheese, although maybe I'd be a better consultant if I had thought of something like Swiss cheese. But I'd probably be more academic. I don't quite know how to answer that one, Drew.

How about you lead us off? My only practical takeaway was at a practitioner level, which was: just don't discard models, and don't throw models into your organization, unless you know what you're trying to achieve. That was about as good as I could get from this episode, but you've given it some more thought than me.

Drew: That's actually closely related to my first takeaway, which is that if you're using a model, ask yourself what you're trying to achieve by using it. Any model is helpful if it's giving you something useful. 

If you're using Swiss cheese because you need to explain to management why we should focus on the latent everyday problems instead of on the errors that people are making, and you think that the Swiss cheese diagram is a good way to show that and then they’ll understand it, then Swiss cheese works for you. You shouldn't really care what other people say about the model if it's helping you to communicate that point. 

But if instead you're using the model just as a way of dumbing things down, where someone else is trying to explain their ideas to you and you're saying, that's just Swiss cheese, then the model is not helpful. It's just being reductionist.

The second big takeaway, which is something that we say all the time on the podcast is I think Swiss cheese more than anything else illustrates the importance of reading the source material. There are so many different interpretations that Reason himself had behind that diagram that you are missing out on if all you do is take someone else's folk interpretation of what Swiss cheese means. If you like the diagram, do the guy the courtesy of reading some of his other stuff. It's not that hard to get a hold of.

David: Yeah, like I've mentioned on the podcast, Drew, he played around with these ideas for decades, at the same time as he was writing about safety culture and human error, trying to come up with a taxonomy of unsafe acts, and all this stuff. People sort of discount it because another theorist said it just represents linear accident causation, which we know is not true. It's a bit unfair.

Drew: The final takeaway I'd have is really aimed towards the academics and researchers who I know listen to the podcast. I think Reason's career is a really good illustration of the opportunities and traps that consultancy provides. You've got a career that starts off in a topic—motion sickness—that he never touches again. That's familiar to all academics: your first work becomes irrelevant.

He then goes into this interesting period of intense curiosity about the way we all make mistakes and does this fascinating work. But then he gets captured by the consultancy. His work becomes less about how we understand the world and more about how he can explain his ideas using the right diagram, so that those ideas are communicated to the organizations that will apply them.

You can see the many, many times he tries to come up with different diagrams to explain and communicate the ideas. The end result, I think, is a cautionary tale because, despite all of this work, the thing that was successful is the least informative, least useful, most open to misinterpretation version of this lifetime of work.

Don’t aspire to be successful. Don’t aspire to become a meme. Don’t aspire that your particular model or metaphor catches on because you're going to lose control of it. Your life's work is just going to disappear down that metaphor. It's not going to be your success, it's going to be the success of something you didn't intend. I know Hollnagel has complained about that multiple times. I know Reason was disappointed by it as well—that the ideas got lost behind the model.

David: That echoes the things that we talked about with Heinrich and some other safety theorists. Maybe there's a bit of personal advice there, Drew, for someone who tries to straddle the consulting and the academic worlds a little bit. I haven't thought of anything as good as a triangle, an iceberg, or a Swiss cheese just yet.

Like I said, my takeaway is that safety is as much about the communication of ideas and the alignment of people in our organization as it is about the technical aspects of how we manage tools and tasks. If these models and ways of communicating actually help with that messaging in your organization, then that's good. But I think Drew's caution is really helpful, which is to make sure you don't lose control of that message by having something that's open to interpretation.

Drew: David, we haven't got an empirical paper this week; it's more of a discussion piece. But we did still ask a question: is Swiss cheese helpful for understanding accident causation? We don't have evidence, but we do have opinions. What do you think?

David: I'm going to say yes, Drew, on the basis that if you took what's underneath the model, this idea of the layers as safeguards and the holes in them, looked at everything that makes that up, defined that as something contextual and specific to your organization, and then overlaid that on particular events, I think it's a place to start. I don't think it's the place to finish, but I definitely don't think it's fair to say that it's entirely unhelpful for understanding accident causation.

Drew: I'm going to say the opposite, David. I would say that I think Reason himself had some really good and useful things to say about safety in the organization. I think the Swiss cheese model itself has done more to hide and destroy those ideas than it has to reveal them. The noise from the misinterpretations has actually done more to drown out the good ideas that it has contributed.

David: Drew and I don't agree, which wasn't planned at all. But [...] say Drew comment, David comment, and see what happens. We'd love to hear your feedback. If you join the episode conversation in the comments on LinkedIn, let us know what you think in relation to this question: do you think the Swiss cheese model is helpful for understanding accident causation? If you get a chance to read the paper as well, let us know what you think.

Drew, that's it for this week. We hope you found this episode thought-provoking and ultimately useful in checking the safety of work in your own organization. Join us on LinkedIn or send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com.