The Safety of Work

Ep. 58 What is the full story behind Safety-I and Safety-II (Part 2)?

Episode Summary

Welcome back to part two of our three-part series on Safety I and Safety II by Erik Hollnagel. Today, we continue digging into the full story behind this book, its theories, and the conclusions drawn therein.

Episode Notes

Picking up where we left off, we begin our discussion with chapter three. Over the course of this episode, we talk about Hollnagel’s definition of Safety I, the myths of safety, and causality (among other things). Tune in for part two of our in-depth look at this important book.

 

Topics:

Chapter 3.

Chapter 4.

Chapter 5.

 

Quotes:

“...I think this one particular idea of work as imagined/work as done has been thought about a lot in the time since this book was published…”

“What is this measure of successful work? What is this way that we would categorize something as successful, if it’s not, not having accidents?”

“It’s a misinterpretation of Heinrich to apply the ratios.”

“And that sort of criticism of the old to explain the new, I think is never as firm a foundation as clearly explaining what your sort of underlying ideas and principles are and then building on top of them.”

 

Resources:

Safety I and Safety II: The Past and Future of Safety Management

Feedback@safetyofwork.com

Episode Transcription

David: You're listening to the Safety of Work Podcast episode 58. Today, we’re continuing to ask the question, what's the full story behind Safety-I and Safety-II, part two. Let's get started. Hi everybody, my name's David Provan and I'm here with Drew Rae, and we're from the Safety Science Innovation Lab at Griffith University.

Welcome to the Safety of Work Podcast. This week, we’re continuing our discussion from the last episode about the book Safety-I and Safety-II. If you haven't listened to episode 57, we’d suggest you go back to listen to that one first and we’ll wait. Okay Drew, let's dive into chapter three.

Drew: Chapter three is called The Current State. It begins by restating the basic problem that Hollnagel presented in chapter one, that focusing on things that go wrong is limited. The chapter introduces a few new ideas about why it's limited. The first one is the idea of habituation. He says that since things go right most of the time, we don't really notice what's causing them to go right, we just think of that as normal. We've got this sort of unexplored opportunity to understand why they're going right that we don't take advantage of because we're just not even paying attention until things go wrong.

David: I think, Drew, this is the idea in safety of complacency: when things aren’t going wrong, organizations and the people within them become complacent. In my view, the same mechanism that Erik's talking about here is what prompted the idea of a preoccupation with failure, which is the first HRO principle, and then later Reason, when he first proposed chronic unease, and Karl Weick with collective mindfulness, as solutions to overcome this complacency and the acceptance that things not going wrong is just normal. We’re just starting to see these continuing overlaps in some of the ideas in some of the new view safety theories.

Drew: David to be honest, I'm not certain that I agree with that connection, but I don't have any textual evidence otherwise. I think this is something where the book at least at this point is genuinely unclear what it would mean to focus on success. One interpretation is this idea of preoccupation with failure or that the seeds of failure are buried in success. Even though we’re successful, there are things going on that shouldn't be going on with all the warning signs that we could detect. The other way of interpreting that is that that's still Safety-I thinking that when things are going well, we should be looking at what is making them go well. Looking at what are the mechanisms that are causing success and encouraging those mechanisms.

I would have said previously that I thought that second interpretation was the correct or canonical one, but I recognized the fact Hollnagel hasn't said that. All he said is we don't study normal work very well. We get complacent about not thinking about the mechanisms. I think that either interpretation is really quite valid.

David: I think, Drew, it's a good point, because when I thought about it, habituation is a very cognitive psychology idea. I was like, why is this in a book that's trying to understand organizations and understand safety? Habituation isn't talking about what's right and what's wrong. If you go deeply into the theoretical psychology side, the argument is that things go right most of the time, so we accept this as normal. But habituation is not about what happens most of the time; habituation is about what happens all of the time. He has maybe selected the wrong theory to link to this idea.

Drew: This is something that we’re definitely going to come back to, David. This is a particular habit in the writing style in this particular book. There are a lot of words that get repurposed away from even the original theory that generated that word towards a new meaning of the word. It's a style which I think is unnecessarily confusing, because it carries all of the old meaning of the word, all the old sophistication of the word. As you said, habituation comes out of this deep vein of theory about psychology, but actually Hollnagel is really just using it to say, we don't tend to notice what's going on around us, we just accept it as normal, which is much simpler than the word implies.

David: Yeah. It’s probably some of the heuristics theory, maybe even some of Kahneman and Tversky’s work, or something else, that would have been better placed in this argument than what was chosen. He also introduces here, I suppose for the first time in the book, this idea of work as imagined versus work as done: this gap between how organizations prescribe work and how it's actually carried out. He says that we assume work is going well because of how it's prescribed. When we investigate accidents, we find out that work varies from the procedure, from how we prescribed it, and we assume that this variation is the cause of the accident.

Drew: I don't know if you noticed this, but the way Hollnagel uses the terms work as imagined versus work as done is not exactly the same as I'm used to using them either. Hollnagel explicitly says this is work as prescribed by management. Work as imagined, according to Hollnagel, is how management thinks work should be done. Then work as done is how it's actually done. Work as imagined here doesn't have any of those connotations of routines, or representations of work, or management's picture of what's actually going on. It's strictly how management thinks work should be done, the prescription.

David: I think I just passed over this because this book is six or so years old. I think this one particular idea of work as imagined and work as done has been thought about a lot in the time since this book was published, by Erik and also by other authors, Steven Shorrock and other people. We've talked now about work as prescribed, work as done, work as desired, work as reported, all these different lenses through which we view work as intended or as performed. I think this is an earlier definition, saying that, okay, work as imagined is what's in the policy, which is what management thinks happens, and work as done is what the workers do. That's a little different from what I'm used to.

Drew: Maybe that's an interesting warning for ourselves and for other readers. Even for the person who first introduces you to an idea, their own ideas about that idea can evolve. It's not like this book in 2014 is the authoritative, definitive source for Hollnagel’s views on Safety-I and Safety-II, or Hollnagel’s view on work as imagined and work as done. People are allowed to evolve and update their ideas. At this point, we've got this fairly simple picture that work as it's prescribed by management can be different to how work is done.

If we assume that work is going well because of how we think it should be done, then anytime we investigate an accident, we’re going to find out that work doesn't quite happen like that. We can assume that that's the cause of the accident. There's many uses of the word assume there. Hollnagel is suggesting that this could be a very faulty picture of what's actually going on. It could actually be that work most of the time is happening not as it's been prescribed. In fact, that's quite successful and is working quite well. The cause of the accident is not that variation. The cause of the accident if there is a cause at all is something different.

David: I really like the idea of performance variability when it comes to work. I suppose, again, safety is hard and complex, because sometimes that variability of work is a source of adaptation that resolves conflicts, and challenges, and risks that emerge through the work. Sometimes that variation is an adaptation to other pressures, goal conflicts, and tradeoffs, and it erodes margins for safety and leads to incidents. It's a great simplification, as Erik is pointing out here, to say that if work varies from the procedure, then it's automatically unsafe.

Drew: Chapter three is also the first time in this book that we get a definition of Safety-I. I think it's best if I just quote it directly. Hollnagel says that Safety-I is a perspective. He says that this perspective defines safety as a condition where the number of adverse outcomes, such as accidents, incidents, and near misses, is as low as possible. He then immediately refines it, saying that as low as possible is muddled and complicated, because what is possible? He says it probably, in practice, means something like reduced to an acceptable level. The overall definition, then, is that Safety-I defines safety as a condition where the number of adverse outcomes is reduced to an acceptable level.

David: I've struggled with this the most in terms of Safety-I and Safety-II. I think I mentioned last week, if we're saying that Safety-I defines safety as a condition where the number of adverse outcomes is as low as possible or at an acceptable level, but then my working definition of Safety-II, as Erik has said, is that if we focus on more things going right, then they can't go wrong at the same time. My extension of that is that, to the same extent, if all things are going well, then you've got a condition where the number of adverse outcomes will be as low as possible or at an acceptable level. I still can't get my own head out of this circular argument: are Safety-I and Safety-II pushing for the same eventual outcome?

Drew: Here we have that thing that Hollnagel has hinted at which is, he said that success and failure are not two sides of the same coin. He's saying, they're not the only two possible outcomes, but he hasn't explained what he means by that. If we get a clearer explanation at some point of what we can have other than success or failure, then we do have actually quite a clear difference. One of them is measuring failure, the other is measuring success. One isn't just the opposite of the other, because we've got possibly a third category, possibly some overlapping definition in some sense.

David: I think what's being hinted at here is maybe Erik's early ideas around work outcomes being not just about safety. He has recently published a book, this year in fact, called Synesis, where he’s trying to recombine safety, and productivity, and cost, and other outcomes of work. He introduces some interesting arguments about safety as a cost that don't really belong in this chapter. It's very hard to get investment in safety when you've already got low numbers, but it's easy to get investment in improving work. If I take some of your interpretation there to its next conclusion, it would be that Erik is trying to say that we actually should just make work better on a whole range of levels, including safety. That's different to just trying to prevent safety incidents.

Drew: Listeners, if it sounds like we’re sort of struggling here to interpret a difficult verse in the Bible, David asked me before we started the podcast to avoid any sort of direct criticism of Hollnagel, as opposed to criticism of the work. Let me just say that it is a feature that appears in writing that has come from Hollnagel. You get very few, very clear definitions. You get all of this talk about what it is not and what the alternative is, but you very seldom get a nice precise definition of what Hollnagel is claiming or proposing, which is why we need to do this heavy interpretive work to figure out what the implication is, and what hints the surrounding ideas give about that implication. It makes it fairly frustrating for a reader. It also makes me very forgiving and sympathetic towards people who are very harsh when they're talking about Safety-I and Safety-II. We say go and read the original, but the original is actually really hard to read at times.

David: I think you talked about DP on the podcast a number of times before. I think because the language is quite accessible, it's not academic prose, it's not heavily referenced as a book, you can read through it and go, it makes sense. What we've tried to do in these three episodes is try to pick it apart almost page by page. That's when it gets actually really challenging to make sense of. It comes down to this: if you're trying to write a book that explains safety in 200 pages to anyone who picks it up, you're maybe always setting yourself up for a very difficult challenge.

Drew: There are some clear things, though, that Hollnagel says about Safety-I. I think this chapter does give us quite a good working definition of Safety-I even if the actual definition isn't very good. He says that under Safety-I, work is either safe or unsafe. Remember, safe doesn't mean zero accidents; it means an acceptable number of accidents. We've got these two states for work to be in. Safety-I assumes that when work is safe, it's because it's operating as it's intended to operate, as it's prescribed. If it's unsafe, it's because it's operating incorrectly.

I think that is a very clear world view, and I think it is consistent with the way a lot of people think about and manage safety: there are these two worlds. There's the unsafe world, there's the safe world, and we're supposed to be operating in the safe world. There are two ways you can do that: you can detect when you drift out of the safe world, spot the problem, and fix it, or you can hold yourself in the safe world by preventing work from varying or drifting.

David: I think Drew, these are the things that we know. If you do an audit maybe or an investigation to work out the problems and fix them, or you have supervisors doing inspections and activities to try to make sure that work is happening in real time how it's supposed to be. The mechanisms and practices we've seen in safety for all of our careers have lined up with some of these world views that Hollnagel is sort of painting about what he's calling Safety-I.

Drew: What Hollnagel also makes I think is an original and very astute observation which is that if you have these world views, there's an underlying theory about the causes of accidents. The theory is that if you get an unsafe outcome, then it has to have been caused by something that is specific and unusual. At least in principle, you should be able to trace back those causes and find that there's a difference between the things that caused the safe work and the things that caused the unsafe work. A lot of our safety methods, and mechanisms, and practices are built on that assumption that we can unpack those causes in advance to prevent us from going down the unsafe path.

David: There's two further points that are made about Safety-I in this chapter. The point that Safety-I is reactive. This is, again, mostly just a narrative. We're not sure how much of some of this writing is about Erik's observations of the industry versus Erik's interpretation of some of the historical safety theory. I think what Erik’s trying to do is just talk about Safety-I as not being a sufficient picture of safety. 

When he talks about things like risk assessment Drew, I think he talks about risk assessment as proactive in a sense that it's happening before a problem, but he also then does say that the assessment of the risk is just responding to the identification of a problem in the business which makes it reactive. What he’s trying to say in Safety-I is we're looking for these problems. It's always going to be reactive, because it's always going to be trying to spot and understand the problems in the business.

Drew: I have just said that Hollnagel has spelled out quite a fair description of Safety-I. I think the rest of this chapter including this bit about reactive versus proactive is where Hollnagel starts to insert a bit of the rhetoric he gives when he's talking in public trying to explain these ideas and trying to motivate the ideas. I don't particularly think that labels like reactive and proactive are descriptive in any sense.

Really, it's just that reactive is bad, no one wants to be reactive; proactive is good, everyone wants to be proactive. You apply the word reactive to everything you don't like. If you're using the words in any sort of technical sense, I would say that you could do Safety-I proactively by trying to anticipate, or you could do it reactively by waiting until things drift and then bringing them back into line. I don't think Safety-I or Safety-II are inherently proactive or inherently reactive. Those are just value-based labels.

Hollnagel does also make quite a legitimate point about the justification of safety. This is really a sort of aside from the main argument; I just think it's really interesting, and it belonged in the book somewhere. If you're always protecting against harm, then you can never justify the cost. It's really hard to make a business case for events that don't happen. He says this is what safety people are always trying to do. They're trying to say, sure, safety is costing you money, but think how much the accident would cost you. The trouble is, you don't know what the accident is, you don't know how likely the accident is, it hasn't happened, you have no evidence of it. You're justifying safety based on this non-event.

David: Yeah, Drew. This speaks to some real problems that I've had through my career, and I think it's almost, at times, an existential crisis for safety management. I think our listeners who are working in safety could reflect, as I've done, on how many incidents we think could have occurred in our careers if it wasn't for some of the work that we were doing. And by the same token, to what extent do we think that we've actually made work more successful for people in our organization?

It's very hard. I mean we know that you can never measure something that hasn't happened. It becomes very hard in safety. When you're trying to actually get an investment to protect against harm, what you're saying to an organization is, I want you to certainly give me $100,000 on the chance that I may or may not reduce an incident that may or may not happen, which is a very complicated tradeoff for an organization to make when trying to make decisions.

Drew: I think this is also the practical attractiveness of Safety-II. If it could actually offer us a way of measuring positive achievements other than just as safety work that we've done in the absence of negatives, that would be really valuable just for the business management of safety.

David: I think, Drew, again, at the end of chapter three we’re a third of the way through the book. One of the challenges that we’re faced with, which I think we're still faced with today, six years on, is that if we say the opposite of having accidents is not "not having accidents", but work going right or being successful, then what is this measure of successful work? What is this way that we would categorize something as successful, if it's not not having accidents? We’re in chapter three and we don't have an answer yet. Let’s not give away the book, but I'll be surprised on a reread if I get the answer in the book.

Drew: At the end of chapter three, Hollnagel has been painting a picture of current problems in safety. By calling chapter three the current state, I think Hollnagel is at least implicitly saying that most preexisting safety management is Safety-I. I'll just give a bit of a spoiler for chapter five, he does make this explicit. He does start name checking authors, and theories, and models, and he really does think that everything that is not coming from Hollnagel is Safety-I. I think that is not an unfair characterization of what's in the book. I do think that it's an unfair characterization of the world for him to do that.

I think it's fair to say that most safety theory, most of the way academics think about safety, does come from the Safety-I mindset: safety science is about understanding the causes of accidents, and safety management is about preventing those accidents. But a lot of safety practices don't draw explicitly on that theory. A lot of things we do in organizations are not about defining a safe and an unsafe world and keeping us in the safe world.

David: I agree. Look at the number of practices that we do in our organizations, Drew, like training; we've mentioned risk assessment already. There are lots of things that we do in our organizations, even learning activities, where we don't need to think of them in the context of either Safety-I or Safety-II. We need to think of them in the context, like we always try to do on the podcast, of what's the mechanism in the organization that we're trying to influence and how that makes some connection to the risk that people face, rather than trying to say, is this in the Safety-I or the Safety-II camp.

Drew: I think I’ll disagree a little bit and say that I think risk assessment does squarely fit within that model of Safety-I. Risk assessment is about trying to work out what an unsafe world looks like, how we could get there, and how we could prevent getting there. I don’t think that's a criticism of Safety-I or of risk assessment; I think it's an activity that is very logically consistent with a very clear theory of safety.

David: Well, think about the definition of risk assessment. If you fast forward to 2017, Hollnagel publishes a book called Safety-II in Practice, which is about the resilience potentials. We're talking monitoring, anticipating, responding, and learning. Hollnagel will then say that one of the things we need to do is anticipate, which is specifically about anticipating future operating scenarios and things that could go wrong. That, for me, is how anticipation in the context of Safety-II in practice is different from some of the risk assessment theory.

Drew: That is very fair. I had sort of forgotten about that anticipating thing. I think this is just illustrating that trying to put things neatly into Safety-I or Safety-II can be difficult. Hollnagel tries to say that everything that came before is Safety-I and therefore that Safety-I owns every problem in safety, which he does tend to do a bit. It's an unfair characterization, but it also makes it difficult to work out exactly what he means by Safety-I and Safety-II when he's putting all of these things into the same basket.

David: This is where I don’t know how much Erik was trying to refer to the practice of safety management inside organizations versus the theories of safety management that had come before. Some of those things are very different. We talked about that with Heinrich, and we talked about that with the Swiss cheese model: how industry applies some of those ideas is nowhere near what the authors intended, or even what the theories, the models, or the empirical findings actually suggest organizations should do with those ideas.

I got the impression that he was vaguely implying, that he was sort of characterizing, the practice he'd observed in organizations. Anytime we're measuring harm, trying to understand the causes, and then coming up with solutions, that practice is fairly consistent across industry, and that's where his idea of lumping everything that's happening into Safety-I comes from.

Drew: Let's take that and have a look at chapter four because chapter four is very much about trying to justify the shift to Safety-II by pointing out all of the problems with Safety-I. The chapter is called Myths of Safety-I. It's neatly laid out as four separate ideas. Hollnagel both presents the ideas and criticizes the ideas. That's why he's labeling each one as a myth because he doesn't agree with it.

The first one he calls the causality credo. It's got three steps. The first is that things that go right and things that go wrong have different causes. The second is that the causes of adverse outcomes, the things that go wrong, can be known and neutralized. The third is that since all of the bad outcomes have causes, and all of those causes can be known and neutralized, all accidents can be prevented.

He links that to the vision of zero accidents. Hollnagel suggests that most models of accidents feed into that way of looking at causes. He says that that applies both to sequential models which are just like domino lines of causes, this cause, that cause, that caused the accident, and he also applies this to any sort of epidemiological model which is the phrase he applies to where there are complex combinations of causes. He would put the Swiss cheese model and even Nancy Leveson’s STAMP as a type of epidemiological model.

David: I was surprised when rereading; I hadn't picked that up, I hadn’t recalled it from my first reading. Models like STAMP, CAST, and AcciMap, and all of these others: pretty much any accident model, Hollnagel suggests, is basically not a good way to try to model accidents or make the model work. FRAM doesn't get a mention here, but this might be a year before the FRAM book, and it's interesting that it's been absent from the book so far.

Drew: I don't know if you noticed, but Hollnagel actually said that there are three types of accident models altogether. Two of the types of accident model subscribe to the credo. The third type, he says, are systemic models of accidents. He doesn't give an example. My immediate thought was, what would a systemic model look like? That would look like STAMP, but then later on he specifically says that, no, STAMP is one of the things that he counts as an epidemiological model. He definitely thinks that there are types of accident models that don’t fit this, but he doesn't give any examples of them.

David: This accident model causality credo is important for the theory of Safety-II and the ideas of Safety-I and Safety-II, but we’re going to pause, Drew. We’re going to keep talking about more myths. I might let you continue to talk about some of these other myths.

Drew: Okay, so the second myth he talks about is Heinrich’s pyramid model of accidents. David, I don't know how much we need to go into it here. It’s this broad idea that for every 300 or so unsafe acts, you have some 30 incidents, then your three minor injuries, then your one fatality. Sometimes people read very specific ratios into Heinrich’s work. I've found this bit of the book quite confusing, because Heinrich doesn't actually believe in the causality credo. Now, you talked to Carsten Busch about the history of this; I’m interested in your thoughts on Carsten’s thoughts on this. My understanding is that Heinrich says that success and failure are just different outcomes from the same causes, with different probabilities. That's the whole point of Heinrich's ratios.

Heinrich’s saying that the same condition could sometimes lead to no problem at all, could sometimes lead to a minor injury, could sometimes lead to a major injury, could sometimes lead to a fatality. That's why we should focus on the unsafe conditions and the unsafe acts instead of the outcomes, because each one could lead to all of these different outcomes.

David: Yeah. In my recall of the conversation with Carsten, that's a fair representation. I don't think Heinrich made any claims that these ratios move in lockstep. It was an observation made through analysis of a set of data that might have been useful for how we thought about safety 90 years ago. I don’t think our listeners [...] with Heinrich, but it gets a mention in this book as one of the myths of Safety-I: that we have these ratios and that these ratios are predictive, and Hollnagel is saying that that's a myth. Given how industries are applying Heinrich’s work, that's probably a fair thing for Hollnagel to say.

Drew: It's a bit confusing, because Hollnagel is careful to say that Heinrich didn't say this; it's a misinterpretation of Heinrich to apply the ratios. Instead, he talks about why the misinterpretation of this 1920s idea needs to be debunked. As far as I can understand, in terms of actual argument we’re back where we started. He hasn’t actually proved anything by debunking a misinterpretation of an idea that he agrees with.

David: The third and the fourth myths are a little bit related, Drew, and I might just move through them. The third myth is this idea that 90% of accidents are caused by human error. The book says it could be 90% of accidents, or it could be 96% caused by behavior, but there are thousands upon thousands of citations and commentary about the extremely large proportion of accidents attributed to human error. I think we don’t need to talk too much about human error right now, but Erik is basically claiming that just because there was a person involved who is directly connected to the outcome, saying that 90% of accidents are caused by human error is probably not a good way for us to think about safety.

I think, Drew, before I get your comments, there's the fourth myth, this idea of hunting for root causes, and the message here is really just that it's always problematic to oversimplify the causes of accidents. Erik actually goes a little bit further in chapter 5, which we’ll talk about soon, to say that any pursuit of cause is following a Safety-I world view.

Drew: Overall, this chapter needs to be put into the context that Hollnagel considers everything that has gone before to be Safety-I. Even though he's given a definition of Safety-I, it's a definition that he thinks embraces all the safety practices current at the time he wrote. I think it's probably fair, from his point of view, to say that if all current safety practice comes from Safety-I thinking, then all of these problems that we see are the result of that thinking. I don't personally quite like that way of making an argument. I think it's a good way of offending people: it lumps together a whole heap of different practices that have different theoretical conceptualizations, ranging from people who are strong believers in Heinrich to people who've never heard of Heinrich.

He puts them all together as Safety-I. The problems themselves are real and fair to point out.

David: Yeah. I think, Drew, that's the comment I was going to make: the arguments could be better constructed in the book. But think about what Erik is trying to say, that up until now the way we've thought about safety is about preventing incidents, and the way to do that is to find problems in our business and fix them. All safety theories that came before Safety-II come with that world view; it doesn't matter how elaborated they are. It's all about this reactive stance, and all of these ways of thinking about safety management.

In Erik's mind, at that level of abstraction, he's lumped all these things together. Now, if we think about how industry was approaching the management of safety in the decade before this book was written, I think these four myths, these problems, are really useful for telling organizations that normal work and work involving accidents are not vastly different; that minor injuries and fatalities move up and down but don't move in step with each other; that human behavior is shaped by organizational factors; and that there's no silver bullet for solving safety incidents and problems.

If those are the high level messages from debunking these four myths, then I think that becomes very practically useful. The way the book bounces between theory and practice, generalization and oversimplification, makes it, like I said earlier, really nice and easy reading, but when we get underneath it, it creates some problematic arguments.

Drew: This might be familiar to some listeners: Nancy Leveson published a piece called Safety-III that was a step by step criticism, or attack, on Safety-I and Safety-II. I think one of the valid points she makes is that Hollnagel isn't making a strong distinction between quite different safety practices. You've got engineering practices that are very much focused on doing risk assessment in advance and designing systems so that they never end up in unsafe states, or are capable of gracefully dealing with difficult circumstances. Then you've got businesses whose frontline workers are strong believers in zero harm and in Heinrich, identifying and stamping out unsafe acts.

To treat those all together is obviously going to seem very unfair. It’s going to be unclear just how much the ideas apply to the different safety practices.

David: Drew, let's talk about chapter 5. That's where, at least in my experience, the easy reading stops a little bit, because we get into some fairly deep academic words, and I'm going to let you do the pronunciation of some of those words as we move through. What Hollnagel is really doing in chapter 5 is promising a formal and systematic analysis of Safety-I. He says he's going to deconstruct Safety-I. He talks about deconstruction and decomposition and what deconstruction means, and then says he's not actually going to deconstruct it, he's just going to break it down and look at some of the arguments.

Drew: I guess this is sort of a warning for readers of this chapter. This is where Hollnagel most starts to use words away from their accepted meaning. There's this style of writing where he says he's going to do deconstruction, explains what deconstruction is, gives a bit of a philosophical history of deconstruction, and then says, but actually, I'm not going to do deconstruction in that sense, I'm going to do something else. He does the same thing with some quite sophisticated words: phenomenology, he's not actually doing phenomenology. Etiology, he's not actually doing etiology. Ontology, he's not actually doing ontology.

For readers who are unfamiliar with those terms, it sounds very, very technical in chapter 5. For readers who are familiar with those terms, it sounds really, really confusing when Hollnagel makes these promises with the terms that he uses and then doesn't actually live up to those promises in what he delivers. It makes it hard to be fair to what he actually does say because he does make some very valid points here just under some very confusing language. By phenomenology, Hollnagel is really just talking about the way we measure safety. He doesn't say a lot more in this chapter than previously, just that we measure safety under Safety-I by measuring the things that go wrong and by then considering the absence of those things as safety.

By etiology, Hollnagel says Safety-I is intrinsically linked to linear accident models. I should point out that by this point, I'm fairly sure the word linear is one of the words that Hollnagel has redefined. He includes Accimap and STAMP as examples of linear models. Hollnagel is the only person, the only academic even, that I know of who would consider STAMP to be a linear model.

David: I think he's just referring to the fact that, in his view, any model that tries to define a reliable cause and effect is linear.

Drew: Yes, which is what had me confused, because STAMP is not actually a causal model. It's a model of feedback in a system, and to an engineer, by definition, a system with feedback is a nonlinear system. I think the way he's using that word would be very confusing to any reader with an engineering background. It becomes fairly clear going through this chapter that Hollnagel is actually making a very deep and important point. It really forces people to either agree with him or disagree with him, which I think is a very useful thing to do, rather than making claims that everyone can agree with.

Hollnagel gets to the point where he's saying that if you think the causes of accidents are knowable, then that's really Safety-I: if you think causes are knowable, you think you can go out and find those causes, and you think that by finding those causes you can prevent accidents, that is clearly Safety-I thinking about how the world works. Hollnagel claims that in the modern world (he doesn't say for all systems; he says there are still some systems that Safety-I works for), for most systems, it's so hard to find the cause-effect relationships that they might as well not exist.

It's futile to try to explain an accident by saying this is what caused it, in any sense. Whether you're someone creating a domino theory, this causes that, or someone as sophisticated as Leveson creating big models of control, feedback and constraint, he says any of those attempts are futile with modern complex systems. I love it when an author takes a strong position like that, one you have to agree or disagree with; you can't just nod along.

David: I think, Drew, it's also hard not to agree, because I think a lot about complexity science and the academic work that came before this book that talked about the complexity of our systems. If you define Safety-I as looking for problems, finding causes for those problems, and trying to fix them, and then you say that our systems aren't that simple, it creates an obvious need for a complementary approach, which is what Hollnagel calls Safety-II. I still find it a strong point, and even in 2014, it was very hard to argue that for all accidents we can identify the causes and fix them; otherwise we wouldn't be 60 episodes into a podcast.

Drew: David, this might be where I'm getting hung up on Hollnagel's language. He says clearly that he's talking about ontology. Ontology is about what is and isn't in the world, about what categories are valid. If he's making an ontological point, he's literally claiming that cause and effect don't exist. I absolutely cannot accept that. I find it not just hard to agree with but impossible to agree with, because if it was genuinely true at a fundamental level, then we couldn't influence the likelihood of accidents happening. The whole project of safety as a science and as a mission in the world would be futile.

I think there's an alternate way of reading it, which requires us to assume that Hollnagel is actually misusing his language, but is hopefully a more charitable reading. I think he's talking about epistemology, not ontology. He's not saying that cause and effect don't exist, just that for all practical purposes, it's futile to go out and try to hunt down and explain those causes. There are better things we could be doing; that hunt is going to lead us into error more often than it leads us into truth. With that reinterpretation of the language, that it's about what is knowable and achievable rather than what exists, I'm much more comfortable with it.

David: He does use some examples in the book which didn't sit that well with me when he was talking about modern complex technologies. He was talking about batteries and typewriters. I'm not an engineer, but I think they're bad examples, because they're quite closed technical systems; they're clearly complicated systems, not complex. When you take a typewriter, or even a modern laptop or a battery, we can deconstruct it, we can look at it, we can understand how it works and how it doesn't work. It may work or it may not work, but the components within it are not going to behave radically differently on different days. I'm not sure the examples being used here make the point.

Drew: I think there's a particular context here, in that the batteries Hollnagel is talking about are the Boeing 787 lithium battery fires. At the time Hollnagel wrote this, the causes of those fires were not fully understood. Boeing was taking a precautionary approach, trying to close off all possible causes of the fires rather than saying they definitely happened because of one particular cause. There were multiple fires after that. We now know what was wrong with the batteries, what was wrong with the manufacturing processes, and what was wrong with the inspection processes.

That very example is more about what is reasonably achievable than about what is fundamentally a cause or not a cause.

David: You do end up in a double loop here, Drew, when you talk about that example of battery failures, because if you look at something as complex as an airplane, and at the arguments so far in this book, Safety-II is about looking at successful work. Those batteries were manufactured, they passed a test, they got installed in the airplane, and that airplane flies well for 15,000 or 20,000 or 30,000 hours. It's really hard to know how Safety-II, some of the ideas...

I think this is why Erik only ever stated, though industry has misunderstood it at times, that Safety-II is complementary to Safety-I. I still don't live in a world where I think we should stop actually looking for problems.

Drew: Hold that thought, David, because I'm not certain that that is exactly the position made in this book. We've just had a pretty strong claim that Safety-I is literally impossible for a lot of modern systems. We haven't actually got to the point of saying that we need to replace it, but the idea that the two are complementary is an idea that evolved after this particular book, and I think it's inconsistent with some of the theory clearly espoused in the book.

We've got this fundamental point here. I should point out that I don't agree with Hollnagel's reasoning, but I absolutely agree that most of our investigations are not nearly as productive as we would like them to be. I would tend to ascribe that to social reasons rather than fundamental ontological limitations. I think it's a fair point that as technology becomes more and more complex, it becomes harder and harder to pin accidents down to specific causes. That becomes even more true when we move away from technology and start talking about organizational causes of accidents.

I'm going to go out on a limb and say I reckon that you and I did a much better job of laying out our phenomenology, etiology and ontology when we wrote our Safety Work Versus the Safety of Work paper. I think that's a clearer description of the underlying categories and reasoning that you can build Safety-II on top of than what Hollnagel has provided in this book. He's mainly built on top of a criticism of the way things are currently happening, and that sort of criticism of the old to explain the new, I think, is never as firm a foundation as clearly explaining what your underlying ideas and principles are and then building on top of them.

David: I think one caution here is that a book sets out to achieve something, sets out to argue something, and it's not subject to peer review. It's not even necessarily subject to a publishing house's approval; you can self-publish a book, and any of our listeners could write and self-publish anything they want. It's a good lesson that with anything that's not peer-reviewed, we should be taking the time we're taking over these three episodes to really break it apart and ask which parts we should take forward, and which parts are an argument that just serves the overall intention of the book.

Drew: What do we have here, by the time we've got to the end of chapter 5? I think we've got some really important stuff. Hollnagel has pointed out some serious problems with traditional conceptions of safety. At the very least, he’s shown that we often don't think clearly enough about what it is we mean by safety and how we’re using our understanding of safety to build our safety practices. It's worth being a bit explicit about what we're doing and why we're doing it. He's shown that the way we define and measure safety can be a real problem.

He's shown that taking an oversimplified view of accident causes can be a real problem. He hasn't really clearly explained his own position and alternative yet, except that he has clearly rejected the idea that accident causes are knowable, particularly for modern systems. He clearly wants to reject the idea that managing safety by trying to identify and control the causes of accidents is a good idea. He clearly wants us to put that to one side and find a new way to manage safety that doesn't rely on this excessive focus on simple causes of accidents to prevent those causes from occurring.

David: Drew, do you want to kick off a couple of practical takeaways now for chapters three, four and five?

Drew: Just reading through the vibe of chapters three, four and five, Hollnagel goes over a lot of ideas that might be familiar to some safety scientists and safety practitioners, but everyone's going to encounter these ideas for the first time somewhere. I don't want to say that if you haven't encountered these ideas before, you're ignorant, or that Hollnagel is wrong for repeating and explaining them. I think you get a lot of similar messages in Dekker's work. Dekker actually repeats a lot of these ideas each time he writes, giving different explanations of quite similar ideas.

Yes, certainly, this is one set of those explanations of the underlying fundamental ideas about how to think clearly about the way safety works.

David: Some of these things that Erik talks about, this idea of causality, this idea of human factors, this idea of the relationships between different types of incidents and accidents versus successful work, these are the day-to-day thoughts and conversations of safety practitioners today and points of research for safety scientists. I don't think it's fair to look at something written in 2014 and say this is how it all works, because there's a lot of work that has come since in all of these areas. Like you said, Drew, it's good; it's an easy read through some of these ideas.

Drew: The second thing I thought was an interesting takeaway is that Hollnagel makes a really nice argument that Heinrich didn't believe in Heinrich, at least not the way most people interpret Heinrich, which I think is a sophisticated understanding. It's worth reading for anyone who believes in things like the pyramid or the idea that 90% of accidents are human error. I also think it's a nice little reminder to people who like to just casually say, that's Heinrich, as a label we can apply to all dumb ideas in safety.

David: Having a very good professional and personal relationship with Erik, I think he would say that with something like Safety-I and Safety-II, he really wanted to talk about a new way. To draw the distinction for the new way, sometimes you have to characterize the thing you're trying to distinguish against, even though somebody else may see safety differently and talk about it in different terms. I think Erik would say now that lumping a lot of things together as Safety-I is probably not fair, probably not accurate, probably confusing, and probably not the right thing to do.

When you come up with ideas like safety work versus the safety of work, or Safety-I and Safety-II, you never expect them to become part of common language. Sometimes, down the track, you start cringing. I think James Reason felt the need to write numerous publications trying to clarify his position on Swiss cheese. We need to look underneath these arguments to make sure we find out what's actually being said, but also not get too hung up on these really broad, sweeping applications of language.

Drew: On the one side, you could think of it as poetic justice that Hollnagel treated all of this stuff lumped together as Safety-I, and now Hollnagel's ideas of Safety-II get lumped together with lots of other ideas that really are not the same thing. On the other hand, we could just say, let's all agree not to do that. Let's try to explain our own ideas clearly rather than always in contrast to unfair depictions of other people's ideas. I'm hopeful that in some of the chapters coming up, Hollnagel presents his own positive ideas a bit more clearly.

David: That's a bit of a glimpse into next week, part 3, covering the last chapters of Safety-I and Safety-II in our 3-part series. For those who are following along on LinkedIn, and maybe even reading the chapters along with us, please continue to drop your ideas into LinkedIn. Let's see what interpretations other people make, what their conclusions are, and what else they think we could address. As ideas fall out of these episodes, I think there will be specific ideas and issues that we can add to the queue for future episodes.

Drew: That's it for this week. Join us on LinkedIn. Send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com.