The Safety of Work

Ep. 57 What is the full story behind Safety-I and Safety-II? (Part 1)

Episode Summary

Today, we begin a three-parter on the full story behind the book Safety-I and Safety-II: The Past and Future of Safety Management.

Episode Notes

For this episode, we are breaking away from the standard formula for this show. We thought it best to split this topic into three episodes, as we don’t want to oversimplify our breakdown of this seminal, two-hundred-page book.

We encourage all of our listeners to follow along and read the book with us. Join us as we dig into this influential book by Erik Hollnagel.

 


Quotes:

“Most theories are built as critiques of other theories. So, any new theory implicitly, and usually, explicitly criticizes a lot of existing stuff. And it’s important to separate those two things out.”

“He says that success and failure are not opposites.”

“It means that every single data point, then, has a lot of uncertainty attached to it, because they’re such isolated examples, such extraordinary events…”

 

Resources:

Safety-I and Safety-II: The Past and Future of Safety Management

Feedback@safetyofwork.com

Episode Transcription

David: You’re listening to the Safety of Work podcast episode 57. Today we’re asking the question, What is the full story about Safety-I and Safety-II? Part 1. Let’s get started.

Hey, everybody. My name is David Provan and I’m here with Drew Rae. We’re from the Safety Science Innovation Lab at Griffith University. In each episode, we normally ask an important question in relation to the safety of work or the work of safety, and then we examine the evidence surrounding it. But today, and over the next two weeks, we’re going to break the normal formula of the show, and we’re going to take a deep dive into the theory of Safety-I and Safety-II.

There are a lot of public debates that tend to oversimplify a book that’s almost 200 pages long and we don’t want to do that. We’re talking about Erik Hollnagel’s book with the same title—Safety-I and Safety-II: The Past and Future of Safety Management. I think it was published in about 2014. Drew, it was your idea to do this three-part series where we deconstruct the whole book, the arguments, and the science underneath it. Why did you want to take that approach to the next three episodes?

Drew: David, I think you and I have both gotten into a lot of arguments at various times, with people online and in other places, about what’s often grouped together as the new view of safety. One of the most frustrating things about those arguments is when you have the sense that people are arguing against things when they’ve only ever read a potted summary of them. Given that we try to take fairly deep dives into things on this podcast anyway, I thought it would be worth having a read-through of the book from start to finish.

I’ve certainly read the book before. Have you read the whole book, David?

David: Yeah, I’d read the book, but I hadn’t read it for maybe five years. 

Drew: Yeah. I was the same. I did a book review of it when it first came out, so I had to carefully read the whole thing. But this is the first time I’ve come back to the original book since the first time that I read it. We’d appreciate it if you, as listeners, would care to follow along and read the book yourself. We certainly don’t want you to just rely on our representation of it.

Hopefully, the podcast might form a bit of a guide to the best bits of the book to focus on if you don’t feel like reading it cover to cover: where the gems are, and where the things are that we think it’s quite reasonable to be skeptical about.

David: Drew, Safety-II is one of a collection of new view theories, often talked about alongside HRO theory, resilience engineering, Safety Differently, and HOP. What’s some background context for thinking about something that’s positioned as a new theory of something? In this case, a new theory of safety management.

Drew: I guess the first thing is there’s no theory that is brand new. It’s important to put any new theory into its context. I think the biggest problem is that humans are pattern-matching animals. When we get a new idea, we want to compare it to things that we already know. We see an idea and we say, that’s just talking about this. We focus on the parts of the new thing that are familiar to us. Pretty much we fail to spot the new thing because we’re focusing so much on what’s familiar, or we’re twisting things to match our understanding. That’s bad if we then accuse the author of not saying anything new, when it’s us who’s just reinterpreted everything that they’ve said.

The second thing that I think is important is that most theories are built as critiques of other theories. Any new theory implicitly, and usually explicitly, criticizes a lot of existing stuff. It’s important to separate those two things out. Something could make really fair criticisms of existing stuff and not have anything that was particularly useful or accurate to say itself. Something could be very unfair in the way it criticizes existing stuff but still say something very positive itself. We need to separate out the different parts of the argument.

The third one, and this is sadly prevalent in safety although I’m sure it does happen in some other fields, is that most safety theories build both the criticisms and the positive claims on a very narrow set of examples. They play this rhetorical game where they never actually claim that their ideas are universal, but that’s what you’re meant to take away. You’re meant to think that the idea applies in lots and lots of situations. This is sometimes called a motte-and-bailey argument. It’s something that people use when they’re arguing disingenuously.

The idea is that there’s a modest and easily defended position, which is the castle or the motte. And then there’s a much more extreme position, which is the outer wall or the bailey. When people attack the outer wall, the author retreats to the castle. When the attack goes away, the author reoccupies the outer wall and starts presenting the extreme position again.

A classic example of this in safety, and I don’t want to do it for Safety-II at the moment, is the debate between high reliability organizations and normal accidents. In Normal Accidents, Charles Perrow seems to be saying that lots of accidents are inevitable. That’s the whole idea of a normal accident. But anytime Perrow gets challenged, he says, I’m not talking about all accidents, I’m not even talking about lots of accidents. I’m just talking about one type of accident called normal accidents. If it doesn’t fit the pattern, well, it’s not a normal accident, so it doesn’t disprove my theory.

The HRO theorists seem to be presenting a recipe that any organization can follow. It seems to be that anyone can be an HRO: just do these things. But anytime they get challenged, they’ll say, we’re not trying to provide a recipe, we’re just trying to describe what a small number of organizations are doing. That’s the classic motte-and-bailey: a narrow, defensible claim from which everyone is supposed to take away universal messages. We need to be careful of this when we read about safety theories.

David: I think, Drew, you’re exactly right. For something as complex as safety, maybe there’s no unifying theory that is applicable in all circumstances, in all organizations, for all types of accidents or the avoidance of all types of accidents, and no theory is ever going to be able to explain all possible situations. But I don’t think that’s a reason not to look at the theories in the context of what they’re saying and what they’re not saying. That’s what we’re going to try and do today.

But I think as we’ll also find out today and over the next couple of episodes, some of the representations of Safety-I being so broad and so general aren’t necessarily that useful, and the same goes for Safety-II. We need to look at the specific context like you said at the start, and what they’re saying about specific situations within organizations. 

Drew, we want to go through this book Safety-I and Safety-II chapter by chapter, and we’ll look at what the book says within the context of similar literature. We’ll particularly consider what criticisms the book makes and to what extent these are fair; these are predominantly about current and traditional safety practices that are labeled broadly as Safety-I. We’ll look at what claims the book makes in relation to those Safety-I practices and the proposed Safety-II practices, and how far these claims may be true or empirically supported. And then we’ll look at what suggestions the book makes and how generalizable those suggestions are for organizations, back to what I was just saying before.

Drew, I mentioned the title of the book, Safety-I and Safety-II: The Past and Future of Safety Management. Like we said, published in 2014. Not really that old. We’ve been talking about Safety-II for a while, but this book is not that old. Drew, do you want to talk a little bit about the author, Erik?

Drew: Sure. I’m not going to say too much about Hollnagel himself, because I think when you’re reading a book-length thing, the author doesn’t matter so much as the ideas that they’re presenting. The work stands or falls by itself. But the main thing you need to know for context is that Hollnagel is one of the direct intellectual descendants of a professor called Jens Rasmussen. Rasmussen’s big contribution to safety was to shift the focus away from accidents as a direct result of a series of causes and towards the idea that accidents emerge as outcomes from highly dynamic socio-technical systems.

When we say the socio-technical system, we just mean that all work involves technology, people, and organizational forces. You can’t separate them out and study them in isolation without missing out on important stuff. Rasmussen had a huge impact on a whole generation of safety theorists. It’s not just Hollnagel. You can see his influence in Woods, Leveson, Hopkins, Amalberti, and Dekker. They all draw on Rasmussen to a greater or lesser extent. Each with their own slightly different interpretation. 

The funny thing is if you oversimplify any of those authors, you end up back with Rasmussen’s ideas at the core. That’s why ideas like HRO, Safety-II, Safety differently all sometimes seem to be saying basically the same thing. If you oversimplify them a lot, you just end up back with the thing that they’re drawing from rather than the new thing that they contribute. 

If you’re interested in knowing more about the background, David, you and I both like to say go back to the original sources. But if you’re looking for a spread of the history, then there are a couple of papers by Jean-Christophe Le Coze that look specifically at Rasmussen and Rasmussen’s influence on other authors, which are a good starting point if you’re trying to connect how different safety work fits together.

David: Yeah, great. I think Le Coze has done some great papers summarizing some of the history of safety science, some of the problems and challenges with safety science, and future directions. Rasmussen has actually left quite a strong legacy. He only passed away, I think it might have been last year or thereabouts, maybe the year before. There was a lot of writing that came out about his legacy in safety science, and he’s still talked about quite extensively in a lot of safety science university courses as well.

Drew, you wrote here on the notes that we started, because we’re going to go through the first three chapters today, then four and five next week, and then the balance of the book the week after, just to break it up and allow people to read along and think about what the different parts of the book are saying. But you wrote in the notes, no preface, excellent. You don’t like prefaces in books?

Drew: David, I get incredibly frustrated when you don’t know whether or not you have to read the preface to get into the ideas. Sometimes the preface is actually the start of the explanation of what’s happening, and sometimes it’s meaningless drivel about what the person was thinking when they started writing the book. I love an author who gets right into it. This book definitely gets right into it. Chapter one is The Issues. It begins with how do you define safety, which I think is a great question to start any book on safety theory.

David: Yeah. I think Hollnagel makes the point that we talk a lot about safety and we assume that we know what it means. Very few papers, PhD theses, or articles take the time or bother to define what they mean when they talk about safety. Hollnagel talks about different types of definitions for safety, that there are lots of definitions, and that’s absolutely the case, and that there are sometimes subtle but important differences between these definitions. Sometimes, when we’re talking about safety, we’re talking about outcomes or unwanted events. If there are no unwanted events, that’s what safety is.

Sometimes we’re talking not about no unwanted events, but about the risk of unwanted events being at an acceptable level, and that is safety as well. These ideas of risk-free and acceptable risk are different, and importantly, they are distinctions that we should understand when we’re talking about safety.

Drew: This is also the first time in the book that Hollnagel introduces the idea that an absence of accidents is just one way of looking at safety. He says that success and failure are not opposites. Failure is really clear: a failure is where you have an accident or where you have an unwanted event. Success is something else. I think one of the reasons why we’re going to keep coming back to this is that it’s a confusing idea, because the examples that the book gives at this point are examples where success and failure are exact opposites. Either you succeed or you fail.

He talks about a robbery, where either the plan worked or it didn’t work. He talks about what you mean when you wish your friend a safe flight. You’re saying that you hope they’re going to have the flight without any sort of interruptions or unwanted events happening. He makes this really provocative statement that absence of failure is not the same as success, and that absence of success is not the same as failure. But the book hasn’t given us any clear explanation of what the alternative is at this point.

The main emphasis is on focusing on failures and saying that currently, we do tend to largely focus on failures. Which is a problem particularly if we only have a small number of failures. 

David: Yeah. I think what I took from his definition when he was explaining this was this idea of experiencing harm versus being exposed to the potential for harm. You can have something that doesn’t actually result in an incident, but that doesn’t necessarily mean that the person was free from exposure to the risk of an incident.

He talks about, like you said, the example of when you might say to someone to drive safely. What you’re actually hoping for them is that they arrive at their destination without experiencing an incident or an event. But you’re not necessarily telling them that you expect they’re going to drive to their destination without being exposed to any sort of potential for harm.

Underneath this argument, Erik is just saying that if there’s no accident as an outcome, we shouldn’t draw the simplified conclusion that it was a safe activity. That’s what I take out of his definition here, but it’s not overly clear.

Drew: Yes. I agree that that’s a fair interpretation. I think I’m going to be saying this quite a few times during the book: I want the alternative spelled out a bit more clearly. It’s very clear what he’s criticizing, which is that oversimplified view.

David: Yeah. Exactly. But I think it’s just that when we talk about being safe, it’s that the outcome will be as expected. In other words, and he uses this a lot, that things go right, things go well, or things are successful. A claim that is made later is that if things are going right, going well, and being successful, then they can’t be going wrong at the same time. Erik’s fundamental idea is that being safe means that things go right.

Drew: The second half of chapter one is about measurement problems in safety. There are a number of clearly explained and clearly understood problems that arise when you try to measure safety as a set of unwanted events. The book doesn’t give it this name, but I personally tend to think of it as the denominator problem: if you’re talking about acceptable safety, then you can’t just say how many bad things have happened. You also have to know how many opportunities there were for a bad thing to happen.

If I cut my hand three times, but my job is as a professional vegetable chopper and that’s over a lifetime of cutting things, then that’s pretty safe. If I cut my hand three times and those are the only three times I’ve ever held a knife, that’s pretty unsafe. To know whether something is safe or not, you don’t just need the number of unwanted events. You need to know the number of times things went right as well.
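A minimal sketch of that arithmetic in Python (the knife-use counts here are made up for illustration; they are not from the book):

    # The denominator problem: the same count of unwanted events looks very
    # different depending on how many opportunities there were to go wrong.
    def incident_rate(unwanted_events: int, opportunities: int) -> float:
        """Unwanted events per opportunity: the ratio we usually lack."""
        return unwanted_events / opportunities

    # A professional vegetable chopper: 3 cuts over a lifetime of knife work.
    professional = incident_rate(unwanted_events=3, opportunities=2_000_000)

    # A novice who cut themselves the only 3 times they ever held a knife.
    novice = incident_rate(unwanted_events=3, opportunities=3)

    print(f"Professional: {professional:.7f} cuts per knife use")  # 0.0000015
    print(f"Novice:       {novice:.1f} cuts per knife use")        # 1.0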

Hollnagel gives a couple of good examples where we do know the denominator. He talks, for example, about counting signals passed at danger. These are trains going past railway signals set at danger. Because we electronically record all of these, we know exactly how many times trains do the correct thing at signals. We know exactly how many times they do unwanted things at signals. We can talk about the ratio of the two.

He also talks about the day Sweden switched from left-hand side driving to right-hand side driving. Again, a very specific event where you can count the number of accidents on that day and compare it to the number of accidents you would normally have in that situation.

David: Yeah. Drew, the problem that I think Erik is getting at here is that even when we think we’ve got a denominator, like the number of hours that people work, it isn’t a useful or comparable denominator for the number of incidents that occur. Hours aren’t actually a measure of things that go well, as opposed to incidents, which are a measure of things that haven’t gone well.

Drew: Yes. The way people try to justify it is to talk about an hour in which things went badly versus an hour in which things went well, but that’s not what happens. The two hours are not directly comparable.

David: Drew, this is where, and I don’t know whether now is the right place to say it, but I’ve struggled with this myself: this idea that Safety-II, or some of the new view of safety, says that not having accidents is not what we’re going for, that we’re not looking at the absence of negative events. But I think underlying all theories of safety, we’re actually trying to make things safer, which I still think, by definition, must mean not exposing people to harm or not harming people.

Sometimes in this book, like with the argument that we’re making now about denominators, we circle back around to an argument that we’ve said we shouldn’t be making. The book says that safety is not actually the opposite of having an accident, yet it’s not consistent with that argument throughout.

Drew: Yes. I don’t want to prejudice the conclusion that we come to. Full disclosure to our listeners: we’re recording a few episodes at once and we’ve only got about halfway through the book, so neither of us has finished our most recent read-through. But I’m certainly coming to the opinion that the statement that there is a problem with the way we conceive Safety-I is very, very clear. The idea that the way to fix it is to divide things into a Safety-I view of the world versus a Safety-II view of the world, though, I don’t think actually succeeds in providing a clear explanation of an alternative.

David: Beyond the first measurement problem you mentioned there, Drew, the denominator problem, the second measurement problem that Erik talks about is the regulator paradox. This is a challenge for regulators and for organizations, although when it was initially proposed in the late ‘70s it was about regulators. It’s this idea that the safer something gets, the less data we have about accidents, and when we have very little data, it’s dangerous to read too much into it. Amalberti has probably written the most about this idea of ultra-safe systems and the paradox of totally safe systems.

Drew, is it that we just don’t have enough data now that incident rates get so low?

Drew: David, my immediate instinct there was to nitpick about where the name regulator paradox comes from. I had always assumed it came from the idea of a circuit regulator, where you’re providing feedback to a system. Once a system gets very finely controlled, you’re limited in how much further you can control it because you’ve got no feedback.

David: No, that might be my lack of understanding. That’s what I thought it was about when I had a quick look, but it could well be about engineered systems, Drew. Not being an engineer, I make no claims to know about how regulators work in engineered systems. 

Drew: The fundamental underlying point is very clear: the safer a system gets, the less useful accident data becomes, just because there’s not very much of it. And it’s not that we can claim we’ve got very little data but what we’ve got is good data. It means that every single data point has a lot of uncertainty attached to it, because they are such isolated examples, such extraordinary events compared to the everyday.
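As a rough illustration of that uncertainty (our own numbers, not the book’s): if accident counts behave approximately like a Poisson process, the relative uncertainty on a count of n scales like 1/sqrt(n), so each data point in a very safe system is individually very noisy.

    import math

    # If annual accident counts are roughly Poisson, the standard deviation
    # of a count n is about sqrt(n), so the relative uncertainty is about
    # 1/sqrt(n). Safer systems mean smaller counts, and each count then
    # tells us much less about the underlying risk.
    for accidents_per_year in (100, 10, 1):
        relative_uncertainty = 1 / math.sqrt(accidents_per_year)
        print(f"{accidents_per_year:3d} accidents/year -> "
              f"~{relative_uncertainty:.0%} relative uncertainty")

    # 100 accidents/year -> ~10% relative uncertainty
    #  10 accidents/year -> ~32% relative uncertainty
    #   1 accidents/year -> ~100% relative uncertainty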

The third measurement problem Hollnagel describes is a type of efficiency-thoroughness trade-off. He says that we can actually try to solve the regulator paradox by increasing the number of negative events we count, by expanding the definition. We don’t have that many fatalities anymore, but we can count minor injuries. We don’t have actual accidents, but we can count near misses and high potentials.

But anytime we do that, we’re making our definitions fuzzier. Defining a minor injury is much harder than defining a fatality, and defining a near miss is much harder than defining an accident. Subtle differences in those definitions make it hard to compare two situations, because a difference could just be a difference in the definition, or in how we’re applying the definition, instead of any real difference in the underlying risk.

David: Drew, at the end of chapter one, Erik’s basically talked about some of the problems with defining safety, how we think about defining safety, and then how we measure safety. What are your thoughts at the end of chapter one?

Drew: I think Hollnagel is pointing out some well-understood problems. Remember I said at the start that saying nothing new isn’t a criticism if the stuff you’re giving is true stuff that people should know. If you’ve never encountered these ideas, these measurement problems, it’s important that you understand them in order to understand some of the more sophisticated criticisms the book makes later on about Safety-I.

These are obviously true problems with measuring safety. The characterization that we do tend to measure safety as a negative event is mostly a fair characterization. We do have lots of other measures of safety other than injuries. But definitely, we do recognize accidents as having this real significance beyond other types of data in safety. 

David: Drew, the problems are clear. But like you said, in some of the writing through there, I must admit I still struggle to see consistency: the absence of accidents is not necessarily the presence of safety, and it’s okay to read into systems that have a lot of accidents, but then if I have fewer accidents, you shouldn’t read too much into those. There’s a bit of an inconsistency in the narrative throughout the chapter. But one of the other things he talks about in there that we haven’t quite mentioned is that a safety management system requires feedback. If you don’t have those adverse outcomes, then you need that feedback from somewhere else.

Long before this book, we were talking about leading indicators, trying to search for those other forms of measurement and other forms of feedback about how work goes when no incidents occur. I do like the conclusion there at the end of chapter one where Erik says, look, humans have this fundamental need to be and feel safe, and it’s important that we’re thorough in trying to meet that need. I think that’s a good lesson in not oversimplifying something that’s as important as safety.

Drew: Chapter two is a very common kind of chapter in lots of safety books. It’s called Pedigree. It’s a kind of potted history of safety thought. I personally don’t like these sorts of chapters because they try too hard to apply a narrative story that makes sense of what is a very, very fragmented history.

Safety developed in lots of different ways, in different industries and different countries. In academia, it split into lots of sometimes warring and sometimes totally isolated camps that didn’t even know that each other existed. We’ve got different ideas getting invented multiple times in different places, in different ways. If you try to lay a nice smooth line over all of that, then it’s always going to be a real misrepresentation of a lot of the work.

The particular way of doing safety history that Hollnagel draws on comes from Hale and Hovden. They’ve got a very famous paper on the three ages of safety. Other people have riffed on it by talking about the fourth age of safety or the fifth age of safety. It’s this idea that there are stages: first safety was about fixing technical problems, then it was about fixing human error problems, and then it concentrated on organizational accidents.

David: Drew, when you look at some of these narratives, I’ve tried to do it as well. I’ve tried to simplify (if you like) some of the history of safety. David Borys co-wrote a paper in 2009, like you said, called The fifth age of safety: the adaptive age. Sometimes we talk about going from rules, to culture, to resilience.

We try to understand how safety ideas build on each other. But when I had a look, there were ideas in the ‘20s that match some of the things we’re doing now in the 2000s, and there were things in the ‘80s that were just a restatement of things we’d done in the early 20th century. There’s no linear progression in safety theory and safety thinking, so you can’t make big generalizations.

Drew: But buried within the history, there are a couple of points that the book is going to come back to. The most important one of these is the idea that when we talk about the causes of accidents, technical physical causes are often a lot more clear-cut than human causes, which are often a lot more clear-cut again than organizational causes.

Hollnagel talks about that by saying with technology, we’ve got a much better understanding of the mechanisms that link causes to effects, and they’re much more deterministic. When he says deterministic he just means they’re knowable. We can investigate, we can find the broken component. I don't know to what extent, David, you think that that is actually a valid distinction.

David: Between technical and organizational, or technical and socio-technical?

Drew: That technical accidents can be investigated and have nice, neat causes pinned down, whereas human error or organizational accidents can’t be pinned down to the same nice, clear point.

David: Any way you draw a distinction, you’re going to have gray areas because it’s not black and white. Maybe this is for next week’s podcast, but there’s this idea that comes a little bit later in the book that in our modern complex systems, accidents don’t have causes. I don’t subscribe to that. There are some technical incidents, but there are also some incidents that involve people which I would consider to be quite linear and simple.

We’ve had this debate, and this is probably a long debate for another podcast, that there’s no such thing as a simple accident. Sometimes, we can tie ourselves up in knots by trying to look too deeply at something that might not require deep thought.

Drew: Yes. That’s a fair point. I also think that Hollnagel is looking at some of the technical stuff as an outsider. I’ve seen some of the engineers make the exact same argument in reverse. He says that techniques like risk assessment and fault tree analysis work really well for technology and only start to break down when things get more complex, or when you start to need to consider the human and organizational aspects of a system.

That argument ignores the fact that even when we are dealing with technical causes, a lot of safety is a very socially determined activity. It’s not a purely technical activity. Even when we say an accident happened due to a physical cause, focusing on the physical cause rather than on the process that introduced it, the person who didn’t spot it, or the design process that allowed it to happen is still a very socially determined choice. Which I guess almost supports Hollnagel’s broader point.

David: Yeah, Drew, because safety is such a broad interdisciplinary science: you’ve got engineering, psychology, sociology, management science, and all of these other disciplines. We’ve recently seen a publication from Nancy Leveson, titled Safety-III, which is heavily critical of the Safety-II ideas. It’s complicated. I’m not an engineer, and I find it very challenging to think about how my ideas might apply to technical disciplines because, in an academic sense, you don’t want to be one of those people who overgeneralize their ideas.

Drew: My personal opinion is that if you’re reading the book and you’re looking for which bits you can skim and which bits to read closely, I’d feel very comfortable recommending skipping chapter two. There’s a real risk that whenever you try to paint this clear picture, you’re just going to annoy other people, particularly if they come from one of the other subfields of safety science. And Hollnagel’s view of history is designed to lead you to his ideas being the next progression. If you don’t already agree with that, then you’re going to find the history a bit annoying.

David: I agree. There are other places to go to find better summaries of the history of safety science. Last year, Drew, you contributed as well to Sidney Dekker’s book called Foundations of Safety Science. I think it’s a broader, fairer representation, but it’s also still written very much through Sidney’s worldview lens. 

If listeners remember episode 17, I interviewed Carsten Busch about his master’s thesis in an episode titled What did Heinrich really say? I’d suggest getting as close to the source material as you can, or at least going to a peer-reviewed literature review or something in an academic journal that’s done a more forensic historical account of what the literature said.

Drew: Always, when you want to understand what a particular person said, go and read their words for yourself. Don’t rely on someone else’s history. If you’re trying to understand how ideas fit together, I think Sidney does a fairly good job of not creating this inevitable march towards progress that some people do in their representations of safety. [...] work is good too. He doesn’t do a big history, but he does a good job of connecting various ideas together and showing how they emerge from each other or are different from each other.

Let’s wrap up where we’re at at the end of chapter two. If you’re not already familiar with the ideas, then I think Hollnagel points out some important problems that people should be aware of in safety. The first one is that when we talk about safety, we should be very clear about what definition we’re using. If nothing else, Hollnagel reignited a really important debate about how exactly we define safety. People were getting too comfortable in their definitions even though they didn't agree with each other. Hollnagel points out that if you disagree without realizing that you disagree, you’re just talking at cross purposes when you try to explain what safety is or how we achieve it. 

David: Drew, I think this denominator problem is real and it’s important. Hollnagel doesn’t say it explicitly; he doesn’t talk about TRIFR and lost time injury frequency rates, which do have a denominator, like I mentioned earlier, whether that makes them better or worse. But like we said, hours worked are not the same thing as the number of opportunities for an accident.

I just did a little bit of a thought experiment, Drew. If you’ve got a recordable injury rate of 10, many of our listeners would say that’s really unsafe. That’s 10 per million hours, or say 2 per 200,000 hours, depending on which part of the world you’re in. But say a person performs 10 tasks every hour. Then you’ve actually got a functional reliability of something like 99.9999%, because you’re having 1 incident for every million tasks.
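Here is that thought experiment as a quick sketch (the 10-tasks-per-hour figure is an assumption of the thought experiment, not a claim from the book):

    # A recordable injury rate of 10 per million hours worked, with an
    # assumed workload of 10 tasks performed per hour.
    recordable_injuries = 10
    hours_worked = 1_000_000
    tasks_per_hour = 10  # assumption for illustration

    tasks_performed = hours_worked * tasks_per_hour  # 10,000,000 tasks
    injuries_per_task = recordable_injuries / tasks_performed
    reliability = 1 - injuries_per_task

    print(f"Injuries per task: {injuries_per_task:.0e}")  # 1e-06: 1 in a million
    print(f"Task reliability:  {reliability:.4%}")        # 99.9999%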

We probably don’t need to say too much more about statistics after our episode two weeks ago, Drew, but this idea that hours are a useful denominator for the number of accidents is just another problem with how we currently talk about statistics and lag indicators.

Drew: Likewise, the regulator paradox is real and important. It’s definitely true that the safer we get, the fewer opportunities we have to improve safety further by studying accidents. We’ve got to start studying near misses and investigating those with the intensity of accidents, or we’ve got to find some other way to work out what our safety activities should be aimed at.

We’re two chapters in and we’re still waiting for a clear picture of what Hollnagel is suggesting. He’s hinting at this brighter alternative where we measure positives rather than negatives. But he hasn’t explained whether the positives are just the opposite of negatives or whether they’re something entirely different, and what that something entirely different would be. We’re still looking to see what Safety-II is.

David: If you get the opportunity, listeners, and you have a copy of the book, and I’m sure many of you will or can get your hands on one, then we’d encourage you over the next three weeks to read along with us.

We’ve read the first two chapters for this episode. Post your own opinions on the chapters; we’ll try and get some discussion going on LinkedIn. Let us know if you think our summaries are fair. Let us know if you’re reading something into the writing that’s different from us, or if you have different suggestions for how we might interpret it, or anything you think Hollnagel is saying in this book that’s either good or bad or that we might have missed.

Drew: That’s it, at least for this week. We’ll be continuing our discussion of Safety-I and Safety-II next episode. In the meantime, we hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Join us on LinkedIn or send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com.