Today, we conclude our three-part series on Safety I and Safety II.
We dive into the final chapters of the book, analyze Hollnagel's intent, and offer our commentary on his ideas. Though our ideas don't necessarily jibe with all of Hollnagel's, we appreciated our time dissecting this seminal book.
Tune in to hear our thoughts on the final four chapters. Make sure to let us know if you also read the book and your thoughts on the content.
We hope you enjoyed our little end-of-year deep-dive. Have a happy and healthy New Year!
“So you think of Safety I just as it protects against lots of specific things, but it doesn’t protect against generic things that we haven’t specifically protected against.”
“The fact is...we can make some fairly reliable and valid conclusions about what happened leading up to something going wrong.”
“I think all theorists we should take seriously and not literally.”
Safety I and Safety II: The Past and Future of Safety Management
David: You're listening to the Safety of Work podcast, Episode 59. Today, we're continuing to ask the question: what's the full story of Safety-I and Safety-II? This is part three. Let's get started.
Everybody, my name's David Provan and I'm here with Drew Rae and we're from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work podcast. This week, we're continuing our discussion from the last two episodes about the book, Safety-I and Safety-II by Erik Hollnagel. If you haven't listened to episodes 57 and 58, we suggest you go back and listen to them first and we'll wait.
Drew, let's dive into Chapter Six and onwards through to the end of the book. Where do you want to start?
Drew: Let's start with a quick recap from the past couple of episodes. The book is called Safety-I and Safety-II and the first half of the book is essentially criticizing Safety-I. We've talked a bit about how it criticizes the way we measure safety and points out some of the challenges with safety measurement when we're measuring the number of adverse events and the number of adverse events is decreasing.
It talks about some of the management problems we run into when we try to constrain human variability, assuming that accidents happen because of unsafe behaviors. A continuing theme throughout those chapters has been we've been waiting to hear what Hollnagel has to say about the alternative. Hopefully the next few chapters we get a glimpse as to what he envisages instead.
First, though, we have Chapter Six, which is called The Need to Change. This chapter is kind of interesting in that if you doubted before that Hollnagel was talking about safety engineering and thought he was just talking about operational work, this chapter is very much an attack on current safety engineering practices.
He points to a number of things that have changed, that have made safety engineering increasingly hard. He talks about the rate at which new things are being invented. He talks about Moore's Law and the way computer power is constantly increasing. He talks about a cycle where we're creating systems that are so complex that they need automation to control them, which then starts to make things harder for humans and we need more automation.
He talks about a problem that other people have called this, the system of systems problem. Hollnagel doesn't call it that, but the fact that every system doesn't exist in isolation. The environment of that system is made up of other systems that have been designed at different times and by different people. We can't really think of the safety of a system just in isolation at the time we design it.
David, I think the overall claim here is that systems are no longer tractable, that they're just too complex for us to do analysis in order to work out what safe and unsafe means.
David: I think this discussion about complexity is not really new or different. It's a real-world observation, and we can read a lot about complexity science in other books without going to a Safety-I and Safety-II book to understand it. But I think the argument here is that our existing safety methods of imposing constraints are no longer effective.
I think that's what Erik's trying to say here: because the world is getting so complex, just doing these point-in-time interventions when we see something in our business that we don't like is never going to be enough to solve the safety challenge in our organizations. To be increasingly effective in these complex systems, complex business models, complex organizations, and complex technologies, we need to have a different view.
We're still kind of continuing through this chapter, a case for the need for change. He does say at the end of this chapter that the assumptions that underpin Safety-I are no longer valid in this complex world.
We can't decompose work and look at the individual steps in a task. We can't make things go right just by stopping them from going wrong, and so on. Drew, I'd like to think Erik's saying that Safety-I is merely insufficient, but he does seem to make some stronger claims in this chapter that Safety-I might be almost entirely obsolete.
Drew: He does actually directly state Safety-I is obsolete so I don't think there's anything hidden there. Personally, I get very uncomfortable with this appeal to complexity as an argument to justify anything. The first thing that makes me suspicious is that everyone proposing a new approach does it regardless of what the new approach is.
They say the old techniques used to work, but the world is becoming more complex so we need something else. Of course, something else is always what they're offering. I think it's a self-defeating argument because if it's genuinely true that the world is so complex that we can't understand it, then firstly, any solution would face that problem. Any solution would have the problem that the world is too complex to understand. As we're going to see, Hollnagel's whole proposal is that we should spend more time trying to understand what's going on, which wouldn't work if things were really so complex.
The second one is that if it were true, it wouldn't apply just to safety. It would be true of anything. The fact is we're designing modern systems. We've got 787s and they fly. The internet works. Claiming that safety is this special thing that can't cope with complexity, ignores the fact that that's something that humans are actually very good at doing.
Our modern organizations are complex and they are coping with that complexity. You can always talk about deficiencies, but you could also talk about bad things that were happening in organizations at the turn of the previous century. Bad things happen, but claiming that it's because of complexity, or that somehow we've crossed this magic threshold, I think is a very poor argument.
I don't want to sort of spoil too much but Hollnagel is just about to back down from this claim that Safety-I is obsolete. I don't want to spend too much time criticizing the claim when he himself is not going to continue making it.
David: Yeah, I agree with what you're saying there about complexity being used as an excuse. We don't live in a perfect world, and I think that applies when looking through the lens of all of these theories in safety, and of other theories and ideas more broadly. If you can find an individual situation, or a set of situations, where something doesn't quite stand up or isn't quite effective, then sometimes that can be used as an excuse to just throw it all away.
I think Hollnagel has done that with some of the Safety-I ideas, at least in this chapter (you're right, not for the whole book), in saying it doesn't work in this complex situation or that one, so we shouldn't think like that at all.
I think what we know now is, yes, the world is complex and yes, there's not really a silver bullet or a perfect solution to something. But if we can solve some things with one idea and solve some other things with another idea, then we just need to have enough ideas to be able to solve all the problems.
Drew: There's a much more traditional argument for resilience engineering and Hollnagel doesn't make it in this book but I think I'm just going to make it now because I think it provides a much better version than what Hollnagel is offering here. That is, Safety-I relies on us being able to anticipate how things are going to go wrong and that's always going to be limited. There are always going to be gaps in our understanding. There are always just going to be imperfections in our methods for anticipating what might go wrong. There are always going to be failures in foresight and that doesn't make Safety-I obsolete.
It just means that we shouldn't trust that it's going to work 100% effectively. It would be really nice if we had something that complemented it, something not reliant on that perfect decomposability, that perfect understanding of how the future accident was going to happen.
David: The role of resilience is to have that adaptive capacity to manage unforeseen situations and unknown unknowns, to fail safely and gracefully extend. Think back to Episode 24 with David on graceful extensibility, about being able to extend your capabilities to respond to those changing situations.
Drew: Yeah. You think of Safety-I just as it protects against lots of specific things, but it doesn't protect against generic things that we haven't specifically protected against.
David: Drew, we're now going to go into Chapter Seven. The first six chapters have been about creating the need, or the case, for a new idea.
Chapter Seven is where the new idea gets laid out in which Erik sort of promises to do the flip side of Chapter Five. In Chapter Five, he decomposed Safety-I to explain all the problems and now in Chapter Seven, he's deciding to clearly build up and describe what Safety-II is. Drew, how does he do this construction of Safety-II?
Drew: I think this will be clearer if we leave aside Hollnagel's language about ontology, etiology, and phenomenology and just focus on the central ideas. The central idea he offers is that work performance is always variable.
He says the variability is not just inevitable, it's necessary. We need work to vary. We need humans to adjust, to adapt, to keep things safe. He also says that's never going to be perfect. It's always possible to say, here is the perfect way we should have adapted in this circumstance, and people will always fall somewhat short of that.
Any time you look in hindsight, there's always going to be this gap between how we would have liked things to happen and how they actually happen but that's constant. You take a snapshot of work at any time and you'll see work that is variable and work that is falling a little bit short. It's fairly meaningless to say at this particular time things were functioning incorrectly because that's just the normal state of things.
David: Drew, I agree with this. I think work is always very dynamic. For people who know some of the writing that we've done, Drew, we've referenced some earlier ideas from [...], who said back in the 60s that work is so dynamic and so unpredictable that you need to have some basis for how people will perform their role, but you also need to encourage initiative, spontaneity, and adaptation to all of the situations people may face that you can't predict or prescribe.
At some point, though, I don't quite know how to interpret what Erik's saying here, and this has probably been one of my big challenges with Safety-II. At a theoretical level, we say that things going well is not just the opposite of things going wrong. Erik's saying that we need things to go right, but we don't really define what right is. He's also saying that we can't characterize things as functioning correctly or incorrectly.
But at some point in time, practically, organizations actually need to form conclusions. We need to decide if ways of working are okay or not okay, safe or not safe, good or bad, okay to continue or not okay to continue. Of course, this should be a sense-making and consensus-building process. But at some point, the outcome in an organization needs to be an agreement on either continuing or discontinuing a work practice, which does involve some kind of categorization or judgment about whether something is good or bad.
I'm not sure, Drew, where you land on the different labels that get used. Should we or shouldn't we categorize things one way or another?
Drew: This is where I think his writing style is, again, doing a disservice to the fundamental ideas. Hollnagel says quite literally that it's not possible or meaningful to characterize components as functioning correctly or incorrectly. He says that flat out, but he does not mean it. He gives lots of examples later on where he clearly does think it's possible to characterize. If you look at his own definitions, he thinks that what unsafe means is that there is an unacceptable rate, or risk, or frequency of things going badly.
He clearly thinks it's possible to be in a bad state, which means he thinks that a bad state by definition exists somewhere. He uses this sort of really strong language but what he actually just means is he thinks don't focus too much on the classification. That's not what we should be spending time and attention on and I think you can agree with that without agreeing with his literal statements sometimes.
David: No, I absolutely agree with that. I think the time is spent on understanding the work and what it might mean as opposed to necessarily labeling things. That's a helpful interpretation for me.
Drew: The next bit I'm going to have trouble with, David, and I'm going to ask for your help, because Hollnagel uses a lot of these engineering metaphors. In this next bit of the chapter, he starts treating them very literally, talking as if he's actually using engineering terms in a precise way.
To me, it just sounds like gobbledygook; he's talking pseudo-scientific nonsense if you try to interpret what he's saying literally. He's talking about non-linear systems and resonant frequencies as if they're engineering terms, when previously they sort of worked as a metaphor, and here they absolutely don't.
David: Yeah, if I recall, the part of the book you're talking about is trying to make the point that the direct causes of accidents in complex systems are unknowable: you have an accident, and to think that you can reconstruct exactly what happened is not possible in a complex system. I'm not an engineer, so I'm not sure I can help you with any of the engineering terms, but I think that's not always the case. The fact is, we can make some fairly reliable and valid conclusions about what happened leading up to something going wrong.
Drew: Yes. He makes this claim that the outcomes of accidents are visible and leave a trace on the world, but the causes are transient and non-direct and the outcome is just emergent from the causes and that's nonsense when you talk about a technical system.
We don't need to have physical traces of the Mars Polar Lander to reconstruct exactly what happened. It was in the planetary void. We have none of the evidence in front of us and we know precisely what went wrong.
Our ability to diagnose technical causes seems almost independent of the complexity of the system. This only makes sense if you're talking about things like organizational causes, and the problem is not complexity, the problem is social construction. Hollnagel is really wary about using postmodern terms like socially constructed causes but I'm pretty sure that's what he sort of means, whereas he's very liberal with using these engineering terms that really don't make sense.
There's a really valid point that the cause of accidents when it comes to the organization or when it comes to human behavior aren't found in the physical remains of the system. That's a very valid point. Should we move on from that to the good stuff?
David: Yeah, keep going, Drew. We're deep inside Chapter Seven now and we're starting to build up what Safety-II is.
Drew: Hollnagel does actually give us a definition. It's a terrible definition, so we're going to move on fairly quickly from it. He says the actual definition of Safety-II is a condition where as much as possible goes right. Safety-I is ensuring that as little as possible goes wrong; Safety-II is ensuring that as much as possible goes right. Taken at literal face value, that's the same damn definition. It's rhetorically neat, but it says nothing.
To explain further, if you want to know what he really means, you have to look earlier in the book, where he was just criticizing Safety-I, so he still wasn't giving us a definition. I think in order to get something out of this, we have to read with a lot of interpretation rather than taking the direct words.
I'm giving the gist of the chapter rather than anything I can directly quote. This is not about definitions of safety; Safety-II defines safety in exactly the same way that Safety-I does. It's not about ontology, it's not about etiology, it's not about phenomenology, it's not about epistemology. It's simply about your approach to trying to achieve safety. Whereas Safety-I is about trying to prevent failure, focusing on identifying what failure looks like and then stopping it from happening, Safety-II is about looking at what creates success and trying to promote that.
In practice, the big difference is that promoting success often means permitting and supporting lots of variability, whereas preventing failure often requires inhibiting variability, constraining the system to prevent failure.
David: I think what you just described there, really succinctly, plainly, and well, is the core idea, rather than looking for the exact definition or whatever it is. It's a step back from that, to think about the worldview. This is where I think Safety-II can be very, very distant from Safety-I practices in our organizations, where a lot of the time a safety professional, or an organization worried about safety, won't spend any time looking at normal day-to-day work, or at anything where there's no problem being raised, reported, or identified.
I think what Erik's encouraging us to do, in the way that you just described quite simply, and really throughout this entire book, is to spend most of our time looking at all those things which are just non-events when it comes to safety.
Drew: David, I tried to throw in an example here. Even in your notes on my notes, it's clear that there's no clear distinct world of Safety-I and Safety-II that's more about focus. The example I gave was about safety systems on a car.
Lots of things that make a car safe aren't about preventing accidents; they're about helping you drive well. We give you good controls, a responsive engine, good visibility. All of those things help you be a good driver, so they're supporting the normal work of driving.
Then we add safety systems to a car which are much more about either preventing an accident or mitigating its effects. A lane departure warning tells you when you're drifting out of a lane; a collision alarm, or the new systems that apply the brakes when you're about to have a collision, are about seeing an accident coming and stopping it from happening. And then your seatbelts, airbags, and crumple zones are all about protecting you in the event of an accident.
David: Drew, when I read your example there, I just thought, oh, Drew's talking about the left-hand side and right-hand side of the bow tie, depending on the top event, because you've got these detection and mitigation controls, and you've also got these preventative controls on the left-hand side.
Then I thought, no, that's not accurate in the sense of a top event, so I had myself in this little thinking loop. I went back to all those things you mentioned, good visibility, easy-to-reach controls, a responsive engine. They're also about trying to minimize threats and stop things from going wrong, things caused by a lack of visibility or an inappropriate operator response. I ended up in this kind of infinite thought loop, saying, well, everything's in some way preventative; even if you're trying to make things go well, you're at the same time trying to make them not go wrong.
Drew: I think that is actually the point here: we're using the same definition. Safety-II, just like Safety-I, is about preventing accidents. It's purely about what we focus on to try to prevent the accident. Do we focus on what the accident is and how to stop ourselves reaching it? Or do we focus on what successful normal work is and on making that go well? Ultimately the point is, don't get in an accident. It's just a question of which you focus on: the normal work of driving, or preventing the accident.
David: I like where we've just ended up, and I don't want to oversimplify some of these ideas, because of the way you described the worldview before. It is a different worldview when we're coming at understanding the way work happens and how to make it successful more of the time, as opposed to trying to map out as many ways as we think it can go wrong and trying to stop those things. They are different ways, different views. As Erik has said to me before, we're still looking at the same work.
We're still looking at the same organization; we're just looking at what we're trying to achieve with a different set of glasses. We are still looking at the same thing, just in a different way. What is the setting for the rest of the chapter, and where does the book go from there?
Drew: The rest of the chapter is more about what you practically do in order to do Safety-II. Hollnagel gives three sorts of things that we do. One of them is recognizing performance variability, the second is monitoring performance variability, and the third is controlling performance variability. But it's clear that controlling doesn't just mean judging variability as acceptable or unacceptable; it means more trying to steer the performance variability in positive directions.
The key underlying message is to focus on understanding how work happens successfully most of the time, instead of thinking about how it might go wrong and preventing that. Both have the aim that doing either one should increase the number of times things go well and decrease the number of times things go badly.
David: I think we've got a definition now of Safety-II. As we said, it's the same definition: we want work to go well and therefore to not result in an accident. Erik's created a different way of thinking to enable us to get at that outcome in a different way, which has now clarified a bit of Safety-II. He asks two questions at the end of this chapter that I found really useful if we want to start working with these ideas at a practical level.
The two questions he asks are simply: do we know, in our organisation, how or why things go right? And how can we see what goes right? If it's not a reported incident, how can we see it? He's actually asking two pretty important questions. What processes do you have in your organisation to know what happens when nothing happens, that dynamic non-event of safety, and how or why things go right?
When you can see these things going well, how can you understand how they're going well, if you like? I think they're important questions, and they're good practitioner-focused questions to ask: how much of my safety management effort do I currently spend on those two questions?
Drew: Let's move on to Chapter Eight, which is where we find out what Hollnagel is actually proposing based on the theory he has presented. I don't know about you, David; while I prefer the second half of the book to the first half, it really annoys me how much of a journey we've had to take to get here. Because after everything he's done to criticise Safety-I, he starts Chapter Eight by saying straight out that Safety-I and Safety-II are complementary. Safety-I's not obsolete. We should be doing them both.
David: He does go further to say, please don't stop doing all those things you're doing, like trying to understand how things go wrong and doing your risk management and a lot of these practices, but complement that with this new view. It was one of those books where, on the way through, you had to step back and, like you said, paraphrase a few times and think about what he's trying to say, what the message underneath the words is. If you read it literally, at a kind of literal face value, what's on the page, at times it does confusingly contradict itself.
Drew: David, are you saying that we should take Hollnagel seriously, but not literally?
David: I think all theorists we should take seriously and not literally without knowing how many people in the world are just offended with that statement.
Drew: Okay, fair enough. What we have here is a position which is much more conciliatory, leaving much more room for Safety-I. Keep doing what you're doing, and that includes investigating accidents; maybe put a different emphasis on some of those activities to allow for the more complex world that Hollnagel says we're in; and add in a few new Safety-II practices.
Interestingly, given how critical he's been of the past, he asks us to keep in mind that front-line work is already a practical mix of Safety-I and Safety-II. In fact, Hollnagel gives an example of this at the end of the previous chapter, where he says that the job of safety managers should be supporting workers to detect and move away from impending accidents.
In other words, he's saying that we can use Safety-II to support workers to do Safety-I, this understanding that the two always go hand in hand. We're sometimes focusing on avoiding bad things and sometimes on achieving good things. It becomes clear that Hollnagel's main complaint is really just that safety management in particular was overly focused on Safety-I at the time he wrote the book.
He's saying that the reason he's criticising Safety-I isn't that it's bad. It's that we're so much in a Safety-I mindset that we're having these pathological disadvantages, and that once we move to a bit more of a hybrid position, doing some Safety-I and some Safety-II, we get rid of the disadvantages of Safety-I and we embrace the advantages of Safety-II without ever giving up the Safety-I activities.
David: I think that's a reasonable statement to make, and a very balanced view. I do sometimes talk about the pendulum: if you're at either extreme, you're probably not maximizing the utility of what you're doing. I think that's what he's trying to do, having the pendulum sit more toward the middle for a lot of organizations, or even closer to the Safety-II end.
Drew, we're halfway through Chapter Eight. We've had this big breakdown of Safety-I and we've built up a definition of Safety-II, and he starts talking about methods and techniques in relation to Safety-II. I was speaking to Erik earlier this year, and he's somewhat surprised now, in 2020, about how few techniques we've got to do some of these Safety-II ideas.
I think we saw in 2013, 2014, around the time just after this book, that frame emerge. We had [...] from Erik, we had [...]. We had these things come out, but then in the last five years we've had almost no new techniques and methods take shape, as far as I'm aware, other than the popular learning teams, which we spoke about in an earlier episode on fads and fashions in safety.
Drew: I think there are a couple of things going on there. The first one is that Safety-II as a theory doesn't actually offer all that many suggestions. Looking ahead to next week, we're going to talk about a paper that you wrote, David, where I think you actually lay out a broader scope for Safety-II methods. Hollnagel really only has two things to offer in this book.
The first one is looking for what goes right and by that, he is not talking about sort of studying extreme success. That's a common misunderstanding here, because remember, going right is what happens most of the time. Hollnagel says we should put lots of effort into studying normal and everyday work. He repeats that a few times in a few different ways. He touches on a few examples like appreciative inquiry, cooperative inquiry. He doesn't use the word learning teams, but he clearly hints towards learning team style approaches.
The second category that he talks about is maintaining good working conditions. He doesn't say exactly what the role of safety people is in that. But he does talk about the way that just bad working conditions compromise all of those things that Safety-II says are important. He talks particularly about things like the amount of discretionary time we have at work and how if you're in a job that's taken up with email and paperwork demands, that's going to take away your capacity to vary, adjust, and improve your work. He's clearly saying safety people should have some role in that but he hasn't spelled out how safety people should get involved in working conditions, just that ideally they should.
David: One of the pieces in this chapter that I really liked is the key assumptions. He says at one point that Safety-I assumes things go right because people work as imagined, if you like; things go right because people follow the procedures. Safety-II assumes that things go right not because people follow the procedures, but because people always make what they think are sensible adjustments to current and future situations.
This is the thing that, I suppose even before I knew there was such a thing as Safety-II, really early in my career 20 years ago, just didn't make sense to me: the idea that an experienced worker was going to pull out a procedure to read how to do their job, or that they were going to be able to do something in exactly the same way every single day.
I think that's a really important assumption. Before we do practical takeaways for our listeners: what is it that your organisation believes about how work goes well? That's a good question you can ask. If you want to do some ethnographic interviewing, just ask people that question. Why does work go well? Or, what does a safety manager believe creates work going well? You'll start to learn what your organization thinks and believes.
Drew: Let's move on to the final chapter, Chapter Nine. There's not a lot new in this chapter, but I'm glad it's there because the chapter provides one of the best litmus tests as to whether anyone who's complaining about Safety-II has actually read the book or not. The one interesting thing here is that Hollnagel specifically predicts that other people are going to try to create Safety-III.
He says they'll do this in two ways. One is by merging Safety-I and Safety-II, and the other is by moving beyond the ideas of Safety-I and Safety-II. He specifically says it doesn't make sense to call either of these Safety-III. If your big idea is combining Safety-I and Safety-II, well, tough, because that's what Hollnagel is already proposing in his book. He basically calls out in advance anyone who says, hey, we need to combine Safety-I and Safety-II. That's Hollnagel's whole proposal, according to Chapter Nine of the book: he wants to combine and merge Safety-I and Safety-II.
David: Drew, we know that there's been a paper published this year by Nancy Leveson titled Safety-III, which we haven't spoken about on the podcast. Clearly there's no love lost between Erik and Nancy over these issues.
Drew: Yeah. If you read Safety-III by Nancy Leveson as her attack on Hollnagel, then Chapter Nine is Hollnagel preemptively attacking Leveson for her attack on him. His point is, as he says, sure, people are going to come along and want to replace my whole theory, but in that case, don't call it Safety-III, because I've just divided the world into Safety-I and Safety-II. If you call it Safety-III, you're agreeing with how I divided the world, and I didn't leave any space for a Safety-III.
Like it or not, calling something Safety-III means agreeing with Hollnagel's framing of Safety-I and Safety-II, and then disagreeing when he says you can't have a Safety-III. It's a very inconsistent position to take.
David: Drew, do you want to talk about some practical takeaways, and then we'll answer the question from the last three episodes?
Drew: This is a tricky one to do takeaways because the whole point of us doing this is we don't like dumbing down a book into simple points, so we really do hope that if you're listening to this, that you do at least skim through the book yourself, preferably do give it a read. I think it's important for modern safety practitioners to have read Safety-I and Safety-II.
Things to take away. The first one is that most of the criticisms of Safety-II are misunderstandings. Almost everything I've read that someone has said to criticize Safety-II, Hollnagel has already said somewhere in the book. However, I think it's very forgivable for people who have read the book to misunderstand it, because Hollnagel himself says a lot of things literally that he later retracts. What I don't think is forgivable is misunderstanding Safety-II when you haven't at least read the book and given it a chance.
The second thing is, for all his complaining about Safety-I and all his talk about inconsistent definitions of ontology and etiology, the point of the book is Safety-I and Safety-II, not Safety-I versus Safety-II. By the end of the book it's very clear, and I think everything Hollnagel has said since then doubles down on this: Hollnagel says Safety-I and Safety-II are consistent and complementary. Everyone should be doing both of them.
David: Just to reinforce that point: try to understand what might go wrong, and take actions in your business to prevent it. When something does go wrong, try to understand it as broadly as you possibly can. But at the same time as you're doing all of that, which you'd call Safety-I, make sure you're looking at where things haven't gone wrong, understanding why they're going well and not going wrong, and trying to maintain or enhance those conditions in your organization to create more of that, because if things are going well, they can't be going wrong at the same time.
It really is a both-and conversation, not an either-or conversation. Some of the popular interpretations we see sometimes, like if you're doing Safety-II then work-as-done is always right, or if you're doing Safety-II then you shouldn't write procedures for your business, none of those things are written in this book.
Drew: Yep, exactly. The one area where I think there is an incompatibility between Safety-I and Safety-II, and this is much more about emphasis than about activities, is that they take a different worldview of variability. Safety-I looks very suspiciously at variability. It says that variability is something to risk assess, something to worry about, because if things are variable, they're out of control and might lead to an unsafe state. Safety-II encourages us to be much more supportive and tolerant of variability, and not to automatically think that variable means less safe.
I guess that extends to some of our existing safety practices, like risk assessment and accident investigation. That tolerance of variability means we shouldn't just assume that because work is variable, because it's different from the procedures, or because there's a gap between how we imagine work and how it's done, that we should apply labels like human error and incorrect.
A shift to embracing both Safety-I and Safety-II is going to mean a move away from some of the thought processes that were driven by just Safety-I.
David: Drew, any final takeaways?
Drew: The final one is that Safety-II does say, here are new practices we should be doing as well. I think lots of people, even people who are critical of Safety-II, have got on board with the fact that there are practices involving understanding current work and worker consultation. Maybe people have always been doing them, maybe we always should have been doing them, but Safety-II encourages us to think of understanding everyday work better as a core part of the safety professional's role. David, I'm going to fling this next one to you.
We've had one question for three weeks: what's the full story behind Safety-I and Safety-II? What's the answer?
David: Drew, I've got Table 8.1 on page 147 of the book we've been reviewing, Safety-I and Safety-II by Erik Hollnagel, in front of me. I'll take a minute with it, because it explains the full story. The definition of safety: in Safety-I, that as few things as possible go wrong; in Safety-II, that as many things as possible go right.
The safety management principle: in Safety-I it's reactive, responding when something happens or is categorized as an unacceptable risk. In Safety-II it's proactive, continually trying to anticipate developments and events based on an understanding of normal work.
The explanation of accidents: in Safety-I, accidents are caused by failures and malfunctions, and the purpose of investigation is to find those causes. In Safety-II, work basically happens in the same way regardless of the outcome, and the purpose of an investigation is to understand how things usually go right, as a basis for explaining how they sometimes go wrong.
The attitude toward people: in Safety-I, people are predominantly seen as a liability, a hazard, an element of the system that needs to be controlled. In Safety-II, because performance variability in work is normal, humans are seen as a resource necessary for system flexibility and resilience.
Finally, the role of performance variability: in Safety-I, if work varies, it's harmful and should be prevented as far as possible. In Safety-II, performance variability is inevitable and also useful, which means it needs to be monitored and managed. Drew, that's a very long answer to the question, but we have spent three weeks reviewing this book, and we'd like to know from our listeners whether they tuned out for three weeks, or whether they found it interesting and would like us to do more of it.
Drew: Yeah, I'd certainly be interested both in our listeners' opinions of Safety-I and Safety-II and in this format, where we've spent a few weeks looking at a book, and whether they'd like us to do it again. We probably won't do it in the next couple of episodes, but we'd certainly be very happy to pick up another book and give it a similar treatment if that's what people would like.
David: Drew, you did give a little teaser there for next week. Next week is episode 60, and our listeners will know that every 10 episodes we talk about our own research. So we'll talk about a paper that we wrote, the final paper of my PhD research, titled Safety-II professionals: How resilience engineering can transform safety practice. That will hopefully take some of the ideas from the last three weeks and talk about the role of safety professionals and practical applications within organizations. If you liked this train of thought, it will continue and deepen a little next week.
Drew: That's it for this week. Send any comments, questions, or ideas for future episodes to email@example.com.