The Safety of Work

Ep.66 What is the full story of just culture (part 3)?

Episode Summary

Welcome back! Today, we finish up our three-part series on the book Just Culture. We cover the final three chapters and finally figure out what the full story is with the concept of just culture.

Episode Notes

The final chapters cover such issues as creating functional reporting systems and the pitfalls in creating such systems.

 


Quotes:

“I think this is the struggle with those sorts of systems, is that if they are used frequently, then it becomes a very normal thing...but that means that people are using that channel instead of using line management as their channel…”

“I think unless we work for a regulator, we need to remind ourselves that it’s not actually our job, either, to run the prosecution or even to help the prosecution.”

“If you think your system is fair, then you should be proud of explaining to people exactly how it works.”

 

Resources:

Just Culture

Feedback@safetyofwork.com

Episode Transcription

David: You’re listening to the Safety of Work podcast episode 66. Today, we’re asking the question: What is the full story about Just Culture—Part III. Let’s get started.

Hi, everybody. My name is David Provan. I’m here with Drew Rae, and we’re from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work podcast. In each episode, we normally ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it.

If you haven’t listened to episodes 64 or 65, which are parts I and II of this series, you might want to check them out first. Drew, we covered the book Just Culture, 3rd edition, by Sidney Dekker, which was published last year. In the last two episodes, we’ve been through the origins of Just Culture in the Safety Science literature, and last week we spent a bit of time talking about the difference between a Retributive Just Culture and a Restorative Just Culture.

This week, we finish off the remaining chapters in the book. Drew, do you want to get started with the first chapter we’re talking about today?

Drew: Sure. Chapters III, IV, and V are all quite different from each other and from the book so far. In the earlier chapters of Just Culture, we talked about different types of Just Culture, and in Chapter III we take a little bit of a left turn and start talking about safety reporting.

This is actually interesting because often, when we talk about Just Culture we’re very much focusing on accident investigation. But remember, the whole purpose of having a Just Culture is to give people the confidence to report and talk openly about problems. It makes sense that if we wanted Just Culture, we shouldn’t just focus on the justice side of it. We should focus on having good reporting systems and think about how reporting works because that can very directly influence the type of culture that we’re after.

If we look at what’s in the chapter, it starts off by just reminding us that the basis of having a Just Culture is that it helps people report safety issues without fear of personal consequence and without anxiety. Why do we want that? We want that because that sort of reporting is how we learn and improve. If we don’t find out about stuff, we’re never going to fix it. If we don’t find out what’s working well and not working well, we’re never going to be able to do something about it.

It’s fairly obvious that if people are afraid of what’s going to happen if they do report, then they’re not going to report. In his usual storytelling style, Sid goes through a number of examples where it later comes out that safety incidents have been silenced because people were afraid of what was going to happen, and that led on to worse consequences.

David: I like how in this chapter Sidney describes that, behind a Just Culture, getting people to report is about two things: maximizing the accessibility and ease of the reporting process, and minimizing their anxiety about using that reporting process, so that it is a positive and rewarding experience. I thought that’s a really good way for listeners to think about it. How accessible is your system, and how anxious do you think your people feel about using it?

Drew: David, you’ve had experience in your own organizations with reporting systems. Which of those two factors do you think is the big one? Is it the structures that matter, or is it people’s feelings about the reporting system?

David: I’d be interested in listeners’ feedback on this because this whole book makes the assumption that it’s the anxiety that’s the problem. That the retributive processes, the fear, the punishment, and the criminalization are the problem. But in my experience, at least recently in the last five years (let’s say), it’s probably more about the system itself.

What we’ll talk about a little bit later is whether people actually know what the company wants them to put into that system. On the spot, I would say you could probably do a lot just by maximizing accessibility and understanding of the reporting systems within your existing culture.

Drew: I don’t know whether it’s a third factor or whether it’s part of the accessibility of the system, but it’s not just about minimizing anxiety. It’s about minimizing confusion: what is supposed to be reported, how is it supposed to be reported, how are you supposed to put together reports, is it physically easy to access the system, do you think it’s worth your while to do it.

David: I think that’s a good thread, and we’re going to pick it up in a minute, that thread about minimizing the confusion, because I like that. Sid goes on and talks about making sure that you’ve got answers for people in your organization to the following questions: What will happen to the report? Who will see it? Who else will see it? Am I going to jeopardize myself, my career, or my colleagues with the report? In some industries and in some roles, does my reporting make legal action against me possible, or easier, or even likely? I think all of that presumes the thread that you mentioned there, which is that people actually know what the things are that they need to report in the first place.

Drew: Dekker talks about the fact that very often, systems have quite ambiguous language about what is supposed to be reported. The example he gives is wording like “you have to report safety occurrences.” How do you know what a safety occurrence is?

We’ve got a paper in the publication process at the moment about hazard reporting, and just the confusion at the front line about what counts as a hazard for the purposes of reporting. You just tell people these safety words: hazard, occurrence, incident, near miss. Often, that pre-classification decides whether something fits what is supposed to be reported or not. Some people don’t know what fits the categories. They don’t know where they’re meant to put it into the system.

David: Absolutely, and I think we—our listeners who are safety professionals—think about these words all the time and probably feel like we understand them well from our careers. Hazard? Incident? Near miss? Typically, your reporting system will go something like this: report all incidents. You look at a definition of an incident. An incident is any unplanned, unintended activity, consequence, or outcome associated with work. Well, hang on. Weird and surprising stuff happens every single day in front line work. Exactly what is an incident? What information should go in?

An interesting exercise people could do is go and ask five or six people on the front line of your organization: What information do you think needs to be reported in our incident reporting system? What types of information? See what the answers are, and if they’re not all the same, then maybe you’ve got some clarification to do.

Drew: I was part of an interesting discussion once about at what point something becomes an incident. Imagine you see a box up on a shelf where it’s not meant to be. Is that reportable? What if the box looks unstable, is that reportable? What if the box falls off but doesn’t hit anyone, is that reportable? What if the box actually hits someone but they’re not hurt, is that reportable? You can try to draw the line here. At what point does it become exceptional enough that it is or isn’t an incident?

Really, none of it in that case is particularly useful unless there’s something that you want management to do about the box. If the only action is just to tidy up the box, then why do you want to report it in the first place, except to risk getting yourself or someone else into trouble?

That’s a point that Sid makes. Ultimately, regardless of what starts to go into the system, people will keep reporting the things that you respond positively to. If they know that when they put a type of thing into the system they get some result that they’re happy with, that’s going to encourage them to keep reporting. If they get no feedback or actively negative feedback for what they put into the system, that’s the stuff that they won’t report. If it goes into a vacuum or it causes some undesired consequence, they’ll stop reporting.

Then he lays out a couple of things that we can do to help those two things. One of them is formal. On the formal side, having some clear written policy about what’s supposed to happen, who gets to see it, what the confidentiality rules are, and what the limits are on what can be kept confidential can help to set expectations: this is what’s supposed to happen.

Alongside that is having positive precedent. The stories that get told about reporting (again) have a big impact. I think we’ve talked about this before on a couple of papers that specifically studied reporting. Just one or two stories about things getting reported and bad consequences can really spread and hang around for a long time. It takes very consistent positive handling of reports to build up people’s trust in the system.

David: I think the other thing that is buried in the text of the chapter is, where possible, involving the person in the change and improvement process. The real reward of the reporting process for them is tangible improvements to their work and involvement in making those improvements. We all know that just having a report in the system, even if management reviews it and does something with it, isn’t going to be as good as direct feedback to the person who made the report or a direct invitation for them to help participate in solving the problem.

If we clear up what to report (and that’s not a small thing to do; thinking about your box story, I know a couple of colleagues and I could probably have a three-hour conversation about where the hazard, risk, and incident transition points are), the question is how to report it once we’re clear on what to report. Do you want to run through the different reporting processes and mechanisms?

Drew: I think what Sid’s doing here is basically going through and answering all of the questions people have or struggle with when they’re trying to put improved reporting systems in place. These are all important choices with no easy answer. If you’re trying to put in place a reporting system, this is a good chapter to read in detail just for the pros and cons, because every choice that you make has cons as well as pros.

One of them is about whether you structure your reporting primarily around line reporting or around reporting to the safety department. The advantage of reporting up the line is that you can get much more immediate and direct action on anything that you report. You tell something to your supervisor, they probably have the power to directly take action to fix it. If they don’t, they can pass it up another step to someone who has the power to fix it. Whereas, once it’s gone into the safety department, there’s going to be some extended process before that feeds back into front line improvements.

On the other hand, sometimes the whole reason people are worried is because of how people in the line will think about what they’re saying. They don’t want to report things that seem like complaining, or that will get them seen as troublemakers, or that they think will dob in themselves or people around them.

I think Sid’s solution is a little bit naive. He’s basically just arguing for parallel systems. I don’t know about your experience. I think in theory it makes sense to have reporting primarily by the line and then have a parallel system, but the trouble is that if people are really good at reporting by the line, that parallel system very quickly becomes starved and hardly gets used.

David: You’re right. I think most organizations have this reporting to the line, so hazard and incident reports go up the line, which follows organizations’ push for management ownership and accountability around safety. That’s okay, I suppose, unless a person needs another way of raising issues, like you say, in a parallel system.

Even if it’s a starved reporting process, I think that there needs to be a safety valve for people. I don’t want to use the term whistleblower and those types of things, but there needs to be a way for a person to bring something to the attention of the organization if they’re not satisfied with the response they get bringing it to the attention of their manager, or if they don’t feel like bringing it to the attention of their manager. That process needs careful thought because those situations rarely end very well for the individual people who report.

Drew: I think this is the struggle with those sorts of systems. If they’re used frequently, then it becomes a very normal thing; it’s not a big deal. But that means that people are using that channel instead of using line management as their channel. It can become just a venue for complaining about things rather than actively working to get things done. If they’re used very rarely, then it suddenly becomes a big deal when someone uses that system. And sometimes, more of a big deal than the person wanted it to be. They needed a safety valve. They didn’t want three men in black suits turning up next to their supervisor and citing a confidential complaint about them.

David: Sidney then talks about the different features of how to report, such as whether you make reporting in your organization voluntary versus mandatory. I don’t quite know what mandatory reporting looks like, but you can have voluntary or mandatory reporting, have a process that is non-punitive, and one where individuals are protected. Whether it’s voluntary and confidential, mandatory and confidential, voluntary and anonymous, or mandatory and anonymous, there are different choices (as you said) that you make with your system, and people need to understand the pros and cons, like you mentioned.

Drew: I think one that’s worth spending just a little more time on is this idea of confidential instead of anonymous. This is one where the book clearly comes down in favor of one of the choices. I think Sid doesn’t like anonymous systems. They can become venues for causing trouble if you’re not willing to put your name to a report. There’s no way to seek further detail or even to provide feedback to the person.

If we’re talking about the purpose of a system being to build up trust and to allow free communication, then if people need to use the system anonymously, that shows that it’s not working for its intended purpose. But if it’s not anonymous, then we obviously need to build in clear ideas about who does and doesn’t get the information, and who does and doesn’t find out who provided the report, so that people do feel that they are safe when using the system.

David: And then, there’s the reporting mechanism itself. There are choices that you make around how you design your reporting system. But then, what does a person actually do? How do they make this report? I suppose we’re in the age now of forms becoming electronic forms going into databases, so typically we have lots of categorical information—dropdown box here, select this option here, categorize and classify this here. That all makes our graphs look really pretty when we present our statistics and reports.

Sidney doesn’t really talk too much about forms and systems but mentions the importance of things like ample free text or voice input so that the person reporting can really tell their story in the way that they need to tell it. When I was reading this bit, I was reflecting that I’m not sure how well many of the reporting systems I know of in organizations actually let people tell their story of what’s going on and what they think needs to be done. You need a different type of back-end to manage that process. You need a person reviewing every report in detail, trying to understand it. I think we just like to simplify and aggregate this data too much to want to have these individual narratives.

Drew: There’s a question that I wanted to ask you when I was reading this. It relates to something that someone raised on LinkedIn after the last episode, which was: where do learning teams fit in with ideas of Just Culture? At some point we might talk about this in terms of incident investigation and the use of learning teams in incident investigation.

I thought it might also fit in here that possibly the whole idea of by-exception reporting systems might be a little bit out-of-date. The types of things that we want from these systems are also the types of things that we would expect to uncover using learning teams—getting people to tell stories of work, getting people to raise issues in a safe environment, getting people to ask for things that they need in a safe environment. We can do it by putting in forms and free text fields and analyzing them. But isn’t this exactly what we’re trying to do with things like learning teams: get this same information?

David: That’s an inspired reflection, and credit to the listener who commented about that. I’d almost argue yes. Okay, blank page and you’re designing your reporting system in your Just Culture environment. You’d probably have a thing that said, here’s a reporting channel; if anything serious happens, let us know about it. Otherwise, we’re just going to come to every site once a month and run a learning team or a series of learning teams about what has happened around work in the last month, what are some of the stories, what things have gone well, what things haven’t gone well, have a rotational series of learning teams at each of your sites, and don’t bother getting people to try to fill out forms and chase counts. So Drew, maybe you can completely redesign how you get this information about work. Is that what you’re sort of…

Drew: That’s what I was basically getting at: all of the things that we’re trying to achieve here and all of the problems with reporting systems (I think) are solved if it’s much more normal for people to talk about this stuff, so that we don’t have to have a report in order to have the conversation.

Obviously, the challenge with learning teams is the volume of effort required. That’s where we need to think about how much information we can deal with, and how much information we want. Normally, the assumption is the more reporting the better. But I think learning teams perhaps show that there are limits to that. There is a limit to the amount of effort you want to divert into it, to the time spent, and to the amount of information, because you just can’t deal with everything that you get.

David: But think about it at an aggregate organizational level. We’re off [...] here, but hopefully it’s useful for listeners to have some reflection. The organization doesn’t have to deal with all of that information. I think that’s the purpose of the learning teams or the appreciative inquiry type approach: the people involved in the work at the operational level make their own determinations about what the priorities are, and are supported and empowered to implement their own solutions. The sites would then do their own filtering and prioritization, and solve things that fit within the resources and capacity that they’ve got. The organization doesn’t need to worry about the aggregate data. The organization needs to facilitate the process.

Drew: That’s a fair point, and yeah, I’m getting a bit off-topic because this is not something that Sid talks about in the chapter; he’s talking purely about reporting systems. Perhaps the next thing to just mention—I’m not sure how general this is or whether it’s particularly because it’s drawing a lot of examples from health care—is the difference between reporting and disclosure.

There are really two reasons to talk about things. One of them is about learning, and that’s reporting. The other one is about making information known to people who’ve been affected by what happened. You can obviously see how this plays out in healthcare: if something has gone wrong in treatment, then it’s an ethical obligation not just to stop that wrong thing happening but to let the patient know that something incorrect has happened or that something bad has happened.

So we need to distinguish between those two things and make sure that we’re dealing with both of them: reporting, because we need to put things into the system to learn, and disclosure to people who have the right to know about something. I guess that applies to environmental problems. It applies to patient safety problems. I can’t see it applying to typical workplace problems, where people would usually know that they’ve been directly affected by a safety incident.

David: There are the hygiene exposures, where you find out at a particular location that there might be the presence of asbestos and you disclose that to the workforce even though they might not know about it. I think there are some examples organizations would deal with, and it’s just good to understand that, but I’m not sure how relevant that reporting-versus-disclosure distinction is to how people would set up the core of their reporting systems.

Drew: I guess if you’re reading through this book trying to design or reform your reporting system and you realize that you currently don’t have any rules, or section, or policy about disclosure, then yeah, that’s definitely something to think about, because it’s one of the big exceptions to other rules about confidentiality. Confidentiality is about who doesn’t get told. Disclosure is about who there’s an active obligation to tell.

We move on to Chapter IV.

David: Yup, Chapter IV: The Criminalization of Human Error. This is a very Sidney Dekker–type chapter where he’s intertwining these ideas of how we think about human error, how we think about second victims and the people involved, and how we think about retributive or restorative justice. It brings it all together in this context of should we be prosecuting people for safety violations if they result in bad outcomes for other people?

Drew: I don’t know whether it’s the debater in me dissecting the argument or the left-wing political leanings that I have, but I was reading this chapter and I don’t think Dekker makes any effort to say that safety is a special case. All of these arguments apply equally well to should we lock up the person who’s been caught stealing a candy bar? Should we lock up the person who’s been involved in a fight at the pub? Should we lock up someone who’s been taking drugs that are across the line between legal and not legal? 

I think that’s fairly common; there are very few people who take a sociological approach to research who don’t have strong left leanings when it comes to the justice system. I can see how people who aren’t leaning that way might want to almost skip this whole chapter, but there are times when it has really important reflections about how we manage safety in our own organizations. Maybe we can deal briefly with the social justice side of things, and then we’ll spend more of our time talking about what this means for you, trying to manage safety in your organization.

David: Do you want to kick that off? It sounds like you hold the first argument well. You just disclosed your political affiliation. Are you sure?

Drew: The first part of it is just a general claim that there is a trend towards criminal prosecution in the wake of accidents. I was a bit curious about whether this was true or not because I think it’s certainly a big perception—I don’t know if it’s a trend—and actually most of the evidence on this is quite old. 

There was a book written in 2010 (I don’t have the authors here), and there are some papers by Sidney dating back to 2003. There are lots of other papers in that time period between 2000 and 2010. If there is a trend towards criminalization, it’s certainly not a new trend. It’s something that was—at least 20 years ago—already an active thing that people were concerned about.

I don’t know if that invalidates a lot of the arguments, but I do think it’s important to keep perspective. In safety particularly, we fear the law disproportionately to the actual risk of legal consequences. I don’t want to be responsible for hyping up the threat of prosecution or the threat of being sued, because that fear itself can cause a lot of the problems.

David: I think there are some specific examples, like the captain of the Costa Concordia in 2012, who got a 16-year jail sentence that was actually upheld on appeal. In the last 10 years, there are some examples of front line people in charge of operations, in that case an incident resulting in 32 fatalities or so, who definitely go through that criminalization process. But they’re not everyday-type occurrences.

Drew: Yes, and I don’t think we should be using isolated examples as our measure of how likely something is. It shows the severity of what can happen, but that severity can skew our perception of how likely those sorts of things are.

David: I did a bit of digging as well. I spent some time this week with another organization at a director, boardroom type of level. There’s that typical conversation about the individual anxiety of senior managers and directors of large companies about whether they’re going to go to jail, for example, if an incident occurs in their organization. They spend a lot of time on it, and it drives a lot of activity inside organizations.

I did a bit of digging and I can’t find an example of any individual manager or director of any public company ever being prosecuted, at least in Australian jurisdictions. It’s something that has never actually happened, yet it consumes a whole lot of time inside organizations and drives a whole lot of things that are quite counter to these ideas of Just Culture. Like I said—this chapter on criminalization—if you think about what this fear is driving in your own organization, you realize what a problem it is.

Drew: Let’s talk a little bit about that side of things. I think the first place that comes in is not directly about the fear but just the way that investigations tend to model themselves on the legal process. Dekker has a really interesting point that if you look at investigation reports, you can see the way terms and concepts that are normally used in criminal prosecutions make their way into those reports.

A good example of this is when you see investigators building a case about whether an action was reasonable or whether an event was foreseeable. Those are really important if you’re building a criminal case. You need to prove whether something was reasonable, prove whether something was foreseeable. That’s part of proving negligence. But it’s actually irrelevant for trying to improve safety.

I know it frustrates me when I see the front page of an investigation report say something like, this was a foreseeable accident or this was a preventable accident. That’s a conclusion that you might reach for the purpose of prosecuting, but it’s not a conclusion that tells you what you’re supposed to do in response to this accident in your organization. We should be wary of those legal thoughts coming into the way we think about how to investigate.

David: I agree, and I’ve even seen it this year. I’ve looked at organizations’ existing investigation processes. The process that gets undertaken and the way it’s reported have these things like statements of fact, a sequence of events, witness statements.

I know an organization that’s still making the work group involved in an incident sit in separate rooms and hand-write statutory declarations about their version of what happened, then compiling those statutory declarations as part of their portfolio of evidence and calling it their evidence. Maybe that’s where we’ve come from in using the label of investigation. Is that an example of what you’re referring to?

Drew: Yes. I think unless we work for a regulator, we need to remind ourselves that it’s not actually our job, either, to run the prosecution or even to help the prosecution. Our job is to help the organization learn, not to do the regulator’s job for them. We do things like get people to sign the reports that they give after accidents. That’s just trying to turn them into something that resembles, or actually is, evidence that can be used in a trial—and that doesn’t help the learning side of things.

David: Moving on from that, we get to the more direct effects of having this external legal system that holds the threat of prosecution. Sid talks about what happens to your organization when the threat is actually real. What happens when an organization has one of its staff prosecuted?

Drew: The first one is that the law always tries to simplify things. Not necessarily oversimplification, but at the very least trying to resolve multiple conflicting accounts into a single account. You can’t run a trial where you say one of these five things happened. You need to say, this was what happened. In order to improve safety, we don’t need to do that, but the law does need to do that. Those are two forces pushing against each other.

Sometimes there can be just direct interference—legal proceedings, taking control of sites, taking control of evidence, preventing people from talking, sometimes even asking the company to shut down their own internal investigation, or accusing the company of interfering with the police or regulator investigation. 

Even without those things, there’s just a general fear of accidentally collecting information and putting it where a future prosecution could gain access to it. This is where we see companies being afraid of doing their own investigation well, for fear that it will generate evidence, or even being afraid of recording things in their hazard and issue reporting systems, for fear that after a future accident those records could become evidence used against the company.

David: This legal fear is alive and well in organizations in (I suppose) most parts of the world, increasingly all parts of the world. Sid makes the point through this chapter that this fear of legal consequences really stifles our internal reporting, the openness of our understanding of what happens, and obviously the learning and improvement that we can generate as a result. I think this is a logical argument. You make a point that the evidence on whether this argument is true or not is weak and mixed. Do you want to talk about that?

Drew: I think there are two things we need to ask. One of them is about the processes that companies put in place. Unquestionably, there is a challenge in that companies, with their safety processes, need to worry about the legal implications of what they record. There’s also an assumption that people make that this psychologically stifles reporting, that people don’t report stuff for fear that it will make them legally vulnerable.

The evidence on this is actually not nearly as clear-cut as you might think. We won't go into detail, but I found one paper that says that for some professionals like engineers, if you survey them and say, does the fear of legal consequences make you more or less likely to report? They'll actually say they are more likely to report. The idea is with the type of decisions and mistakes that engineers make, the way to protect yourself is to make sure that you've reported it. 

You can see across a number of professions that the act of reporting is a self-protective action. Think of teachers and child protection. You don’t protect yourself by not reporting something. The slightest hint of something and you report it, and having made that report, you’re now protected. I don’t think it’s quite clear-cut that we should say that fear of legal consequences stops people from reporting.

It's much more true for frontline operators where there's no other way of someone finding out about the mistake unless they report it. In that case, they fear that by reporting, they're exposing themselves, whereas to other professions reporting is a way of protecting themselves.

David: It’s a sample of one, but I have had that comment made to me by a frontline operator in an organization—it would have been at the end of last year—that they will only report something if there’s evidence that means someone else could find out about it if they don’t, which is like what you’re saying. I have had one person tell me that that is their criterion for what information they put into the system.

Drew: Hide it if you can. If you can't hide it, report to protect yourself.

David: That's the strategy. Not necessarily a very Just Culture. 

Drew: What do organizations do to protect themselves against both the external threat and the problem of people being afraid to report? Sid goes through a few different options. He talks about options for individual companies, and also options for fixing whole industries. It’s very interesting but also frustrating, because he’ll give a solution and then immediately point out why the solution doesn’t work. I came out of it without a clear-cut idea of what you’re supposed to do.

One of the things you can do is take steps internally to protect your data. I don’t think he says this explicitly, but one thing I know some companies do is try to conduct their investigations in a way that creates legal privilege over the data they’ve collected. So they run the investigation under the direction of the legal team. The thinking is that if it’s being collected for the purposes of legal advice, they can protect it.

Dekker even mentions the idea of having databases that are easy to self-destruct, and then immediately backtracks on that by saying self-destructing your entire database gets you into totally separate legal trouble.

David: I’m not sure how practical some of these ideas are, but I like the way that Sidney’s trying to throw these solutions out into the world. He even goes on to suggest preemptive disclosure, being so open with the regulator that they almost feel like they’ve formed a relationship with you. There are ideas, but I’m not sure any of these improve what goes on in your own organization for the purpose of learning. I’m not sure a frontline operator, or whoever is the source of the information you’re trying to get into the reporting system, really cares about these things, about outside probing and things like that. They only care about the anxiety around, and the access to, that reporting system.

Drew: The one thing that he mentions that doesn’t have a downside (it’s just limited by our capability) is that wherever someone has cross-industry influence—an opportunity to make submissions to the regulator, influence over the regulator, or a job at the regulator—putting in narrow exceptions to specifically protect safety data from some types of legal consequence can at least make the organization less scared of losing control of that data, which in turn lets them reassure people within the organization.

Obviously, you can't make an exception that anything safety-related can't be disclosed, can't be subpoenaed, or can't be demanded by an investigation, but if you can put narrow exceptions, then at least you can have some sort of protection.

David: Let’s go to Chapter V, which is the final main chapter in the book. It’s titled What is the Right Thing to Do. It’s very heavy on advice, and I like the way that Sidney moves upstream in this book. He started with this whole idea of Retributive Just Culture and Restorative Just Culture, the stuff that happens after really bad things happen. Then he moved forward to what we talked about in Chapter III around incident reporting systems and how to make them work.

In this last chapter, he’s talking about how to create the conditions in your organization even before the hazard or the issue gets raised, so that the whole downstream process of reporting, of investigation, of learning, of support for the people involved, and of forward accountability can flow. Do you want to start talking about some of the things that he suggests doing before any incident happens?

Drew: Just before I go through the list, I’ve got a general comment, particularly about Chapters III and IV of this book. If you read them, you start to become very depressed about the potential for doing anything. Any way of approaching things has its drawbacks, has its constraints. But once you get into this list, it is very sympathetic to the constraints you might be under.

Every recommendation starts with a try, or an explore, or a consider. They’re almost all independent of each other, so you can go through this list and say: can’t do one, can’t do two, yeah, three, I can have a go at three. That’s how I’d recommend treating the list—think of any of these as something you might like to try. That gives you one next action to improve Just Culture in your organization.

Number one is to try to abolish financial and professional penalties that might come up in the wake of an accident. This is fairly industry-specific, but in healthcare particularly, very often there are immediate consequences like being suspended during the investigation, or the potential for having licenses or privileges revoked. Having those penalties hanging over an investigation is going to be a problem. If you can set up the system in advance so that those penalties don’t exist, that’s going to significantly improve the justice.

David: Yeah, and I think not just in healthcare. That happens in aviation, rail, utilities. There are lots of industries where safety-critical frontline staff in competency-based roles get stood down, suspended, or have their authorizations or certifications removed until the outcome of the investigation is known.

Drew: The second one is to explore having a staff safety department responsible for the investigations, rather than frontline supervisors responsible for the investigation. If people have listened to the previous episodes, Sidney's talked about the pros and cons of this. There are two things we're trying to balance. We want independence and we want someone who understands the messy details. 

The reason for coming down on the side of having the safety department do it is that they can at least independently run the process. We can fix the other side by making sure that that process has ways of involving the frontline staff, who understand the messy details, as part of the process. We get independence by having safety run it, and then we get the messy details by making sure we properly involve other people throughout the process that safety is running.

Along with that comes decoupling the investigation of this particular incident from anything else that looks like a performance review of the people involved. The investigation is not about judging the person, not for this action and not for any previous actions leading up to it. That sort of performance management is the job of line management, but it shouldn’t be part of the accident investigation process.

David: I want to make a comment to reinforce those two points, because I think they’re big ones. The first is that we’ve seen this trend of leaders being accountable for incidents: line managers should do their own investigations, and safety’s role is not to investigate incidents that occur in the line.

Large organizations, lots of different industries, and we as a profession have probably pushed a lot of that for the last decade. I did for a long time in some of my roles, but I’ve also taken that step back to go, well, actually, no. I haven’t seen that drive the learning outcomes that organizations should be getting out of their investigation process. It’s time for the safety profession to step back into closely facilitating those processes to learn around incidents, involving the line and involving the frontline workers, but really facilitating a good learning process.

Drew: One of the unforeseen consequences of that move to devolve responsibility for investigations is that we haven’t been able to equip frontline people with the tools and knowledge to do investigations well. We’ve had to create quite simple models of accident causation that we can teach to people who may only ever have to run one investigation. It’s not something that they’re experienced in; it’s not something they need to do a lot of the time.

That simplification, made in order to let many people do it, in turn means that the depth of the investigations is weakened, and they’re much more likely to default to blaming the frontline individuals. It was a very worthy thing to do—very well-motivated by trying to get people who understand the messy details of work to do the investigation, and by having close contact so the recommendations could be implemented—but it hasn’t worked from a Just Culture perspective.

David: I agree. Do you want to keep going, Drew?

Drew: The third one is to make sure that people clearly know their rights and duties with respect to any investigation. In particular, knowing who they have to speak to (for example, they’re obliged as an employee to speak to the people investigating the accident) and who they should not speak to (for example, the employee is obliged not to talk to the media about the accident). They’re not rules about the work; they’re rules about your rights and duties for the investigation.

Really, it doesn’t matter what else the investigation involves; this should apply even if you’re running a very retributive system. To make a retributive system fair, you also need very clear rights and duties. This is one you can do to improve any system. If you think your system is fair, then you should be proud of explaining to people exactly how it works, and exactly how they’re a part of it.

David: All these go to the accessibility and the anxiety around the reporting system, but this is specifically talking to Sidney's point about anxiety, so people know what's going to happen if they do report. They know exactly how this investigation process is going to run, they know what their involvement is going to be, and they know what the intention is and how it's going to be achieved through the process.

Drew: The next one is to build the underlying concepts that you’re using for your Just Culture not just into the investigations but into inductions and training. The two particular things that Sidney calls out are making sure that people understand that a safe organization is one that openly discusses and tries to fix problems, not one that’s proud of having no accidents or incidents. The organization should be proud that when something goes wrong, it discusses it and deals with it.

The second one is just constantly reinforcing that understanding that accidents have complex causes. That whenever something goes wrong, there is a whole system involved, and we want to improve the system, we want to understand and fix the deeper causes of the accident, rather than think that there is a simple single root cause. 

David: And then finally, the idea of implementing and reviewing processes for how you care for practitioners and the other people involved after incidents.

Drew: Again, this is one where you don’t have to agree with anything else in the book to bolt it on to your existing system: sections, obligations, and responsibilities for looking after the person who might have been involved in causing the accident, not just the person who’s been injured.

David: That’s the list. I like the way you put it; it’s like a Choose Your Own Adventure–type list, and you match it to the context of the organization and what you think might help. There’s another set of points about after an incident. If those things that we just talked about are things that you need to build into your organization all the time, do you want to run through the advice from Sidney about what to do after an incident?

Drew: The first one is try not to make the incident be seen as a failure or a crisis. David, I don't know if you could detect where this is coming from in the text. I've got my own ideas why Sidney might think this is a good idea, but I didn't see it as following directly on from other things that he was saying.

David: No, I think only that it might heighten some of those things like the fear of criminalization, or the anxiety of people who can influence the investigation process, which might jeopardize the understanding and the learning about the incident. That was the only thing that I could tie back to Chapter IV.

Drew: That was my thinking as well. Regardless of what the formal process is, the atmosphere around that process is going to color not just how people see it, but how the process itself works. If the atmosphere is something terrible has happened, we need to get to the bottom of this, fix it, and move on, or we're in a moment of crisis, what on earth has happened, this is terrible, that is going to color the whole process. 

What we want to do is make it seem as normal as possible—the way we talk about and learn from incidents—that this is a normal part of work, things go wrong and we learn from them, and that itself will help to keep that Just Culture. 

The second one is to pay attention to the way your process is actually playing out. It might tend to stigmatize the people involved, so remember, this is not about setting a policy up beforehand. This is about after an incident. Make sure that you have people, and even your own attention, looking at what types of things people are saying around those involved. What sort of language is being used? Are the people able to be reintegrated into work?

We've got lots of processes for reintegrating injured people into the workforce, but we need to pay attention to the fact that people who have been involved in causing an accident are also, to a certain extent, removed from the normal, removed from the workforce, and need to be reintegrated smoothly as well.

David: I think our language is kind of important. I remember a conversation I had with a manager over a set of incidents, and it went something like this. He goes, we had this major incident and it was from a person who I would never expect to have an incident. Does that mean there are people on your team that you do expect to have an incident?

The only reason that we were having the conversation about maybe there was something deeper going on was because the incident happened to a person that he didn't expect it to happen to. Just be careful about those conversations that happen with managers and even your own conversations with colleagues of the people involved, and look out for that type of language. 

Drew: I was part of a fascinating workshop with a couple of Italian pilots who were flying as expats for a domestic Chinese airline. They were talking about what the accident investigation process is like there. They said, on the one hand, the processes are incredibly draconian. The moment the accident has happened, the captain is held responsible and publicly shamed. The CEO and senior staff get everyone together and talk about what the person has done wrong. The person gives a public apology and is demoted. But they said that after that, the slate is wiped clean. It seems draconian, but that person is now again part of the company, part of the workforce. There is no stigma hanging over them.

We were all struggling with the fact that this process is not remotely what we would consider to be just or fair, but we know that a lot of our own apparently just and fair processes leave people as outsiders, ostracized afterwards. It was an interesting separation between the process and the outcome we want to achieve, and the importance of that outcome.

This is one of the outcomes we do want to achieve: we want people to be reintegrated. We want them to be back, part of the workforce, without blame and stigma hanging over them. We don’t always focus on that as one of the things that we need to achieve from our investigations.

David: Interesting story. I think we’ve got listeners in just over 100 countries, so the national culture and the workplace culture in those countries are going to intersect with the ideas in this book about Just Culture. I know we’ve got at least one listener who used to fly for China Southern Airlines and described a similar story to me: you make a mistake, you get a demotion, you lose a month’s pay, but then retributive punishment done, back on the team, get on with it. It’s interesting.

Drew: At this point we're probably talking about the same people, so shout out to Luca and Marco.

David: No, I'm not talking about them.

Drew: Oh really? 

David: I'm talking about someone else, but yes.

Drew: Interesting. This is at least a story that gets told multiple times in multiple ways.

David: Shout out to our aviation listeners. 

Drew: These next two are incompatible with each other. They are basically a choice you make. If you have a mostly retributive process, then make sure that it has substantive and procedural fairness in it. You’ll constantly be asking yourself: Is it focusing on the deeper causes? Is it being respectful to all of the parties and their needs? Just because something is retributive doesn’t mean it can’t strive to achieve all of those things that are important.

On the other end, if you’re trying to be restorative, then the things to focus on are making sure you’re thinking about who is hurt, what the needs of those people are, and who is responsible for meeting those needs. It might not be us, it might be external stakeholders, but we need to make sure that there is someone identified who is going to take care of the people who need to be taken care of.

David: There are a lot of ideas about the investigation process in this book and in this chapter as well. He cautions that if you start to mess around and reform your investigation process, be ready for other people to be suspicious about why you are completely changing the way incidents get investigated. Partly that’s because of the connection to criminalization, and partly it’s because investigating incidents is probably one of the single most entrenched safety processes that you have in your organization. People are going to watch closely what you’re trying to do and why you’re trying to do it.

Drew: The one suspicion he focuses on is that you’re trying to use restorative justice, or a reformed process, to basically get off the hook. The way to counter that is just to make sure that you are seen to be working hard. Make sure that you’re doing a lot to investigate. Make sure that you’re doing a lot to act afterwards.

If someone complains about the process you’re using, you point to the actions. You say, this is the process we’re using, this is what we have done since, this is the work we put in, this is the money we’ve spent, these are the improvements that we have made. If you point to those things, that’s a strong defense against people suspicious of your motives for trying to reform the investigation process.

David: The rest of the book kind of fades away into different discussions. It was almost like material Sidney wanted to make sure got into the book but that either didn’t have a chapter of its own or didn’t fit elsewhere. There’s talk about systemic causes of incidents, there’s discussion about ethics, there’s a whole range of different topics that get covered in the final 10 pages of the book. Is there anything you want to talk about at the tail end of the book?

Drew: No, except to just agree that it’s strange. I was expecting the book, having made these nice, clear recommendations, to round off to a conclusion. Suddenly, we had an explanation of the difference between utilitarian ethics, virtue ethics, and [...] ethics, which you’d normally expect to see in discussions of tech. It was just weird to have it in the concluding chapter. There’s a lot of stuff that seems to be crammed in there.

He ties it all back, revisiting the tension of systems versus individuals, which I think is the big underlying philosophical tension that we’re dealing with: how much we want systems to be responsible for things, and how much we want individuals to be responsible. As the book finishes, it is still fighting to reconcile these things.

Sid says, “Your systems create discretionary spaces for individuals, so there's still room for individuals to act. There's still room for individuals to have accountability, but we have to make sure that we are matching up how we hold people responsible to how much freedom they actually have to act, and not hold people responsible for the consequences of them trying to deal with a system that doesn't let them do things in the way that they would like to do them.”

David: I’ll try to drop a couple of practical takeaways in at the end: some for this chapter, some more broadly. Are you ready to go there?

Drew: I'm ready but you wrote these notes, so I'm going to ask you to kick-off and give us your practical takeaways.

David: Stepping off from where you just mentioned systems and individual behavior, Sidney makes a point a couple of times at the end of the book, and he makes it all the way through, that one of the primary things you can do is make sure you’re always asking what is responsible, not who is responsible. I think that’s not a bad little sentence to remember.

Make sure you’re always seeking to understand what’s driving the behavior of individuals in the organization. Given the context, the systems, the structures, the pressures, the goals, behavior is very locally rational for the people involved, and our learning processes should seek to understand what caused things to be done the way that they were done.

Second, just remember that there’s always more than one truth. He talks briefly about A Tale of Two Stories and some of that literature, but everyone’s got their own truth about what actually happened, and none of those truths is necessarily the truth.

Drew: If that makes you uncomfortable, remember that finding out the ultimate truth isn’t our goal. Our goal is to learn and to improve. The truths don’t have to be true for us to find something in them to learn and improve from. We don’t have to 100% believe someone’s version of events to find something in it which is useful for understanding, learning, and improving.

David: I like that, and that would be a good way to talk to your organization about it: the goal of the investigation is not to find out what happened. The goal of the investigation is to find out what to do next. They’re two different things.

Drew: That's a good lead into your next one, David.

David: Continue to talk about accountability when you talk about Just Culture and Restorative Just Cultures, because there’s this idea that if we go away from retributive cultures, we go to blame-free, we go to no accountability, we go to just chaos in an organization, and we can’t have that. Make sure you’re always continuing to talk about accountability, but try to lean it towards forward accountability.

There’s this idea that we can’t change the past; the best thing we can do is hold people accountable for what we want to happen in the future, rather than exhausting a lot of time holding them accountable for something that they can’t change. Have that forward accountability conversation and clarify with your organization that blame-free for something that happened in the past does not mean accountability-free for what should be done in the future.

Drew: One thing that we know for sure is that if you’re trying to improve Just Culture in your organization and you don’t start talking about accountability, then other people are going to start talking about accountability for you and frame it in their terms. Why not be the first one to bring accountability into the conversation and talk about how you’re reforming things in order to improve accountability from the get-go?

David: I like that. The best form of defense is a good offense. The final takeaway, I suppose mainly from the chapter on the reporting process, is to look closely at your reporting processes and your investigation processes, particularly the narratives around them. What stories are people telling about the reporting and investigation processes in your organization?

I gave that example about the story that goes around: we don’t report anything that someone else couldn’t find out about. I’ve also heard the story that the investigation is just the thing that happens before people get sacked. Understand the narratives and logics that exist in your organization around reporting and investigation, and find ways to change the processes and the stories that get told, so that you create new narratives.

Drew: That's the end of our discussion of the book Just Culture. We'll move on to new topics in the next episode, but please do feel free to continue the discussion with us on LinkedIn. This is a topic that I think David, you and I both care a fair bit about and are interested in different perspectives because no one has solutions, no one has a perfect answer. Do feel free to please talk about this, comment on LinkedIn, ask questions, make comments, share stories. 

That's it for this week. We hope you found this episode thought-provoking and useful in shaping the safety of work in your own organization. Send any comments, questions, and ideas for future episodes to feedback@safetyofwork.com.