The Safety of Work

Ep. 64 What is the full story of just culture? (Part 1)

Episode Summary

Today, we start our discussion on culture. Specifically, we ask ‘what is the full story of just culture?’

Episode Notes

For the next few weeks, we are going to cover ‘just culture’ and focus mainly on Sidney Dekker’s book of the same name.

The laws currently on the books encourage businesses to focus on liability instead of actual safety. By focusing on culpability for an accident, businesses can avoid compensating workers for their injuries. This is just some of what we will discuss today.

 

Topics:

The history of safety law, worker compensation, and managing liability instead of safety.

The no blame movement of the late 80s and 90s, and why it struggled in practice.

James Reason's culpability framework and its decision tree.

David Marx, the substitution test, and hindsight bias.

Dekker's Just Culture across its three editions, and the shift towards restorative justice.

Quotes:

“We both know that Dekker has a bit of a habit of being pretty harsh about how he characterizes some of the older safety practices.”

“The ability of people to tell their stories and have those stories heard by all the other stakeholders, is a key part of restorative justice.”

“We're all in the same boat. We all, after that accident, have an individual responsibility to stop this happening again by making the system better.”

 

Resources:

Just Culture

Feedback@safetyofwork.com

Episode Transcription

David: You're listening to the Safety of Work Podcast episode 64. Today we're asking the question, what is the full story about just culture, part one. Let's get started.

Hi, everybody. My name's David Provan and I'm here with Drew Rae. We're from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work Podcast. In each episode, we normally ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it. 

Now, we're up to episode 64. We're experimenting with breaking this formula just a little bit. Following our review of Erik Hollnagel's book Safety-I and Safety-II a couple of episodes ago, we're going to do a slightly longer piece of work. This week, next week, and maybe the week after, we're going to look at the idea of just culture, focusing mainly on Sidney Dekker's book of the same name.

Drew, this was your idea after our work with Safety-I and Safety-II. Do you want to explain the background to the question and the just culture topic?

Drew: Thanks, David. Well, I guess the best place to start is to say that if you search for just culture, Sidney Dekker's name and Sidney Dekker's book come up. But he is not the person who invented the idea, and he doesn't have any ownership over the field. Let's go right back.

In the early days of safety, safety laws are essentially all about worker protection. The idea is that there are laws that protect workers directly from unsafe conditions and say that businesses need to look after their workers. Most of the focus is actually on providing compensation if people do get hurt. Some of the early laws are all about setting up insurance schemes, sometimes state run, sometimes state mandated, so that if a business injures a worker, then at least the worker or their family gets paid out.

The unintended effect of that is that it encourages businesses to manage liability instead of managing safety. From a coldhearted business point of view, either you keep the worker safe, or you simply avoid making a payout by blaming the worker and saying it's not our fault, it's their fault that they got injured. Either way, you don't have to pay out.

That actually fits in really well with a lot of early safety science, but it fits in really poorly with more modern safety science. It's probably why Heinrich's ideas about preventing accidents became more and more interpreted as preventing unsafe acts. That sort of focus fits very well with the idea that if you can blame the worker for the act, you don't have to pay out compensation. It's also probably why punishment-focused ideas of behavioural safety became popular. You can do positive rewards, but ultimately that system is very consistent with also punishing people if they cause an accident.

If you believe that accidents are mainly caused by unsafe acts by people and you believe that you can prevent those unsafe acts through reward and punishment, then there's no cognitive dissonance between your positive safety scheme and your way of treating people after an accident. 

The trouble is that as you start to move towards a more design-focused and systemic understanding of how accidents are caused, it doesn't make a lot of sense to blame individuals.

From the late 80s to the early 90s, there's this movement towards no blame incident reporting and accident investigation processes. David, I don't know how much you were around for that movement. It was very popular in some industries and very unpopular in others.

David: Yeah, I think I never really saw it in practice, Drew. I suppose it was more spoken about than actually implemented. There was a lot of talk about no blame when I started in safety, because it was right on that pivoting point, mid-90s and into the early 2000s.

There was a lot of push on safety culture. Most of the theorists at the time were writing about it: Dekker in 2007, and Professor Andrew Hopkins writing about reporting cultures and the discussion about open cultures. There was this big move where everyone knew what they possibly should be doing, but I really didn't see a lot of organizations do very much practically with it, if that makes sense.

Drew: It makes perfect sense, David. This was the big problem—that the safety theory makes sense and the safety theory moves you towards no blame. There's a massive dissonance between that and all of the practical concerns about how you manage safety in a business. 

Theory says: no blame works, no blame increases reporting. No blame fits in with the fact that problems are ultimately caused by designs and systems, and that individual behaviours are just symptoms.

The regulations still need to place the blame somewhere. If you can't blame the workers, that's a confession that it's the company's fault. You're basically admitting that you as a company are guilty. You're faced with compensation and insurance schemes where your premiums go up if the organization is responsible for an accident. You're faced with education in safety and accident investigation processes that typically haven't yet embraced the new science. You're being trained in processes that don't fit with that way of thinking either.

It's one thing to have this idealistic idea of no blame, but no one is giving you a process that leads to no blame. Most of the processes involve varying degrees of culpability.

With all of that, we've got this need for a way of resolving the tension between safety theory and safety practice.

David: I think we have seen these cycles. As I was thinking about this episode, even in the last couple of years, we had this wave of just culture. The second edition of the Just Culture book was 2012. Between 2012 and 2015, we saw this increasing discussion in business about just cultures, open cultures, and no blame cultures.

Then, we saw this wave come back, particularly in Australia and New Zealand. There was lots of new legislation, like the Health and Safety at Work Act 2015 in New Zealand, and industrial manslaughter legislation introduced throughout Australia over the last two years. There has been a lot of criminalization of human error in certain jurisdictions around the world as well. That reinvigorated the pressure on protecting managers, directors, and companies, at the cost of increasing the pressure to reassign blame back to the people involved in incidents.

Drew: The idea of just culture is that it's providing a theoretical solution to this problem of the tension between no blame on one hand, and systemic pressures to blame individuals.

The most cynical way of looking at that is it gives us scientific excuses to start blaming individuals again. I don't think we need to be that cynical. I think it's fair enough to just recognize that we have this irreconcilable pressure.

Into this no blame environment, where the idea is catching on as a theory but people are struggling with it in practice, comes a book by James Reason called Managing the Risks of Organizational Accidents. I should point out that this isn't a book about safety culture. It's a very wide-ranging book about organizational safety issues. It's mainly actually about systems thinking and moving away from blaming individuals.

It's got one chapter which is on safety culture. That chapter is subdivided into different components of safety culture. Only one of those components is this idea of a just culture. 

According to Reason, a just culture is one of the four attributes of an overall good safety culture. The whole idea of safety culture is really a bit like the Swiss cheese model. James Reason started it. It's not his fault that one subsection of one chapter of one book got blown all out of proportion. As we're going to say, it's even worse than that because it's mostly one diagram, again, which has been picked up and people have run with it. 

If you go back to this section of the book, Reason starts off by saying that all of the rest of the things he says about a systems approach to safety say that we shouldn't blame individuals. He says, almost never is it appropriate or the right thing to do or useful to blame an individual. 

There are these rare occasions where someone is obviously negligent or even malevolent: “a few truly bad behaviours against,” and this is quoting directly, “the vast majority of cases where the attribution of blame is neither appropriate nor useful.” Reason's trying to work out, how do we deal with this small number of cases?

He says that the most important thing is to draw a line between the bad behaviours and everything else. If people know what is absolutely bad, then they'll feel safe to talk about everything else. You can have open conversations, you can have clear reporting. No one needs to worry about being blamed because they know where the line is. 

Then, as he goes through the section, he talks himself and his readers through the set of steps for how you draw the line. It gets worse and worse. It ends up with this decision tree diagram that asks a series of questions. It starts off: were the actions intended? If they were intended, then it's obviously bad. Was there substance abuse involved? Well, if there was substance abuse and you meant to take it, that's definitely really bad. If it was medically prescribed, that's sort of bad. Did you knowingly violate safe operating procedures? Well, if the procedures were reasonably clear and you did, that's very bad. If there's some ambiguity, then it's still bad, but not so bad. This is Reason's culpability framework.

Ultimately, how culpable should you be? There are nine possible outcomes, and only one of those nine outcomes is blameless error. We've gone from, how do we have a system where we are mostly blame free and still deal with the truly bad behaviours, to a classification system where you're basically a suspect going through a series of tests. If you give the wrong answer to any one of those tests, then you get blamed. Only if you make it through every single test do you come out totally blameless.
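To make the shape of that decision tree concrete, here is a minimal sketch in Python. The questions, their order, and the outcome labels are paraphrased from the description above purely for illustration; Reason's actual framework has more branches and nine graded outcomes rather than the handful shown here.

```python
def culpability(intended: bool,
                substance_abuse: bool,
                medically_prescribed: bool,
                knowing_violation: bool,
                procedures_clear: bool) -> str:
    """Simplified, illustrative sketch of a Reason-style culpability tree.

    Each question acts as a gate: a 'wrong' answer at any step attaches
    some degree of blame, and only one path ends in blameless error.
    Questions and labels are paraphrased, not Reason's exact wording.
    """
    if intended:
        return "intended act: most culpable"
    if substance_abuse:
        # A medically prescribed substance mitigates, but does not clear, the actor.
        if medically_prescribed:
            return "substance abuse, prescribed: culpable with mitigation"
        return "substance abuse: highly culpable"
    if knowing_violation:
        # Ambiguous procedures mitigate a knowing violation.
        if procedures_clear:
            return "knowing violation of clear procedures: culpable"
        return "violation of ambiguous procedures: less culpable"
    return "blameless error"


# Only the path where every answer is 'no' comes out blameless.
print(culpability(False, False, False, False, True))  # -> blameless error
```

The point of the sketch is structural: the worker enters as a suspect, and blamelessness is the residual case left over after every culpability test has been passed.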

David: Drew, I suppose we've seen that in safety through these models over the years, this guilty-until-proven-innocent approach, and I suppose the model does exactly that. I just want to throw in that if people don't have a copy of Managing the Risks of Organizational Accidents by James Reason, it's well worth a read because of all of the ideas like this. There's even a two-pager on setting safety targets, about not bothering to measure things that you can't manage, like negative measures such as lost time injuries.

There are lots and lots of ideas in that book by James Reason from the late 90s, and people might recognize a number of things that we're still grappling with today that he talked about at the time.

This culpability framework that you mentioned, I've seen countless examples of this diagram, this flowchart, and these questions in organizations that purport to have a just culture because they have this diagram. An HR manager, a manager, and a safety investigator sit down, either as part of or at the end of an investigation, and go, right, what do we need to do about the people involved? They call that their just culture.

Maybe it is in the way that Reason described it but it's not a very just culture in the way that Sidney Dekker describes it.

Drew: Yes, very true. There's one final bit of history that I want to throw in. I should say Reason's ideas are mostly not about this just culture. It really is just a niche thing that he talks about, and he ends up pontificating well beyond his original starting position. This diagram in particular got picked up by David Marx in health care, who doubled down on this idea that just culture is really about establishing clear lines between three categories. You've got innocent error. You've got things in the middle where maybe you should have known better, but maybe it can be excused by system factors. Then, you've got deliberate harm. You've got a classification process for working out what you should have known about or should reasonably have done.

A large part of this logic is a thing called the substitution test, where you work out what someone should have done by saying, if you actually go and ask other people, would you, in the same circumstances, do the same thing? 

Now, listeners who are familiar with our episodes about hindsight bias, particularly the episode about the Dreamworld accident, would know that the trouble with this is it's impossible to put yourself in the shoes of someone in the past because you already know what the outcome is. You've got information that they didn't have. It's not like you can magically forget that you know that in order to work out what you would have done. 

This test makes it really seem like you're being fair, like you're trying to put yourself in their shoes. In fact, what you're doing is you're building a process that encourages the use of hindsight bias.

David: My understanding comes from the literature that has studied some of this, particularly in health care settings but also in aviation settings, trying to understand this overconfidence bias in individuals. I recall a study where nurses were asked if they thought that they might make the same drug misadministration error that another nurse had made, when the situation was described to them.

Participants almost always strongly believed and argued that they could never possibly make that mistake of the wrong dose or the wrong type of drug. Even when all of the circumstances around the particular situation, the messiness, and the difficulties were explained to them, they always felt that they were professional practitioners who would never make that mistake.

Drew: We've gone beyond this, David. There are studies that show the actual rates of people making mistakes in these circumstances, and the mistakes that people are confident they would never make, people routinely make. You can ask people how confident they are and compare that to the statistics.

That's just the constant problem with hindsight bias: we can't go back and say, based on these circumstances, what would we have done? Or, based on these circumstances, what would a reasonable person have done? It's unfortunate. If we're talking about justice, the legal system does actually require that as part of the test for negligence, but it's literally impossible to do.

David: You've done a great job of explaining the history there and where some of these ideas come from. We're going back 25 years, and even to some of the challenges in safety long before that.

The book we're going to focus on is the book by that title, Just Culture by Sidney Dekker. There are three editions: 2007, 2012, and most recently the third edition in 2018. We are going to be working from the third edition. Drew, I admitted to you just before this that I only had a copy of the second edition on my bookshelf. I've ordered the third edition from Amazon, which will arrive tomorrow. I had to work off a combination of my second edition and the Google Books preview for the introduction today. I've got to rely on you a little bit for any nuances between the second and the third edition.

Drew: It is, in fact, the second edition that I read last and that I've used to teach classes with before. The biggest change to the third edition is it includes a lot more of Dekker’s thinking about restorative justice. I think that's reflected in the title. 

The second edition is subtitled Balancing Safety and Accountability. Then, the third edition is called Restoring Trust and Accountability in Your Organization, which shows the subtle shift of ideas.

David: Drew, I was quite surprised when I looked at it. I suppose, credit to Sidney for this. Sometimes, between editions of a book, there are only subtle differences. It appears, between the second and third edition, and I'm looking forward to getting my hands on it, as though there are some quite significant changes to content and structure, and even an expansion of ideas, between 2012 and 2018 for Sidney.

Drew: Yeah. I find it really interesting. 2014 is when I first personally knew Sid. I was working with him between the years of 2014 and 2018. I got to be part of some conversations where I'm hearing some of those ideas start to develop. That's also the period when we were teaching our critical perspectives on safety, safety ethics, and accountability courses at Griffith as part of the graduate certificate. 

I don't know how many of our old students we have listening, but I know that one of the exercises Sid got people to do was to write a just culture policy for their organization. I know that wasn't just an assessment. Sid was constantly being challenged and improving his own thinking from what everyone said about what they would do to try to improve just culture in their own organization. He had all of that feedback over those years.

David: Drew, we’re going to start with the introduction in the book. We’ll go over a couple of episodes because we think there's a lot of really important and relevant information on Just Culture that's going to be useful for our listeners in their organization. Do you want to talk about the start of the book, how the book opens up, and how the book starts to describe this concept of just culture?

Drew: The first thing I said about Safety-I and Safety-II was there is no preface, hurray. The first thing to say about this book is there is a preface, and it is very dense in information content. This is not vague musing about this and that, and how the author came to write the book. The preface really is an outline of the key ideas and arguments in the book, so you can't skip it.

I also remember saying about Safety-I and Safety-II that there's no actual definition of Safety-II in that entire book, so I was pleased that the first part of the preface here is a definition of just culture.

It starts off with: a just culture is a culture of trust, learning, and accountability. It explains that a just culture is particularly relevant to how you respond when something goes wrong.

Every organization, when something goes wrong, they want to minimize the negative impact and they want to maximize learning. Dekker also points out that there's a close tie between that and how people have the confidence to report when things go wrong, because people only report when they know they'll be treated fairly.

David: Drew, this isn't a very controversial definition. I think we'd all say, okay, trust, learning, and accountability, those are things that are good. We want to learn when something goes wrong, and we want to make sure that we've got all of the conditions in place in the organization for us to learn.

It does, I suppose, set up the rest of the book to really be challenging and questioning how we might go about achieving this just culture. It's one of those things that is very easy to say, yes, that's good, we want that. It's another thing to make it happen in institutional settings.

Drew: Yes, I think it would be fair to say that when most people say that they have a just culture, what they're talking about is they have a process called a just culture process or a policy called a just culture policy. 

I think that's the very first important lesson to take from this—that a just culture is something that we aim towards. It's not something that we achieve by having a process. We've always got to be asking ourselves, are our processes and practices leading us towards a just culture or are they failing to do that? One of the big questions is whether people actually feel that they're being treated justly in that process.

David: Those processes in organizations, I think Sidney's characterization of those existing approaches to just culture is that most companies just ask: what rule was broken? Who did something that was wrong, and how bad was it?

Therefore, what consequences should we take in relation to those people? What did they do? Who was it? What do we need to do about it in relation to the individual?

Drew: We both know that Dekker has a bit of a habit of being pretty harsh about how he characterizes some of the older safety practices. Do you think describing people's process like that, saying, which rule was broken? Who's responsible? How bad is it? Do you think that's a fair characterization about what companies actually do in the wake of an accident?

David: I think it's not something that companies would say, yes, that's what we do. I think a lot of managers particularly would say, no, no, no, we don't do that. We're always looking for organizational factors. We're always trying to understand what happened, as well as the individual. We’ll look at the individual actions as well as the organizational factors. We're trying to generate learnings. 

I think it'll be very interesting to ask a lot of workers. I think if you asked a lot of workers, they wouldn't say the same answers as management. That's one of the other things that Sidney says, that obviously, the people with more power in the organization think that the culture is more just than the people with less power.

That's definitely what we find. We do some work helping companies try to reorientate their investigation processes when they say that they want to learn more.

When we look at their processes, their reports, and we speak to the people in frontline supervisory roles or frontline work roles, they're definitely experiencing a different investigation process than what their organizations think that they are actually conducting.

Drew: I think that once he has handed in his thesis in a couple of months, we really need to get Derek onto the podcast. He's been doing some pretty deep embedded ethnography with just culture processes. It always shocks me how stereotypical his real world findings are.

Some of the real stuff that happens in construction projects, it's almost like class warfare once they start investigating an incident—how quickly people get blamed, sidelined, and don't have their stories heard.

David: Yeah, we hear that now in organizations. When you talk to frontline workers, you say, oh, what do you think of the accident investigation process?

That's just the thing that the organization does before it blames us. You'll hear these narratives. It's almost like the stereotypical narratives that Sidney might quote in one of his books. It's real and it's in organizations today.

It's hard to know whether these narratives are a reflection of what's happening now, in 2020, 2021, or views that people have formed over their entire working careers. It's like the episode we did, I think episode 34, on institutional logics. It's just hard to know how stable and enduring some of these logics are, particularly for people who've been in organizations or doing roles for a longer period of time.

Drew: The one thing I would suggest for our listeners who are working for more progressive organizations, or who at least think that this is a bit too much of a stereotype, is have a look at some of your own investigations. In particular, note how even apparently very open investigations tend to backtrack from the accident.

Some of the earliest facts that they discover are to do with broken rules or incorrect behaviors. Even if the investigation fans out from there, moves on, or starts considering systemic factors, these investigations are still left in that process with the broken rules. They're still sitting somewhere in the trace of the investigation. There's still very much this risk that the just culture approach will start to be applied, focusing on trying to classify those broken rules based on the other findings from the investigation.

For the workers, there is still very much this risk that their actions are going to be judged based on whether they were reasonable things to do, honest mistakes, or blameless errors.

David: I think we're aligning the things that we've read in the book, the things that we've seen in organizations. Dekker makes some fairly strong factual claims in the preface. I just might mention a couple of these and get your perspective on them as well. 

He says, like I mentioned a little bit before, that just culture processes in organizations favor those who already have the power; that dividing human error up into different categories isn't appropriate or useful; and that any form of punishment is mutually exclusive with learning.

Those three things connect: the people who have the power get to say what is and isn't just. The people with the power assign the categories. Then they get to make the decision about whether people get punished, and therefore they restrict learning.

Drew: We're going to be going into each of these as we go through the book. I'll just give you my initial impressions now, if that suits, David. The first one, that the process tends to favor those who already have power, I think is self-evident.

The most important decision in a just culture process is who you are applying the process to, whose actions you are judging. We see constantly, whether it's royal commissions, official accident reports, or internal company investigations, that the part of the organization responsible for writing the report is very seldom the one who submits its own actions as things that are relevant to the accident.

If we were being totally open and fair, then a relevant factor in every investigation would be: how did the safety department go about investigating the last accident like this, and why did we fail to stop the next one? Investigations very, very seldom reach back that far.

The first thing that any regulator should be looking at is, how did we, as regulators, fail to stop this from happening? Regulators have been known to hide their own actions to avoid compromising prosecutions. The process is only going to get applied to the people who don't have power. 

The second one, dividing human error up into different categories, is really interesting, because the categories come from Reason's own attempts to understand how humans make different types of mistakes. We already talked, in the episode about Swiss cheese, about how this is a fascinating study. How is the human brain so fallible, and how is it fallible in so many different ways through different mechanisms?

I personally think that that is very interesting and very useful. Just when it comes to working out how an accident happened, not so much. You're never going to fix the facts of human psychology through an accident investigation.

David: I think human error classification systems have tried for a long time to create these categories, like knowledge-, skill-, and rule-based errors. There are different models for trying to actually look at types of errors. I suppose categorization of errors might help, because if you have common types of errors, then you might be able to come up with solutions for those common types. Even Sidney says in his book, when he talks about the different ways of explaining certain actions, that if you can explain them in a certain way, then you know that you've got different countermeasures for those different explanations.

I think the categories don't really serve a purpose in an investigation, because knowing what category a human error falls in doesn't actually tell you more about that individual error or situation. Maybe in the big scheme of things, in terms of the broader approach to organizational safety, having some categories and some generalizable hypotheses is helpful.

Drew: When it comes to whether blame is ever appropriate, I think this is the bit that people find intuitively very powerful. Of course, if you deliberately do something, that's worse than if you do it accidentally. If you deliberately intended a harmful outcome, then absolutely, of course, you should be culpable for the action. I understand why that makes sense to people.

One thing that Dekker points out later in the book is that the normal place we apply these sorts of tests is in a criminal court. In a criminal court, we have a very different burden of proof. We have the ability to mount an independent defense against all of these claims and accusations about what was reasonable and what would other people have done.

It's not someone who happens to say, I don't think I would have done this in the same circumstance. It's your lawyer who actually looks through that person's record and says, Mr Smith, you say that you would never have done this, but back on the 3rd of July 1980, you were observed doing exactly this.

That's the problem: what's appropriate as a negligence test in a court of law is not something that a company has the ability to apply fairly in an accident investigation. We need to be very careful about things that do seem to make sense. They only make sense when you adopt the whole of the procedural justice around them.

David: I think that's right. Just to be really clear on that point, because Sidney mentions it over and over again, although it hasn't come up in the preface of at least the second edition, maybe it's in the third edition: there are factors that are necessary for justice, and one is obviously an independent judge. When the organization is making that decision, whether it's a manager, a safety person, or an HR person, they're not independent of the system or the situation.

Like you said, there's no jury of peers, so there's no collection of people who can relate to the people involved and the actions that they took. And even though organizations will say that there is, there's very rarely a right of appeal. There might be some show cause process, but I don't think there's a genuine right of appeal for individuals.

Those are some of the very basic elements of a justice system. Without them, it's very hard to argue that your retributive justice processes are just in any way.

Drew: The final one, about the relationship between punishment and learning, is one of those empirically open questions. We genuinely don't have good data on how different types of investigation systems, and different levels or types of just culture, lead to organizational learning.

People are going to have their own points of view based on the existing understanding of science and their own ideology. I'm personally already on the record that I think that once you've drawn any conclusions, you've already started to close off learning because we learn mainly from uncertainty and we learn from not knowing what happened. 

Once we think we know what's happened, I think we've closed off a lot of avenues to learning. I'd actually go stronger than just saying that punishment closes off learning.

David: Yeah. Specifically on incident investigations, we did address blame and learning in an earlier episode with some of the organizational literature.

I suppose the broader empirical research definitely shows that where there's a fear of some kind of punishment or individual consequences, then open communication and learning are definitely compromised.

Drew: Let's move on a bit. The big difference from the second to the third edition is the extent to which Dekker has outlined his ideas of restorative justice. He gives a preview of this argument in the preface. We're going to be going through this in one of our episodes about the book.

In the preview, the fundamental ideas are: number one, listening to and hearing the stories of everyone involved. Now, that's something that we might think that we do anyway in an investigation. The ability of people to tell their stories and have those stories heard by all the other stakeholders is a key part of restorative justice. 

The second big one is focusing on how people have been hurt and how we need to make them whole again after that hurt. We change the questions from what mistake was made to who is hurt and what do they need. We change the question of accountability from who caused the accident to whose obligation is it now to meet the needs of the people who've been hurt. 

One of the big parts of that that's going to come up again and again is the idea of a second victim. We'll talk about this in more detail. A second victim is someone who has been involved in causing an accident and is therefore hurt by the circumstances of the accident. 

It might be that they feel guilty. It might be that they become ostracized. It may be because the investigation itself cost them their job. All of those are harmful to that person. 

The idea of restorative justice is that even though the second victims have already been hurt by the accident, the incident investigation process is at risk of hurting them further, and it also has the possibility of helping to make them whole again.

A lot of Dekker’s views are strongly influenced by his experiences with second victims who have been treated really badly by the organizations they were in in the wake of an accident and the harms that they've experienced because of that.

Dekker also, in the preface, gives a bit of a preemptive rebuttal of the objections to the book. He says that a lot of the discussion around just culture comes out of the fact that people distrust the system's approach to safety. This is something that Reason has written a fair bit about—the risk that the more you focus on the system, the more you seem to be moving away from focusing on individuals. 

That sits very poorly with some people, because they think that it means that individuals are getting away with things that they shouldn't get away with. You would be saying: you're not at fault, it's the system that's at fault, we'll let you do what you like.

It also fails to account for the fact that sometimes accidents are just caused by people doing the wrong thing. Sometimes you don't need a complex explanation for the fact that someone should have known better, should have done things differently. They're the one who stuffed up, so why are we talking about all of these systemic factors?

That's the most common objection to reforming a just culture process towards more restorative justice. It’s this fear that if we move too much towards the system, we lose track of individual failings, we lose track of the ability to fix some of these individual problems. 

David, I'm really interested both in your opinion of Dekker’s answer and in your own answer to that objection. What Dekker says is very social science-y, almost like structuration theory that I know you've studied. It says that systems are made up of individuals. Every system is just lots of individuals. 

When you talk about individual actions being constrained and influenced by the system, that doesn't let anyone off the hook. You're still talking about individual actions. You're still talking about individual accountability, you're just not talking about individual blame. You're talking about what individuals need to do, what they should be accountable for, moving forward. Do you think that's fair? Do you have an alternative?

David: Look, I think this is a complex part of social science, and one that we might not do justice to now. One of the central debates in social science is the question of structure and agency. How much of an individual's action is caused by the structural factors surrounding their actions?

Think about the context surrounding their actions: the norms, the expectations, the pressures, and the requirements. Then, how much of their behavior is a result of individual agency, free will if you like? This idea of forced choice versus free will isn't really resolved in the social sciences.

Structuration really says that this is a recursive relationship. People create systems, and then systems drive the behaviors of people, which in turn recreate the systems and shape the people.

I think what I see in this blaming-the-individual versus blaming-the-system debate is the same circular discussion. It's really difficult for organizations to know where to stand. On one hand you can say, how can we improve the system if we don't hold individuals accountable for their actions, because their actions are reinforcing what goes on in the system? The other argument would be, well, how can we ever improve if we don't actually fix the system problems that are driving the behaviors?

Most organizations, in my experience, end up in some kind of shaky middle ground where they think they're actually doing both. They go, oh, yes, we actually gave this person a warning, but we also change the permit to work process at the same time. We've been fixing the system and we're fixing the people.

In my experience, you need to make some kind of firm stance. Yes, maybe we can get some utility from addressing or correcting the actions of an individual in a particular situation, but if that comes at the expense of making any broader organizational, systemic action or learning, then in the long run that's not going to be in the best interest of the organization.

I think while you're still trying to strike this balance between systemic action and individual action, you're not actually making any real progress on your systemic issues. That's kind of how I've reconciled it in my mind. I'm not sure if that answers the question you're after.

Drew: Thank you, David. I think that is a fair answer. If you don't mind, I’ll throw in my own opinion there. I think this is close, but not quite where Dekker is coming from. I am, myself, an individual, and I sit within systems. When I look at myself, I don't see that as in any way absolving me from any responsibility. All that does is tell me who I should get angry at.

When things go wrong, I can see other people who I think might be blameworthy for that or who I think are the mouthpiece for the decisions. For my own mental health, maintaining relationships, and being effective, I need to see that as the system's fault. I need to fight the system. 

If what I do is get angry at other individuals who are also feeling trapped by the system, we’ll just get angry at each other and not achieve anything. If I then take that view of myself and say, well, everyone else is in that same boat, there's just no point in any of us blaming each other. That means we're all individually responsible for fixing the system that we're in. 

Sure, we're accountable for making that system better. The person we think caused the accident, the person we think was hurt by the accident, the person who is responsible for the safety department, or a different site, we're all in the same boat. We all, after that accident, have an individual responsibility to stop this happening again by making the system better. 

Unless we somehow think that blaming a person is the solution that's going to stop the accident happening again, then it's just, where do we direct our attention and our efforts?

David: Drew, in line with that, we'll talk about this more next week when we talk about restorative just cultures. I think Sidney might agree with you, and I definitely do. I just don't see the point of blaming someone for something that happened in the past, because nothing is going to change. You can worry about what happens today and what happens tomorrow.

There's this idea that Sidney also talks a lot about, and that I'm sure is all through the third edition on restorative cultures: forward accountability. What happened yesterday or last week kind of happened. All you can do with the people involved is do whatever you can to understand how it happened, and then think about forward accountability. What are we going to do in the future? How are we going to work in the future? What extra support do we need to manage this situation if it arises in the future?

Then, people can be held accountable for future actions that are clear and expected. There's kind of no point just looking backwards. That'd be how I'd reconcile it in my mind anyway.

Drew: David, it's not on the show notes, but just before we go on, I'm going to put you on the spot. Do you think that there are ever times when someone is such a danger to themselves and others that they need to be removed from the system as part of the response to an accident? Where forward accountability just means: to protect this person and other people, we need to get rid of them.

David: The short answer is probably yes. I think that could be a failure of organizational selection processes and training and development processes. If you suddenly find out you've got a train driver in your organization who's red-green colorblind and can't distinguish between certain signals in the rail network, then there's probably a reason for it. If you discover that after a signal passed at danger or something, then you're in a real bind about what you do, depending on the limits of your ability to make changes within that system.

I would say that reasons like that are less likely to be common and obvious. There are some really good examples in the third edition of the book, of surgeons who end up criminalized over malpractice claims. Sidney talks about different case studies: is it really the individual doctor, or is it the system that trains and accredits that doctor to actually do that type of work in that hospital?

I don't know, Drew. At the danger of giving you a very, very, very long answer to a short question: the short answer is there are probably some situations where the individual shouldn't be in the critical role that they're in. But a lot of water needs to go under the bridge before you do something like that.

Drew: Thank you for that, David. The reason I asked you was that that is a common objection I hear to some of Dekker’s ideas. I assume from the type of stories that Dekker tells and the arguments he makes that he would not come to that position. 

I don't think I have ever heard him willing to say that even someone who seems absolutely heinous from the outside bears a lot of individual responsibility. I would personally sit about where you would. I would say that I am willing to believe that such people exist, and if they do, they should be removed from the system.

I'm just very wary about designing our organizational processes to deal with that small number of cases instead of optimizing it to deal with the vast majority of other cases.

David: I had this discussion, actually, with a senior leader in an organization. We were trying to do some work to see how far that organization would go with just culture. What would it take for that senior leader, the chief executive, to come out and say: while I'm the chief executive, no person in this organization will ever be individually punished for a safety incident?

Learning is most important. We own at least 50% of every action. At a company level, we own more than half of every outcome in the organization, including the action of every individual. We're not going to blame you any more than we blame ourselves. 

That was the answer that the CEO gave me. But what about the one person, that one time, who does something that we just absolutely can't let go? My answer to that was exactly the same: you're going to compromise the 99% of other learning situations because of that fear that people will think that they're in the 1%.

Drew, I might throw another one at you. Since you put me on the spot, I'm going to put you on the spot. A type of incident that gets tried occasionally in the courts is this: you'll have, for example, an apprentice on their first day on the job, and a group of workers throw the apprentice in a barrel and set them on fire, as a bit of an initiation joke on the first day of work. Every now and then, an apprentice will get really badly burnt or really badly injured in one of these sorts of pranks.

The work crew—the actual employees involved—will be in front of the courts for some wilful harm being caused. What do you think about that situation in the context of a just culture?

Drew: Oh, thank you for putting me on the spot with that one. That's a hard one. My immediate thought is that it's a very close parallel to the problem of zero tolerance for bullying in schoolyards.

Now, I'm not an expert on education. I'm not an expert on bullying. If we do have educators listening, please forgive me for any factual mistakes I make. My big personal concern is that it is really hard for a teacher to come along and know who was the bully and who was the person being bullied. It may have been the poor, terrified person who threw the first punch. A zero tolerance system is likely at best to exclude both of them from the school, and at worst to actively encourage the bullies to provoke other people into getting themselves expelled.

I think that when it comes to the apprentices, we may be in a bit of a similar situation. What happened is inexcusable. What happened should not have happened. There is probably some moral right and wrong in that situation. Our ability to come along and arbitrate the right or wrong with enough certainty that we're willing to impose criminal consequences is beyond the capacity of our investigation systems. 

The goal should be to make sure that the bullying stops happening, not to make sure that the guilty in this particular case happen to get precisely the right justice. I'm willing for people that I think are morally repugnant to have got away with this act of bullying, if that's what it takes, down the line, to protect the next apprentice from being bullied or from being unfairly blamed for what's happened.

David: Yeah. I think I have the same view. It's one of those things where the conditions in the organization have made that practice a norm, even an expectation, or an acceptable way of working. You don't deal with any of those social practices with a point-in-time, harsh punishment. You might get some short-term behavioral modification, but you probably don't change the system in the long term.

I hope listeners appreciate those couple of practical examples. They show the kinds of debates and discussions that you're likely to have if you start trying to talk about moving towards a more just culture. These are the types of things that managers will say: what about this situation? What about that situation?

If you've got any other situations that you'd like us to throw an opinion around on, or to canvass and crowdsource opinions on, just throw your situation up in the LinkedIn comments on these episodes and we'll see if other listeners want to weigh in.

Drew: David, I'm certainly going to enjoy the next couple of episodes, if this is the way it's going to go. I really do hope that, based on when we're recording this, we will have a chance to respond to people's comments on LinkedIn in the episodes before we finish the set. 

We're recording this on Wednesday. It's going to go up on Monday. I reckon it'll be the next Wednesday before we get the next chapter recorded at our current rate. Do feel free to drop some comments in, throw some curly ones.

Basically, the framework for our discussion, if you're reading along, is going to follow the third edition of the book. It has five main chapters. Chapter one is about retributive and restorative just cultures. Chapter two, why do people break rules? Chapter three is about safety reporting and honest disclosure. Chapter four is about the criminalization of human error. Chapter five is a forward-looking one: what's the right thing to do?

We're going to break those roughly in half. We're not quite sure where the exact division in half point is going to be.

David: We're also not quite sure who gets to draw the line, whether it's you or me. 

Drew: Very meta. David, do you have some practical takeaways from the episode?

David: Yeah, I do. I think we want to talk about these couple of things because there's a lot of talk about open cultures, psychological safety, and just cultures.

I think it is really important to our efforts in safety in organizations. Hopefully, you find this useful. We also want to be practical all the way through, and also evidence-based. We've talked before about the differences between books and peer-reviewed journal articles, and we'll pull in the empirical findings along the way.

I do want to mention some practical takeaways, even if it's just from the preface of this book. A just culture, at least in the way that Sidney Dekker's books talk about it, is not about having a model or a process where you determine whether individuals are culpable or not. It is actually about having, or working towards, a broader culture of trust, learning, and accountability.

Sidney would also argue that in just cultures, you do not apply retributive justice. I think Sidney, correct me if I'm wrong, would argue that it serves no useful purpose in any way, shape, or form.

Drew: David, I'm going to give a slight contradiction there. I think this is something new to the third edition. Sid actually says that he thinks that most people are probably going to pick some compromise between the two of them, recognising that full restorative justice may not be practical to implement in your organization. 

He basically says don't be dogmatic. That part of having a just culture is recognizing that what looks like justice to one person is not justice to another. That we need to have these open discussions as part of a just culture of recognizing that we may have different ideas about what is just, what is needed for justice, about the justice of any particular action, or how any particular incident was handled. That having open discussions about it and recognizing that we have different stories to tell about justice itself is part of a just culture.

David: A practical opportunity, just on the spot there, number one: in your organization, do you give the workers involved in an incident a chance to do the final review and sign-off of your internal investigation reports? That's the story that the organization is putting on the table. How close is it to the story that the people involved believe should be told?

Hopefully, we can get some really practical examples like that on the way through and create some questions in the minds of yourselves, as listeners.

Drew, just culture asks what went wrong rather than who caused it. I think that's a good question to ask. Then, I pulled out some challenges as I went through the preface: where are some of the bigger, more challenging ideas in there?

I want to list four things, Drew, and get your thoughts just before we wrap up. Some of the challenges that you'll have to solve in progressing a just culture in your organization, which are questions like: how do you navigate this societal expectation to assign blame for accidents? It's everywhere in all societies around the world. 

How are you going to get people to report and disclose openly? How are you going to take care of the individuals who are involved in accidents and may already be hurt, either emotionally or physically, in some way? How are you going to reconcile the multiple and overlapping interpretations of the same event in the organization by different people with different perspectives?

Drew: David, I'll put an overarching challenge there. How do you do all of these things in the spirit of listening to and recognizing the opinions of everyone, knowing that some of those opinions are going to be much less restorative and open to not blaming than yours? How do you respectfully include the person who thinks that individuals should be blamed? How do you respectfully listen to and hear the story of someone who is telling a story where it's someone else's fault?

David: Please join in the discussion on LinkedIn like Drew mentioned earlier. Throw us some curly ones. We're recording these episodes in real time. If you want us to explain any particular points in the book, if you've read the next five chapters or so, or if you want us to throw around some ideas, we can respond to them on the podcast. You may as well take advantage of us being really, really behind in getting ahead of the recording schedule, and challenge us a bit in real time.

Drew: That's it for this week. We hope you found the episode thought-provoking and ultimately useful in shaping the safety of work in your own organization.

Easiest way to reach us is to comment on LinkedIn or talk to us directly by sending comments, questions, or ideas for future episodes to feedback@safetyofwork.com.