The Safety of Work

Ep. 47 Does individual blame lessen the ability to learn from failure?

Episode Summary

On this episode of the Safety of Work podcast, we dig into the question of whether individual blame lessens the ability to learn from failure. This was a special request, and we are excited to discuss the relationship between learning and assigning culpability for accidents. Specifically, we discuss the safety community’s willingness to talk about blame.

Episode Notes

This is a particularly controversial topic, so we are going to attempt to be as neutral as possible. We refer to the sources A Review of Literature: Individual Blame vs. Organizational Function Logics in Accident Analysis and Antecedents and Consequences of Organizational Silence to help frame our discussion.

 


Quotes:

“ ‘Employee voice’ covers a whole range of behaviors that people can do in organizations that are discretionary.”

“Ironically, when they spoke to a number of managers...as part of the study, managers believed they were encouraging employees to speak up, but on the other hand, they’re employing all sorts of informal tactics to silence this dissent.”

“There’s so many broader forces in their organization that are seeking resolution...that if you enable an approach where an individual can be blamed, then I think that will be the dominant logic in your investigation…”

 

Resources:

A Review of Literature: Individual Blame vs. Organizational Function Logics in Accident Analysis

Antecedents and Consequences of Organizational Silence

feedback@safetyofwork.com

Episode Transcription

David: You're listening to the Safety of Work podcast, episode 47. Today, we're asking the question: does individual blame lessen the ability to learn from failure? Let’s get started.

Hey, everybody. My name is David Provan. I'm here with Drew Rae, and we're from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work podcast. In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it.

We had a request to do this episode about blame and learning. Drew, I’m really excited to hear what you’ve got to say about this topic. What’s today’s question and what’s the background to the issue?

Drew: Today’s question is all about the relationship between learning and assigning culpability for accidents. Culpability is a funny word. Usually, we call it accountability if we agree with it, blame if we don’t agree with it, and culpability if we’re trying to sound neutral. I acknowledge that we’re stepping into a really controversial area where there’s a lot of argument back and forth. We’re going to try to be as neutral as possible and try to pin things down to things that we can discuss empirically.

One of the key differences between different approaches to safety is how willing we are to talk about blame. Very often this isn’t actually an empirical argument. It’s a theoretical, ontological argument about whether human error exists, or it’s a moral argument about how much control we have over our actions and how much we should blame individuals versus systems.

But people try to support that moral and theoretical argument with some fairly practical claims that I think we can test. The one we’re going to talk about today is the claim about whether blame helps or hinders organizational learning. 

At one end of the spectrum, we have the idea that any sort of blame is bad for learning. We often hear people—particularly at the Safety II or Safety Differently end of things—saying that when you allow blame, you create an environment that suppresses reporting, suppresses the sharing of information, and stops people from being free to discuss ideas. Even the potential for blame discourages what we need to do to learn.

Right at the other end of the spectrum, we’ve got the belief that individual accountability is necessary for learning. That’s the idea that unless actions have consequences, people don’t learn to take the right action. If actions have negative consequences, whether to ourselves or to other people, then we learn not to do those actions. If there’s no accountability, then people are never going to learn the right thing to do.

This debate happens a lot on the internet, LinkedIn, books, and conferences. David, do you want to take us through the practical consequences for an organization about where you stand in the debate?

David: Yeah, Drew. We’re going to cover a fair bit of ground in this episode. We’re going to go through institutional logics, we’re going to talk about accident causation, sociology, and maybe even a bit of criminology. But practically, when we get into organizations, it’s obviously directly relevant for incident investigation.

This idea of individual blame versus learning makes its way into our incident investigation systems, into how we explain the causes of incidents, and therefore how we think about improvement actions or organizational learning as a result. It makes its way into our risk management programs, particularly where we have things like critical controls and life-saving rules, and how we think about situations where risk controls aren’t in place or life-saving rules are not followed.

It’s central to a lot of our practices in relation to behavioral safety or safety culture: how much we think about individual action, beliefs, values, attention, and care of individuals, as opposed to the way that the organization functions. We see this blame, failure, and learning language a lot. We see it in the HOP literature about systems driving behavior, management response mattering, and blame fixing nothing; learning is vital.

It’s really central to a lot of these ideas that are increasingly popular in our organizations, Drew. To jump ahead a little bit, most organizations think they do both. Most organizations think that they hold people accountable where it makes sense, but that they also learn more broadly across the organization. That’s what we’re going to go through: is it one, is it the other, or can you do both?

Drew, do you want to talk a little bit about what the safety science landscape looks like in relation to blame and learning?

Drew: This one’s interesting because I don’t think it’s fair to say that in the academic literature, there’s a lot of debate about accountability and blame. It’s all fairly one-sided. But it’s one-sided more as a consensus of the way the academics see things rather than a consensus of evidence. Most of the stuff that’s published about blame, accountability, and safety makes this claim that blame is bad for learning, but the papers themselves are commentary papers. They’re presenting ideas and arguments. They’re drawing on evidence, but they’re not creating that evidence themselves.

There are really few studies that try to actually answer the question of whether blame is bad for learning rather than just asserting it. There are lots of people citing other people making the same claim rather than citing direct studies.

If you think about it, it’s really a very big question to try to answer all at once. You’re drawing a link between something fairly abstract and whether it’s good or bad for something as big as learning, which is really hard to measure.

There is a little bit of stuff out there. There is some stuff that focuses on the relationship between reporting, investigations, and culture. There is some stuff that focuses on measuring reporting. Probably the biggest empirical stuff is several examples where there’s been a very strict policy shift out in the real world to no-blame reporting. That’s happened at various times in aviation and air traffic control.

There have been follow-up studies that show that introducing those no-blame reporting systems does increase voluntary reporting. But that’s not exactly the same question as whether blame changes learning. That’s more a link between a no-blame policy and reporting.

David: We’re talking about opposites a lot, Drew, one or the other. The first paper that I wanted to talk about (and shared with you) is about this frame we hold when we’re looking at problems in our organization, or when we specifically look at a safety incident that has occurred. The paper is titled A Review of Literature: Individual Blame vs. Organizational Function Logics in Accident Analysis.

The author is Catino from the University of Milan. The paper was published in 2008 in the Journal of Contingencies and Crisis Management. It divides this debate into two views. It says that organizations can hold an organizational function logic. Like you said, Drew, this is the one generally favored by scientists looking at accidents; it insists that accidents are created broadly by the way that the organization functions, not by the actions of an individual.

The second view is the individual blame logic, which is the one that generally wins out in real life, in broader society, and within organizations. It goes on to say that there’s always someone who has done something that created the event, and that’s where the organization focuses its effort.

Drew: Catino doesn’t actually provide evidence that it’s the individual blame logic that wins in real life. But regular listeners might remember episode 39, where we asked whether accident investigations find root causes. If you haven’t heard that episode, it might be worth listening back to it.

In that episode, we talked about the problem of only finding what you can fix. One of the things we looked at is how often real-world investigations do end up focusing on individuals rather than on systemic findings and changes.

David: Drew, this paper said that there are these two approaches that organizations can take, and then it gave examples of certain situations that happen in organizations and described those scenarios and the accident narrative from the two different perspectives.

One of the things that I liked is that it pulled out a number of assumptions behind the individual blame logic. It said that if your organization is looking to hold an individual accountable for something that they’ve done or caused in the business, then there are at least five things that you also have to believe, if that’s the approach you’re going to say will move the organization forward.

I’ll just go through these, Drew, and get your thoughts on them. The first is that people are free agents: they can choose safe or unsafe behaviors. We see that a lot in safety slogans: safety is a choice that you make; if you want to work around here, that’s the choice that you make. It says that people are free agents, and they can make that choice.

The second is that incidents are caused by a linear sequence of cause and effect events. If something happens, then something else happens and then a person does something right or wrong and then there’s the incident.

The third is that there is actually an individual person who’s responsible for the incident, so you can isolate out that individual action as the material and direct contributor to what’s happened.

The fourth, a little bit more abstractly, Drew, is the assumption that it provides a sense of justice, so that you’re able to move on in an emotionally satisfying way: we know what the problem was, and we’ve provided a proportionate consequence to the individual who acted in a certain way outside the expectations of stakeholders.

The fifth is that it’s a legally and economically convenient way to hold an individual responsible. It means that the people who get to make the decision about what gets done (the senior managers, who probably weren’t involved at the point where the incident occurred) get to maintain their organizational structures, rules, processes, and power systems, and are able to swiftly move on with their reputation intact and their organization not needing to change too dramatically.

Drew, if organizations say, look, it’s perfectly appropriate for us to hold individuals accountable, those are the five things that they would also have to believe hold true.

Drew: Yeah. I’m not certain that all of those are necessarily assumptions, but they are things that generally do go together with that world view. One thing that I think Catino draws out, which is consistent with the broader academic literature on this topic, is that the same accident can appear objectively different depending on which logic you take.

You can pick a logic, a set of assumptions, and beliefs, and the accident will make sense under that particular framing. It would also make sense under the other framing if you chose that point of view. Really, what that means is that there has to be some choice between the two logics upfront. You can’t let the raw events of the accident somehow objectively tell you which logic you have to follow.

Finding individual blame validates and reinforces the current organization and its systems. Finding problems with the organizational systems lets the individual off the hook. You can’t logically follow down both paths and say the individual is to blame and they were working within a broken system.

Because our investigations get shaped by this logic, there’s no such thing as a neutral investigation. We’ve really got to use something other than the investigation itself to make this choice upfront: are we going to go into this with a willingness to blame individuals, or are we going to go into this with a focus on the organizational functions?

David: And Drew, if people want to read a really good example of how the same incident can be described in two very different ways, there’s a good paper from 1998 called A Tale of Two Stories and it’s by Cook, Woods, and Miller. They talk about a patient safety incident in quite some detail from, like I said, two different narrative perspectives.

This is the distinction that we’re going to try and test empirically today, Drew. If we choose between these two logics, what price do we pay in relation to learning if we look for individual blame? The question is: does individual blame lessen the ability to learn from failure? To answer it, we had to step outside of the safety science literature. Drew, do you want to introduce the paper that we’re going to talk about now?

Drew: This one’s a paper called Antecedents and Consequences of Organisational Silence: An Empirical Investigation. I always like it when they put into the title exactly what they’re doing. The authors of this paper are Maria Vakola and Dimitris Bouradas, both from the Athens University of Economics and Business. The paper was published in the journal Employee Relations in 2005.

It’s a little bit of an older paper, but it’s still reasonably current in terms of the thinking and the evidence. It gives a little bit of background on this idea of organizational silence. There’s this whole research area in organizational studies called employee voice. Employee voice covers a whole range of behaviors that people can do in organizations that are discretionary. They’re not inherently part of your job description. They’re not mandatory things you have to do.

They include things like speaking up and reporting problems that you see, putting forward ideas to improve your work area, joining in and contributing to conversations and debates outside your work area in the broader business, and offering expertise that isn’t part of your current role description. You haven’t been hired because of this expertise, but it’s other stuff that you know, and you offer it to the organization.

You can be a good employee just working in your little bubble doing your job without any employee voice. But generally speaking, employee voice is good for the organization as a whole. It means that everyone is contributing more than their immediate role description.

There’s a lot of research that tries to discover what encourages and discourages employee voice. And when you don’t have employee voice, that’s organizational silence.

David: Drew, I thought this was a good paper that you’d found because, like we said earlier, this idea of individual accountability and blame is quite a large, somewhat abstract concept, and we’re trying to link that to organizational learning, which is another large, abstract concept. That means we want to talk about something in the middle, which is this idea of organizational silence or employee voice. Because I think most people involved in this debate or discussion would agree that one of the key things we need in our organization in order to learn is the ability for people to speak openly and freely about matters.

People aren’t hired to participate in incident investigations. But if they’re involved in an incident and we can get them to share their views and ideas openly, then we would form the view that open and diverse conversation is likely to be a strong input for organizational learning.

That’s the main claim in the literature: for organizational learning to occur, people need to be able to tell their story. Stakeholders need to openly explore all of these different perspectives surrounding an incident, and then collectively work together on how we fix it and how we move on. This is talked about in the literature as a just culture, a reporting culture, or an open culture. In the case of this study, they talk about either a silence climate or a voice climate as two types of climates in the organization.

This paper quite clearly situates its study within the broader organizational studies literature, which says that withholding information and ideas can undermine organizational decision-making, error correction, organizational learning, and innovation processes. Drew, we again see that sometimes we need to go to the organizational studies literature to find these sorts of papers. The things that we talk about in safety are generally talked about within these disciplines decades before they become popular in safety.

In the ‘70s, Argyris argued that there are really powerful norms and defensive routines within organizations that often prevent employees from saying what they feel or what they know. In 2000, another paper referred to in this study, by Morrison and Milliken, said that when a culture of silence exists, organization members (these are your frontline workers) are caught in this paradox where they know the truth about certain issues and they know the problems, yet they dare not speak that truth to their organization. So things aren’t able to be learned and resolved.

Drew, before we get into the actual details of the method in this study, the front part of this paper proposes that silence is an outcome of managers’ attitudes and beliefs. They really want to test this idea of a climate of voice and a climate of silence by thinking about how managers in the organization are creating that particular climate. Ironically, when they spoke to a number of managers as part of the study, managers believed that they were encouraging employees to speak up, but on the other hand, they were employing all sorts of informal tactics to silence dissent.

This is what we see practically in organizations: they say we’ll only hold an individual accountable when it absolutely makes sense for us to do that, but what that actually becomes is some sort of informal tactic to silence a whole lot of voice on a whole lot of matters. Drew, do you want to talk a little bit about the methodology of this paper?

Drew: Sure. Just before I go into the exact details of the method, it always fascinates me—I guess this is one reason why employee voice is such a fascinating area of research and why so many people spend time on it—that managers don’t know what people aren’t telling them. We consistently have this phenomenon, and it occurs in every organization: the picture managers have of the rest of the organization is not the same as the picture that people lower in the organization have.

That should be totally unsurprising. It’s the result of every single safety climate survey: managers give a different score from the rest of the organization. But despite that, you consistently ask managers how well they know what’s going on, and it’s really rare to find a manager who doesn’t believe they’ve got their finger totally on the pulse: that people of course speak the truth to them, that of course they know what’s going on. Despite all this consistent evidence that management very seldom knows exactly what’s going on, somehow managers believe that people are constantly telling them the truth.

The method in this case very much follows this top-down approach, which I’m not entirely comfortable with; seeing things purely as a product of management rather than of structural forces is a little bit simplistic. Anyway, that’s the approach they took. They’re using three scales in a survey within a single company of 677 people.

This is a fairly large sample within the company, and it’s fairly representative of the people there. They’re surveying people about top management attitudes to silence, supervisors’ attitudes to silence and communication opportunities, and employee behaviors in relation to silence. Essentially, they’re polling everyone to break out these different levels and see how the different levels relate to each other.

It’s also testing two outcome variables—organizational commitment and job satisfaction. I think really, it’s just a way of showing that silence is something that matters more than just for its own sake. David, do you want to speak to the findings?

David: Yeah, Drew. The findings were maybe not surprising for listeners. There was a correlation between top management attitudes and supervisors’ attitudes to silence. What that essentially means is that middle and frontline managers will look to the behaviors and the messages of top management and adopt similar approaches at the supervisory level.

And then there’s a correlation between employee silence behaviors and communication opportunities. What that means is that if there are opportunities for employees to contribute their voice, through meetings, processes, reporting systems, and things like that, then those opportunities correlate with employees actually using their voice.

But the finding, Drew, that I think is really relevant to us is that the strongest relationship they found was between supervisors’ attitudes to silence and employee silence behavior. Drew, in one of our earlier episodes, which I don’t have the number of on hand, we spoke about our stop work authority research; it’s probably episode 30 or 40. We spoke about how what is really important for a person to put their hand up and stop the job for safety is their immediate supervisor’s reaction, or how they think their immediate supervisor will react.

This is sort of the same finding in relation to employee voice in this context as well. Employees are more influenced by this micro climate of silence or voice between them and their supervisor than by top management or [...] formal communication opportunities.

Drew, like you said, with the outcomes, this organizational studies literature was less concerned with safety learning within businesses and more with organizational commitment and job satisfaction. It said that if there’s a climate of silence and employees aren’t able to have a voice, then that correlates with lower commitment to the business and lower job satisfaction.

Drew: David, when we put these episodes together, obviously we can really only talk about one or two papers directly. There’s probably a lot of trust involved in connecting all of the different dots between specifically what these studies show and the question that we’re asking in the episode. One of the things that I’m reading between the gaps here, which I think some of the other papers that we looked at show, is this idea of where the employees get this attitude from in the first place.

There’s an assumption in this paper that there are particular supervisor behaviors which are encouraging the employees to have that feeling that they can’t speak up, that they will be blamed. If we look at some of the surrounding literature and ask what sorts of behaviors create that, it’s things like individuals getting blamed when things go wrong, or when you raise a problem and instead of the supervisors treating it as an organizational problem, they treat you as a problem for raising the problem.

David: An extension of that, Drew, is that when people are blaming others or providing disciplinary consequences to people who’ve been involved in a breach of a rule or an incident, it’s the supervisor who’s delivering that disciplinary consequence to their employee. Whether they believe in it or not, whether they’ve been told by a senior manager that they need to hold that person accountable, they are the ones delivering the message to their teams and their people as the immediate supervisor. And what employees see are the actions of their supervisors.

Drew: And if you look at the actual questions in these surveys, they are things like: I feel that I can say something to my supervisor without there being disciplinary consequences; I feel like I’m not at risk of being individually blamed for things going wrong. You wonder where these attitudes come from when you have formal systems that are assigning that blame, and then supervisors, as you say, passing on the message. That’s the missing link that’s not directly provided in this study, but it’s definitely supported by the surrounding literature.

David: Yeah, absolutely, Drew. This study goes on to talk about all of the things that are related to this climate of silence. Just to connect the silence to the blame: one of the things that the study showed, through the questions that they asked, is that when people fear the consequences of speaking up, truth-telling, or making some kind of suggestion, that contributes to the climate of silence. That’s where this relationship between blame and not speaking up comes from. There’s a whole raft of other things that the authors claim go badly in the organization as a result of these climates of silence.

Drew, based on the findings of this study, there are some important implications that we can discuss, and then we can maybe move on to the next interesting conversation that I wanted to put you on the spot about. I suppose we conclude from this study that top managers and supervisors have to work to create a workplace where people feel safe to express their views, and where they’re encouraged to offer their ideas and suggestions.

If employees perceive that their managers, and more importantly their supervisors, either aren’t interested or will blame them for something they’ve done, then they’ll probably choose to remain silent. And if our employees are choosing to remain silent, then we can’t possibly have the information that we need for organizational learning.

Drew: The other interesting one that the authors drew out was the importance of creating opportunities for speaking-up behavior. They saw this not just as a negative thing that gets suppressed by management and supervisor behavior, but also as a lack of opportunity. They recommend creating communication opportunities such as formal systems for the exchange or transfer of information.

David, I guess that’s a whole other set of topics we could go into: problems with reporting and formal communication systems within organizations.

David: Yeah, absolutely. We’ve described these two ends of the spectrum on the way through this episode. We’ve talked about the individual blame logic, which says let’s look for the individual involved and hold them accountable. And then, on the other hand, we’ve got: let’s look at the way the organization functions; it’s not about the individual, it’s about the system that they’re within.

What I wanted to talk about now is many of our listeners and I suspect most organizations would say that they do both. They identify individual accountability where it shows up. Where it’s obvious to the organization that there is an individual who needs to be held accountable for a certain action, and they also identify the organizational factors involved, so they work on both.

Many managers and many safety professionals would say that’s the perfect approach to take. When it’s obvious that we need to blame someone, we blame them. When it’s obvious that someone else would have made the same mistake in that situation, then it’s unfair to blame them, and let’s look at the system. Drew, what are your thoughts about this idea that the approach organizations should take is actually somewhere in the middle?

Drew: First thing I want to say is that certainly, this has been a central theme in a lot of the original literature that was proposing reforms to the way we investigate accidents. There are some really big authors who have tried to take this middle line approach between accountability and no blame. I think James Reason is one of the ones who very squarely straddled that fence.

David: In the early 2000s, there was Amy Edmondson, who’s very popular now for her work on psychological safety and does amazing stuff. There’s a Harvard Business Review article where she says that no, it’s perfectly fine to have clear, unacceptable behaviors. If people perform those unacceptable behaviors, then they should absolutely be held accountable, and that shouldn’t get in the way of a psychologically safe workplace.

Drew: The central argument that they tend to rest on is the idea that it’s possible to draw clear lines. Those clear lines create predictability, which creates a sense of fairness. That’s where their version of the idea of justice and just culture comes from. If people can predict what will happen when they behave in certain ways, when management is behaving predictably, then even if there is blame, it is predictable blame. The problem is when the blame is not predictable, when the blame is unfair. That’s the argument that they make.

But I think there’s a much more important question about how you set up those systems and what practically happens, given the stuff we talked about earlier with Catino and many others who say that it’s how you go into the investigation that matters; that basically determines whether you’re going to blame an individual or blame the system. Whereas these draw-the-line approaches assume that you can do the investigation first and then decide afterward which side of the line it falls on.

David: Yeah. As someone like Sidney Dekker would say, a lot of it is about who gets to draw the line, particularly with these just culture approaches and accountability or culpability models, which I think all evolved out of Reason’s work. Many organizations today would have these models that say: we do our incident investigation, and then the only time we blame someone is after we’ve gone through this very disciplined process of classifying and categorizing the individual behaviors into willful misconduct, an error, a simple mistake, a lapse, or something similar. Then we attribute a level of blame proportionate to the culpability of the individual.

But, Drew, I tried hard to see whether there was any support for this in the empirical literature, and there’s not a lot. Practically, what I see in organizations (at least in my opinion) is that when organizations say they’re doing both, they’re really only doing the individual blame thing. There are so many broader forces in their organization that are seeking a resolution to accidents and issues, and so many power and hierarchical issues, that if you enable an approach where an individual can be blamed, then I think that will be the dominant logic in your investigation, unless you really don’t create the possibility for that.

Drew: I think we have actually seen empirical evidence for this. The studies out there that classify the outcomes of incident investigations consistently find far more findings and recommendations that blame individuals, which shows that organizations are holding individuals to account through training and corrective actions. That’s consistent even in organizations that have shifted towards these frameworks that supposedly implement the just culture idea of drawing clear lines and only blaming individuals if they have crossed that line.

David: Most of those classifications will be made by an HR manager and another senior manager, who will be making those decisions outside the context of what it’s like to perform the role of the individual they’re making those decisions about. Drew, we’re probably getting a little too far from the empirical literature and into some of our own commentary. By way of a story, I did make an effort in an organization at one point to remove any individual blame and employment consequences associated with life-saving rules.

The narrative that I wanted that organization to take was that if any of our employees ever find themselves in a situation where they are working outside of these life-saving rules, then we own that as an organization. The most important thing at that point in time is to understand everything we can about that situation and what needs to change in the organization as a result. Because of that, we will never blame an individual in any way, nor will there ever be any employment consequences for a person found to be in breach of a life-saving rule, because it’s too important for us not to learn everything we can about that situation.

As much work as we’d done in that organization, I couldn’t get that narrative across the line. The organization was still of the belief that it could do both.

Drew: I think this is why there is such a difference between the scientific consensus and the practitioner consensus: those ideas of promising not to blame someone just appear so radical. People are so afraid of finding themselves in a situation where someone desperately does need to be blamed, and there’s a rule preventing them from blaming.

I find it really fascinating that our psychological need to make sure that people face consequences can be far greater than our need to make sure that the organization learns from the incident, even though we might have rhetoric that says otherwise. That’s clearly where the psychological forces are: they make us just way too uncomfortable letting someone get away with something that they clearly should be blamed for.

David: Yes, Drew. I quite enjoyed that little ending, and we’re going to go into practical takeaways now. We said that there are two approaches—individual blame and organizational functions—and what we wanted to look at is the relationship between individual blame and learning. To do that, we presented an empirical study of employee voice and organizational silence, which said that if you’ve got blame in the organization, then you are likely to create a climate of silence, which means you are unlikely to get free and truthful information to make good decisions and learn.

That’s the narrative we’ve been through in the research. What are the practical takeaways that people should take out of this episode?

Drew: I think the first one is just a straightforward answer to the question, which is that on the issue of individual blame, yes, the weight of evidence suggests that individual blame will lessen your ability to identify opportunities to improve and to actually make those improvements in your organization. Yes, individual blame does lessen learning.

The second one is that a completely blameless approach might not be possible anyway, just because of the legal systems that we operate in, the social systems that we operate in, and the psychological forces at play in an investigation.

The third one is that we shouldn’t deceive ourselves. If we choose to go down the path of a system that has the opportunity to make individuals culpable, accountable, or blamed (whatever we want to call it) for accidents, we need to do that in the understanding that we are directly making a choice which is bad for organizational learning, and we’re going to do it anyway. There may be other good reasons to do it. It may be that the legal system forces us to do it, the clients force us to do it, or we feel we can’t possibly take responsibility for letting someone stay working for us once they’ve made this mistake.

That’s all fine. Just don’t pretend that you’re doing it because it’s overall good for learning. We’re doing it despite the fact that we know it’s going to hurt learning.

David: Drew, I want to throw two more practical suggestions in there. The first is a suggestion for safety professionals: this microclimate between employees and their supervisors doesn’t have to be the same climate that exists between yourself as a safety professional and the frontline workforce.

Finding ways to build open communication between the safety department and the workforce, even in the absence of a management and supervisory climate for voice, means that you can still get information out of the front line, and then find ways to use that in your decision-making and input it into other processes to actually contribute to organizational learning. It’s about getting that information from the frontline in a different way if it’s not coming up through the management chain.

The other one is more for supervisors and managers, Drew: ideas about what they can do to help promote this climate of voice. When an incident investigation lobs on the table, how do they ask questions of the incident investigator, how do they facilitate that conversation, and do they ask about what the individuals did?

Or do they start by asking questions of themselves as management, something like: what could I have done as a leader that could have changed your situation? What resources weren’t available in relation to this job that should have been, in terms of time, people, and equipment? What did we do as an organization to create conflicts between cost, productivity, and safety that created conditions where people weren’t able to work in a different way?

How managers, supervisors, and even safety professionals show some vulnerability is all part of what the paper we discussed today talked about in creating a climate of voice.

Drew: But one final thing that I’ll throw in there, David, is that I think this is probably the single most common misunderstanding about Safety Differently: the way it encourages us to treat frontline people as experts in their work. People often assume, oh, they’re saying that frontline people are always the experts. No, that’s self-evidently not true.

But what is definitely true is that if you treat someone as an expert, if you treat them with that sort of respect, that’s what gets them to tell you stuff. Regardless of whether they’re the expert, they will know things that you don’t know. Encouraging them to speak by showing that deference to their expertise is how you encourage employee voice.

David: Yeah, Drew, I think Edgar Schein would call that humble inquiry. We could probably do a whole episode on humble inquiry. That might be a fun one to do. Drew, invitations for our listeners?

Drew: I guess, the biggest thing is we’re really interested in hearing how people are going with the way they’re approaching investigations in their organizations. We know that this is very often an area where safety practitioners do try to improve their work and do try to improve the way their organization handles things. We know that it can be a very frustrating area to try to achieve change.

We’re very interested in hearing your own stories about what you have tried to change, what resistance it has met, and what successes you have had.

David: Yeah. How far have you got with reducing or removing individual blame from some of your safety practices and processes in your organization? And have you seen any changes in the way that your organization is able to learn, as the research might suggest?

Drew, the question for this week: does individual blame lessen the ability to learn from failure? I think you gave an answer at the start of the practical takeaways, but do you want to reinforce your point?

Drew: I think the answer is yes, and not just because of the simple idea that blame discourages reporting. There’s a whole range of mechanisms by which blame influences employee voice attitudes and behaviors, and all of those things that blame suppresses are things that we need in order to be learning organizations.

David: Thanks, Drew. That’s it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. We’d love to hear what you thought of this episode, as we get pretty close now, Drew, to episode 50. Send any comments, questions, or ideas for future episodes directly to us at feedback@safetyofwork.com.