The Safety of Work

Ep.48 What are the missing links between investigating incidents and learning from incidents?

Episode Summary

On this episode of Safety of Work, we discuss the missing links between investigating and learning from incidents.

Episode Notes

This discussion is building off last week’s episode where we focused on blame. We thought we would dig a little deeper into how people learn from incidents. 

We use the paper, What is Learning? A Review of the Safety Literature to Define Learning from Incidents, Accidents, and Disasters, in order to frame our chat.

 


Quotes:

“Learning from accidents is pretty much the oldest type of safety work that exists...and almost from the very start, people have been complaining after accidents about people’s failure to learn from previous accidents.”

“This paper really confirms the answer that we gave last week to our question about, ‘does blame sort of get in the way of learning?’”

“You’ve got to admit that you are wrong now in order to become correct in the future.”

 

Resources:

What is Learning? A Review of the Safety Literature to Define Learning from Incidents, Accidents, and Disasters

Feedback@safetyofwork.com

Episode Transcription

Drew: You’re listening to the Safety of Work Podcast episode 48. Today we are asking the question, what are the missing links between investigating incidents and learning from incidents? Let’s get started.

Hey, everybody. My name is Drew Rae. I’m here with David Provan, and we’re from the Safety Science Innovation Lab at Griffith University. In each episode of the podcast, we ask a question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it. David, what’s today’s question?

David: Drew, in episode 39, we talked about whether our incident investigations actually find the root causes. Last week, as part of a request, we spoke about the relationship between blame and learning. I thought this week we’d go a little bit deeper in relation to learning from incidents. I think this is a fairly essential topic for the safety profession and safety management activities in the organization. We talk a lot about learning, and particularly in relation to learning after incidents.

We have artifacts in our business. Companies will have lessons learned reports and a whole raft of processes to try to facilitate this sort of learning following an incident. I know some organizations, such as Shell, have full-time roles purely dedicated to attempting to embed the learnings from incidents that occur inside their organization and other organizations within their industry.

We know organizations have dedicated incident investigators and incident review meetings. But too often, we also hear, Drew—I’m not sure what you hear, but I hear—a lot of safety professionals and others within organizations talking about their belief that we just don’t learn enough. We seem to ask ourselves questions about why we’re having repeat incidents. There have been books written about this. Listeners might be familiar with Professor Hopkins’ book about the Texas City Refinery Accident in 2005. The title of that book was Failure to Learn.

I thought it’d be a really good opportunity to just talk about learning from incidents in relation to the literature.

Drew: Good choice of topic, David. As far as I can tell, learning from accidents is pretty much the oldest type of safety work that exists. The earliest stuff written about safety has been about accidents. Almost from the very start, people have been complaining after an accident about people’s failure to learn from previous accidents. There have been whole special issues: Safety Science and the Journal of Contingencies and Crisis Management have both had issues about learning from incidents and failing to learn from incidents.

If you look through this academic work, the common approach is to assume that learning doesn’t happen because the investigations aren’t good enough. There’s a lot of focus by academics on improving accident models, on encouraging investigators to use the improved accident models, and on increasing the depth of investigations to get beyond surface causes and down to latent conditions.

My personal opinion though—and I think we’re going to see this established in some of the literature that’s reviewed today—is that organizations don’t really want to learn nearly as much as they say they do. We’re happy to say we want to learn, but what we usually mean by that is we’re willing to make small technical changes. We’re willing to make adjustments to equipment or adjustments to work processes.

But it’s really hard for an organization to stop and rethink the whole way they operate or the way they think about themselves and about the right way to do business. That potential for broad organizational learning, I don’t think is nearly as big as we would like to think it is.

David: Drew, I’m reminded, as you were talking there, of the cartoon where in the first cell it’s “who wants change?” and everyone’s got their hands up, and in the second cell it’s “who wants to change?” and no one puts their hand up. I think learning from safety incidents is a subset of broader organizational learning. The paper we’re going to talk about today actually uses a broader organizational theory of learning by Argyris and Schön.

They had this generic learning scheme first proposed in the late 1970s. It’s still widely regarded today, and it’s a pretty simple model. It’s probably the simple models that stand the test of time, Drew. They say organizational learning requires a learner, a learning process, and a learning product. We’re going to speak about that as we go through the paper today: who’s the learner, what’s the learning process, and what’s the learning product?

According to this theory, Drew, all organizational learning starts with the collection of information that forms the learning product. We would think of this as the investigation process that leads to an investigation report. Then it’s followed by some kind of processing, sharing, and interpretation of that information within the organization, and then the storing of that information. The institutionalization of it generates the actual learning outcomes that we try to see and observe.

Drew: The other thing we should probably mention here is where most people have actually heard of Argyris’ work: the idea of single-loop and double-loop learning. Inside this learning process, the idea is that most of the time, we do it in a very simple, almost thermostat-like way. We’ve got some governing variables, the things that we want to keep within acceptable limits. We’ve got some strategies we use to manage those variables, and each strategy has consequences. The consequences might be intended or unintended.

When our strategies are working well, we’re really just monitoring the situation, picking a strategy, and getting back on track. We’re still learning. We’re getting better at implementing the strategies. We’re getting better at choosing the strategies. But Argyris says that there’s this other type of learning where we question and rethink that whole process. We examine the strategies we’re using and throw some out. We find some new strategies. We might even question which variables we’re monitoring and managing, and try to change that set of variables.

That double-loop learning requires us to be very open and reflective about our own reasoning, and in particular, to publicly test the assumptions that we’re making. Argyris says that if you ask most people how they learn or how they run their organization, that’s exactly what they’d say they do. But in practice, most people are much more defensive than they think they are. They hold a lot of assumptions and beliefs, and they keep those private and untested. It’s not safe to say exactly what you really think, particularly in times of organizational crisis.

As a result, people tend to lock into that single-loop learning, where they are refining the way they’re doing things now rather than challenging the underlying assumptions.

David: Drew, I think the traditional approach to learning from incidents that we apply in our organizations goes something along the lines of: we do a careful analysis or a careful incident investigation. A small group of people distills and formulates the lessons to be learned. We send this around our company on a one- or two-page form, some kind of flash report, safety alert, or lessons learned report. And then we think that this magically leads to some kind of future prevention of incidents.

But then as the title of this episode may suggest, Drew, there might be some missing links in this formula that we apply within our company. Let’s dive in and talk about the paper.

Drew: Okay. The paper we’re going to talk about is one that David found. It’s called What Is Learning? A Review of the Safety Literature to Define Learning from Incidents, Accidents and Disasters. This is actually a new journal for us. I don’t think we’ve covered one from the Journal of Contingencies and Crisis Management before. New for us on the podcast, very old journal. The paper is from 2014. The authors are Linda Drupsteen and Frank Guldenmund. Frank is actually one of the editors of Safety Science now.

David: Drew, I did a bit of looking because I wasn’t aware that I’d read anything from Linda Drupsteen before. It looks like she has published a whole lot of articles relating to this topic of learning from incidents. It seems she took quite an interest in the topic around 2012–2015. She has published a number of other papers, for example, titled Critical Steps in Learning From Incidents, Why do organizations not learn from incidents?, and Assessing propensity to learn from safety-related events.

Five or six papers, all published in really reputable journals. Some of it is empirical work, and all published with really reputable co-authors as well, as second and third authors. It was really pleasing to find this little body of work around the topic.

Drew, as mentioned in the title, this was a literature review. We’ve used literature reviews on the podcast before because they’re a good way of getting broadly across a particular question. The authors had three aims with this paper. They wanted to contribute to a more comprehensive knowledge of what the literature says about learning from incidents. They wanted to use this model of organizational learning to identify possible explanations for ineffective or inefficient learning within safety. And they wanted to establish where the gaps in the research were, so that there could be a forward research agenda to address some of the things that we don’t really know.

Drew: That’s a pretty cool way to do a literature review: take an existing model of how things work, then find all of the papers, fit them into that model, and show where the gaps are or where there’s any controversy. The method is fairly standard. The authors searched databases of papers looking for mentions of organizations, learning, incidents and accidents, and safety.

And then they stripped out anything to do with healthcare, things with keywords like nursing or healthcare, just because there’s a huge amount of very specific information on patient safety that doesn’t really fit in with the rest of the literature. It’s its own bubble.

David: Drew, they found 113 papers. They also pulled in three unpublished Ph.D. theses because they considered them essential for the review. I suppose as an aside, Drew, you get involved in supervising a lot of Ph.D. and master’s research. How much useful research is left inside unpublished Ph.D. and master’s theses that just don’t get read?

Drew: The habit of making sure that every Ph.D. student produces a couple of published academic papers is really fairly recent. Certainly, I encourage all of my own students to basically publish everything before they produce their final thesis. But that’s by no means the way a lot of people do their Ph.D.

Particularly if someone does their thesis and then leaves academia, or goes straight on to their next work and doesn’t have time to go back and publish those papers, then it just sits in the Ph.D. thesis. Most search engines don’t pick up theses; you usually need to go directly to the institution. If you don’t know that the thesis exists, you’re hardly likely to find it by just an internet search.

There’s some really great stuff. Your typical published paper is 10,000 words at most; a typical thesis is 100,000 words. Some of these go into much, much greater depth. Inside that thesis, there is a whole study. There’s a whole deep literature review of the topic, stuff that maybe no one has made public.

David: Yeah, Drew, I was fortunate to be able to publish most of my stuff, or at least most of my Ph.D. It’s really helpful. I sometimes see Google Scholar starting to pick up some of these institutional catalogs of Ph.D. theses now, but it’s by no means easy to get your hands on them.

First of all, they end up with 81 articles after they stripped out the healthcare ones and the ones that weren’t in English. Then they took another 21 out because they weren’t really covering learning; they were talking about risk management or safety culture or some other topic, not learning. And then 14 more papers were excluded because they were specific to a particular incident or a particular lesson and didn’t really have generalizable information. They were left with 44 articles and the three theses.

Drew, the first thing they did was analyze the papers against empirical classification criteria. I hadn’t seen this before. Do you want to just describe that step of what they did?

Drew: The idea is there’s a little bit of a research life cycle that starts off with observing and describing the world, then turning the world into theories (sometimes called the inductive step). Then there’s turning that theory into hypotheses (sometimes called the deductive step), testing those hypotheses, and then evaluating them.

That’s a social science version of the scientific method. It’s a bit of a stretch to say that research really happens like that. But at least in this paper, they seemed to be able to sort things fairly well. I think the main motivation for doing that is really just to separate out stuff that is directly empirical versus stuff which is, to describe it kindly, theory building. The less kind way is to say it’s really just writing essays: talking about the topic without really investigating it.

David: Yeah, Drew, the paper did make the statement that gets made in a lot of systematic literature reviews. I think we’ve said this a number of times on the podcast: the authors said that even though there was quite a lot written on the topic, there was a lack of actual empirical research that had been done to test particular aspects of the theories.

In this literature review, they broadly covered three areas, Drew. They talked about the learning of the lessons: the incident becomes the input to learning, with the identification of the causes and the actual incident investigation process, and how that lesson-learning happens right up front. And then there’s the end-to-end process of, like we said earlier, processing that learning information, sharing it, and how it gets embedded into the organization.

They also spoke from the literature about the conditions for learning, or the barriers to learning. The three we’re going to talk about are organizational trust, which we spoke about last week; they actually extensively reference the paper we spoke about last week, Drew, within this paper. Also, the impact of the incident, which is sometimes referred to as the incident intensity: the more serious and scarier the incident, the more propensity there is for learning. It doesn’t mean there’s more learning, it just means there’s more propensity for it. And then the people involved, and how the different actors in the learning process contribute to learning or not.

Drew, we might go through each one of these one at a time and talk about the findings from the literature review. Do you want to maybe kick off with the learning processes?

Drew: Sure. One thing that they found fairly quickly in the literature review is that there are lots of models that people propose for learning from incidents. Models are probably overstating it; these tend to just be steps of a process. They lay out the path that you need to follow from having the incident to learning having happened.

Most of those models have different numbers of steps; the shortest has just a handful, the longest has 13. Broadly speaking, there’s collecting the information, analyzing it, selecting the lessons, planning communication, doing the communication, implementing the recommendations or the learnings, and evaluating. The first few of those steps are the ones that we do in the accident investigation itself: collecting information, analyzing it, and selecting lessons. That’s investigating the incident and coming up with the recommendations.

But it’s those other steps, the communicating of that knowledge, where the literature is very skeptical about the way we currently do it. Because it’s not enough just to get through the incident and have the recommendations; there’s a need to share them. Using lessons-learned systems or written alerts like safety bulletins is generally ineffective for communicating the lessons from an accident. The general conclusion is that there needs to be some sort of face-to-face sharing of experiences.

In fact, the literature highlights person-to-person sharing, one-on-one storytelling, happening locally throughout the organization rather than the broadcast systems that we tend to use.

David: Yeah, absolutely, Drew. We’ll come back to that in the practical takeaways and talk a little bit more about it as well, because I think they are quite blunt in the review of the literature. Just to restate the point that you made: the way that we typically communicate and share the lessons through our systems and through short-form written documents is largely ineffective, or just doesn’t have the opportunity to be effective, because that’s not the way that we learn.

Drew: One of the things they highlight, which is really fairly speculative in the literature specifically about accidents but is well established in the general learning literature, is that you need some sort of participation for learning. Even with face-to-face learning, if someone just comes and tells you about something, you’re unlikely to learn. You learn by engaging with the material: actually working with the material and generating new ideas about it, rather than just having it given to you or sent to you in an email.

David: I think what we’re talking about here in terms of learning is actually change, either belief change or behavior change as a result: someone updating their mental model of work or risk, or someone actually changing the things that they do. We’re not just talking about people remembering certain facts or pieces of knowledge. We’re trying to motivate change. Definitely, Drew, I think it’s well established in the learning literature that people have to be participants in it.

Drew: David, this might be something that you meant to get on to when we get to the practical takeaways. One of the things that I’ve seen that I really liked; you don’t just see this organization-wide, but I’ve seen local supervisors use it. Some supervisors will get sent a safety alert or a bulletin and they’ll just read it out, often in a monotone, going through the exact words they have to say, ticking it off, and then getting on to the real stuff.

But I have seen people pick up a safety bulletin, say a couple of things about it, and then ask their team: this obviously doesn’t apply directly to us, but where can we find the parallel to learn from this incident? They actually get their work team engaged in talking about it, discussing it, and trying to generate new ideas from a safety bulletin.

David: Definitely. That process, done effectively, would create a real possibility for learning. It’s a good segue there, Drew, into the findings of the review in relation to the conditions for learning. The first condition I mentioned, which we spoke about last week, was organizational trust. Like I said, this paper really confirms the answer that we gave last week to our question about whether blame gets in the way of learning. Here they just say, look, learning is enabled by a culture where openness and trust are valued. We spoke about that last week.

In situations where trust is absent, it said that created putrid political processes. We spoke quite a lot about social and political processes in episode 39 about root causes, Drew, and also about power conflicts, anxiety, and blame. We’ve covered those topics on the podcast before. But it’s just another reinforcement, even when we’re talking about learning from incidents after the investigation process has occurred, that the organizational climate around trust will be a big factor in whether people generate new ideas and change the way that they think and the things that they do.

Drew: This next bit, though, David, I found a little bit counterintuitive, because they do say that the size of an incident, in terms of its magnitude, severity, importance, and even outside scrutiny, is something that increases the amount of learning. That’s one of the conditions. Which almost runs counter to that idea of trust, because you’d expect that when you’ve got outside scrutiny and a more major incident, there is less willingness to talk about it, less trust. But the findings seem to be that the more important an issue was, and the more outside interest there was from media or other stakeholders, the more likely there was to be at least change, if not learning.

David: I think, Drew, there were a few pieces of information reflected from the literature review in this paper around that. One of the possibilities they spoke to, from one of the papers that was reviewed, was that the bigger the event, the more conversation goes on in the organization about it. That sharing step happens even if it’s rumors or speculation, inside the investigation process or outside; there’s just more talk going on about that event, which creates more opportunities for learning, idea generation, and change to occur. Whereas something that’s quite minor doesn’t get much airtime in the organization.

I think that was one where I could say, oh yeah, I can understand that when we think about learning as a social process and the sharing of information about the incident between people, face to face around water coolers and in lunchrooms. I could see that creating an opportunity for more learning.

Drew: Yeah, that makes sense.

David: The only other thing, Drew, is that they contradicted that a little bit. We haven’t really talked about near-miss events here, learning from incidents versus learning from near incidents, if you like. They also said that, like what you said, the political pressures and management not wanting to admit any failings mean that politicization of the process increases with scrutiny as well.

So there are these constraining and enabling forces going on. The bigger the event, the more talk and the more opportunity for learning. But also, the bigger the event, the more likely things are to be shut down and tied up in a neat bow.

Drew: We should move on to talk about the people involved. There are some really interesting things said about people in different positions along that learning chain, starting with the actual source of the information, the people that you’re talking to.

David: Yeah, Drew. They talk first of all about how, before you can learn, you need to find out that the incident occurred. We talk about reporting, or what some people call a reporting culture. Your people are your eyes and ears, and you only know what they tell you. It depends on someone actually reporting the incident in the first place, and then, when you start your investigation, on interviewing the people involved.

That’s the main source of your information. The majority of your information about the incident that feeds right in at the start of the funnel into the learning process comes from the people who are involved.

Drew: Then we have a whole heap of almost contradictory information about the investigators of the incident. Everyone seems to agree that the investigators are important. But then we have this wish list of conditions for the investigators that they need to be independent and have expertise. But then they also need to have close knowledge of the work processes, the sector that’s being investigated, and about safety. Outsiders who are also complete insiders.

David: Yeah. It’s like Dave Woods’ four I’s for the safety profession. It involves being independent, informed, and informative. You’ve got to be separate, but you’ve got to be close. I think that’s a tension in safety roles, and it’s a tension in investigation roles. But I think their point here, Drew, which I agree with, is that again we’re right at the start of the learning process. The quality of the investigation creates the possibility for the double-loop learning that we spoke about. It creates a whole raft of possibilities to bring certain stakeholders into the investigation process and actually start to enable that learning to occur as the investigation is taking place.

There’s a whole lot that investigators can do through the investigation process to really create the conditions for learning to be maximized, or at least given the best opportunity of occurring. I think we send people off on two-day investigation training courses, but I’m not sure our organizations really have investigators as strong and capable as they should have.

Drew: Something I’ve noticed when talking to people who are concerned about the quality of investigations in their organization is that very often they are concerned about the correct application of the model and the correct formatting of the report. They want high-quality investigations in the sense of very consistent, neat things. But there are more important attributes here than just consistency.

I think one of the big ones is the ability to produce something credible, so that the rest of the organization recognizes it as the truth about their organization. But it also needs just enough of a reflective component, laying bare the assumptions that are made when we think about the work, so that we can have conversations that move beyond just restoring the work to the way it was before.

David: Yeah. And Drew, they talk particularly about moving the organization beyond just restoring it. The literature talks a lot about management support, which will come as no surprise to our listeners. But it’s saying that it’s managers who create the opportunities for learning and change in the organization. Basically, management support is required for any learning to really take place inside the organization. If management is not interested in thinking differently about the way that the business functions, then the business is not going to change the way that it functions.

Drew: That’s probably a good point then to talk about what we need from the managers. The big thing is that the people who are then told about the lessons need to be capable of receiving and responding to the lessons as feedback, as something that they can reflect on for their own performance and behavior. The literature lists a number of reasons why they might not be: because they are too busy, because they are overconfident in the way things are now, because they have fixed ways of thinking, or because of a fear of being wrong.

Something that gets mentioned in the double-loop learning is this contradiction between the ability to change and learn and fear of being wrong. You’ve got to admit that you are wrong now in order to become correct in the future. If you’re not willing to admit that you might be wrong now, then you have no chance of correcting and changing what you’re currently thinking.

David: Drew, we’ve gone through the findings there about the learning processes. In the safety literature these are mostly sequential kinds of processes, but there’s a heavy social process that relates to whether learning actually happens. Not whether the process gets followed, but whether learning actually happens. And then there are these conditions around trust, incident intensity, and the different people or actors involved in the play.

In terms of a general discussion now, before we get on to the practical takeaways, let’s go back to the model and tie it all back to this organizational model of the learning product, the learning process, and the learner. There are many factors that help or hinder, like we’ve mentioned on the way through. But tying back to the Argyris and Schön model, the learning product is the incident description itself and the causes of that incident.

When we say what do we want people to learn, we actually want people to learn about the incident, learn about what caused it, and therefore what could be done to prevent it in the future. Do you agree, Drew? Is that the way you think about the learning product? What are we actually trying to get people to understand?

Drew: In a sense, I think just learning directly about what happened is not always useful for learning. We need to have something inside that product, which is something that we don’t know already. If people pick up that report and there’s nothing in it that surprises them, then they’re not going to learn anything from it. It’s got to contain new knowledge in that investigation.

If it’s just the same as 50 other investigation reports of similar incidents, and they all come to the same conclusions, then it’s like having already read every textbook on a subject and being handed yet another textbook on that subject. It’s not going to tell you something new.

David: Yeah, absolutely. I just had a thought then, Drew: we haven’t really talked about the difference between actions and recommendations on the one hand, and learning on the other. I definitely have those two things distinct in my mind. An investigation should come up with actions to address the risks involved that obviously need to be better managed. But the learning might not just be telling everyone about those actions.

Learning will be what we learn about our organization: as a result of this, how do we need to change the way the organization runs? That could be quite different from just the list of actions. Would you agree with that? Would you disagree with that?

Drew: I think that if an investigation finds immediate things that need to be fixed to make the situation safe, then it’s got to have actions. I don’t think every investigation finds those things. But if there is a particular broken piece of equipment or broken process design that needs to be fixed, then yes, you’ve got to have immediate actions.

The important thing is that taking those actions shouldn’t be the point at which we close off the opportunity to learn. I’d actually prefer that the investigation didn’t have recommendations beyond that, and that it instead has things that we have learned or observed. Then other people can interact with those things, learn from them, and turn them into recommendations.

David: Yeah. I like that description that you’ve given there, Drew. Good advice in conjunction with the episode we did on root cause investigations. That’s the learning product we’re talking about, the lessons themselves. And then there’s the learning process, which is split down further. I summarized that into four steps, Drew: acquire, share, use, and store.

The incident investigation is the knowledge acquisition process. Like you said, Drew, we’re finding out things we didn’t know before. We’re acquiring knowledge. We’re getting a different understanding of the way that our organization functions. Then we’ve got this communication and knowledge sharing step, which is crucial: how that information flows from the investigation and gets discussed, debated, and engaged with by the others in the organization who we want the learning to take place with.

And then there’s a follow-up process, Drew. We’ve talked a lot about evaluation on the podcast as well; this is the process needed to actually see whether that knowledge is being used, to see whether things are being done in a way that’s consistent with the lessons that have been learned.

Then there’s a way of storing that information as part of the organization’s collective memory. I suppose that’s the institutionalization step: how does that knowledge go from a collective awareness inside the heads of people to something that’s known by the organization, so that next year and the year after, when new people come into the business, they get the benefit of that learning long after the incident occurred?

Drew: It’s important to point out here that that storing of information is in memory, not in some dusty lessons-learned system. David, we were talking about Ph.D. student projects that never make it to the light. There’s a great master’s project sitting in an archive somewhere called Lessons Learned About Lessons Learned Systems. The biggest lesson about lessons-learned systems is that no one learns lessons from them.

In one whole organization there, every project was required to type three keywords about the new project into the lessons-learned system and implement the things they found in their project plan. The whole organization just developed this skill of choosing keywords that produced nothing, just to make sure they weren’t locked into this bureaucratic system.

David: Yup, that’s how organizations work. That’s maybe why learning is difficult: people learn something when they’re motivated to learn and when they have some kind of belief that they need to learn something new. The last piece there is the learner. Like we said: the learning product, the learning process, and then the learner themselves.

I put there, Drew, and you’ve said this to me before: how would you judge the success of an incident investigation? I think you once told me that the success of an incident investigation would be related to how much the incident investigators themselves actually learned, in terms of things that they didn’t know before the investigation started.

Drew: Yeah. That’s the learning product: the investigator needs to learn something. And then that learning needs to carry through the learning process right down to the end users. We test whether the investigation is a success by whether the investigators learned something. We test whether the learning process has been a success by whether there has been some change within the organization that embeds that learning.

David: This literature really focuses on what I think is a good first practical takeaway: it’s the managers that need to lead the change and embed the learning. These latent conditions and this double-loop learning are driven by management. I think in organizations, a lot of the time, we think about the lessons learned as being things that the people who are exposed to the risk, the workforce, need to do.

We create these lessons learned reports. We send them around the organization. Like you said, Drew, the supervisor stands up in front of a toolbox talk and tries to make sure that the workers have learned the lessons. I think the safety literature here says it’s actually about managers learning the lessons, not so much about the workers learning the lessons.

Drew: Yeah. That makes a lot of sense.

David: I think if you’re in an organization, the first practical takeaway I’d do is send your lessons learned reports to your management and ask them to tell you what they’re going to do differently as a result. Rather than firing them at your workforce.

Drew: Dave, am I allowed to refer to [...] material in this podcast?

David: Yeah.

Drew: There’s something you wrote recently about communication processes that I really liked, which was the difference between one-way communication, two-way communication, and open communication. In an immature bureaucratic organization, you’ve got this idea of sending information out. It’s information sharing, not communication. When you’re more advanced, it’s two-way: you’re sending information out, and you’re listening to what people are telling you.

For learning from incidents, what we really want is open communication. We want people throughout the organization talking about this incident, engaging with the information from it, and individually and in teams deciding what they’re going to do about it locally.

David: Yeah, I agree, Drew. I learned something from a colleague, Adam Johns, when I talked about top-down communication. He referred to it as one-to-many broadcast communication, like announcing over the radio or TV in your organization. The way our lessons-learned systems have worked in organizations is a bit like that broadcast model: we write something down, send it to 2,000 people, and tell them all to read the same thing.

I think, like you said, Drew, real learning happens when there’s an open and engaging face-to-face dialogue process (or as close to it as we can get these days), some ideation process that lets people get motivated, reflective, and engaged around what might need to be different. That’s very different from the way many of our listeners’ organizations would think about what they do when they get their investigation report and how they try to make sure the lessons get learned.

Drew: The second practical takeaway is something that we’ve already touched on, which is that the incident investigation process has to contain within it some sort of acquired knowledge. We’ve got to have investigations which finish by telling us something new that we can then start learning from. The potential for the organization to reflect on itself starts with an investigation which lays bare its assumptions, its thinking, and its beliefs, and creates that potential for reflection beyond just immediate actions to restore the situation.

David: Yeah. The paper also talked, Drew, about the importance of different people being engaged in the investigation process to start that learning and reflection: managers as well as the people involved in the incident, getting really broad stakeholder engagement in the investigation process, as opposed to one investigator just running the process and writing the report themselves.

Drew: David, how would you manage that resource-wise? Does that mean that we just need to do fewer investigations in order to do them at high quality?

David: That might well be the case in an organization. You might want to design some kind of criteria for the learning potential of an event. I don’t know exactly what that looks like. It might not be as blunt as just actual or potential consequences, but it would be some criteria for what the learning opportunity looks like with a particular event. And if you find something that fits that learning opportunity, then you run this really deep, different investigation and learning process around it. That’d be how I’d think about it off the top of my head, I suppose, Drew.

Absolutely, it’s worth investing the time and resources. These are things that have happened in your business that give you a chance to reflect on how you run your business. I know some people who are probably more in the Safety-II space would talk about learning from normal work, not just learning from incidents. But hey, look, even with the incidents and the things that have happened, I think we’re leaving a lot on the table in our organizations at the moment.

Drew: The final takeaway we’ve got here is the evaluation process, which is actually testing to see if learning has occurred. That follow-up is something that I don’t think any of us do particularly well: going back some time after an incident and after the sharing, and seeing what is actually different. Ideally doing that before we have another incident that prompts us to go and look, and find out that the recommendations from the last one really didn’t make any change.

David: Drew, we mentioned in episode 39, and I had a lot of really positive feedback on this comment, thinking about the outcomes that you want your corrective actions and recommendations to achieve. If you recall, in that episode we suggested something like: by all means, list the action or the corrective action, but also list the outcome you’re trying to get to, whether that’s to improve communication or whatever you’re trying to do.

I’d take that same approach here with evaluating lessons. Be really clear, out of the investigation, about what the things are that you think need to be different in your organization. They’re the things that people need to learn and put into place. And then be as specific as you can, so you can go and actually test whether those outcomes are in fact in place in your business or not.

You have to define them before you start the learning process so then you can evaluate whether they’ve been learned or not.

Drew: Thanks for that, David. Is there anything that we want to hear from our listeners?

David: There are two really interesting points here that might be different from the way we think about running safety a lot of the time. Anyone who’s got face-to-face learning processes to create learning from incidents, or maybe people who are experimenting with learning teams around incidents or things like that: are you seeing a change in the learning that’s happening in your organization?

Secondary to that: do you have any of these evaluation processes in place to test how well you’re learning from incidents or not? I suppose lots of people will be sending out lessons-learned reports and closing out actions in their incident database, but I’m really curious to hear from listeners who are playing with different learning processes and learning evaluation processes.

Drew: David, our question for this week was what are the missing links between investigating incidents and learning from incidents?

David: I think there are two missing links here, Drew, in our normal incident investigation and learning process. The number one missing link is the way that we share information, and sharing information consistent with organizational learning theory. The other is the way that we don’t really follow up on whether learning has been embedded in our business. We get to the investigation report and then we close out the actions. But those are the two missing links: the way we share and the way we follow up.

Drew: Okay. That’s it for this week. We hope you found this episode thought-provoking and useful in shaping the safety of work in your own organization. As always, contact us on LinkedIn or send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com