The paper we are discussing today is about the language used in incident reports and whether it influences the recommendations made based on those reports. We run through the findings of this paper, which involved writing three different reports on the same incident - each containing the same facts, but written with a different perspective and focus.
This paper reveals some really interesting findings, and it would be valuable for companies to take notice and possibly change the way they implement incident report recommendations.
Quotes:
“All of the information in every report is factual, all of the information is about the same real incident that happened.” Drew Rae
“These are plausibly three different reports that are written for that same incident but they’re in very different styles, they highlight different facts and they emphasize different things.” Drew Rae
“Incident reports could be doing so much more for us in terms of broader safety in the organization.” David Provan
“From the same basic facts, what you select to highlight in the report and what story you use to tell seems to be leading us toward a particular recommendation.” Drew Rae
Resources:
Griffith University Safety Science Innovation Lab
Accident Report Interpretation Paper
Episode 18 - Do PowerPoint Slides count as a safety hazard?
David: You're listening to the Safety of Work podcast episode 83. Today we're asking the question, does the language we use during accident investigations influence the recommendations that are produced? Let's get started.
Hi, everybody. My name’s David Provan. I’m here with Drew Rae, and we’re from the Safety Science Innovation Lab at Griffith University in Australia. Welcome to the Safety of Work podcast. In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it.
Drew, a confession right upfront. I've been absolutely slammed with work, and it was my responsibility to prepare an episode. This is a little bit late, so I'm taking the easy route. We picked up a piece of research that was done at the lab; Drew was one of the supervisors of the student and also a co-author of the paper. That shortcut some of the preparation for today.
In this episode, we're going to focus on a paper by Dr. Derek Heraghty. Derek has just been awarded his Ph.D. from Griffith University. Congratulations, Derek. That's only been publicly announced in the last few weeks.
We thought that rather than interviewing him right away about the findings of his entire Ph.D., which we'll do in a later episode, we'll give him a bit of a break (I suppose) from spending time with Drew, and talk about one of his earlier papers. Drew, do you want to give us some background for this paper?
Drew: Sure. I'll leave the details of the whole Ph.D. to the episode where we talk about it. The broad idea was that Derek wanted to try, from both a research and a practical perspective, to transform the way incident investigations were done in his organization. He wanted to see what happens if people actually adopt a lot of the rhetoric of the past few years about moving away from retributive towards more restorative justice, and moving away from human error style investigations to much more systems-focused investigations.
Before we go into that side of things, he had a question about the language of investigations. Derek's idea was that just the way we talk about investigating incidents and accidents shapes the way we think about them. We've already done it: once we start saying we're going to investigate and we say it's an incident, it sounds so much like police language. Why are we doing an investigation, not an exploration? Why are we calling it an incident rather than just something that happened?
There's a lot of work in psychology that comes at it from this same point of view, that the language we use about things can shape our thinking and our concepts of them. We'll get a bit into that and some of the cool experiments that have been done. David, do you want to talk at all about how we do incident investigations now before we dive into it? You've probably had more practical experience than me in just doing investigations.
David: Actually, I haven't done any investigation for nearly 10 years, Drew. The last one I did was a really, really serious investigation and I haven't done one since. I guess it's worth talking a little bit about maybe common industry experience around investigations. Is it even possible to talk about common industry experiences? Maybe I'm a bit out of touch.
Normally an organization will go, something's happened. There's either an actual or potential quite significant consequence to persons, like serious injury or fatality, so we need to investigate what went wrong so that we can fix it. As for the typical aspects of an investigation, depending on the methodology you use, you'll want to do a sequence of events or a description of the incident, like what happened at the start of the shift, the start of the day, or an hour before the work, and ask, do we understand the sequence of events and what happened?
Then once we know what happened, it allows us to talk about what the causes were. What were the things in that sequence of events that were out of place, out of shape, wrong, noncompliant, or missed that caused the incident? Then once we can land on a set of causes, or in some organizations even a single cause, we can go straight to recommendations and actions. What do we need to do?
Depending again on methodology, there are different frameworks for how to classify causes. We won't get into a big cause debate, but basically, that's about it, Drew. You've got the event, you want to understand the sequence or the description, you want to land on what you think was wrong that caused it, and then do something about it. Is that the oversimplification you were looking for?
Drew: I think that's pretty much what I was after, David. I guess I wanted to get you to talk about that leading into the fact that there is hardly any research about accident investigations, which might seem a bit surprising to our listeners because there is a huge amount of academic literature published about investigations. Most of it is telling you how to do investigations or it's using the results of investigations. But there's very little that looks at how and why organizations go about doing it.
We don't have any generalized knowledge. We just have this folklore about how to do it. We've got consultants, and advocates, and gurus telling us different ways to do it. We've got academics publishing different accident investigation models and saying why aren't you using the models, but very little research about it.
What little there is basically offers two broad ideas. You've got your engineering idea, which is that we're doing investigations to fix things. We start from something's gone wrong, we systematically examine the causes, and then all of our recommendations are ways of fixing the system so that whatever went wrong doesn't go wrong again. It's almost like there's a single truth, or at least close to a single truth, about what the causes are.
Our job is to uncover them and then use that uncovering in order to come up with recommendations. Then more from a sociology point of view, we've got this idea that investigations are how organizations recover from accidents. Accidents are disturbing events, accidents create a lot of uncertainty, they create a lot of distress. Our investigations are a way of going from uncertainty back into certainty by following a well-defined set of processes and creating some sort of closure or resolution.
If you think in terms of steps, it looks the same. But the difference is that from the engineering point of view, there is a real truth out there. In the sociological process, we don't need a real truth, except that possibly coming up with a narrative that everyone's happy with might actually be a path to closure. Finding a single narrative might actually be some sort of sense-making or healing exercise for the organization. That's more from a functional point of view, not because that single narrative actually exists.
David: Drew, I like the opening couple of sentences of this paper. I'll just quote them. "How stories are framed can greatly influence the readers’ interpretation of the event and the actions taken as a result. The analysis of accidents and the accident report style used to share the story provides a prime example of this." That's a pretty clear intro of what we're about to read in the paper.
Basically, what we're saying here is that how we tell the story of the accident in the investigation frames the actions. I suppose this study is trying to say, well, if we change the way the story is told, is it going to change the actions as a result?
Drew: Which, I think, regardless of what you think of investigations, seems a fairly logical thing to test out. David, let's go a little bit directly into the paper. The authors are Derek, Sidney Dekker, and myself. It's got a title and a subtitle. The title is Accident Report Interpretation. David, do you have the subtitle written down?
David: No, Drew. I'd left it there because I was so disappointed. I was like, Drew must have had the day off when coming up with the title for this paper, because longtime listeners will know that one of Drew's many, many talents is coming up with really interesting names for papers, like Safety Work versus The Safety of Work, and Safety Clutter. I was very disappointed, Drew, with this title. No offense, Derek.
Drew: I do have co-authors and I don't always have to insist on stamping my own voice all over the quality stuff that they're doing.
David: Just my stuff. There you go. Maybe the going-in quality was better for this one than with my normal papers.
Drew: I did have to double-check. The paper as published doesn't have a subtitle. The paper is just Accident Report Interpretation.
David: You're a bit disappointed with yourself, aren't you, Drew?
Drew: Just a little, yeah. Let's run through what the paper actually does. Let's start with the hypothesis that's behind the paper. The idea is, we present the same accident in three different ways, and we give one of those three versions to each person. Each person only sees one version of the same accident. Then we ask them, what recommendations would you make?
We're giving them the report with everything there, all the details but the recommendations are missing. So they've got to read the facts and then decide what they would recommend. But what they don't know is that they're getting the story told in a particular way. The idea is these people who were given different reports, do they come up with different recommendations or the same recommendations? Because if everything is driven by the facts, then they should just give the same recommendations. If everything is driven by how you tell the story, they should give different recommendations.
David: Drew, there's a large sample population too: 93 people who work for construction companies. I think there were a few different construction companies involved, three different companies. And this was done by Drew and Derek, and I'm guessing Sidney didn't do a lot either.
Drew: Let's be totally honest here, David. This study was done while Derek was working over in the UK. I was advising in the background and helping him write it up. But everything here, when we say we did, it means Derek did the work.
David: Great. So 93 people across three different companies, with a sample of people from each of those companies: engineers, construction managers, human resources, safety advisors. Different roles, but all people who would see incident investigations through the course of their work. We just took an alphabetical list of those people and assigned them to one of three different conditions; basically, set them up to receive one of three different reports. Drew, do you want to keep going?
Drew: Yeah, can you tell us about the three different reports?
David: Yeah, there were three conditions, so three report variants. Like I said, each of the participants was randomly allocated one of the three. Each person, as Drew mentioned earlier, only ever saw one accident report. They only ever thought, I'm being shown a report. What they were given was one of three reports of the same incident.
Basically, the reports differed in language, structure, and style, and therefore also had a few slight differences in information as well. Variant one used what was described as a more traditional, linear type of story. It was called, I think, the human error variant, and it looked more at the specific activities of the people involved in the event.
Report variant two focused on issues existing within the organizational system; it was more of a systems investigation report. Then report variant three was more along the lines of multiple accounts, multiple stories, multiple perspectives on the incident. Drew, have I described those report variants okay?
Drew: Yes. One thing you might be wondering is exactly what the difference is between each of these reports. If we did this as a very, very strict psychology experiment and we were only concerned precisely with language, what we might do is take a report and just change one of the words for an equivalent word throughout the report. There are real studies that do things like that: everywhere the report mentions the word worker, change it to the word victim, a very precise change.
The way we did it in this study is we wrote up three separate accounts. These accounts describe the same incident, but they're entirely different. I don't want to go so far as to say mistakes, but perhaps one of the weaknesses in this study is that because the different styles lend themselves to emphasizing different things, there are facts which are highlighted in some of the reports where you need to read between the lines of the other reports to understand those same facts. They're spelled out in one, not spelled out in the other.
All of the information in every report is factual. All of the information in every report is about the same real incident that happened. These are plausibly three different reports that are written for that same incident, but they're in very different styles, they highlight different facts, and they emphasize different things.
David: It's a good clarification, Drew. I think Drew has explained how that matters, but when we get to the conclusions, in terms of some of the takeaways, maybe it doesn't matter so much. The participants got these reports, whichever one of the three they received.
At the end of the report, there was just a heading that said, provide three recommendations based on the information provided in this report. There were three blank boxes, and they just had to write three preventive or corrective actions that they would suggest as a result of the investigation report they'd just read. Drew, do we want to talk a little bit more about the three variants, or do we want to move on to the results?
Drew: Let's move on to the results and then we might go back and talk about how we think the variants influence those results.
David: Great. The only thing I'd probably add is that report variant one, from what I read in the paper, was actually the original real report from the company. We mentioned that Derek was trying to change investigations inside his company. The first account, if you like, Drew, the way I understand it, was the actual, original real incident account. The two subsequent accounts were the two other rewritten variants. Is that right?
Drew: Yes, that's correct. Derek didn't write the first report, but he was very familiar with the accident and wrote the second report.
David: It was a real report, which is kind of cool. Because knowing that you've actually given a real report from a company to 30 or 31 out of the 93 people and asked them what the actions are is really good feedback for you as an organization, and it's something that any of our listeners could do. [...] next incident investigation report, remove the actions, send it to 20 or 30 people, and ask them what they think the actions should be. Then do what we're just about to explain in terms of the analysis, and you might see how the way that you're doing your investigations is shaping the actions.
Drew, talking about these results. The first one, what was called the traditional approach, or the human- or blame-focused approach. Of those 99 or so actions that came back, so 33 people with three actions each, if that's about the limit of my math, but that seems about right, it looks like about 72% of those actions were system-focused and the other 28% were human- and blame-focused. You've got this 70/30 split, so 30% of the actions were about the individual and 70% of the actions were organizational and system.
Drew: Just to clarify what's meant by those two categories. A human- or blame-focused one is anything that specifically says we should punish the people involved, or that we should take some non-punitive action for the people involved, like training or reinforcing correct behavior. Unambiguously, the person has made a mistake, and we need to either fix the person or get rid of them. Everything else goes into system-focused.
That ranges from communicating about the accident or reviewing the risk register, to making changes to the physical workplace, making general changes to work practices, and making changes to systems and documents.
David: Given that about 30% of those recommendations are in that category, is it dangerous to assume, because each of the 30-odd people had to put down three different actions, that every single one of those people had a punish or retrain action as one of their three? It would probably be pretty close to that.
Drew: David, the raw data isn't in the paper and I don't have it in front of me, but I don't think that's in fact the case. I think we had quite a range of different people here. One of the reasons why we've got so many system-focused recommendations, even on the original report, which is very blame-focused, is that we've got a lot of people who are also very system-focused in the organization.
What I think is useful to take out of this first one is just how many system actions there are. It's not like we were hiding details in this report that made it impossible to come up with system-focused recommendations. There was enough detail there to come up with good system-focused recommendations. It's just that lots of people also wanted to punish the person or retrain the person.
David: I think also, Drew, it could have been interesting to just ask people for actions and not necessarily ask them for three. Because once you've gone past the one or two worker ones, you've got to find something else to put in there, maybe update a document, communicate, send a safety alert, or something like that. I think it would be really easy just to say, what recommendations should we make, and see how many people put down for themselves.
Drew: Yeah, we specifically fixed the number of recommendations because we didn't want the results to be weighted based on who put in lots of recommendations. If a few people put lots and lots of recommendations, those particular people's input would swamp the raw numbers. Whereas giving everyone exactly three gives everyone a fair chance to decide what is important; then the balance is based on what they think is important, rather than how many things they can think of.
David: A great example of the thought that goes into the research design and the trade-offs that you need to make with the design of the research.
Drew: Another interesting thing, which isn't highlighted a lot in the paper, was just how many of the recommendations were in fact themselves counterfactuals. For a huge number of people, their recommendation on the report was, the person should have done X. Basically, the recommendation for the report was time travel and do something different.
David: The wording of the investigations, yes. I saw those counterfactuals in the results, and there were a lot of them. I'd thought these would be recommendations we'd read as counterfactuals, not actual counterfactual statements.
Drew: There were actual counterfactual statements; the recommendation was that they should have done it differently.
David: In terms of a counterfactual, if one of the things called out in the incident report, and I don't know the details, was something like the person didn't isolate that equipment, then the recommendation would be to isolate equipment?
Drew: No, the recommendation was the person should have isolated.
David: Okay.
Drew: The recommendation was literally, they should have. What's really interesting is when we shift from that to a systems approach. We've got mostly the same facts, but now we are presenting them in quite a different way. This systems approach, even though it may add some extra things, keeps in all of these statements about the human. There is still exactly the same opportunity to blame the human based on what the human has done, but the number of recommendations that involve blaming the human goes down drastically.
David: It's down to about 9% targeting the individual and 91% targeting the system. Compared with 30/70, you're now at about 10/90. You're dropping by two-thirds the number of actions that target the individual by broadening the information to a systems perspective.
Drew: Yup, and none of these are now directly about punishing the individual. Not only are we no longer focusing on the individual, we've entirely stopped any idea of punishing the individual.
David: Drew, with that systems approach, the good thing is I think the example reports are in the appendix of the paper as well. It was published in Safety. I don't know if we mentioned that. Is that an open access journal?
Drew: Yes, we particularly published in Safety because it's open access.
David: Okay. All of our listeners will be able to get access to the paper, and you can actually read the accident reports. It might give you a good idea for a little experiment that you can do in your own organization: take an accident, write it up in a couple of different ways without recommendations, and hand it around to see what people say.
I think that's really fascinating, Drew, that just expanding the discussion of the incident beyond the individual and the immediate sequence of events into organizational factors or systemic factors has basically provided enough context that no one says, okay, we need to discipline this worker now.
Drew: Then we have the third variant. The third variant has the story told from multiple perspectives. It actually includes the worker's account of what happened. Once you have included that account, the number of actions blaming the worker goes down to five. We've gone from around 30, almost a third, down to eight, down to five.
Correspondingly, all of these recommendations have now shifted, and they're now mostly talking about practices that aren't even directly involved in the accident. We've got this category of coding, which is a recommendation that is a reinforcement of, or change to, a practice not directly involved in the immediate accident. You can think of that as systems thinking: what else is going on in the background that is influencing safety?
The first report has 32 things in that category. The second report has 28 in that category. The third report, once we've told the worker’s story, 50 things in that category. So half of all recommendations are no longer tied directly to the accident but generally trying to improve safety in the organization.
David: It's fascinating. It may be a logical result or make sense to people, but it's also a really helpful result for us in thinking about how our incident investigation process, and everything about investigation reports, could be doing so much more for us in terms of broader improvements to safety in the organization.
Drew: My immediate thought, David, is that our incident reports are possibly even taking away some of our capacity to see these broader issues when they try to reconcile the stories. This is something that I've heard anecdotally from people in organizations, and it's come up in a few different studies of accident investigations.
The worker, when they're telling their story, is revealing organizational issues. But that is not making it into the final report because it spoils the nice, clear narrative of what happened and who's at fault. Those broader issues that are raised by the workers are often our best opportunities to fix things that we've accidentally filtered out through this process, which gets me wondering, what if we didn't investigate?
What if we just got the worker to tell their story, and then took that and directly tried to make recommendations coming out of it? That might make a few people uncomfortable. But this seems to be evidence that we might actually get better recommendations than if we try to uncover the truth, create the one true narrative, and then come up with the recommendations.
David: I think there are so many different examples of incident stories where the operator's story is just so helpful and useful for the organization to learn from, but it gets so sanitized, if that's the right word, or at least narrowed, by the time it gets anywhere. It maybe gets into a long report somewhere in a system, but then whenever the incident gets talked about in the organization, it gets talked about in four or five PowerPoint slides. I think we did a very early episode, it may have been episode 18, on whether PowerPoint slides count as a safety hazard.
Drew, what takeaways would you have before we go on? Or do you want to go into the practical takeaways now?
Drew: Just before we do, I thought it would be worth acknowledging and talking about some different ways of interpreting these results. Derek does quite an interesting job in the paper. He says, okay, we've got these raw findings, what do we make of them? He gives us three possibilities. The first possibility is that there is not, in fact, anything going on here at all; that this is just random. I think we can dismiss that based on the internal evidence of the paper.
The difference between the groups is very [...]. It's not coming from the classification. When we did the analysis, we had multiple people independently classifying the recommendations and then reconciling them, and they agreed with each other very, very closely. So there's probably something real happening.
The second possibility is that we cooked the books. The reason why we've got these results is because we deliberately made the first report look really, really bad for the human, and we deliberately made the third report look really, really good for the human, and that just draws you in particular directions. That's always going to be a possibility; we can't totally rule it out when the reports contain different information.
We were trying to imitate a very human error-focused report, and we were trying to imitate a very sympathetic report. But the fact that there were people who read the first report and came up with very system-focused recommendations, and people who read the third report and still came up with blame-the-operator type recommendations, puts limits on just how much the books could have been cooked. I think even that possibility just shows that you can present the same facts and deliberately lead people towards particular recommendations by how you talk about them and present them, which is the point that Derek's trying to make.
From the same basic facts, what you select to highlight in the report and what story you choose to tell seems to be leading us towards particular recommendations. Then the third possibility is that we didn't cook the books; that this is in fact a thing that happens just from the presentation, not from any selection or highlighting effect. We can't really distinguish between the second possibility and the third; they could both be true.
To actually narrow it down, you'd need to do lots and lots of repeats of the same study, tweaking one tiny thing each time and seeing which of those tweaks makes a difference. I'm not even sure that it's worth doing that if you accept the basic point this is showing. We know we can influence people by how we write the reports. Exactly how to influence people in particular directions is probably going further than we need to go to make the general point here.
David: To make the practical point here. Drew, thanks for explaining that, because I think it's really helpful when the researchers themselves have done the work to make sense of the results and are openly communicating it, so you don't have to read between the lines of the results and try to figure out what could be causing them. Drew, practical takeaway time?
Drew: Yes, let's.
David: Okay. In no particular order. I've been involved in presenting lots and lots of incident investigation reports to management, and I know that a number of our listeners will be doing the same. I'm sure they often feel like they've done a really comprehensive investigation, they've prepared this report, they've presented it to management, and during the presentation, the incident gets re-investigated. Because the management team sees the information and sees the actions, and it's not lining up for them.
Maybe we need to think really carefully, Drew, and I'm sure we do, about how, when we get to the end of the process, we present information that makes it clear and easy to see why the actions that have been identified are the right ones to go forward with.
For example, if you've got a very traditional, worker-focused, human error style of narrative in your incident investigation, and then you present all system-type fixes, you might have a senior leadership group that goes, hang on a minute, what about the person involved? And vice versa. I think it's important, as we go from a large report to a small report to a couple of presentation slides, that the alignment between the narrative and the actions remains.
Drew: Yes, I think that's very true. The second thing that you've got here, David, is about investigator training. I'm not sure exactly what you meant by this, but I certainly think that very often we have people who have good intentions about having less blame-oriented investigations, but our process for doing the investigation still seems to lead us back to very human error type explanations.
I think that's where we need to train investigators in how their choice of language and their choice of framing leads them down those paths. Understanding that things are not raw facts; they are selected facts, interpreted facts, presented facts, and those choices make a real difference in how the narrative gets told. Giving people some training in that might be helpful in coming up with better investigations.
David: Yeah, Drew. This is a case of, with great power comes great responsibility. You want great capability in people. This research shows that the way the investigation is written down makes a huge difference to the way that people think about recommendations, the adequacy of those recommendations, and even the shape of those recommendations.
I suppose we've seen it on paper, but you can imagine that through the investigation process, the way the investigator is conducting that investigation is continually shaping the recommendations in their mind. We actually need to change the way that investigators conduct the investigation itself. Because my extension of these findings, which is the third takeaway here, is that you maybe need to rethink your accident investigation framework: all of your causal categories and how you actually go about your investigation methodology.
Like you said earlier, Drew, does your whole report structure change to just be: here's the date and time of the incident, here's the worker's account, we just tell the worker's account, and that's it? Then we share some reflections from other stakeholders involved, like managers, engineers, and others. We go: worker's account, reflections of related stakeholders, and then actions. I think this idea of investigator training and investigation frameworks could do with some deeper thinking, some deeper research, Drew. That's where I concluded.
Drew: I think amongst people who do think deeply about this, who are concerned with improving the quality of investigations and with aligning investigations with [...] safety science, they often think that the solution is to create, or train people in, more sophisticated models of how accidents happen. I think one of the interesting things that this data doesn't prove, but very much hints at, is that those models may be part of the problem. If we've got a framework which leads us to tell stories in particular ways, it will steer us towards human blame, even if that framework has lots of other possibilities in it.
What I'm thinking of here is something like this: some people try to move towards a just culture by having a framework about culpability that has, at one end, a totally innocent mistake and, at the other end, a very deliberate act, with a flowchart in the middle. But the trouble is, that entire framework, even though it allows for the human to come out as having made an innocent mistake, is causing you to tell a story about a human who messed up. How much was it their fault? Was it a little bit their fault, a great deal their fault, entirely their fault?
It's a story about human error. Or take something that breaks things into categories. It says, okay, you've got to focus on the organizational aspects, you've got to focus on the work task aspects, you've got to focus on these other aspects, and you've got to focus on the human aspects. Again, we're telling a story where there are factors at each level, including a bunch of factors that blame the worker who was right there. Even though we've got all these possibilities, we can still end up with lots of human error style findings.
David: And that's a very well-known, very successful methodology. The [...] framework starts with people. Again, it's not so much the framework or the methodology as the application of the methodology, because in the majority of investigations I've seen, they've got nothing in the organization category and a full column in the people category. I think you're right, Drew. I think all of these models are reductionist-type models.
They all start with the people involved, or rather, they all start with other people's accounts of what the people involved did. Maybe it's a full flip: actually starting with the involved people's own account of what they did, not sanitizing or changing it, and getting that discussed, aired, and shared openly. Then go and get the reflections of other people in the organization on that account.
Drew: I think this leads on to the next takeaway, which is that I think we should seriously ask whether the analysis phase of accident investigation actually helps us come up with better recommendations. We've got a clear challenge here: what if we just presented three different accounts, one from the person who was there and closely involved, one from the supervisor, and one from someone else who saw the whole thing?
If we took those accounts, we could come up with some good recommendations. By analyzing them, are we improving the recommendations, or are we just trying to narrow things down into a single clear narrative, which leads us towards recommendations?
David: Drew, you're right about the analysis phase, but it's also worth thinking about the learning phase. I was talking with a colleague the other day about sharing lessons learned from incidents across multiple sites. The advice I gave them, and maybe it's consistent with these findings, was: don't share actions. People have to go through the process of sense-making and coming up with the actions for themselves.
If you're sharing actions, you're not sharing learnings, because you're robbing people of the learning process, which is actually arriving at the actions. I also said, share the description of the event, and let the other sites go through the process of examining that description and arriving at the learnings and the actions for themselves. They were wondering why, when they were sending all these actions to the sites, no one was doing anything with them. I said, well, because there's no learning process in giving people actions.
Drew: That's fascinating. You're almost saying, if you tell people the moral of the story, then they're not going to get a good moral out of it. You got to tell them the story and leave them to work out for themselves what the moral of the story is.
David: Maybe, yeah. I guess the idea there was that a learning process involves dialogue. It involves reflection, it involves the assimilation of knowledge into my own context. You've got to let people go through that process to learn. Learning from incidents is not sharing actions across multiple sites; that was my conclusion.
Drew: Our takeaways drew a bit of discussion there, David. Hopefully, it has actually created some thoughts in our listeners about what you could do yourself or what you might think about this study. Maybe we shouldn't have done takeaways, David. Maybe we should just tell you the story of the study and let you go through the learning process yourself, working out the appropriate takeaways from it.
David: Maybe we could do that for one episode. People do like the takeaways though, Drew. Maybe [...] with that. Maybe if you engage with us on LinkedIn or something, we can extend the learning together.
Drew, the question we ask this week was, does the language we use in incident investigations change the recommendations that are produced?
Drew: I think the short answer is yes, it does. The same facts communicated in a different way lead people to make different recommendations.
David: All right. Thanks, Drew. That's it for this week. We hope you all found this episode thought-provoking and ultimately useful in shaping the safety of work in your organization. Join us in our discussions on LinkedIn or send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com.