The Safety of Work

How do policies and metrics shape the outcome of investigations?

Episode Summary

In this episode, Drew and David examine how organizational policies and performance metrics can inadvertently constrain learning from safety investigations. They discuss "Closing Investigations: The Role of National Policy in Shaping Structural, Organisational and Relational Constraints on Learning from Patient Safety Incidents" by Mesinioti, Macrae, Sheard, Hampton, Louch, and O'Hara, published in Safety Science. The research, based on NHS England's investigation system handling over 2 million patient safety events annually, reveals how rigid timelines and compliance requirements often prioritize closure over meaningful learning.

Episode Notes

The discussion explores three critical constraint categories: structural elements like mandatory timelines, organizational factors including resource capacity, and relational dynamics affecting investigator access to information. Drawing parallels beyond healthcare, they challenge listeners to reconsider their investigation frameworks, suggesting that organizations inadvertently create systems where investigators focus on meeting procedural requirements rather than generating genuine insights. The discussion emphasizes that effective investigations require adequate capacity, flexible timelines that accommodate complexity, and environments fostering open relationships that enable thorough inquiry and organizational learning.

Quotes:
David Provan: "The more hoops and hurdles and constraints and requirements that you put into the investigation process, the more that those things become the focus of the investigator, as opposed to the learning and improvement outcome that we're trying to achieve."

Drew Rae: "If you wanted to improve investigations in your organization, one simple leadership practice you could take is when someone gives you the investigation report, basically just say, no, there's nothing here that surprises me. Go away, come back and learn something."

David Provan: "These are the outcomes we're trying to achieve and the investigation takes as long as it takes. And if it's being held up for any reason, this is the process to check in on progress. As long as it's being worked on."

Drew Rae: "Investigations are way more about relationships than I think people realise when they're planning them. And so we've got to create an environment in which investigators can build and use relationships."

David Provan: "There's no reason that if you learn something during an investigation that you need to wait till the report's signed off and finished before you do something about it... as soon as we learn something that we become curious or concerned about, we should act on it."

Resources:

Resource Link: https://www.sciencedirect.com/science/article/pii/S0925753525002243

The Safety of Work Podcast

The Safety of Work on LinkedIn

Feedback@safetyofwork

Episode Transcription

Drew Rae: You're listening to the Safety of Work podcast, episode 133. Today we're asking the question, how do policies and metrics shape the outcome of investigations? Let's get started. Hey, everybody. My name is Drew Rae. I'm here with David Provan, and we're from Griffith University in Australia. Welcome to the Safety of Work podcast. In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it. David, we've done a lot of episodes so far already on incident investigations. What in particular are we going to have a look at today?

David Provan: Yes, I'm really looking forward to today's discussion and this paper. I think it's got a lot of practical relevance for all organizations, all industries. We've talked in previous episodes about the bias of investigators, and we've talked about whether or not investigations get to the root cause. But today we're going to specifically look at policies and metrics, and how those structural things that we put around the investigation process can shape how investigations are conducted and the outcomes that we get from them. So I think it's going to be a lot of fun, Drew. It's a little bit different to how we've talked about investigations in the past, but I guess really central, because most of our listeners will be in organizations that have quite a lot of formal structure and process and reporting and performance measurement around the conduct of investigations.

Drew Rae: Yeah, and I think it's particularly interesting, because this is how organizations try to make their investigations better: by creating policies and by creating metrics, particularly around things like timeliness of investigations. So this is a really interesting look at how some of those attempts to make investigations better might backfire, or at least start creating unanticipated consequences.

David Provan: Yeah. So, Drew, should we introduce the paper and then we can get into the background? Because I think the context will help listeners.

Drew Rae: Yep. So the paper is called "Closing Investigations: The Role of National Policy in Shaping Structural, Organisational and Relational Constraints on Learning from Patient Safety Incidents". And my apologies if I sounded overly precise in trying to pronounce that, but when we have words like structural, organisational and relational, you've got to take care getting every syllable. As for the authors we have: this is in healthcare, by the way, this is in the NHS in England, and healthcare papers generally tend to have longer author lists. Our authors in this case are Polina Mesinioti, Carl Macrae, Laura Sheard, Sarah Hampton, Gemma Louch and Jane O'Hara. David, I don't know if you know any of these authors. The only one that I had name recognition for is Carl Macrae, who I'm kind of surprised hasn't cropped up already in papers that we've done. He's all over safety and resilience, multiple industries, but tending to focus on healthcare. The others are, from their resumes, all very experienced qualitative researchers, mainly specialising in healthcare, in things like the patient experience or staff experience. So getting into those qualitative questions that don't get answered by clinical trials.

David Provan: Yes, none of the authors were familiar to me, but I looked at their institutions, and they've spent time at the University of York and other quite reputable institutions across the UK. So remembering, when we're talking about qualitative research, a lot of this interpretive research that we'll talk about today, the people doing the research make a big difference to the quality of that research, the quality of the analysis and the usefulness of the findings. So given the institutions involved here, the people involved here and the scale of the study, I think we can take a lot out of the findings. Yeah.

Drew Rae: One thing that is interesting is all of these are university researchers. It's fairly rare, when you see research into a big organization like the NHS, that we don't have multiple insiders involved in the study. And I think in this case the independence is important, because any insiders would be involved in those policies and structures. So having people who understand healthcare but sit outside the organisational structure, asking their questions and running the interviews, I think is the right way to go about this sort of research.

David Provan: And Drew, also, it's in Safety Science. A lot of the healthcare research is in, you know, quality and safety in healthcare journals; there are dedicated journals for this. So for it to come across into Safety Science is nice to see, giving access to a broader industry audience. And then also, Drew, I guess we're recording this in November 2025 and it's got a December 2025 publication date. So this is just the reality of publications going online and then having to wait for the regular routine issue of the journal to occur.

Drew Rae: Yeah, it surprised me a little bit, this being published in Safety Science. As with you, David, I'm glad it is, because I think the lessons here go beyond healthcare, and this is actually the right place to share these learnings. Sometimes you get a little bit suspicious when you see something that's not published where you might naturally think it would be. You worry: did it get rejected elsewhere, and are they trying to make an end run? No question of that in this case. Not with these authors, not with this topic. It would have got proper peer review in Safety Science.

David Provan: So, Drew, we mentioned healthcare and patient safety, and it was in the title a little bit as well, so maybe just a little bit of background context. Obviously, with patient safety we're now talking about patients in healthcare settings who have adverse clinical outcomes: mistakes in the diagnosis, mistakes in the care and the treatment plan, accidental harm within a healthcare context. Depending on which study you look at, the research suggests that somewhere between, say, 1 in 10 and 1 in 20 patients experience some kind of misadministration of care in a healthcare environment. So it's a big issue; there are a lot of these sorts of events that occur. For example, the NHS in the UK, where this study is based, has more than 2 million patient safety events annually across the National Health Service. So we're talking about the policies, metrics and structures in place to investigate those 2 million events across different healthcare organizations.

Drew Rae: Yeah, it might be a topic worth coming back to at some point, David, just the way that healthcare changed its framing around patient safety. There was this big report that came out in 1999 called To Err Is Human, which basically presented medical error in this framing of: what if we treated other aspects of safety like this? How bad would medicine be in terms of the number of people they're killing on work sites? Which is an interesting framing, because medicine is not like other industries; people go into medicine sick already. So it's a lot more controversial what actually counts as an error versus just failing to save a life. But certainly it had a big impact on the way safety was managed and measured and thought about. It started to treat patient safety as if it was a similar thing to workplace safety, with very similar practices adopted from other industries and very similar safety management systems adopted. And one of the symptoms of that is this increasing practice of investigating adverse events the same way that you'd investigate a worksite incident: doing a postmortem review, using some of the models and techniques that we use for accident investigation, feeding into incident reporting.

David Provan: So, Drew, the background in this paper says, clearly following on from some of that work in the late 90s, that since 2010 the National Health Service in the UK has had a standardised national policy that mandated how these different NHS organisations should respond to and manage serious incidents. So they developed a Serious Incident Framework policy, put that in place in 2010, and I guess this research was done right at the end of that policy, at the time when the NHS was looking to withdraw it and establish new processes for managing these types of events. And despite all of that effort from 2010 all the way through to when this research was done, and I think these interviews were done a year or two ago, there were still a lot of generally recognized challenges with the effectiveness of this framework and the quality of investigations. And this isn't just in the NHS, this is in broader industry. From a few of the podcasts that we've already done on this topic of incident investigation, I think many of our listeners would share the view that the outcomes we're intending to get from our investigation processes are often not the outcomes that we do get.

Drew Rae: The one last bit of context that's worth throwing in before we dive into the paper a bit further is that the NHS has had a history of using metrics and targets as business improvement levers. So they'll do things like noticing that they've got long waiting times in emergency rooms, and their response to that is: let's start setting targets for waiting times in emergency rooms. We've got issues with bed occupancy; let's start setting targets for that. So very top down setting of targets, but then leaving it to the local parts of the organization to actually manage these targets as a new constraint that they have to deal with alongside all the rest of the problems. And a lot of what we're about to see in this report is a very similar pattern. They had real problems with their incident investigations. You know, they had investigations that were occurring but no follow up, or there'd be an incident and the investigation just sat open for years and years, and people are wondering, is anyone actually doing anything about it? And so the response is: okay, let's set targets. Let's make sure that the investigation starts in a timely fashion, let's make sure that the investigation gets completed, let's make sure it gets closed, let's make sure the actions from the investigation get closed. It's kind of a logical response, but it doesn't actually tell people how to achieve those things. It just tells them they must achieve those things.

David Provan: And Drew, I think you mentioned the unintended consequences of structural intervention in complex systems and hierarchical systems. This paper did a good job, after it presented the background and context, and as a non-healthcare professional I thought it was really well done; I felt like I really understood the background to this issue and where the research was being positioned. The authors said their aim is to really explore the role of national policy. So this is external: the NHS is almost a sort of regulatory oversight type organization, or a government organization. So what's the role of national policy in either supporting or constraining specific efforts to learn from serious safety events? How can a safety policy that clearly espouses this central aim of supporting learning persistently fail to achieve that aim? And for that particular research aim, the researchers were saying, look, all of the evidence suggests that the implementation of this policy is not making things better, or not doing the things that it is expressly intended to do. And then they're also looking at how this national patient safety policy translates into local organizational practices: how local people within these healthcare organizations respond, how they investigate and how they learn within this NHS policy framework. So I thought they did a good job of laying out the things that they were really interested in trying to understand.

Drew Rae: Yep. David, I'm actually going to jump a little bit forward in our script and introduce the method, then I'm going to come back and talk about some of the policy principles, because I think that might just be a cleaner way of doing it. So the method of this study is basically qualitative interviews. They take a really good sample. They pick six different NHS organizations, because the NHS can be quite different in different geographic areas and different socioeconomic areas, and they interview multiple people from each of these six organizations. That means a total of 49 NHS staff. All of these staff are people who are actually involved in managing serious incidents. They basically pick people at each of three levels: from the most frontline people who'd be involved in incident managing, right up to managers who are responsible for managing it but are still, you know, involved; they're not distant from the work of investigation. Each of these interviews is 25 to 40 minutes, and they're very direct, face-to-face interviews. So asking people what the problems are and just getting people to recount their problems and experiences. It's a little bit more direct than I like to do when I'm interviewing; I like to get people to tell me stories and analyse them, rather than asking the questions very directly. But there's nothing wrong with this approach, particularly when people have already done a lot of reflection on their work, are already very aware of the problems, and are just basically waiting for a good interviewer to come along so they can share all their troubles. And if you collect all those troubles together, put them into themes and then present those themes back, that's a fairly straightforward way of doing the research. But David, should we go through the policy side of things and then the metric side of things?
So on the policy side of things, they've got these seven principles that are supposed to be sort of followed and met by organisations.

David Provan: Yeah, Drew. So these principles, which I don't know if we see in all policies, we definitely sometimes see intent and scope in regulation and things like that, but the NHS specifically said that there are seven key principles that we want to achieve with these policies around patient safety investigations. The first is being open and transparent. The second is that it should be preventative. The third is that it should be objective. Then timely and responsive, systems based, proportionate and collaborative. So I guess seven principles, and there's some description inside the paper that says: we want to put in place a program consistently across the NHS that delivers, or operates, consistent with these key principles.

Drew Rae: And on their surface, all of those sound excellent. Who would not want openness and transparency? Who would not want timely and responsive? Who would not want proportionate? Who would not want objective? But then you stop and think, okay, what are real investigations like? And then you begin to ask questions like: okay, open and transparent. There's nothing in there about confidentiality, which means there's obviously a goal conflict here between talking to people about a really sensitive thing that's happened and trying to maintain openness and transparency. You can't keep people private and anonymous or confidential and also be fully open; you've got to make a choice. If you say that something's going to be objective, okay, at first glance that sounds good: the people conducting the investigation are independent. But then that means we're excluding the people who are most closely involved from being part of the investigation. We're treating them as objects of the investigation, rather than cooperatively working to find out what happened. When we say things are systems based, that can sound good, but can also stretch towards very bureaucratic, process driven rather than context driven. So, yeah, there are sometimes dangers in coming up with these very aspirational ideas without thinking closely through what's the actual effect on the conduct of the investigation when you ask for all of these things.

David Provan: Yeah, I think you're right. I think we're used to principles as a way of trying to manage. We see it in contemporary safety theories, five principles of this or three principles of that, and companies have their own principles in place. But in this case, having seven very general principles, I don't think these inform decisions and actions around the process. It's the structures and metrics that we're going to talk about that seem to influence the way that investigations happen, not a set of principles. And the paper makes commentary about this: what they learned in practice is that these principles aren't what's playing out in this process inside organizations.

Drew Rae: So getting down then from the principles, what did they actually do? The first thing they did, and by they I mean the NHS, not the researchers, is they basically broke things into three levels of investigation. The level is based on the seriousness of the outcome. I think, David, they had wording in there about also the potential seriousness of the outcome, although in practice that tends to be the actual seriousness of the outcome. And then the level determines the time frame in which the investigation has to be completed, and it determines who is involved in doing the investigation. So your most serious incidents have an external team with a six month deadline; your least serious incidents have a very internal team with a 60 working day deadline. In that time frame they have to produce an investigation report, and that investigation report has to include an action plan. So it can't just be documenting what happened; it actually has to fully finalise the recommendations coming out of the investigation. The report's supposed to identify root causes, and many of our listeners will immediately prick up their ears at the danger of specifically saying you have to identify root causes. And the conclusions have to be justified and based on relevant evidence, which, as anyone who's ever had to justify recommendations on relevant evidence would realise, particularly within healthcare, is a big job. Actually trying to dig up evidence that what you're recommending will work means you almost have to do a miniature literature review for each recommendation to do this properly.

David Provan: You mentioned a little bit earlier as well, this seems perfectly logical and it's perfectly well intended behaviour, exactly what you've just said. The NHS says: okay, we've got these principles, we want openness and transparency and timeliness and proportionality and collaboration. What we need when these events happen is for these organizations to report them to us. We'll determine the classification, they can then go and apply the appropriate investigation, we'll make sure things get closed out on time, we'll make sure the action plan going forward is appropriate and, you know, evidence based. Everything that I've just said is a perfectly rational, logical way of managing institutions. And at the same time, those structures, as we'll learn when we talk about the findings, tend to not create the outcomes that we're trying to create through them, which I think many of our listeners will be able to relate to in their own organizations. And there's actually quite a bit of good work in the institutional literature about corruption of role and task, and how this kind of well intended structural work and process work gets corrupted based on organizational pressures and resources and goal conflicts and those, you know...

Drew Rae: Those things and also just a lack of understanding of how long things take. So if you're sitting in an office in London, it might seem really quite reasonable that you say okay, 60 working days. 60 days, that's two months. But we're not even saying two months, we're being careful and making sure we've said working days. So it's not going to get affected by weekends or public. It's always going to be 60 working days. Sounds like a very reasonable time to investigate an incident. And then you remember that like this is no one's full time job. So the incident happens and immediately the clock is ticking and then the organisation's got to find out that happened and find out what happened and gather together enough information to report it and wait for it to come back with a recommended level to be applied. And by then your first five to ten days have already ticked off and then you've got to actually find a team, put a team together and make sure all that team is available and sync up their calendars with all their other stuff. And by then another 10 days has ticked off and then you start gathering data like by the end of it you're absolutely rushing to get that report out as like every day towards the deadline ticks out and then I think.

David Provan: Drew, and then you've got to develop an action plan, and you've got to get that action plan signed off by your organization before you can send it to the NHS. And so you don't want too big and difficult actions, because it'll take too much time to get them all approved internally. So, Drew, we're sort of crossing over into findings now, so maybe just the last bit of method. These researchers did these 49 interviews. What I also liked about this paper, which you don't always see with a six person authorship team, is that they actually said who did what parts of it. They said who conducted the interviews, they said which of the authors did the analysis, which I found interesting, because you don't always see that with a multiple author list. So they analyzed it. They basically had the transcripts from these 49 interviews, and they did this sort of progressive comparison technique, Drew, which is what I did during my own PhD qualitative research. You just start with the first interview and ask, what are the key themes and points here? Then you go to the next one and ask, are they similar or different? And if you learn something new in the second one, you go back and look at the first one and ask, was that in the first one? And you just progressively go through and try to land on the key points that seem to come up over and over again.

Drew Rae: Yeah, the paper doesn't have a lot of detail in the method, and I imagine if you're not used to doing this sort of work, it seems almost suspicious how little detail they give. But you don't want this to be a turn-the-handle exercise of applying a theory or a very carefully constructed method. You want to be actually responsive to the data that you're collecting, and when you see things in the first interview, that tells you what to look for in the second one. So you see the quality less from the description of the method than from the results that come out at the end. You can look at the results that come out at the end and say, yeah, they did a very thorough job of the analysis here.

David Provan: So Drew, there's three key areas of findings. Do you want me to introduce these three key areas then we can sort of go through and talk about each one in a little bit more detail.

Drew Rae: Yeah, just quickly before you do: I did highlight, just in their introduction to the findings, what I think is a fairly good summary of where we're headed here. They say there's a misalignment between the policy's aims and principles and its practical implementation. So basically their aims and principles are well intentioned, but they've created this industry of investigations that's become a lot more compliance oriented, a lot more of a tick box exercise, than was ever intended by those principles.

David Provan: So, Drew, just as you highlighted and read out that sentence: we're talking about a study in investigations, but I think our listeners might be, well, I'm at least reflecting, going, oh gee, we see that a lot. We even have internal policies in our organizations for safety, we have key aims and principles and things we're trying to achieve, and then we roll them out, and then we end up with this industry of safety and transactional activity and tick the box exercises. So let's explore a little bit about why that might happen. There are three areas of findings with regard to these policies and metrics that were set by the NHS. The first is about how they create structural constraints: how we're imposing these rigid timelines and categories and even assumptions and logics, and organizations then have to navigate these very rigid structural constraints and timelines in a very dynamic and complex organizational environment. So this idea of imposing structural constraints. The second area of findings is about investigative capacity and resources: what the policy expects to be done for each event, given the frequency of these events. I mentioned 2 million, but Drew, some of these six organizations, which were different sizes, some of them were having 2,000 plus, or maybe it was even 20,000, sorry, the exact figure has slipped my mind. But there are a lot of these events occurring every week in these organizations, and they're saying that these organizations just aren't resourced to be able to meet the intent of these policy expectations. And then the third piece is about roles and relations. This is probably a little bit more of a fuzzy finding, for me at least, Drew.
But they were sort of talking about how the assumptions about how investigations work don't really match the reality of how investigations work and how organizations really get things done. So the things we assume, that we have an independent investigator and they get free access to people and everyone gives them their time, are not really the reality of what people experience doing investigations in real organizations. So we've got these three areas: structural constraints, capacity and resources, roles and relationships. Drew, anything you want to provide by way of context or overview, or do you want to dive into the structural constraints piece?

Drew Rae: No, I'd love to dive straight into structural constraints, if that's okay, David. The first thing, and this is actually what I was a little bit surprised by, so this is the first time their findings are not just, yeah, that's exactly what I'd expect, is the fact that they accidentally, by the way they defined their metrics, basically told everyone that compliance with the timeline is the most important thing about an investigation. Now, I bet if you asked the people who set those requirements, that is not what they would have said. Like, you know, the seven principles do not say following a strict timeline is what matters; they would have said what matters is learning. But there are more metrics for timing than for anything else. So basically, by the definition of the metrics, they've told people timing matters. And then you've got the fact that you can't predict when an incident's happening; these things don't happen on a schedule themselves. So when you combine metrics about timely response with ad hoc events, suddenly timing becomes everything, because these deadlines start rushing at you the moment the event's occurred. So they set things that seem fairly sensible. You know, we'll give you two working days to tell the commissioners and other relevant people that an incident's happened. We'll have 72 hours to review the incident, work out the level, give it back to you. 60 days to complete the investigation, unless it's big, in which case it'll be six months. But that then means, okay, we've got to have people who are ready to devote two working days the moment an incident happens. No one's just going to be sitting at their desk waiting, ready to spend those two days. So what does that mean for the organisation? Four days after the incident, it's come back to you again and you've got to start the investigation. No one's going to be ready, sitting at their desk, for that to happen.
So what happened was the participants felt that their main focus was getting everything done on time and getting everything literally green, because each stage has to be marked as complete and then it turns green. And if it doesn't turn green, people ask, why did it not turn green? Why did that not happen? So you've got this very gated process where getting a product acceptable enough to make it past the deadline is what's necessary, not the quality of the investigation or the learning from the investigation.

David Provan: Listeners have probably heard Todd Conklin talk. He has a line about, what if, instead of thinking people work as safely as they can, we reframe that: do people work with the minimum level of safety that they think they need to get the job done on time, or whatever. And I think that's the case we're seeing through here, which is, what's the minimum level of quality I can cram into an investigation to get it out the door, right? And so for me, these timelines were the most black and white part of this policy. It was so clear whether you met them or you didn't. And companies said, what percentage of incidents did we notify within those two days? Okay, what's outstanding and approaching 60 days? So what management were interested in, what the NHS were interested in, was this most black and white feature. So they talked about this temporal arrangement, this time frame arrangement, as being the central organizing structure. Everything in this whole process came down to these timelines, and I think that's a problem. And Drew, just as a side point, I know we're not at practical takeaways yet, but I was involved in a real incident investigation in an organization recently, and their company had a policy of 28 days to complete investigations. And it was very clear to me that the investigation team's central focus was making sure that they had their presentation ready to provide to management within 28 days. Like, the first topic of the investigation kickoff meeting was actually a reminder for everyone of what those time frames were and what those due dates were. Nothing about the principles or what we were trying to achieve from the investigation itself.

Drew Rae: Now, David, I would have loved to have been able to ask the policymakers why they thought it was necessary for investigations to be completed on time. Because if the focus is learning, there's no automatic jump to, oh, we learn more when we do things on time. That's almost like a contradiction. There may be legitimate reasons and legitimate concerns, but I'd rather they actually spelled those out. Like, one of them would be, okay, you've got to get back to the people involved, the patients and their families. It is important that you don't leave them hanging forever. But you don't need the whole thing completed for that. What you need is regular intervals for keeping those people informed. Okay, don't leave them sitting for more than two weeks at a time without giving them an update. But that update doesn't have to be, we've completed and finalised the report. It can be, this is important to us and we're going to be continuing to pay attention to it for a considerable period of time. Maybe it wasn't that. Maybe they just thought that if they didn't set a deadline, people would just never get round to it. It would be deprioritised. But again, that's not what they achieved. They didn't achieve prioritisation, they just achieved making people get it done to the minimum standard.

David Provan: And I guess because the classification is based on the unintended harm caused to patients, Drew. In some of the research feedback here on these structural constraints, again, we're saying that learning's the priority, but all our classification is done on the actual consequences of the event, which obviously creates the most intensive investigation process for the most serious outcomes. And, you know, one of the responses, one of the quotes in the paper, was kind of like, we could have seven of these sorts of unintended, I don't remember the exact clinical issue, but we have these things happen literally all the time. They're serious because they can be life ending for the patient. They're not intended, but we know exactly how they happen, and they happen. And we keep organizing these big six month external investigations to do something. Why are we reinvestigating the same thing ten times a month just because of the consequence of it? Now, they weren't trying to say it's not important that we treat the consequences seriously. They're trying to say, if our goal is learning, why aren't we actually focusing on the things where we stand to learn the most?

Drew Rae: And in case you're thinking, okay, but if people are getting seriously hurt, then we should learn how to stop this, remember that the deadline for serious investigations is six months. So if you're having 10 of these a month, you've got 60 of these investigations running at once. Instead of one good investigation that just gets the new incident fed into it, they're literally obliged to start a new investigation for each incident and not allowed to strategise around it. Oh, well, okay, this is a new example, but we've already got an ongoing investigation. Let's treat that as new data to feed into the existing investigation, rather than start the clock running on another one with its own report, its own action plan. And imagine, you know, if you had something that's a little bit odd, say three similar incidents happen in a month. In six months' time, you've got three investigation reports, all with three action plans having to come out.

David Provan: And the other piece around these structural constraints is, obviously, when you're sitting there as the NHS, you're thinking about this policy somewhat in isolation from what it takes to run a healthcare organization. And so these resources inside organizations, who are typically the patient safety resources or quality resources, they're in there going, well, I've actually got a day job. I've also got a set of strategic priorities that I'm trying to work on to make improvement. And then I've got all these reactive investigations and compliance requirements. So the participants were also feeding back, like, well, a lot of this stuff, and I hesitate to use the word distraction, but we're doing a lot of investigations and a lot of action plans and a lot of actions that aren't aligned with the strategic priorities that we think would actually make our organization a lot safer. So all of this is drawing on the same resource that's being used to also work on day to day safety management as well as proactive strategic improvement. And so you've got this kind of tension, this resource push and pull, between these different needs as well.

Drew Rae: That's an excellent segue, David, into the next category of stuff, which is the mismatch of requirements and resources. And the problem we're about to describe here, I think, would be familiar to a lot of organizations, which is: who are the best people to investigate an incident? Because you've really only got two main choices. One of them is you pick operational staff, and so doing the investigation is like an add-on to their usual work, which means that they've got to have extra training to be able to do the investigation, and they've got to carve time out of their already full workload to get involved in the investigation. Or the other option is you have dedicated safety people, but they are far less familiar with the work that they're investigating, because they're full time safety people. So they're less likely to be able to appreciate the strategic priorities, and they're going to be asking for time from everyone else to provide them with that insider information. So they're still taking time away from other people's full time jobs to get the information that they need to do the investigation. And the trouble is that we set these requirements to make sure that we were properly investigating all of the incidents, without stopping and working out how many incidents there were, how many investigations there were, and how much staff capacity that would take. And we simply overwhelmed everyone by setting unrealistic requirements.

David Provan: I think many organizations do exactly this, Drew. It's exactly the same tension, which is that we don't sit down, and maybe organizations morally don't want to plan to hurt people, so we don't sit down and go, we feel like we're going to need to do 50 or 60 investigations in the next period of time, the average investigation takes about two to three weeks, and it involves this much resource. Maybe we don't want to admit that we're going to have that many events, but the reality is we end up having them, period in, period out. And it is going to be management, operational staff, safety teams, people who have otherwise full time jobs, that need to come across and work on these investigations at a time not of their choosing, at a time that the event chooses for them. And so, yeah, I think that's the idea of requirements and resources. And the other point here, this piece about independence, actually, I'll save it for the relations piece. We'll come back to that. Yeah.

Drew Rae: Do you want to talk about the idea of capacity and engagement, David? Because that was something that again surprised me, or at least it wouldn't have been something that I would have immediately thought of, but was really insightful, which is: the more overworked people are, the less time and opportunity they have for doing things like engaging with the patient and their family during the investigation, keeping them up to date on what's happening, making sure they're properly consulted. So if the whole goal of this in the first place is to do investigations which satisfy people who were harmed, we've now created a process that is more dismissive of them, more bureaucratic. They're getting a final report dumped on them: here you go, 60 days after your father died, here is the investigation report. But we haven't spoken to you in the meantime to ask your position on what happened, or to keep you informed about the investigation and how it's going.

David Provan: Yeah. So I think, Drew, if we think about what a minimum compliance approach to investigations is, which is, what do I need to do? I need to be able to produce a report, right? That's literally a report that someone will sign off, as opposed to the discretionary effort involved in a deep understanding of the sequence of events and deep thought on systemic actions that would prevent this or a similar sequence of events from occurring in the future. That's the meaningful outcome that we want from investigations. There's a big gap between those two things. I think, as this research kind of suggests, we can do fairly transactional investigations that get signed off by both our company and by the regulator, over and over and over again. And the thing about meaningful engagement is that it's all discretionary activity: going to talk to teams that might be relevant, when, you know, I could just as easily not go talk to them; or reaching out to the family of a patient and setting up a time to talk with them and giving them an update, when they're not really helping my investigation at that point, but it's the right thing to do. I think in a very resource constrained, time pressured, reactive situation, people just want to get something off their plate as quickly and easily as they can. And I think that's what we're seeing in these results.

Drew Rae: Yeah. And that blends nicely into the final topic here, around roles and relations, which is: the more bureaucratic this becomes, and the more time and resource constrained it is, coupled with this, I think, misplaced desire to make investigations objective, the more it leads to investigations that get very detached from the people involved in the incident. And so the investigations are no longer an act of care surrounding people who are harmed as clinicians or harmed as patients. The investigations are no longer working with frontline workers to work out how to understand and improve their work. They're just very much outside, impartial activities, with as little collaboration as possible, because collaboration takes time and resource, and as little consultation as possible, because consultation takes time and resource. This then makes it very much down to the individual lead investigator and their skills. Some of them, even with these constraints, are still really good at it, because they're good investigators, because they're natural collaborators, because they really understand the frontline work or are really curious about the frontline work. And some of them are not. And we can't train everyone to be a brilliant investigator. And the more investigations we have to do, the more we can't just use the best investigators, the more we have to use whoever's available, and then try to give them the minimum training necessary to be able to do the minimum amount of work to get the investigation done.

David Provan: And Drew, I may be taking the findings of this paper too far, but I'm really curious about your views on this, because there's a central theme in investigations, and it's one of the principles here, which is being objective. And there's a thing about independence, which says that for these serious events there needs to be independent oversight or senior management oversight. And we have this in organizations' incident investigation processes: I need someone from a different department to come and investigate this department, because they don't have the same biases as the team or the department involved in the event. But this research points at the actual capability of the investigators. The feedback from the research was that the really good investigations, where people collaborated between teams across different levels of the organization, really came down to lead investigators who had long tenure in the organization, really good relationships, really good knowledge of the different parts of the organization, and who could really identify, oh, we should go and talk to this team, or we should talk to this person, or that connects back to this particular process. And so I've got this trade off: having insiders do the investigation means they kind of know where to look and who to ask and how the work happens, but we've got the challenge of their bias. Then you bring in an independent person to do an investigation, and they don't know the team, they don't know the work, they don't know the people. So they don't necessarily have the same biases, but the question this research suggests to me is, can they actually get a good understanding of what's going on? I don't know where you sit on that, but I'm starting to lean away from independence in investigations.

Drew Rae: My immediate thought is, if your goal is learning, then why the heck would you care about bias? No one, when they're talking about learning design, says, oh gosh, having someone who knows about their topic is bad. If you're talking about judicial proceedings, if you're talking about the ability to prosecute, if you're talking about the allocation of blame, absolutely, independence and objectivity are key. But you've told us that that's not what you're trying to do. So the actual requirement for independence is nonsense unless you care about blame. What you might want is an outsider perspective. What you might be concerned about is that people are so used to the way they do things that they're not considering alternatives, or there are things they haven't noticed. So by all means, make getting an outsider perspective part of your collaboration network for an investigation. But yeah, the actual goal of independence seems to be just a misunderstanding of your own stated purpose.

David Provan: Yeah. So the paper sort of said, after all of our efforts to put in place structural processes, time frames, classification criteria, when we look at these original principles and what we're trying to achieve, this finding said it's actually none of those things that are delivering on what the policy says. They're basically saying that the policy aspirations and objectives, or the principles that we spoke about, are primarily reliant on the individual local relationships between the person doing the investigation and the people who contribute to it. So it's not the structures and the timeframes and the classification and all of those things. It's actually: do I have an investigator who has the time and the network and the personal characteristics and capability to run a good investigation? Right. And so you almost step back from this and say, actually, the only thing the NHS probably should have done is ask, who are these resources in the organization, and how are you making them good at investigations?

Drew Rae: Now, David, I know you've got a hard out today, so just quickly, before we get into the takeaways. There is this kind of broader message that the authors put throughout this paper, which is that they think the effort was misplaced, that there's more of a middle piece that needed doing. Not so much setting these high level policies and goals, but investing in the capability to do good investigations, and actually investing in guidance and instruction and training and procedures for investigation, might have been more useful in achieving the goals. So instead of setting hard targets, setting up mechanisms to achieve those targets. But David, shall we? I think the takeaways are fairly clear here.

David Provan: Yeah, I think we'll talk about them and try and make them a little bit more practical. But I actually really liked the way that they summarized this discussion, and I love a good overarching theme. When they talked about these temporal structures and learning, which was that first category, the structures piece, they talked about deadlines over diagnosis. Then the second, when they talked about organizational processes, they talked about compliance over comprehension. And the last bit, about roles, they talked about procedures over participation. So it kind of showed that the things we wanted in our principles, which we used to justify the implementation of the policy, didn't translate through the way that we did that implementation.

Drew Rae: Yeah, and I think that'd be a great exercise for any organization to do with their investigation policies. Okay, what do you care about? Do you care about deadlines for the investigations, or do you care about having deep and good learning? If you care about the deep and good learning, then work out how you're going to get that; don't work out how you're going to meet the deadlines. Do you care about making sure your processes are repeatable, objective, structured, or do you care about whether they produce a real understanding of work? Then implement things that are going to try to achieve that. In particular, make sure you've got the capacity and the resources so that people don't have to go for minimum compliance. And then, do you care more about the processes, or do you care about who is in the processes, and making sure people are included and participating? And again, build your structures around that, around making sure that people can participate and that you have got processes that are aiding participation, not excluding people from it.

David Provan: And Drew, when I think about a lot of safety management practices, I try to think through the process component, the capability component, and the leadership component of these processes. So with investigations, for example, a lot of what we're talking about today is the process components. We've talked about the capability of the investigators. And then you need these leadership components, which are about, you know, the investigation being a priority: what are the expectations of leaders, and how are the resources made available to be able to do this process within the organization's expectations? And investigation feels like something that we've all tried to solve with process, when it perhaps is more effectively solved with leadership and capability. And maybe, as we'll talk about now, less process might be a little bit better than more process when it comes to incident investigation.

Drew Rae: Oh, David, you've hit my favourite topic with investigations. Because I think if you wanted to improve investigations in your organization, one simple leadership practice you could take is, when someone gives you the investigation report, basically just say, no, there's nothing here that surprises me. Go away, come back and learn something. Then come back and tell me something that I don't know. Now, that itself may be a bit gimmicky, but the underlying message there is that it's the receiver of the report sending the message: I want to learn, I want to be surprised, I want us to learn. Not, thank you for closing this out in a timely fashion. It's the leadership behaviour that actually makes the difference in the investigation.

David Provan: Totally, totally agree. So let's talk about some takeaways, I guess in relation to these three areas of findings, Drew. I think the first one for me is that timelines take over. So this is this idea of compliance over quality outcomes. And most organizations do this in their incident management system. They say, you need to notify us within 24 hours. You need to complete an initial report within seven days. You need to have the investigation closed out within 28 days. And having these as rigid timelines, I think from this research, and I guess from many of our listeners' experience, just makes no sense. It should be as simple as: these are the outcomes we're trying to achieve, and the investigation takes as long as it takes. And if it's being held up for any reason, or to check in on progress, this is the process to check in on progress. Right? As long as it's being worked on.

Drew Rae: Yeah. Two modifiers I would throw in, David. The site's got to be made safe, and that's got to be done straight away. And if there is a finding that needs to be found quickly, a technical finding, those need to be found quickly. You've got to have someone on the ground checking if there's, like, a machine that's broken or something like that, or a bug in the equipment that might occur on other sites; that's got to happen quickly. That's what you need deadlines for. Other than that, you just need periodic deadlines, in other words, basically regular reporting on the current status. You don't need the thing closed.

David Provan: Totally agree with the interim piece. Like, there's no reason that if you learn something during an investigation, you need to wait till the report's signed off and finished before you do something about it. So if you have an investigation into, say, an equipment failure, say someone's got hurt because a piece of equipment catastrophically failed, and you find out early in the investigation that the maintenance hadn't been done on that machine, nothing stops you immediately taking action to understand that aspect across the rest of your organization, even while the investigation continues. That's a useful surprise to go and explore. Whereas what we try to do is wrap up the entire investigation before we communicate and take any action, and I think that hurries people up. But I like the idea of just progressive outcomes of an investigation. Hey, we've now learned this, let's take that off to the side and do something with it. And then a few days later in the investigation, oh, we've now learned this. So Drew, to your point, I think as soon as we learn something that we become curious or concerned about, we should act on it.

Drew Rae: The second one is capacity and resourcing. And David, you have, like, put a masterclass into one sentence here: basically three different strategies that you can use for capacity and resources, and they're basically your options. The first is planning. So actually make sure you know how many incidents you tend to have, make sure you know who is going to do them, and how you're going to make those people available. Your second alternative is you reduce the volume: you limit the number of things you're going to investigate to the capacity that you've got. And the third strategy is you add in flexibility. You don't have hard and fast rules about what has to be investigated, so you let people adapt to the resources and to the priorities of the investigation.

David Provan: That was just my speaking points, Drew. But as opposed to saying, we don't have dedicated resources for this activity and we don't know exactly who's going to do it, when we have a policy to investigate all incidents, and we have thousands of these every year, and they all need to be done within 14 days. You're just setting yourself up for the experience that this paper describes. As opposed to: this is how we do this, this is who does it, this is how we free them up. Or: we're only going to investigate these certain types of events, which, from an 80/20 rule, I think most organizations would get a huge amount of benefit from, cutting 80% of their investigations out of their business and focusing on the 20% that will generate the most learning and the most improvement for them. And then flexibility: just because you've got a preferred timeline, say we expect most investigations to be done within four to eight weeks, there's absolutely no issue if a complicated investigation, or one that needs a whole lot of lines of inquiry or resources, takes four or five months, just as long as someone's responsible for it and as long as we're making progress.

Drew Rae: And then the final takeaway that you've got listed here, David, is relationships. And what I read into that is investigations are way more about relationships than I think people realise when they're planning them. And so we've got to create an environment in which investigators can build and use relationships. Ultimately that's what we're trying to do, is we're trying to restore relationships that have been harmed by an incident.

David Provan: And so Drew, they mentioned access and climate with relationships as well, which is really about: can I get access to the people and teams that I need to talk to during the investigation? And what's the climate like, in terms of, you know, management making resources available, but also people feeling like they've got the psychological safety to talk openly? And a lot of that's going to come down to the climate created by my manager. If I say what I really think during an investigation, what's my manager going to do? Or if I say what I really think, how is the investigation going to present this information? So, you know, again, we talk about safety being a people kind of profession, or a people pursuit, and this is an example of it being more about the people in the investigation and around the investigation than maybe the process itself.

Drew Rae: Absolutely. So, David, the question we asked this week was how do policies and metrics shape the outcome of investigations?

David Provan: Well, we said a lot, but I think they shape it in a few different ways, you know, imposing constraints and changing the focus of the investigation. But I think the takeaway for me is the more hoops and hurdles and constraints and requirements that you put into the investigation process, the more that those things become the focus of the investigator, as opposed to the learning and improvement outcome that we're trying to achieve.

Drew Rae: Absolutely. That's it for this week. We hope you found this episode thought provoking and ultimately useful in shaping the safety of work in your own organisation. Comment on LinkedIn or send any comments, questions or ideas for future episodes to feedback@safetyofwork.com.