The Safety of Work

Ep.71 Do double checks improve safety?

Episode Summary

Today, we discuss whether mandatory double-check policies actually improve safety in the workplace.

Episode Notes

This topic came directly from our Safety of Work portal, which you can locate on our LinkedIn page. Rhys Thomas was good enough to submit this topic and also provided us with some great resources.

Join us as we dive into this topic and decide whether double-check policies help improve safety.

 


Quotes:

“How do you know whether an error has happened, if no one notices it?”

“I think you’re doing a good job of qualitative research, if readers want to then go and actually read the raw data.”

“And I am completely unwilling to say, ‘This is a bad practice, we should get rid of it’ until we’ve got the evidence.”

 

Resources:

Double Checking Medicines: Defence Against Error or Contributory Factor?

feedback@safetyofwork.com

Safety of Work on LinkedIn

Episode Transcription

David: You're listening to the Safety of Work podcast episode 71. Today we're asking the question, do mandatory double check policies improve safety? Let's get started.

Hey, everybody. My name is David Provan and I'm here with Drew Rae. We're from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work podcast. We hope you find our new fortnightly routine to your satisfaction; it's definitely making things easier to prepare. The episode that we're doing today was one of those ones where we thought it was going to be somewhat straightforward, but about 20 research papers later, Drew finally emerged, and here we are tonight with this episode.

In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it.

So today we're tackling one of the questions in our Safety of Work ideas portal. You might have come across the portal; the easiest way to find it is to go to the Safety of Work LinkedIn page, follow the link, and there you can enter an idea for an episode, raise a question, or vote for questions that other listeners have already raised.

The question for this week that we’re tackling was submitted by Rhys Thomas, and Rhys asked us, "Policies mandating double-checking of medications by two individual people is a widespread practice in healthcare. It takes up a lot of staff hours, but does it improve patient safety?" Rhys also pointed us to a couple of relevant papers including an observational study and a systematic review. 

So Drew, would you like to kick off with some background to this question of double checking policies?

Drew: Sure. So before we get too deep in, I want to point out and be very clear on this—the question does not have a definitive answer. David, unless you've been newly qualified since we last spoke, neither of us is a medical professional. We don't want to be giving people advice about how to run hospitals. All we can do is comment on the research that's in front of us.

On this topic, there's a surprisingly small body of research. I reckon, when I was looking it up, I probably found more systematic reviews surveying the literature than original research papers. So it's no surprise that most of those systematic reviews say there's not much written about this.

What research there is, is really rather mixed, and so there's a real danger that if we followed our usual format and just picked one paper, we'd be giving you pretty bad advice. But I still think it's a really interesting question and a really interesting case to discuss. It lets us think about things like what an error is, what counts as an error, and how we treat errors. It also lets us talk about some different research methods and how they go about answering questions like this.

So we're going to basically talk around this question and never actually give you an answer. I just wanted to be clear about that upfront. David, do you want to tell us a little bit about what double checking is?

David: Yes. So double checking is a practice adopted in many industries, not just healthcare, even if you don't call it double checking. We know of it in aviation, between the actions of the pilot, the co-pilot, and air traffic control. We know that in major hazard facilities there are double checking activities around risk and work permits. In the utility sector, there's a confirmation process around energy isolations.

I think we could—if we thought about it—come up with examples of double checking practices institutionalized in all of our high hazard industries. So it's a really important question for us to understand the safety benefit that comes from that validation process of double checking.

So when we're talking about double checking—and what we're talking about specifically today is medicine administration—I suppose there are two broad categories. There's the confirmatory process, which is when someone says, hey, this is the drug, this is the dose, can you check it for me, in the presence of the other person, and the other person just goes, yeah, that looks right. Then the other one is independent: both people independently work out what's needed, and then they compare their notes. Those are what we might refer to as a primed double check and an independent double check.

And then, Drew, I thought it was interesting the way you drew this distinction between double checking being a practice in the organization versus a mandatory policy. When you were briefing me, at least, you talked about how it's a different question whether you're asking "do bike helmets work" or "does a mandatory bike helmet law work". I don't know if you want to say anything more about the difference between a practice and a mandatory policy.

Drew: No, I think you've explained it pretty well there, David—we're measuring different things, and it matters how you measure it. Do we put the policy in place and look at the overall effect, or do we actually check whether people are following the policy, and only score it for the people who are following it, rather than evaluating the policy as a whole?
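
A minimal sketch of that distinction, with entirely invented records (nothing here comes from the episode or the studies discussed): the same data can be scored by whether the policy applied, or by whether a check was actually performed—roughly the intention-to-treat versus per-protocol distinction from trial design.

```python
# Illustrative sketch with made-up administration records.
# "Policy" scoring: compare everyone covered by the policy vs. everyone else,
# regardless of whether the check actually happened.
# "Practice" scoring: compare administrations where a check was actually done
# vs. where it wasn't, ignoring what the policy said.

records = [
    # (covered_by_policy, check_actually_done, error_occurred)
    (True,  True,  False),
    (True,  False, True),
    (True,  True,  False),
    (True,  False, False),
    (False, False, True),
    (False, False, False),
]

def error_rate(rows):
    """Fraction of administrations in `rows` that involved an error."""
    return sum(r[2] for r in rows) / len(rows) if rows else float("nan")

policy_group    = [r for r in records if r[0]]
no_policy_group = [r for r in records if not r[0]]
checked         = [r for r in records if r[1]]
unchecked       = [r for r in records if not r[1]]

print("policy vs no policy :", error_rate(policy_group), error_rate(no_policy_group))
print("checked vs unchecked:", error_rate(checked), error_rate(unchecked))
```

The two comparisons can give quite different answers from the same records, which is the bike-helmet point: evaluating the rule is not the same as evaluating the behaviour it is meant to produce.
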

But we might say a little bit about the particular area we're studying. For some reason, a lot of the papers about double checking in medicine are specifically about medical administration. The idea is that a doctor has already decided what is the appropriate medication, so we're not talking about mistakes made in selecting a medication. And the doctor has already decided what is the appropriate dose, so we're not talking about mistakes in the doctor deciding on the dose.

What we're talking about is that decision has been recorded in some sort of chart or in some sort of electronic system, and a nurse comes to actually give the patient the medicine that's been prescribed. So that's what medicine administration is, the actual giving of the medicine. An error is where the medicine that is given doesn't match the medicine that has been prescribed. David, I don't know about you, but it's surprised me just how difficult it is to even define whether that is an error.

So we've got all sorts of weird situations. We've got, for example, the nurse looks at it and thinks that maybe the doctor has made a mistake—the number just looks wrong. You technically could say that is an error in administration, because the nurse has not given the medicine that's prescribed. Or the patient is supposed to be given it at 9:00, but the patient is in the shower at 9:00, so it gets given 15 minutes later. No real difference—is that actually an error? Or the patient gets prescribed one thing and they get given a generic version of the exact same drug. Or it has to be measured out in 50 mil doses and the dose that ends up being given is 51 mils, and they carefully record that, and it gets counted as an error.

All of those things could be counted as errors, or they could not be counted as errors. Or maybe it only counts as an error if someone detects later that the wrong medication has been given, and otherwise it goes unnoticed.
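
A minimal sketch of why that matters for the numbers (the fields, tolerances, and records below are invented for illustration, not taken from any of the papers): the same three administrations can be 100% errors under a strict definition and 0% under a lenient one.

```python
# Illustrative sketch: how the definition of "administration error" drives the rate.
from dataclasses import dataclass

@dataclass
class Administration:
    prescribed_drug: str
    given_drug: str
    prescribed_dose_ml: float
    given_dose_ml: float
    minutes_late: int
    generic_equivalent: bool  # True if given_drug is a generic of prescribed_drug

def is_error(a: Administration, strict: bool) -> bool:
    """Classify one administration as an error under a strict or lenient definition."""
    wrong_drug = a.given_drug != a.prescribed_drug and not (
        not strict and a.generic_equivalent   # lenient: generic substitutions don't count
    )
    dose_tolerance = 0.0 if strict else 2.0   # ml of slack allowed under the lenient rule
    wrong_dose = abs(a.given_dose_ml - a.prescribed_dose_ml) > dose_tolerance
    timing_window = 0 if strict else 60       # minutes of slack allowed under the lenient rule
    wrong_time = a.minutes_late > timing_window
    return wrong_drug or wrong_dose or wrong_time

sample = [
    Administration("drug A", "drug A", 50.0, 51.0, 0, False),    # 1 ml over the dose
    Administration("drug B", "drug B", 10.0, 10.0, 15, False),   # 15 minutes late
    Administration("drug C", "drug C gen", 5.0, 5.0, 0, True),   # generic substitution
]

for strict in (True, False):
    rate = sum(is_error(a, strict) for a in sample) / len(sample)
    print(f"strict={strict}: error rate = {rate:.0%}")
```

Tightening or loosening the timing window, the dose tolerance, and the treatment of generics is enough to move the headline rate from one extreme to the other.
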

David: I think that's an important question to ask about what counts as an error, particularly when we're looking at literature reviews and we're looking at different research studies. It can be important to know what people are counting as errors because as we go through, we'll introduce some rates and some statistics from certain studies. And without knowing exactly what's being counted as an error, you can form very different views about the problem associated with medicine administration in the hospital.

Drew: Yeah. I wanted to start this podcast by talking about how serious a problem this is that we're trying to solve. You look at the different studies and it ranges from one study where 2 in 1,000 medicine administrations are in error—and most of those are harmless—to another study where 70% of the time the medicine administration is an error in some sense. Even just trying to describe the scale of the problem, and how important it is that we have this safety protection, is really hard to do.

David: So it's hard to know what counts as an error in the different papers. How do we go about getting information on whether double checking works? How did you approach trying to answer this question from our listener?

Drew: The honest answer is our listener provided us with a really good paper, so I started off writing the episode around that paper. But I then went and looked at some of the papers that that paper was referring to, and so I got a bit of an idea of the range of different stuff that's out there. These are just some examples of different types of ways people have tried to investigate the question of whether double checking works.

The most basic one is you go into incident databases and you ask: how many times do we have actual adverse events that happen because of a lack of checking, and how many times does this happen even though there is double checking? We've got studies like that, and that's originally where people came up with the idea that there's a problem we need to solve through checking in the first place.

Typically, there's an accident, they investigate what went wrong, and what went wrong was that we administered the wrong medication. How do we fix that? We put in place a process for double checking. Now, every time there's a similar incident, we look and see: did someone double check? And if they didn't double check, we say that failure to double check is what caused it. We've got a number of studies like that.

We've then got a number of qualitative studies. So these are ones where they talk to nurses, pharmacists, and doctors about patient safety, or maybe they specifically ask them about medicine administration. They say, what do you do, how well does it work, what frustrates you, what went well, what doesn't work well, what do you worry about?

A good example of this is a 2015 paper by a friend of the show, Dr. Tanya Hewitt. It's part of a wider study interviewing practitioners about their beliefs about patient safety practices. They found that checking was something that came up a lot in these conversations, so they decided to write a paper digging into that specific issue.

And then we've got some other studies that set up simulations or experiments. There's a 2015 study by Dr. Amy Douglas where they had simulated patients and deliberately created errors, and then they either had one nurse or a pair of nurses working together and tried to work out whether the errors were detected or not. It's a little bit of an artificial situation, but it's nice and definitive because you know whether the error exists or not. You're just checking whether the single nurse or the pair of nurses were able to work it out.

If we want to get a little bit more thorough, we go into observational studies. A really good example of this is a paper published in 2021 in a really good journal, BMJ Quality & Safety. I'm not going to list out the authors because there are 18 authors on this study; the lead author is Professor Johanna Westbrook from Macquarie University. This was a really thorough study.

The researchers shadowed the nurses, and the researchers separately had little devices on which they could record whether and how checking occurred. And then they also entered their own details of the medication being administered. Then they sent off those records to an independent researcher who hadn't watched this happen to work out whether the medication that was administered matched the patient record.

It was part of a larger study about electronic health systems. The researchers were embedded and around all the time so the nurses got used to seeing them around. They didn't feel that they were being checked up specifically about the double checking, so we got some good information about how checking practices work. We'll talk a little bit later about the results of that study.

The gold standard, which is still really hard to do, is an actual clinical trial, treating checking as if it were a treatment. A good example of this is a crossover study.

So ward A uses single checking and ward B uses double checking; you watch that for a while, and then they swap over—ward A uses double checking, ward B uses single checking. That gives you a nice fair test, but there's a lot of noise, because there are all sorts of reasons why things could be varying over the trial. In particular, how do you know whether an error has happened if no one notices it? We don't really know when the errors have occurred. If we had a completely reliable way to know when someone's made a mistake, [...] make mistakes.
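
A small simulation can illustrate that detection problem (the numbers are invented; this is a sketch, not a model of any real trial): assume each condition has a true error rate, assume only a fraction of errors are ever noticed, and look at how the observed counts behave across repeated runs.

```python
# Illustrative sketch: undetected errors make single-vs-double comparisons noisy.
import random

random.seed(1)

def observed_errors(n_administrations, true_error_rate, detection_prob):
    """Count only the errors that someone happens to notice."""
    detected = 0
    for _ in range(n_administrations):
        if random.random() < true_error_rate and random.random() < detection_prob:
            detected += 1
    return detected

N = 5000           # administrations per ward per period (hypothetical)
DETECTION = 0.3    # assume only 30% of errors are ever noticed (hypothetical)

for run in range(5):
    single = observed_errors(N, true_error_rate=0.020, detection_prob=DETECTION)
    double = observed_errors(N, true_error_rate=0.015, detection_prob=DETECTION)
    print(f"run {run}: observed errors  single={single:3d}  double={double:3d}")

# Even with a real 25% reduction in the underlying error rate, the observed counts
# bounce around from run to run, so a genuine difference can easily be masked.
```
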

David: So Drew, there are half a dozen or so different methods for researching this question of how do we understand how double checking practices work in the real world, and how do we understand whether double checking processes work in terms of improving safety? I think you've highlighted a couple of studies or at least a study in each of those different methods.

But I think, just before we move on to the next section, it's clear that medicine administration errors are a problem. It seems, from the literature, that they're more common than we'd like, although in one of the studies, one hospital with 5,000 staff had, I think, done 50 million drug administrations in a 12-month period. That's just a huge volume of activity, and we don't really know the true rate from the papers. The reason I say we don't know is that the rates range from, like you said, 2 in 1,000 to 72 in 100 administrations where the person administering the drug gets it wrong.

Look, medicine administration feels like an important area of research, and I think double checking as a control for the risk around it is an important area to research as well. Conducting realistic studies that get enough data, although really hard, is probably quite a valuable avenue for researchers to pursue.

Drew: Yeah. I mean, this is a really serious problem that people are trying to solve. I've mentioned that neither of us is a doctor, but one of the examples that keeps coming up is that there are some drugs where there are two different ways to administer them. You can administer it into a vein or into the muscle, and if you do it the wrong way, the patient dies. A mix-up between the drug that's meant to go one way and the drug that's meant to go the other way is really a life and death error that comes down to a decision right at the coalface, with the person giving you the injection.

I don't want to pretend that this is not a serious problem or not one that we would really like to have more reliable solutions for. The trouble is we just really don't know whether having a second person check helps or not.

David: I think we might put a lot of faith in that second-person check process, which is: well, if one person makes a mistake some percentage of the time, and we check it all of the time, then the chance of both people making the same mistake should be a significantly smaller percentage.
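
The back-of-envelope version of that intuition, with purely illustrative numbers: if the person administering errs with probability $p_{\text{make}}$ and an independent checker misses any given error with probability $p_{\text{miss}}$, then

```latex
P(\text{undetected error}) \;=\; p_{\text{make}} \times p_{\text{miss}}
\;\approx\; 0.01 \times 0.1 \;=\; 0.001
```

That tenfold reduction rests entirely on the independence assumption; if priming or deference means the checker tends to miss exactly the errors the first person makes, the product drifts back towards $p_{\text{make}}$, which is the concern the rest of the episode explores.
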

We want to understand what are the potential problems with double checking, and for this part of the episode, we're going to use a paper authored by Dr. Gerry Armitage. The paper's titled Double checking medicines: defence against error or contributory factor? It was published in 2007 in the Journal of Evaluation in Clinical Practice.

Drew, we really like good titles of papers—I think I particularly like good titles—and this is exactly what the paper talks about, and I want to say it again: double checking medicines, is it a defence against error or a contributory factor in error? It's quite a contentious debate: could double checking actually be contributing to more errors than single checking?

Look, Armitage—as the single author of this paper—has published a fair bit about patient safety, including several papers that are cautiously skeptical of double checking. And Drew, I was thinking this might be one of the first times we've talked about a paper on the podcast with only a single author. You just talked about one from Macquarie University, and I assume Professor Jeffrey Braithwaite was one of those 18 authors. But how common is it, as an editor of a safety journal, to see a single author paper?

Drew: Good question, David. I actually don't know what the exact situation is here. I try to make sure that whenever we talk about an author we give them their correct title, and at the time he published this paper he wasn't Dr. Gerry Armitage—two years later he was. My best guess is that this was actually part of his Ph.D.; that's where it's really common to have a single author paper.

The other time is when it's an essay or opinion piece. A few of the really big names in safety tend to publish single author papers every now and then—Rasmussen did it, Sidney Dekker does it, Leveson does it, Hollnagel does it. When they're presenting a new idea or arguing a position is a common time to see a single author. Armitage has two types of papers: these single author position pieces, and then multi-author empirical or qualitative studies.

Sorry. Go on, David.

David: Thanks, Drew. I saw he comes from a registered nursing background, before obviously completing his Ph.D. This study involved an analysis of 991 incident reports about medicine administration errors, followed by interviews with 40 healthcare professionals—doctors, nurses, pharmacy staff. It was a targeted sample: he looked at these 991 reports from within an individual hospital, and then he sought out (I suppose) volunteers among the healthcare professionals in that hospital who had experience of medication errors. So he wanted to talk to people who had direct experience of medication errors.

The analysis of the incident reports in that study, Drew, indicated a few interesting things: that the incident reports themselves tended to attribute the medicine administration errors to individuals—the person administering the medication—and that there were a number of cases where the errors happened despite, or because of, double checking.

It wasn't always clear from the incident reports whether the double checking had actually happened, but in at least 12 clear cases out of the 991 drug administration errors, double checking was involved. Even when the double checking occurred, it was still the individual who administered the medicine who seemed to be blamed in the incident report.

Drew: David, I don't know what you thought about this. I was interested in what the standard is, because he seemed to be almost arguing that if we have cases where there was double checking and an error still happened, then that was an argument against double checking. Whereas I see it more as an imperfect defence: even if double checking didn't catch everything, if it still substantially reduced the number of errors while some errors still got through, it would still be something worth doing.

David: Yeah, I think that's a good interpretation. I think as a layer of protection, it may not be perfect, but we don't know from this study what happened in the 979 out of the 991 cases where double checking did actually work and it was something else that was a problem.

Drew: But what is really interesting is that the rest of the paper, from the interviews, actually goes in the opposite direction. He's essentially arguing that there are good reasons to believe that if you put in place a double check, then even the original person administering the medication might be more likely to make a mistake. So there's the possibility that the double check doesn't help, but there's also the possibility that having a double check makes things worse. And there are some actually quite compelling reasons to believe that that's something we should worry about.

David: And I think Armitage did a pretty good job of presenting this qualitative research, with the inclusion of some quite detailed quotes in the paper, and really gave a sense of where the themes were being drawn from in terms of what people actually said. I must admit, Drew, it got me so interested that I would love to actually go and read the 40 transcripts from these interviews, because of the way the information was presented in the research paper. I think you're doing a good job of qualitative research if readers want to then go and actually read the raw data.

Drew: It depends, David, whether they want to read the raw data because they're interested, or because they don't trust you and suspect you've misrepresented it.

David: Yeah, maybe. I want to talk about these four themes that Armitage says could be problems or issues with the double checking process. The first is deference to authority. Do you want to talk about that as an issue in double checking?

Drew: Sure. There are a couple of things going on here. The example that he gives in the paper is the idea of a nurse being worried that the medication is not the right one to administer, and then checking with a more senior nurse who tells them to go ahead and deliver the medication. That's an example where the person you're checking with is sort of enforcing that the medication gets delivered as written on the piece of paper, which increases the chance that something incorrect on the paper just gets carried out anyway. Having the double check sort of reduces the opportunity to challenge or to question what's going on.

I don't think it was in this paper, David—I definitely read this somewhere else—but I read about what was described as a process of deference to whoever is quicker with the math. Someone was making the argument that if you have two people doing calculations, the first person to come up with an answer is probably the one who is faster but less accurate at mental arithmetic. And if they say the answer, the other person gets drawn into that answer rather than doing their own independent calculation.

The first person says, it's 60 mils divided by 3, that's 30 mils, and the second person just says, yeah, that sounds right. But the first person has made an error—they've divided by two instead of dividing by three. The second person goes with the flow and doesn't have time to carefully work it out for themselves, and doesn't want to appear stupid by saying, no, stop, let me work out for myself what 60 divided by 3 is. So you end up with the quicker, less accurate answer taking priority.

David: I think what we'll see as we go through is that when you actually get this double checking with the checkers in really close proximity, engaged in the process at the same time, it opens up that hierarchy and some of those social processes to play out. The second theme—and I'm actually sitting on an episode request from a listener who wants us to talk about diffusion of responsibility—here it was called reduction of responsibility or dilution of responsibility.

It was this idea that if I know someone else is going to check what I'm doing, I can be less thorough. Or if I'm checking someone else's work, I know it's already been calculated or checked by a competent person before it gets to me, so I probably don't really need to give it my full attention. And if both people involved in the checking process feel like the other person is going to do a more thorough check than them, then you might end up in a situation where neither takes the ownership and responsibility to make sure they do a complete job of their part in the process.

Drew: And this happens to all of us in our own lives—it's a real thing. I very often co-supervise students, and if they send me something to review and the other supervisor gives a really detailed response, then I'm likely to just skim through it. Whereas if I give a really detailed response, the other person is just likely to say, yeah, whatever Drew says.

That's what we do all the time—there's no point in duplicating work. So if two people have the same job, either we assume that the other person is doing it really well, or one of us is the primary person and the other is just checking, putting less intellectual effort in.

David: I suppose the risk is where both people make assumptions of the other person in the process and those assumptions don't really line up with the way that each party sees the world or participates in the process. 

Drew, the third theme here was about autoprocessing—this idea from some of the respondents in this study, the participants in the interviews, that double checking happens with very little active appraisal. Here they noted a distinct difference in the structure of the checking process between different professions within the hospital.

If you've ever been into a chemist, a pharmacy, or a drug store—depending on where around the world you're listening—what you see is someone go and prepare the prescription, then put it in a little basket and sit it on the bench. Then the pharmacist or someone else comes along, picks up the basket, goes through it, looks at the drug, looks at the prescription, and does their own checking.

That's like this independent check where there's actually no interaction between the person who does the first preparation or check and the person who does the confirmatory check. They're not talking to each other, they're not priming each other, they're doing two independent reviews, if you like.

And then compare that with the nursing staff, basically at the bedside, where one person holds the drug out, reads the label, reads the dose, and the other person kind of nods. Maybe they're saying, what are you doing on the weekend or what are you doing after your shift, and they're doing other things at the same time. The idea of autoprocessing kicks in, which is that things are right so much of the time that people just assume they're right all of the time.

Drew: I was actually struggling to imagine what true independence could look like because even in the case of the pharmacist, there's an implicit message going on. The person who puts it in the basket is saying I have done this and this is what I think the answer is. The other person checking it is picking it up and saying, okay, with the assumption that 99 times out of 100 these things match, my default assumption is that they match. I’m just checking in case that they don't.

David: What do you need, Drew? Do you need a third check where two people independently take the same script, put two baskets on the table next to each other, and then the third person checks that the thing in the basket is the same?

Drew: I literally think it's impossible to create complete independence—unless, literally, you both separately prepare the medication and then analyze them, and only if both medications are equal do they get administered to the patient. It's just not possible to be truly independent. There's always going to be some degree of following down the path, doing this automatically; it's a job that we do thousands upon thousands of times. There is no way it doesn't become a little bit [...]. There's no way we give it our full attention 100% of the time.

David: I think, Drew, one of the things that was interesting here—where you said there's no way we give it our full attention 100% of the time—was an idea that actually came out of some of the quotes: that the double checker may do their own internal risk assessment that controls how much attention they put into the double checking process.

Depending on the drug that gets read out—there was a quote in the paper along the lines of: look, for blood products I would always make sure I did a proper double check, and if it was something that I knew couldn't really hurt the patient, then it didn't matter so much if I didn't do the double check properly, because even if the first person got it wrong, it wasn't really going to hurt anyone.

Drew: I got a real sense from the qualitative interviews in this paper, and in the other qualitative papers, that there's a formal risk assessment that goes on, and that there are certain types of drugs that get classified as high risk—they're the ones where double checking is mandatory.

But even within that category, and even outside that category of high-risk drugs, everyone has their own internal risk assessment calculus. There are certain things—whether because of experience, because they know these mistakes are common, or because it's a mistake they've made themselves and they're worried about it—that trigger genuine, rigorous scrutiny. Other times, something might be technically high-risk but actually it's treated as fairly routine.

David: Drew, the fourth theme, after deference to authority, reduction in responsibility, and autoprocessing, is just the simple idea of lack of time. If you're running around trying to find someone to do the double check while you're trying to get all of your jobs done, it can be hard to do a thorough double checking process purely because of the time constraint—for the first person as well as for the person checking.

Drew, you raised this idea from the Westbrook study that most double checking is in fact not independent. You talked about struggling to think of a situation that was truly independent; well, this study claimed that probably only around 1% of the double checks that went on were in fact independent checking, and the other 99% had some degree of the first person priming the second person with information. Drew, let's move on. What's the evidence for and against double checking? We've talked about a number of studies so far in the podcast.

Drew: I think it's worth saying upfront that despite the arguments made in that Armitage paper, there is no evidence, and no one seriously believes, that double checking actually makes things worse than single checking. Even though theoretically there are reasons to believe it could, these are really all arguments about the ineffectiveness of double checking, not about it making things actively worse. There's no evidence of that actually happening.

But there is really a lot of evidence that double checking sucks up a lot of time, and then a big question mark about whether it helps and certainly whether it helps proportionate to how long it takes.

The Westbrook study, for example—this is the observational one where they did lots of following nurses around—found that for the mandatory double checking there was no effect. For things where they had to do double checking, it really didn't make a difference whether they in fact did or didn't do the double check; even when people forgot to do the check, there was the same rate of errors. But they did find, oddly enough, that where people didn't have to do double checking and chose to do it anyway, there was a positive effect. That sort of voluntary check—where they were a bit worried themselves and got someone else to check—seemed to reduce errors.

There are some weird statistical things in that study though that have me not wanting to take the results too definitively. In particular, they found a huge number of errors. Something like 72 errors per 100 medicine administrations, which suggests to me that they were being really, really pedantic about what was and wasn't an error to the point where these results may not meaningfully reflect what's actually going on. 

David: Drew, this paper also forced me to go and do my own digging. I pulled up a study because there was something mentioned in the literature review about a study where, over a seven-month period, double checking and single checking made no difference to error rates—a little bit like what you just said there about the Westbrook study.

What I actually found—this was in another hospital in Australia, published in 2002—is that they had a double checking process in place, and they wanted to change certain medicine administrations to a single checking process. What they did was a survey about drug administration, and they also did a competency check of people, confirming that they could administer drugs, let's say, reliably and accurately with a single checking process.

They then observed the administration of drugs, and they calculated the error rates and incident rates for seven months after they made this change from mandatory double checking to mandatory single checking. They found that, in the error rates themselves, it made no difference.

But they did find something in the survey questionnaire, when they asked about things like level of responsibility, how thoroughly the healthcare practitioners administering the drugs felt they were delivering the medication, and their sense of accountability over medicine administration. They found a number of increases around, let's say, role and task factors that we might normally see in an engagement or other psychology-type questionnaire—people's sense of accountability and purpose in their roles.

They also reported in this study a saving of time of up to an average of 20 minutes per medication round in the hospital, which goes to that issue of time. They felt that they could pay greater attention to the other health needs of the patients and to what was going on with the patient's condition, and they weren't running around on night shifts trying to find another nurse in the ward to check a medicine administration.

Interestingly, like you said Drew, there's nothing that suggests that double checking made things worse. But there's definitely some research suggesting double checking may not have the impact on improving safety that we might believe it has.

Drew: I did go and have a look at some of the systematic reviews and tried to get a sense of the overall weight of the evidence rather than just the single papers. Typically, what we find is that there is some work which shows some benefit for double checking. The trouble is that the methodological standard of all of these projects is pretty poor, and there aren't a lot of them anyway. It's kind of choose-your-own-answer to the question, depending on exactly how you [...] the individual studies and what you pay attention to.

For example, some of the studies don't actually measure the error rate—they just measure the compliance rate. In some of them the error rate goes down everywhere, which suggests that the fact that researchers are watching people is having more of an effect than the double checking. And some of the studies have such a tiny number of errors that we can't draw statistically significant results.
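
To illustrate that last point about tiny error counts, here is a minimal sketch with invented counts (not taken from any of the reviews): even a halving of the observed errors between two arms can easily arise by chance at these sample sizes.

```python
# Illustrative sketch: if both arms truly had the same error rate, how often would
# chance alone produce a gap at least as big as the one we "observed"?
import random

random.seed(0)

n_per_arm = 1000        # administrations observed in each arm (hypothetical)
observed_single = 6     # errors seen with single checking (hypothetical)
observed_double = 3     # errors seen with double checking (hypothetical)
observed_gap = observed_single - observed_double

pooled_rate = (observed_single + observed_double) / (2 * n_per_arm)

def simulate_gap():
    """Draw both arms from the same pooled rate and return the difference in error counts."""
    a = sum(random.random() < pooled_rate for _ in range(n_per_arm))
    b = sum(random.random() < pooled_rate for _ in range(n_per_arm))
    return a - b

trials = 20_000
at_least_as_big = sum(simulate_gap() >= observed_gap for _ in range(trials))
print(f"chance of a gap >= {observed_gap} under 'no difference': "
      f"{at_least_as_big / trials:.2f}")

# With counts this small, that probability comes out well above conventional
# significance thresholds, so the data can't distinguish the two conditions.
```
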

That's why I said at the start of the podcast, there's no clear answer to this question. All there is, is a big question of what do you do with the fact that we don't have a clear answer? We've got this practice that a lot of people do. We've got evidence that is pretty weak that doesn't say the practice is bad, but it doesn't give a slam dunk answer that the practice is good either. 

Is it justified to make people spend time on a practice when we don't have clear evidence of whether the practice helps or not? Lots of people, when they're interviewed about the practice, complain about it and say that it's not helpful or it's distracting. And there are lots of authority figures who say, this is a cause of accidents if we don't follow the practice.

David: Drew, what would be your next steps then? Do you want to keep working through this debate about what we do with this incomplete information? Whose responsibility is it to find the information to help us understand the practice further?

Drew: There are some organizations whose actual job this is—either charities or statutory authorities that are supposed to give advice on patient safety or on what the currently approved treatment options are. I found it really interesting that the general response to the types of studies and systematic reviews we've talked about in this podcast basically shifts the burden of proof the other way. They say, we've got this practice and there's insufficient evidence to abandon it.

I thought that was really interesting. Those aren't the words you usually see when you're evaluating whether to give people a medicine—"there is insufficient evidence to justify not giving someone this medicine."

The other thing is that there's a lot of "if it's not working, it's because people aren't doing it right." Lots of people give advice that says the important thing is that we have independent checks, and that those checks are reserved for the most dangerous medicines. They say, well, the reason we've got mixed evidence is because people aren't doing the checks independently, or because the policies are too broad-brush and aren't just applied to dangerous medicines. I personally find that a little bit of a strange argument.

I think the first time you propose a practice and someone doesn't do it, you can say, okay, well, it's still a good practice, they just need to do it properly. But when you've got a long-standing practice and good evidence that no one ever actually does it the way it's supposed to be done, I think you need to question the overall feasibility. If we’re supposed to be doing independent checks but no one ever actually does the checks independently, then maybe it's not feasible to expect independent checks. 

David: I think it's hard, Drew, because the practical, rational side of me can go, well, okay, it seems like a sensible thing to do for these dangerous, risky situations—get a second person to check before it happens. But I suppose the social psychology researcher in me knows how this may play out in the real world, which is that a practitioner makes their own risk assessment, primes the situation, and tries to get on with their job.

We've seen this outside of the healthcare sector—I've seen it in some industries. I'm going to talk about the utility sector briefly, because they do, say, high voltage live line work and low voltage live line work. Now, high voltage live line work might be 100,000+ volts, and low voltage work might be anything less than 11,000 volts.

It's actually the high voltage work—which carries so much risk—that is so well controlled, so well thought through, and so well planned. Most of the problems, and most of the fatalities that occur in that industry, happen in what might be perceived as the lower risk activity, because all the systems, processes, and mindsets in those organizations have been built around the fact that it's perceived to be a lower risk activity.

It's just a hypothesis of mine, but it would be really interesting to look at medicine administration errors and how they're actually categorized from a risk point of view by the hospital—what the relative error rates are between higher risk drugs and lower risk drugs. Because I'm going to suppose—to put a hypothesis on the table—that the error rates are going to be much higher for the things the hospital perceives as lower risk, but I don't have any evidence behind any of that.

Drew: There certainly is evidence in some of the studies that that is the case. But it's not quite fine-grained enough that we can answer the question we really care about: does the double checking help in those higher-risk situations? You can put forward contrasting hypotheses. You can say, well, everyone is so careful in those situations anyway that having the extra check doesn't help. Or you can say those are the most sensitive situations, so they're the ones where we want every layer of protection we can get, so we want double checking.

Mentioning social psychology, I realized myself, just in preparing this episode, how strong the forces are. We have a practice here which is expensive, and for which there is not a strong evidence base that it helps safety. And I am completely unwilling to say, this is a bad practice, we should get rid of it, until we've got the evidence. The fear of recommending stopping something, even when there is no strong evidence base supporting that thing, is still a real fear. None of us wants to be responsible for saying, let's do less safety.

David: I think you're right, Drew, but I think it's that Westbrook example you called out: in those mandatory high-risk checking situations, the double check didn't change (I suppose) the error rates, but where the first person wanted a second opinion, then obviously it did have an impact. Now we're back into the safety work versus safety of work discussion about mindset.

You can have the same practice in two different situations, with the different mindset of the first person involved in the practice generating different outcomes—one has an impact on the safety of work and one doesn't. That's why we found it so hard to broadly categorize certain practices as being safety clutter or not safety clutter under the safety work heading, because the same practice can make a very high contribution to the safety of work in one setting and no contribution in another setting.

Drew: Having picked up a clear listener question and failed to give any answer to it, do we have any practical takeaways we can offer from this episode?

David: Look, I think I had a couple, but do you want to get us started, Drew, with these practical takeaways? Before I throw it over to you, there was a line in one of the papers that I copied across because it stood out to me, given some of the work that I've been involved with recently. Healthcare is a very human-centered affair where you've got actual human action as the last (let's say) line of defense—or maybe the only layer of defense—against a major hazard outcome. There's no automated process to monitor, no automated process to detect, no automated process to recover in that risky situation. You've got these human actions that form the principal defenses.

The paper talked about how increasing error wisdom is imperative for these frontline operators—helping them understand, or building competency in them around, how errors can occur and the ways to mitigate them in terms of human actions. The paper then refers to the aviation sector and talks about crew resource management in the cockpit, things like that. I think that's something that's worth starting with from a practical takeaway point of view.

If you've got human actions, which are really critical controls in your hazard management process, then you really need to help people understand how those controls can fail.

Drew: Thanks for throwing that in, David, because I find that a really useful framing as we head into takeaways: no matter what else, this idea of double checking has built into it sympathy for the fact that humans are fallible. It encourages people to think, even though I'm doing my job, I'm doing my job competently, I'm doing my job with care, there is nothing wrong with admitting I might be wrong—let's get someone else to check it. That is, if it's built into the culture rather than being a mandatory rule.

A very healthy approach to error is just recognizing that we're not perfect, and having someone else check and tell us that we're wrong is not our mistake, it's not a bad thing. It's just part of the way we do business.

David: Drew, what else might we do with these double checking situations that we might have in our organizations?

Drew: I think it might be useful for anyone in safety to just think about situations where we've got two different humans inside the same decision loop. It might be explicit checking, like in this case. It might be just having someone else review your work, or signing off on someone else's work. Think about how much extra confidence you have in the safety of the task because of having that second person, and whether that confidence is justified. Is the second person actually adding that level of safety, or are they just adding safety work? Is the extra time and effort of having a second person in the loop worth it? Are there better things we could be doing with that time, or do we think that genuinely it's a sensible decision and we've done the right thing by having two different people checking that same task?

David: Drew, I think the last practical takeaway for me was one that was called out in even the most (I suppose) skeptical papers, like the Armitage paper: encourage critical thinking in the checking process. If you've looked at your checking processes and thought, yes, it does give me some extra confidence—or, I'm not sure it gives me confidence—a number of papers talked about doing what you can to create some independence: enforcing a time difference or a physical distance between the first person and the checker. That way you limit the extent to which the first person can prime the response of the person who's checking the work, or can engage in some other social process which might distract the checker from doing what you hope will be a thorough and independent check.

Drew: I think the pharmacies are a great example of that. I have to admit, I have dozens of times seen the pharmacist put the medication and the script into that little basket, and I had never once noticed its purpose in creating that independence in the process—I just thought it was a weird thing about the way pharmacies operated. Now that you've pointed it out, David, I'm not going to stop noticing the utility of creating that separate check by putting things in the basket and handing it over.

David: I only know because I've been sitting there in the pharmacy frustrated going, there's my medication right there. The person's already prepared it. Why can't the person just give it to me? It just sits there for 15 minutes until the second person comes along and checks it and goes, well, here you go, here it is. That was the example that first sprung to mind when I was reading the paper.

Drew: We normally finish our episodes with an invitation to the listener. My question—one I'd be interested in having some conversation about, maybe on LinkedIn—is: what's your take on the default action we should take when we've got a common safety practice and we don't have good evidence for it? Is it okay to drop a practice just because there's no evidence? Is it okay to continue the practice even though there's no evidence? Is there an active ethical obligation to immediately try to search out evidence in those circumstances?

I always struggle with that on this podcast when we’re coming up with recommendations, just what to do with a lack of information, how definitive you can be. I'm interested in what people think. 

David: Yeah, Drew, I am too, because it's a question of where the burden of proof sits. Should we only have things that we can prove contribute to the safety of work, or should we have everything that we think might contribute until there are things we definitely know don't? I suppose that's been a bit of a central discussion across the 70 or 71 episodes of the podcast to date.

Drew, you've answered it a couple of times through the process so I almost feel like I shouldn't. But just for process sake, the question this week was, do mandatory double checking policies improve safety, and your answer is? 

Drew: Well, that’s it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com.