The Safety of Work

Ep.37 How do audits influence intentions to improve practice?

Episode Summary

On today’s episode, we discuss how audits influence intentions to improve safety practices.

Episode Notes

To help frame our conversation, we use the paper How Does Audit and Feedback Influence Intentions of Health Professionals to Improve Practice?

 


Quotes:

“...The two parts of this study that we’re going to talk about now, are really trying to address that first part of it, which is the information to intention gap…”

“In the field, there’s obviously other information, which is going to affect the decision, other than this particular report.”

“If there’s no data, professionals really want to see the data, before committing to whether or not they need to improve.”

 

Resources:

How Does Audit and Feedback Influence Intentions of Health Professionals to Improve Practice?

Feedback@safetyofwork.com

Episode Transcription

Drew: You're listening to the Safety of Work podcast episode 37. Today, we're asking the question, how do audits influence intentions to improve practice? Let's get started.

Hey, everybody. My name is Drew Rae. I'm here with David Provan, and we're from the Safety Science Innovation Lab at Griffith University. David, what's today's question?

David: Today's question is how do audits influence intentions to improve practice? This week, when I was looking for an episode that we might do, I was running a few different search strings in Google Scholar as we talked about in episode 34, and then I realized that we haven’t discussed audits yet in an episode.

In fact, Drew, I'm not sure if we've actually discussed safety management systems, [...] investigations, or a lot of the other safety work practices that we have in our organizations. Anyway, today we're going to talk about audits. When we talk about audits, it's a very broad term. There are different types of audits. External audits, internal audits. We can audit systems. We can audit individual behavior.

I have looked at the dictionary definitions, Drew, and they go something like this: “An official inspection of an organization, normally by an independent body.” But a quote like that isn't that helpful. In this episode, our working definition is going to be the review of an activity against specified criteria.

Before I go any further, Drew, do you have any opening comments about how you see or how you feel about the role of audits in safety management?

Drew: I think we need to keep this podcast safe for work, so I can't express my full view of audits in the language I'd prefer, David. What I will say is that audits come up most often in my own research whenever people are trying to justify why they're doing something that they know they shouldn't be doing. The explanation for almost any nonsensical safety practice is a story about an audit that happened and required something that was previously informal and working quite well to be turned into something legible and documentable to an audit process. It turned from an effectively working informal practice into a very well-documented paperwork practice.

David: Drew, I think for our listeners who are familiar with the safety of work model, [...] audit fits into that category of demonstrated safety work, which is directed back towards some stakeholder as opposed to being directed towards risk reduction of work. Full disclosure for me from the outset: I did find myself in an unpopular situation in a room of safety professionals at a conference a few years ago when I was presenting about safety clutter. When I was asked about the types of things that are safety clutter candidates, I suggested to the room that they should go straight back into their business and remove their safety audits, because those audits are probably unlikely to be adding any value to safety.

The reason I held that view at that time was that when I reflected on all of the companies I'd been involved in and all of the audits that I'd seen or been involved in in some way, I couldn't really identify a time when those audits had specifically directed an improvement that I thought was material to safety or to work, that wasn't already in [...] or wasn't already [...] or already underway by the organization.

With that as a backdrop, I did learn a bit by going through today's paper, so I'm interested to get your views as well on the research that we're going to talk about today, Drew. 

Drew: I really liked the way you framed the question, which comes out of the way this paper has looked at it. Trying to look for the long downstream effectiveness of audits is actually fairly hard, but there's this nice, simple intermediate question, which is how do people intend to respond when an audit says they've done badly.

David: You're right, Drew. I think any attempt to draw a direct link between an audit and an improvement in performance, however you thought about measuring that, was going to have a lot of intermediary steps, and when we talk about the model used in this paper, we'll show what those look like.

I really wanted to talk about an audit process that looks at work as done, not just an audit process or a study about whether the operational procedures match internal or external requirements. Is my procedure as good as the rule says the procedure should be, regardless of what actually happens? So we really wanted to look at practice audits.

In some ways, the audit process that we're going to talk about today could be considered a very well-designed behavioral observation. I think there's some hidden advice in the podcast today, and we'll try to make it a little bit more explicit at the end about what this paper could also potentially teach you for the behavioral safety programs in your organization.

Drew: I think that's definitely something that we'll see as we go through the paper. We're looking at an audit of cardiac rehabilitation which is not something I'm an expert in or you're an expert in. But certainly superficially, this looks like a very, very reasonable evaluation particularly because a lot of the audit is just benchmarking against other people who do exactly the same sort of work.

David: Drew, let's dive into today's paper. I tried to find a paper that I thought had a reasonably sound research design with both observational and experimental methods, but like always, I learn a lot about research design from you in all of these episodes. Anyway, I think it's a well-designed study.

Let's dive into it. The paper for today is titled, How do audit and feedback influence the intentions of health professionals to improve practice? A laboratory experiment and field study in cardiac rehabilitation. The paper was published in 2017 in BMJ Quality & Safety. It looks like the BMJ have their own quality and safety journal, mainly around healthcare and medicine quality and safety issues, with a lot of writing in there about patient safety, research, and outcomes.

The authors are Wouter Gude, Mariëtte van Engen-Verheul, Sabine van der Veer, Nicolette de Keizer, and Niels Peek. The authors are spread across two universities, the University of Amsterdam in the Netherlands and the University of Manchester in the UK. 

The BMJ does this with its papers: it's got a really specific design for its abstracts, and it starts with the objective. The stated objective of the authors was to identify factors that influence the intentions of health professionals to improve their practice when confronted with clinical performance feedback.

The authors said that this is a really essential first step in the audit and feedback mechanism. Even though this is a really specific setting within the healthcare industry, like you said, neither of us are cardiac rehabilitation specialists. I think there are endless lessons in this research for the audit process itself, but then also how we think about what people are going to do with the audit outcomes once they are delivered.

Drew: I think it's fair to say that the necessary first step for an audit to be effective is that the person who is being audited has to want to improve the specific things that the audit has called out. If we look at that, are there factors that increase or decrease people's likelihood of even wanting to respond positively to those audit findings? Regardless of anything else, that's the necessary first step: at the very least, people have to acknowledge that the audit is legitimate and have to want to respond positively.

David: The literature review of this paper is really interesting, and we like it when the papers do a reasonable job of reviewing the literature. The authors talk about the healthcare industry increasingly adopting this audit and feedback strategy to improve the quality of care.

They review the literature and find that something like 75% of the time, in studies that look at audit and feedback processes on practice, the feedback doesn't result in any change. I'm not quite sure what's going on here, because if you've got an objective measure from a clinical guideline or expert opinion about how something should be done, you've got specific practice data from well-trained professionals about how their performance matches that guideline or expert data, and you give them the results, the benchmark, the way to improve, and the means to improve, then three-quarters of the time, nothing happens. I think that's a fascinating finding in the literature. Not surprising, but still really interesting.

Drew: I decided to take a bit of a personal approach to how I thought about this. I've tried to imagine someone auditing something that I'm doing and that I think I'm good at. I'm probably going to give a lot of teaching examples through this paper, because any time we get to this feedback on cardiac rehabilitation performance, I'm thinking, what if someone came into my classroom and did a benchmarking exercise about my teaching? How would I feel about what they had to say?

The idea that 75% of the time, the response to someone criticizing my teaching is "get stuffed" does not surprise me in the slightest. We're talking about respected professionals who think they are doing a good job, who know they are doing a good job, who have been doing a good job for many years, being audited and told, these are the specific areas that someone says you need to improve. I don't think it's that surprising that the answer is, more often than not, no, get stuffed.

David: Yeah, it's a good story, Drew, and I wonder if it's more than 75% when the safety professional comes in with a safety audit to the manager. But our listeners can tell us their experience of intention to change or the difference between maybe genuine intention to change and stated intention to change following safety audits in their organization.

Drew: I guess the big surprise is that people are that honest, that it isn't just that 75% of people don't change. A lot of these are intentions to change, so these are people saying they don't even intend to adopt the recommendations. I think there's actually a mix of studies here. Some of them are actually about the effectiveness of improving the quality of care. So maybe these are people who do say superficially, yes, I accept the audit results, but later on it doesn't actually result in a change in practice.

David: Through the background literature, the authors identify a whole range of mediating factors that determine whether or not feedback provided through an audit process is going to result in change.

They identify things in the literature like who delivers the feedback, whether it's a supervisor or a colleague, whether it's delivered once or more than once, whether it's delivered verbally or in writing, and whether it includes explicit targets, an action plan, or direct improvements. There's all this stuff, but the researchers say that although we know these factors from previous studies, it hasn't helped us understand the underlying mechanisms.

Drew, I wanted to get your take on this, because it's sort of an indirect reference to our manifesto paper on reality-based safety science from episode 20. The quote goes, “Reviews attempting to deepen this understanding have equally failed to do this because most audit and feedback studies were designed without explicitly building on any extant theory.”

Drew: Yeah, I love that quote. It conveys to me a picture of lots of people doing very naive experimental work, doing trials that ignore the theory and then other people trying to build a theory of what works and doesn't work based on the results of these experiments. But without the feedback loop, none of the experiments are getting any better at testing or expanding the theories, that we just get more and more inconsistent experimental results, and more and more theories built on top of these inconsistencies.

David: Yes, the authors started with control theory, which is quite an old theory in relation to feedback loops. The authors talked about how many different disciplines have studied control theory, from mathematics to engineering to behavioral science.

According to control theory, feedback prompts its recipients to take action when their performance does not meet a predefined standard. Basically, if you're falling short, then the person will change their performance to bring it into line with the standard. 

They highlight three mechanisms that need to work for this feedback improvement process to be effective. If it doesn't work, as in those 75% of studies where the feedback didn't result in improvement, control theory says it's going to be for one of three reasons.

The first is the information-intention gap, which is basically that the feedback fails to convince the recipient that change is necessary: I don't agree with the need to change. Second is the intention-behavior gap, where intentions are not translated into action: I want to change, but I can't. Third is the behavior-impact gap, where the actions do not yield the intended effect: I did change, but it didn't change the results the way it was intended to.

This information-intention gap, intention-behavior gap, behavior-impact gap sounds fairly logical to me.

Drew: I think it's a simple and straightforward model. If I take it back to that example of someone coming into my classroom and saying, the problem, Drew, is that your PowerPoint slides don't have nearly enough animations on them. The first explanation of why there's no change is that I say that's a dumb idea, I'm not going to do that. Of course there's going to be no change.

The second explanation is that I say, yeah, that's a good idea, but I never have the time to get around to actually putting in the animations and transitions. Even though I agree it's a good idea, I just don't manage to do it.

The third possibility is that I say yes, fantastic, I'm going to go through and put little animated icons in every one of my slides, and the students hate it. I think that covers the whole range of main possibilities as to why the feedback doesn't result in improvement.

David: Yes, that's the whole theory, Drew. The two parts of the study that we're going to talk about now are really trying to address that first part of it, the information-intention gap, which is the audit feedback resulting in an intention to change. I suppose their base hypothesis, from control theory, would be that if someone gets feedback that their performance doesn't meet a standard, then those are the things they're going to want to change.

Drew, it actually helps to talk about the design of the second study first. The second part of the research was a longitudinal field study where they enrolled 18 individual cardiac rehabilitation centers in the Netherlands and collected data across 14,847 patients.

Basically, it's what they call a cluster-randomized trial, where they assign the different centers to different experimental conditions. Each of the 18 rehabilitation centers got four quarterly feedback reports in combination with educational outreach visits.

The researchers went into these centers once a quarter for a year and these centers were either given performance feedback on indicators concerning psychological results of cardiac rehabilitation or physical results of cardiac rehabilitation. As you said, Drew, all of the benchmark data came from the sample population across the 18 centers.

They developed these indicators with a group of cardiac rehabilitation professionals. They got all this data, worked out what they felt the performance should look like on these indicators, and then they assigned red, yellow, and green traffic light indicators to represent how an individual center's performance on each indicator ranked in relation to the overall population.

They sent these feedback reports through a web-based system and turned up for these visits once a quarter. They worked with the quality improvement team, which involved clinical professionals rather than a quality department, and they facilitated or observed a process where the teams developed their improvement plan. Then at each visit, the teams updated their plan accordingly. Drew, any comments on this 12-month study design?

Drew: I think it seems quite reasonable. Obviously, you have to do this real-world field study using clusters rather than individuals, because this is a team performance scenario. There's no point in singling out one individual when in fact most of the improvements have to be made at the level of the center, and most of the data exists at the level of the center rather than at the level of the individual. So, yeah, it's a very realistic scenario; this is how you would actually try to do a quality improvement.

In fact, I almost got the sense that the primary purpose was actually to try to improve quality and that the research was a secondary purpose, but still appropriately built into the improvement.

David: Yeah, Drew, and it appears as though they've wanted to do a laboratory study as well. So across these 18 centers, there was a population of 132 clinical practitioners who had been involved in at least two of these outreach visits. They asked these 132 people if they'd like to come into the lab (essentially) and participate in individual experiments that are based on the same study. Forty-one people said yup, I'm up for that. I'm going to come in and help you out with that.

What they wanted to do, Drew, which I really like, is reduce the impact of organizational and social factors on the participants' decision making around performance feedback. All of the constraints that might exist in group settings in the workplace, or with the organizational resources available, they wanted to try to take out by giving people hypothetical situations.

But it's interesting: these hypothetical situations were actually the exact 40 or 50 reports that had been provided back to the centers, just shuffled up and then given to the individuals in the lab. So the researchers could actually see the way that the performance feedback reports from the audits were dealt with inside an organization and the way those same reports were dealt with by individuals during the lab study.

Drew, I actually think that's a good way of trying to eliminate some of the variance from those other factors; otherwise people could just say the practitioners made those decisions because of their organizational setting.

Drew: I think particularly as a complement to the field study, this is a really interesting little experiment. I think we should also just point out exactly what it is that they're being asked to do here. They're getting these reports that have got indicators, some measure of safety or quality. It might be a process indicator, an indicator of what the center does, or it might be an outcome indicator, some particular statistic related to the patients. These have been benchmarked to show whether they tend to be low or high compared to other centers, and then the participants are asked to pick which of these indicators they're going to build into their hypothetical action plan. And they're asked to justify their choices.

If there's a low indicator showing up red that they decide not to build in, they're asked why they didn't want to include it. And they're also asked if they pick a high indicator, in other words, something that's already showing good performance and is green: why have they picked that one to try to improve?

David: And like one of our other experimental episodes, the one with the commercial sea fishing captains (it was one of the earlier episodes; I can't remember the number off the top of my head), they gave them three or four choices. If people selected something that was low, they were asked to tell us why. Is it: (a) it's not a relevant aspect of quality cardiac rehabilitation, (b) improvement is not feasible, (c) the indicator score is already high enough, or (d) other, and tell us what the other is. They did a similar set of standardized responses for if they selected something that was high.

Let's dive into the results. Basically, in the lab study, 75% of all indicators were incorporated into improvement plans. If you gave people feedback on 100 things, then in the lab they would put 75 of those things into an improvement plan. In the field, they would only put 39% of indicators into improvement plans. On their own, practitioners wanted to improve (let's say) twice as many things as they did in groups within an organizational setting. There isn't a lot of explanation in the paper for why this is the case, but any thoughts about why in organizations only half as many things might be put into a plan?

Drew: I think this one might just have to do with the ideal versus the possible, that in an experiment you design an ideal plan. In the real world, you are much more conscious of organizational constraints. If there is a disagreement, it's more likely to end up in compromise rather than just doing everything.

David: Yeah, and I think that might be a reason for the second result. For each 10% fall in the indicator score in the feedback (say from 80% to 70% to 60%), the lab participants showed a 54% increase in selection for improvement.

Whereas in the field, it resulted in only a 25% increase in selection for improvement. I think that probably matches the first result: not as many things go in, which means there's a higher threshold for things to make their way into the plan.

Drew: I think there's also an effect here that in the field, there's obviously other information which is going to affect the decision other than this particular report. In the lab, hypothetically, that report is all the information you have, whereas in the real world you've got other data that might not be included in it, so other reasons for including things or not including things.

David: Absolutely. The trend sort of continued with the benchmark data, so think about the red, yellow, and green traffic lights. Scores of (let's say) amber or red, or low or intermediate on the benchmark, were ignored about a third of the time, 34%, in the lab, but they were ignored almost half the time, 48%, in the field. Again, someone deciding not to do something with an amber or red performance indicator half the time within organizations suggests that there's a bit more going on here than just what the result looks like against the performance benchmarks.

Drew: It suggests that legitimately or not, there's some explaining away happening. Someone's being told that you are below average and the response is not I have to improve, but oh, that's because of some reason that we know about that means that we don't need to try to improve.

David: Yeah, and I think one of the interesting tangents I drew out of this, which is what you were just saying there, is that if you've got the auditor behaving a bit like the lab study, deciding what are all the things that I think need to improve as the auditor, then when it hits the auditee in the field with all of those other considerations, a tangential conclusion might be that the auditee is probably only going to agree with half the stuff that the auditor says.

Drew: The other possible experimental effect is just that the people in the lab had to justify it when they weren't including something, whereas the people in the field didn't have to put down a stark, no, I'm not doing this, and this is why. That changes the dynamics around the legitimacy of saying no, and the experiment might have led people to just go ahead and include things.

David: I don't recall us knowing whether the explanations came after the plan was completed, or whether people got the opportunity to change their plan, but that could well have been a factor. You're right, though, that when you remove those constraints, you're probably likely to be more ambitious about what you're trying to do.

The reasons were spread. In the lab, for the lack of intention to improve practices that were benchmarked as low, 30% of the explanations were that the score was already high enough: even if it's red compared to other centers, I consider it already good enough. Twenty-five percent were that improving the indicator was not feasible, which maps to the second gap: it doesn't matter whether I want to do it or not, we can't.

Improving the indicator lacked priority in 25% of cases, and in 15% it was not a relevant aspect of quality clinical care. So there's a fair spread across those types of reasons: we're good enough already, we can't do it, it's not a priority, or it's not even an important issue in the first place.

Drew: I think we can probably lump together the lack of priority and the lack of relevance as two ends of the same thing, adding up to almost half of the explanations here. One of them is, I just don't think this indicates quality at all; the other is, it does, but it's not really an important indicator of quality.

David: For the findings that were green or high against the benchmark, the reason people gave for putting them in their improvement plan anyway, 82% of the time, was that this is green, but it's an essential aspect of quality clinical care and it belongs in the improvement plan. That flies a little bit in the face of audits. Your auditee really might want to continue to focus on the things they're doing that are green, while all the audit tries to do is get them to respond to the reds and the oranges. What we're saying in those previous results is that half the things in those reds and oranges they're not interested in anyway, and they might want to take that 50% of resources and put it towards things that the auditor has said are already green.

Drew: This one really had me thinking about the purpose and intent of audits, even if they worked brilliantly. If someone asked, why do you do an audit? I'd say, to show you what to focus on: where you're currently doing well, where you're not doing well, and where to improve.

But in fact, it seems like, at least according to the people who are being audited, they're already focusing on certain things and not focusing on other things. The audit is telling them what they are focusing on versus what they're not focusing on, and they're sort of saying, yeah, well, that's exactly right. These things are green because I think they're important and I'm focusing on them. These things are red because they're not important and I'm not focusing on them. It's not helping them reprioritize. It's just confirming what they were already prioritizing.

David: Yeah, I like the way that you've thought about that and played that back. I guess then if the audit process tries to change that prioritization, it's probably failing at the first hurdle, the information-to-intention gap. If the audit is trying to direct all of that resource to improvement, including some of the resource that's currently supporting the reason certain indicators are green, then unless the reprioritization is agreed with, that's the 75% of cases where nothing happens.

Drew: It's almost like you need a pre-audit step to get people to agree with what's on the audit and to agree that it's important. Then when it comes back around, they might agree that it's important.

David: I think in practicing organizations, our listeners will probably say, but we do that. We do entry meetings. We agree on the scope of audits with the people who are being audited. But I think the extension of what you're saying there isn't just agreeing the scope. It might almost be a process step at the end, which is: here are all the observations from the audit, now how do we together make sense of this in terms of what needs to be done? Not just the audit saying this is red, this is green, this is the area where you need an action, and where's your management response. I think scope alignment happens a lot, but I'm not sure that sense-making alignment at the end happens at all.

Drew: To play devil's advocate, I think perhaps some auditors would say we don't really want to give people a chance to explain away the bad results. This is fair; this is the same audit we're giving to everyone. It would be a misuse of the audit process to let people have a whole bunch of reds and then let them decide the red doesn't matter or the red can be explained away. But let's move on to talk about the authors' conclusions here.

David: It would still be a fascinating conversation for an organization brave enough to talk like that, though.

Drew: Oh yes.

David: The authors conclude that there are four mechanisms that relate to this information-intention gap. Maybe I'll describe each one and we can have a bit of a discussion about each. The first is that, absolutely, measured clinical performance and benchmark comparisons do influence professionals' intentions to improve practice.

So if you give someone feedback against a set of indicators, and that feedback says their performance is low against those indicators, then that prompts change, not 100% of the time, as we've said, but it does influence change.

From the authors' point of view, benchmarks might trigger an intention to improve more than the underlying score does. Whether it's 50%, 60%, 70%, or 80% probably has less influence than whether it's red, yellow, or green compared with some standard or benchmark. I think they also highlight from other research that using colored traffic lights has previously been shown to increase the influence on decision-making; saying something is good or bad is not as influential as saying it's green or red.

Drew: Yes, and I think if you want to frame your audit as a general motivational tool to get people to agree to improve, then it doesn't really matter that it doesn't work 100% of the time or that people don't agree with 100% of the indicators.

The overall message here is that, yes, doing this process and giving people this feedback had an impact on which things they selected. So the process of doing the audit had an effect, and the process of doing the colored benchmarking had an effect on them picking particular things.

David: That's the audit process. The second finding is that professionals have their own view of what constitutes quality of care. I think more broadly, what we said earlier is you as a teacher, Drew, have your own view of what constitutes good teaching.

Professionals ignore somewhere between one-third and one-half of all benchmark comparisons. One-third to one-half of the feedback you provide people during an audit is going to be ignored, because they disagree with the benchmark, because they deem improvement unfeasible, or because they don't consider it an important aspect of their work.

There was more variation in this finding amongst individuals than teams. When you put teams together, they're probably not as different in their views as when you look at the differences between individuals, but that's just a group effect. I think we can say that would happen in pretty much anything.

Interestingly, in the field study, the teams selected process indicators rather than outcome indicators. There was quite a big difference; it was something like five times fewer outcome indicators in the field than in the lab. The authors conclude that it's probably because when you're in a real organization, you don't always feel like you can commit to improving the actual outcomes that the patients have, but you can commit to following a process inside your clinic. That reminds me, just as I'm talking about this now, of the question: is methodology a social defense?

Drew: I think this should be familiar to anyone who has sat through more than one performance evaluation at work. There is no way that you want to commit to a 10% increase in sales, or to have five papers accepted over the next year, or to have a drop in injuries by 20%. But you definitely want to commit to you will have that report written or you will perform at least three audits because these are things under your control and you want to commit to things that you can control, not things that may be affected by factors that are just outside your control.

David: The third finding, which we haven't spoken about yet in the results, was that improvement intentions tended to remain similar in subsequent audit and feedback iterations. Across the four action planning sessions over four quarters, something that was on the improvement plan one quarter before was 10 times more likely to be selected on the subsequent improvement plan.

The researchers concluded that either the actions weren't completed, or the indicator was still showing a problem because, even if the actions were completed, they didn't really address the causes of the indicated result. They concluded that, in their minds at least, over this 12-month study, quality improvement in organizations progresses really slowly.

Drew: To be honest, David, I think they're over-interpreting their own data there. All the study really shows is that people's ideas about what they should improve are reasonably stable. The authors seem to think this is caused by a lack of improvement as opposed to just consistent preferences.

I think the fact that people had so many things in the green that they still thought were important is direct evidence that that's what's going on. If someone thinks what's really important is that we have better team meetings, then just the fact that they've managed to have better team meetings isn't going to change the fact that they think that's important and something they should still be focusing on.

David: I hadn't thought of it like that, Drew, but now that we draw a few of those pieces of data together, I could agree with that. As a hypothesis, that could be what's happening there.

Drew: It could also be that they keep coming up with the same improvement topics because nothing's getting better. Certainly, lots of people have good intentions that they haven't managed to carry out, so they report the same thing at three quality meetings in a row: yes, I'm still planning to get around to improving that, I really wish I had time to do it.

David: The fourth conclusion that the paper draws is that professionals give priority to improving data quality. We also mentioned this in the findings. In the study, if there was not sufficient data, or a particular center didn't report that data, they just got a grey circle next to the indicator instead of a red, yellow, or green one.

That grey (I think) was more likely to end up in the action plan than a red indicator. They concluded that professionals really give priority to improving data quality. If there's no data, professionals really want to see the data before committing to whether or not they need to improve.

The authors made another statement here, which might be an over-analysis of that data, but they say this is a bit of a problem: if we're using the audit to work on data rather than using the audit to work on work outcomes, then surely it's diluting the impact of the audit.

Drew: I think this is exactly one of the big problems with audits. This is the demonstrated safety versus the safety of work problem, and that the easiest audit things to respond to and the most likely ones to be responded to are requests to provide evidence rather than requests to improve work processes or requests to improve work practices. That's exactly what this is saying. It's much easier to fix non-compliance with evidence than to fix the process or to fix the outcome. That's why these things get targeted.

David: We talked last week, in episode 36, or it might have been episode 35, about leading and lagging indicators. I've been involved in a lot of conversations in organizations that say, we're not going to focus on this particular issue yet until we have 12 months of data so we can baseline it and understand the problem. We think it's important, and we're prepared to wait a year until we have good data quality before we start doing something, so that we can show the improvement we've made. I'm not sure that's the best way to approach understanding your organization and making improvements.

Drew, if we move to our practical takeaways, there are a couple here. The first one is that audits and performance feedback, providing people with performance scores and benchmark comparisons, will influence individual practitioners' intentions to improve practice.

My comment at the start, that audits are mostly not that helpful, I suppose I have to rethink in the context that if you can provide performance feedback to an individual about their work against some kind of benchmark, it will have an impact on their intention to improve their practice.

Drew: One way to interpret that is to go back to the original question. If the process is breaking down, where is it breaking down? Is it breaking down at the intention to change? Is it breaking down at the capacity to change, or is it breaking down in whether the change actually causes a positive impact?

I think this study has shown that it's breaking down a little at the intention to change, but you can't explain the lack of effectiveness solely from that, because most audits in this process did result in an intention to improve, and an intention to improve that was guided by the audit.

David: I think going back to your teaching examples, if someone comes in and gives you a whole bunch of feedback on your teaching, then based on this study you'll possibly have an intention to improve maybe half of the things they tell you.

Drew: Yes, particularly the ones that I already thought were important and was already in the green for, and particularly the ones where they're really asking me for more evidence of my teaching rather than changes to my teaching.

David: Perfect. Secondly, if we move on from those intentions, there is substantial variation in these intentions, because professionals either disagree with the benchmarks, deem improvements unfeasible, or do not consider the indicator an important aspect of their work.

Drew: Which is really quite a reassuring practical takeaway: people do show judgment, and they're not going to do something just because an audit tells them to. In fact, that's less cynical than I usually am about audits. People are going to be selective in how they respond, based on what they genuinely think is important.

David: And if you think about the person being audited in this context as professional, experienced, and expert in their role, then they're going to have views about what quality and priorities look like for them, whether they are a cardiac rehabilitation clinician, a pilot, a safety professional, or a manager. If you bring them audit information and they don't agree with the criteria or the benchmark, they don't have the resources or ability to do anything about it, or they don't consider it important, then you're right, Drew, I think you'd expect a person to say, I've got no intention to do anything about this.

Drew: Which leads on to a third one, which could be a practical takeaway, or might not be, depending on how you read the available evidence. The authors certainly think that these disagreements impede the effectiveness of audit and feedback interventions. I actually don't think we know that for certain. There are two answers here. One of them is that the professionals are being selective, so the audits are in fact very effective: the professionals are screening out the dodgy recommendations and just implementing the good ones. That would be great. That would be increasing the effectiveness.

The other possibility is that this is genuine feedback that they should be responding to, but they're not accepting it, so the audit is in fact less effective. This particular study can't really tell us which it is: whether they're screening out the dodgy recommendations or the ones that they should be following.

David: I think that's a fascinating research question, because if we think about the safety audits that many of our listeners will be familiar with in their organizations, the auditor has the power, influence, and say over the auditee most of the time. A hundred percent of issues need to be taken up and responded to. It would be really interesting to do some research around whether audits are really effective when people only do half the things that the audit tells them to do.

Drew: Yeah, whether optional noncompliance would improve or harm the effectiveness of the audit would be a really good question. We could just do a study where we give people the choice to reject recommendations and see what difference that makes and how much gets implemented.

David: Drew, I'd love to know from our listeners if they have a process which lets the auditee say, thanks, I've heard you, but no, I'm not going to do anything about it. That would be interesting.

Another thing I'd be interested to hear from our listeners is whether they've got an audit process that controls for any of the aspects we've spoken about. Do auditees get to really agree with the criteria and benchmarks, beyond just the scoping of the audit: exactly what the measurements are going to be, what red, yellow, and green are going to look like, and do they agree with that?

The second would be, beyond closing meetings, how much agreement and sense-making discussion around the audit outcomes happens before the final conclusions are drawn from the audit? And anything else someone wants to share about how they've seen audits really be important or relevant to the improvement of safety management in their organization.

Drew: David, our question for today was how do audits influence the intentions to improve practice? What do you reckon the answer is?

David: The answer to that question from this study would be that providing performance feedback against a benchmark will influence practitioners' intentions to improve practice, but only if they agree with the benchmark, agree that the aspect is feasible to improve, and agree that the aspect is essential to the quality or safety of their work.

Drew: That's actually quite a good takeaway, David. That's it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. You can reach us on LinkedIn or send in comments, questions, or ideas for future episodes to feedback@safetyofwork.com.