The Safety of Work

Ep. 111 Are management walkarounds effective?

Episode Summary

In this episode, David and Drew discuss the role and impact of senior leadership safety visits and management walkarounds in safety management programs. The episode explores how management walkarounds can influence staff perception and the effectiveness of safety programs, and scrutinizes how the same general initiative can have different outcomes depending on its implementation.

Episode Notes

The research paper discussed is by Anita Tucker and Sara Singer, titled "The Effectiveness of Management-By-Walking-Around: A Randomized Field Study," published in Production and Operations Management.

 


Quotes:

"I've definitely lived and breathed this sort of a program a lot during my career." - David

"The effectiveness of management walkarounds depends on the resulting actions." - David

"The worst thing you can do is spend lots of time deciding what is a high-value problem." - Drew

"Having the senior manager allocated really means that something serious has been done about it." - Drew

"The individual who walks around with the leader and talks about safety with the leader, thinks a lot better about the organization." - David

 

Resources:

Link to the Paper

The Safety of Work Podcast

The Safety of Work on LinkedIn

feedback@safetyofwork.com

Episode Transcription

David: You're listening to The Safety of Work podcast, Episode 111. Today we are asking the question, are management walkarounds effective? Let's get started. 

Hey, everybody. My name is David Provan. I'm here with Drew Rae, and we're from the Safety Science Innovation Lab at Griffith University in Australia. Welcome to The Safety of Work podcast. In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it. 

Drew, over 100 episodes in, and I was a little bit surprised when you raised this topic that we haven't done an episode on senior leadership safety visits to site before. 

Drew: Looking back, we haven't actually had that much about senior leadership at all, David. This is one of those things that gets done a lot but is kind of overlooked, or not particularly interesting to researchers. Our friend Ben Hutchinson, who likes to do a lot of short reviews of papers on LinkedIn, highlighted this one about the same time it came to my inbox. When you notice something a couple of times, maybe other people are interested in it and we should talk about it. 

David: I guess it's a staple of a lot of safety management programs around leadership commitment, doing safety walks. These things have been done for decades and decades in the safety space, and in the quality management space as well. It was nice to have a bit of a dig into the research around their impact or otherwise. 

Drew: Before we get into the research, David, this is not something I've actually ever had direct experience of. I've never been asked to go with a manager on a walkaround. I've never, at least to my knowledge, had a manager do an explicit walkaround. I've had leaders who show up to things. Universities in particular have a lot of events, and people notice the heads of school who show up at the events and the heads of school who don't. 

This idea of having an organizational shop floor and having management come along and walk around is not something I'm used to. Maybe it's something that you could talk about from your personal experience. 

David: Lots and lots of these walks, from being on the receiving end of executives visiting sites early in my career, to facilitating entire programs for executives and non-executives to move around a company's operational sites, to personally chaperoning and facilitating these management walkarounds. I have definitely lived and breathed this sort of program a lot during my career. I can share a few of those experiences as we go through the episode and the research itself. 

Drew: Before we go right into the paper, David, could you give us just an outline of what a typical management safety walk looks like? 

David: I guess no two would be the same, and quite a lot of industries and organizations at the moment, I think, are doing quite a lot of work on reshaping some of these visits and trying to make them more learning-focused than compliance-focused. There's quite a bit of work being done in this space. 

Typically, an executive walkaround would be about a couple of things. One would be trying to demonstrate to the organization that safety is important. The very fact that the executive is making time and effort to get to site, and is specifically looking at safety and talking about safety, is designed to give the impression to the organization that safety is really important, and that it's really important to the senior leaders. 

Then depending on the organization's program, it might involve a site inspection, literally looking at hazards, risks, and their controls. It will typically involve interactions with workers, so safety conversations about the work they are doing, potentially the risks and hazards involved with their work, how they are controlling it, sometimes ideas for things that could be done to improve safety. It's typically a combination of both compliance and conversation, if you want to look at it like that. Not to say that I think that's the way it should be, but typically that's what an organizational program would look like. 

Drew: Thanks, David. This paper is by Anita Tucker and Sara Singer. Both authors are at Harvard, in the Business School and in Public Health. Professor Tucker has a pretty solid body of work on healthcare management: lots of papers with the word empirical in the title, lots of field work. Dr. Singer does more survey-style research. A good pairing for a paper that combines surveys and qualitative field experience. 

The title is The Effectiveness of Management-By-Walking-Around: A Randomized Field Study. The journal, oddly enough, is Production and Operations Management. That's not a journal I am personally familiar with. I thought I would mention a little bit about what we do when we don't know a journal. 

There's a particular site that I tend to use called SCImago, where you can just type in the name of a journal. The first thing is, if it shows up on the site at all, that gives you some indication that it's a real journal, not some fake journal. Then it gives you an idea about where the journal ranks within different fields, whether it's in the first quartile, second quartile, and so forth; how often the journal gets cited, how often it cites itself, how much it gets cited externally, and how that's changed over time. It's a good site for giving a quick indication that a journal is reputable, on the up and up, and respected in its field.

The other thing to look for with a journal is whether it's where you'd expect the paper to be published. This is a little bit of a weird choice: a paper about hospitals in Production and Operations Management. You're going to ask why it wasn't published in a healthcare management journal, but that could just be because they thought the work was really good, so they went for a higher-status journal. The other possibility is that they tried to publish it in a healthcare journal and it got rejected. You can never really know. 

In this case, because both authors have a fair bit of credibility, you wouldn't see the journal as a red flag. But David, there are a few odd details about the paper itself. It was published in 2014, but the study that they're talking about was done in 2005, the best part of a decade before the paper was published. The paper also cites work from 2008, supposedly as inspiration for how they designed the study, which is obviously impossible.

A few weird details with the timing and with the age of the work relative to when it was published. Nothing that should have you too worried, but the sort of thing that would lead you not to take the raw statistics in the paper at face value. Just look for the overall vibe and principles rather than treating it as the absolute highest quality of evidence. Despite the fact that they went to a lot of effort with things like randomization, that's sort of counteracted by some of these other weird things. 

David: Yeah, Drew, it's a little bit strange unless it's sat around as part of someone's PhD and never actually got published and then they came back around to the data six or seven years later. I was also thinking it could have been some sort of industry initiative, but with that level of randomization, it was less likely to be led by industry. In any case, we've got a bunch of data and some sort of interesting things to discuss. Do you want to talk a little bit about what they did in the study?

Drew: They started with 94 hospitals and they gave them all a safety climate survey. Part of giving them the survey was the hospitals agreeing to take part in an unspecified safety initiative to which they would be randomly assigned to either do the initiative or not do the initiative. 

Now if you're wondering, why would a hospital sort of sign up to do a safety initiative that they didn't even know about, this was part of some sort of national accreditation scheme where hospitals had a box that they needed to tick and this was research that was offering a way to tick the box. It’s a nice way for researchers to get a foot in the door and fill an administrative need that the hospitals had. 

For the 24 hospitals that were randomly selected, they put in place this particular walkaround program. David, as you said, all walkarounds are different so I think the details of this matter. This was very much a new view inspired walkaround scheme. The idea was not compliance. The idea was giving senior managers exposure to frontline conditions. 

The idea was that there would be, firstly, what they call work system visits, which were the very senior management visiting the frontline areas with the manager for that area, just to see what was going on. Then they'd have these broader safety forums to discuss the ideas that were generated at the system visits with a larger group of people. Their explanation for this is that the research they were building on seems to suggest that the frontline visit alone affects the people who are visited, but that's about it.

If you're looking to try to measure the effect, then you've got to have some way of spreading the effect beyond just that one group of people that you visited. Hence the safety forums that consulted a larger group of people. David, do you want to jump in with any thoughts on that?

David: It's a little bit like what many or some organizations might be doing today with a combination of work insights and learning teams. Basically, what they've got here is these senior leaders going and learning about work with individuals and small teams, and then bringing larger groups of people together in a learning team–style conversation to discuss challenges and opportunities and things like that. I think this paper and this podcast become very relevant to organizations who are working with those types of programs today.

Drew: We might talk about that a bit more when we get to the conclusions, but I definitely was thinking of the parallels to learning teams as well. After the safety forums, there was a debrief meeting with the senior managers, the work area managers, and the safety department to basically process the data that they'd got, to discuss the insights gained, to decide on some actions, and allocate responsibility for those actions.

The researchers called this set of steps, the visits, forums, and debriefs, one cycle, and one cycle takes three months. The idea was that you would repeat the cycle over and over for the 18 months of the program. Each hospital had four main areas they were targeting. If you did this thoroughly, a lot of the areas would get visited once, and some of the areas would get visited twice.

In fact, implementation was pretty patchy. There were 24 hospitals to start with, and four of them dropped out. Of the others, quite a few apparently only did two cycles. You would've expected six cycles over the 18 months. The average was four cycles, which means quite a few did fewer than four, with only some of them managing the full program.

Whenever you have a slightly patchy implementation, you're going to have reduced overall effectiveness. It doesn't necessarily mean the thing doesn't work if the people didn't do it in the first place, but it is an indication that you can't just put this sort of program in. You have to actually do the follow-up and keep regularly doing it in order to have an effect.

Then, a couple of things they measured. They focused on the idea of identifying and solving problems. They counted the number of problems that were identified. They weighted those problems based on frequency and severity, and looked at whether they were low-hanging fruit or difficult to solve. They looked at who each problem was assigned to and whether each problem was resolved. 

There's a fair bit of detail in the paper about their various ways of checking these variables, doing things like not taking it at face value that the problems were actually solved. They got a bunch of external nurses in to read about each of the solutions and judge how thorough the solutions were, compared that to the self-assessment, and decided that they had a pretty good understanding of whether the problems were resolved or not. 

Their main big measure was a thing called staff perception of performance. This is different from safety climate. It's an individual measure that gets averaged as to whether each staff member thinks that overall the hospital and the management are doing a good job of looking after safety. David, is that a fair summary? Anything else you want to say about the method? 

David: No. I think just that, whether it's research or an organizational safety program, patchy implementation creates a bit of a problem. It really does weaken what you're trying to achieve. Something like six cycles over 18 months, which is six visits, six meetings, six debriefs, is already a very light-touch program. Then to dilute it even further to only four or two cycles, I think that's an important lesson. If you're going to do something in your organization, or you're going to do a research project, try and get some good discipline around actually doing what you're trying to measure. 

Drew: I also think it's a genuine challenge for researchers because in order to measure an intervention, you've got to be precise about what the intervention is. But the more precise you are, the smaller it is and the less overall effect it has. If it then gets weakened because not all of your partners actually do it properly, then you could end up with the world's best intervention but it still shows either no effect or just a marginal effect.

David: Let's talk about the results then, Drew, and maybe we'll bring in some other experiences and a few other bits of research as well. 

Drew: This is a fairly statistics-heavy paper, and I'm skimming over some of the statistics they did. They did some fairly high-quality checks, so every time they came out with an answer, they did a couple of other things to check where that answer was coming from and to rule out alternate explanations. Overall, the headline is, if you average this out and compare the 24 hospitals that had the intervention to the... 

David: 70 others. 

Drew: Yeah, the 70 others. There was no significant change in staff perception of performance. In terms of raw numbers, it went down slightly, but that wasn't even statistically significant. On average, this program did nothing. That's concealing a little bit of what's going on, though. 

The first thing we can look at is comparing the same types of work areas. Different parts of the hospitals are obviously quite different, but when you compare emergency rooms to emergency rooms between the control and intervention groups, then you see that there is a bit of a difference, and the difference is negative. On average, it actually makes staff perception worse. But that again hides what's going on once you look at the individual hospitals. This average negative effect is actually made up of some hospitals doing really well and getting better, and some hospitals getting worse. 

Because of that, I don't think you can really take the average and say management walkarounds are bad. Because we've had some successes and some failures, you really need to look at what makes them successful and what makes them unsuccessful. 

The first interesting sub-detail is that the work areas that went down the most, that went even more negative, were ones that were already pretty bad. We've got dissatisfied staff, and this made them even more dissatisfied. 

That's statistically interesting because normally you look for a thing called regression to the mean, where you would expect the highest performers to tend down towards average and the lowest performers to tend up towards average, just because there's not much room for the low ones to move down and not much room for the high ones to move up. On average, you expect them to move towards the middle. To see the low ones getting even worse, that's interesting.

David: One of the questions that I had reading this paper, and we never quite know, is the extent to which these visits across the 20 hospitals were conducted in a similar kind of way. Most organizations would spend a lot of time training their leaders in how to do these visits: what questions to ask, what questions not to ask, how to interact, and how to send messages. 

I was kind of wondering, when I read that result, whether for the hospitals that had the worst perception of performance, this program potentially contributed to their perception getting even worse. There's a saying that I like to use, which is don't make bad leadership visible. It's almost better for an organization to suspect that their senior leaders are bad than to make them visible and remove all doubt. 

There are a few things in here where, just from my own experience in organizations, I'm wondering exactly how the leaders actually performed the visits, how consistent they were, and whether any training was provided.

Drew: There's not a lot of information about that in the paper. Even if there was, even if they gave us full details of the workshops run to launch the program, and I expect there was some of that going on, you still don't know how the individual leaders took that training and how they reacted. We do have some hints a little bit later on, when it gets to the problem solving, about how some of the leaders might have been acting, but we'll get to that. 

David: Maybe, Drew, before we talk about the problem-solving, if we're going to go into that: it was my turn this week, so I went off and dug around a little bit in this space, because like I said at the start, I was a little bit surprised that we hadn't tackled leadership visits, and it is a very popular practice in organizations. 

I did a little bit of my own digging, and I came across another quite large study in healthcare. It was only a single hospital, but they randomized executive walkarounds across more than 20 units within the hospital and found a very similar outcome to this one: no average effect on climate at all. One of the things the researchers were expecting was that if leaders were doing these visits, then people would be talking about the visits in the lunchroom, and there'd be this kind of spillover effect into the broader climate. It seems that didn't happen, and there was no movement at all on average. 

However, when they looked at the individuals who were directly involved in the walks themselves, there was quite a significant change in their perception of the climate in the organization. It's almost like the individual who walks around with the leader and talks about safety with the leader thinks a lot better of the organization and its approach to safety, but everyone else doesn't get that experience, and it doesn't really change their perception at all.

Drew: Thanks for that, David. I think that says a lot about how these sorts of things can be experienced quite differently depending on which side you're on. The leader is spending two hours walking around. Each person may be encountering the leader for 30 seconds, and that makes a big difference. The leader thinks they've spent two hours on safety. The person thinks the leader spent 30 seconds on safety. 

David: Yeah, Drew. I did have this personal experience in an oil and gas organization where a big part of the program was leadership visits at all levels of management and executive. We had really great outcomes, I think—and this is all anecdotal—on our land-based drilling rigs because the rig only had about 12 or 15 people on the rig. When a leader went there, they basically spent a lot of time speaking to every single person.

But then a leadership visit to a processing facility with maybe 500 or 600 people, 90% of the people wouldn't even ever know that the executive had actually been there. I think Drew, that's just another reflection for people about a program that works in one workplace may not actually work in a different type of workplace when the context is different.

Drew: The researchers broke the problems down into basically two categories: high-value, difficult problems, and low-value, easy-to-solve problems. Their hypotheses were that solving both of these types of problems would be good, but they seem to have been expecting that solving the high-value problems would be better. 

In fact, solving the high-value problems didn't have as much effect as solving the low-value problems. The biggest improvements in staff perception came when the leaders were able to deal with a large percentage of the small problems that were raised. David, your reaction to that? 

David: One of the reflections is that high-value and low-value is a very subjective assessment. I know the frustration that frontline teams deal with when they just don't have, for instance, enough pens to do their job. That is seen to be a low-value problem, but it's really, really annoying for them at a very high frequency.

I can imagine that for frontline teams, getting what may be seen as a low-value problem, but what is for them a real annoyance, actually dealt with would potentially really change their perception of the organization and of management.

Drew: Yeah, there's another little detail here about prioritization, which we'll come back to, that I also think was contributing to this. 

The second sort of thing in that category was that having a senior manager assigned responsibility directly for the problems was associated with quite a big increase in staff perception. They measured what percentage was allocated directly to a senior manager versus, for example, passed off onto the safety team or the area manager to deal with. 

One standard deviation in the percentage allocated to senior managers made quite a large difference in the shift in staff satisfaction. We don't know exactly what's going on, but the authors suggest that having the senior manager allocated probably means that something serious has been done about it. The allocation itself wouldn't necessarily be visible to staff, so the effect is likely coming through the problem actually being dealt with effectively. 

I don't think there's anything else on the stats that I had on my list, David. Any stats that you'd noticed that you wanted to talk about?

David: No, Drew. I think those were the two things of interest: how these leadership visits affect perception, particularly in relation to getting issues raised and addressed. 

Let's talk a little bit about, I guess some other interesting patterns in the data here as well before we talk about some practical takeaways. Do you want to talk about some of these nuanced results? 

Drew: Yeah. Everything so far has been based on the average, but there was this fairly clear underlying pattern of high performers and low performers. They used the qualitative data to try to identify some differences between these groups. The high-performing groups, the ones where staff perception of performance went up considerably during the study, were ones where the activity identified meaningful problems and where managers took those problems seriously.

One of the examples they point to was a hospital that identified a medication room that was so small only one person could work in there at once, which considerably slowed down getting medicine processed out to patients, making life hard for everyone. As a result of the management visit to that site, seeing what it was like, management just moved the whole medication operation to a different room in the hospital that was bigger and easier to lay out. There was a staff comment along the lines of: management doesn't necessarily do what we expect in response to the issues, but they do something. 

David: And the worst, Drew? 

Drew: The worst performers read like a cartoon stereotype of bad management trying to improve things. The first thing the researchers point out is that they spent a lot of effort trying to prioritize, at the expense of actually doing anything. I think this feeds directly into the high-value problems versus low-value problems. The worst thing you can do is spend lots of time deciding what is a high-value problem and what's a low-value problem. 

For one group, the researchers directly observed the safety forum, and they said management spent so much time negotiating with staff about how to prioritize everything that they used up the entire meeting and never actually got to talking about potential solutions.

David, I don't know about you, but I was just thinking about those risk assessments where you do hazard identification and then spend hours arguing about whether each hazard is high, medium, or low, minor or severe, likely or unlikely. It doesn't necessarily lead to better treatment of any of the hazards, particularly not if you've burnt up your whole hazard meeting talking about risk assessment and never actually got to coming up with controls. 

David: Yeah, I think so. Drew, we briefly mentioned the overlap with the current popular practice of learning teams, and we're definitely hearing some stories about how those are being done effectively and not effectively. Spending all your time talking about the problems, and then spending all your time trying to evaluate and prioritize the problems, doesn't actually get things improved, risks lowered, or things fixed.

Drew: Yeah. The next thing was that there were two work areas that resolved no problems at all. They'd done the work. Management had come, they'd seen the workers doing the work, they'd talked with the workers about problems they had, they'd had these safety forums to raise problems, they'd made lists of problems that needed to be solved, and then none of those had been solved. It's easy to see why that's not going to do wonders for staff perception. 

A third area, we don't technically know about. When the researchers asked them how many problems they'd solved, they didn't have that information, and I don't know if that's better or worse: not even keeping track of whether you'd done anything about the problems.

Then yet another area had put solutions in place, but almost all of their solutions were educating staff. Staff have told management that there's a problem, and management's response is: we've got to tell you why you shouldn't be complaining about that, or tell you why you should be doing things differently. A bit of a sign that the listening exercise wasn't quite working there. 

David: That's a little bit like what I can imagine in some of these contexts, where maybe the pharmacy team raises all these issues about preparing medication and potential medication errors, and then the response is just: we'd better remind people of the procedure, remind people of the rules, and make sure that they don't make those mistakes. I had a few scenarios running through my head when I started reading these types of conclusions. 

Drew: The final one I think is quite interesting, and I could really see this happening in some organizations, particularly with things like learning teams. They'd followed this process and identified a bunch of solutions, and then management decided that they didn't actually have the authority to just go ahead and implement the solutions, because the hospital had other processes for making healthcare improvements, evaluating them, and approving them. They decided that they had to pass off all of the things they'd produced as mere suggestions to the patient safety committee for consideration. 

Similar things happened in a couple of different places, where they decided that they needed external approval for the changes they'd thought of and couldn't go ahead and do them. Again, I don't imagine that sat well when reporting back to staff: we've heard you, we've listened to you, we decided to do it, and then we decided we couldn't do it. 

David: I'm assuming that a number of the listeners are going to be nodding along as they listen to this now, because I have definitely seen all of these types of procrastination and abdication inside organizations: oh no, we can't do this, this has to go to this committee, or no, this has to go over here. Or again, in a learning team, no one has the authority to act on anything that comes up. 

Clearly in this study you see that, in some cases, and maybe it is because of some of these common patterns, the activity actually makes the overall perception of performance worse. You're raising these problems, you're hearing about everyone else's problems, you're talking a lot about the problems, and then nothing's happening. Rather than just suspecting that management is doing nothing, now you actually know that management's not doing anything.

Drew: I don't think the problem is necessarily at the implementation end. I think this is particularly true of things like learning teams: you have a small group of people that have been consulted. It is appropriate to stop and ask whether what this small group of people is telling me is what everyone wants, and whether it will cause other problems. That's why organizations have processes for not just randomly implementing changes. 

The trouble is that if you've got one process that is designed to generate expectations of change, and another process that's designed to slow down and check changes, then no one's going to be happy. You need to generate possible changes at a rate at which you can properly evaluate them, approve them, and make sure you implement some. 

David: I do think that organizations do have a process for randomly implementing systemic changes based on individual situations, which is an investigation process.

Drew: Yeah, that does tend to happen too. 

David: Drew, let's talk about some practical takeaways then. Do you want to get us started? 

Drew: Okay. My first one is just the fairly obvious one, that the same general initiative can have very different effectiveness depending on how it's implemented and who's implementing it. That's something we need to be careful of when we generalize about safety practices, and something we might be guilty of a fair bit as researchers: if something works in one place and makes things worse in another place, then on average it looks like it doesn't work. But that doesn't mean it's a meaningless activity. It is both good and bad. It's not nothing. 

David: Yeah, I think that's a great point, particularly for our listeners who work in very large organizations with different types of activities. Some things may not look at an organizational level like they're having much impact, but there may be places where they really, really are. That's one of my fears with the safety clutter work that we've done, Drew: that organizations make these really big average-based decisions to stop something or change something without actually understanding how it's working in the different parts of their business.

Drew: Yeah. In fact, that's a point we made in the safety clutter paper: clutter often arises through generalization. Something that is good and useful somewhere gets spread everywhere, where it's less useful and less effective. If you just throw everything out, then you miss the fact that it was originally a good idea that was helping somewhere.

David: Yeah. 

Drew: Second takeaway is that when we do any sort of consultation effort, whether it's forums, walkarounds, reporting systems, or learning teams, what do we judge those on? Do we judge them on their success at consulting or do we judge them on their success at generating actions that get taken?

I started off actually making that not a question, but a statement. Then I realized that no, it genuinely is a question that we should keep asking ourselves, particularly when we put in things like learning teams. Is our goal to consult or is our goal to improve? And just make sure that we measure it on what we are doing. 

I personally don't think that consultation by itself is actually a virtue, and this paper sort of generates some of the evidence behind that. Consultation without action actually can make people pretty annoyed. Your mileage may vary on that one. 

David: Yeah, Drew, and we've said this a lot on this podcast. I guess the underlying premise for exactly what we're doing here, safety work versus the safety of work, is to understand the mechanism that we're trying to impact with this type of safety work practice. Because that's what it is. It's a safety work practice, doing these leadership visits for safety. There are a lot of different mechanisms that we could be trying to impact, and they would change how we do the activity and how we actually measure its effectiveness, like you say.

Leadership visits could be about a whole bunch of things. It could be about actually wanting leaders to understand work-as-done better, so they make better leadership decisions because they've got greater operational context for their business and can actually make good organizational decisions. Or it could be about checking the presence of critical risk controls, where the visit is structured around verifying fatality risk controls, and then we obviously need to measure and evaluate that. Or it could be about demonstrating care and concern for the workforce, where climate might be a good measure of effectiveness.

There might be three, four, five, six different ways that a leadership visit could be designed to have an impact, but they are all very different processes and potentially all have very different ways to evaluate them. I think, Drew, with the leadership visit programs I see in organizations, the organizations rarely actually give thought to what they're trying to influence.

Drew: Yeah, thanks David. That was well said. The last two takeaways are basically for people who are thinking of starting, or currently have, a management walkaround program or similar activity. The first one is just that the act of assigning responsibility for actions directly to senior managers seems to make a positive difference. 

If you've got a program, give some thought to adding a simple instruction: when you're generating actions from these things, think about who each action is being assigned to, and try to have a higher percentage assigned to a senior manager rather than to the safety team or to someone lower down in the organization.

The second one is that prioritization, on the other hand, really doesn't matter. Spending effort and time on trying to prioritize the problems doesn't improve the effectiveness. A problem is a problem. Fixing it makes things better, but spending effort on prioritization takes time away. 

You might think we are being really efficient because we are focusing our attention on the most important problems. But in fact, you're being inefficient because you're focusing your attention on ranking the problems instead of on what you're trying to do. 

David: In that process, like we've mentioned, everyone's going to have a different view on what's important and what's less important to them. This seems to be a numbers game: just fix as many things as you can, and employees and your workforce will feel better about the organization and the management team as a result. Then again, maybe we didn't need all the research to work that out, because it's probably a pretty common pattern: if my problem gets fixed, then I'm going to feel better about my organization and my management.

Drew: I guess what's a bit counterintuitive there is that people might be disinclined to solve what they think of as petty problems at the expense of more serious problems. But I think part of making staff happy, and part of having an effective organization, is recognizing that if someone cares enough about something to complain about it, then the worst thing you can do is deprioritize that complaint and say, oh no, it doesn't matter to me. It was the most important thing to them, so the way to make them happy is to deal with the thing that they've said is important. 

David: I think we see that, and Drew, I'd be curious as to whether this holds up in the industrial relations research, because it's probably a very effective industrial relations strategy for organizations and trade unions, for those listeners in parts of the world that have big collective bargaining and union processes: treat every issue that the union raises as a really important issue and fix it. I guess it works effectively as an IR strategy, and it seems to work effectively as an employee relations strategy as well. 

Drew: David, the question we asked this week was, are management walkarounds effective? 

David: I was just about to jump in and ask you the question, Drew. Your answer here at the end of our notes is: sometimes yes, sometimes no. It depends on the resulting actions. I think that's a really great way to finish this podcast. If you're just walking around for the sake of walking around, you may as well not, but if you're walking around, finding problems, and getting those problems fixed, then it probably is making a difference.

That's it for this week, Drew. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com.