On this episode of Safety of Work, we discuss how you can know if your safety team is a positive influence on your safety climate.
“We heavily rely on and almost solely rely on line managers in the organization to influence, create change and affect the organizational safety climate.”
“It’s really tempting to reduce safety to measurable indicators…”
“I think there are some things that we can, practically, learn from this [study].”
Nielsen, K. J. (2014). Improving safety culture through the health and safety organization: A case study. Journal of Safety Research, 48, 7-17.
Drew: This is the Safety of Work podcast, episode three. The question for this episode is, “How do you know if your safety team is a positive influence on your safety climate?”
Hey everybody my name is Drew Rae and I'm here with David Provan. We are from the Safety Science Innovation Lab at Griffith University in Australia. Welcome to the Safety of Work podcast. If this is your first time listening, the podcast is produced every week and the show notes can be found at safetyofwork.com.
In each episode, we ask an important question in relation to the safety of work or the work of safety and examine the evidence surrounding it. David, what's today's question?
David: Drew, today's question is, “How do you know if your safety team is a positive influence on your safety climate?” As safety professionals, we often talk a lot about the role that leaders play in shaping safety climate, but as we'll see in this episode, we rarely talk about what impact we personally have as safety practitioners on the safety climate within our organization. Therefore, we actually don't know very much.
There are fewer than a handful of studies that I'm aware of that really try to explore this relationship between safety professionals and organizational safety climate. However, I think we would all agree that there should be some sort of impact. You would expect that the safety team and the safety practitioners would be impacting the organizational safety climate, and it would be really useful for us to know how that happens.
Drew: There's something that I know from personal experience as a safety worker, but it's not something that crops up a lot in the research. Most of the examples I know of are pretty bad. There's one organization that’s going to remain nameless because we’re on a public podcast. They decided that safety climate was really important. They had learned about it. They knew that it mattered. Their solution was to produce this glossy booklet called What Is Safety Climate and give it out to all of their staff, as if explaining what safety climate was would somehow magically create a better safety climate.
Even more cynically, I think safety climate is a tool that safety consultants use to help safety professionals look good. They come along, they measure the organization, they do something to help out, they measure again, and the measurements almost always go up because that's what safety climate does. People then say, “Look at where we are.” But putting the measurement aside, if you treat measuring safety climate as a separate thing from managing safety climate, then safety climate is something that matters. It's something that safety professionals have a real and important role in looking after.
We just don’t know very much about it. What we've got is a lot of practical experience, some red flags of what to do or not to do, but it's all based on personal experience. I'm glad that in this episode, we're looking at a sincere effort to look at and improve how a safety team is acting and interacting.
I've got some concerns about the paper that you pitched, David, but it's a rare example of at least taking seriously the idea that safety influence isn't just about doing risk analysis or cheerleading for safety, and that it comes from improving the way your safety team interacts with the rest of the organization.
David: Yeah, thanks Drew. I'm looking forward to your thoughts on why the research was conducted, but today what we will do is we'll overview our research paper titled Improving Safety Culture through the Health and Safety Organization: A Case Study. This research was conducted over a number of years, around 2011-2012 by Kent Nielsen in Denmark and it was published in 2014 in the Journal of Safety Research.
Dr. Kent Nielsen has a PhD in psychology. At the time of the study, he was the deputy head of occupational medicine at a regional hospital in Denmark. He describes himself as actively engaged in safety intervention research which, from my perspective, is really great: a fellow researcher who is genuinely aiming to understand, create, and learn from intentional change projects within organizations, and then provide the outcomes of that work in the public space.
Drew: When I looked at the other papers that Dr. Nielsen's produced, he's basically built his whole body of work around trying to evaluate interventions in real organizations rather than just producing research papers that tell people what to do. That's not easy research. You can think about what would be the ideal way to do that as a researcher, but that ideal gives way to practical considerations: what sort of access the company is going to let you have, what sort of data they will let you collect.
For that reason, a lot of researchers just shy away from it as too hard, or at least too hard to do well. In the opening sentence of his paper, Nielsen points out that there is a real lack of culture change intervention studies. What he means is a lack of projects where they do a baseline measurement, then target a specific change, and then measure afterwards. You'd think this would be the easiest type of intervention study to do and that there'd be lots of them, given how much we've talked about the importance of safety climate and safety culture for at least 30 years, but that's not really the case. There's surprisingly little of this work done.
The study uses safety climate as its main measure. It calls itself a safety culture study, but it measures safety climate. We’re not going to get into the difference and the relationship between the two here, or we might never finish the podcast. Let's just assume for the moment that they're closely related.
David: Nielsen outlined the historical safety climate research as part of his literature review, going back to Zohar, and that research has heavily linked safety climate to supervisory practices. I think we have this general assumption in our organizations that safety climate and safety climate change is [...].
We heavily rely on, and almost solely rely on, line managers in the organization to influence, create change, and affect the organizational safety climate. Kent understood, I suppose, the social psychology literature a little bit more broadly, and understood that climate was not just about supervisors and workers, or managers and workers; safety climate was about interaction between everyone in the organization.
Therefore, he formed the hypothesis that the safety team should be able to have a somewhat direct and measurable impact on the organizational safety climate. This makes sense, because we know that the safety team in an organization has some assumed or inherent power and influence over the way the organization thinks about, understands, and manages safety. He studied them because it’s really important for us to try to understand how the way that the safety team behaves and interacts with others in the organization has an impact on safety climate and, I suppose therefore, indirectly on safety outcomes in the organization.
Before we get to the design and findings of the research, this immediately raised a point for me. I suppose when I read it, I was really keen to talk about this because like I said in the introduction, we do spend so much time thinking about what others need to do to shape safety in the organization. I thought this was a great opportunity for us to think about the things that we do to shape it ourselves in our roles every day. Drew, how about you tell us about the methodology and your views on how it was done?
Drew: This is the point where I need to get a little bit critical. This study calls itself case study research, action research, and a quasi-experiment, and those are three totally different things with totally different methods and standards of evidence. It doesn't really fit any of those labels properly. What it is, is a before and after study. Sometimes in the jargon we call that a pre- and post-evaluation. You measure, you do something, you measure afterwards.
The goal of the research, so the research question, was to see whether the safety team could improve safety culture by creating more and better safety interactions with the shop floor. It’s a two-step process. Step one, you create the better interactions, and then step two, those interactions lead to safety improvements. The idea is that having an active and visible safety team is going to generally lift the sense that safety matters, that it’s important, that people genuinely care about safety, and that's going to create a better safety culture for getting things done.
They did the study in a Danish industrial plant, and the study was prompted by the fact that the plant had some fairly serious problems. They definitely needed to do something to improve safety. They did the study over a good amount of time; you don't expect a safety culture intervention to lead to change in two weeks or two months. They did the before and after measurements 23 months apart and had a whole package of things that they measured. They had a safety climate questionnaire with more than 270 people answering it, a small number of focus groups, a lot of document analysis, and the researcher sat in on a lot of the safety committee meetings to see what was going on. They also collected data about safety interactions. They asked people who they were talking to and what they were talking about, and they treated this fairly quantitatively, basically just trying to count the number of times that people had conversations that involved safety as one of the topics. There are some good things here and there are some real problems.
The first good thing is that they measured a baseline before they started trying to improve things. That's really important, because otherwise, there is nothing to compare it to. If you don't measure beforehand, then you don’t know whether your scores afterwards are any good or not.
The second really good thing is that they measured the mechanism. They were trying to improve safety by first improving how active their safety team was and how interactive everyone was about safety, and they made sure to measure those things, not just the inputs and outputs of safety.
Spoiler alert: it’s really good that they had those measurements, because with measurements like injury rate, you don't really expect to see a change from this sort of study, and that's what they found here. Having that baseline measurement and measuring the mechanism is important not just for research work, but for any sort of program evaluation.
Dave, you’ve spent time in organizations trying to improve safety. How common is it to remember to actually get baseline measurements and to have defined before and after measurements when you try to make changes?
David: I think it's very rare in organizations to do baseline measurement. I think we see it a little bit in our work in human resources or in employee engagement. When people are trying to understand a problem and trying to get a baseline, maybe the first time companies do their safety climate or safety culture work, they could treat that as a baseline. But having the discipline of designing a program where an organization would intentionally say, “Over the next two years, we want to achieve X. To do that, this is what we are going to measure now, so we know in two or three years time that we have changed, or improved, or done what we want,” is in my experience quite rare.
Drew: I think one of the things that makes this hard, which they got right here, is that it means delaying the start of the intervention. For the first month, they didn't actually make any change. That's why the measurements are over 23 months rather than a nice round two years: they spent a month just collecting the data for that baseline. The new CEO was itching to improve safety, but they took the time to measure before they started intervening so that they knew whether the intervention was working or not.
If this was a real quasi-experiment though, they would have had a control group that didn't receive the intervention. That's the other big thing that you need in an experiment to know whether the intervention is making a difference. That's one of the downsides of getting too quantitative: you focus on things that you can count, and in order to understand things that you can count, you need to have something to compare them to.
For example, they counted the number of separate issues discussed at safety meetings, but that's something that they were trying to directly manipulate. They also counted a lot of self-reported measurements, like people telling the researcher how often they had a conversation about safety. With just a single site, you never really know whether those numbers are going up or down naturally, or because of the intervention, or just because the researchers are present. That's why a good experiment really needs a control.
Of course, the whole point of this is that we're dealing with a factory that was in safety trouble and had a whole bunch of violations given to them by the regulator, so it would have been pretty unethical to set up a genuine control group here. A control would have meant a factory that had the same problems but where you didn't try to fix them. That's a good reason this shouldn't have been a quantitative experiment in the first place. It really should have been a proper qualitative case study, or properly designed action research.
That's just me as a research methods lecturer making complaints about this. It's really tempting to reduce safety to measurable indicators, but when you do that, you make all sorts of sacrifices. If someone told me, “Hey, we've got a great new safety intervention, we’re going to make our safety committee much more active and effective,” then I don't think just measuring the interactivity would be enough. You'd basically be saying, “Step one, improve the safety committee. Step two, everyone communicates more. Step three, dot, dot, dot. Step four, all of our safety scores go up.”
I really want to know what step three is. I want to know what they changed on the shop floor. What was the increased safety? What did it look like? What were people doing differently? What were the issues that were fixed? That's why in a genuine mixed methods study, you investigate and talk about those qualitative things. You're talking about what actually happened to workers in the factory, not just what changed on the survey. [...] for me. Tell us about the actual intervention, David, because this part is pretty good.
David: What they did over just short of two years, like you said Drew, is a whole range of interventions, or safety improvements if you like, in this space around climate, communication, and worker involvement that should feel very familiar to all of our listeners. The safety team focused on performing more and better safety interactions with the shop floor.
What they did is they started with the health and safety committee. Like you said Drew, from what I remember of this site, its lost time injury frequency rate was almost 40 at the start of the study period, there were a dozen or so notices from the regulator, and there was a change in CEO, so we can all form a view of the operational starting point of this organization. They took the existing health and safety committee and they immediately doubled the number of shop floor representatives, and they moved it to meeting every month.
They added an external safety consultant because they believed that the internal safety team didn't have the requisite safety knowledge to provide meaningful advice on the safety issues that were raised in that committee. They immediately changed that communication and issue resolution mechanism. They did a whole raft of things to increase communication: they put notice boards all around the workplace, they put safety messages in the company newsletters, and they provided messaging for the new CEO, who was apparently really passionate about safety.
They prepared additional safety communication for distribution. The production managers and supervisors incorporated safety discussions into every one of their regular meetings, which they hadn't previously. Then the safety team also developed proactive goals that were going to lead to improvement. Rather than just reacting to whatever came up every day, each member of the safety team had specific projects or activities that they were personally leading and implementing, which shifted them from being reactive to proactive.
Many of these things are things that we would commonly see in organizational safety programs. The real challenge with the intervention is that they did so much, and there were so many other things going on in this workplace over the study period, so many other factors and organizational changes. While they did all of this stuff (we'll talk soon about the findings, which, spoiler, show that safety climate improved based on the measurements they were doing), it's really hard to know which of all these different interventions had what type of effect on, like you said Drew, the shop floor activities or the worker perceptions about the organization and safety.
Drew, do you want to talk specifically about the findings or any other thoughts on what they did during the intervention?
Drew: Sure. That's the fundamental limitation of this sort of research: when you've only got one example that you're working from, it really doesn't matter how sophisticated your measurement or your quantitative analysis is, you can't say what caused the change that you saw. One really good thing they did, and I think this is pretty important, is they didn't make the classic mistake organizations make of saying, “Here’s the injury rate before the intervention, here’s the injury rate after the intervention, it went down, so it must've been that the intervention was great.”
They did a proper trend analysis on the data over several years, and what that analysis showed was that even though there was a decrease in injuries, it was indistinguishable from just a continuation of the existing trend. I think they get full marks and a lot of kudos for the bravery of just saying that upfront and clearly. It’s so easy for researchers to say something like, “Even though it wasn't statistically significant, the injuries decreased.” You shouldn't do that. That's like saying the radio was playing white noise, but if you listened really, really closely, you could hear Yellow Submarine played backwards.
The whole point of statistical testing isn't that it tells you what's real or not real, but it does tell you what's indistinguishable from noise. Statistical testing has been criticized a lot, particularly recently in psychology, but if you're going to do it at all, you've got to stand by it. In this study, the researchers stated honestly that any change in the injury data was buried by the noise. That's an honest and fair thing to say, and I think it's the only conclusion that you're going to get from any study that’s just a single site.
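The trend-analysis idea here can be sketched in a few lines of code. This is not the study's data or method, just an illustrative Python example with made-up LTIFR numbers: fit a linear trend to the pre-intervention injury rates, then ask whether the post-intervention rate sits outside the noise band around that trend, rather than naively comparing before with after.

```python
# Illustrative only: hypothetical LTIFR values, not the study's data.
import numpy as np
from scipy import stats

years = np.array([0, 1, 2, 3, 4])                 # five pre-intervention years
ltifr = np.array([55.0, 42.0, 50.0, 38.0, 44.0])  # noisy but trending down

fit = stats.linregress(years, ltifr)              # the pre-existing trend
predicted = fit.intercept + fit.slope * 6         # what the trend alone predicts two years on

# The spread of residuals around the trend line gives a rough noise band
# (ddof=2 because the line has two fitted parameters).
residuals = ltifr - (fit.intercept + fit.slope * years)
noise = residuals.std(ddof=2)

observed = 24.0                                   # post-intervention rate
surprise = abs(observed - predicted) / noise
print(f"trend predicts {predicted:.1f}, observed {observed}, "
      f"{surprise:.1f} noise units away")
```

With these made-up numbers the observed drop sits less than two noise units from what the existing trend predicts anyway, so, as in the study, you couldn't honestly attribute it to the intervention.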
David: Drew, I've sort of transitioned from solely being a practitioner for most of my career to being involved in research over the last four or five years. That's probably the biggest lesson for me, and maybe for some of our listeners, from this study, because the data did actually show that safety incidents decreased by more than 30% over the two-year study period. Like I said earlier, from an LTIFR of 40 down to 24 or thereabouts. It would be so easy and so obvious for safety practitioners in their organizations to claim victory over their two-year intervention with a one-third reduction in lost time injuries.
I think many practitioners and organizations would claim success with far smaller reductions, but after the statistical testing and the long-term trend analysis, they could have done nothing and had the same or even greater reduction in incidents, just based on the existing trends and the year-to-year variability in performance. That's a real lesson for organizations. We know now that we need to think about what we measure outside of injury rates, and this just shows clearly that it would be very easy for this company to think it's improving safety when it might not actually be improving safety.
In and around the injury rates, what they actually looked at closely was issue resolution. This was mainly through the health and safety committee. In the three years prior to the intervention, the health and safety committee worked on and resolved approximately 20 safety issues every year. In the first year of the intervention, the committee resolved 62, and in the second year, the committee resolved 115 issues. Part of this was the frequency of the meetings, the people involved, and so on, but you can see how that increase in the raising and resolution of safety issues could have an impact on climate scores.
The climate scores significantly increased. Workers reported that they were getting more feedback about safety, feeling more involved in safety, and receiving improved safety instruction. Finally Drew, they reported a 50% increase in safety interactions during the study period, which is the meetings and conversations that people were having in relation to safety. What's interesting to me in this result is that something like 90% of those reported safety interactions were just between two people, a manager and a worker for example, or a safety professional and a worker, which goes to show potentially the importance of that close and personal communication that happens every day between two people in a workplace around safety.
Drew: I think that's a harder number to manipulate as well. It's one thing to schedule a meeting every week that everyone goes along to. You can easily drive up the number of times people talk about safety by having that sort of thing, or you could drive the number up by saying, “Safety has to be an item in every meeting that we have.” But as well as the stuff that they were deliberately changing in the safety committee, they got increases in all of these one-on-one personal interactions, which were not something they had a policy about. They didn't have safety conversations as a formal process or count them as a KPI. That was just a thing that naturally increased as a result of all the other things they were doing.
David: From all of that discussion about the intervention and all of those findings, I think the bottom line here is that we can say that safety probably did get better at this site, although we're not claiming it through injury rates and we understand the limitations of the climate measure and all of the interventions. Maybe that doesn't surprise our listeners either, judging by the things that they were doing over this two-year period.
At the very least, they fixed a whole number of outstanding issues; we saw that in the numbers of things that were identified and resolved through the committee. They obviously had to respond to all those regulator violations, and there was the new CEO who was quite passionate about safety. I think we could say that safety got better at this site, but from the study, it's really hard to know how big that change really was and what were the key specific things that contributed to it.
Drew: That’s important if you want to take this away and do it somewhere else. It's one thing to say, “Hey, this was a success here. Aren’t we great?” but how do you know what to learn from that? Ultimately, the success or failure comes from the set of numbers that they measured, and there are too many things going on that could explain the changes. There was a new CEO, so do you say, “Okay, this study proves that changing the CEO improves safety”? There was a regulator breathing down their necks. There were two changes in safety manager during the study.
We know nothing about the equipment or the work and how that might have changed, because they didn't really investigate or discuss that in the study. It's also a little bit hard to interpret exactly what the change was. The safety culture scores increased. Was that just because people saw that the committee was more active, or was it because they thought that the new CEO genuinely had safety as a priority? Maybe it's because, in the background and not reported by the measurements, the equipment violations got fixed, and that made people genuinely safer, so they reported a more positive safety climate.
Is it fair to say that this is a success story? Is it fair to say that safety climate gets better if you train people to talk about safety more? We don't really know the answer to that and that makes it hard to know what to take away, but I think there are some things that we can practically learn from this. David, what do you think it means?
David: I think practically, as you go through all of the limitations and the way that the research was done (and I think this is a bit of a case of the researcher's eyes being bigger than their stomach, in the sense that they were trying to measure everything, do everything, and then find answers for everything all at once), I came up with four takeaways that I thought were really useful for practitioners in organizations that are trying to improve safety climate through the role of the safety organization.
The first is shop floor engagement. I think safety professionals have increasingly moved their roles away from frequent and effective shop floor interaction. We've seen this as the role of supervisors, and said it's not the safety professional’s role to do direct shop floor engagement: we influence managers, we work with managers, and then they work with their people on the ground.
I think that really limits our understanding, as safety practitioners, of the operational context of the organization. It limits our real-time risk information and it limits our ability to positively impact the understanding and perception of safety amongst the frontline workforce. I really think safety professionals should prioritize time on the frontline listening, communicating, and following up.
The second takeaway for me was issue resolution. In this space, actions speak louder than words. If the safety team or a safety practitioner has an action, then it should be done well and it should be done on time. All too often, the safety team and safety practitioners are really busy and timeframes slip, and if the safety team isn't diligently following up and resolving safety issues, then it's hard to expect other people in the organization to do what we'd like them to do for safety.
It can be tempting in this space for safety teams to fall back on hazard reporting systems and just manage the process, but I'd encourage all safety teams to really closely watch their hazard reporting and observation systems. Know what's going in there, know who it's going to, and know that it's getting done. It appears in this study that issue resolution can play a big role in worker perception of safety climate.
The third one I had, Drew, was about role modeling. This has been a reflection for me at various points over the last couple of years: how do we, as safety professionals, role model our safety interactions through the processes that we’re involved in? Safety meetings, internal investigations, safety audits. I've been involved in a lot of conversations with safety practitioners where we're judging and blaming our managers for what they're not doing about safety, but at the same time, we expect managers not to criticize or blame their workers for having accidents. I really think it's important that safety professionals pay a lot of attention to the interaction behaviors that they're role modeling within their organization.
Finally, Drew, I thought being proactive. The safety organization went from just reacting to what came up on a day-to-day basis to setting clear goals and improvement programs that they were working on, through the safety committee but also with people on the shop floor and with managers, to get things fixed.
We've seen this a little bit with a lot of safety teams, them not having the space and the time to be proactive about safety. I think that's really important. Those would be my four, Drew: shop floor engagement, issue resolution, role modeling, and being proactive.
Drew: I think those are fair takeaways. I'll throw in a sneaky number five, which is that it's important to think about the change that you're trying to make in the organization, and always use that as the basis for measurement. For all its faults, this model of measure-change-measure is far superior to just annual tracking of statistics. One good use of safety climate as a metaphor: just remember that injury rates are like the weather. They go up and down all the time. It's the climate that we care about, the consistent things, and that's what we work to improve.
Think about the long-term difference that you want to make as a safety professional. Whether you would like to see issues getting resolved, whether you want to see a more interactive organization, or whether you want to see an organization where the safety committee is a prestigious committee to get onto: measure what it is now, work to improve it, measure again afterwards. It's not a bad model.
I think we can agree to disagree, maybe, about whether this study has shown that safety teams do impact safety climate, but I think this study has definitely shown that evaluating safety interventions can be really hard. It's given us some useful things to think about: what types of things should go into an intervention, and whether you manage a package of measures like they have here or really work on improving one aspect of the organization.
David: Yeah, I like that Drew. I like how you talked about that one aspect. I think it’s so tempting for practitioners in organizations to say, “We want to improve safety culture.” “Okay, great.” Most academic researchers don’t agree exactly on how to define it and what makes up safety culture. I really like the way that you then said, “What are the specific aspects of your organizational safety climate or culture that you're trying to improve? Be as specific as you can be and then find a way to measure and intervene around that.”
Just as a pre-intro for next week, we're actually going to do that ourselves. We're going to look at the specific issue of trust, which is sort of part of climate and culture, so we're taking our own advice: next week we'll be more specific than we've been this week about a particular mechanism for improving safety.
Drew: Yeah. That's it for this week, anyway. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Please send any comments, questions, or ideas for future episodes to email@example.com.