The Safety of Work

Ep.52 What is the relationship between safety climate and injuries?

Episode Summary

On today’s episode we fulfill an earlier promise we made on a past episode and discuss the relationship between safety climate and injury.

Episode Notes

We frame our conversation around the paper, Safety Climate and Injuries: An Examination of Theoretical and Empirical Relationships. 

Tune in to hear us talk about retrospective studies, the perception of safety vs. actual safety, and the influence of injuries on safety climate.

 

Topics:

 

Quotes:

“People who say that they think their company cares about safety, those people generally are safer.”

“Most safety climate research assumes that safety climate is a good measure, because it is a predictor of injuries.” 

“Not enough of these studies measure the strength of climate.”

 

Resources:

Safety Climate and Injuries: An Examination of Theoretical and Empirical Relationships

Feedback@safetyofwork.com

Episode Transcription

David: You’re listening to the Safety of Work Podcast, Episode 52. Today we’re asking the question what is the relationship between safety climate and injuries? Let's get started. 

Hey everybody, my name is David Provan and I'm here with Drew Rae. We’re from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work podcast. In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it.

Drew, what's today's question?

Drew: David, today we're fulfilling a couple of promises that we made in earlier episodes. In episode 39, we talked about whether our incident investigations actually find the root causes. Last week, we talked about the relationship between blame and learning. We're clearly heading back towards a question-by-question discussion about safety culture and safety climate.

I looked back and I saw that in episode 35 in fact, we already talked about leading and lagging indicators. There, we briefly mentioned that there were papers that show that safety climate is a better lagging indicator of injuries than a leading indicator. In other words, injuries predict climate as much or more than climate predicts injuries. 

At that time, we said there are a couple of papers about this and we'd come back to it, so here we are. As you might imagine, there are a number of studies that have explored the relationship between safety climate and injury rates. That's the whole reason why safety climate was invented in the first place: as a way of explaining why very similar organizations or similar situations could have different numbers of injuries.

The authors of the paper that we're going to talk about today make the point that many of these studies have what we call a retrospective design. In other words, they're looking backwards in time, looking at history, rather than looking forward as we would if we designed an experiment.

The typical way of doing that is you decide to do a climate survey. On the same day that you do the survey, you look at your current 12 months of lost-time injuries. That current 12 months is actually the last 12 months. You're asking whether the last 12 months of injuries is associated with today's safety climate.

Now, it's a good thing to have something in safety where we have two really clear measurements like that. That's actually something that's really attractive about safety climate, we've got these reliable, valid survey measures that have had lots of work put into them, and we've got numbers of injuries. You can argue about how much you hate lost-time injuries but at least it's a nice, concrete thing to compare with.

The trouble is that while the surveys are very rigorous, there isn't really a commonly accepted conceptualization of what safety climate is as a construct, which means that you end up with lots and lots of different surveys that measure the same thing, and lots and lots of different understandings about what those surveys actually mean. 

That's a bit of background to a really simple question, which is just what's the relationship between measuring safety climate and the injuries that you get?

David: Drew, there are a number of papers we could have picked that show this relationship between safety climate and injuries, both retrospectively and prospectively, but we settled on this paper. I think we picked it for a couple of reasons, because it gives us something a little bit more interesting to talk about than just that relationship.

The paper separates out the studies that used this prospective design, and separates that analysis from the studies that didn't. It's really startling to look at these two different designs. What was good in this paper, which we'll introduce in a minute, is that the authors kind of ignored what the original authors of the papers they're reviewing said. They went and looked at each paper's method, and then categorized it based on what the study actually did.

The second thing is they separated out this construct of safety climate into two separate constructs, which they labeled organizational safety climate and psychological safety climate. Let's talk about a few of these ideas and what they separated out.

The first thing is this climate-injury relationship. We've talked a lot on the podcast, Drew, about being really clear on the mechanism, that is, what any practice you're putting into your organization is actually influencing. It's not necessarily reducing injuries directly; it's trying to influence something, which might influence something else, which might in turn influence injuries.

Dov Zohar—who I suppose in 1980, if I'm not mistaken—published the first real 40-item safety climate survey. What's that? Forty years ago. In that paper, he specified that climate is a lot about the perception of the organization's safety policies and practices. That perception of the organization's policies and practices affects the outcomes people expect from certain behaviors. Then, the outcomes they expect from certain behaviors will affect the actual behaviors that they perform in the workplace, and the actual behaviors they perform will affect their injuries. 

You can see this climate construct affects individuals' reasoning, which affects the performance of their work, which affects whether they'll have an injury or not. We're still talking, even with climate, about things, say, three steps removed from injuries, but I suppose climate research, at least for the last 40 years, has tried to spell out what that connection looks like.

Drew, this existing research, what does it show in relation to climate and injuries?

Drew: It's one of those rare concepts in safety where the common sense understanding matches up with a fairly reasonable body of evidence. When we talk about safety climate as opposed to more nebulous concepts like safety culture, most of the questions in the survey were basically asking people do you think your company cares about safety, do you think your company has a good safety measurement system, do the people around you at your company care about safety? 

We shouldn't be surprised that people's perceptions aren't perfect, but people who like the safety of the company they're working with, turns out those companies on average are safer. People who say, no, my company is dangerous, my company doesn't care about safety, the people around me don't care about safety, on average, those companies have a worse safety record when it comes to injuries. 

It should come as no surprise that the opposite works as well. When a company has an accident or an injury, the people of that company then say, oh, my company is not very safe, safety procedures in my company don't work very well. It's obvious, but it's nice that we have evidence clearly showing that both of those are real effects that really happen.

David: I like that relationship there that we see a lot in social psychology, that our perceptions of the world shape our actions in the world, and then those actions in the world shape our perceptions. When you do that at a group level, then you can see how these shared norms play out into shared practices.

Drew: David, I should interrupt you for a moment and say that, actually, that's an interesting extrapolation which is not nearly so well-evidenced. There's much less evidence that people believing that their company cares about safety directly leads to better safety behavior, which then leads to safety. The evidence is actually for the end-to-end link. People who say that they think their company cares about safety, those people generally are safer.

The idea that it happens through shared behaviors, or through influencing people's behavior, compliance, or things like that, that's much weaker. That's actually not what the paper we're looking at today looks at, it just looks at the direct connection between perception of safety and actual safety.

David: Yeah, thanks for that pick up, Drew. That was me slipping a little bit into my own thoughts and experiences, which we encourage our listeners to do a lot as we touch on all of these safety science areas. This is probably a little bit more defined than other things that we've talked about on the podcast, but there are still quite a few things that we obviously don't know in relation to safety climate.

The next one, Drew, and I'm probably not going to get this one right either: they separated out organizational safety climate and psychological safety climate. Listeners are thinking, oh, psychological safety, we're talking a lot about that at the moment. It's not what we might be talking about now when we talk about psychological safety. What the authors did was split out whether the studies reported individual results, what individuals believed in relation to safety climate, or whether they were looking at a group result, team result, or an organizational result.

The psychological safety climate relates to an individual's perceptions about safety in their organization, and the organizational safety climate relates to the collective or the aggregate perceptions of the organization. 

Drew, this was the way that I understood the distinction. I read the paper a few times trying to look for a broader description, but I couldn't get a better answer to that. Is that the way you interpreted what they were doing?

Drew: Yes. Just to give listeners a rough idea of how the difference might work, if you're only measuring things at the individual level, we don't actually have to measure everyone within the same company. We could just go out and find 200 people and say, what do you think about safety at your company, and has anyone in your company had an injury in the last year? Just treat those as if it's like an individual psychological property.

Whereas, when we talk about organizational climate, we have to get people from the same organization and we have to aggregate their results. It's not about what an individual believes and whether an individual is safe, it's about what a group believes and how safe that group is.

David: Thanks, Drew. The other thing the authors talked about, before we jump into the method and introduce the paper, was moderators of this climate-injury relationship. We've said there is a relationship between climate and injuries in the literature, and this relationship goes both ways. They talked about a couple of things that made it difficult to compare all of these studies on the climate-injury connection.

The first was time: the length of time you're reporting injuries before and after the climate assessment, whether it was 3, 6, 12 months, or so on. The second was the definition of climate in the assessment tool that was used. There are lots of different climate tools, and lots of papers take existing tools, take out items, put in new items, and try to make the tool better.

And then the third thing was the severity of injuries. Some studies had different thresholds for what they counted: some included first aids, some included only reportable or other identifiable types of incidents. When you get all of these studies together and they're on different time scales, with different definitions and assessments of climate and different inclusions of types of injuries, it just makes the whole data set not as neat as the authors would have liked.

Drew: Statistically, when we talk about identifying moderators, what we're doing is we're looking for extra variables that can increase or decrease the strength of a relationship. As we'll see when we get into the method of the paper, when you're comparing a lot of different studies—and some of them seem to show a very strong effect, and some of them show a very weak effect—the next thing you look for is, is there a pattern, is there a moderator that makes some of those studies have a stronger link?
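To make that statistical idea concrete, here is a minimal sketch of how a moderator is typically tested: as an interaction term in a regression. Everything below, including the variable names climate, followup, and injuries and all of the numbers, is invented for illustration and is not taken from the paper; the question it mimics is whether the length of the injury-reporting window weakens the climate-injury link.

```python
# A minimal sketch (simulated data): a "moderator" shows up statistically as
# an interaction term. Nothing here comes from the paper under discussion.
import numpy as np

rng = np.random.default_rng(0)
n = 500
climate = rng.normal(0.0, 1.0, n)    # standardized climate score
followup = rng.uniform(3, 24, n)     # months of injuries counted afterwards

# Simulate injury counts where the climate effect weakens as follow-up grows.
injuries = 5.0 - (1.5 - 0.05 * followup) * climate + rng.normal(0.0, 1.0, n)

# Design matrix: intercept, both main effects, and the climate x follow-up term.
X = np.column_stack([np.ones(n), climate, followup, climate * followup])
coef, *_ = np.linalg.lstsq(X, injuries, rcond=None)

print("climate main effect:", round(coef[1], 2))
print("climate x follow-up interaction:", round(coef[3], 2))
# A non-zero interaction coefficient is the signature of a moderator: here it
# means the follow-up window changes (weakens) the climate-injury relationship.
```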

David: The paper we're going to discuss today is titled, Safety Climate and Injuries: An Examination of Theoretical and Empirical Relationships. The paper was published in the Journal of Applied Psychology in 2010. Drew, the authors were Jeremy Beus, Stephanie Payne, Mindy Bergman, and Winfred Arthur, Jr. You did a bit of looking into the other works of the authors. Do you want to give us a sense of these authors? It was published in the Journal of Applied Psychology, so I suspect they're not purely safety science researchers.

Drew: No, these are all org-psych researchers. They're all fairly heavy hitters who do a lot of workplace-type research mainly focusing on things that can be studied through surveys and statistical analysis. Individually, they don't tend to publish a lot about safety climate, but there are several occasions when they've got together and done a really insightful thing specifically focused on safety.

This seems to be the reason for getting together, a shared interest in safety climate. What I particularly like about their work is they don't just blindly or naively measure things without thinking about the underlying thing that they're doing, what they are actually measuring, and how the way you measure it influences the results you get. When you see differences in measurements from different researchers, they try to explain this in terms of the research practices.

A really good example is a 2009 paper by Payne, Beus, and Bergman, and a couple of other authors not in this set, that looks at how come some studies say that safety climate is a leading indicator and other studies say it's a lagging indicator.

The explanation they give is that they point out that if your theory says that climate's a leading indicator, then you got to design a study measuring climate first and injuries afterwards. If you go the other way around—if you measure climate and then look at historical injuries—then even though your theory is about climate influencing injuries, your study is about injuries influencing climate. 

That's the reason why in this particular study, they don't go based on the theory that the other authors say that they're using. They go based on the study design and they look at methodologically what people are assuming. 

Probably a good point just to remind people why it is that we sometimes focus on the quality of the authors and their track record. The reason is that when you're reading a paper, particularly a difficult paper with things like complex statistical analysis or a review of literature you haven't heard of before, there's a lot of trust you have to place in the authors as you read. You need to trust them about how accurately they're representing the other things they talk about, and you need to trust the choices they make in their analysis.

I always like it when I am confident that the authors are better statisticians than I am. When they say they've done things in a particular way, I know that I don't need to look up the methods they used and try to look up the statistics manual and second guess what they're doing to see if it's reasonable. I can just know that anytime I have checked it, they've done a good job. I'm just happy when they say, this is the right way of doing it. Using authors who are experts in this is a good guide as to whether the methods were appropriate or not. 

David: This, Drew, is a practical tip. It's as simple as looking at a paper, and if you like what you're reading, jumping on Google Scholar, typing the authors' names in, and just having a scan of what else they've published, how many times it's been cited, and getting a sense for their body of work. It's a good thing to do.

This paper contains both a literature review and a set of meta-analyses. The meta-analyses test four different relationships: organizational climate leading to injuries, injuries leading to organizational climate, psychological climate leading to injuries, and then injuries leading to psychological climate.

Just before I dive in and explain that, I'm going to throw it to you to tell us about the layman's guide to what a meta-analysis actually is when we're trying to test these four relationships.

Drew: Sure. I think we've talked before about systematic reviews and we might have mentioned meta-analysis in that context, but this might be one of the first meta-analyses that we've looked at. 

A systematic review is when you use a set of search words and you very carefully find every paper that matches the search words. Then, you use a set of rules to whittle it down to papers that are particularly relevant to the subject you're doing, and of suitable quality. 

With the meta-analysis, you go one step further. Your rules are strict enough that the papers you find can be directly, statistically compared. They've got the same independent variable, they've got the same dependent variable. In this case, depending on the direction we're talking about, climate is the independent variable and injuries is the dependent variable, or vice versa. 

If you get papers that match up that well, then you can do direct statistical comparisons. You can almost, in some cases, actually lump them together. Instead of 10 papers, each one with 200 people in it, you treat it as if you've got 1 paper that now has 2000 people in it. The simplest version would be just a direct vote count: say, six papers in favor, four papers against. A meta-analysis gives different weightings to different papers based on the size of the sample, so that we don't just say six for, four against. We combine them, weight them by their sizes, and say what the overall population shows.
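To illustrate the weighting step Drew describes, here is a minimal sketch in Python. The study correlations and sample sizes below are invented for the example; they are not the studies from the paper.

```python
# A minimal sketch (invented numbers): vote counting versus sample-size
# weighting. Each tuple is (correlation between climate and injuries, n).
studies = [(-0.30, 220), (-0.12, 950), (0.05, 60), (-0.22, 400),
           (-0.35, 150), (0.10, 80), (-0.18, 600)]

# Naive vote count: how many studies found a negative climate-injury link?
negative = sum(1 for r, n in studies if r < 0)
print(f"vote count: {negative} studies negative, {len(studies) - negative} positive")

# Sample-size-weighted mean correlation: large studies count for more.
total_n = sum(n for _, n in studies)
pooled_r = sum(r * n for r, n in studies) / total_n
print(f"pooled correlation across {total_n} people: {pooled_r:.3f}")
```

Real meta-analyses go further, correcting for measurement error and testing moderators, but the weighting step is the core of the idea.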

David: Drew, you mentioned being fairly strict about which papers they included. It looks like they had 32 papers that looked at injuries leading to psychological safety climate, 10 papers that looked at injuries leading to organizational safety climate, 11 papers looking at organizational climate leading to injuries, and 1 paper that looked at psychological safety climate leading to injuries.

Just remember that these were not necessarily what the original authors of these papers claimed that they were doing, but it was what the authors of this paper—by looking at their method—actually categorized them as having done. 

Drew, the first thing to notice, like I said, is that the vast majority of studies were designed against what their authors said they were actually trying to measure. That's just a good call out for any researchers in our listener group: maybe rewind and relisten to Drew's earlier advice about lining up your theory and your research question. Just make sure you're designing your research to actually answer your research question, and not a different question that's the reverse of what you're trying to answer.

Drew: It's probably also worth saying that these are really pretty good numbers given how narrowly we're looking. We've said on the podcast before that there are thousands of papers about safety climate. The fact that we're talking about numbers in the tens doesn't mean that all the rest of those thousands of papers are bad papers. It just means that they're not answering this particular question.

Most safety climate research assumes that safety climate is a good measure because it is a predictor of injuries, and then goes on to ask other questions about safety climate. We're looking here at just the papers that try to answer that foundational question: does safety climate work as a predictor of injuries, or do injuries work as a predictor of safety climate?

With that in mind, it's really quite good that we've got 32 papers here specifically looking at the question of whether injuries cause safety climate. There are 10 papers that ask whether injuries cause organizational safety climate, and 11 that look at whether climate causes injuries. Those are really good numbers. That's 10 different people giving a good answer to the question.

David: Let's talk about the answer to that question then, Drew. Let's talk about the findings. We'll run through three or four key findings. We might repeat ourselves a little bit, but let's remember that this is an aggregate of a lot of good studies. The strength of evidence behind the findings in this paper is greater than the strength of evidence behind any of the 54 individual papers included in this group.

The first result is that yes, the effects for organizational climate are stronger than the effects for individual climate. Like you said, Drew, if you sample a group of 40 people in one team in a company, then that will be a better predictor of an injury than if you sample 1 person and then see whether that 1 person has an injury or not. You'll be more confident in your assessment of whether injuries are likely to occur based on the aggregate climate of the group of people rather than one individual. 

Drew: That one has some practical implications that we'll talk about later on in the episode. The second result is the one that we flagged as the headline, which is that certainly for organizational climate, injuries are a better predictor of climate than climate is, as a predictor of injuries. 

Both directions are real. The authors are fairly clear about saying this in their other work. They think there are two mechanisms going on, two effects, each of which works in a different direction. The stronger effect is the way in which injuries influence safety climate. The weaker effect is that safety climate has an effect on injuries.

David: Yeah. I think that’s really, really important. Again, we’ll talk about all of these findings in the practical takeaways. That makes a little bit of sense, but I also think it makes things a bit challenging for us as safety professionals, if injuries are what's predicting organizational climate.

We also talked in episode 35 about injuries being the predictor of how much safety work is going on in our organization, for a lot of the things we call leading indicators. It raises the question of how many of the things we think of in safety as leading indicators actually are, because most of them seem to lag what’s going on in your injury rates.

Drew: The next finding is that when we talk about climate predicting injuries, the longer the time frame we measure over, the weaker that association is. It falls off really quite quickly, within a few months. Some people are in the habit of mixing together safety culture and safety climate as if they are this constant force.

It’s actually more that safety culture is like the climate and safety climate is like the weather. The safety climate now says something about the likelihood of an injury over the next few weeks or the next few months, but it says very little about the long-term prognosis. It’s not a very stable thing.

David: One of the papers in this study, which I'd already read previously, was included in the group. It tracked injury rates for two years before a climate survey and then two years after the climate survey, for all the different groups in the organization. Even though there was a positive relationship between the climate survey and the injuries that followed, it faded really quite rapidly. It only lasted about three months, and beyond three months it was almost like there was no prospective relationship.

Also, almost from the day of that climate survey, there was no prospective relationship with the most serious injuries. The only thing the climate told you, and only with weak confidence, was your chance of having minor injuries in the next three-month period.

Drew: Yeah, which is not nearly as good a predictor as I think we would like it to be. The thing that interested me is that at least in this study, it doesn’t work the same in the other direction. I think actually the reason why they were testing these hypotheses was because of those previous studies you've talked about, David. 

There were some signs that it fell off quickly in both directions. In this study, they found that the influence of injuries on safety climate seems to actually be quite stable and long-lasting. That means that if you had an injury a year ago, that could very well be affecting your safety climate today, whereas your safety climate today doesn’t affect injuries a year into the future. Injuries hang around as an effect on climate longer than climate hangs around as an effect on injuries.

David: That’s also not entirely surprising. Memories and experiences quite strongly influence how we think about the world and how we do things. I know we've worked with a lot of organizations who can recall in great detail even a lost-time injury that might have happened 12 months ago, let alone more serious incidents than that. I don't personally find it that surprising that the hangover of our experiences is greater than how we envision what the future might hold.

Drew: Yeah. I don't find it surprising now, but I did find it surprising when I first realized, talking to some companies, just how powerful some of these stories of past injuries could be. To the point where, even when no one I’m talking to was in the company at the time the injury occurred, the incident, its causation, the response, and how the regulator dealt with it still have a big impact on how people think about and practice safety in their organization many years after some of these events.

David: There were some other things going on around these papers that the researchers studied. Do you want to talk a little bit about some of the other things they were looking into?

Drew: Sure. The next part that they talked about was something they called contamination. This isn’t a concept that I’ve heard of before, at least not in this way, but they give a pretty neat explanation of it. They say that one of the problems with safety climate is that no one really agrees what is or isn’t included in the definition. Which means that you pick up two different safety climate surveys, they don’t just have slightly different questions. They may, in fact, test slightly different concepts.

One of our colleagues at the Safety Science Innovation Lab is in fact a big proponent of this idea, saying that this is a feature rather than a bug of safety climate. They say that if you’re doing safety climate, you should create a new special scale tailored to fit the industry you’re assessing. That's going to make it far more valid than just a generic scale. That makes a lot of sense as a measurement tool. It makes it really, really hard to compare different studies as a researcher.

What they did is they started with Dov Zohar’s original definition which is based around measuring, specifically, employee perceptions of the workplace, policy, procedures, and practices concerning safety. Anything that didn't match that concept of worker perceptions of those things, they called contamination. Their prediction was that the less pure a concept is, the weaker the link is going to be. 

If you had questions about your safety management systems, your individual beliefs about safety, or your attitudes towards taking risk, then all of those things are going to be contamination that weakens the relationship. The weird thing is that they found the exact opposite: contaminated scales were more strongly linked to injuries. David, what do you think about that?

David: I thought about this a lot, Drew, just because of your comment. I find it a little bit surprising. What I was thinking is that Dov Zohar put his work together in the 1980s and wrote 40 questions about the safety practices and activities of the time. It was things like, we have an audit program in place, we have this and that.

I was thinking, 10, 20, 30, or even 35 years after he did that, there are probably a lot of things that organizations have today that in the 1980s would have been representative of a whole lot of other things. These questions were testing, I suppose, the results, or what workers saw as the result, of potentially a lot of other things that were going on driving that particular activity, back when it was more discretionary rather than standardized.

As we've evolved these climate ideas and people have added more questions, the new questions might be reflective of the ways we've sought to differentiate these companies.

I think if you gave a Zohar questionnaire from the 1980s to every company now, all companies might answer very highly on every single item. The only answer I can give is that we've created a more nuanced understanding to try to differentiate organizations, as every organization has come up the curve with the things they have in place around safety in their business. It would just be a hypothesis; I've got no evidence from checking the climate surveys to test that.

The only thing that I can conclude from the strength of the relationship is that we've added in questions over time that help us get a better picture.

Drew: That makes a lot of sense, David. The other thing that was occurring to me, as you were explaining that, is that not enough of these studies measure the strength of climate. A climate has, like, two dimensions: one of them is how positive or negative it is, and the other is how much everyone at the company agrees.

You could talk about a strong climate where everyone has the same feeling, and then that strong climate might be positive or negative. I think a lot of the refinements we've made to the climate surveys, the things that they call contamination here, are actually improving our ability to measure the strength of climate, working out what it is that people agree or disagree about. There just weren't enough studies reporting the variance in their findings for the authors to examine that in this particular study. Maybe that explains some of it: adding those extra questions gets a better picture of how much people agree at the company.

David: There's an interesting question there for someone: to take all of the climate surveys that have been published and used over the last 40 years, pick apart the individual questions and constructs, and look at how our questioning around climate has evolved over time, and whether there's anything to learn from the way it's evolved.

Drew: Yup. Interesting study for someone.

David: Drew, there was one climate indicator which was this idea of perceived management commitment, which sort of shows up in most climate surveys and in many cultural surveys, too. It was shown to have the most robust association with future injuries. 

Now, this kind of made sense to me, if we think about what Zohar originally published: climate leads to behavior-outcome expectancies, which then influence behavior and injury. I suppose, as an individual, whether I think my management wants me to work safely or is okay with me working unsafely is probably going to change the way that first step happens, which is what my expectations are in relation to my individual behaviors.

When I saw that in the paper and then I connected back to that original mechanism, I thought, well, that makes a bit of sense to me. What were your thoughts about that one indicator?

Drew: I have to admit, I actually have a slightly more naive interpretation of that one. Naive as in more simple. I know I often criticize the idea of measuring perceptions instead of measuring the actual thing that we care about. I think when it comes to something like management commitment, perception is really the best measurement we've got. 

You ask management, are you committed? They're not going to give any more honest an answer than we'd get by asking workers how committed management is. The very naïve explanation is that perceived management commitment correlates with actual management commitment.

In other words, what we're saying here is that companies with management commitment to safety end up being safer. I think that's very straightforward and logical. We get some error because we're talking about perceptions, but there are lots of ways other than worker behavior through which management commitment can translate into improved safety, including things like investment in systems, investment in staff, investment in training, and capital investment; these can all be ways in which commitment leads to safety.

David: Drew, I suppose the error in the perception of others is maybe less than the self-report error.

Drew: I think so in this case.

David: Drew, on first read of the article today, you sent back a set of practical takeaways. Let's start talking through those. We've shown a relationship between something we talk about in safety and injuries, which, even if it seems obvious, I hope listeners will be a little bit excited about. Now, I think we've got some really good practical takeaways as well to help them use it.

Drew: I have to admit, I've cribbed these almost directly from what the authors suggest as their own practical takeaways. I always love an academic paper that actually stops to say, what does this mean for safety practitioners?

In this case, the very first one is that as far as quantitative measurement goes, you can probably boil safety climate down to a simple idea about perceived management commitment. If you're looking for something to measure in your organization, do a customer-satisfaction-style survey of your staff and ask them how convinced they are that management cares about safety. That fairly simple measurement might actually be just as good as a more complex safety climate survey.

David: Drew, so the idea is that one or two questions on a pulse survey might tell you everything you need to know, at least in regard to this climate construct, compared with doing the original 40-question survey.

Drew: Yeah. If we want to ask 40 questions, that gives us 38 to ask on a different topic other than perceptions about various things that all just correlate with each other. 

The second takeaway is the importance of the fact that injuries are a leading indicator of safety climate, and just how long-lasting an effect they can have. I think it's really important to think about how we deal with injuries in an organization, because the stories that get told about them really shape how much our people think we care about safety.

Quoting directly from the paper, “For managers and safety practitioners, this suggests the need to investigate and address the factors that have contributed to past injuries.” This is me talking again, saying that this next bit is really important. To continue the quote, “to clearly communicate to employees what has been done to decrease the likelihood of their recurrence.” That's the bit that we often leave out of our investigations. It is not just investigating, but then clearly convincing people that, yes, we have done something to make sure that this isn't going to happen again. Thoughts, David?

David: Look, I think we've talked about individual blame and learning on the podcast before. We've talked about what organizational learning from incidents needs to look like to happen.

I think this idea that how our people feel after incidents and injuries occur in their workplace is going to form, like you said, very clear and long lasting views about their perceptions of the organization and its commitment to safety.

I think a lot of organizations would think that they do this. Okay, yeah, we address the factors, we send around alerts, we tell our people what we're doing. I think this is a bit of a "show us you've done something, don't just tell us you've done something" sort of statement, if you really want injuries not to negatively impact your climate for the next 12 months or so.

Drew: Yeah, that very much reminds me of those survey responses you sometimes see that say "you said, we did", where a lot of the "we did" is "we've pledged to communicate better" instead of actually showing that management has taken an action in response.

David: Normally with our clients, Drew, we call that the say-do gap. How big is the gap between what the organization says and what it does?

Drew: The third finding is related to this fact that the organizational climate—the climate measured in groups—has a stronger effect, a stronger link to injuries in the future than the individual climate. 

Now, this gets back to the very start of the podcast, David. You were talking about how climate works. This is, I think, the evidence that it is actually the shared norms and values operating through behavior. If there wasn't some function of norms and behavior going on, then we wouldn't see this statistical effect. That suggests that actually improving climate by focusing on group activities, value sharing, communication of positive norms, is actually an effective way to improve safety and reduce injuries. As opposed to when we think of it as an individual concept, then we're promoting safety to win the hearts and minds of individuals.

David: Yeah. Focused on, I suppose, broadcast communication where you're trying to just get messages out and into each individual in the organization.

Yeah, Drew. I suppose, coming together, making some collective sense, norming together about shared experiences, and shared expectations. I really like that. I think it really questions the way that we do a lot of our internal communications around health and safety. I suppose a lot of our efforts to try to change climate are not directed at activities that are going to create the biggest impact.

Drew: Yeah. Even though leadership commitment is important, leadership messaging doesn't demonstrate that commitment in the type of way that works here.

The final practical takeaway is for people measuring safety climate: when you measure something like climate, don't forget to measure the strength of agreement, not just how positive or negative it is. In statistical terms, that means don't just measure the average score. Measure the variation in the scores, the variance.

Even lots of researchers make the mistake of not reporting the variance, which makes it impossible to test things like whether more cohesive climates have bigger effects, which is something that we believe is true. We'd really like to be able to test that.
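As a small illustration of that point, the sketch below uses invented survey scores for two hypothetical work groups: the means are identical, so only the variance reveals which group actually has a shared, cohesive climate.

```python
# A minimal sketch (invented scores): same mean climate score, very different
# strength of agreement. Reporting only the average hides the difference.
import statistics

group_a = [4, 4, 4, 4, 4, 4, 4, 4]    # everyone agrees: a strong, shared climate
group_b = [1, 5, 2, 5, 5, 4, 5, 5]    # same mean, much weaker agreement

for name, scores in [("A", group_a), ("B", group_b)]:
    print(f"group {name}: mean = {statistics.mean(scores):.2f}, "
          f"variance = {statistics.variance(scores):.2f}")
```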

David: Drew, the other thing we'd like to know from our listeners is a general question about their experience with safety climate and its relationship to injuries. I suspect that a lot of our listeners have been involved in safety climate assessments, or in efforts to strengthen the positive safety climate in their organization. Do any of your own experiences explain some of the things we've spoken about on the podcast today? Drew, anything else you'd like to hear from our listeners?

Drew: No, that's the main one. I think everyone has slightly different interpretations and applications of safety climate. I'm especially interested to hear people's stories of how they've used the concept, how they've measured it, and how they've had it measured on them.

David: Drew, that's it for this week. The question was, what is the relationship between safety climate and injuries? The answer?

Drew: Well, I think for once we have a simple answer. Safety climate is negatively correlated with future injuries: a good safety climate results in fewer future injuries. Even more strongly, injuries are correlated with future safety climate: fewer injuries means a better safety climate in the future. We can argue about what that means, but the findings are fairly straightforward and clearly evidenced.

David: Drew, that's it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Send us feedback on LinkedIn. Also send any comments, questions, or ideas for future episodes directly to us at feedback@safetyofwork.com.