The Safety of Work

Ep. 1 When do behavioural safety interventions work?

Episode Summary

On this very first episode, we pose the question, when do behavioral safety interventions work?

Episode Notes

Tune in to hear us discuss whether behavioral safety interventions are effective and worthwhile.

Quotes:

“Human behavior change is absolutely a science, but behavior-based safety is probably mostly nonsense.”

“In a randomized control trial, every individual is either given or not given the behavioral training…”

“Interventions that are based on theory tend to be more successful.”

Resources:

Mullan, B., Smith, L., Sainsbury, K., Allom, V., Paterson, H., & Lopez, A. L. (2015). Active behaviour change safety interventions in the construction industry: A systematic review. Safety science, 79, 139-148.

Feedback@safetyofwork.com

Episode Transcription

David: You’re listening to the Safety of Work podcast episode 1. Today, we are asking the question, when do behavioral safety interventions work? Let’s get started.

Hey, everybody. My name’s David Provan and I’m here with Drew Rae. We’re from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work podcast. If this is your first time listening, then thanks for coming. The podcast is produced every week and the show notes can be found at safetyofwork.com.

In each episode, we ask an important question in relation to the safety of work or the work of safety, and examine the evidence surrounding it. Drew, what’s today’s question?

Drew: A bit of a broad topic for this episode because the answer to this question is going to go some way to explain why the rest of our episodes will have much narrower topics. Just before I get to the exact question, the scenario prompting the episode is simple.

Let’s say you’re in an organization that wants to improve safety and for whatever reason, your organization has decided to focus on individual behavior. The safety manager wants to get with the program, but they also want to make sure that they’re adopting something that’s well-proven, not just behavioral snake oil.

They go to their shelf and they pull down the book called, Behavioral Safety Interventions Based on Strong Evidence. What is the book? How many well-evidenced interventions are there? And are there some patterns to what works and doesn’t work? The precise question we would be answering is just when do behavioral safety interventions work?

Before we dive too deeply into it, let’s just have a quick chat about behavioral safety in general. David, what are your thoughts? Is behavior-based safety science or nonsense?

David: To answer that question directly, I think that human behavior change is absolutely a science, but behavior-based safety is probably mostly nonsense. However, the question we’re asking today is a really interesting and important one, because all of our listeners would’ve been involved in behavioral safety interventions at some point in their career. It’s one of the big belief areas of safety, along with safety culture and safety management systems.

I think it’s really important that we unpack this question around behavioral safety and what it means. We know that behavior change, as I mentioned earlier, is quite a well-established science within the discipline of psychology. We know a lot about how you attempt to create behavior change, and about the need for people to have an emotional connection around the change. We’ve seen that in various public safety campaigns around alcohol, drugs, smoking, gambling, and driving. There is a lot of research, although many of those campaigns themselves don’t really follow that behavioral change research very closely.

Drew, you’ve also done a bit of digging more broadly into behavior change and behavior safety. What are your general thoughts?

Drew: Like you, I agree absolutely that there is definitely a science around behavior change. There is often a disconnect between what science says and how people go about trying to implement behavior change campaigns. Often, people rely on quite outdated or almost folklore about behavior change rather than what the science and theory suggests.

We can certainly look and see that there are definitely times when we would expect that behavioral interventions would be very appropriate. There are certainly things that go on in workplaces that are very much guided and driven by human behavior. Often, those are things where there are subtle incentives to do things in a way that’s more dangerous, and it makes sense that we should try to put subtle incentives pushing back in the other direction, whether that’s through making it easier to do it the right way or more motivated to do it the right way.

The theory suggests that that sort of thing certainly works when everyone agrees what the safe behavior is; they just find it hard sometimes to follow that safe behavior. It’s a bit of a truism that if someone’s going to change their behavior, then they need to start by agreeing that their behavior needs to change and having motivation to make that change. There’s no behavioral science that suggests you can just force behavior changes onto someone.

David: That is a good place to step off from, Drew. Do you want to tell us about the paper that you’ve selected for us to talk through today?

Drew: Sure. The main paper we’re going to look at is called, Active behaviour change safety interventions in the construction industry: A systematic review. Let me unpack that a little bit. It’s a long title, like many are. A systematic review is just a particular type of work that involves, as the name suggests, trying in a systematic way to find all the studies on a particular topic and looking at what the weight of evidence is.

We’re dealing with one particular scope. You can’t do a systematic review of the entire world, so we’re focusing on the construction industry and we’re focusing on active behavior change interventions. I’ll get into it a little bit what they mean by active interventions in a moment.

The paper was published in 2015, so it’s reasonably recent. It’s got a long author list, and the first author, who’s usually the one mainly responsible for the paper, is Barbara Mullan. Professor Mullan is a social psychologist with a very extensive track record writing about behavior change, and she published this particular paper in a reputable journal called Safety Science. All of the external markers we look at to find good research suggest that this should be a current, well-informed review of what we currently know on the topic.

The question this study asks, specifically, is what characterizes a successful behavioral safety intervention. They would first find which interventions were successful, and then look to see if there were any patterns. The method they followed is fairly standard for a systematic review. They started off with a set of keywords and did electronic searches to find every paper that might conceivably be relevant. That gave them a starting point of around 6000 articles. They then screened those articles to find ones that met certain conditions.

First, it had to be in the construction industry. Secondly, it had to be trying to reduce injury rates. Thirdly, it had to be a program actively focused on behaviors; so not just an evaluation of some external thing like legislation, or checking whether new equipment led to some change in behavior. Then, they were considering only studies that had some quasi-experimental or experimental design. The very minimum standard is that any study had to have a measurement before the intervention and after the intervention.

Starting with 6000 candidates, the end result was 15 papers. Fifteen papers that reported on evaluated behavior change safety interventions in the construction industry. Now, note that’s not 15 good papers or 15 papers with strong designs. That’s 15 papers total around the world.

David, that surprised me. Does that number of papers surprise you?

David: Well Drew, yeah, it absolutely did. Given those criteria, which are the construction industry, trying to reduce injury rates, through a program actively focusing on worker behavior, there are thousands and thousands of organizations around the world who have done that in the last 5, 10, 20 years.

If you were to ask me how many research papers are out there covering that scope, I probably would have guessed 200–300 in the last 10–15 years, given the amount of behavioral safety activity within our organizations, the number of researchers who talk about it, and the number of books that are out there. I’m curious how well this evidence base reflects the safety evidence base for other ideas more broadly, and curious now, through the podcast, to try to understand how much research is out there for the things that many practitioners and many researchers hold up to be true.

Drew: Yeah. I guess it’s no secret that I’m not a big fan of behavioral safety as [...] not as often applied as a broad-brush first solution to any safety problem. But I always imagined that there was maybe a large but weak evidence base, and that we’d need to go through lots of studies and [...] how well those studies were conducted. It surprised me that there were so few published in total, particularly given that behavior-based safety prides itself on being an evidence-based approach. That’s central to how it works: you do an intervention, you measure the change in behavior, you use that to update the intervention. It has that scientific feel to it and it’s advocated as scientifically based. So yes, that did surprise me.

David: One of the other things that surprised me was when I read the author lists of those 15 research papers. There are many authors that I didn’t recognize. Professor Mullan, who did this systematic literature review, is really not from within the safety science discipline specifically, and many of the authors that have written the books that we’ve read in safety are not prominent in these research papers, nor are their theories really strongly tested within the research papers either. That gap between the experimental or empirical research and the textbooks, if you like, was really interesting to me.

Drew: And I suspect this is not really about behavioral safety, but is reflective of the evidence base in safety science. I’m going to [...] when we get up to our episode on the systematic review of different safety interventions. I don’t think we’re going to come close to 15.

David: No. So Drew, tell us about the studies and the designs within those 15 papers.

Drew: Sure. The studies had a real mix of designs. Only one of the 15 was a really traditional randomized controlled trial. In a randomized controlled trial, every individual is either given or not given the behavioral training, they’re just randomly allocated to one of those two conditions, and we measure the difference. Some of the studies were various combinations of allocating groups to the intervention or not to the intervention. A couple of them were just what we call a pre-post design, which means you measure beforehand, you do the intervention, you measure afterwards. There is no control group.

Now, I’d love to be really critical of the research designs in these studies, but doing real-world research in organizations can be hard. You have to fit in with what the company wants to do. And in construction in particular, you have to deal with the life cycle of the project, you have to deal with subcontractors, and staff coming and going between companies and between projects. It gets really messy really fast, and we can’t set things up as nicely and neatly as people do when they do clinical drug trials.

Professor Mullan’s conclusion was that the overall methodological quality was poor, but I don’t think I’ve ever read a systematic review that doesn’t say the overall methodological quality of the studies in the review was poor. Instead of nitpicking the studies, let’s focus on the results, because those speak for themselves.

Remember, there are 15 studies. Five of those studies measured just injury rates as their only outcome, whether injury rates went up or down. Eight of the studies just measured behavior, whether behavior got better or worse. Two of them measured both: whether the injuries changed and whether the behavior changed.

Of the studies that measured injury rates, exactly two found an improvement in an experiment group compared to a control group. In both of those, the experiment group had a much higher rate of injuries to start with, so it really isn’t fair to compare the two groups anyway. A couple of the before-and-after studies thought that they found an improvement, but when they checked the trend over time, it wasn’t clear that the intervention had anything to do with the improvement; it may just have been a continuation of existing trends. So, that was a bit of a wash when it comes to injury rates.

It was better when it came to measuring behavior change. Of the 10 studies that measured behavior, all of them had improvement on at least one of the things that they measured. That’s not quite as good as it sounds, since they often measured a whole lot of things. Some of the things they measured were things like safety knowledge and self-reported behavior, rather than actually looking at how people behave. But still, 10 studies and all of them found something that improved over the life of the study.

David, do you want to quickly mention what we should be using as our end point: should we be measuring whether injuries change, or is it okay to measure changes in behavior, or changes in self-reported behavior?

David: Yeah, I think safety research suffers from some of the same difficulties as measuring safety in organizations more broadly. We’ve had a lot of discussions in our organizations about what we should be measuring: leading, or lagging, or other types of measures.

When it comes to safety research, though, I think what the study needs to do is have a measure that’s directly related to the mechanism the researcher is trying to impact, change, or understand. If it’s about behavioral change, then the measure really has to be about the behavior, and about as specific a behavior as possible.

Measuring the actual behavior that’s happening is different from measuring people’s understanding and knowledge about what behavior should happen, and it’s also very different from measuring injury rates. If we look at that sequence of what the behavior should be, does the person understand it, do they do it, and do they then not get injured, measuring something that’s three or four steps removed from what the intervention is aiming at makes the results a little bit unreliable when we’re looking at correlations, let alone trying to understand causation.

Drew: That’s a good point. Obviously, we’d all love to know that our safety interventions were making a change in injuries. But from a scientific point of view, injuries are just a really noisy way of looking at safety. They go up or down for all sorts of reasons, including just random variation. It’s not actually a very strong claim to say that your intervention caused injuries to go up or down. It is [...] to look at that middle step. The idea is we change behavior, and changing behavior changes safety. Then, the key measure is: has the behavior changed? That means actually looking at the behavior.

Some of these studies were good at that. Some of them were basically semi-secretly observing the behaviors, watching the workers for long periods of time. Some of them were just getting workers to fill out surveys about their own behavior. Of course, if someone knows that they’re part of a safety study, they’re going to say, “Yes, my behavior got better.” Few people are going to admit they’re breaking the rules on paper that’s going back to their own organization.

Let’s get back to that question. How much can we conclude that the behavior change interventions worked? Not a lot but enough so that we can have a go at trying to find patterns.

Professor Mullan and her co-authors suggested a couple of things. They said that interventions that are based on theory tend to be more successful. What they meant by that is a lot of the behavior change interventions didn’t actually draw on behavior change theory in their designs. That’s consistent with a lot of other stuff I’ve read and seen.

There are a heap of road safety campaigns and health campaigns that don’t make good use of our knowledge about how people react to behavior change messages. Safety is no different. There’s a big manual that, I can’t remember whether it’s the South Australian or Victorian government, put out on how to do road safety campaigns, and I was shocked just last week to see a new motorcycle safety campaign. Obviously, the people who put together this campaign have not read the manual that their own state put all that effort into producing. So, basing it on the theory matters.

Also, interventions that focus on changing people’s knowledge seem to be less effective than interventions that build in feedback. Just telling people, “Be safe,” or, “This is how to be safe,” isn’t as effective as saying, “This is the behavior we want,” and then, when we see behavior that’s different, gently reminding you of that.

David: Back to the original question that we asked about when do behavior change interventions work: it’s important that they’re based on theory, and it’s important that there’s feedback and not just knowledge. How would you answer that question now? When do behavior change interventions work?

Drew: I have to say that based on the systematic review, there is weak evidence that behavior change interventions work sometimes, but we don’t have broad knowledge about when they work or how they work. Either there’s a mess of better designed and evaluated interventions out there that have never been published or the construction industry is unusually bad at designing interventions.

Having checked out those first two possibilities for myself, I have to go to the third one that the people who are absolutely sure that behavioral safety works, are not basing that faith on good evidence.

David: I think I agree there, Drew. Having now got a little bit more across the literature, I think many of the behavior-based safety programs that I’ve seen in organizations are based on popular science and hopes, or on organizational hope and care, as opposed to strong evidence about behavior change in the organizational context.

We need to understand that the organizational context is not the same as the social context more broadly. Research about road safety campaigns or health campaigns, where you might motivate someone to keep themselves safe in a different way, may not actually apply in an organizational setting.

So Drew, we’ve probably been a little bit strong in our ideas about behavioral safety, and I suppose the research has maybe supported some of the way we feel about it, but do you want to talk about how we’re linking our own ideas to the theory of behavioral safety?

Drew: Sure. I think it’s important to note that our intention in picking this particular article for our first episode wasn’t [...] behavioral safety, but really to make a point about the quality of evidence that’s generally available on safety topics. We simply can’t make broad claims that this theory works or that theory works, or this way of looking at safety is better than that way of looking at safety.

Sure, we can have academic and theoretical discussions about those arguments, and I’m sure you will see both David and I piling in on LinkedIn, taking part in some of those arguments. But when it comes to basing our practice on evidence, we can’t make those claims. Hardly anyone out there is properly evaluating their interventions, and most of those interventions aren’t based on the theory anyway, so whether an intervention works or not doesn’t say much about whether the theory works or not. That’s going to be true of most types of broad-brush thinking about safety. If we want good evidence, we’re going to have to zoom in closer.

David: I think we’ll learn, as our listeners hopefully will, and we’ll get more specific and nuanced in the questions that we ask through these podcast episodes. Let’s turn now to what this means for practice.

I think firstly, we need to be very careful about the conflation of a number of different safety ideas: ideas like safety culture, visible leadership, behavioral safety, and safety climate. Sometimes we use those terms interchangeably. Sometimes we don’t take the time to talk and align on what we’re actually talking about, but these things are actually very different, and even within some of these constructs there are very different ideas about what works.

These are all different kinds of strategies for modifying the behavior of people within organizations, and there are different theories in each of these areas. But from a practice point of view, when we talk about behavior-based safety, we need to be very, very specific about the behavior change that we’re trying to make within our organization and how we’re aiming to make it.

So, if you’re doing that, if you’re currently implementing or thinking of implementing a very specific behavior-based safety campaign or program, what would be your advice?

Drew: My honest advice: stop. Possibly not stop permanently, but certainly stop and seriously think about what it is that you’re trying to achieve. The default assumption for any behavior-based safety program, based on the evidence, should be that it doesn’t work, not that this is a good idea that does work.

Probably, most interventions aren’t going to help. If you want to do an intervention, then you’re going to need to put real resources into evaluating what you’re doing: not just before-and-after measures, but control groups and well-designed measurements. Otherwise, there is a more-likely-than-not chance that you’re throwing your money away, throwing away resources and time, and giving safety a bad name. That’s going to be the default assumption.

David: We’re trying to zoom in at the very end and provide some very specific answers. The answer to this question, when do behavioral safety interventions work in the construction industry, is probably: when you are trying to change a very specific behavior, one where the employee is motivated to change that behavior, and one where there are sufficient resources and feedback mechanisms provided to sustain and monitor that behavior change.

There was one study among the 15 that looked at people wearing safety glasses. People being motivated to wear the glasses, having the glasses available, and getting feedback when they were and weren’t wearing them was actually shown, through direct observation, to change that behavior.

That’s it for this week. We hope you found the episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Send in any comments, questions, or ideas of future episodes to us directly at feedback@safetyofwork.com.