The Safety of Work

Ep. 35 What is the relationship between leading and lagging indicators?

Episode Summary

On today’s episode of Safety of Work, we discuss the relationship between leading and lagging indicators. We have received a number of questions about this topic, so we’re looking forward to diving in.

Episode Notes

The paper we use to frame today’s discussion is Leading or Lagging? Temporal Analysis of Safety Indicators on a Large Infrastructure Construction Project.





“One definition of a performance measure or indicator should be...the metric used to measure the organization’s ability to control the risk of accidents.”

“There’s lots of things in nature that aren’t supposed to generate bell curves.”

“Safety is performed by humans, who react to the things that they see.”



Lingard, H., Hallowell, M., Salas, R., & Pirzadeh, P. (2017). Leading or lagging? Temporal analysis of safety indicators on a large infrastructure construction project. Safety science, 91, 206-220.

Episode Transcription

Drew: You're listening to the Safety of Work podcast episode 35. Today we're asking the question, what's the relationship between leading and lagging indicators? Let's get started. 

Hey, everybody. My name is Drew Rae. I'm here with David Provan, and we're from the Safety Science Innovation Lab at Griffith University. We’ve had a number of requests to talk about this topic. David, we're just going to jump straight in. What's today's question?

David: Drew, today's question is all about safety performance indicators, and you're right. We have had a few regular listeners to this show asking us, “Can you talk about leading indicators?” We thought we might do that, and it reminds me, Drew, of an early paper that I've talked about a lot in my career.

It's a paper by Blewett in 1994 called Beyond Lost Time Injuries: Developing Positive Performance Indicators for Safety. It was arguing as far back as 1994 that we should move away from looking at lost time injuries because they're not a good indicator of safety.

I suppose when you combine that with Rasmussen's work in the 1990s about dynamic risk modeling, complex systems, and the performance boundaries of work, this discussion about performance indicators is one we've been having in safety for a very long time. Sometimes it feels like we haven't made a lot of progress.

From my point of view, what I think we're trying to do with safety performance indicators is maybe a couple of things. Maybe on one hand to understand our business, maybe to understand our safety risk, or maybe to predict where the next incident might occur and to help us determine the next best course of action.

Drew, the opening sentence of the paper that we're going to review today is from a couple of authors who, I don't think we've spoken about much, but they're quite prolific in the safety space, Reiman and Pietikäinen. The opening quote is, “The management of safety relies on the systematic anticipation, monitoring, and development of organizational performance.”

Drew: David, I'm going to jump in there with a hard disagree. Not just with that sentence from the paper, but with all of that rubbish about understanding our business and understanding safety risk. That's the rhetoric we put around performance indicators, but if we genuinely cared about those things, then we'd care about whether the indicators we regularly use are actually capable of doing those things.

The analogy I like to use is the stock market. Lots of people are obsessed with the stock market and whether particular stocks go up or down. David, I don't know about you, but I'm old enough to remember watching the daily news if that's still a thing. 

David: I'm not sure that's still a thing, Drew. I think the ABC in Australia is just about to can its 7:45 AM news segment, which I think has been going every morning since 1939, so there you go.

Drew: It always used to be a thing, at least, that you would get a half-hour broadcast of events around the world, and it always finished off with the sports and a section about the stock market, saying which stocks had gone up, which stocks had gone down, and some geeky guy giving an explanation for why those particular movements happened on that particular day. It always bugged the heck out of me, because even if we know that stuff, even if we know the movement of the stock price and the supposed explanation for those movements, it's useless for knowing what's going to happen to those stocks tomorrow.

I don't want the guy telling me that the price of Woolworths went down on the back of a poor earnings report. I want him to tell me Woolworths' earnings have gone down so their stock price is going to fall tomorrow. The current movements don't tell you any of that. Sometimes if a stock goes down, it means it's going to keep going down. Sometimes if it goes down, it means it's going to go back up again. If we could predict which one, everyone would be making a fortune.

In fact, there's a whole industry of people who are making a fortune, but they're making it out of telling other people whether particular stocks are going to go up or down. They're actually really bad at the actual business of predicting stock market movements.

David: I suppose what you might be saying is that economists have been working for 100 years, or longer, to make these predictions, and they're still probably not much better at knowing what's going to happen next, because any system that involves people is inherently irrational. Does that mean we've been at it for 30 or 40 years in safety, and we've got another 60 or 70 years before we realize it's a bit futile? Maybe through this podcast, we'll try and see if we can change tack before we run out of time, so to speak.

When it comes to safety performance indicators, it's something we all want to explore in our organizations. When we talk about leading indicators, and particularly the leading indicators in this study, Drew, I think what we're describing is evidence of safety work. We're looking at safety work activities and either the quality, the completion, or the frequency of those activities, and we're assuming that gives us an insight into the risk control environment around the work that is happening in our workplace.

We’ve talked a lot about the relationship between safety work and the safety of work, so it probably won't come as any surprise that the first question we’ve got to ask, when we think about these leading indicators, is: is there any relationship between these safety work activities and the safety of work? That's question number one.

Question number two would be something like, even if we did know what the relationship was, how do we know that the indicators are going to be reliable and give us any insights?

Drew: It makes sense that if you think this is the stuff we do to create safety, then if we measure how much of it we do, the volume and quality of those activities, that tells us how much safety we've created, so it's going to match measurements of safety in the future. Just as a logical process, it makes total sense to use measures of safety activities as leading indicators. But that logic and reality don't always line up.

David: Yeah. There's a few big ifs, buts, and maybes, or definite no's, in that relationship. Let's just put some definitions around this so that we know what we're talking about. When we're defining performance measures, there are a few definitions that the paper we're going to review draws on.

One definition of a performance measure or indicator could be as simple as “the metric used to measure the organization's ability to control the risk of accidents.” Another definition describes safety indicators as “observable measures that provide insights into a concept – safety – that is actually very difficult to measure directly.” Drew, you've talked about the difficulty of actually measuring safety directly in the past.

Drew: I really like that definition because it gives us a really clear understanding of what it is that we're actually doing. There is no such thing as a measure of safety. Everything we have is indicators. They show the presence or absence of safety.

When we talk about leading and lagging, it's never leading safety or lagging safety. We need to ask leading and lagging compared to what. Your intuition about this doesn't always work. A common misconception is that lagging means that we're measuring safety in the past and leading means that we're measuring safety in the future—trying to predict future safety. But that's really not what we're doing. 

A clearer thing is to say that leading and lagging are relationships between different measurements, different indicators. You don't say this is a leading indicator or this is a lagging indicator; you say this indicator leads this other indicator, or this indicator lags this other indicator. The color of the sky tonight is a leading indicator of rain in the morning. The number of people attending a Trump rally without masks today is a leading indicator of the number of Trump supporters who are going to realize the hard way that COVID-19 isn't a hoax. One thing leads to another thing; they're both just measurements.

David: I like the way you described that, that an indicator leads or lags some other indicator, because that's going to be directly relevant as we talk about the results of this paper. But if we can find indicators that predict accidents in the future, then we can act on those indicators and reduce the risk of future accidents. That's a really sensible thing for organizations to try to do. Let's just hold these questions: if we could predict accidents effectively, what are the things we can look at to help us with that prediction? That's our question, Drew. I liked it because the authors stated their primary goal clearly.

The authors of the paper that we are just about to introduce said that their primary goal in this research was to “investigate the relationships and the temporal interdependencies among safety indicators collected during the delivery of a large rail infrastructure project in Australia.” Lots of words, but basically they're trying to do what you just said, Drew: work out what leads what, what lags what, and what the other things are.

Drew: I love that as a research question, but I'm just going to throw an idea into our listeners' minds to think about as we go through this paper. We're going to come back to this later. That is if it's really true that there is some sort of causal relationship, that an indicator now predicts safety in the future, then no one is just going to sit back and watch that happen. 

The moment you see a leading indicator that you think tells you an accident is going to happen, you're going to act on it. That's going to reduce the probability of the accident in the future. It's not going to increase it, which means your leading indicator is not going to be a good predictor of your lagging indicator because you predicted an accident and the accident didn't happen.

I just want to leave that thought in your minds because we're going to come back to it. 

David: Great, Drew. You've actually intrigued me. I'm looking forward to what you're going to say about that. The paper today is titled Leading or lagging? Temporal analysis of safety indicators on a large infrastructure construction project, which is pretty much the research aim.

The authors are Professor Helen Lingard, Matthew Hallowell, Rico Salas, and Payam Pirzadeh. Helen and Payam are from RMIT in Melbourne, Drew, the city where I'm based. As for the others, Matthew is from the University of Colorado and Rico is from Chevron, the oil and gas company in the US.

It looks like a collaboration across a couple of universities. I'm not quite sure how it came about. I'm not that familiar with Matthew Hallowell's work, Drew, but I'm quite familiar with Helen Lingard's.

Helen is very big in construction safety and has been for a long time in terms of safety research in the construction industry, particularly looking at how those projects are managed over the entire project life cycle.

Drew: Matthew Hallowell does a similar thing in the US. He runs a big research network in construction safety for major projects. These are very big names, one in Australia and one in the US.

David: This paper was published in 2017, Drew, in Safety Science, which is the source of quite a few papers that we've reviewed on the podcast. Those who listened last week to episode 34, about how to source and review research, would have heard you talk about Safety Science as one of the big four safety journals.

Drew, the method they had for this: there was all this data collected as part of the routine reporting on a large infrastructure construction program in Melbourne. A multi-year, multibillion-dollar program that involved the construction of new rail track, new rail stations, upgrades to existing stations, 13 level crossing removal projects involving road-rail grade separation, and the construction of an entirely new rail bridge.

There were multiple contractors on this project working for the government. All of the principal contractors reported their data monthly to the client, so they had lots of different contractors and a huge scope of work. Although there were different companies reporting, according to the paper, the definitions and the processes for data classification were consistent across all of the different contractors.

All that data was carefully specified and carefully checked by the government as the client. The data period that the researchers looked at was between January 2010 and January 2015. That's where the data came from. Do you want to talk a little bit more about the method?

Drew: Sure. Let's just start with some of the measures they used. One of the main things they used was the TRIFR. I always get this wrong, David. Can you please expand TRIFR for us?

David: Alright, let's go. Total recordable injury frequency rate. Lots of industries have lots of different definitions, but usually a recordable injury will be a combination of lost time injuries, medical treatment injuries, and some restricted work cases where a person is incapacitated in some way from their normal duties.

You count up all those work-related occurrences and divide by the number of hours your organization works, usually expressed per million hours, and that gives you a number. If you believe everyone, a rate of one recordable injury every million hours is where lots of organizations say they want to be, and if it's higher than 10, they probably start to get a whole lot of attention from their senior management.
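David's arithmetic here can be sketched in a few lines. This is an illustrative calculation only; the function name and the example figures below are made up, not from the paper.

```python
# Illustrative sketch of the TRIFR arithmetic David describes.
# The figures are invented for the example.

def trifr(recordable_injuries, hours_worked, per_hours=1_000_000):
    """Total recordable injury frequency rate: recordables per million hours worked."""
    return recordable_injuries / hours_worked * per_hours

# Roughly 500 people working 2,000 hours each is about a million hours a year.
print(trifr(1, 1_000_000))    # 1.0 -> the aspirational figure
print(trifr(12, 1_200_000))   # 10.0 -> likely to attract senior management attention
```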

Drew: Most of the other variables are safety activities: number of toolbox talks, number of pre-start meetings, number of safety observations, number of site inspections, and number of safety audits. Some are outcomes from those activities: number of non-compliances, number of hazards reported, number of hazards closed out, number of regulatory penalties or infringements, etc. You get the idea of the types of things they're measuring: activities and the results of those activities.

The researchers had a justification for why they used TRIFR as their lagging indicator, but it was a little bit odd. They say, firstly, that it's harder to manipulate, which is absolutely 100% not true. Secondly, that it follows a normal distribution, which is really weird, because if your TRIFR follows a normal distribution, something has gone really wrong.

Incident rates are supposed to follow a thing called Poisson distribution. If you think about it, that should be absolutely obvious. A normal distribution is a bell curve. It's even on either side, but you can't have negative injuries, so you can't have a bell curve of injury rates, it would just get cut off. The probability distribution can't be symmetrical. If it is symmetrical, something has gone wrong. 

I also really wish people wouldn't use injury rates instead of raw numbers. We'll get into this a little bit later, but one of the problems with using rates is it introduces false correlations between things. I'm a little bit skeptical that some of the results they find are actually correlations that they have accidentally caused by using rates instead of the raw numbers.

Despite that problem, I think the paper is really quite good. All of this just illustrates how hard it is to get indicator statistics right. These are four highly competent and experienced research professionals, the paper has been peer-reviewed and published in a respectable journal, and it still has some serious statistical problems in it. It just shows you can't randomly throw around safety injury numbers and expect the numbers to mean something.

David: Drew, that's why I love doing the podcast with you, because I'm learning every day about statistics; quantitative stats is not something I spend a lot of time on in my research. What I might get you to do is just continue on with what they did with the data, because there's a chance that I'll misexplain it.

Drew: Sure. These are fairly straightforward steps, and most of them are steps you do just to eliminate possible objections. The first thing they did is what you call normalizing the data, which means dividing each number by an estimate of the total number of hours worked that month. Otherwise, things like the number of inductions just go up with the number of people and hours worked, and the number of injuries goes up and down with the number of hours being worked.

So the first step is turning the raw numbers into rates by dividing by hours. The trouble is, when you divide two numbers by the same third number, the results are automatically correlated. That's just a mathematical fact.

The second thing they did is they converted the data into time series, which is a fancy way of saying that they paired each number with a timestamp so that you know which order the things came in so you can look at this as a graph rather than just a collection of numbers. Most of the work in this is just converting things like November into a number 11 so that you can put things in order instead of just on a bar chart. 

The third thing they do is they tested the data for normality. The reason for this is that many statistical tests assume that data follows a bell curve because that gives nice, neat, and stable statistical properties. The big problem with this is that people are just used to seeing bell curves. People see normal distributions all over the place. Actually, there are lots of things in nature that aren't supposed to generate bell curves.

If you flip a coin and record the results, you don't get a bell curve. You get a thing called a binomial distribution. If you count injury events, you don't get a bell curve, you get a Poisson distribution. If you have a target for the correct number of safety observations, you're not going to get a bell curve, you're going to get something that is skewed particularly with a big peak right around that target.

If you test any of this data properly and you're still coming up with a normal distribution, you shouldn't be thinking, “Excellent, my data has the right statistical properties, I can run the test now.” You should be thinking, “This data doesn't say what it's supposed to say.” End of that rant.
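A quick way to see Drew's point is to simulate monthly injury counts. This sketch draws Poisson-distributed counts using Knuth's simple algorithm (the Python standard library has no Poisson sampler, so the `poisson` helper here is our own illustration, not a library call): the distribution has a hard floor at zero and a right-hand tail, so it cannot be a symmetric bell curve.

```python
# Simulated monthly recordable-injury counts: Poisson, not normal.
import math
import random
import statistics

random.seed(42)

def poisson(lam):
    """Draw one Poisson-distributed count (Knuth's algorithm, fine for small lam)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

counts = [poisson(2.0) for _ in range(1000)]   # e.g. about 2 recordables a month
print("minimum count:", min(counts))           # never below zero
print("mean count:", statistics.mean(counts))  # close to 2
# A bell curve centred near 2 would need a left tail of negative injuries.
```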

The fourth step is that they performed a difference transformation. The reason for this is to get rid of seasons and long-term trends from the data so we can look at what it actually predicts. Otherwise your correlations just reflect that you're comparing winter to summer, or that things have been changing over time with a long-term trend, and the predictions don't work out.
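The difference transformation itself is tiny. A sketch with made-up numbers shows the idea: subtracting each month from the next strips out a steady trend, leaving only the month-to-month movement.

```python
# First-order differencing: each month minus the previous month.
# A steady trend differences away to a constant, leaving only movement.

def difference(series):
    """Return x[t] - x[t-1] for each consecutive pair."""
    return [b - a for a, b in zip(series, series[1:])]

trend = [10, 12, 14, 16, 18, 20]   # indicator rising by 2 per month
print(difference(trend))           # [2, 2, 2, 2, 2] -> the trend is gone
```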

All of those four steps are about preparing the data, cleaning it up. The final step they do is mathematically complicated but really easy to understand in practice. Imagine they graph each of these variables on a sheet of clear, transparent film, lay them over the top of each other, and then shift them back and forth to see where they line up. That's what they're doing with the mathematics: if you shift things in time, how many months forward or backward do you need to go before one number starts to line up with the other?

David: Drew, they got 60 months of data. Lots of data from lots of companies. They've got when the recordable incidents occurred, and all this information about toolbox talks, inspections, safe work method statement reviews, and all the other variables they looked at.

Let's talk about what they found. You mentioned correlations, and you've already given the warning, which I wasn't aware of until you said it, that when you divide two numbers by the same third number, the results are always correlated. I was going to add the warning that correlation doesn't equal causation, and that's a podcast in itself, but let's just say what did come out of the data.

I'm going to run through a few correlations that were statistically significant in the data. They found that toolbox talks led TRIFR by four months, so what was happening with toolbox talks gave you an indication of what was going to happen with TRIFR four months later. Toolbox talks also lagged TRIFR by two months: after an incident, the data could tell you what was going to happen to toolbox talks for the next two months.

Pre-briefs led TRIFR by two months, so looking at what happened in pre-briefs two months prior gave you an indication of what was going to happen with TRIFR. TRIFR led safety observations by one month, which means that if you had a recordable injury, the data could tell you what was going to happen with safety observations the month after. TRIFR also lagged safety observations by four months: based on what's happening with observations, you could say what was going to happen with TRIFR four months down the track.

This is the kind of data that came out. Site surveillance lagged TRIFR at month two, which means two months after a recordable incident, the data could tell you what was going to happen in terms of site surveillance inspections. I don't know whether to keep going through this stuff, Drew, or whether our listeners are already starting to see the trend.

Audits led TRIFR by two months and then lagged TRIFR by two months, which meant that looking at audits two months before could tell you what was going to happen with recordables, and then two months after a recordable incident, the data could tell you what was going to happen with audits at that particular location.

Drew: David, just to put it into plain language, and let's just assume that correlation does equal causation for a moment. What this data is telling us, most significantly, is that toolbox talks cause a change in the TRIFR, but a change in the TRIFR changes the number of toolbox talks. Site observations cause changes in safety, but changes in safety cause site observations. This is not a simple relationship where safety activity causes safety outcomes. Safety outcomes cause safety activities as well.

David: Drew, exactly right. As I was saying it in my own head, I thought listeners might be forming that view and going, “I know exactly why toolbox talks go up two months after an incident occurs,” or something like that.

The authors conclude exactly this. They saw it in the data, and they concluded that the indicators we generally believe and talk about as being leading indicators are not always leading. Like you said, Drew: toolbox talks, pre-briefs, audits, non-compliances, safety observations, alcohol and drug tests, safe work method statement reviews, and site inductions all lead TRIFR. There was something in all that data where the statistics say we have an indication of what's going to happen to TRIFR. But they also lag TRIFR, which meant that when a recordable injury happened, the data could also say what was going to happen to all those things on that site in the months after the incident.

It means that these safety efforts may cause something to do with safety, but then safety also causes something to do with these safety efforts. Straightaway, I wouldn't say the authors became confused, but it became clear from the data that it was a pretty complicated system of interactions.

Drew: David, I want to just throw another spin on it because we don't even know that there weren't other safety indicators happening as well as TRIFR in the background here. 

For example, the increase in site observations in the lead up to an accident could be that other safety data was telling them that there was a problem with this particular site. It's not even necessarily that the observations are predicting future safety. It could just be that the same things that are warning them that an accident is coming are also causing them to start to do these other activities. 

David: I think you said earlier that the researchers didn't necessarily design this data collection process; they just, at some point, got access to five years of data. Like you said, if you were designing this from the start as a longitudinal study, you'd try to observe and control for some of those other possible factors.

Drew: I want to really point out to our listeners the impact of these findings because they confirm something really fundamental about finding good safety indicators. That is if you take action in response to any of your indicators, if you see an indicator and you do something about it, that automatically interferes with the relationships between the indicators, so you'll never know if the indicator was telling you something useful in the first place. 

This makes it fundamentally impossible to come up with good safety indicators in the way that they are trying to do in this study, which is not a failure of this study at all. This study is really for the purpose of proving this point. You cannot find good safety indicators by looking at your past data and trying to work out what correlates with other things because you always have these cyclical, complex, and causal relationships. The only way to find out what's a good indicator is to go hands-off and let the accidents happen. No one is ever going to do that just to find out whether their indicator was true or not. 

David: Drew, look, the authors were really, really clear about this. They said exactly what you said: you don't know if your indicators are moving because you're about to have an incident, or if they're moving because you just had an incident and that's the way the organization is responding. The researchers also concluded that these supposed leading indicators don't simply drive safety outcomes; safety outcomes also drive these changes in activity.

We really need to look for a third thing, something that isn't the safety activity and isn't the safety incidents and outcomes, but that is related to, yet independent of, the movements in the others. Something we can actually look at without it being manipulated through our responses to safety incidents.

It's complex, Drew. It's cyclical. I had a bit of a cringe when I read this after our podcast on the Brady report, where we talked about how the fatality cycle is nonsense. Here's another paper that talks about a cycle of indicators and safety work in an organization.

Drew: What they are saying here though is less about a cycle and more about a circular relationship between causal factors. Unlike when I called up the Brady report, I said there are tests you can do for data to check if it's cyclical. They actually ran those tests in this study. 

David: Drew, like you said, our listeners will be able to see that it is cyclical. They will know the amount of activity that happens after an incident in their organization and then they'll see that just revert to normal level over time and maybe bounce back again or something. I'm sure that the cycle of safety work, if you like, is true.

Drew: David, I'll just point out one final part of the paper before we start talking about the conclusions. After they had done this initial analysis, they did some tests for causality. Remember, this isn't an experiment, so we're not really talking about proof that one thing causes another. We're looking at how one indicator responds to another indicator: whether we can see some time relationship where one thing going up seems to cause something else to go up.

They showed that the toolbox talk, audit, non-compliance, drug test, and site induction indicators caused TRIFR in this sense. They also showed that, in the same sense, TRIFR caused toolbox talks and audits. That confirms and highlights that things like toolbox talks and audits behave as both leading and lagging indicators. It also showed that some things you might think of as improving safety were acting only as lagging indicators. According to this, the number of safety observations doesn't create safety; it shows how worried you are about safety because you just had a bunch of incidents.

David, there's a note that you’ve put in here that I'll hijack which is that leading indicators probably tell you more about how your company responds to safety incidents rather than telling you where the next one is going to occur.

David: I think, Drew, this seed was planted in my mind by you a couple of years ago on another paper, which we might talk about at some point. It said, and correct me if I get it wrong, that things like safety culture surveys are probably a stronger lagging indicator of a company's safety performance than a leading indicator, on the basis that if people aren't having accidents, they think they're safe, and they answer those surveys more positively than someone in a company that is having lots of accidents.

Drew: No one who's just had one of their workmates injured is going to tick “strongly agree” on a statement like “my management cares about my safety.” If I remember correctly, the finding is that safety climate works the same way. It is both a predictor and a lagging indicator of things like TRIFR.

David: A little bit in the same way as the safety work indicators we've been talking about. Drew, let's move on to the conclusions, because they point out a number of things that, as you suggested at the start, are well known in non-safety measurement. I talked about economics earlier, but safety people probably need some reminding. Do you want to kick off the conclusions?

Drew: The first conclusion is that talking about leading and lagging indicators is just dumb. The paper doesn't call it dumb. Their actual conclusion is that talking about leading and lagging indicators is problematic. Researchers can be very diplomatic. When they say problematic, they mean dumb.

The reason is there's no neat arrow going from the past to the future that lets you talk about leading and lagging. Safety is performed by humans who react to the things that they see. If you have an accident, that influences the number of inductions you do and what goes into those inductions just as much or more than the number of inductions and what's in them influences the accidents. 

If all you report is TRIFR and your TRIFR drives your safety activity, then TRIFR is a leading indicator. It's the thing that drives safety activity; it leads those activities. That will hold just until you start measuring the safety activity instead, because you want to focus on leading indicators. Then the safety activities become the leading indicators, because they are the things you're driving that cause other things.

This leads to their second finding, which is that they think there is clear evidence of cyclical relationships between the indicators over time. This isn't the idea of a safety cycle or a normal accident cycle. It's a relationship between the indicators: as you increase one type of activity, it changes another measure like TRIFR. But over the next period, the direction of causality changes, and TRIFR is driving the safety activity, which then drives TRIFR.

None of this necessarily means that safety is going up or down, because remember, TRIFR itself is very easy to manipulate. Doing lots of safety activity can drive TRIFR down without changing your actual risk of an accident. This is direct evidence of the problem that the moment you start acting on a measurement, the measurement turns into a leading indicator. But it's not a leading indicator of safety; it's a leading indicator of the next thing that you're going to end up measuring.
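The kind of temporal analysis behind a finding like this can be sketched with a simple lagged cross-correlation. This is a minimal illustration only, not the authors' actual method, and the two monthly series below are invented: a TRIFR-style injury rate, and a count of safety activity (say, inspections) constructed so that activity spikes in the month after an incident-heavy month.

```python
# Hypothetical illustration of lagged cross-correlation between two monthly
# series: a TRIFR-style injury rate and a count of safety activity.
# All numbers are invented; this is not the paper's data or method.

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def lagged_correlation(activity, trifr, lag):
    """Correlate activity in month t with TRIFR in month t + lag.

    lag > 0 tests "activity leads TRIFR";
    lag < 0 tests "TRIFR leads activity".
    """
    if lag >= 0:
        return pearson(activity[:len(activity) - lag], trifr[lag:])
    return pearson(activity[-lag:], trifr[:len(trifr) + lag])

# Toy series in which an incident-heavy month triggers a burst of
# safety activity the following month.
trifr    = [1, 1, 6, 1, 1, 1, 7, 1, 1, 5, 1, 1]
activity = [10, 10, 10, 30, 12, 10, 10, 31, 11, 10, 29, 12]

for lag in (-1, 0, 1):
    print(lag, round(lagged_correlation(activity, trifr, lag), 2))
```

In this constructed data the correlation at lag -1 is strongly positive, meaning injuries this month predict activity next month, so the "leading" arrow actually points backwards from the injury measure to the activity measure, exactly the direction-of-causality reversal being described.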

The third conclusion, this one they don't draw directly from the data, but it's a useful message. They say, "This cyclical behavior will not produce sustained improvement in safety performance over time." My translation of that, and David I'll be interested in your thoughts, is that this is a direct response to the idea some people have: I know the measurements are bad, but at least they drive good behavior.

The authors are saying no, don't kid yourself. The cyclical behavior is driving safety around in circles. It's not causing any sustained improvement. Your measurement is just causing other distorted measurements. 

David: Drew, I've sort of reflected on that today as we've been looking at this paper, and I think that's exactly right. We talk a lot about all these things and we put them in: the training compliance data, the actions that get closed out on time, the investigations that get performed, and all the rest of the list of 10, 12, or 15 indicators, or like I said, the quality, frequency, and completion of the work activities.

Then we say, but we want the organization to be doing all those things anyway. What this says is that the organization will do them at a base level, then when there's an incident they'll do a lot more of them, then they'll revert to a base level, then there might be another incident, and it will come back up again. It's just going around in circles. I suppose we're seeing relationships in the indicators, but I think you and I are both concerned about the strength of the relationship between all of that safety work activity and the safety of work.

Driving all that activity may not be that useful anyway. They're really good conclusions. Let's go into what could be measured, Drew, because that's the logical next step for us. We've sat here for half an hour or so and said, all right, there's not a useful distinction between leading and lagging indicators, and there may not be a strong relationship between what traditionally gets measured as leading indicators and injuries or the safety of work. Then what should we measure, Drew? That's the next question. Maybe we'll talk a little bit more about that.

Drew: Sure. David, I suspect here that we're going to disagree with each other and with the authors of the paper. I'm just going to state the paper authors' opinion and then get you to give yours. The authors conclude that the solution to this is to move more towards Erik Hollnagel's style of measuring resilience, measuring positive capacity rather than measuring safety activity.

That would be things like measuring your ability to respond to variability, measuring disturbances, looking for opportunities, and monitoring for changes in the base state of the organization to anticipate things in the future that could have an impact on safety. Your thoughts?

David: Intellectually, we'd like to do that. When I talk to organizations about having good, let's say predictive, safety conversations, I'd align a little more with some of Rasmussen's ideas and his modeling work, and maybe Snook and drift and things like that. I'm actually more interested in where there's goal conflict in my organization, where there's resource constraint, where there's change or instability in the operation, as opposed to some of those real positive capacities that Erik might talk about.

What I mean by that is: where in my organization are production rates 20% below target, so that a team might be trying to catch up; where is the maintenance budget underspent because maintenance is being deferred; where have I got vacant roles among my frontline safety-critical people, which means there might be resource constraints in those frontline roles; where am I mobilizing new contractors into my organization? Things like that.

In terms of what would be an indicator to trigger a predictive safety conversation, I'd probably be looking more towards where my organization is drifting, where it's resource constrained, where its goals are conflicted, and things like that, which isn't quite what Erik is saying in his resilience potentials model.

Drew: I would agree, at least to the extent of using statistics and data as flags for where to look, but all the things you've mentioned there aren't genuinely indicators, except that they say something interesting is here. Your 20% reduction in productivity could be a sign that things have slowed down and are perfectly safe, or it could be a sign that they're experiencing real problems and the stress of trying to make up time, to the point of being dangerous. It tells you something interesting is there; it doesn't say which direction it's interesting in.

David: Yeah. Look, Drew, we've spoken about this before: an indicator will only ever give you a question, not an answer. That's what we should be looking for with our indicators. Organizations now say, don't give me information, give me insight. They think their indicator is going to tell them what to do next. It won't. Let's not say it raises a red flag; it raises a hand. What you have to do is go and ask a question to find out whether there's something you need to do about it or not.

Drew: You don't want to go to the boardroom just presenting your indicator. You want to have already looked into it and say: we had this figure, so I went out and looked; here's the explanation, and this is what we need to do about it.

The other thing that I would throw in is that actually, some of these things that we think of as leading indicators are much better thought of as good variables to use in safety experiments. One way to get around these cyclical problems is to have a clear thing that you're trying to improve and have a control group. 

If we're using our indicators not to evaluate our organization's general performance, but to evaluate a specific strategy, a specific activity, a specific mechanism, or even a specific set of training, then that's when we can use them as genuine comparative indicators. But we're not looking at whether they go up or down, or whether they're leading or lagging. We're looking at whether, compared to the control group, the result is good or bad.
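One way to picture that comparative use is a simple permutation test on an indicator measured across intervention and control sites. This is a hypothetical sketch: the sites, the intervention framing, and every count below are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch: an indicator used as the outcome in a controlled
# comparison, rather than as a stand-alone "leading" measure. All sites
# and counts are invented for illustration.
import random

random.seed(1)

# Monthly hazard-report counts at sites that received a new intervention
# versus matched control sites.
treatment = [14, 17, 15, 19, 16, 18]
control   = [11, 12, 10, 13, 11, 12]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treatment) - mean(control)

# Permutation test: how often does randomly relabelling the sites produce
# a difference at least as large as the one we observed?
pooled = treatment + control
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:len(treatment)]) - mean(pooled[len(treatment):])
    if diff >= observed:
        count += 1

p_value = count / trials
print(f"difference = {observed:.2f}, p = {p_value:.4f}")
```

The point of the design, not the code, is the comparison: the same indicator that is meaningless as a free-floating "leading" number becomes informative once there is a control group and a specific intervention to attribute the difference to.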

David: Yeah, Drew. It would be great if we could be a bit more deliberate with our use of indicators in an organization and try to learn more about how they work and what they tell us over time. Those are the conclusions that we’ve worked our way through there about the use or non-use of indicators. 

What are the practical takeaways for our listeners? Maybe if I kick us off, I'd say the simple discussions we have about leading and lagging indicators of safety are absolutely not representative of interactions in a complex system. We've shown today, through this paper, that indicators can be both leading and lagging, and it's probably not a sensible distinction to make.

Drew: Let's just get rid of the discussion of leading and lagging indicators. If you don't like injury rates as a measure, and you certainly shouldn't, then say you don't like injury rates. Don't say, let's not use lagging indicators, let's use leading indicators. Let's just leave that terminology out. It's not helping us come up with better indicators.

David: Just use the term safety indicator, I suppose. The second takeaway is that measurements of safety activity, or what we'd call safety work, rather than of the actual physical work environment, are not the most useful representation. The authors say this. They specifically suggest worrying less about the safety activity and looking more for indicators of the physical work environment.

Drew: I'm going to put a slight caveat on that. If you are trying to introduce a new practice in your organization, then absolutely you should measure the success of the rollout of that practice. If you're trying to get people to do inductions, then measuring how many inductions people actually do is a good sign of whether you've successfully rolled out that practice. It's not an indicator of safety. It's an indicator of whether the organization is doing what you're trying to get it to do.

David: Drew, I totally agree. The point you're quite clear on there is that you're measuring this to know whether people are doing the inductions the way you want them to, as opposed to putting it on your safety scorecard for the next five years and talking about it as a measure of safety.

The third practical takeaway we’ve got, Drew, is that it appears that there are relationships between some safety work activities and recordable injuries. 

The researchers conclude that it's worth doing more systemic modeling to understand these relationships fully. But if you're out there in your organization with these types of leading indicators we've spoken about today on your safety scorecard, and you're telling your organization that these activities are leading indicators of your recordable injury rate, consider this: these researchers looked at five years of data, did all the statistical testing, and concluded that there are a few correlations, but they are not going to claim a causal relationship.

There needs to be thorough, systematic modeling with much larger samples of data before anyone can conclude that. The practical takeaway is: don't be a professional who talks about these leading indicators as predicting TRIFR.

Drew: I've got separate thoughts of my own about whether even researchers should be doing this modeling with TRIFR as the target. It's one of those areas where if you're critical of the ivory tower and critical of the way we do research, don't dive in and think you can do it yourself in your own organization. There's a lot of extra work that needs to be done before you can make those claims. 

David: The last practical takeaway is that, as much as anyone, Drew, you and I would love to know what information we can look at. You spend all of your time researching ways we can look at and think about safety that give us some kind of predictive capability.

That's what we do. I'd hope the listeners can see that we are genuinely interested in trying to figure out what things we can understand in our organizations to give us a sense of what might be around the corner. But the last practical takeaway is that we, and when I say we I mean practitioners and researchers, are still in search of the information that exists in our organizations that we can use to predict where safety risk is increasing.

Drew: David, are you saying that the quest for a reliable indicator is our version of mission zero? It's a target we'll never reach, but we're constantly searching out there trying to get there. It's the goal and the vision that matters.

David: Look, for the Safety of Work podcast, Drew, I'd be saying that the safety of work is the ultimate mission. Anything we do has got to be in service of the safety of work. If that means finding the information in an organization, which gives us insight into the safety of work, then that can be a mission. We might have to do a little bit of work on our branding though.

Drew: Okay, David. What would we like our listeners to share their thoughts on from this episode?

David: We've done a couple of episodes which have been quite popular. Our zero harm episode, which you loosely referred to, Drew, was popular. Our behavioral safety one was popular. I suspect that our leading indicator conversation might be popular too. I'd love people to chime in, either on LinkedIn or directly to us, and tell us what leading indicators you're using in your organization. Tell us how you feel about them. Tell us how you use them to talk about safety risks within your organization and what kinds of decisions you make in relation to those indicators. If we can get a good leading indicator conversation going, then that's what I'd like to see.

Drew, today we asked the question, what is the relationship between leading and lagging indicators? What's the answer?

Drew: The relationship between them is that they are both bunk, David. They're complex interrelated bunk. 

David: Perfect. Let me just say the same thing in a way people can take away in non-Drew speak: if you're using leading indicators of safety management activity in your organization, it's very unlikely that they're giving you any predictive capability over where your next incident might occur.

Drew: That's a little bit more diplomatic, but you just said the same thing as me, David. That's it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. You can contact us on LinkedIn or you can send any comments, questions, or ideas for future episodes. Hopefully, you've seen for this episode that we do respond to those. You can send it to