The Safety of Work

Ep.16 What can we learn from the Brady report?

Episode Summary

Welcome back to The Safety of Work podcast. Today, we discuss the Brady Report.

Episode Notes

Tune in to hear us discuss the lessons learned from this important report.

Quotes:

“The report contains, like, a couple of hundred pages of graphs and nowhere is there any sort of test to see what model best fits the graph.”

“It’s not new for big investigation reports...for people to get hold of one particular theory of safety and think that it provides all of the answers.”

“This definitely shows naivety, if you think you can’t hide hospitalizable injuries.”

Resources:

The Brady Report

feedback@safetyofwork.com

Episode Transcription

David: You’re all listening to the Safety of Work podcast episode 16. Today we’re asking the question, what can we learn from the Brady report? Let’s get started.

Hey everybody. My name’s David Provan. I’m here with Drew Rae, and we’re from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work podcast. If this is your first time listening, then thanks for coming. The podcast is produced every week and the show notes can be found at safetyofwork.com. In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it. So Drew, what’s today’s question?

Drew: Dave, the question we’ve got today is what can we learn from the Brady report. Since not all of our listeners are in Australia, and many of our listeners are not in Queensland, I guess we need to start by explaining what the Brady report is, how it’s attracted so much attention, and why we think it’s worth talking about. So David, maybe it’s your turn this time to give a bit of background. Why was this report commissioned?

David: Quick shout out to Mark and Josh, friends of the show and the university, who suggested this topic because they were interested in our views. The mining industry in Queensland has had, I suppose, what could be described as a run of fatalities in the last 12–18 months; a total of eight or maybe nine fatalities.

In June or July 2019, as a result of five fatalities in the previous financial year, the Queensland Minister for Mines commissioned a review into mining fatalities. The scope of that review was to analyze the 47 fatalities that occurred in the last 20 years in the Queensland mining and quarrying industry, to come up with forward-looking industry recommendations, and to report those back to parliament and to the minister. This was to be a health and safety review tabled in Queensland parliament.

The final report is 321 pages, and it was done by Dr. Brady. Dr. Brady’s a forensic structural engineer. I can only assume that this work was won through a Queensland government tender, and he went off and performed this review. So Drew, we’ve both had a read of the review. Do you want to add anything to the background of the report before we get stuck in?

Drew: I can quickly mention that we always make a point of saying who writes the research and try to give you some sort of background. The reason we do that isn’t to make some sort of claim to authority or to poison the well. Good research is good research no matter who does it; bad research is bad research no matter who does it. We think it’s worth knowing who wrote a paper as an early check because it tells you how to read between the gaps, how much to trust that they’re representing the current state of knowledge adequately, and it gives you an indication of what sort of assumptions and worldview they bring to the work.

It’s really important for something like this report, which is really a form of qualitative analysis, because you have to trust the processing; it’s not statistical and can’t be checked. It’s analysis that goes on in people’s minds and behind closed doors, so where the analyst comes from is important. Dave, as you point out, the author of this report is a forensic structural engineer, so his specialty is looking at the physical causes of construction defects, accidents, and things that otherwise make it to court. He doesn’t have a big background in safety, and I think that’s something that’s going to color a lot of the things we look at through the rest of this work.

David: Yeah, Drew, and we’ll just try to lay out what the report found, and our views on it, in a way that’s as neutral as we can be. I may have to hold my tongue a few times during this podcast because I’m not sure how well this report takes us forward. The author was given copies of the 47 regulator reports into the fatal incidents. These are very detailed reports that are prepared by the regulatory investigator following a fatal accident.

First of all, the author assumes that these reports accurately reflect the causes and the circumstances of the incidents. There’s (of course) no checking done on the accuracy or the content of those reports. In addition, the mining regulator in Queensland collects a whole raft of industry information through monthly reports from all of the mine operators, so they have something like 40,000 additional incident reports covering high potential incidents, serious accidents, lost time injuries, and a range of other things. A number of these incident classifications were then normalized against the hours worked.

In those 20 years, there’s been something like 1.6 billion hours of work in the mining industry in Queensland. There are literally hundreds of pages of graphs of incident data and hours-worked data, trying to make sense of all of these numbers.
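To make that normalization concrete, here is a minimal sketch, with illustrative placeholder numbers rather than figures from the report: an incident count is divided by exposure hours and expressed per million hours worked, a common convention for frequency rates.

```python
# A minimal sketch of normalizing incident counts against hours worked.
# The numbers below are illustrative placeholders, not figures from the report.

def frequency_rate(incidents: int, hours_worked: float, per: float = 1_000_000) -> float:
    """Incidents per `per` hours of exposure (here, per million hours worked)."""
    return incidents / hours_worked * per

# Example: 47 fatalities over roughly 1.6 billion hours (the 20-year totals
# mentioned above) works out to about 0.03 fatalities per million hours.
print(frequency_rate(47, 1.6e9))  # ~0.029
```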

Drew, you’ve done a lot of work in the accident reporting space. How do you feel about that as a process: picking up 47 accident reports and then trying to use them to make sense of what an industry should do moving forward?

Drew: I’m speaking here to those of our listeners who are interested in doing research themselves. It’s always really tempting to start off as a safety researcher saying, “I’m going to take accidents that have already happened, and I’m going to study them to find patterns. From those patterns, I’ll build my own new theory of what causes accidents.” This general process we call revisionist analysis.

The term “revisionist” comes from history. As you might imagine, this is something that people who study history have to do a lot. If you’re studying ancient Rome, you’re not going to be able to collect original data. The data has been collected to death. All you can do is pick up all of the stuff that people have previously analyzed and put your own spin on it. Try to create something new out of what other people have intensively studied.

There’ve been some great examples in safety of where people have done this really successfully. A few examples we can call out: Diane Vaughan with her mammoth ethnography about the Challenger Launch decision. What made that one special was she collected a lot of new information and new data that wasn’t considered by the original investigation.

Another favorite of mine is Snook’s theory of practical drift, which comes out of his reinterpretation of the Black Hawk shootdown events.

Even things like the Hillsborough Independent Panel, which looked at the stadium disaster. Again, they had the power to uncover new information and disclose things that hadn’t been revealed in the previous reports. So there are good examples of where people have taken past accident data that’s already been analyzed and created a brand new theory out of it.

David: And I think there are plenty of popular examples of authors doing this. Professor Hopkins would be one of those who tends to replay information from accident reports in a way that presents a whole lot of counterfactuals about what the new author proposes could’ve prevented the incident.

I think this report is a bit more like one of those reinterpretations of the incident investigations, where it doesn’t uncover anything new and it doesn’t overlay any new theory or ideas on the content of the accident reports. It just themes the original report information and tries to say these are the things that are happening more than once in this data set.

Drew: The report actually has, in its reference list, quite a number of books by Hopkins. I’m going to take a stab and guess that Dr. Brady is a fan of Hopkins’ style of doing this sort of work. The limitation is that all you’ve got in front of you is stuff that has already been collected using the biases and mindset of the regulator, and already classified according to the models in the minds of the investigators.

The risk is that you think you’re making new and interesting findings, but all you’re doing is reconstructing the models and the biases that the investigators used. You find that if you collect 10 reports by people who use ICAM and look for what’s similar about them, what’s similar about them is ICAM. You haven’t discovered something new about the accidents, just about the investigators.

David: Yeah, separate to the mining issue, I’ve been [...] involved with the International Oil and Gas Producers Association a number of times. They had a particular classification system for reporting the causes of major incidents, and they would come back every year and say that 27% of incidents were the result of a lack of supervision, 35% were the result of not following procedures, and 22% were human error, because all of the reports have to classify their findings into that classification system. It’s like what you look for is what you find; the stuff that goes in comes back out looking exactly the same.

Drew: My personal test, which gives some indication of how much this is what’s happening, is to look at what recommendations the report makes about better future investigations. There are reports that come back very critical of previous investigations and say people need to investigate substantially differently; that’s a good sign that the revision is in fact looking at something new. A report that simply uncritically summarizes the previous investigations isn’t going to say that those investigations did anything wrong. I think that’s a preview of things to come.

The Brady report takes the investigations mainly at face value. As a result, its only recommendation for investigators is, when you’re investigating an accident, don’t focus on a single cause, which is not actually a particular problem with the investigations that it summarizes anyway. It’s not a telling or scathing analysis of the regulator, or of the way the regulator has previously been [...] accidents.

David: Let’s get started and talk about those recommendations, because our listeners who haven’t read the report are going to be interested in what these recommendations say for the industry moving forward and what we think of them. There are about 12 of them, and we’ve grouped them into chunks for the way we’ll talk about them. Then we’ll make some general comments and provide some practical takeaways.

Recommendation one, the first recommendation that’s put on the table as a result of these 47 fatalities, is that the industry should recognize that it has a fatality cycle. Drew, tell us what this might mean.

Drew: The first thing we need to know about fatality cycles is that they’re not a thing. It’s an idea I’m familiar with; it comes from a paper by Malcolm Jones. Jones actually published several papers, so if you’re looking for it, you’ll find it stored under different dates. The original one was in either 2002 or 2003, in the Journal of System Safety. That’s not a peer-reviewed journal, and you’ll struggle to get hold of a copy because you need to be a member of a particular society. It’s basically the equivalent of a professional society magazine.

He published this idea that industries have almost like a grief cycle, where you have an accident and then you go through various stages. You start off denying and blaming. Then you incriminate. Then you start to put a spotlight on safety. Then you work really hard at safety, you get success, and then you get complacent. You start to accept things going wrong, and then you have another accident.

There’s zero research behind these stages; nothing establishes that any of this is a real thing. It’s just an informal observation that someone made. So talking about the fatality cycle is basically a pseudo-scientific way of saying, “Don’t be complacent if you have a year when you don’t have many accidents.”

David: For those DisasterCast listeners, I think you talked a bit about accident cycles and statistical significance in fatal accidents. I think you did an episode on the Metro-North Railroad early on in DisasterCast [...] people who want to go into the back catalog there and figure it out. There’s a lot of discussion about what you can read into changing accident rates year on year in a particular industry or a particular type of operation.

Drew: I don’t know if we’ve been doing the podcast long enough now to know if we’ve got regular listeners, but regular listeners will know that mathematical illiteracy is something that drives me absolutely crazy. Just think how many people believe they’ve got a system for the stock market because they’ve taken a look at a commodity price graph and said, “Hey, it looks like commodity prices go in cycles,” so all they need to do is buy when it’s low and sell when it’s high, and that’s guaranteed to work. They very quickly find out that a graph that looks vaguely like there’s a cycle doesn’t at all mean that there’s actually a cycle going on.

Statistically, the question is, can you fit a sinusoid into this data better than just assuming that the data is random? And the answer is, no you can’t. Random data does look vaguely cyclic. It doesn’t mean there’s a cycle.

David: And even if that were the case, if the advice is only to know when to jump on and jump off that cycle, then you’ve really got to back yourself to know when to do that, and how to actually break yourself out of that cycle.

It’s a largely unhelpful recommendation in the way it’s framed. There are probably two things that may be at play here, Drew. Maybe if I ask you the questions: (1) is it just absolutely random? Or (2) is there some other cyclical force (not complacency) at play, which might go to commodity prices, the capital investment cycle in projects, or a whole range of things that could contribute to work and risk in the industry? Or is it just random variation in the number of incidents?

Drew: The honest answer is I haven’t done the statistical analysis. It takes a fair bit of work: you need to test various distributions and various models against the data to see which one is the best fit. But I think the important point here is that the report contains a couple of hundred pages of graphs, and nowhere is there any sort of test to see what model best fits the graph. If someone hasn’t done their homework, it’s not up to us to do their homework for them. There’s no particular reason to believe that there’s a cycle.
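As an illustration of the kind of test Drew is describing, here is a minimal sketch that compares a flat, no-cycle model against a sinusoidal cycle model on synthetic yearly counts (the report’s actual counts would go in `y`). It uses least-squares fits and AIC under a Gaussian-error assumption, which is one reasonable choice among several; a Poisson-based comparison would be another.

```python
# A minimal sketch: does a sinusoidal "cycle" model fit yearly fatality
# counts better than a flat (no-cycle) model? Synthetic placeholder data,
# not the report's; model comparison via AIC for least-squares fits.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
years = np.arange(2000, 2020)
y = rng.poisson(2.5, size=years.size)  # placeholder: ~2-3 fatalities/year

def flat(t, c):
    # No-cycle model: a constant rate
    return np.full_like(t, c, dtype=float)

def sinusoid(t, c, a, period, phase):
    # Cycle model: constant rate plus a sinusoid
    return c + a * np.sin(2 * np.pi * (t - t.min()) / period + phase)

def aic(rss, n, k):
    # Akaike information criterion for least-squares fits (Gaussian errors)
    return n * np.log(rss / n) + 2 * k

t = years.astype(float)
p_flat, _ = curve_fit(flat, t, y, p0=[y.mean()])
p_sin, _ = curve_fit(sinusoid, t, y, p0=[y.mean(), 1.0, 7.0, 0.0], maxfev=10000)

rss_flat = np.sum((y - flat(t, *p_flat)) ** 2)
rss_sin = np.sum((y - sinusoid(t, *p_sin)) ** 2)

print("AIC flat:", aic(rss_flat, t.size, 1))
print("AIC sinusoid:", aic(rss_sin, t.size, 4))
# The sinusoid will almost always achieve a lower RSS simply because it has
# more parameters; unless its AIC is clearly lower, the "cycle" is not supported.
```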

David: So recommendation number one, that the industry should recognize it has a fatality cycle and that it becomes complacent: we’d call BS on that, in the sense that there probably isn’t even a cycle. And even if there is, it’s not a really helpful recommendation, because what do you do, and when do you jump on and jump off? There’s nothing really actionable in there.

So, recommendations two, three, and four. They start by saying that the industry should understand that the nature of these fatal incidents was not extraordinary. They were (as Sidney Dekker would say) normal people performing normal work, in the same way that they do it every day, and then one day it goes horribly wrong. Recommendations two, three, and four then talk about things that are quite tactical and transactional in relation to safety, so do you want to give us an overview of these, Drew?

Drew: The first thing to say is that this is (at least) a better finding than trying to say that there is some magical cause. You’re (at least) recognizing that these accidents are [...]. Saying that the factors present in accidents and incidents are just the same factors present in normal work may not be groundbreaking, but there are plenty of people who aren’t capable of recognizing and seeing that. Maybe it is worth saying in a report: don’t look for magical solutions, don’t look for one thing that you can fix.

David: I suppose those recommendations call out the industry’s need to focus on training, and pull out a number of incidents that were the result of the person not being trained in the task. They need to focus on supervision; there was some big number, like 32 out of the 47 incidents, that were the result of some kind of non-routine activity that should have required supervision, where the supervision wasn’t adequate at the time. Then there’s a focus on risk control, in the sense that the organization had controls for this type of work and this type of risk, but they were “absent or failed defenses” (I think that was the actual language used) on the day of the incident. So training, supervision, risk controls. A pretty simple formula that we see a lot in safety.

Drew: So David, let me ask you this. We know that the mining industry skews heavily towards using ICAM as an analysis method. Have you ever seen an accident report produced using ICAM that didn’t mention, as findings, problems with training, problems with supervision, and controls that were either not present or not adequate?

David: That’s a good question, Drew, though I think it’s largely a rhetorical one by the time you get to the end of it. ICAM is well used in the mining industry in Australia, and probably around the world. Obviously, its people factors, organizational factors, equipment factors, and so on tend to go straight to training, supervision, and risk control. Like we said earlier, what you look for is what you find. If your incident investigation causal categorization tool is looking for these types of things, then you’ll find them.

Drew: There is, I think, an underlying message there. When it comes to safety, a lot of the things that we know we need to do are things we could just go ahead and do. We don’t need accident investigations to tell us; we don’t need risk assessments to tell us. We know that spending money and time on building competency in the work being done makes people safer. We know that equipping supervisors with the skills they need, and in particular giving them time to exercise those skills instead of having to focus on safety paperwork, makes people safer.

We know that there’s a lot of tasks where we do know what the appropriate controls are. What we don’t know is what are the organizational forces necessary to keep those controls reliably in place. So, maybe sometimes it is just worth pointing out that these things do matter.

David: I think there are some practical things in there, and they’re good reminders. Interesting, that point on risk controls. The mining industry (through the International Council on Mining and Metals) has, for the last five years or even longer, with Professor Jim Joy, been pushing critical risk controls, and it’s one of the first industries to have a mass take-up of critical risk control programs. There has been a lot of work in there, but this was a 20-year review. I think the industry still recognizes that it’s at the start of trying to get all of those organizational forces in play to keep those controls reliably in place, as you said, Drew.

Drew: I certainly do think it is a very open question whether critical risk control programs are effective at making sure that critical controls are in place. There’s a certain point at which you have to say, “We’ve had 20 years of the same controls not working.” If you want to fix that, you need an explanation for why that’s the case, not just saying, “We need to try harder, we need to focus more, we need to make sure they’re in place.” I think the time has come when we need to move beyond that somehow.

David: Yeah, and like any system of work, it depends on how it’s designed, and we’ve seen that in the work we’ve been involved in with organizations in this space. The list of critical controls may actually make you cringe. It’s like a procedure listed as a critical control, or some of these other types of administrative controls (and we’ll talk quite a bit about administrative controls later). They’re all exercised at the point of risk by the operator, whereas a lot of the systemic controls for risk sit far behind the worker and the operator at the point of risk, and those aren’t always checked in critical control programs.

I see a lot of critical control programs which turn into self-imposed behavioral safety programs on the operator; there’s a [...] in there. Interestingly, this recommendation contains probably the first mention of the idea that these fatalities were not all the result of human error. There’s a section of the report that said, look, 17 of these 47 fatalities had absolutely no human error involved. A further 10 fatalities were the result of known faults and issues that were unaddressed, and another nine had previous incidents, almost identical to the fatal event, that had also gone unmanaged by the mine. The report is calling out the idea that you can’t just say human error for these fatal incidents. Not that surprising, but nice to see it appear in a report.

Drew: Yeah, I have to say that I’m impressed, because if I’d had to guess, I would have thought that the vast majority of the reports would be calling out some sort of human error. I don’t think we can say much about the accidents from that sort of thing. What we can say is, okay, we’ve got a panel of investigators who are looking beyond the simple explanations. That’s a hopeful sign. I didn’t see anywhere in the report that tracked the trend, but it would be interesting to know whether there’s been a decrease in assigning accidents to human error. A trend towards looking at systemic causes would be a very promising sign.

David: Yeah, that would give you some indication of maturity, at least on the side of the regulator who’s performing the investigations: their maturity in understanding incident causation, or their learning around the causation of incidents over 20 years. But I don’t think we got that level of detail.

Drew, we get down to recommendation six, and Dr. Brady says that the industry should adopt the principles of high reliability organizations.

Drew: I was a little bit flabbergasted by that one. It’s not new for big investigation reports (and I guess, even though this is dealing with multiple fatalities, you can think of it as an investigation report) for people to get hold of one particular theory of safety and think that it provides all of the answers.

We’ve previously mentioned the reference list. As well as lots of Hopkins’ work, it has a couple of books by [...], one of the key proponents of HRO. It has a whole heap of Sidney Dekker’s work and Nancy Leveson’s work; they aren’t HRO theorists, but they’re certainly sympathetic to the HRO ideas. I’m really not certain that you can say, “We should adopt a safety theory as an approach to managing safety within an industry.”

If you substitute in equivalent statements: we should adopt the Swiss cheese model across the industry, or we should adopt Safety-II as the solution across the industry, or we should adopt the garbage can model of organizational choice across the industry. Yes, safety practitioners should be familiar with all the big theories in their field, but I’m not sure that a theory or big model like that necessarily gives you clear guidance on what you’re expected to do next.

David: Yeah, I think that’s the point: clear guidance and recommendations that are actionable. I know that (for example) after Columbia, Professor Dave Woods was involved with the independent accident investigation board. There was a lot of resilience engineering language put into that independent report, but it was done in quite a practical, direct way about exactly how the space program should rethink the way it operates and what it does. Whereas here, the way that HRO theory is presented is quite general and really oversimplified. It almost just says that HRO is all about reporting incidents and acting on them before they cause fatalities. That’s pretty much what it boils down to.

There are also some things that made me absolutely cringe. There’s one quote in there that says, “In high reliability organizations, there’s no such thing as a safety culture, there’s a reporting culture.” I don’t know where Dr. Brady got that sentence from, but the explanation of HRO, and of what it means for the mining industry, was (I think) very superficial and, in some places, just wrong.

Drew: Yeah, I certainly would hate for the mining industry to use this report as their guide to what HRO offers in terms of ideas and practicalities. It translates a deep, complex theory, one that actually has lots of different variations, into a single, very dumbed-down version.

David: We’ve seen this before, Drew, particularly when you look at the reference list of this report. In the mid-2000s (it might have been the early 2000s), Hopkins published a book called Learning from High Reliability Organisations. Texas City happened in 2005, Hopkins published Failure to Learn, and people in the oil and gas industry internationally picked up Hopkins’ books and the HRO theory that runs throughout them.

I lost count of the number of international oil and gas organizations that put in place HRO programs. BP had one in between Texas City and Macondo: a global HRO safety management program following Texas City. We’ll talk about this at the end, but I can just see the same thing happening in the Queensland mining industry in the next 2–3 years, with very little safety benefit.

Drew: We’ve seen the same thing happen with the idea of just culture. I’d even say we’ve seen the same thing happen with behavioral safety, where very complex ideas get turned into programs that people think are going to be a quick fix. They grab onto an idea without fully understanding what it is, what its strengths are, and what its limitations are, and ideas applied without being fully understood often miss out on the things that made them good ideas in the first place.

David: Drew, while I’m on a bit of a rant, I might just say a bit more about this, because I was reading through some of the sections of the report we’re talking about. There’s a section that starts to talk about just culture and learning, and phrases such as “Newtonian linear thinking,” and I thought, “Gee.” I then had a look at the reference list and could see which books all of that had come out of.

What was curious was that Dr. Brady had said that the industry really needs to understand that mining is a complex system, it's tightly coupled, it's an open system. It needs to understand emergence, non-linearity, adaptation, drift, and all of these things that come out of complexity science and modern socio-technical systems theory.

Yet the same report draws its conclusions from the production of 47 linear causal diagrams of historical incident investigation reports. It was fascinating that there was almost no basis for the recommendation in the way the report was actually done. I find that quite ironic, but also a little concerning that the author’s understanding of some of the safety theory was quite superficial.

Drew: David, you know that I love Sidney as a person, and I have great respect for his academic work, but whenever I have students directly putting Sid Dekker quotes into their assignments, I always scribble in the margins asking them to explain what they think Cartesian-Newtonian thinking is. Some of these phrases roll off the tongue and sound great and impressive, like “nonlinear approaches to accidents.” If you actually stop and think about it, Sidney meant something when he wrote it, but I’m pretty sure a lot of people who copy it just have a weird idea of what it means.

David: Yeah, deterministic world views and a whole raft of phraseology that Sidney uses so well in his texts, but there’s a whole lot of understanding needed underneath it to use it well. On one hand, I think the industry will pick up this idea of high reliability organizations and go, “Oh wow, this is interesting, I want to learn more,” and I think that’s a good thing. Industry getting interested in safety theory, and in different types of safety theory, is a good thing, particularly for the mining industry to think about these different approaches to safety management. I just hope that there’s some careful thought given to how it gets translated into action.

Drew: Let’s move on to recommendations seven and eight. David, I’m interested in your thoughts here. By this point, I was thinking we’d got to a bit of the report where I could probably agree with a lot of what it’s saying, but you seem to have some practical difficulties; you don’t think regulators can actually do what’s being asked of them?

David: Recommendations seven and eight are about the regulator. I kind of felt that maybe the author thought the regulator was the customer, because this report doesn’t say anything negative at all in relation to the role of the regulator in regulating the industry. It’s kind of interesting: the government is the customer, the regulator’s part of the government, and we tread softly when it comes to what the regulator has or hasn’t done.

But it said that the regulator should (to support the industry) operate like a high reliability organization; the regulator should play a role of collating and sharing lessons across the industry. The second recommendation was that the regulator should have incident reporting processes that allow people in the field to report directly to the regulator. Say I’m driving a truck at a mine somewhere in Queensland and I have an incident: I open my regulator app, enter the information straight in, and it goes immediately to the regulator.

On one hand, I thought Dr. Brady really doesn't understand how safety regulation works and how organizations interface with regulators. On the other hand, I just don’t think there's any way that the regulator can play its regulation role and then try to be a partner with the industry in learning. I think something serious would have to change in governmental and societal expectations of what regulation looks like.

Drew: This is probably something that we can reopen on another podcast episode. I think this is an important constraint around a lot of safety practices. The regulator, both as an organization and as individuals, is caught in this bind where they're supposed to be helping and supporting industry right up until the moment when industry crosses an imaginary line. Then across that line, suddenly the regulator has to be collecting evidence, prosecuting, and determining what's been done wrong.

I know that this is something that individuals within regulators struggle with constantly. When I say the regulator, I mean any regulator; what they struggle with constantly is the conflicting expectations of these roles. You can’t be someone who impartially collects information, analyzes that information, gives advice, and whom people talk to freely, when you know that next week you might be the person putting together the case to send off to the prosecutor.

David: Yeah, exactly right. The regulator is there, and is measured, and sees its value, through enforcement action. It’s interesting. I wish Dr. Brady had picked up one of Todd [...] books and read the quote that you can punish or you can learn, and you can’t do both. To think that a regulator can play the role of both a learning facilitator and an executioner, I just don’t see how that’s practical in our modern business construct.

It’s fine to say that the regulator should be involved in collating and sharing lessons, and yeah, the regulator should have an app so that people in the field can report directly, but frankly that’s not the way that regulators and industry interface.

Drew: At a time when we appear to have a bit of political will for talking about these fatalities and responding to them, I’d say this one is a missed opportunity. When people make these reports, I would really like them to look for the big changes and give regulators genuine options; for example, restructuring them into multiple organizations, with one providing industry support independent of the enforcement arm, and things like that. It may not be achievable, it may not be politically feasible, but this report had the opportunity to float those ideas.

David: Yeah, absolutely. I agree that it’s a bit of a missed opportunity and a bit naive in its recommendations.

Recommendations 9 and 10 basically go to replacing the lost time injury frequency rate with a serious accident frequency rate. What we know from the last 20 years is that LTIs aren’t that useful for safety and are subject to corruption, so the report says we should replace lost time injury rates with serious accident rates, because you can’t hide it when people go to hospital. Replacing one injury count with another: does that fix the problem?

Drew: Two points here. The first is that it definitely shows naivety if you think you can’t hide hospitalizable injuries. I have tried saying, behind closed doors with safety professionals, that at least fatality counts are reliable because you can’t hide fatalities, only to have people carefully explain to me the many ways to hide a fatality. Hiding hospitalizable injuries is certainly something that can be done. All you need is to classify it as something that didn’t happen at work, or ensure the treatment happens somewhere that is not a hospital.

My favorite is to deliver all of the treatment outside of hospitals, so that no one ever crosses the boundary of being admitted. These are all real things done in real organizations to avoid reporting. But the fundamental thing here, firstly, is kudos to the report for pointing out yet again the problem with the lost time injury frequency rate. It needs to keep being said, because people need to keep hearing it as a consistent message whenever this sort of report happens. We just can’t let up on constantly reminding people that lost time injuries are not a good measure of anything.

David: Yeah. I couldn’t help but read this report with an industry lens and a practical lens from my time in organizations. I think one of the ways Dr. Brady suggested we get around that was to actually put the same reporting app in the hands of the doctors in the hospitals, so that when anyone came from a mine to be hospitalized, the doctors would have to report it, and then the regulator could crosscheck that information with what the mine reported to make sure both had reported the incident.

I was just thinking practically: how do you get thousands of doctors across the state to report to a mining regulator, and to recognize, as part of their normal routine, that someone has come from a mine? The detail of some of the recommendations just didn’t make practical, actionable sense to me. I agree with you, I think it’s great that we continue to point this out, but it would have been nice to have a different kind of solution.

I’d rather see the mines report their current profitability to the regulator every month, to know where goal conflict might be increasing in particular mines around the state, with the potential for cheaper contractors or less labor being employed to perform normal work tasks and things like that. There was a chance for this report to move right beyond safety indicators entirely, which it kind of missed as well.

Drew: I feel obliged to put my statistician’s hat back on and point out that if this recommendation is true, and serious accident frequency rate is in fact a good measure of fatality risk, then how come there’s no cycle in the serious accident frequency rate? How come it has been trending steadily upwards? Either the recommendation about cycles is untrue or this recommendation is untrue. You can’t have both.

David: Yeah. For those who haven’t read the report, I think the serious accident frequency rate has only been measured for about the last 8 or 10 years, not quite half the period, and it trends up every year. In the years that fatalities went from three or four down to zero, serious accidents went up over the same period. To suggest that the serious accident frequency rate is a predictor of fatalities is not supported by the statistics.

To be fair, I think the report doesn’t quite make that claim. It just says that serious accidents are a better overall indicator of the safety of the industry than lost time injuries. You could probably say that’s an okay claim, but looking at that rate still isn’t going to help you prevent fatalities.

Drew: Should we also mention recommendation 11 which talks about reporting of high potential incidents? This is another thing that is a little bit obvious, but I think does need to be said in every report, that we shouldn't be using reported incidents as a negative indicator. We shouldn’t be punishing people and thinking badly of them because they report a lot.

David: Yeah, I agree. I think anything that’s reported is useful; it’s better to know about something than not know about it. You have to be careful with how you treat anything that anyone tells you in a safety context in the organization. So I agree: report it. You should never be trying to push down on anything that you want your people to actually report to you.

Drew: But the flip side is that it doesn’t follow that deliberately pushing up reporting improves the quality of your reporting and the quality of your information. What’s suggested in this report is that we should use high potential incidents as a measure of having a good reporting culture, and that we should be celebrating success every time the number of high potential incidents goes up. That seems to me a recipe for organizations to put in place policies saying people have to report so many incidents every month, and we’d have people deliberately stubbing their toes in order to claim that they almost killed themselves.

David: Yeah. There was no relationship between the high potential incidents and the fatalities either. There were (I think) about the same number of high potential incidents reported in one of the years where there were zero fatalities in the industry as in the year before this report, when there were five fatalities.

I was also thinking about the [...] saying, kind of, “What could this even look like? It’s 40, it’s 50, it’s 60.” When you say we’re going to use that as a barometer for our reporting culture, then you have to start putting goal posts in place, and where do you put them? If reporting goes down, what do you do? If reporting goes up, what do you do? These really aren’t indicators that can lead to any decision or action. They’re just counting things that you can count.

Drew: David, I would like to move on beyond the recommendations, because I think buried within the depths of the report are some insights and useful discussions that were had with the industry in preparing the report, and some real problems that deserve to be uncovered, talked about, and investigated further if we are really serious about improving mining safety in Queensland and more broadly.

These are things that didn’t get in-depth treatment, because they didn’t feature much in the accident reports that were the main focus of the report. Maybe you can point out a few of those things?

David: There’s a section there about industry consultation, or discussion within the industry. Disappointingly, we don’t actually know who or how many people Dr. Brady spoke to, which organizations were involved, or how these sessions were carried out. But it did report on some things that came back from discussions with people in the industry.

We don’t know if these are the result of a conversation with one individual or a much bigger sample, but the first thing that was highlighted to him during this review was that the industry feels like it has too much paperwork and too many procedures, and that these procedures are the default approach to managing risk.

Drew, we’ve written a bit about safety work and safety clutter. It’s interesting that this wasn’t something the report went and looked for in the incident reports, and it isn’t reflected in the recommendations. At least to me, that was a little bit interesting.

Drew: It would certainly be something worth checking back in the recommendations from the original incidents, because it’s certainly true within organizations. I’m less certain whether it’s true of the mining regulator in Queensland. But there is this default where we respond to a recommendation by creating some sort of new paperwork, a new procedure, or a modification to an existing procedure. The hazard wasn’t identified, so we added that hazard to our checklist. There was inadequate supervision, so we created a supervision form.

David: Yeah, it’s a bias and an assumption on my part, but we’ve seen in other research papers and reports that the lack of supervision in those 32 of 47 fatalities could have been the result of the supervisor having to sit in the office and do paperwork, or the failed and absent defenses could have been the result of administrative controls that weren’t complied with or followed. Maybe these things weren’t discussed as underlying contributors to the causes in the original incident reports, which is a bit of a shame.

Drew: The second one that gets talked about is something I know you’ve talked about before, and it’s a real cycle that is very interesting in mining: the influx of new people to the industry. The boom-bust cycle of mining creates high manpower demands that require bringing in people who are unfamiliar with mining, to the point where we lack people to do the safety roles, training roles, and supervision roles, so those people end up being new and inexperienced as well.

David: The report talks about, or at least some people made comments to Dr. Brady about, the influx of new, inexperienced people into the industry, and those people being trained and supervised by people who’d only been in the industry for a few years or less. That’s definitely something that could be really useful for the industry to understand, but again, it’s not in the recommendations.

The third one, after paperwork and industry experience, was this idea of contractors being less safe than employees. This showed up a little in some parts of the data, not compellingly, but among the people who were engaged to talk there was definitely this feeling that contractors work less safely than employees.

Drew: This is probably one that we can discuss another time because there are papers published or being published out of it. One of my own PhD students, [...] has done similar work looking at fatality reports, but he focused specifically on what are the factors in these reports that are related to the employment status of the people involved? This is something that the original investigators didn't focus on, so it’s a genuine revisionist-type approach to look and say, “If we look at this purely from the lens of contracting, what can we see about the way contracting relationships contributed to the accident?”

In an industry like mining that has so many different types of contracting, with subcontracting, mining services firms, and labor hire, those are definitely safety issues worth exploring.

David: Absolutely. As much as I went into this committed to not being too critical (because these reports are hard to do, given the data that you’ve got and the politics around it), I’ve probably been a bit more critical than I intended to be through our review of the recommendations. Now it’s our chance to talk about some missed opportunities and make some overall comments. What would you have liked to see in this report, based on the recommendations, the understanding of the industry, and the other comments we’ve made?

Drew: Maybe we can take it in turns for these. The first big one that I would like to see serious discussion of, because it requires industry, government, and the regulator to talk together, is workforce mobility, the problems it creates for safety, and how we are going to tackle them. There’s nothing in the statistics that says mining is getting more dangerous over time, but it is certainly facing new issues as a result of increased fly-in fly-out work and increased movement of people into and out of the industry, and back again.

David: The report had a huge opportunity to talk about engineering solutions for hazard control. There are lots and lots of graphs showing that the recommended actions, for high potential incidents and for fatalities, were always more than 70% administrative. Always admin and PPE, or even no action specified at all for a number of these high potential incidents.

There was a real opportunity to talk about engineering solutions, and what I would call proper investment in safety: protecting people from physical hazards in the workplace through engineering solutions. That also speaks to the growth in paperwork, and to the administrative controls that are just so easy to put in after an incident.

Drew: I’d like to know how we deal with the boom-bust cycle that I’ve already referred to a little, particularly when it comes to how we build and maintain a cadre of industry-experienced safety professionals. Not just people with safety experience, but with experience in the mining industry. And not just safety; training and supervision are maybe the big ones. We risk losing talent every time there’s a bust, and having people come in quickly with inadequate training and experience when there’s a boom.

David: One of the things that we talk about often (again, the Safety of Work is the title of this podcast) is that there’s no real serious discussion about the work itself: the work of mining, the business models in place, the supply chains, the changing goals, pressures, and objectives of the industry, how work is managed, the different technologies emerging in the industry, autonomous mining vehicles, different underground mining technologies and processes, and how all of that changes the nature of work itself and the risks that people face. There was no prompt for the industry to think about the changing nature of the work and the technology.

Drew: I think we’re going to have to declare this an hour-long special episode of the podcast. Let’s move on (if that’s okay) to practical takeaways.

David: Let’s do that. I suppose you’ve been holding on to this idea all episode, so how about you start?

Drew: The first thing I said to David when he suggested that we do an episode on this report is that I wanted to talk about the idea of a deepity. That’s my takeaway: for people to learn the concept of a deepity. A deepity is a statement that can be read at two levels. It always has a surface level where it’s true but trivial, but it also sounds profound, and at that deeper level it is profoundly wrong.

A classic example is the idea of a fatality cycle in this report. I’m sure some of our listeners will be saying, “Stop nitpicking with this idea of a fatality cycle. The main point is just don’t be complacent,” and that’s fine. Don’t be complacent is good but trivial advice. If someone is paid tens of thousands of dollars to create a report, and their number one recommendation is don’t be complacent, they’re going to look like they did not do a good job.

So they try to make it sound more sophisticated. Instead of saying, “Don’t be complacent,” they say, “The industry needs to recognize that there’s a fatality cycle.” That either just means the same thing as don’t be complacent, or it’s total bunk. Look out for things in safety that seem trivially true and can be defended as trivially true, but that have no deeper meaning, or whose deeper meaning, to the extent they have one, is wrong.

High reliability organizations can be a similar thing. In fact, any safety theory can fall into this trap. In this report, they’re using high reliability organizations at the surface level as pretty much the same thing: a form of don’t be complacent and collect data on incidents. But if you’re going to bring in a theory, then go deep into the theory. Look at all the things it suggests. Look at the new ways it offers to see the world. Maybe you’ll agree with it, maybe you won’t, but don’t read it at the surface level.

David: Similarly, this idea that high potential incidents are a good measure of a reporting or safety culture. That sounds profound, and at the surface level it’s trivially true: people are reporting, it’s open. But at a deeper level, as soon as we start setting targets around HPIs, they’ll succumb to the exact same institutional forces that other measures have faced.

At a deeper level: what’s the denominator? How will the industry react? What’s good? What’s bad? What are the tolerances? If there were more HPIs last year than this year, what does that mean? Is the reporting culture going down, or has the work changed? At a surface level, HPIs are a measure of safety culture. At a deeper level, they’re absolutely not.

Drew: The final one for me is a bit more sympathetic to the report. I think this type of exercise is an illustration of just how hard it is to get useful learning out of accidents. We can learn from accidents, but it's rare and it requires a lot of self-awareness.

That's one of the reasons I actually stopped doing DisasterCast. There's this constant risk that you are just using the accident to tell people about your own pet ideas and theories. You are not uncovering or discovering. You are just using it to present what you think is already true, and it's almost an impossible challenge to go back to these accidents that have already been investigated and tell the industry something it doesn't already know. 

David: Given the prominence of this report in the media, Drew, my final practical takeaway, which is entirely tongue-in-cheek, would be for any safety consultants out there in the mining industry: make sure you add HRO program implementation to your list of services in the next year or two.

Drew: Is that tongue-in-cheek or is that just good advice?

We end each episode with invitations to our listeners, things we’d like to know. In this particular case, the report is publicly available. Even though we’ve been a bit critical here, I still think that people who are interested in safety in Australia and in the mining industry should read the report. It’s got problems, but read it, think about it, think about what we’ve said, and form your own opinions about what the problems are and what the solutions are. We’d be interested in what you think.

We’re also interested more broadly in what you think of this episode. We’ve strayed outside of our normal format; we’re not talking about a paper, and we’ve probably verged more on being critical than on trying to find learnings. Tell us what you think: whether you want more episodes like this, or whether you hate it and just want us to stick to the normal format, let us know.

David: Thanks, Drew. That was fun. Apologies if it's a bit longer than our regular listeners’ commutes or exercise sessions, but that's it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Join in our discussion on our LinkedIn group, or send any comments, questions, or ideas for future episodes to us at feedback@safetyofwork.com.