The Safety of Work

Ep.90 Does formal safety management displace operational knowledge?

Episode Summary

Our discussion today centers around a 2014 paper by a group of Norwegian academics (Almklov, Rosness, & Størkersen) entitled “When safety science meets the practitioners: Does safety science contribute to marginalization of practical knowledge?”, published in Safety Science, 67, 25-36.

Episode Notes

An excerpt from the paper’s abstract reads as follows: The proposition is based on theory about relationships between knowledge and power, complemented by organizational theory on standardization and accountability. We suggest that the increased reliance on self-regulation and international standards in safety management may be drivers for a shift in the distribution of power regarding safety, changing the conception of what is valid and useful knowledge. Case studies from two Norwegian transport sectors, the railway and the maritime sectors, are used to illustrate the proposition. In both sectors, we observe discourses based on generic approaches to safety management and an accompanying disempowerment of the practitioners and their perspectives.

 

Join us as we delve into the paper and endeavor to answer the question it poses. We will discuss these highlights: 

  1. Safety science may contribute to the marginalization of practical knowledge
  2. How “paper trails” and specialists marginalize and devalue experience-based knowledge
  3. An applied science needs to understand the effects it causes, also from a power perspective
  4. Safety Science should reflect on how our results interact with existing system-specific knowledge
  5. Examples from their case studies in maritime transport and railways

 


Quotes:

“If you understand safety, then it really shouldn’t matter which industry you’re applying it on.” - Dr. Drew Rae

“I can’t imagine, as a safety professional, how you’re impactful in the first 12 months [on a new job] until you actually understand what it is you’re trying to influence.” - Dr. David Provan

“It feels to me this is what happened here, that they formed this view of what was going on and then actually traced back through their data to try to make sense of it.” - Dr. David Provan

“I have to say I think they genuinely use these case studies to really effectively illustrate and support the argument that they’re making.” - Dr. Drew Rae

“Once we start thinking too hard about a function, we start formalizing it and once we start formalizing it, it starts to become detached from operations and sort of flows from that operational side into the management side.” - Dr. Drew Rae

“I don’t think it's being driven by the academics at all and clearly it’s in the sociology of the professions literature all the way back to the 1950s and 60s.” - Dr. David Provan

“We’re fighting amongst ourselves as a non-working community about whose [safety] model should be the one to then impose on the genuine front line practitioners.” - Dr. Drew Rae

 

Resources:

Link to Paper in JSS

The Safety of Work Podcast

The Safety of Work on LinkedIn

Feedback@safetyofwork.com

Episode Transcription

Drew: You’re listening to the Safety of Work podcast, episode 90. Does formal safety management displace operational knowledge? Let's get started. 

Hey everybody. My name is Drew Rae. I'm here with David Provan, and we're from the Safety Science Innovation Lab at Griffith University in Australia. Welcome to the Safety of Work podcast and welcome back to Australia, David. As we can see from LinkedIn, you've been on a bit of a tour. 

David: I have, Drew. I have been out of Fortress Australia for nearly a month in January in the States, which was really, really cool. Really cool to be out and about and still COVID-free, fortunately. 

Drew: You caught up with a whole number of people, starting rival social media empires.

David: Yeah, some colleagues that many of our listeners would know—Todd Conklin, Ron Gantt, Beth Lay, Tom McDaniel—lots of people in the US that are really interested in contemporary safety science. It was a lot of fun. Drew, how was your January?

Drew: While you were gone, we finally finished the Take 5 paper; it has been dusted off and submitted for peer review.

David: Okay, in a coming episode, look out for that. Many listeners will be very grateful that we will record that one. 

Drew: Yes, until the peer reviewers see the amount of snark that I've snuck in. I had more junior co-authors this time. Normally, I have someone who tells me to pull my head in, but I was co-writing with [...] who basically just told me to leave it all in. That's how it goes. 

David: The tip for our listeners will be to read the preprint. Don't read the final journal version.

Drew: If you want to hear what I really think about Take 5s, yes absolutely. 

David: Very good. 

Drew: In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it. David, you've actually read today's paper. What's it all about?

David: This is one of my favorite papers. It was published in 2014; we'll talk about it in a moment. I started my PhD in 2015, and it really made me think at the start of my PhD about the safety profession and its interaction within organizations, and really how, as the influence of a safety manager or a safety official increases, less attention gets paid to other actors and practitioners in the organization in relation to safety. The more emphasis on the safety work in a Safety Management System, the less emphasis on local safety action in frontline teams. 

That's what we're going to talk about today in this episode: just how careful we need to be—and I think this comes through at the very end—in balancing system-level knowledge, standardization, and formalization of safety management with the local adaptive practices of frontline teams and what it means to be safe to the people who are exposed to the risk.

Drew: I'm really interested to hear what you've got to say about this paper, David. Exchanging our own feelings about this was something that fed into when you were doing your PhD and when I was agreeing to help supervise. I had all my own frustrations with the way so much safety—particularly at the engineering end of safety, the sort of system safety type practices—seems to flow very much from the top down, where the safety work basically comes from this institutionalized knowledge of how safety work is meant to be done instead of bottom-up from how the industries we were going into actually worked and achieved safety. 

David: Like most things that we talk about, it's not necessarily one or the other. It's just how balanced the organization is, how conscious it is of where that emphasis lies, and what the unintended consequences are of different approaches.

Drew: The authors of this paper include, I think, at least two that we've encountered before. The lead author is Petter Almklov and the third author is Kristine Størkersen. We encountered Kristine's work when we were talking about the way in which deregulation of government increased regulation within organizations. Professor Almklov is not actually a safety scientist. He's more of a social and political scientist, who has published a few works stepping into the grounds of safety, applying social and political theory to look at the way in which safety works within organizations. The middle author, whom I don't think we have encountered before, is Professor Ragnar Rosness. They are all highly reputable Norwegian scholars.

David: Yeah, and out of NTNU and SINTEF. NTNU feels a little bit like the Griffith Safety Science Innovation Lab, doing lots of industry-based research—being in Norway—lots of oil and gas, lots of maritime, lots of industry-based papers. I had a bit of a look back through Petter's work and there is a lot of applied research that goes on out of these institutions, which is really great to see.

Drew: The paper is called When safety science meets the practitioners: Does safety science contribute to marginalization of practical knowledge? It was published by three academics rather than three practitioners, in the journal Safety Science in 2014.

David: That's an early introduction to the paper, Drew, so let's dive straight in. They say there's a common-sense notion in applied science that it's the job of the scientist to produce information that can then be disseminated down to the practitioners, which enables practitioners to increase their knowledge base and, as a consequence, increase their capacity, power, or impact to handle safety challenges. Knowledge is seen as additive and empowering. The more we know, the more we are able to do, and therefore the more confident and impactful we are. 

This paper then goes on to say: we're going to explore an alternative view in relation to safety, knowledge, and power, and propose that the introduction of safety management regimes that are based on generic safety principles and international standards and delivered by safety professionals actually displaces or marginalizes existing local, system-specific safety knowledge. Rather than adding capacity, being empowering, and expanding the knowledge base, it actually substitutes out the local safety knowledge base for this top-down, generic safety management knowledge base.

Drew: David, before we dive too far into the paper, I think it's probably worth characterizing the different types of knowledge that they're talking about, because it's very common for people to complain about new ideas in safety like Safety II and resilience and say, these are just academics coming from the outside telling us how to think. 

There's possibly some of that in this, but a lot of what's happening with the introduction of things like Safety II is competing against other models of safety, which have also been introduced through this same process. 

This paper from 2014 is talking mainly about things like the idea that we should have safety management systems, the introduction of formalized risk assessments, and auditable, paper-based, documented safety management. That's really what they're talking about when they talk about management regimes replacing local safety knowledge. They're not really talking about these new ideas in safety, but it is an interesting question to ask: how much does this apply when ideas such as Safety II start coming out of academia? Are they doing exactly the same thing, trying to colonize that same space?

David: Absolutely. As we get into the case study examples that get drawn on, it is very much about the widespread industry adoption of safety management systems in the late 90s and early 2000s; that's the case study basis this paper draws on for its empirical research component. 

Drew, the premise for this paper specifically says, look, there's been a growth in centralized safety management systems based on international standards in a number of different industries. There's a growth in safety professionals, there's a growth in external safety consultants, and each of these—the systems, the professionals, and the consultants—has credibility and organizational power, very much more so than frontline staff and frontline work knowledge. When the safety management system or the safety profession says something about safety, it is taken more credibly and carries more influence than when practitioners say something about safety in relation to their own work.

I guess what we're really drawing on at the start of this is how knowledge and power interrelate inside an organization to (I suppose) create a dominant organizational narrative around safety (the paper uses the term discourse). What is the organization's safety discourse, and which discourses become the most dominant inside the organization?

Drew: When you hear the word discourse, David, in fact, this paper directly cites who I think is probably the scariest author in social science—Foucault. The moment you start hearing about Foucault and you talk about discourse and power, you begin to worry that this is going to descend very quickly into very convoluted and revolutionary-style social science challenges towards the neoliberal state. 

They've got a really quite simple explanation for how this plays out in practice that I think is probably accessible. Well, I certainly found it very accessible. They talk about it as a competition about who gets to create and own the models for how safety works. 

What happens is that if the academics create models that then come into the organization through consultants, regulators, and then safety professionals within the organization, it creates almost a monopoly for the picture that we draw of safety. Everyone else is then forced into the position of learning about how that model works and following along with that model. They never get to argue about what the model looks like. Because the experts are the ones who own the model, they're not listening either. They're not trying to update their model or change their model based on what frontline practitioners are telling them.

David: I think that's a good overview. Just to be more specific for our listeners about what we mean when we talk about models: in the two regimes they talk about, the narrative and the discourse has gone from a place where safety is based on the deep technical knowledge of specific frontline operational situations and how to manage those very specific issues, through to a model where safety is no longer about that. 

It's about high-level accountabilities, general management processes, compliance, auditing, general training activities, and external certification. This model is the organization's model of how you create safety inside an organization. That's what we're talking about when we talk about models.

Drew: How do you create safety and what does good safety look like? Good safety means that our system has been accredited to ISO 9001-3. Good safety means that all of our safety actions are documented, transparent, auditable, repeatable, and happening consistently across the organization. That's what good safety looks like. That's the new model that we're putting in place and then dragging other people in to fit into that way of thinking.

David: There's a section of the paper early on around theoretical perspectives, but I think we're going to cover most of these when we talk about the case studies. I might just highlight the general areas of interest before we get into the case studies in detail, if that's okay. 

Just contextually, the paper talks about the rise of international standards. This is the idea of making safety look homogeneous across all of the organizations within an industry, with talk of interoperability, transparency, ease of regulation, and compliance. We start to see these international standards, and we know them all in safety across different industries, whether it's ISO standards or something else, as a means of actually being able to see safety beyond an individual organization. The paper talks about a big rise of that as a perspective or model about safety. 

There's also this discussion about the safety profession itself and the power balance created by the rise in the professionalization of safety management. [...] also talks about social theory, like you mentioned before, and it's really just saying that organizations do have a discourse and a narrative about how to create safety. This is what we're talking about here: an existing approach, call it a discourse or a model, being displaced by a new model, how that happens, and what the consequences are of that happening.

Drew: It just occurred to me that there's one thing they don't specifically call out in the paper, but I think it's possibly a red flag for what this type of thinking looks like. That's the question of whether a safety manager can go from one industry to another. I think it started to be asserted in maybe the 1980s and became literally true in the 1990s because of the way safety had been captured by this model. 

The idea is that good safety is good safety. If you understand safety, it really shouldn't matter which industry you're applying it on, how the systems are supposed to work, how the methods are supposed to work. It's not up to you to do the work safely; that's for the local practitioners to understand. But a good safety manager should be able to go from aviation to railways, to hospitals, so long as they understand how to do safety well. 

Do you sort of remember in your career the point at which that became an idea and then became almost true through the fact that everyone seemed to believe it?

David: It's a really good point, Drew, and it's in the paper; I think it's somewhere in the notes we'll talk about. For a long time I held the belief, and I know a lot of professionals do, that if you know how to lead safety, then you can go from construction to oil and gas to aviation, because safety processes are safety processes. That's kind of what this paper is saying: these standardized, generic models of safety displace local operational context and knowledge.

The more I have thought about it, the more I think that is a scary or wild thing to do. I don't think I should have been given a head of safety job in an oil and gas company a month after the Macondo incident, when I'd never set foot on a drilling rig and was then responsible for the safety of offshore drilling operations all over the world. I just think that with the language, the operational context, the people, the players, the technology, I can't imagine, as a safety professional, how you're impactful in the first 12 months until you actually understand what it is you're trying to influence.

Drew: That's a pretty brave thing to say and I want to push back a little bit, because I think, based on the way we were managing safety at the time, we'd almost made it true that you could do that. We got to the point where senior leaders were responsible for managing these systems, and the systems were so standardized between organizations that it was in fact true that you could move from one industry to another. Maybe that's not how we should have been doing safety, but that's how the organizations were doing safety. The way we led safety was standardized to the point where you could lead the system, so long as you understood how the system worked.

David: Yeah, sure. My comment was not necessarily could you but should you. I did advocate for a while for industry-based certifications (a Certified Construction Safety Professional, a Certified Aviation Safety Professional, a Certified Oil and Gas or Mining Safety Professional) that actually include the operational context of those industries, the languages, the technologies, and the design-of-work issues, but that didn't seem to take hold with safety professionals.

Drew: Maybe this is a point we can come back to, because I don't think it's a no-brainer in either direction. If you take the opposite belief, it basically says we can't bring in ideas from outside our industry because only someone from our industry knows how to do safety in our industry. That's how you become insular, entrenched, and don't learn from advances in safety that happen outside where you're working.

David: It's a good point. Not black and white but something to debate and discuss. We might get some discussion going on LinkedIn maybe and see what people have to say. Drew, do you want to go into the method now?

Drew: Sure, but I'm going to let you explain it so go ahead. 

David: They call this case study research, and some of it is sort of historical case study research. It looks like they started forming up these ideas and then drew on data that they had. They did a historical review of the rail industry in Norway, going all the way back to a rail accident that occurred on the fourth of January in 2000, and worked forward through how the safety management regime in the industry changed and what it did for the management of safety in the organization. 

They also had a whole lot of data from the maritime sector, like 80 interviews, 300 hours of field observations, and a whole lot of material that coincided with the publication of the IMO ISM Code (the International Maritime Organization's International Safety Management Code) and the rise of external certification to that standard.

It's a little bit like when we wrote the Safety Clutter paper. We sort of just saw these patterns in things that we were doing, and then we went back and used the data to actually try to figure things out. It feels to me like this is what happened here: they formed this view of what was going on and then traced back through that data to try to make sense of it.

Drew: Yeah, at the risk of ticking off the authors, and with the confession that I think you're right that we did something a little bit similar, I'd argue for some differences in how we did the Safety Clutter paper. I think what they've done is form a theory of how this happens, not based on formal research but based on their own deep knowledge as academics who have been working industry-adjacent for quite sizable careers. They've seen it happening. They've tried to make sense of what they're seeing. A lot of the data isn't formally collected; it's just in their heads as people who've been around paying close attention. 

They've then formulated a theory which they have illustrated using two case studies. Having formed this theory, they use the two case studies as ways of illustrating and explaining how this thing that they've observed works. I don't think that makes it any less true. I think it's just the messiness of good qualitative research, which sometimes doesn't fit into neat models; sometimes we collect our data and form our theories before we've even started following a neatly defined method.

David: I think you're right, Drew. The case studies are more of a narrative account of two industries, how they operate, and what's happened over the last decade or two up until when the paper was published. It illustrates the points quite well but doesn't actually call on any data in the paper that you can see or refer to. 

Drew: I think they genuinely use these case studies to really effectively illustrate and support the argument that they're making.

David: Let's talk about these two case studies just to make it clear. We talked about the Norwegian railway and maritime sectors. If we start with the railway sector, they went back to a train collision on January 4th, 2000 that caused a fire, killed 19 people, and injured 67 others. This was at a time when the Norwegian rail sector was still a government monopoly. The management of the infrastructure and the rail operations was done by one entity. 

This led to a new safety management system, a new safety regulatory regime, increased investment in and recognition of the safety professional discipline, increased dedicated safety resources, and a whole new wave of training for all line management and safety professionals in formal safety management systems and risk analysis. 

Then the case study goes on to talk about this clash of regimes. We've got this new management-led risk-based approach led by the safety professional in the organization colliding with the existing language and discourse of local operations with deep technical knowledge managing the safety and integrity of their local areas of operation.

Drew: David, I don't know about you, but I think this is the power of a good case study, to generalize and transfer across situations. I was reading all the details they were giving about Norwegian Railways and in my head, I was just thinking Piper Alpha, Piper Alpha. You've got this pattern that plays out that you can recognize: you have an accident that is so big that it's not allowed to be investigated by the people who would normally investigate it. 

We call in outside experts to challenge that point of view. Those outside experts then draw on theory and knowledge from outside the industry, bring that in to explain the accident, and talk about what we are supposed to do instead. You see that happening with mines in Queensland, I think, at the moment: that exact same pattern. We saw the same pattern after the SeaWorld incident, also in Queensland. Same with Piper Alpha, where they pushed the safety case regime as a consequence. 

David: Pike River in New Zealand?

Drew: Yup. In each case, it’s that same thing of: we no longer trust the people within that industry to tell us how to achieve safety; we're going to look to the outside. The moment we look to the outside, we look to more formalized knowledge. Knowledge that's more neatly packaged, that has key definitions and terms, key systems, key practices, clear models. We use those to explain the accident and then we use those to explain how to do safety “better.”

David: One of the core changes that gets talked about a lot in this case study is that there was a former senior role in the organization called the Head of Traffic Safety Operations or something, which was a safe working role (safe working in the context of the rail industry). They created a new role, a general Head of Health, Safety and Environment reporting to the CEO, held by a generalist practitioner from outside the industry, and pushed that Head of Traffic Safety down a level in the organization. 

There's a lot of discussion in the case study about how that then went on to marginalize this domain-specific technical safety knowledge, keeping it from being visible and understood by the most senior people in the organization because it's filtered through a generalist head of safety.

Drew: David, I'm now imagining an entire research project just tracking what's happened with all of those positions. I'm thinking of things like, well, I don't even know what Australian oil and gas does. Did you have a position like Chief Engineer?

David: Not so much. Normally, there is a technical authority structure inside an oil and gas sector where you'll have heads of all of the different technical disciplines potentially reporting to a chief engineer.

Drew: Then sometimes in railways, we call it the Signaler in Charge. Because these are male-dominated organizations (I'm just simply repeating the stereotype, not trying to say it's the right thing), we're talking here about greybeards: some old guy who's been in the organization forever and deeply understands how things work. All the junior people come to them and look for advice on how to do things. Then we're trying to replace that with more of a system- and process-based knowledge. We're now the people who own the system, the people you come and ask what you're supposed to be doing.

David: Yeah, Haddon-Cave, in the Nimrod inquiry, made that exact point, using some very descriptive terms that I won't repeat, about not having enough greybeards in the organization: not people being negative about any sort of change going on in the operation, but people just trying to hold the course for good integrity operations. 

Drew, beyond rail, are we right to move on to maritime? 

Drew: Yup, let's go into the next case. 

David: The maritime case I mentioned earlier is about the ISM Code. People in the maritime sector will know this code, published by the IMO. It required all maritime organizations to have a safety management system that complied with the code and to be certified by independent consultants.

It's an international standard, and international consultants were engaged to support organizations to interpret the code and develop their safety management systems, and were then also engaged by the organizations to certify those management systems. This resulted in lots of generic safety management systems and cutting and pasting of ship SMSs, and the consultants were the ones in this sector who had the power to interpret the regulation, define the companies' safety management systems, and then certify and endorse those safety management systems for the companies.

Drew: David, I don't know if I've told you this story before. This is one of the criticisms I had of the whole situation when I was working at the University of York. This is not a criticism of any of the individuals involved, either in academia or in industry. I think it's just this exact same power process playing out.

We have academics who are writing papers about how to handle the safety of software within systems. It's very specialized knowledge. Software development is very much a technical discipline. The people who do software development don't often think about what they're doing; they just do it. 

We have the academics creating these models for how software safety works. Those models then get enshrined into regulation, and they get into things like defense standards. None of the companies know how they're supposed to be meeting these standards. They go back to the academics and consultants again to say, can you give us guidance on how we're supposed to be meeting the standards that we only wrote because you told us we had to?

So they write the guidance, and then all of the individual companies need to know, okay, so how do we comply with this guidance? They go out and find new consultants and new academics to tell them what they're supposed to do to comply with the guidance and to comply with the regulations, which we all only have to do because this is what the academics said we had to do in the first place.

I'm implying that there's something fundamentally wrong with that process. But what's really wrong is that it all happens in isolation from understanding how people actually do this work in the first place. Where is the knowledge about how to write good, safe software? Where's that coming from? That's being lost because we've overshadowed it with these new models that people have to learn.

David: That's exactly what this paper is referring to. I think there are some comments in the paper around regulators outsourcing to markets, to do what is inherently the role of regulation. There's a fair amount that's broken in the way that this system of safety seems to play out in terms of rule setting and rule following and then compliance assurance that goes on in many different industries. 

Let's dive into the discussion. There are a few topics that we'll cover, and they might come through as a bit of a thread. The discussion starts off by saying that local, experience-based knowledge about how to make a local operation safe seems to be rendered irrelevant by the more theoretical and generic discourses of safety that are contained in these standardized, accountability-based systems.

The discussion leads straight in and says, yes, this is what happens. The more top down, generalized, and compartmentalized formal safety management processes that you put in your organization, the less the organization is going to listen to, learn from, and follow the knowledge of the local operation.

Drew: David, I don't know if I'm telling too many stories in this episode, stepping away from the actual paper. I'm going to give you another one, which is about producing a course for a company on how to do safety. The usual process is you produce a draft course, go to the company, and then get feedback on what's in the course, what's not in the course, and what else should be there. 

We had this argument because they said there's lots about the processes that we put around design, but there's absolutely nothing in this course about how to design a safe system. Our response to that was, well yeah, but that's not what teaching you safety is all about. You guys already know this stuff, how to do the design. We're teaching you how to do the safety.

It was interesting because, from our point of view, we weren't marginalizing anyone. We were just demarcating expertise: our expertise is in the safety process, and you guys are already experts in the design. But what was happening was reinforcement. You're getting sent on a course about safety, you're hearing from the academics, and you're hearing nothing about those design processes. That says these safety processes are more important, even though in reality the actual design processes might have been more important for a design organization.

David: I think there's a whole nother sidetrack there about shipping safety programs and whether you're doing anything of value in your organization. We're talking about generic safety training that's not tied or connected to the core operational work of the organization.

Drew, I guess the follow-on from this marginalization of system-specific local safety knowledge is that the organization's attention and resources get directed towards the development and implementation of these new safety systems and programs and all of the follow-up activities that they create. One of the things the paper then goes on to say is that it's not like this local knowledge disappears. It's still actually necessary to operate—in this case, trains and ships—in a safe manner.

The local operations still need to know and do all the things they used to do. What happens is that it's not understood, supported, or enabled by the organization. The big fear here is that over time, by not being seen by the organization as important to safety management, it doesn't get supported, and therefore isn't reinforced, and that's probably where drift can happen and where organizational decisions can be made that don't account for what's actually necessary for the safety of local operations.

Drew: And as you say that, David, I'm picturing a competency management system where we have 20 items in the main competency management system based on our safety management manual, all of which are the safety worksite stuff. Then we've got one line in that competency management system, which is understanding what the heck they're talking about when it comes to the system. It's not that it disappears. It's just that we haven't focused on it because we've focused on all the other stuff.

David: That's a great example. What does the competence to do their work safely look like? If you asked that bottom-up, if you asked the practitioners what people need to know, it would be all of the mentoring, the understanding of the systems, the technologies, the tasks, and everything like that. Then when you look at the formal safety system and ask what a person needs to be competent, it would say working at heights training, confined space entry training, safety induction training, and so on, and wouldn't make reference to any of the core work competencies that create safety at the task level.

Drew: So the next issue they've got here, David, is about compartmentalization of safety.

David: It covers it in two ways. It's really about safety becoming a separate discipline on its own, detached from the practice field. Around the time this paper starts looking at these industries, from the mid-90s through to the early 2000s, we'd seen the rise of the independent safety function in the organization, the rise of the dedicated safety management system, and a lot of things in the organization with safety in front of it—a safety training program, a safety audit program, a safety competency program, a safety investigation program. So we're seeing this professionalization and compartmentalization of safety as a separate discipline from operations.

Drew: I'm torn between agreeing that that happens and recognizing that this has been happening for so long. I think it actually predates there being a safety academic community, which makes it a little bit hard to blame this on academic knowledge here. You look back in the 1930s and 1940s, we had completely separate safety departments. We had people whose entire career was in safety instead of in mining. This is not remotely new. I don't think it's correct that it's being driven by the academics.

David: No, I don't think it's being driven by academics at all. Clearly, it's in the sociology of the professions literature all the way back to the 1950s and 1960s, saying that organizations are going to shape themselves over time like this, with these internal functions that are separate from the operation, governing and supporting the operation. So this is not being driven by the academics.

Drew: Maybe we could think of this as an ongoing process that happens not just with safety, but with any function within an organization. Once we start thinking too hard about a function, we start formalizing it. Once we start formalizing it, it starts to become detached from operations, and sort of flows from that operational side into the management side. We see it with quality. We see it with safety. We see it with human resources. They become their own thing over time.

David: I think, Drew, the consequence that's called out in this discussion is that the safety consultants and the safety professionals have the knowledge of the systems and also the model power. It's like when there's something to do with safety in the organization, the organization turns to the safety manager and says, what should we do? As opposed to maybe turning to the local operational teams and saying, what should we do? 

I think the tension that's called out all the way through this paper is where the organization looks for its influence on how safety needs to be managed and what's important. It's not one or the other, but I think what this paper is saying is that if it's not carefully understood and balanced, then it's likely to shift all the way away from the frontline, in terms of where the organization is looking for its discourse about safety.

Drew: It might be a good time to jump to the section that's actually titled, Implications For Safety. I'm just going to read directly what they say, which is that to be relevant and effective, a safety system must be anchored in and relevant for local practice. I think that's a really good way of putting it, because it's not that this separation and professionalization is wrong. It's that once it becomes detached from local practice, to the point where we're not listening to local practice, we're not responding to local practice, we're talking to but not listening to, then we're losing our ability to actually be effective at all at helping local practice be safe.

David: Exactly right. It goes on to talk about the disempowering effect of that model monopoly and that power imbalance for local practitioners, where they potentially don't get to convey their concerns, their observations, or their potential contributions to the system itself. The narrative and the decision-making around safety management in the organization is tightly controlled by the safety professionals, the safety consultants, and senior management.

Drew: David, I'm just going to actually jump ahead again, because I just saw in your notes a bit that I absolutely love, which I think is quoting directly from the paper: “Some of our most cherished academic virtues, such as precise definitions, consistency, and exclusion of irrelevant facts and arguments, may at times promote a model monopoly.” I just saw that and I thought, oh, gosh, this is exactly what happens, isn't it? The more we try to be good academics, the more our stuff just tries to override all of the messiness and nuance and reality of what creates safety.

David: That's under the section titled, The Role and Responsibilities of Safety Science and Safety Scientists. Speaking directly to you, my friend.

Drew: Well, what I'm hearing is that I need to get more involved in the messiness of frontline work. I'm very happy to take on that responsibility because that's what makes safety science fun.

David: Absolutely. There's a lot in this paper that goes to the things we've talked about in previous episodes around audits, clutter, bureaucracy, paper trails, and so on, so you can lump all that together. Drew, can we go into the conclusion now?

Drew: Sure, so long as in the conclusion, I want to talk a little bit about where Safety II fits into this.

David: Okay, well, let's just conclude on the paper, and then we can do that before some of the practical takeaways. Basically, it says that safety science isn't neutral information when it reaches practitioners. A new piece of regulation, a new safety management system, a new program in the organization, a new theory or piece of research doesn't just hit an organization and a frontline practitioner neutrally. It comes loaded with a whole heap of power, authority, and impact.

What happens then is that anything that comes with that sort of power and influence is going to displace and challenge the existing operational safety discourse or model in the organization—the assumptions of the old regimes—and is going to sideline that local, system-specific knowledge. There's this push for accountability, standardization, bureaucracy, and assurance that goes on across organizations and across industries.

The conclusion of this paper would suggest being very conscious that, unchecked, this is actually going to devalue and reduce the support flowing towards the things that people in frontline operational contexts know and believe to be the things that create safety for them, every day.

This has largely been about the rise of new regulatory regimes and safety management systems and what that does to the frontline. But what about these new theories that are coming out every second week from safety academics? How should we think about those?

Drew: I guess I wanted to start with the idea that I think a lot of people who are very associated with resilience engineering or with Safety II would applaud this paper. They would say, yes, this is exactly what we're talking about. We need to return to listening and being sensitive to frontline work. 

I think a lot of people who are reactionary to resilience and Safety II would say the exact same thing. They would say, yes, this is exactly what I've been talking about. We've got to stop imposing these theories from the outside and just listen to practitioners. I think both groups, in saying that, would be engaging in the exact same problem this paper is talking about: a lot of what happens is that we've got this contest of models, and the contest is happening entirely within and between people who are removed from the work.

A lot of people who think that they've really got their finger on the pulse of frontline work—and that they're the ones who really understand what's going on—need to realize that they're actually squarely within what's being talked about here as academics and consultants, and that we're fighting amongst ourselves as a non-working community about whose model should be the one that we then impose on the genuine frontline practitioners.

That's a real risk when you come up with even a supposedly very empowering new idea like Safety II. The language of Safety II is so empowering. But the reality can be that it just becomes something that gets institutionalized, put in as a system, a system which drives safety work, and a model of how to see things which no longer listens, because we're so sure that this empowering model is the way that everyone should be seeing things.

David: Great insight and example, Drew. I agree. I think anything that's going generic and top-down in your organization—ironically, even a theory that's based on bottom-up safety going into your organization top-down—is going to do what this paper says. It's going to set the terms and conditions for the discussion about safety within the organization and exclude things that don't fit inside that model, and therefore not listen, not learn, and potentially not match the operational context of the business.

Drew: I'm interested in where we go then for conclusions and takeaways, because that's a real challenge for people who genuinely believe in Safety II: how do we introduce Safety II in a Safety II way?

David: It’s a great point, Drew. The first practical takeaway that I've got here is just to understand that safety professionals, safety management systems, external standards, and consultants actually have a very high level of institutional power and credibility when it comes to speaking about safety and describing how safety should be managed, even if, as a safety professional, you feel incredibly disempowered.

The way you talk about safety matters, particularly at the practitioner level. Even if you think that your management doesn't listen to you, if you walk out on site as a professional and speak to frontline teams about safety, there's a great deal of institutional power that comes from that position and from having safety in your title. So with great power comes great responsibility.

I guess, Drew, the next takeaway is to be very careful about the narrative about safety that you're communicating as a safety professional, and about the design of company safety processes and how that shapes the discourse and the model inside the organization. We've talked in other episodes about field interactions, safety audits, and safety investigations. How we talk about and approach safety in the organization, according to this social theory, will define how the organization thinks about safety.

Then there's the idea that doing all of that is going to devalue local practitioner knowledge. When safety professionals talk about how to do safety and the Safety Management System describes how to do safety, then Fred or Mary working on the production line on the shop floor, no matter how great their idea is for how safety should be managed, are coming from a significant power imbalance when they try to make a contribution to the organization's discourse around safety.

Drew, I guess, like what you said about implementing Safety II in a Safety II way, it means that it has to be about actions, not words. It has to be about how the organization actually does safety. I'd like to believe that can be done by the safety profession. It's not just what we say; it's what we don't say, in relation to the Safety II discourse.

The scope of the discourse is really what's talked about in relation to safety. So if it's not mentioned in the system, and it's not mentioned by the safety professional, it's not seen to be relevant in relation to safety. We need to know what we are saying and what we are including in our discussion about safety, and what we aren't.

The last point that I've got is knowing that the formal safety systems and the formal model of the organization are going to be what consumes organizational resources and attention around safety. That may displace the resources and attention from the local practices that are required to maintain safe operations, in favor of the standardized safety practices. It might mean that the organization focuses on making sure that the Take 5 gets done, rather than on this really intricate little valve that gets turned three-quarters of a turn before any work happens that day.

Drew: Thanks, David. That's a great example and a great set of takeaways. The only thing I'd leave for our listeners is I think there's a preprint of this paper that is available on the web. So if you search for the title, don’t be frustrated if you can’t immediately locate it. It is actually out there if you'd like to read it.

If you start to read it, and you come across the reference to Foucault in the first couple of sentences, and you decide it's not for you, then I'll refer you instead to a Mitchell and Webb sketch that you should be able to find on YouTube, which has two Nazi officers on the Eastern Front chatting and they realize that they're dressed in entirely black uniforms with skull and crossbones on their hats and they look at each other and ask, are we the baddies? That's, I think, partly the takeaway message of the paper.

David: Perfect. Drew, I might have to go and reread Foucault’s Archaeology of Knowledge in the next week before we publish the episode. What I did in the last episode was publish the link to the freely available paper, so we're one for one at the moment in finding a publicly available paper. I'll post the link in the LinkedIn post each week as well. I'll do that with Petter’s preprint that's on the NTNU website, or it might even be on Academia; I'm not quite sure where.

Drew: Fantastic, David. The question we asked this week was, does formal safety management displace operational knowledge?

David: Well, Drew, the short answer is no. That knowledge still exists within the organization, at least temporarily, but it definitely does devalue it in the context of what's important for safety in the organization.

Drew: Thanks, David. That's it for this week. We hope you found the episode thought-provoking and, as always, ultimately useful in shaping the safety of work in your organization. Send any comments, questions, or ideas for future episodes (and we genuinely are looking for ideas for our next few episodes) to us on LinkedIn or at feedback@safetyofwork.com.