The Safety of Work

Ep.20 What is reality-based safety science?

Episode Summary

On this episode, we discuss reality-based safety science.

Episode Notes

We have just co-authored a paper with two other researchers that examines the big picture of safety science. We don't usually like to plug ourselves, but we're very excited about this particular accomplishment. We use this paper, A Manifesto for Reality-Based Safety Science, to frame our discussion.

Quotes:

“There was a strong perception that there was a lot of evidence about what worked and didn’t work, that wasn’t making its way into practice.”

“When you study an accident, all of the analysis that you do is necessarily driven by counterfactual reasoning and hindsight bias.” 

“If the researchers are influencing it, if the researchers are controlling it, if the researchers are doing it, it stops being a case study and it becomes action research…”

Resources:

feedback@safetyofwork.com

Episode Transcription

David: You're listening to the Safety of Work podcast episode 20. Today we're asking the question, what is reality-based safety science? Let's get started.

Hey, everybody. My name’s David Provan. I’m here with Drew Rae, and we’re from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work podcast. If this is your first time listening, then thanks for coming. The podcast is produced every week and the show notes can be found at safetyofwork.com. In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it.

Drew, what’s today’s question?

Drew: Hi, David. Hi, listeners. Every ten episodes of the podcast, we've given ourselves a bit of permission to talk about our own research. We don't want to make this the "What have Drew and David published recently?" podcast, but we do want to have a chance to plug our own work.

We've just co-authored a paper along with Rob Alexander from the University of York and Hossam Aboelssaad from the University of Queensland. It's a paper that examines where we're going in safety science, a sort of big-picture look at the whole field. By this, we don't mean the usual argument about Safety I and Safety II, or new view versus old view. We want to look at the big picture of how people create and defend theories and practices in safety, and what we can do to make that work better.

We think that this is a bigger question than just disputes over particular theories at the moment. The paper we'll be using is not itself a research study, but it is based on a close and critical examination of lots of published and unpublished literature in safety science.

David: This is the bit, Drew, where we normally introduce the title of the paper and the authors. The paper is titled A Manifesto for Reality-based Safety Science, published in 2020 in the journal Safety Science. It's actually part of a special issue on the future of safety science.

Drew, you came up with the title, Reality-based Safety Science. Do you want to share a little bit about the background of that title?

Drew: Sure. The way special issues tend to work is that they tend to be invited papers. Not always; often there's an open call as well as the invited papers. In this one, they reached out to a number of people and said, "Look, what's your vision? Where do you think this is all going?" A lot of the people who were invited were, in fact, associate editors of the journal, because our opinions matter when it comes to which papers we want to accept, what sorts of papers we want to encourage, and where we want to steer things.

When I personally was thinking about what I wanted to contribute, I didn't want to just plug my own personal approach or my own personal theories. I get frustrated as an editor with some of the stuff that I see coming across my desk quite routinely. I find that I have the same conversations, particularly with young researchers, about their view of what counts as good research, and what they think will be a useful project.

What I really wanted to do was put down my own personal vision: if I were starting out in safety science, where would I like someone to tell me the field was going, so that I had a path I could follow along with the future of the field, rather than treading over a bit of its past?

I reached out to a couple of my common co-authors. David, you and I have written a fair bit together. We've got a bit of a shared vision for our own research of what counts as good, and what doesn't count as good. A guy called Rob Alexander, who I've also written a fair bit with. Rob is my writing mentor. He takes a real critical look at stuff that we're co-authoring, keeps me writing smoothly and clearly, and challenges all the ideas I want to put in. He and I have written a bit before that's critical of some of the research practices in safety science.

The other author, Hossam, is actually a PhD student. I happened to be talking to him at the time when we were writing the paper. He had a couple of really good ideas about specific parts of it. That's how he got roped in as a co-author as well.

David: Drew, we published this paper. It's a manifesto, sort of a roadmap for how we see the future, and the changes that the safety science discipline should try to make. I assume that our listener base is largely practitioners, with some researchers as well. Why is this an important conversation to have between both the academic side and the practitioner side? More pointedly, why shouldn't the practitioners tune out now for the rest of this podcast?

Drew: That's a really good question, and a question that's not written down on our script, either. Let me have a bit of a think about that. The title, Manifesto, and the other part of the title, Reality-based Safety Science, come from two big influences we had in writing this. Neither of those influences is academics criticizing the practice of a field, or practitioners criticizing the research of a field. They're collaborations of people from practice and from research who wanted the relationship to work better.

The first influence, and this is what prompted me to use the term manifesto, is of course the Agile Manifesto. The Agile Manifesto comes from software engineering. It comes from the perception that what academia was contributing to the development of software was way too much heavyweight guidance on how software should be written, and not nearly enough concern for what made good software writers write good software. They thought there was lots and lots of useless guidance that was, in fact, acting as a constraint and burdening the development of good software.

What really struck me was this idea of a manifesto, not a critique. The Agile Manifesto wasn't a critique or an attack. It wasn't the Safety I versus Safety II of software. It was a group of people saying, "We want to do better. We plan to do better. We invite you to join us in our aspirations to do better."

The second big influence was a movement called evidence-based medicine. This came about through the realization that practitioners of medicine were tending to treat patients not based on a scientific understanding of how the human body worked, but very much based on their own personal experience of what they believed had worked to treat patients. There was a strong perception that there was a lot of evidence about what worked and didn't work that wasn't making its way into practice.

The evidence-based medicine movement was not academics clubbing together and yelling at doctors, saying, "Listen to us." It was a reform movement in the practitioner community to realize that doctors are researchers. It's not that you have some people doing research and other people practicing medicine. The people applying medicine to patients are generating the data we need to collect en masse to find what generally works. Once we know generally what works, we then need doctors to apply their own specialist skills to the particular patient in front of them, combining their knowledge of the patient with their knowledge of the evidence.

David: Yes, Drew. I think we've spoken a bit about the gap between safety science research, or safety science evidence, and safety professionals or safety practitioners in their organizations, and the application of that evidence to what they're doing, or the lack thereof. What we're really hoping for as we move forward is to create a much stronger bridge and connection between the practitioner community and the research community.

We're laying out a series of commitments in the manifesto that we'll go through. There are seven commitments. Before we do that, there's some important context here. You came up with this 'giants and dwarfs' metaphor. Describe that, because that line in the paper has received a fair bit of attention. The one that says something like, "We don't apologize for the spikes on our boots when we're standing on the shoulders of giants."

Drew: The thinking here is that if you ask people to name researchers in safety, I'm willing to bet that if you pick 50 people and say, "Name a safety researcher," you're only actually going to end up with three or four names. You might even get three or four theories without people knowing the names behind them. You can make a pretty quick list: Safety I and Safety II, high-reliability organizations, safety climate, safety culture, behavioral safety. Many practitioners might be able to name one or two names associated with each of those theories. Those are the giants in safety. A lot of researchers tend to align themselves with one of those particular views.

For example, Erik Hollnagel talks about resilience. He's got a particular technique, which is a subpart of his thinking about resilience, called FRAM, the Functional Resonance Analysis Method. You'll have people who want to do their whole PhDs applying FRAM.

You've got Nancy Leveson, who, as part of her broader thinking about systems-based safety engineering, comes up with a model called STAMP. There are entire conferences just about STAMP, with each presenter talking about how they have applied it.

We've got these few giants, and scurrying around the feet of these giants are dwarves living in their shadows: people who are applying the ideas, whose thinking about safety is entirely constrained by the thinking of one of these giants. Nowhere is anyone actually updating the big theories. We've had dozens of STAMP conferences now, and not one of those has changed STAMP as a theory. It's all just, "Let me take STAMP, and let me not dare to criticize anything about it. Let me just apply it."

We've got dozens of people who follow behind Safety I and Safety II, but the fundamental idea of Safety I and Safety II is the same as it was when the book was first published, including all of the problems with it that everyone recognized from the very start. This is the concern. The big thinkers are not empirical researchers; they're theorists. The empiricists are not theorists, and they're not updating the theories through their empirical work. They're just applying the theories.

David: I think perhaps the biggest example of all of that is safety culture. After 30 to 35 years of safety culture research, we're really no closer to understanding it, or to knowing how to work with it in our organizations, than we were at the start.

Drew: That's pretty damning, given the thousands of papers published on it. We still have literature coming out that doesn't make a clear distinction between safety culture and safety climate. I would argue there isn't even any clear agreement amongst researchers about what precisely the difference is. Each individual is pretty clear on the difference, but we don't have a community consensus or development of the ideas. You can say the same for STAMP. You can say the same for safety cases. You can say the same for Safety II. I think resilience is probably going to head the same way if we don't do something about it.

David: Yeah. We laid out seven commitments in the paper. They were literally personal commitments that we were making, and inviting others to make, about how they approach their safety science research going forward. Drew, we might go through these seven commitments, talking about each problem and how we frame the solution in the manifesto paper.

The first commitment was that we will investigate work as our core object of interest. Traditionally, I think safety has always been about studying accidents. We're proposing that we should always focus on work as the object of interest, and the object of understanding or the object of research. Tell us a little bit about this, the difference between studying accidents and studying work.

Drew: We've talked before on the podcast about the problems and limitations of studying accidents. I don't want to go deeply into that because we tend to do that anytime we do a paper that's based on accidents. The problem is that when you study an accident, all of the analysis you do is necessarily driven by counterfactual reasoning and hindsight bias. You're looking with the knowledge of the outcome, and you're trying to judge the path by which we got there. We're trying to use that to make claims about what could or should have been done differently.

Now, this doesn't mean you can't do that. The trouble is, though, when you do it, what you end up with are big picture explanations. They're usually untested, and they result in these theoretical claims about what can prevent accidents. 

Frankly, we've already got enough big-picture, untested explanations. We're not adding to the advancement of the field by coming up with new versions of this. You're not making anything better unless you can show that your theory is somehow better, and this isn't a process that can tell you which theory wins. It's just a game of who can make the nicest narrative.

On the other hand, we've got this huge industry that exists in order to prevent accidents. If you look in this industry, you can find pretty much all of the stuff that the theorists say we should be doing being done by someone, somewhere. Yet accidents are still happening. If accidents happen when you follow the theories, and accidents happen when you don't follow the theories, then there's got to be some gap somewhere in the theory. We're not going to find those gaps just by coming up with new big pictures.

David: I think the view we would take is that safety is an emergent property of work. We've got to understand and intervene in the core work activity itself. Safety research, therefore, at its core becomes about studying work. What makes work safe? What makes work not safe? It's a little bit counterintuitive: we actually can't get at safety by coming at safety directly. We have to get at safety by coming at work.

Drew: I agree absolutely. I think that makes a lot of sense. When we talk about principles of safety, we're really talking about principles of work that are true across lots of different types of work. Generally, when you look at work as it happens in hospitals or on construction sites or when people are designing spaceships, there are some things that are universally true about work and relevant to whether an accident is going to happen. That's ultimately what we're trying to come at with safety.

David: To help our listeners be clear, when we say studying work, and Drew has given a few examples there, what we want to know is: how does work happen? The basics of what work looks like. How do workers make sense of what they're doing and what their co-workers are doing? How does work change and vary, and in response to what? What are the factors that cause it to remain consistent over time? What events occur during work that are meaningful for workers? Where do workers take their cues for how they change the nature of their work throughout their workday or work shift? Those are the types of questions we really want to answer when we get right down into the messy details of how people interact with their work every day.

Drew: If you think about it, this leaves room for a lot of different approaches to safety. If you're interested very much in the human and cultural side, you can ask questions about who performs work, and how their identity influences the work. If you're interested in the engineering side of things, you can say, "Where does work take place? What equipment is used for work? How is that equipment designed?" If you're interested in organizational sociology, you say, "How is work organized? What's the effect of organization design on the conduct of work?" There is just so much potential research because, for the word 'work' here, you can substitute anything you like.

There is an entire thesis waiting to be written just by substituting Uber driving into a couple of these questions. How does Uber driving happen? What's the identity of Uber drivers? What do they think they're doing? Do they think of themselves as drivers? Do they think of themselves as just people who happen to be doing Uber driving work? How does this change over time? How did you get into Uber driving? How do you leave Uber driving? What's your career path in the middle? That's an entire thesis, and that's just one type of work and a couple of the questions. There's a lot of potential here.

David: I think this applies to our practitioners who are listening as well. I've said a bit about this when talking about the role of safety professionals: the object of understanding should be work, not safety. We'd advise safety professionals to go out into their organization. Don't go out there with an audit checklist. Don't go out there just after an accident to do an investigation. Just go out into your organization, look, listen, and understand how work happens, how it functions. We're telling researchers to do the same thing. Put down your badly designed questionnaires, put down your preconceived ideas about what safety is, and get out there and understand the work that people do.

Drew: The way I like to think of it is that looking at accidents can be fun, but it's really a very niche side interest compared to the mainline field of safety science. It's like those historians who like to ask, "What would the world be like if the Nazis had won?" That's fun to do. It may be informative, but it's not mainstream history. It's a fun side hobby. The real meat and bones of safety is studying work, not studying accidents.

David: That's our first commitment, to study work. Our second commitment in the manifesto was that we will describe current work before we prescribe changes. This is making sure that we're descriptive before we're normative in the way we approach our research. Is that the way to think about this?

Drew: Yeah. This is the first conversation I have with every new PhD student. It's not just the first conversation, because I often have to have it multiple times. The conversation starts with me trying to talk about descriptive versus normative. Then it ends with me almost yelling, or writing in capital letters on the blackboard, "Stop trying to fix things!"

I think people get into safety because they want to make the world a better place, but research is not about finding solutions. The way to realize that is just translate it to almost any other field of study that you take seriously. Astronomers aren't out there to make stars better. Physicists aren't trying to work out what are the right laws of physics to substitute for the ones we currently have. Not everyone, I'll admit, but most people who research psychology and neuroscience are about understanding how the brain works now, not trying to change the brain to make it work differently. Everyone in all of those fields wants to improve the world. They know that you've got to describe and understand the world before you can use that knowledge to make the world better.

David: I think I was one of those PhD students at the start, because I came into my PhD on the safety profession dead set on figuring out how to fix the profession. By the time I got to the end of it, not only hadn't I fixed it, I probably had to admit that I understood it less than when I started, or at least had more questions when I finished than I did at the start. I'm only just starting now to figure out how to translate some of the things I found during my own PhD research into ways that might actually make improvements to the way professionals do their job.

Drew: Well, David, you know a bit that our listeners might not. All of the advice that I give to PhD students is precisely the advice that I wish people had given me. I was 10 years out from the end of my PhD project before I realized that I'd spent the whole time trying to fix the world, and I shouldn't have. It was only then that I really started doing research on the topic that my PhD was about.

David: Safety researchers want to take shortcuts. What we'd be saying is that we're not even at the point where we can describe work well enough to start putting in place theories about how to improve that work to improve safety. Is that what we're saying?

Drew: Yeah. I think that's one of the areas where researchers and industry need to work together a lot better. You can't get through a full research cycle in a couple of months. If you're asking someone to do a project in six months, what you're doing is assuming that they've already got the answer. You're wanting them to make that answer bespoke for you and implement it in your own organization. That's a standard consulting task, but it's not a research task.

If you want someone to do research like that, then you're shortcutting through all of the steps that make it good research. Understanding the problem takes more than a couple of months. Just understanding what the literature has already said about understanding the problem takes more than a couple of months.

We've got to turn around this idea that practical research is about deliverables. We definitely want research to be relevant to the real world. We definitely want research to be useful. We definitely don't want research to be pie in the sky. But we need to understand that in order to do that, research needs to stop taking shortcuts. It's the shortcuts that make it unrealistic. We need to slow it down enough to do each part well, and to recognize that it's just as beneficial for the industry to get a good description as it is to get an untested solution.

David: Absolutely. The solution here, we're saying, is a virtuous cycle of studying and describing what's happening right now in organizations, the way that people perform work in your organization. We want to analyze that insight and knowledge to build theories of how work happens and how organizations function. Then we want to use those theories to design interventions, experiments, and improvements in a way that lets us test those theories, understand their application in different settings, and refine and improve them over time. I think what you're saying is that this cycle has to happen slowly enough that we can actually do each part of it well.

Drew: Yeah. When you describe things as a cycle, you never quite know where to start. I think the core of this commitment is that, where safety science is right now, we need to slow down and do the description part. We need to work out what we're interested in, then we just need to spend time describing it, and throw a whole range of methods at that description.

You pick something that you're interested in. Pick a type of work and interview people about it. Watch it. Sure, do surveys about it. Experience it for yourself. Go and do that work for a while. Take part in the work. Tinker a little bit with the work, not in order to fix it, just in order to make sure you know what's going on.

Good researchers shouldn't be picking one method and saying, "I'm going to understand work by doing one of these things." They should throw the whole battery of measurement at the descriptive task.

David: The next commitment, after describing before we prescribe solutions, is that we'll investigate and theorize before we start measuring. You've gone to caps lock in the script, so I might just let you step straight in. What do you want to say about investigating and theorizing before we start measuring?

Drew: This is the single biggest problem in safety research. I say this as an editor who sits with the studies flung across my desk, some of which are eventually going to be published, most of which are going to be rejected. Safety research wastes a heck of a lot of time, money, and research careers on badly designed Likert scales.

Just for clarity, a Likert scale is one of those things where you've got five bubbles, one to five, and you rate them. They are incredibly difficult to get right. You have to have some sort of benchmark in the first place, because otherwise you don't know what a three or a four means. There is an entire science within psychology devoted to analyzing why Likert scales usually don't even work as a measurement tool. Yet in safety, we think this is a great form of research.

We take these badly designed scales, and then we feed them into complicated models that compare the variables we think we're extracting from those scales. The whole time, all we're doing is looking at how people have ticked boxes. We're not looking at how people have worked. We're not looking at how accidents have happened. It's got no real connection to the real world. Of course, the single biggest example of this is that people keep reinventing different versions of safety culture or safety climate surveys over and over again.
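To make the fragility concrete, here is a minimal simulation sketch, with invented numbers and not drawn from the paper, of how a response-style artifact can masquerade as a real difference on an unvalidated Likert scale:

```python
import numpy as np

rng = np.random.default_rng(0)

def likert_responses(n, true_attitude, acquiescence):
    """Simulate 1-5 Likert answers: a latent attitude plus a
    response-style shift (the tendency to agree), clipped to the scale."""
    latent = true_attitude + acquiescence + rng.normal(0, 0.8, n)
    return np.clip(np.rint(latent), 1, 5)

# Two groups with the SAME underlying attitude (3.0 on a 1-5 scale),
# but group B tends to agree with any statement (+0.6 shift).
a = likert_responses(300, true_attitude=3.0, acquiescence=0.0)
b = likert_responses(300, true_attitude=3.0, acquiescence=0.6)

print(f"Group A mean: {a.mean():.2f}")  # around 3.0
print(f"Group B mean: {b.mean():.2f}")  # around 3.5, looks like a real difference
```

Without a validated benchmark, nothing in the box-ticking data itself can distinguish a genuine attitude difference from a difference in how people use the scale.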

David: I'm glad now that I haven't personally done a survey based on [...] of it. We have spoken before on the podcast about well-designed open questions and the collection of data through surveys in that way. We're not saying that surveys don't have a place. What we are saying is that it's very easy to just pull a questionnaire together, get 200 or 300 people to do it, run some statistics, and then write a paper. What we'd be saying is, you're really not contributing a great deal. Drew, what's the solution, then?

Drew: Here's a sort of simple checklist. If you want to ask someone a question that's got a number as an answer, then you’ve got to realize that what you're trying to do is measure something.

The very first question to ask yourself is: what is this thing that you're trying to measure? Is it even a thing? Do we know that it exists? Do we have any evidence that it exists? What kind of thing is it? Is it a psychological phenomenon? Is it an organizational interaction? Is it something that happens at work? Is it a system? Is it a process? Is it an opinion? What does it look like? How does it vary? Does it tend to go up and down within a person? Does it tend to stay stable within a person? Does it tend to go up and down in an organization? Does it tend to stay stable in an organization? Does it stay the same when you measure it twice? Then have a look at what other people have found that looks like this thing.

Let me take an example. Let's say you want to measure trust. What other things are out there that are like trust? Is psychological safety like trust? What's the difference between them? Do we have a measurement that can tell the difference between them? Is one of them an aspect of the other thing? Is the relationship reversed? Are they two separate things? Are they two things that overlap?

Find out what other people have said about those things. Do that for every other thing that might look like this thing. Only then, once you really understand what it is that you want to measure, do you ask, how do I measure it? After that, there's a whole heap of questions, like whether you're actually measuring the thing that you set out to describe in the first place.

I think if you ask yourself those questions, 99% of the time, you'll have finished your research project before you ever get around to the point of wanting to put in place a Likert scale to measure it.
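One item on that checklist, whether the thing stays the same when you measure it twice, has a standard empirical check: test-retest reliability. Here is a minimal sketch with hypothetical numbers (none of this is from the paper):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores from the same 10 people, measured two weeks apart.
time1 = np.array([4.1, 3.2, 4.8, 2.9, 3.7, 4.4, 2.5, 3.9, 4.0, 3.1])
time2 = np.array([3.9, 3.4, 4.6, 3.1, 3.5, 4.5, 2.8, 3.6, 4.2, 2.9])

r, _ = pearsonr(time1, time2)
print(f"Test-retest correlation: {r:.2f}")
# A construct claimed to be stable (like a trait or a culture) should show
# high r; if r is low, the instrument is capturing noise or a passing state.
```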

David: Without thinking deeply about those questions, Drew, it's so easy to go, "My manager is committed to safety, on a scale of one to five," get a whole heap of answers, and feel like you understand what's going on. But when you actually step back, it's like the [...] from the Brady Report last week, that concept. It sounds good on the surface, but when you actually dig deeper, you go, "Well, actually, this is meaningless."

For people following along: we're focusing on researching work. We're describing and understanding that work really deeply before we start to put in place ideas and models of what happens, and we're investigating and theorizing before we start measuring. Our fourth commitment is that we will directly observe the practices that we investigate.

Drew: This is the one that had the most controversy, I think, among the people who were writing the paper. I think we agreed on the problem, but we were unsure whether we were certain enough that everyone should do this to actually make it into a commitment. As you can see from the paper, we eventually decided that yes, it should be a commitment. It's worth bearing in mind that we can agree on the problem without necessarily agreeing on the solution to this one.

The problem is pretty clear, which is that we've got a bad habit in safety of collecting data which doesn't directly represent the question that we're trying to answer. I've got a list of examples that I've made. David, you can see the list in front of you. Are there any of these that you particularly agree or disagree with?

David: I think this idea that self-reported behavior is going to be the same as actual observed behavior. When you ask people what they do, that's very different from actually watching what people are doing. Drew, that's one that I really liked. It really challenged me: even things like interviews and focus groups, where we actually ask people how they think about their work and how they normally perform their work, are still a step removed from people actually doing the work.

Drew: I think, actually, we're jumping ahead a little bit to the solution, which might actually be clearer than talking too much about the problem. I'll jump back to the problem in a second. If you think about it, every type of research we do is good for something. It's just a question of matching what that something is with what you're trying to do.

Psychometric surveys, what are they good for? They're designed for comparing people to other people, and nothing else. That's what an IQ test is. It's not really describing your intelligence; it's comparing your intelligence to the standard population. That's what a personality test is. It's not about describing how personality works. It's about comparing you to a typical person: where do you sit, higher or lower than average, on each of a set of five traits? Psychometric surveys are really good for that, and pretty much useless for anything else.

What about non-validated quantitative surveys? They're good for collecting demographic data, and not much else. When you fill out a census form, we learn that 40% of the population is of a particular age, or that 20% of the population went to university, or something like that. That's descriptive statistics. That's what surveys are good for.

Interviews are really good for understanding people's subjective experiences, understanding what people did through their own eyes and their own memories, which is very valuable, as long as you remember that that's not understanding their objective experiences. It's not understanding what happened; it's understanding what they experienced. If you want to understand what actually happened, you've got to be there and watch what happened.

David: Drew, I think it's worth cycling back around to the problem, if for nothing else than to give our listeners a sense of what to be wary of if they see some of these things in research papers. Let's whip through really fast where we see inappropriate data use in safety science. Self-interested reports are always bad: asking a manager how good a manager they are, or asking a safety department how good safety is within their organization. The data is always going to be colored by their own self-interest.

Drew: Yeah. I've almost got into the habit now, with recent safety science research that talks about organizations: unless it's a paper about safety practitioners, jump to who was interviewed. If they collected the data from safety practitioners and the paper is not about safety practitioners, trash the paper. It's answering what safety practitioners think. It's not answering anything objective about the organization.

David: Yeah, and we see papers where guesses at the frequencies of risks are represented as actual risk: the risk of this is X because 10 people decided that the risk of this was X.

Drew: Or, what are the 10 most important challenges in construction safety today, and the researchers have never been to a construction site. All they've done is ask a whole heap of people on construction sites what they think the biggest risks are. What they're doing is publishing a paper to tell people on construction sites what people on construction sites already believe are the risks of construction sites.

David: It's a circular argument.

Drew: You've got to ask yourself what you're doing with your life when you're publishing papers like that.

David: Being very mindful when people are reporting about their own behavior or reporting about events, and then substituting this information or claiming this information to be equivalent to the actual behavior or the actual frequency of events. How many times this has happened to you in the last 12 months is very different from asking someone that question. The answer that they give you is very different from how many times something might have actually happened to them in the last 12 months.

Drew: This one's really sneaky, because you might be tempted to tell yourself, particularly if you're a researcher, "Well, I can't access the actual data. I can't work out exactly how many times this happened. The best I've got is how often it was reported." But then you look at what you're claiming, and what you're claiming is something like: safety culture is going to drive down the number of accidents. Then you realize that the independent variable is going to be more powerful in changing reporting than it is in changing the underlying statistic. That's where the indirection becomes a real problem.
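A toy simulation, with invented rates purely to illustrate the indirection Drew describes: an intervention changes the reporting rate but not the underlying incident rate, so the reported count moves on its own.

```python
import numpy as np

rng = np.random.default_rng(1)

TRUE_RATE = 50          # actual incidents per year, held constant throughout
report_before = 0.60    # fraction of incidents reported before the program
report_after = 0.35     # the program discourages reporting, not incidents

# Draw the true incident count, then thin it by the reporting probability.
before = rng.binomial(rng.poisson(TRUE_RATE), report_before)
after = rng.binomial(rng.poisson(TRUE_RATE), report_after)

print(f"Reported before intervention: {before}")
print(f"Reported after intervention:  {after}")
# Reported incidents fall by roughly 40% while the true rate is unchanged:
# the "independent variable" moved the reporting, not the accidents.
```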

David: That fourth commitment is a really important one, about directly observing the practices that we want to investigate or understand. The same is true for practitioners: directly observe the practices in your own organization.

Drew: Just before we move on, I want to point one thing out. This is something that practitioners won't necessarily see unless you start reading academic papers, but it's certainly relevant to people wanting to do safety research, and you'll start to run into it really quickly. There's a current fad of using complicated techniques to massage this bad data. You start to see lots and lots of papers coming out with words like neural nets, AI, deep learning, analytic hierarchy process, fuzzy logic, and big data in the titles.

All of those things are a clue that this is a way of massaging quantitative data. That's the time to start getting really skeptical and ask, "Okay, so what is the data that you are massaging?" If it's not good data to start with, then you're not going to get useful conclusions at the end of it. Survey data is survey data no matter what method you use to process it. Survey data can't tell you what's going on at a site.

David: We move on to number five. The fifth commitment is that we'll position each piece of research in an appropriate disciplinary context, informed by the research practices and recent advances in that discipline. Drew, this is one that I really like. When we started doing a little bit of digging into the institutional work literature, I side-branched there for about six months into institutional logics.

I came back at one point and said, "Institutional theory has just moved on from organizational culture as a useful construct." It's got all these well-developed ideas about institutional logics at the individual and the macro scale of organizations. It's got all these ways of describing and explaining what's going on in organizations, all these really relevant theories for safety. We had this discussion about how safety theory is lagging maybe decades behind the parent disciplines, like organizational psychology. Do you want to talk a little bit more about this problem? Because the parent disciplines get ignored a lot in safety science.

Drew: This is something that even if you were aware of it, you can still fall into the trap. Even before I became a full-time researcher, I was saying that I thought that safety management techniques tended to lag at least 10 years behind anything that didn't have safety in the title. You could pretty much pick up any management fad. Ten years later, you could add safety to it. You'd see it everywhere.

So, two things there. One of them is that I think I was being a little bit optimistic with the 10 years. David, I think you're on record as claiming more like 30. The other is that, even knowing this, I still fell into the trap: I thought I was making really original insights about safety culture.

I don't know if this has occurred to anyone else, but have you ever noticed that if safety culture were real, if there really were this thing called safety culture that influenced the way people think, understand, and talk about safety, then when two different organizations each read a safety culture survey, the culture is going to influence the way they think about, understand, interpret, and fill out that survey? You can't possibly use the answers to the survey to measure the differences between the two organizations.
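Here is a minimal sketch, with hypothetical parameters and not from the paper, of the measurement trap Drew describes: two organizations with an identical underlying state, where culture only shifts how respondents read the survey items.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
TRUE_SAFETY = 4.0  # identical underlying state of safety in both organizations

def survey_mean(interpretation_shift):
    """Culture shifts how respondents interpret the scale anchors,
    not the underlying state that the survey claims to measure."""
    answers = np.clip(
        np.rint(TRUE_SAFETY + interpretation_shift + rng.normal(0, 0.7, N)),
        1, 5)
    return answers.mean()

print(f"Org X survey mean: {survey_mean(-0.5):.2f}")  # culture reads items harshly
print(f"Org Y survey mean: {survey_mean(+0.5):.2f}")  # culture reads items generously
# The score difference is produced by each culture's reading of the survey,
# so it cannot serve as an independent measure of those cultures.
```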

Now, that was obvious to me, but I also thought I was making an original observation when I said it. So, I fell into the trap. It turns out that people outside of safety, in organizational culture research, had highlighted this problem back in the early 1980s. In fact, they'd totally moved on. They'd realized that this was a problem: culture is real, but we can't measure it like this. This idea of quantifying culture, analyzing culture through surveys or psychometric instruments, is just a totally doomed project. Unfortunately, by that time, safety had already picked the idea up.

The really interesting question is, how come most safety culture researchers don't know that? It's not that safety culture researchers are stupid. Multiple times, people in safety have uncovered this same problem and even published papers about it. Everyone who does that then stops doing safety culture measurement work, because they know it's a doomed project. They know it's wasting time. The people who keep publishing are the ones who just don't quite get it. If you're a new researcher coming in, all of the papers being published are from the people who don't understand what's going on outside of safety. All the people who do understand aren't bothering to publish about it.

David: If you're a safety researcher, you tend to look for the core body of evidence around safety. Even when you're doing your literature searches in your databases, you're using safety as a keyword. So you will search for safety culture if you're writing a paper about safety culture. I assume it would be quite rare for someone to say, "Well actually, no. I'm going to do a thorough read of what's been published in the last three to five years on organizational culture and use that before I start writing my safety culture paper."

Drew: Actually, I did a test of this in preparing for the podcast. I didn't want to pick on safety culture, because I think we've picked on safety culture enough. The one I picked was Myers-Briggs. I'm guaranteed to offend some of our listeners here: Myers-Briggs is well known to be pseudoscience. But I'm going to forgive anyone who wants to argue with me, and here's the reason.

If you do a search on Google Scholar, on Crossref, or in your university library search engine, and you search for Myers-Briggs and safety, every single hit you get for the first few pages will be people using Myers-Briggs as a measurement tool in a peer-reviewed published paper. They will all be recent papers. You'll assume, "Look, this is a well-known, well-used tool that is appropriate for measuring personality, and for checking whether personality has an impact on safety."

Here's the thing, though. If you do the same search but remove the word safety, so you just search for Myers-Briggs, and you search all of the top psychology journals, then the only hits you get will be from the 1970s. When they first started developing modern personality measurement theory, there was this spate of papers that came out saying, "We used to think that you couldn't measure personality. All we had was pseudoscience like Myers-Briggs, but we think that there is actually a way of doing it." That's all you get.

In safety, lots of people use it, and no publication is criticizing it. In psychology, nothing's criticizing it either, because no one's been talking about it since the 1970s. If you don't already know the answer, it's actually really hard for a researcher to come in and ask, "Is Myers-Briggs an appropriate tool to use?" Lots of people just look at the apparent evidence and say, "Well, sure. In safety it is."
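Drew's search test is easy to try yourself. Here is a rough sketch against the public Crossref REST API (the api.crossref.org endpoint is real, but the response fields are accessed defensively, results vary over time, and this is an editorial illustration, not part of the paper):

```python
import requests

def crossref_years(query, rows=20):
    """Return publication years for the top Crossref hits on a query."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query": query, "rows": rows},
        timeout=30,
    )
    items = resp.json().get("message", {}).get("items", [])
    years = []
    for item in items:
        # 'issued' date-parts is Crossref's general publication-date field;
        # it can be missing or partial, so guard every access.
        parts = item.get("issued", {}).get("date-parts", [[None]])
        if parts and parts[0] and parts[0][0]:
            years.append(parts[0][0])
    return sorted(years)

# Compare when the hits were published, with and without "safety".
print("Myers-Briggs + safety:", crossref_years("Myers-Briggs safety"))
print("Myers-Briggs alone:   ", crossref_years("Myers-Briggs"))
```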

David: That commitment, to position safety research within the context of the parent discipline literature, is really important for researchers, and also for practitioners in organizations thinking about where they get their insights from. There's a whole lot of material that doesn't have safety in the title that could be more useful to you in your safety role than the things that do have safety in the title. We might do something in the next couple of episodes, Drew: go and pick a piece of research out of one of these parent disciplines, just to show how some of that research can be applied back to safety.

Drew: That's a good idea. Just a quick list of examples of the type of things this matters for. Obviously, the really big one is organizational science. Organizational science has diverged from the safety discussion of organizations almost completely. That's something that you're trying to bring back a little bit, David. There are a few others like you who are trying to bring modern-day organizational science and theories of institutions back into safety research.

Forecasting theory is another big one. How do you predict the future? That's what risk assessment is supposedly all about, particularly the use of expert opinions to do it. There's a whole subfield of economics that's been working really intensively on this, and safety is still using methods from the 1960s. Behavior change and social psychology are a bit of a mixed bag, but mostly it's still this pattern where we copy things across, they get discredited or disproved, and no one in safety ever realizes that they've been debunked. We keep using the old stuff that's been debunked instead of the new stuff that's credible.

David: Number six was: when researching safety methods, we will prioritize real-world case studies over worked examples. I'm not sure I completely understand the difference between real-world case studies and worked examples, so you might have to help me out, and hopefully help our listeners out in the process.

Drew: This is another one that I find myself saying a lot to other researchers, typically young researchers. It usually comes with me saying, "That's not a case study. Stop calling it a case study." Let me just be very quick and clear about the difference. A case study is where you study a real-world thing in its natural setting. You go out and find an example of something as it's currently happening in the real world, and you study it. If researchers get involved in that thing at all, they're getting involved as embedded observers. They're not trying to take control of it.

If the researchers are influencing it, if the researchers are controlling it, if the researchers are doing it, it stops being a case study and it becomes action research. It's the researchers saying, "Here's this thing that we did," not, "Here's this thing that we found." Case studies are things that you find. A worked example is the exact opposite: a worked example is where you demonstrate a technique by doing it yourself. The trouble is that people call worked examples case studies. You get a very different understanding of how a technique works depending on which of the two approaches you follow.

Take risk assessment. If you do a worked example of risk assessment, it means that the researchers pick the technique. The researchers apply it. They have as much time as they need. They use an example they've picked themselves, usually with all of the messy details stripped out of it. And they've picked an example that they know the technique is going to work on.

If you do a case study of risk assessment, you go and find someone in the real world who is doing a risk assessment, and you see exactly what they're doing. You notice practitioners who don't have special training in what they're doing, with limited information, limited time, real-world social pressures, and organizational pressures.

The two approaches give you views of risk assessment that are so different, you don't even notice that they're looking at the same thing. If you stick to reading papers about risk assessment, you read the ones with worked examples, and you get the idea that the biggest problem researchers need to solve is getting the right notations and mathematical techniques.

There's a real need, in the literature's eyes, for a more precise way to define uncertainty, and for working out the difference between probability and uncertainty. There's a big academic debate about whether the best thing to do is to integrate uncertainty into risk assessment, or to separate it out and do a risk study and an uncertainty study. That's what risk assessment looks like in the academic community. They have whole conferences about this.

If you look at the real world of risk assessment, you realize that no one ever does any of these techniques that the academics are talking about. If you do that, you're pretty much going to conclude that the biggest problem with risk assessment is that people do risk assessments after the decision has already been made. This idea about exactly how we define and document uncertainty is meaningless compared to the actual challenges of real-world risk assessment.

I think that highlights what we're getting at with this commitment. Worked examples are good for explaining or teaching how to do a technique. If you want to know what the problems are that need to be solved, you've got to do case studies.

David: That's a good description, Drew, thanks. I think I've got it clear now. We might move on for time reasons as well. The seventh and last commitment, which is nice and friendly, is that we will treat practitioners as respected partners. There's an implicit assumption in there that maybe today the research community is not treating safety practitioners as respected partners. Would you agree?

Drew: I would agree. I don't think that this comes from academic arrogance. I don't think the researchers realize it, but they're treating practitioners like misbehaving children. They think that the researchers' job is to work out how to do it, and the practitioners' job is to learn that from the researchers and do it. If the practitioners aren't doing it right, then they need to learn better.

Even when researchers think that they're respecting practitioners, they're respecting them in the way that a teacher respects their students, not the way a partner respects a partner. I don't just blame the academics for this. I think a lot of practitioners see respect the same way in the opposite direction. They treat safety knowledge as if it's this fixed thing, where either you know how to do it, or you want someone to tell you how to do it. If someone's not telling you how to do it, then they don't know anything worth knowing.

David: There's confusion here: the safety academic community is trying to come up with new techniques and new practices and, as you said, to be in a position to tell safety practitioners what to do. That's not really the role that academics play in any other discipline. We used medicine as an earlier example. The researchers in medicine aren't trying to teach surgeons how to be better surgeons. They're producing the knowledge that can inform the practitioners to advance their own practice as the experts that they are.

Drew: If you doubt the value of that, just think of the difference that germ theory made to the practice of surgery. This is what we're talking about. Germ theory is understanding what's going on. That knowledge drastically changes practice, drastically improves practice, and does it without academics ever having to publish standards and papers about correct hand-washing techniques. It's the generation of knowledge that's valuable.

What we've talked about in the paper is knowledge-producing partnerships rather than leader-follower relationships. Researchers should be helping practitioners measure and evaluate interventions that the practitioners already want to do. Researchers shouldn't be coming along and saying, "Let me come into your organization and try out my new technique," and organizations should stop demanding that of researchers. Stop believing that research only has value if it's going to come in and propose some big revolutionary change.

David: There's quite a lot underneath this that needs to change in terms of these knowledge-producing partnerships. I would say that a large part of the practitioner community has no understanding of, or interface with, the academic community, and academic researchers really only dip in and out of the practitioner community when they need them for individual studies. There don't seem to be many mechanisms in place for large-scale, ongoing relationships between the practitioner community and the research community. Something quite significant needs to change to actually create that partnership.

Drew: I try hard not to be a university education snob. I truly don't believe that universities are the only place to get knowledge, or that we should prioritize university knowledge over practical learning, self-education, or any other form of knowledge.

The big thing that happens when you have an entire field where only a small subset has been to university is that most people have this weird idea of how universities work. On the one hand, they pay universities not enough respect. On the other hand, they pay them way too much respect. And they do each in the wrong places. They criticize the universities for things they shouldn't be criticizing them for, and don't criticize them for the things they really should be criticized for.

Having a clearer idea about the relationship between general knowledge, which is what universities produce, and specific applied knowledge, which is what practitioners produce, and how those link together, is the key to making this relationship work in the future.

David: I agree. Our podcast is a small part of trying to connect some of these communities together. Hopefully, we can do some bigger things down the track.

Drew, these are the seven commitments that we make, and that we propose others make, in the manifesto. Number one, we will investigate work as our core object of interest. Number two, we will describe current work before we prescribe changes. Number three, we will investigate and theorize before we start to measure. Number four, we will directly observe the practices that we investigate. Number five, we will position each piece of research within the appropriate parent discipline. Number six, when researching safety methods, we will prioritize real-world case studies. And number seven, we will treat practitioners as respected partners.

Those are the seven commitments. They might sound like they make sense, but I don't want to undersell to our listeners just how significant a departure these commitments would be from, to a large extent, the majority of safety science research going on around the world.

Drew: When we were preparing the episode, I cut and pasted the entire conclusions of the paper into our C segment, which I'm not going to go through, in the interest of time and of not boring the listeners. I would encourage anyone who's listening to the podcast to have a close think about those seven commitments and whether you can get on board.

I think the power of a manifesto, and this is particularly what we saw with the Agile Manifesto, isn't that everyone agrees. It's that people think hard about what it says about current practice. What do you like about it? What don't you like about it? It allows people who want to do things differently, who want to strive to be better, to come together and form communities of practice around those commitments.

I know we've got some listeners who are either doing research or thinking of doing research. They're the ones who are most strongly encouraged to take up these commitments. If you want to break one of those commitments, do it in the knowledge that we are warning you about mistakes that people have made. It's always fine to break rules, as long as you first understand the rule and know that you're not just breaking it in the same way that many other people have made the same mistake, and that you're genuinely being innovative in the way that you break it.

The other people we'd particularly like to think about the manifesto are people who regulate and fund research. We don't want to be the epistemological or ideological police for research, but we do think that it's always important for people who are spending money to think about what quality looks like, and what they really want to get out of it.

One of the things that we've tried to show in this manifesto is that just asking for research with real-world results, implications, and applications isn't the same as asking for the best research. We've put up our vision of what we think good research looks like, and we challenge people who are funding research to do the same. If you don't want to adopt this one, pick your own, but do set out a clear benchmark for what quality looks like, and stand by it when you're funding research.

David: My final comments would be for the practitioners. I don't know how many downloaded this episode given the title, or have listened all the way through, because we've been talking about research. But I strongly encourage the practitioner community to get involved and get connected with what's happening in the research community.

Demand better research. Put your hand up and find ways to get involved yourself. Offer the data and the case studies from within your own organizations. Draw researchers towards you, because we actually need the practitioners to be pulling more for the research, so that we can get to the work, get to the case studies, get to all these things that are going to give us far better safety theories, because we'll have far better data to work with.

Drew: If you reward the researchers who are doing the right thing, even if you're just rewarding them by giving them your time and attention, then you're going to get more of the type of research that is useful for you. Otherwise, if you just like to complain about research and complain about academics without rewarding the ones who are trying to do the right thing, then we're just going to get more of the same.

David: We're really interested to hear how you found the episode. If you have had a chance to read the manifesto, then let us know your feedback. As the authors, we're easy to get in touch with through the show. If you can't access the manifesto, we can send it to you as well. We're keen to know what you think of the manifesto, what your experiences with safety science research have been, what you'd like to see from research going forward, and what you think the big questions for our discipline are in relation to safety. We'd love to hear all of those things.

Drew: That’s it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Maybe in this case, hopefully in shaping your own research. Send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com.