The Safety of Work

Ep. 11 How are trade-off decisions made between production and safety?

Episode Summary

Welcome back to the Safety of Work podcast. On today’s episode, we are discussing how a trade-off decision is made between production and safety.

Episode Notes

We use the paper, Articulating the Differences Between Safety and Resilience, in order to frame our chat.

Quotes:

“So, you’re constantly in this fuzzy boundary of, well, we’ve made the trade-off for safety, but how do we know that we had to make it?”

“Step one was to do what we’ve suggested is necessary for a lot of safety research, which is to get out there and to at least spend some time watching it directly in context.”

“We need to be very mindful of piece-rate contracting strategies...which is that contractors don’t get paid if the work doesn’t get done.”

Resources:

Morel, G., Amalberti, R., & Chauvin, C. (2008). Articulating the differences between safety and resilience: The decision-making process of professional sea-fishing skippers. Human Factors, 50(1), 1–16.

feedback@safetyofwork.com

Episode Transcription

David: You're listening to the Safety of Work podcast, episode 11. Today, we're asking the question: how are trade-off decisions made between production and safety? Let's get started.

Hey everybody, my name is David Provan. I'm here with Drew Rae and we're from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work podcast. If this is your first time listening, then thanks for coming. The podcast is produced every week and the show notes can be found at safetyofwork.com. In each episode, we ask an important question in relation to the safety of work, or the work of safety, and we examine the evidence surrounding it. Drew, what's today’s question?

Drew: The question we're going to ask today is: how are trade-off decisions made between production and safety? This is something that we've talked about a fair bit in previous episodes. We've got competing operational goals: we've got achieving the main thing the business is setting out to do, which is the production, and then we've got avoiding getting hurt, which is the safety.

Obviously, it's a really complicated question, how those two things sometimes work together and sometimes need to be traded off against each other. We're not going to be able to answer it fully in 30 minutes, looking at just one paper. Hopefully, we'll be able to use the paper we've got today to talk a bit about the issue, provide a few practical takeaways, and maybe raise some questions that we can dive into deeper on future episodes.

Goal conflict is one of those ideas that comes up constantly in major accident investigations, whether it's spacecraft, trains, planes, or nuclear power stations. It comes up as this question of safety versus cost, or safety versus keeping going: getting the business done, getting the shuttle launched, or getting the trains arriving on time. Safety gets compromised in favor of one of the other operational goals, but it's something that is rarely addressed in detail in safety theories.

One of the people who has talked about it a lot is Rasmussen. You'll hear the name Rasmussen come up a lot whenever we talk about safety theories; he's kind of the granddaddy of a lot of the current safety intellectuals. You can trace the genealogy from Rasmussen: who he supervised, and who they supervised. I think I'm technically something like a fourth-generation Rasmussen. David, I think you're pretty much the same.

In the 1990s, Rasmussen created this pretty famous model that shows an envelope where you've got one line which is the boundary of cost, another line which is the boundary of quality, and another line which is the boundary of safety. You've got work getting pressure from each of those boundaries and moving very fluidly inside the space that those boundaries create.

The trouble is that we know the cost boundary and the pressure it places, and we know the resource and quality boundaries and the pressure they place, but we don't really know where the boundary is for safety unless there's something pushing back. You've always got this constant pressure pushing towards the safety boundary. That's what makes goal conflicts and trade-offs hard to understand and hard to make: we know that safety matters, we know that we need to think about it, but we've got these much more concrete, much more immediate, visible pressures coming from the other direction. That's a bit abstract, David. Do you want to give us a couple of more specific examples?

David: I'd really like to talk about some practical examples here just to make it clear what we're talking about when we talk about production and safety trade-offs. It's going to be similar to, but a little bit different from, the authority to stop work discussion that we had in a previous episode, Drew.

Let me give two examples that our listeners can work through. Say you've got a lone service worker who comes across a job where they're required to lift a difficult piece of equipment. Now, that worker decides to call for help. Another worker has to be sent over from a different job that they were doing, that other job has to be deferred, and so the overall job costs three times as much.

What happens there is that production has been sacrificed for safety, and there's no incident when the second worker comes along to help. But there's also no guarantee that there would have been an incident if the individual worker had tried to lift that piece of equipment themselves. You're constantly in this fuzzy boundary of, well, we've made the trade-off for safety, but how do we know that we had to make it?

I suppose a larger, more famous example: say, for instance, in the '80s NASA had decided to postpone the Challenger Space Shuttle launch due to the cold temperature on the planned day of the launch. It would have cost millions of dollars, and there would have been some irreversible public, media, and government reputation damage.

Say they then launched a month later with no incident: it was warmer, nothing failed, the launch was successful. They'd still have no way of knowing whether, if they had launched a month earlier on the day that they planned, there actually would have been an incident, because you can't prove something that didn't happen.

This trade-off between a certain cost, schedule, or quality disruption and the chance of avoiding a possible safety incident is what sits behind the concepts that we have today [...] as low as reasonably practicable, cost-benefit analysis, and return on safety investment. These concepts are still really fuzzy more than 25 years after they were introduced.

We don't know. That's why I think we don't have a lot of great theory around goal conflicts and trade-offs, even though a number of people have talked about them. Drew, resilience engineering literature and theory is probably the theory area that's had the most attention given to trade-offs and goal conflicts. Much of this was due to the early work of David Woods and his term, “sacrificial decisions,” after he was involved quite closely in the Columbia Space Shuttle investigation and NASA's now-infamous faster, better, cheaper ideology, which he concluded had encouraged risk-taking that compromised safety.

Drew: It's a bit of a risk, even when we start with something that sounds more sophisticated, like resilience, that it comes back to basically just blaming people for making a decision that we'd rather they didn't make. The real question that's raised here is: how are these decisions made? Are people even conscious that they're making a decision? What are the factors that go into making the decisions? It's clear that understanding how these trade-offs are thought about is really important for understanding why people sometimes go ahead, and why they sometimes make the trade-off in the other direction.

The paper we've got for this week is called Articulating the Differences Between Safety and Resilience: The Decision-making Processes of Professional Sea Fishing Skippers. Try saying that three times fast. It's a little bit of an old paper; it's one of the classics. It was in the journal Human Factors in 2008. The three authors are Gaël Morel, René Amalberti, and Christine Chauvin. I think the name you'd most recognize out of those is René Amalberti, who has gone on to become quite a big name in resilience and safety science theory.

David: I think, Drew, these authors are well credentialed to research and write about production and safety trade-offs. I like the way you framed it earlier. Let's just assume that production and safety do trade off in all organizations; I can't think of an organization where they wouldn't. Then what we should concern ourselves with, what we should be really interested in, is how people make decisions. When do they choose safety, and why? When do they choose production, and why? And therefore, how might we, as practitioners, create circumstances and environments within organizations that enable decisions to be made in the way that we want them to be made?

The researchers picked sea fishing as an industry, I think, specifically because a number of safety authors at the time, and some statistics from around then, had labelled sea fishing, and the maritime industry more broadly, as the world's most dangerous profession. Charles [...] would stress that human factors were always pertinent aboard ships. I think, Drew, you famously talked a lot about the Torrey Canyon incident back when you were the host of DisasterCast.

The Torrey Canyon was one of the first big supertankers; it wrecked near the Isles of Scilly in 1967. That was down to the captain taking a direct route over the Seven Stones Reef, inside the Isles, for the sake of saving six hours. And it wasn't just the saving of six hours: if the window had been missed, I think the Torrey Canyon would have had to sit around for over a week until the tides were good enough to come into the harbor. Running a 120-million-something litre, or gallon, tanker full of oil over a set of rocks to save a week was a decision that that captain made.

Drew: I guess I should point out here, since our listeners are all podcast listeners and I know at least some of them are suffering from DisasterCast withdrawal, that Tim Harford has a new podcast out. I think his very first episode deals with the Torrey Canyon disaster, and he's got quite a sophisticated story that he tells about it. His new podcast doesn't just cover safety; it covers disaster stories that have a moral to tell.

David: The title of that podcast is Cautionary Tales, for those who are interested in chasing it down. I'm up to date; it's a really good podcast. By way of numbers, for those of us who are used to dealing with numbers, as most of us in safety are: in 2000, the fatality rate for sailors was 100 per 100,000 sailors per year, as opposed to a reference rate of 15 per 100,000 for construction workers. That's more than six times the fatality rate in the maritime sector at that point in time compared with the construction sector. These researchers decided that if we're going to look at the trade-off between production and safety, let's go to an environment that's high-risk from a safety point of view and dynamic; let's find the one that we think is the most dangerous and go to work researching there.
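To make that arithmetic explicit, as a simple ratio of the two rates just quoted:

\[ \frac{100 \text{ per } 100{,}000}{15 \text{ per } 100{,}000} \approx 6.7 \]

so the maritime fatality rate in 2000 was roughly six to seven times the construction rate.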

Drew: The question is: how are we going to study decision making? If we're going to study people in a really high-risk occupation making decisions, the researchers can't possibly spend the number of hours necessary on board lots and lots of these ships, waiting for decisions to be made and then analyzing those decisions, particularly given that the decisions are made in such a high-risk environment. They have to have a method that would let them explore how people make decisions without having to sit and watch all of the decisions.

Step one was to do what we've suggested is necessary for a lot of safety research, which is to get out there and to at least spend some time watching it directly in context. One of the researchers spent 14 days on board a sea trawler observing the decision making. In particular, what they were interested in was the necessary factors that go into those decisions, so that they could replicate the decision-making environment somewhere that's a little bit safer to study.

They did that 14-day observation, and then discussed what they thought they saw with a bunch of other experienced skippers, to make sure that they could develop two really realistic simulations. They could then put a bunch of real sea skippers into those simulations and watch how they made the decisions. David, do you want to take us through how the simulations were run?

David: Once they had formed this model of the contexts in which certain decisions were made between safety and production, and of the relevant inputs, sources of data, and decision-making processes that were followed, they designed these two simulations. We were just talking about this before we jumped on the podcast: we've got to remember this research was done about 15 years ago. They didn't just hand scripts to the research participants; they designed a computer interface where the participants were able to select buttons that represented certain pieces of information, and then certain scenarios would flash up on the screen.

They had a fairly standard set of scenarios across this 14-day simulated trip, where on day 0, day 2, day 6, and day 10 they were given certain information and had to make a decision. They were constantly getting fed information about the catch, the state of their fishing equipment, and other vessels.

Drew: One of the cool things is that when we say “fed information,” the information was available, but they could choose which bits of information to look up and focus on, and the researchers kept track of that. For example, they could see whether the skippers looked at the weather reports before they left. Reports emailed from other captains were available, as was information about the weather forecast, as was information about where the fish were, and they could choose which bits of information they were going to use to make the decision.

David: When they made their decisions, they were given certain options that they could choose from. For example, five options: it might be continue fishing with this information, or move to another fishing spot, or return to the harbor, and so on, plus “other,” and they always had to explain “other.” Then when they made that decision, they had to provide a justification: I chose to find another spot to fish because of A, B, C, D, or E, where those justifications were all defined around whether it was for safety, whether it was to maximize the catch, whether it was to look after their fishing equipment, and things like that.

So they were getting really good insights into how people were accessing data to make decisions, what information sources they were accessing, then what decisions they made, and what their reasoning was behind making those decisions. Really interesting. As for the statistics: 34 participants were equally divided into two groups, so 17 skippers did each of the two scenarios, and they worked their way through the scenario.

Before we talk about the findings: there was a subset of that group, I think about eight, that were involved in a debriefing exercise, Drew. They were given a questionnaire based on the theory of planned behavior to get some additional data, but we won't talk about that too much now. They did do a third part of the study, which involved that questionnaire, just to compare what they found in their own study against the responses provided to the theory of planned behavior survey.
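To picture the kind of data this design produces, here is a minimal sketch in Python. This is our illustration, not the authors' actual software; the option and justification labels are paraphrased from the discussion above.

from dataclasses import dataclass, field
from typing import List

# Paraphrased labels; the paper derived its real option and justification
# sets from the 14-day field observation.
OPTIONS = ["continue fishing", "move to another spot", "return to harbor", "other"]
JUSTIFICATIONS = ["safety", "maximize catch", "protect equipment", "other"]

@dataclass
class DecisionRecord:
    day: int                    # decision points fell on days 0, 2, 6, and 10
    info_consulted: List[str]   # e.g. "weather forecast", "emails from other captains"
    option: str                 # which predefined option the skipper chose
    justification: str          # the reason the skipper gave for the choice

@dataclass
class SimulationRun:
    skipper_id: int
    scenario: str               # "good catch" or "scarce fish" condition
    decisions: List[DecisionRecord] = field(default_factory=list)

    def log_decision(self, day: int, consulted: List[str], option: str, why: str) -> None:
        # Record one decision point, enforcing the predefined answer sets.
        assert option in OPTIONS and why in JUSTIFICATIONS
        self.decisions.append(DecisionRecord(day, consulted, option, why))

# Example: one skipper in the scarce-fish condition chooses to move spots on
# day 2, justifying it by catch rather than safety.
run = SimulationRun(skipper_id=1, scenario="scarce fish")
run.log_decision(2, ["weather forecast"], "move to another spot", "maximize catch")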

Drew: A couple of quick things I want to point out before we get into the findings. The first one is these two different groups. What they're testing is this: one of the groups has access to good fishing, and so the catch is completed really quickly. In the other scenario, they have the same weather and the same decisions, but the fish are much more scarce. They didn't say this explicitly, but I think the theory they're going with is that people are going to make more dangerous decisions when they're much more worried about getting enough fish, and they're going to be more conservative when the catch is going well and they don't need to worry so much.

David: I don't recall, was that mentioned in the paper, Drew? Because I was thinking it was more that people were going to be far more willing to take the risk when the prize was greater.

Drew: That's interesting. I don't think they explicitly said which way they were assuming, but they had these two conditions that they were comparing.

David: Yeah, at one point in the paper they talked about mountain climbers, the people in the Himalayas who climb Mount Everest and the risks they take because of the size of the prize of summiting the biggest mountain in the world. When I interpreted that, I interpreted it as: if the fishing was going badly, then they might as well just head back to the harbor and come out another day, but if the fishing was going well, then they're going to stay out.

Drew: It's interesting that we made different assumptions, but yes, spoiler alert, we're going to find out that it makes no difference whatsoever.

The other thing I want to point out is one thing we do know from simulation studies (because they do this a lot, particularly in economics): people tend to take higher risks in a simulation than in real life, because they're dealing with hypothetical consequences, not actually going broke. The authors acknowledge this, and they try to make the scenarios as realistic as possible. Just a word of caution when interpreting the results: we should assume that in real life the skippers would maybe be a bit more conservative than they were in the simulations. David, hit us with the results.

David: The results are interesting. Basically, the researchers observed that from the time the skippers left the harbor aboard their sea fishing vessels, they never gave up on fishing. That was their core business; that's what they were there to do, even in extreme conditions, regardless of the weather, regardless of whether the catch was good or not.

The researchers say they didn't consider the skippers to be suicidal. However, the sea fishing skippers used multiple expert strategies to reduce risk, which meant that they never had to give up on actually fishing. The option that was offered to them multiple times in both scenarios, to suspend their fishing activity and return to the harbor, was never actually chosen, except by one single fishing skipper across all 34 runs of the scenarios.

Drew: Something that I love is, remember that the skippers had to explain each decision? The one skipper who did return to harbor, when asked to explain the decision, it had nothing to do with the weather. He was going in because he thought that everyone else was staying out and he'd get a better price for his catch if he went in now. That was consistent across the study: whenever skippers made decisions that were good for safety, there was always some non-safety reason. The guy who returned to harbor early was doing it to get a good price, even though he was also avoiding the worst weather. The skippers who moved away from the bad weather were making themselves safer, but they said they were doing it because the heavy swells made it hard to catch prawns; if they moved away from the bad weather, it would also be better fishing.

David: That was the importance of collecting that information about why they made the decisions. You could look at the decision, which a number of them made, to move into sheltered waters when the weather got bad and think, great, they're being safe. But they were actually only doing that to maximize their catch: when the nets are bouncing up and down, they can't catch the prawns on the bottom, so they moved in purely to get more catch. If it had been the opposite, they probably would have just stayed out in the weather.

Drew: What did the authors conclude when they found these results?

David: The authors conclude that systems that are run by craftspeople (systems that are highly dynamic, sometimes remote, with small work groups of very experienced people) are basically natively very resilient, because they rely on a high level of adaptability based on the actors' expertise. It's expertise gained by exposure to frequent and considerable risk.

It's like a cycle: the more risk people are exposed to, the more expertise they have in dealing with that risk, and then the more risk, I suppose, they're willing to take on because of their reliance on their own expertise.

In these situations, you'd say that each actor is responsible for their own safety. What the authors were concluding (and remember that this paper was published only two or three years after the resilience engineering movement first started) was that the native resilience present in these industries was probably more adaptive than anything we can try to put into our existing organizations and our existing systems.

Drew: One thing we should definitely point out here (and the title of the paper gives a hint of this when it talks about the difference between resilience and safety) is that when we say these systems have a lot of native resilience, that is not the same thing as saying these systems are inherently safe. This is a very resilient, very adaptive system, but it is also more than six times more dangerous than construction, which is itself one of the most physically dangerous industries.

When we talk about resilience, we're not saying, hey, everyone should copy this, this is great. What we're saying is that we should be very worried that maybe classic safety interventions are not going to work here, and that because of the way the system works, we should be very careful about how and when we intervene.

David: Yeah. I think the authors expressly state that they're not even sure it's possible to design a safety system and still preserve the craftsmanship and expertise, and therefore the native resilience, in the system. They're saying that classic safety interventions, like you said, Drew, putting in rules, procedures, requirements, processes, and all these things that we might think of as classic safety interventions, may not work in this type of context.

The authors also propose that such interventions are probably going to have more adverse effects on safety, given how strongly these types of industries pursue production. Putting additional constraints into the system is likely to cause less safe workarounds than the way people are currently adapting today.

Drew: This doesn't come directly from the study, but it's well reasoned; it's not just speculation either. What they can see in these industries is that whenever there is some constraint in the system, it increases the motivation for people to take the high-risk, high-return strategies. That's been seen, for example, when fishing resources get scarcer: that increases the amount of risk-taking that each individual skipper has to do.

When there are restrictions like limits on the time that people are allowed to fish, the conditions they're allowed to go out in, or the size of the catch they can take, all of those previous, well-intentioned policies have had these perverse effects. The concern is that safety rules would follow the same pattern they've already seen in the fishing industry: putting constraints around the system just makes it harder for people to adapt, and more likely that they'll take higher-risk strategies in order to survive and be productive.

David: They've actually proposed a formula for observed safety at the end of the paper. They talk about two things: constrained safety, which is SC in the formula, and managed safety, or self-managed safety, which is almost that individual adaptive capacity.

They talk about different types of systems, with a different balance between your ability to constrain safety through normal, traditional safety management practices and the necessity to rely on self-managed safety and the adaptive capacity of individuals. They propose different formulas, with different weightings for different types of industries, for what safety you'll observe in that particular system.
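As a rough sketch of that idea, in our own notation rather than necessarily the paper's exact formula:

\[ S_{\text{observed}} = \alpha \cdot S_C + \beta \cdot S_M \]

where \( S_C \) is constrained safety (rules, procedures, and other system constraints), \( S_M \) is managed or self-managed safety (the adaptive capacity of the people doing the work), and the weightings \( \alpha \) and \( \beta \) shift between industry types: tightly regulated industries lean on \( S_C \), while craft industries like sea fishing lean on \( S_M \).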

Drew: This is something that Amalberti likes to do a lot: putting these categories around types of industries. He makes the suggestion much more explicitly in his book Navigating Safety, that you can almost decide what type of industry you're in and how sensible it is to apply tight safety controls in that industry. He thinks that, for example, in aircraft design it does make sense to put in tight rules and tight management systems, whereas industries like sea fishing he puts down the other end of the spectrum.

It's interesting that he puts healthcare down at the same end as sea fishing, as one of these complex, craft-driven industries where it wouldn't necessarily be productive to put in place tight rules and constraints.

David: For those of our listeners who are in the healthcare sector: at a couple of conferences I've spoken at in the last year or two, I think the whole industry is struggling to understand where it might sit on that continuum. It has adopted a lot of constrained safety practices from other industries, including aviation, and yet there are some situations in healthcare that are as complex and adaptive as any environment you're going to find in any industry anywhere.

Drew: David, that's probably a good point to move on to practical implications. Just before the podcast, we were looking at the heat map of downloads. I didn't see any sitting in the middle of the North Sea, so I'm presuming that none of our listeners are out on sea trawlers. What conclusions should we be drawing from this if we're not ourselves sea skippers?

David: Yeah, let's try to draw some conclusions that are a bit more generalizable than just to the seafarers who might be listening. One of the things that I took out of this, which is a bit of an extension of the finding, is that if the fishermen don't catch fish, then they don't get paid. It's not like these fishermen are getting paid an hourly rate by anyone to sit in the harbor and not catch fish. If they stay in the harbor because of bad weather, they don't get paid.

We need to be very mindful of piece-rate contracting strategies. These happen in lots of industries: construction, oil and gas, utilities, transportation. Contractors don't get paid if the work doesn't get done; typically, they might have a fixed rate for a particular unit of production.

If you're putting your business partners, your delivery partners, your contractors, or even your own employees (which probably wouldn't be the case, but your contractors at least) in these environments where, if they don't do the work, they don't get paid, then you need to understand the push you're creating towards trading off safety against production, because you're expecting people to weigh the certainty of not getting paid against being cautious about their own safety.

What you're going to get in that environment is people continuing production right down to the absolute minimum acceptable level of safety that they hold as individuals or as workgroups, because the pressure in those organizations to get the work done, and so get paid, is going to be very high.

Drew: Following on from that, the skippers in this study were very open to any type of safety which didn't force them to compromise on maximizing their catch. They didn't like the idea of having to trade off safety for productivity, but they were very much on board with anything that would give them the tools to be safe while they were being productive.

Anything that gave them, for example, better information about where the weather was, they were very open to. Anything that gave them tools so that they could navigate and be safe in bad weather conditions, they openly encouraged. Whereas safety work that tries to control their decision-making is not helpful at all; it just puts them in an impossible situation. I think that's a good general rule to take away.

Giving people assistance with safety and giving them options is unlikely to have perverse consequences. Where we run the risk of doing safety that's problematic is when we're actually trying to make people make decisions differently from how they would otherwise have made them. We might say we're trying to help them be safe, but actually we're just trying to force them to make different decisions. That's what can have perverse consequences.

David: Yeah, I think, Drew, the third takeaway is that, obviously, for the more dangerous jobs, where the risks our workers face are greater, workers will acquire skills in how to manage those dangerous risks in their work. That can, in turn, further increase their risk-taking; it's a meta-knowledge type of effect.

You have the story of someone who's a roof tiler, for example, who has been doing that for 30 years without any fall protection and has never fallen off a roof. They've probably had a whole lot of things happen in those 30 years of work through which they've acquired a whole lot of expertise and skill to adapt to and manage that working-on-the-roof environment. That experience can, in turn, further increase their risk-taking.

I think, Drew, we talked about this as a finding in the paper on why people break rules, way back in maybe episode one or episode two. Knowing that that's happening in your workplace, and knowing that your people who work in risky roles are acquiring a lot of skills to manage those situations, means you need to work closely with them to understand how they are adapting and managing those situations, and whether that's consistent with the way the organization wants to support them to do their work.

The other side I also want to think about is the development of this adaptive capacity. One of my fears, Drew, is that we've taken so much variation out of work through our pursuit of standard operating procedures and prescribed work methods that, if one step is out of order for a job, there's a risk that our workers don't have any experience or expertise in how to safely adapt around that work.

The example that I give is something like this: if people always do a task off an elevated work platform, and one day that platform isn't available, do you know if your workers have ever done that task off a ladder before? Because we know from this study, and from a lot of other studies, that workers will find a way to get their work done.

We spoke about it last week with the authority to stop work paper: people will find a way to adapt around the situation they face and get their job done. Understanding how they're going to adapt, how they're going to do that, and whether you can support them to do it in a safe way means you might have to let people do their work in different ways at different times, to develop the capacity to handle the problems that they face in their work.

Drew: One thing that really frustrates me is that people try so hard to copy the aviation industry. They copy the safety management systems, which suck up a lot of people's time, but the one obvious thing that they don't copy is just how much aviation does simulation training to allow people to develop skills in different situations.

Yes, that training takes up time, but so do safety management systems. Working in simulations exposes people to risks in ways that let them develop the skills without actually having to face the risks. There are a number of jobs where I think that's a very positive thing to do.

David: Yeah, I think, Drew, there's this weird trade-off I've seen around the cost and risk of doing those simulations. I know of organizations that have suspended their training of fire wardens with real fire extinguishers because of the cost of letting people actually discharge fire extinguishers in a controlled way. People no longer get actual simulated training in how to use the equipment that the organization holds them responsible for using.

I also know of organizations that have suspended physical evacuation training of workplaces because people rolled ankles and had recordable injuries on the stairs while they were doing an evacuation drill. The time, the cost, and the recordable incidents associated with simulation training mean that it can be easily discarded, whereas in aviation it's a mandatory requirement for a number of sectors within the industry.

It is important to understand that if you can give people the expertise in a simulated environment, then you can have a little bit more confidence that they might be able to navigate the situation when it occurs for real.

Drew: Should we move on to what we don’t know?

David: Yeah, let's do that.

Drew: One thing that this study really sparks for me is that I'd love to see more of this genuinely innovative research. You pick up paper after paper and it's survey, survey, survey. Then this one, they did a really cool and interesting simulation. Putting people into simulations has its advantages and disadvantages; all research has things that it's good at and weaknesses. But at least when you do research differently, you encounter different advantages and disadvantages.

One thing I'd like is just some ideas for this type of research. I'd love our listeners to get in touch with us on LinkedIn or email and tell us: instead of having to fill out surveys as research, what type of research would you find fun to take part in? Thinking about things like simulations, what would you find a fun simulation to either run yourself or be a participant in, and what could we learn from it?

David: There have been really useful safety findings from simulation research, Drew, particularly in aviation, nuclear control rooms, and other areas that I'm aware of, and they've been quite fundamental to our understanding of naturalistic decision-making, emergency response, and a whole range of situations. I'd love listeners to reach out. I'm not even going to add anything else. I'd love listeners to reach out to us and help us come up with some questions that we could actually create some simulations around.

Today, we asked the question: how do trade-off decisions get made between production and safety? Drew, do you want to have a go at the short answer?

Drew: Gosh, I don't know that there is a short answer for that one, except to say that decisions don't get made as straightforward trade-offs. They get made in a context where production is a necessity and safety is something which is much less certain and much vaguer.

David: I think that's right. If I had to give a short answer, I'd probably say something like: individuals will always try to maintain production in a way that doesn't create, for them, an unacceptable level of risk, whatever that means for them as an individual. To try to manage the trade-off between production and safety, we actually need to understand all the things we can do to create an environment that allows a balance of decision-making to occur between production and safety. And following on from last week: just telling people to stop work when it's unsafe is, this research shows, not going to be an effective strategy.

Drew: That was a very long one for a summary, David.

David: Yeah, I had to be clear, so I kept going.

Drew: We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com.