The Safety of Work

Ep.81 How does simulation training develop Safety II capabilities?

Episode Summary

Drew Rae and David Provan are back in today’s episode, discussing research on how simulator training in the maritime industry can develop Safety II capabilities.

Episode Notes

The paper reports some interesting results from these simulated situations, including that the post-simulation debriefing had a large impact on how much the participants felt they learned. The doctors discuss whether the research was conducted rigorously and whether the findings could have been tested against alternative scenarios to better support the theorized results.

 


Quotes:

“Very few advocates of Safety II would disagree that it’s important to keep trying to identify those predictable ways that a system can fail and put in place barriers and controls and responses to those predictable ways that a system can fail.” - Dr. David Provan

“It limits claims that you can make about just how effective the program is. Unless you’ve got a comparison, you can’t really draw a conclusion that it’s effective.” - Dr. Drew Rae

“A lot of these scenarios are just things like minor sensor failures or errors in the display which you can imagine in an automated system, those are the things that need human intervention.” - Dr. Drew Rae

“Safety I is necessary but not sufficient - you need to move on to the resilience solution.” - Dr. Drew Rae

“I don’t really think that situational complexity is what should guide your safety strategy.” - Dr. Drew Rae

 

Resources:

Griffith University Safety Science Innovation Lab

The Safety of Work Podcast

Feedback@safetyofwork.com

Research paper

Norwegian University of Science and Technology

Episode 79 - How do new employees learn about safety?

Episode 19 - Virtual Reality and Safety Training

Episode Transcription

Drew: You are listening to The Safety of Work podcast episode 81. Today we're asking the question, how does simulation training develop Safety II capabilities? Let's get started.

Hey, everybody. My name is Drew Rae. I'm here with David Provan. We're from the Safety Science Innovation Lab at Griffith University. Welcome yet again to The Safety of Work podcast. In each episode, we try to ask an important question in relation to the safety of work or the work of safety, and have a bit of a look at the evidence surrounding it. David, what's today's question?

David: Drew, today's question is, well, as you said, how does simulation training develop Safety II capabilities? The paper that we're going to talk about is an interesting one. Actually, I cheated a little bit this week because it was fired directly into my inbox; I follow one of the authors on Google Scholar. It's something that you can do: create an account, follow authors, and get notified when their papers are published.

We're going to talk about that today because I thought it was a really nice way to explore an existing safety capability development process inside organizations and to look at what that might do to develop some of the Safety II capabilities that come up in the more recent safety theories. I suppose underneath the topic we're going to talk about today is this idea that in our complex socio-technical systems, our frontline operators need to be able to handle very variable work conditions. They need to be prepared to manage both the known situations and the potential unknown situations.

Before we get into it, Drew, I think one of the broad perspectives in resilience engineering and all of the new view safety theories is that work is always changing. Therefore the performance of the work, even of some very routine tasks, is always variable as the context, the situation, the people, and the circumstances surrounding that work change. These constantly changing conditions of work require operators to constantly adjust their work.

Sometimes these adjustments save the day and sometimes they lead to incidents. This sits right in the middle of the debate about how much we follow the rules and how much we trust and rely on the experience of operators. We really need to consider what individual and team skills are necessary to enable frontline teams to adjust their performance in order to maintain safe and efficient operations during both expected and unexpected situations.

Drew: I think this is a good illustration of where the application of new safety theory can directly lead to changes in safety practice. There are really two criticisms that Safety II gets. The first one is that there's not a lot of research and evidence behind it, which is a fair criticism, but you can apply that to almost anything in safety. So it's not a fair criticism, or at least not fair to single out Safety II for it.

The other common criticism is that it's all theory without a lot of guidance on what to do in practice. I think this focus on the skills needed to handle variability is a good example of where you can directly take Safety II attitudes and theories and use them to change the way we're doing things. Use them to adjust our training so that it's less about here is the one right way, the one safe way, to do work, and more about equipping people to handle a varying and unexpected world.

David: Yeah, Drew. I think what you said there is true. As people would know by now, 81 episodes into The Safety of Work podcast, whether we talk about management systems, safety cases, audits, incident investigations, life-saving rules, or critical risk management programs, you can make the same statement about the evidence for those practices being weak, variable, and in some places not really existent at all. Which is why we're on this crusade to bring the research that is there to your organizations.

The paper we're going to review today, like you mentioned, talks about this idea in the resilience engineering literature of the skills and capabilities that individuals and teams need. But the authors point out that there has been limited research into, and practical implementation of, targeted resilience capability development. When we say resilience capability, we're talking about the capability of people for resilient performance of their work tasks.

Just as an aside, Drew, I think this paper provides a really good overview in the literature review of the theoretical frameworks for resilience engineering and Safety II. It lays out a really nice, clear narrative that bonds the ideas of Safety I and Safety II together, showing how Safety II plays out within a context of Safety I. We're always going to have a compliance context surrounding work, and a need for routine, dependability, and reliability in our work tasks. But it shows how those tasks sometimes need to switch modes into Safety II, which I think is a really nice description in the literature review.

Drew: Yeah, you've got me thinking a little bit about the way in Japan they have this interaction between Buddhism and Shinto. Depending on what part of life you're dealing with, you might apply the traditions of a different religion. If you're dealing with death, you're going to use one. If you're dealing with marriage, you're going to use the other.

I almost think that the way it describes Safety I and Safety II here is like handing back and forth between theories depending on the situation. We use Safety I to identify what can go wrong and then we use Safety II to steer us away from those waters. We use Safety II to keep us generally safe. But when we get into an absolute emergency, we switch back to Safety I.

David: Yeah, Drew. A nice pun there about steering away from the dangerous waters because we are going to be talking about the maritime industry today. We haven't introduced the paper, so, unfortunately, that's a joke that I only got at this point. We'll introduce the paper in a moment, but I think very few advocates of Safety II would disagree that it's important to keep trying to identify those predictable ways that a system can fail and put in place barriers, controls, and responses to those predictable ways that a system can fail.

The main argument though in Safety II is that this will never be enough when we're faced with complexity and variability. We'll never be able to predict all of the ways that our system can fail. Thus, we always need to maintain this close relationship, Drew, like you described between the two modes of safety. Do you want to start by introducing the paper?

Drew: Sure, though if I have to introduce the paper, I need to pronounce the authors’ names. The paper is called Balancing Safety I and Safety II: Learning to manage performance variability at sea using simulator-based training. They've pretty much put everything about the paper into the title there. They've told us the theory, they've told us what they're studying, they've told us the industry. Pretty much the only thing they haven't thrown into the title is the exact research method.

The lead author is Aud Wahl. Other authors are Trond Kongsvik and Stian Antonsen. They're all from NTNU, which is the Norwegian University of Science and Technology. Frequent listeners may notice that that name crops up a fair bit. There are a few centers around the world that have strongly Safety II influenced research.

There's a real cluster in Norway around maritime and coastal industries, which are highly variable industries where Safety I approaches don't fit very well. You see lots of Safety II and resilience style studies coming out of that. Along with that I think comes a good tradition of very grounded ethnographic or interview-style research.

David: The paper was published in Reliability Engineering & System Safety. It's got a late 2019 publication date. Do you want to mention anything about that journal? We haven't heard a lot from RESS so far on the podcast.

Drew: Growing up, I always thought that there were two big safety journals, Safety Science and Reliability Engineering & System Safety. Depending on what type of study or what type of focus you have, you tend to publish in one or the other as your target journals. If you've done a good study, you think, okay, that's good enough to send off to either Safety Science or to RESS. Then if you don't send it to one of those, then you might pick one of the next tiers of more specialized or boutique journals.

Safety Science is rather eclectic. Reliability Engineering & System Safety tends to go for more quantitative, more mathematical, more process-driven approaches to safety. It's a little bit surprising to see a study both about education and using interview methods published in RESS. I don't know the story behind that.

David: The paper had some good photographs and diagrams. The simulators are pretty technical. But it was nice to see ethnographic research in our Reliability Engineering & System Safety journal.

Drew: Now that you mention it, maybe it's that they're dealing with a highly automated system. RESS tends to be a bit more technology-focused.

David: Yeah, and there's quite a big system safety theme in what they're talking about, which we'll get to shortly. The study was in the maritime industry, like we said. In the literature review, they said that in 2017 there were almost 3000 fatalities and nearly 100 ships lost internationally to maritime incidents. That blew me away because we don't hear about two ships a week being lost or sunk at sea. Did that surprise you?

Drew: The number of fatalities didn't surprise me, but the number of actual ships lost did. Although we're talking about ships of all sizes here. Lost doesn't mean disappeared off onto a Pacific Island, never to be seen again. It basically means written off.

David: Yeah, and I think what we saw in the literature review is that they talk about how human error is seen as a major contributor to these incidents. I suppose we can just look at the response to the event in the Suez Canal in March this year, which blocked one of the world's most important, if not the most important, shipping routes for almost a week, and where the blame fell for that incident.

These maritime workers operate in extremely remote and variable work contexts. From a research point of view, these workers really have this need to manage both known and unknown situations.

Drew: As we're going to see in the paper, maritime is very, very automated. Ships' bridges can look as complicated as aircraft cockpits, with many of the same issues. This idea that human error is the major contributor to accidents is very similar to the way that pilots typically get blamed for aircraft accidents. Most of the time, the vessel or plane is being operated totally automatically.

The person on the bridge keeping a lookout and monitoring the controls might not even be able to see anything out of the window due to the weather conditions. But when something goes wrong, the last line of defense is the human taking over. The last thing that pretty much always happens in a maritime accident or an aviation accident is that the human fails to stop the accident from happening, and they take the blame. That doesn't mean they are the major contributors. But it means that human performance is always a vital part of what we use to prevent accidents.

David: Yeah, absolutely. The aim of this study was to explore how resilient performance can be achieved in practice, which is a great aim: taking these theoretical ideas and looking at how they can be achieved in practice. And doing that using simulator-based training to maintain safe operations by managing performance variability and developing these resilience skills. The training we're going to talk about is the training of deck officers who operate a very specific computerized system on what they call shuttle tankers.

These are the vessels that head offshore out to an oil rig. Hover is not the right word, but they sit alongside for between one and three days while the vessel fills up with oil, then they head back into the harbor, offload it into a storage tank, and head back out again. Those are shuttle tankers. The simulator-based training allows the deck officers to experience the operation of those tankers in a simulated environment that looks and feels like their workplace.

They can safely test the ship’s operational limits by trial and error, and do all those other good things that you would normally do in a simulator, if you think about pilots in a flight simulator and the like. It's a safe environment that's as close to reality as possible. You can have the accident without having the accident.

Drew: David, I don't know if you thought it was funny, but the trainees in this scenario are being taught to be dynamic positioning operators. You used to be captain of the ship; today you get to be a dynamic positioning operator. Someone who looks after a computer, which in turn looks after the automatic positioning and heading control of the vessel.

David: It's probably a bit like a pilot being renamed the autopilot operator instead of being called a pilot. But that's exactly what this is: dynamic positioning, DP. There are different types of DP systems on different types of vessels, DP2, DP3, depending on the vessel. That's how the computerized system holds the vessel in basically the same location.

You've got the wind, the waves, and everything moving around. The dynamic positioning system either keeps the vessel at rest or keeps the vessel on a particular heading using a whole range of very sophisticated GPS and sensor information from the vessel. The operator really, you're right, Drew, performs a stand-and-watch function, applying supervisory control while the computers dynamically manage the position of the vessel.
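
For readers new to dynamic positioning, here is a minimal sketch of the kind of supervisory loop being described: a toy, one-dimensional station-keeping controller in Python. The gains, mass, and current force are invented for illustration; the paper does not describe the actual DP control algorithm.

    # Illustrative only: a toy one-dimensional station-keeping loop.
    def thrust_command(position_error, velocity, kp=2.0, kd=8.0):
        """PD controller: push against position error and damp vessel motion."""
        return -kp * position_error - kd * velocity

    position, velocity = 0.0, 0.0              # metres off station, metres/second
    current_force, mass, dt = 4.0, 100.0, 0.1  # a steady current pushes the vessel

    for _ in range(3000):                      # five simulated minutes
        thrust = thrust_command(position, velocity)
        acceleration = (thrust + current_force) / mass
        velocity += acceleration * dt
        position += velocity * dt

    # A pure PD loop settles with a steady offset against a constant current
    # (kp * offset = current_force, so 2 m here). Real DP systems add integral
    # action and fuse several position references to remove such offsets.
    print(f"offset after five minutes: {position:.2f} m")

The point of the sketch is only that the loop holds station by itself; the operator's job, as described here, is supervisory: noticing when the loop stops behaving.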

Drew: Which would be nice and easy if the vessel was floating in a bathtub. Not so easy if the vessel’s out in the North Sea. If listeners are interested, the paper’s got all sorts of details about dynamic positioning, and details and pictures of the simulator setup. But that's not really the focus of the paper's findings.

The paper’s also got a bit of a literature review about learning and how simulators are meant to support learning. This is something that we've talked about a couple of times on the podcast, most recently in episode 79. We were looking at learning for new trainees in retail, elderly care, and metal trades. I think we've also looked at simulators before, but I can't quite remember the episode, David.

David: I think early on, maybe in the 30s or so, we looked at virtual reality and safety training.

Drew: We'll throw in a bit of a recap here about how learning happens. Learning is one of the key elements of resilience according to Erik Hollnagel. He says that a resilient organization anticipates, learns, monitors, and responds. Learning is pretty obvious. It's about modifying or acquiring new knowledge and skills. We want this to happen for the organization, but for it to happen for the organization, it also has to happen for individuals within the organization.

Because knowledge doesn't just sit inside pieces of paper. Knowledge sits inside people's heads. To have it in the organization, individuals have got to learn. Do you want to take us through the single- and double-loop learning?

David: The idea that's typically talked about in learning, in terms of single-loop and double-loop learning, is that we can either intellectualize the learning or operationalize it. There's a lot in the paper about theories-in-use and espoused theories. My understanding of the way it's used in this context is that we can give people the knowledge, but the simulator training is really about making sure it actually changes the way they practice, changes how they think about and perform their tasks.

I think one of the comments in this paper is that most organizations genuinely only change the way people think, not the way that they act, if that makes sense.

Drew: This is something that we touched on in episode 79. When you're learning in school or university, we've got this quite sophisticated model of how you learn. You start off with a problem, you then do something in response to that problem. You reflect on the stuff that you've done. Most of the learning happens during that reflection on performance, not in the upload of content that allows you to do the performance.

But when we come to organizational training, very often we think that people can learn just by uploading knowledge to them. We don't give them the opportunities to learn by doing, let alone to learn by reflecting on doing. The idea of a simulator program is to put in all of those steps. To give people a very concrete experience where they're actually piloting the ship; to get them to think and talk about what they're doing, which is the reflecting; then to use those lessons to modify their understanding; and then to go back, try again, and apply what they've learned by experimenting.

That basically takes you through a cycle because once you've done that, you're back to having a concrete experience, which you can then reflect on again and continue to learn by continuing to do.

David: The good thing about the process we're going to talk about in the method here, which was already set up in this established training program, was that built-in reflection. It's not just the simulator training itself. It's important to understand that the results we're talking about today were generated by that whole cycle: the knowledge component, then the practice component, and then the reflective component.

Even in this situation, there was a very interactive reflective component between peers on a particular training program. There might have been up to seven participants, and they all had to go in the simulator. They all discussed what they did. They got to observe what other people did. The authors reflect that it was those parts of the training process, alongside the simulator experience itself, that probably generated the resilience capabilities, more than just using the simulator. Because using the simulator is just like doing normal work.

Drew: It's kind of fun because, in normal work, you don't get to crash the ship.

David: Your version of fun, Drew. I'm one of those people who, having been in a flight simulator, had the sole purpose of trying to take off and land, not intentionally trying to crash the plane. It's actually harder not to crash the plane.

Drew: Actually, I tend to use simulators to try to break them, by seeing if they can model the weird situations you can put them in.

David: Speaking of the North Sea, this study was done in the North Sea. I was in Aberdeen and I got an opportunity to be in one of the large offshore aviation simulators for one of the operators there. I reckon I got the helicopter in the air for about 16 seconds before it fell back to the ground upside down. When you've got a simulator instructor saying, gee, I've never seen that before, you know you're not doing that well.

Drew: Should we go through the methods for the research?

David: Sure, let's do that. It was a mixed methods approach, which was good. They collected data over a six-month period and observed two full DP, dynamic positioning, training programs. These programs run over two days, so for two classes they observed the entire end-to-end program. Alongside that, they did 30 to 60-minute interviews with 12 instructors and 7 course participants. With the instructors, the researchers were asking questions like: what characterizes good simulator-based training? What do you emphasize in your debriefing sessions and your interactions with the learners?

For the participants, it was about describing the simulator-based training that best enabled them to handle errors in the DP system or incidents during DP operations. Really asking the participants: what was it about the training that gave them the capability to handle the unexpected situations they face? These are pretty straightforward questions for what the researchers were trying to get at.

Drew: I'm curious what provoked the research. It seems to be set up as a bit of a training evaluation. Typically, this thing happens, people are running a program and they just want to know, is the program effective? Sometimes that's genuine inquiry. Sometimes it's because the people who are funding the program have insisted that it be evaluated. Sometimes it's because it's part of some larger organization that just requires that all programs be regularly evaluated.

It's fairly normal in this design that the researchers can't influence the thing that they're evaluating. What we'd really love to see is some sort of controlled experiment where you compare people during this program with people who do some other type of training activity. But usually, there's simply just no opportunity to do that. The researchers just have to come in and look at the program as it is now, which gives you a nice naturalistic feel and naturalistic description. But it limits claims that you can make about how effective the program is. Unless you've got a comparison, you can't really draw conclusions that it's effective.

David: Yeah. I think what would have been really good with this study is, like I said before, there was a whole training process wrapped around the simulator, and the authors make some claims about the impact of that surrounding process. It would have been really good to have people do just the simulation component, maybe without the feedback or without the other processes. Then you could actually make some of those claims confidently about which parts of the training process were contributing to the claimed outcomes. Would that be an idea, Drew?

Drew: Yeah, that's a really good thought, David. I hadn't thought about testing that particular aspect of it, but you're right. They do make some fairly strong claims about the importance of certain parts of the training program. If you wanted to test out those claims, the way to do it would be to remove those things and see what happens. See if they genuinely are as important as the researchers claim.

David: Yeah. One of the practical takeaways that isn't written down, which I've been thinking about as we've been talking, is this: the simulation environment is replicating normal work, normal work is variable, and we give people variable situations in the simulator. So when people read this study and see the process that gets wrapped around the work, with the debriefing and the peer learning, they could ask, well, can I develop resilience capabilities without the simulator, just by wrapping this program around normal work activities? That's something that I'm curious about.

Drew: Logically, according to the theory they're applying and the way they're describing the learning, you could do that for everything except learning the limits of the system. As we'll get into in the results in a moment, one of the key things you can explore in a simulator is just how close you can get to the edge, and then reflect on it and talk about it, which we would rather people weren't deliberately doing in their normal daily practice.

David: Nice. Assuming, that is, that the edge of the system in the simulator is actually where the edge of the system is in reality. Okay, all right. So the results, Drew. There are three key aspects of learning to manage performance variability that came out of the grounded theory analysis, which was a blank-page, bottom-up analysis of all of their data. The first is the ability to prevent adverse events by recognizing errors and solving problems in a flexible way.

How early can operators detect a problem with this automated, computerized system, and can they solve that problem in flexible ways? The second is the ability to expand their limits of action through shared knowledge. That's learning more about the way the technology and the vessel operate, learning more about the different actions they can take as an operator, and seeing what actions others take, all of which expands their repertoire of actions.

The third one is the ability to operate the system with confidence. Particularly when operators need to take over manually from the automatic system, like you mentioned earlier, they have to be able to do that swiftly and confidently to take control of the situation. Those are the three areas where the authors said: here are our results, here are our three areas.

Drew: Should we talk through each of them in turn, David?

David: Yeah. Let's make a few comments about each of those. Do you want to kick us off?

Drew: The first one is about recognizing errors and anomalies. They put a big focus on very early identification, before anomalies become actual problems: training people to recognize, I guess they don't use the terms early warning signs or weak signals, but what they mean is recognizing quite quickly that things are not quite right. One thing I found really interesting about this one is that they create their scenarios by using incident reports.

This is a lovely blend of Safety I and Safety II because your incident reports and investigations are very much a Safety I activity. But they take the outcomes of that and feed those into designing training scenarios for resilience, which is a very Safety II attribute.

David: I think aviation is very similar. The instructors, when they're designing these, will read the incident reports and see that a particular sensor failed and the vessel responded in a particular way. Then they'll actually program that into the simulator and use it as their scenario. The participants said it was really important that these scenarios were realistic errors and problems. It's all fine to say, okay, the ship is sinking and there's this massive emergency. But the participants said, look, we get more out of the program when we're dealing with realistic, minor types of problems, which are subtle but with the potential to cause us big issues if we don't understand how to manage them.

Drew: A lot of these scenarios are just things like minor sensor failures or errors in the display which, you can imagine in an automated system, are the things that need human intervention, because the automation is only as good as the information that the automation can gather. If a sensor starts to go wrong and the system starts to drift, the operator needs to recognize that before things get out of hand.
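
As an illustrative aside, one way to picture the minor sensor failures Drew mentions is a consistency check across redundant position references. The sketch below is hypothetical Python; the sensor names and the three-metre tolerance are invented, not taken from the paper.

    # Illustrative only: flag a drifting position reference early, while it
    # is still a weak signal, before the automation quietly follows it.
    from statistics import median

    def disagreeing_references(readings, tolerance_m=3.0):
        """Return names of references that differ from the consensus
        (median) of the redundant set by more than the tolerance."""
        consensus = median(readings.values())
        return [name for name, value in readings.items()
                if abs(value - consensus) > tolerance_m]

    # Three redundant position references; gps_1 has started to drift.
    readings = {"gps_1": 105.6, "gps_2": 100.9, "hydroacoustic": 101.2}

    suspects = disagreeing_references(readings)
    if suspects:
        print("Early warning, check:", ", ".join(suspects))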

David: What we're talking about here is like the Boeing MAX 8 situation, where if a sensor provides certain data back to the computer, the computer will respond and take action for the vessel based on that data. The operators have really got to know what the system is doing, why it's doing what it's doing, and when to intervene. The operators get to experience this, Drew. They get to ask, well, if I do this, what happens? If I intervene like this, what does the vessel do?

This broadens their understanding. This gives them a wider scope of action. Since we're talking about resilience engineering, this is the idea that Professor David Woods talks a lot about: capacity for maneuver. The more I know about the system, and the more I know about what happens when I act in certain ways, the more capacity for maneuver I've got when I'm faced with an anomaly or a problem in my normal work.

Drew: The second thing that they talk about is the idea of shared knowledge to define limits of action. This encompasses a few different ideas; to me it was a little bit unintuitive that they put these together, but it's still kind of interesting. Some of this is quite specifically training in how to follow specific procedures, so: here is what to do in this situation. But some of it was also giving people freedom within those procedures and giving them variability.

This is where a lot of the interaction between trainee operators happened. Giving different trainees the same scenario and getting them to watch what each of them does following the same procedures, they still make different choices and have different ideas about how to deal with it.

David: Yeah, Drew. Like you said, in this scenario it's just the loading operations under dynamic positioning, which kicks in about 10 nautical miles out from the platform or the floating production, storage, and offloading facility, whatever it is. Then there's this procedure: from 10 nautical miles down to 900 meters or whatever it is, this happens; then down to 500 meters, this happens; and this is how many minutes or hours each stage is supposed to take.

Their time in the simulator was anywhere from an hour and a half to three hours, and they're actually going through this very specific procedure. Like you said, they've got all sorts of different weather conditions thrown at them. I think the procedure in the paper is only about seven steps for this whole process. But even within that, it was surprising just how flexibly the operators managed the situation based on the weather or just the particular way that they operate.

Drew: I didn't actually know where to throw this into our discussion, David, so I've just put it under this section. Another thing I noticed from the paper was that the participants weren't just given scenarios by the instructors. They could also come in, ask to use the simulators, and use them to re-practice particular tasks they wanted to work on. Sometimes they actually had a question from operating the real system, and they would come in and try to answer that question by using the simulator.

All of this speaks to almost like a community environment where the simulator is one of the tools that people have in common that they can use to try things out. But it's really only part of the broader conversation they're all having with each other about how to do work and how to do work safely, and what's acceptable, what's not acceptable. 

David: Yeah, absolutely, Drew. They get to see their peers, how they respond to situations, and obviously then what happens. Again, a little bit like that first point. We're just expanding our understanding of the system and our options to respond and adapt. 

Drew, the third one is operating the system with confidence. The participants said that to be successful as a dynamic positioning operator, you must feel safe in an unusual context. Like you said, you're on the bridge of a vessel offshore, you might not be able to see out of the windscreen, you know you're somewhere near or beside an oil and gas production facility, and you've got no hands on the controls.

The idea is that if we've got detailed enough knowledge of the way the dynamic positioning system works, and enough knowledge of the vessel, we've got the confidence to be calm and focused in that very unusual situation.

Drew: One of the particular reasons they want people to be calm is that they want people to be willing, when necessary, to turn off the autopilot, with the confidence that that's not an unsafe thing to do, that they're quite comfortable handling the vessel.

This is something that happens a lot as we make systems more automated: people get much less practice in the basic skills. I did a search for seamanship because I thought that might pop up somewhere. The paper doesn't actually mention the word seamanship anywhere, but I get the sense that what they're talking about is what people in aviation call a loss of airmanship skills. People become so used to operating automated systems that they're not good at flying aircraft without those systems.

It's the same here: they want people to be comfortable using the system without the automation. But because the automation is what makes the system safe, you can't just randomly turn it off in real operations to practice. You can in the simulator.

David: Yeah, I think that's right. Even though I said earlier that the participants got a lot of value out of those routine anomalies and those small disruptions to their work, because they're the most likely to happen, they still spent a lot of time practicing the big emergency situations.

The worst-case scenario, they said, is a complete loss of power on the vessel, which means not only that the dynamic positioning system goes down, but all systems on the vessel go down. No thrust, and you're basically just drifting around in the ocean. They still spend time practicing that worst-case scenario.

Drew: I don't know how to read Norwegian understatement, David. I was picturing Australians, English, or Scots in these circumstances where they're talking about whether the instructors let the trainees get into trouble, and you could almost picture particular instructors and particular trainees. The instructors basically said that they set some of the trainees up to fail. If they've got someone who comes in overconfident, they're quite willing to give them a task they don't expect them to succeed at, and then let them run right to the edge.

On the other hand, with some of the other people they want to build up confidence. One of the instructors says that they don't like letting people do things in a simulator that they would later be ashamed of. Another said that they pause and give hot debriefs to the trainees: stop them in the middle of the simulation and give them a chance to work out what to do, have a dialogue back and forth, rather than just letting them make fools of themselves. It's about building up people's genuine confidence, and maybe taking people down a peg if they've got too much confidence.

David: Yeah, Drew. I suppose the performance variability of the instructors themselves is exactly like what you've described, and the paper goes into quite some detail, from those 12 interviews, about how they have to make all of those dynamic decisions on the spot.

Is the person going to get more learning from just seeing what happens with the actions they're taking now? Or is the person going to get a better learning experience if I have one of these hot debriefs, where I put the whole simulation on pause and, in the moment, coach them through the actions they're taking, what their thought process is, and what might happen, and actually correct them there and then?

This idea of putting someone in a simulator is a very complex system: a complex system of process around it, but also of the interaction that happens within it with the instructor.

Drew: Yeah, until I read that, I had just sort of assumed that they put someone in a simulator, wait till they either succeed or die, and then restart it. In fact, the instructors are making judgment calls all the time about whether to stop and pause, to debrief, to just have a word while the simulation still runs, or to let someone get into trouble to see what happens.

David: Drew, those three areas are the results: what do we get out of simulation training, and how does it contribute to developing the resilient capabilities of individuals? We develop the capability to recognize errors and anomalies early and respond in a flexible way. We get the capability to understand the safety limits of the system we're operating within, through shared knowledge and experience. And we get confidence in the operation of the system: we know how it works, we know what to do.

They're kind of the three ways that this simulation training develops resilient capabilities. Drew, there are a few points in the discussion that I think are worth us talking about if you're happy to go into that now.

Drew: Sure. These are mainly your notes at this bit, David, so I'm happy to move on. I might just hand it straight back to you. 

David: All right. They're actually the authors' notes, pretty much, so we'll go through them. The authors talk about this idea of managing variability through experiential learning and reflection: the importance of learning by doing and then reflecting on what was done and what happened.

This idea of reflective practice from [...], we know how important that is in, I suppose, all performance: be it sport, be it a work task, be it possibly life as well. You have a simulation environment that's realistic and safe for practicing hazardous work, for experiencing variable hazardous work, and then you can step out of it safely and reflect: okay, what happened? What do I need to change in my frames of thinking around this task? Then go back into that environment, try something different, reflect on that, and try something different again.

Learning these resilience skills requires this reflection and feedback, and the simulator offers the backdrop for it. It provides the experiential part, but it's actually the reflection and feedback wrapped around it that the authors say deliver the learning.

Drew: That's consistent with other learning theories and how we teach at schools and universities. I think that is fairly widely accepted knowledge. I don't think this paper really provides strong evidence of that, except it shows that that's clearly what the instructors and the trainees believe. They think that they're getting the most out of these reflections in the conversations, not just out of the simulators. 

That in itself is a bit interesting because the simulator is a very shiny toy. And to have people with this exciting technology who still say it's the conversations that matter, that is kind of interesting. 

David: Yeah, and it's important that we understand that the shiny toy provides the backdrop and the vehicle, if you like. But it's the end-to-end process that you wrap around it that gets you a learning outcome.

Drew, they do have a good discussion in this paper about balancing Safety I and Safety II. As in the literature review, they restate this idea that Hollnagel, when talking about Safety II, always emphasizes right from the start: that Safety II is intended as a complement to Safety I rather than a replacement for it.

They talk about a couple of things they observed in this study about just how blurred the boundaries between Safety I and Safety II really are. I just want to throw out the two ways that they described.

The first one you mentioned earlier, Drew, is that we actually go and look at previous errors and accidents, which is a very Safety I paradigm: look at what went wrong and what failed. Then they use that past experience to build scenarios into the simulator, like you said, that let people learn resilience skills by experiencing that failure mode in real time and practicing what they'd do and how they'd adapt to that particular situation. It's like taking our Safety I accident reports, with causes and failures, and designing a resilience training process around them.

Drew: I think this is one of those areas where the rhetoric of Safety II and safety differently has somewhat twisted the original message. This whole idea of getting people to stop thinking about human error makes a lot of sense when you want people to focus on improving systems. But if the aspect of the system that you're trying to improve is the humans, then it really is worthwhile taking as a starting point where humans contribute to the system, for good or bad.

In this case, they quite explicitly take situations created by the technology that humans have not handled successfully in the past. They very directly identify these human errors and then use them to build the training scenarios, so that people don't make those errors, so that people find different ways of handling the situations.

David: And in those different ways that people find of handling the situation, that's where we get the learning from success. If I'm given the failure mode that has caused an incident in the past, and I get to experiment in the simulator, I find ways that actually work. I'm learning in a very Safety II way, from success, because I'm not learning from the accident report about what that person did wrong and why it went wrong, and theorizing about what might work in the future. I'm actually putting into practice what might work in the future, seeing whether it does work, and learning from that success.

The boundary is very blurred, but I think it's a very complementary way of thinking: take something that's failed, turn it into a training situation, and learn how to be successful.

Drew: Yeah, it's a very direct illustration that Safety I is necessary, but not sufficient. You need that incident report, but the incident report is not enough to prevent the accident from happening again. You need to move on to the resilience solution.

David: Yeah, absolutely. Drew, the second one is about balancing Safety I and Safety II in a dynamic work situation. This idea of: okay, I've got my vessel, my DP system's clicked into autopilot, I've made my cup of coffee, we're 10 nautical miles out, and I've now got a day and a half to approach this installation. That's a very Safety I thing: I've got a procedure, the system's working, and I'm monitoring and watching for deviations, nonconformances, errors, or problems. And then I've suddenly got wind and waves and other things that rapidly change the operational limits.

If we accept that this situational complexity isn't static, then, like what you said earlier, the implication is that we need to be able to quickly go beyond procedure-following and have variable strategies to adapt to all of these variable situations, which we don't know when, where, or how they might arise.

The paper says that where the situational complexity is low, you operate in a Safety I mode. We know the hazards, we know the barriers, we know the rules, we know the way the system functions; let's just follow how the plan is meant to play out.

Then, in an instant, the situation can change to high complexity, which requires that we break out of that routine and be very tuned in to the situation and the specific decisions we need to make. The idea, like you said earlier, is to have a mode of operation that's consistent with the situational complexity.

Drew: David, I'm interested in your thoughts about the underlying premise, because the authors are drawing on an idea I've seen come up a few times in various Safety II literature: this idea that if you've got low complexity, Safety I works, and if you've got higher complexity, that's when you shift to Safety II.

I've never quite bought that assumption. It seems to have been invented out of nowhere as a neat map. I've sometimes even seen it in three levels: low complexity, Safety I; medium complexity, use a blend; high complexity, Safety II.

I don't really think that situational complexity is what guides, or should guide, your safety strategy. I think it's more to do with the ability to preplan and predetermine, which we can sometimes do for very high complexity situations. In situations where your existing rules and plans fit, you want to stick with those rules and plans reliably.

When you don't have well-defined rules and plans, or they don't suit the situation, that's when you need to vary. Maybe that lines up with low and high complexity sometimes, but I can imagine lots of very complex situations where we've still got good rules, and some low complexity situations where we've got no guiding rules and need to be resilient.

David: Yeah, Drew. I think complexity is something that gets thrown out there as the variable of interest for safety. I'm a little bit like you. I ask, well, what's underlying complexity that causes us to think it's the problem? I actually think it's, if I'm interpreting what you're saying correctly, uncertainty.

The more complex something is, probably the less certain you can be about it. But in low complexity situations you can also have a lot of uncertainty. Say I'm just excavating my footpath: it's a pretty simple task, I'm digging a hole. But I don't know if there's an underground high-voltage electrical cable just below the surface.

That's a very low complexity task with a very high level of uncertainty. I can't just have a Safety I process that says I dig a hole with a shovel. I need to look at this particular situation and what's particular about it. I don't know if that aligns with what you were saying, Drew, but I think uncertainty is a real challenge for us in safety because it limits our ability to plan and predict.

Drew: Yeah, that wasn't in fact where I was going, but I do like that idea of uncertainty being the real variable. With the footpath situation, I didn't know where you were going with that example. My immediate thought was, if I were excavating a footpath, I've got no idea how to do it, so that's automatically a high uncertainty situation for me because it's a novel job. I don't know the rules. I don't know the procedures. I don't know how it's meant to be done.

Yes, I would have to rely on resilience and adaptability. For me, that would be an unpredictable, highly variable task, whereas someone who has done three kilometers of footpaths so far and is just onto their next 10 meters of it can follow very, very tight routines and processes.

David: It still doesn't necessarily change the amount of uncertainty about what's under the ground, though, whether it's a very experienced or an inexperienced person.

Drew: Yes. 

David: If they've got the same information that they're presented with. 

Drew: I think in safety, people tend to throw around this idea of complexity as if it were a well-defined term, and it really, really isn't. Complexity is not something with a single or simple definition. We should be very careful when people talk about high complexity and low complexity. I don't want to dive too deeply into it here, but there's a whole ontological debate about whether complexity exists inside your mind or in the world.

David: Yeah, and maybe it sort of came out of some of the work about tightly coupled and loosely coupled systems, and that morphed into complexity, which has been associated with HRO theory for a long time. So maybe it's just a continuation of that.

Drew: We went from one sociologist misusing physics metaphors to another bunch of sociologists misusing physics metaphors.

David: Yes. We've even been experimenting with teams at the start of work, with pre-start meetings: instead of going, what are the hazards and risks, or whatever, just ask them, what aren't we sure about today? What are we uncertain about? Or what don't we know in relation to this work? You have really good risk conversations when you ask people to talk about the things they're not sure about.

Drew: Getting back to the original point, then, I think there is an underlying thing here, which is that in different situations, different strategies apply. There are some situations where you want people to reliably follow the rules, and some where you want them to vary, and we can't specify in advance which strategy we want people to follow. A lot of this training is about recognizing and being comfortable with shifting out of rule-following into more resilience, and back again.

David: People have got to have the information and the experience to do that, which is why there's such heavy investment in this simulator training, whether it's maritime or aviation, so that you're confident people have the capability and know what information to use to make those decisions.

For all the people, Drew, who maybe haven't explored Safety II with an open mind: this is actually how work gets done. This is how our planes get flown, and this is how our work gets done. [...] has always described this native resilience. People are working in the gray areas in between our beautiful systems all the time.

Drew: Should we move on to takeaways?

David: Yeah, let's do a couple of practical takeaways that I thought of. Not everyone's got a maritime simulator in their workplace, so let's talk about a few takeaways.

The first is that work simulation appears to be an effective learning process because it involves the full learning cycle: we get knowledge, we get action, and we get reflection. That's why it's used in these high-risk domains, and why things like the frequency of simulation are regulated in many of them.

With increased access to technology in many of your organizations, whether it's digital twins or virtual reality, which is expanding across all industries, it's worth listeners thinking about ways they can safely simulate the variability of work in their organization and wrap a reflective training process around it, even if that's actually a process around the normal work they do.

Drew: I'll just throw in there, David, that the ability to create simulators, not just simulations, is getting much, much easier. We've gone from needing totally dedicated setups to being able to mock up realistic enough situations just by using personal computers and a few monitors. It doesn't take much imagination or construction skill to create a realistic environment for a lot of tasks. Remember that we're not trying to create perfect fidelity; we're trying to create this backdrop for the practice, the conversations, and the reflective learning.

David: Yeah, I know at the School of Aviation at Griffith, Drew, talking to one of the senior lecturers there, Guido, they went from having access to a flight simulator, where they could put one student at a time through it, to virtual reality headsets and desk-mounted control panels. Now they can have 30 people sitting in a tutorial room, all doing simulation at the same time.

Like you say, it's slightly lower on fidelity. The simulator is not moving around, rocking and rolling. But in terms of practicing the skills and wrapping the whole training process around it, with the peer learning of 30 people in a room doing the same simulation at the same time, you can probably make an argument that the learning outcome is better.

Drew: I've seen that lab, and they've got setups where you can take one person and put their view up on the big screen for everyone else to look at and talk about, and then take recordings and go back and look at how the different people solved the same problem. Very cool things you can do with really quite low tech.

David: Drew, the second takeaway: after-action debriefs. The claim in the paper is that half of the learning comes from the debrief, which is less than about 10% of the total simulator training time, because this is double-loop learning, where a person actually changes their frame of reference for the work task and their actions.

It's that real embedding into their understanding and decision-making around the task. And this can be done not just in a training environment; do this in your work. Organizations have known for a long time the value of collaborative planning or toolbox-talk-type processes at the start of the day. Do it at the end of the day as well; otherwise, you've got no learning loop.

If you just plan and execute work without doing any reflection or learning, then you're not going to continuously improve your work. I'd throw that one in there, Drew, as a practical takeaway: reinforce daily learning processes.

Drew: I agree absolutely with building in the time and opportunities for reflection. I've had mixed reports about end-of-day debriefings, though. They make total sense until you consider that the very end of the day is when everyone wants to go home, and it's probably not the ideal time for reflecting and learning.

David: Definitely, there's a whole lot that needs to be wrapped around it: planning it into the workday so it's inside work time, and making it efficient. Maybe it's an end-of-task thing, not an end-of-day thing. There might be other times, but having a what did we just do, what happened, what do we want to sustain, improve, and fix for next time, is important.

Then the third one, Drew, is that if you are doing simulator training, I would read this paper, because it has lots of good advice. It says that if you're doing simulator training, you need to ensure that it's realistic. You need to combine unexpected events and normal operations together, not just the extreme situations. And you need to focus on all of the learning processes that support joint reflection and peer learning: the persistent use of hot debriefs and post-simulation debriefings as really critical tools in your learning process.

Drew: The big thing there that surprised me and stood out to me was that these are between-participant discussions. It's not just the people currently in the simulator, and it's not just the people currently in the simulator with the instructor; it's the different groups, as they cycle through the simulator, all talking to each other. That's what they mean by peer learning.

David: Drew, do you have any others? Anything else you want to say about takeaways? 

Drew: No, that sounds like a good set, David.

David: All right. The question we asked this week, Drew, was how does simulation training develop Safety II capabilities?

Drew: And the paper this week gave us really three answers, all based around the idea that we're providing reality-based work experiences that let people experience routine and nonroutine operations.

It lets them recognize errors and weird things coming up and resolve those situations flexibly before they get into trouble. It lets them define the limits of action and socialize those discussions. And it lets them operate with confidence, including stepping into some of the more unusual situations. All of this is wrapped in a full cycle where we do, we reflect, we learn, and we go back and try again.

David: Thanks, Drew. That's it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com