The Safety of Work

Ep.28 How does coordination work in incident response teams?

Episode Summary

On this episode of Safety of Work, we discuss how coordination works in incident response teams.

Episode Notes

Dave is joined by special guest, Dr. Laura Maguire, a researcher at the Cognitive Systems Engineering Lab at Ohio State University. Her recent research pertains to the topic at hand. Tune in to hear our informative discussion.


Topics:

● Dr. Maguire’s personal relationship to safety.

● Exploring coordinated joint activity in the tech industry.

● The difficulty of doing research in the natural laboratory.

● What Dr. Maguire noticed during her research.

● Why breakdowns in common ground occur.

● Why a phone call can involve effortful cognitive work.


Quotes:

“In cognitive systems engineering, we’re most interested in what are the generalized patterns of cognition and of interpreting the world…”

“Doing research in what we call the ‘natural laboratory’ or trying to examine cognition in the wild, is really, really hard.”

“Tooling is never going to solve all of the problems, right?”

Resources:

Feedback@safetyofwork.com

Episode Transcription

David: You're listening to Safety of Work Podcast episode 28. Today, we're asking the question, "How does coordination work in incident response teams?" Let's get started.

Hey, everybody. My name is David Provan and I'm from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work Podcast. If this is your first time listening, then thanks for coming. The podcast is produced every week, and the show notes can be found at safetyofwork.com. In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it.

Today, I'm joined by a special guest, Dr. Laura Maguire, from the Cognitive Systems Engineering Lab at Ohio State University in the US. We'll be talking about her recently completed PhD research studying incident response teams in the tech industry.

Welcome to the Safety of Work Podcast, Laura, or should I say, Dr. Maguire. Congratulations on recently completing your PhD. For our listeners to know, Laura and I have been friends for about five years. We both did our PhDs at a very similar point in our lives and also at a very similar time. I'm really excited to have you on the podcast, Laura. Before we dive into your research, I'm really keen to know how did you come about to be doing a PhD and how did you end up at Ohio State University? Why did that make sense for you?

Laura: Thanks for having me here, Dave. I'm super excited to be able to share some of my research. I think that my relationship with risk and with safety management is really personal, because I grew up climbing in the Canadian Rockies, and the primary goal when you're hanging off the side of a mountain, hundreds of feet off the ground, is safety. It's about managing risk in what's an inherently risky and often dynamically changing environment.

That relationship is also quite professional, because I came up working with the tools in forestry before I transitioned to a safety management role. I was really perplexed by the differences that I saw between personally managing risk and the ways that I was being told were right to manage risk in a professional context.

I got really interested in this difference, because the things I knew created safety in the mountains, being able to anticipate and adapt in real time when the weather changed or when there was a variation on the route I was climbing, coordinating smoothly with my partner at belay stations or at transition points, just really being able to approach the climb and the mountain and adapt and adjust, were all things that I could see were needed in the forest industry, and then later when I moved into oil and gas, but weren't happening. I decided that I needed to explore this by looking more deeply at the research.

I did my master's degree at Lund University, in system safety and human factors and got exposed to Dave Woods’ work at Ohio State and a number of his graduate students and I thought they are working on really interesting problems, and they're based in the field. Their work resonates with frontline practitioners as well as academic researchers. That was I wanted to be a part of that.

David: That's a great story. I think that is exactly what Drew and I are about on this podcast: the difference between safety work and the safety of work, the things that we put in place in our organizations in the name of safety, and sometimes how divorced that is from what it takes to create safety in frontline work. Like you said, decentralized decision making, autonomy, expertise, adaptation. We often try to ignore those in our organizational approaches to managing safety.

Laura: There's obviously a lot of substantial work that's been done looking at bureaucracy in organizations and at the rigidity that's imposed by rules. I think that one of the really interesting things about the work Ohio State is doing is offering another lens to say, well, what does it look like to be in the boots of a pipeline foreman, or to be in an emergency room when you've got a mass casualty event coming in? It gives us these tools and techniques that actually place us in the context of work, to be able to design from that perspective.

David: After all that operational experience in forestry and oil and gas, you talked about being in the boots of people. In your PhD, though, you've spent most of your time in the tech industry, at the keyboards of people working behind computers. How did your question come about, and how did you find yourself in the tech industry for your PhD?

Laura: One of the strategies that Dave has with his graduate students is to throw them into environments where they have very little context for that kind of work. There is a method to the madness, because in cognitive systems engineering, we're most interested in what are the generalized patterns of cognition and of interpreting the world: noticing change and anomalies, noticing events as they happen, and being able to synthesize that to make sense of what this variability means for the goals I have and the work I'm trying to conduct.

When I started looking at research topics and looking at different domains, the tech industry had recently become interested in resilience engineering. They've been interested in complexity and in applying systems safety to the work that they are doing. It seems really arm's length. It seems like, well, what does a computer programmer have to do with someone swinging a hammer or flying a plane?

But we are inherently reliant on technology, and I think in the middle of COVID this is very apparent to all of us: we are increasingly reliant on technology to mediate the world for us. Whether you work through a computer, whether you get data displayed to you in your equipment, or whether you use an app to do your inspections, those kinds of things change the way we experience risk and change the way we interpret risk.

The ability to study how computer programmers were interpreting risk and managing disruptive events in their world gives us a really great opportunity to look at, or to generalize, some of those patterns. How do you do this? How do you do it successfully?

David: Central to your research was this topic of the cost of coordination. It's something I'd never heard of before you started telling me about it. Tell us a little bit about the cost of coordination as a topic. What were you trying to understand, and what were the questions that you were most interested in?

Laura: My topic was officially on controlling the cognitive costs of coordination. In short, that is the additional kinds of mental effort or workload that's associated with joint activities. Joint activities are that work where you have multiple parties whose individual tasks and activities have to be synchronized in order to achieve an expected outcome.

I'm sure a lot of your listeners are thinking, well, this is every workplace, we coordinate everywhere. It's true and it's relevant across a broad spectrum. I think the way to ground this abstract topic is to use an example that I think all of us can think about. If you picture cooking a meal on your own or following a recipe, and then picture making that same meal with your seven-year-old or your eight-year-old.

It's a very different thing, largely because the nature of the cognitive work changes. It changes from being about the hands-on tasks or the actions that you're taking [...] to encompass a whole other range of coordinated work, such as monitoring to make sure that your kids are not going to burn themselves, or that they're not adding too much or too little of a certain ingredient. It's anticipating what kinds of activities you might need to slow down or speed up in order to synchronize more closely with how they are cooking.

All of these adjustments that you're making are part of that coordinated work as well. All of that additional thinking that goes into working jointly with your seven-year-old is the cognitive work. Those are the cognitive costs.

David: I can absolutely sympathize with that. I'm not very interested in cooking with my children much at all, just because of that effort: it takes twice as long and turns out half as good. There was a quote in one of your articles that really drew this distinction out, that marshaling resources and coordinating joint activity is a very different activity, with very different resource requirements, than just task completion or direct problem-solving. I think that's something that we need to think about. You also said in your paper (and we'll talk about findings in a minute) that through normal work, we just expect that stuff to happen. We don't pay too much attention to it. We're talking about the task all the time, but we don't really train for, resource for, or pay any attention to how people actually work together.

Laura: Absolutely. That's one of the things that I think is really exciting about this topic: once you start thinking about it, you realize it's everywhere. It's the space in between the boxes on the flowchart, or the space in between the bullets on your safe work procedures. We can structure the activities in a certain way, but that doesn't necessarily mean that you're going to be able to perform the task well. There are all these additional demands that take place.

I think another example that I use quite a bit is driving. I'm a fairly safe and reasonable driver. I like to drive in the right-hand lane so that those who are going faster can overtake me, but if I have to make a left-hand turn, it's not just about putting on my indicator and making the turn, or switching lanes and making the turn. I have to look at the traffic around me. I have to anticipate whether I need to slow down or speed up. I need to figure out, is this person on their phone? Are they able to notice my indicator is on? Is my speed too fast to be able to make the turn in time?

I think that there's a lot of additional work that goes into trying to align yourself with the world around you. In slower-paced work, we can often deal with the consequences of really poorly designed work, or with people who aren't coordinating effectively with us, but as soon as the pace goes up and the complexity of the tasks that you're trying to do goes up, the consequences become higher and the penalties for poor coordination get much higher.

David: I think that's a great analogy. For our UK and Australian listeners, in the driving analogy that's the left lane, not the right lane. I've used that analogy before for, as you said, work method statements or safety procedures: you can't sit in your car in your driveway and know everything you're going to come across on even the shortest trip down to the shop. You don't know where all those surprises are going to come from. Organizations need to think about how they might want things to work, but also plan and support their people to respond to things that they don't expect.

Laura, it's great to have a field researcher on the podcast as well. I think on your LinkedIn at the moment, you've got three years of interning at IBM. How did you go about exploring this idea of coordinated joint activity in the tech industry? Tell us a little bit about your research method for that.

Laura: First off, I guess, doing research in what we call the natural laboratory, or trying to examine cognition in the wild, is really, really hard. The reason it's hard is that the world keeps moving forward. As a researcher, you're trying to capture data. You're trying to collect nuances of very complex situations. Each of the techniques that we have for data collection is limited in some way. If you are doing video analysis of something, you are missing out on really subtle cues that are often quite relevant to the work happening smoothly. If you're doing only interviews, then you're taking people out of their context, and they might not be able to tell you everything that they do. All methods have some form of limitation to them.

One of the great things about the software industry is there's a lot of inherent traceability. What that means is, all of the activities that are being done are captured in some way. The keystrokes on the keyboard or the chat channels that they're using to diagnose a service outage. All of that is naturally captured and recorded.

For my dissertation, I was able to take advantage of a lot of these natural assets. Incident response, which is what I was interested in, typically takes place in an online chat forum, something like Slack or Microsoft Teams, where someone will say, our users are seeing an issue in this part of the system. Someone else will jump in and say, I'm looking at it. Someone else will jump in and say, I noticed the monitoring for this system is going crazy over here.

I had a nice natural transcript of how the interactions were happening, and then I could triangulate that with other sources such as log files, audio bridge recordings, some of the dashboards, and some of the monitoring systems as well. So I was able to recreate what was happening at that point in time in the systems as they were trying to diagnose and repair those problems.
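[Editor's note: the triangulation Laura describes, merging timestamped records from several sources into one incident timeline, can be sketched very loosely in code. The sources, timestamps, and messages below are hypothetical illustrations, not data from her study.]

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    """One timestamped record from any data source."""
    timestamp: datetime
    source: str   # e.g. "chat", "app-log", "audio-bridge"
    detail: str

def build_timeline(*sources):
    """Merge several lists of timestamped events into one chronological record."""
    merged = [event for events in sources for event in events]
    return sorted(merged, key=lambda e: e.timestamp)

# Hypothetical traces from two independent sources.
chat = [
    Event(datetime(2020, 5, 1, 14, 3), "chat", "Users are seeing errors on checkout"),
    Event(datetime(2020, 5, 1, 14, 5), "chat", "I'm looking at it"),
]
logs = [
    Event(datetime(2020, 5, 1, 14, 1), "app-log", "Latency alarm fired on payment service"),
]

timeline = build_timeline(chat, logs)
```

Because every record carries a timestamp, the merged timeline shows that the monitoring alarm preceded the first user report, which is the kind of cross-source ordering a researcher can use to reconstruct how the diagnosis unfolded.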

David: I suppose it's like the cockpit voice recorder and the flight data recorder in an airplane. It gives you the traceability, like you say. You don't have all the context, but if you speak to the pilots as well as having that data, then you get a really good understanding of the situation that people faced.

Laura: Absolutely, but you may not capture certain information. The pilots aren't going to say, I'm looking at this dial specifically at this point in time. Whereas all of the communications I had were time-stamped, so we were able to triangulate, at different points in time, where people were looking and how they were interacting with each other as well.

David: Let's talk about what you found, because I suppose what became interesting to you is, like you said earlier, how people coordinate when they're under a lot of time pressure. We know in complex systems that as soon as we get these conflicting goals, how the system operates changes really, really quickly.

I think what I understand is you spent quite a bit of time looking at tech incidents, and in one of the articles you gave this great case study, or story, of a system fault: what happened in a short 10-minute period involving 77,000 users and multiple parties trying to coordinate. It just blew me away how quick and distributed that system problem actually became. I think that's quite a unique thing to the tech industry that many of our listeners in heavy-industry operational environments wouldn't have seen before.

Laura: That is what is fascinating. The software industry is operating at scales that are much larger than many other industries. They're operating at speeds where a lot of the automation is in microseconds. Things go wrong very, very quickly, and they get big really, really quickly. Even though this situation was large scale, fast speed, all technology-mediated work, I could still see the same kinds of problems emerging that you'd see right away on a pipeline project.

I think there are a lot of the patterns that I found in how effective coordination takes place and where coordination breakdowns happen that are applicable, even if you're operating at slower speeds or smaller scales.

David: There are quite a lot of takeaways from your research, I suppose, for how we think about joint activity in the normal operations of our organizations, when we think about working with contracting parties and just trying to understand how to do that effectively, whatever “normal work” is. But what I was most interested in is what you learnt about incident response. I think these lessons are really, really relevant for any incident or emergency response situation.

Let's talk about the findings. You talked through three different ways that companies try to manage incident response, if I've got this right. You talked about using an incident commander. You also talked about trying to enforce operational discipline. Then you talked about using technology to facilitate coordination. You found problems with all three of those ways of trying to manage incident response. Do you want to give us an overview of those findings and what you've learned from them?

Laura: Absolutely. The software industry borrowed this idea of the incident commander from disaster response, from FEMA and from wildland firefighting-type contexts. They have adopted it to the extent that it is accepted as the standard. When I went in to start my investigation, I thought, okay, well, this makes sense. Somebody is in charge, everybody has clearly defined roles, communication is only happening in one place.

When I started to look at the incidents themselves, I noticed, well, the incident manager or commander actually creates a bit of a bottleneck, because they are one person, and they are dealing with a very rapidly evolving, very complex situation that requires them to have a lot of technical detail but also the bigger picture in mind. They're shifting back and forth, and they're getting updates from different folks.

What I started to notice was that because there are different channels the incident responders can communicate with each other on, I started to see this side-channeling happen, where people would break off and say, this is going to be a real problem, we need to act on it right now, and they would start troubleshooting something independent of the incident commander-led conversation.

A lot of people said that's problematic; we need to get them communicating only in this one channel. What I was actually seeing was that their thinking, their sense-making about the nature of the problem, the potential solutions, and the way to prioritize action, was more closely matched to the pace of the incident itself. It was being slowed down in some cases by the incident commander role, because they just couldn't keep pace.

David: We see that in operational environments: very, very formally structured incident command, twice-daily meetings, no decisions being made outside of those meetings. I like the way you described that incident commander. Obviously, it's going to be a bottleneck. Just think of the challenges of that role: they have to be working in as well as on the incident. They have to be (like you said) technically diagnosing the problem but also coordinating all of this joint activity. Then, as soon as a problem becomes big, you've got all these management and blunt-end intrusions into the problem, people who feel the need to be informed but don't actually contribute much to the response. It's almost an impossible position, really, in which to be efficient and effective.

Laura: I guess I would change the framing a little bit. It's not that they don't actually contribute to the response, because they do bring things to the response effort. Take the example that I gave in the article that you're talking about, where you had 77,000 users. Imagine if that was Netflix. Netflix goes down while everybody's watching their favorite show. What's the first thing they do? They go to Twitter and they're like, “What the heck, Netflix? What's going on?” or they start calling the customer service line.

You immediately have this escalation of attention that has to be dealt with. Organizationally, there are mechanisms set up to try and buffer the responders from having to cope with informing stakeholders about what's going on. If a high-value service, like an electronic health record or a financial trading market, goes down, you can bet that the blunt-end work of being able to talk to regulators, or of shielding the organization from needing to give an oversimplified account of what's happening, is really important work as well.

I think the problem that you're alluding to is when those kinds of tasks and activities that are necessary to adequately control the response take precedence over some of the technical work to try and slow or diagnose the event.

David: I suppose that's some good context and perspective. I struggle to get my head out of the operational environment; I just can't imagine that volume of activity going on in and around the incident response.

Laura: I think, even in a wildland fire context, some of the [...] might be local government officials or some of the communications folks. Their ability to communicate out to the community or to potential other responders can help recruit new resources that can actually support the incident response itself. It does bring up an interesting pattern, though, which is coordination costs: you also need to coordinate with those other responders in order to handle large-magnitude events.

You're bringing in resources to help you cope, but you actually don't have time to make those resources useful, because you're so busy stopping the bleeding, containing the event, or whatever it is. Klein, Feltovich, Hoffman, and Woods, in their 2005 paper on common ground and joint activity, described some of these patterns and called the ability to manage this collection of players choreography.

I guess one of the spoiler alerts (I'm sure a lot of your listeners right now are thinking, if it's not the incident commander, what is it?) is this idea of adaptive choreography. It involves the same kinds of roles and responsibilities: being able to recognize the need for decisions to happen in real time, the need for communication to happen broadly, and the need to protect incident responders' time so they can focus on the task at hand. Those roles are important, but it's about organizing that collection of participants to be able to very smoothly and fluidly shift between different roles and backfill different functions in the response, without needing to rely on one person who becomes a bottleneck.

David: I was actually going to put you on the spot and ask you that exact question: from what you learnt, could the incident commander role be useful with better coordination systems, processes, and mediating technology, or is that hierarchical structure itself the problem? I think from your answer that the hierarchical structure is always going to be a problem, and we actually need to define a new way to organize the team to be able to respond dynamically.

Laura: I think, in order to answer that question, it's worth talking a little bit about some of the findings. What are these elements of choreography that are needed to be able to coordinate with others? At a 30,000-foot view, there's this idea of establishing common ground. This is the mutual knowledge, the shared beliefs, and the assumptions about the situation that all the people involved share.

Typically, we all come into a group activity with some variation in that. I might think we're doing something a little bit different than what you think, so we have to talk about that. We have to establish what the important characteristics of this event, this group, and the other individuals are, so that I know not only how you're approaching the problem, but also what you know about the problem that's going to be useful to us. That way, when we get into the thick of things, I can very quickly understand.

If, say, Dave has a reference for this, I can recruit you to be able to bring that expertise to bear in real time. That establishing of common ground is fundamental to being able to coordinate effectively. What comes along with that is that in a rapidly changing event, you're going to have breakdowns in common ground. You're going to have departures as things change, where I might see something you don't see, and our mental models of what's happening shift. Maintaining common ground and repairing those breakdowns are also really fundamental choreography elements.

I would love to describe all the elements to you, but in the interest of time, I'll focus on one element that I think is common regardless of what situation you're in, and that is: if I know that Dave has relevant skills, experience, or knowledge to help solve the problem that I'm facing, I'm going to recruit him.

In order to recruit you, if I phone you and it is 3:00 in the afternoon in British Columbia, which may be 6:00 in the morning where you are, I need to set the context for you about why I'm calling and the nature of the problem. That is, in and of itself, quite effortful cognitive work, because I can't just get you on the phone, say I need you to solve this problem for me, and then spend the next hour bringing you up to speed.

I need to anticipate what you know, what is going to be relevant for you to be able to bring that knowledge to bear in this specific incident, and then how I prepare you to be able to step in, in real time. You know when you're at the gym and you turn the treadmill on, and it's at eight or nine kilometers an hour? You can't just step onto that like you're walking. You have to get your feet into a cadence that matches the speed of the treadmill, so that when you jump on, you don't go flying off the back. It's the same thing. An element of choreography is, when you are recruiting resources, bringing those resources up to speed so they can quite literally hit the ground running.

David: I found a couple of things in your article really fascinating. One was this idea of following the sun, which I hadn't heard before, which must be a tech industry thing, with resources supporting systems at scale all around the world. The idea is that if, instead of calling someone at 3:00 in the morning, you can call someone at 3:00 in the afternoon their local time, and information is broadly available, you've probably got a better chance of them being slightly up the curve.

You gave the example of NASA mission control, and I think we all know the way that mission control communications work: you've got the flight commander and the team that's actually managing the mission, but they've also got these open audio channels that managers and other resources in back rooms can listen in on, follow along with, and know what's happening.

If there's an incident, they can recruit all these resources who've been listening and keeping up to date with what's been going on in the mission. Is that what you're talking about with this setting of context: having these ways of keeping people engaged in real time with what's happening, even if they're not immediately part of what's going on?

Laura: Yeah, that's a really good example of helping people come up to speed and of establishing common ground, but it's also incredibly effortful. If you think about your own crew, if you could have them listening in and looking in on every meeting that you were in, you would have amazing, very well-established common ground, but you might not get other things done.

It's about recognizing that all of these resources, many of whom are very high value in the sense that they're experts in a certain area, have other work that they need to be doing. You're not necessarily going to call them in to sit in, listen in, and look in just because you think you might need them later. Instead, you're trying to balance multiple priorities within the business as well. It's, I need this person, but I need them on an ad hoc rather than a predetermined basis, so I understand that they're going to come into these situations needing some briefing, needing a way to orient themselves to the specific context.

David: I don't know if this is closely related or a bit of a tangent, but you talked about chat ops tools. You mentioned Teams and Slack and these channels earlier, and I suppose the way the tech industry tries to get people up to speed is just to let them dive in, start from the start, and try to catch up to real time by reading what's happened. You made a statement, and I like it when researchers make really bold statements in their work, that these chat ops tools are nearly useless as a tool for sense-making.

Laura: Those are fighting words.

David: I suppose for the people who are going through the incident they might be, but what did you mean by that about chat ops tools, and is there anything you've learned about how to get people up to context faster?

Laura: I think, in this specific example of using a chat tool, you might get paged into an event two or three hours after it started. The way you would come up to speed using that tool itself would be to go back to the beginning, scroll all the way through the screens of text and conversation, and start reading your way through those two or three hours.

That's what I'm pointing to when I say that this doesn't really help someone come up to speed quickly. It doesn't help them orient very quickly to what's important, what's been tried, what didn't work, and what additional information we have at this point in time, so that the person who's coming into the event doesn't suggest things that have already been tried, or things that are not plausible given the current state of the system.

It's the same thing with coming into an event in the physical world as well. It's as if you would hand them a book, or hand them 20 pages of notes, and say, well, just get yourself ready to participate. In designing tooling, we have the ability to start instead from the cognitive work perspective. This comes full circle to where we started, with thinking about the thinking.

When we build a tool, we should ask, how do I make the salient, relevant information immediately available to someone, so that it's as if they were there right from the beginning? If we build tools recognizing that it's very likely I'm going to need to rapidly bring together multiple responders from geographically dispersed areas and with very different skill sets, then I need to design for them to be able to establish common ground, to notice when there is a breakdown in common ground, and to be able to very quickly repair that.

It's not an inconsequential problem, but unfortunately, a lot of technology tends to start from feature development or from the limitations of the tool, and then leaves it up to the practitioner to fill the gap between what the tool can do and what the real world demands.

David: You spoke about the paradox of these tools both facilitating and hindering coordination at the same time. I must admit this is something I got a little bit excited about when I read the article, because I think across all of the operational environments and all the emergency response situations, there's not a lot of tech tooling deployed into those situations.

We're still using teleconferencing calls twice a day at scheduled times, and things like that. Do you think you can design an incident response tool that can do those things you said about common ground and knowing what people know? If you can, that's got a really broad application for every industry that needs to fix problems fast.

Laura: Absolutely. I think we've been talking about this relative to technology and relative to the demands of certain kinds of industries that require us to be working through technology. I think that tooling is never going to solve all of the problems. Every tool is a weapon if you hold it right in some senses, and it can both help and hinder the ability of practitioners to effectively coordinate.

I think not only are there ways we can design technology to help support joint cognitive processes, but we can also look at some practices that help to establish and smooth out that choreography, so that the interactions between small groups, large groups, ad hoc groups, and well-established groups can be better and smoother over time.

David: Is that what you're working on now, Laura, those practices and that tooling to take everything you've learned in the last four years and help teams, organizations, and incident responders everywhere be better at what they do?

Laura: I am currently working in the tech industry and I am looking at how we apply some of these ideas into developing new tools to aid in incident analysis and in learning from incidents. There is a lot of underappreciated value in being able to take a look at the choreography of work and improve that both in physical practices, ways that you set up and structure different interactions as well as the tools that you can use or integrate into those practices.

David: Practically, from your research, if I try to give a summary and see how well I understand it, which is probably not very well, when you look at this joint cognitive work, particularly in situations of high pressure, significant time constraints, or a rapidly evolving problem, you've got this challenge of how to coordinate resources towards a common goal, so solving a problem, for example.

When we think about that as an incident response situation, whether in the tech industry or any other industry or environment for that matter, the ways we've thought about doing that, which is: let's create a structure with an incident commander, let's have centralized decision-making, and let's keep everyone on the same page and within their swim lanes, make things a little bit slow and create some bottlenecks. That doesn't necessarily support people to work the way they need to work to actually sense-make and solve problems. The other idea is enforcing operational discipline, which is to follow processes, and that also doesn't give people the ability to adapt, run side channels, recruit new resources, and things like that.

We're maybe left with, you mentioned the example from the emergency response in your article, starting to think about new teaming structures and different ways of coordinating and organizing, and also then these ways of bringing mediating technologies in. They can just help speed up communication and sense-making. That's very different for a lot of industries to go, well, actually, maybe I need to think about my incident command structures and my processes (like you said) in the reverse direction.

Laura: I think that's a really good synopsis, Dave. Well done. One of the fundamental things that I think about here is a very core tenet of resilience engineering, of cognitive systems engineering, and of a lot of thinking about operating safely at speed and at scale. That core tenet is that people are the most adaptable element in your system.

When I look at the relevance of my work, it is to understand how we help support that. How do we design work environments that enable that adaptability and flexibility to be supported and not constrained? When the world is changing quite quickly and the nature of the problem you're facing might also be changing, the people involved in responding have to be able to change. They have to be able to adapt to cope with that variability.

I think what's really fundamental about this work is supporting adaptability in ways that recognize you may have a lot of parties involved, so you can't just have people shifting ad hoc. There needs to be some degree of choreography to be able to maintain safe operational performance.

David: Complex problems need adaptive solutions, not structured and rigid solutions would be a way of saying that, I suppose?

Laura: Yeah, I think so. It's about recognizing the limitations of those structured systems. They have a time and a place, but they can't be the only thing we have in place to cope with a sometimes unpredictable world.

David: Is there anything else that you want to say about your research, your findings, what you hope people take from it, and do with it?

Laura: There are a few things that I hope people take away from my research. One of those is the idea that coordinating across multiple parties is not as simple as organizing the tasks and activities. A flow chart does not mean you're going to have smooth coordination, and it does not mean you're going to get the safety or the production performance that you want.

It's about recognizing that there's a lot of cognitive effort and work that goes into being able to smooth out those interactions, to be able to anticipate the needs of the parties that you're coordinating with, to be able to adapt and adjust your performance in alignment and in sequence with theirs, and to be able to ensure that everyone can remain on the same page, even at high speeds, high consequence, or high-stress conditions.

David: Thanks, Laura. Thank you so much for joining us on the Safety of Work Podcast. I look forward to a time when we can fly again and we might be able to catch up in person but stay safe. You're in a beautiful part of the world to be isolated, as are we in Australia, so very fortunate indeed, but thank you again.

Laura: Definitely. Thanks so much, Dave. Take care.

David: That's it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Please leave us a review and send any comments, questions, or ideas for future episodes to us at feedback@safetyofwork.com.