The Safety of Work

Ep.77 What does good look like?

Episode Summary

Dr. Drew Rae and Dr. David Provan have a great discussion in this episode about a recently published research paper on the features of safety in maternity units. While these features can’t be applied directly to every industry as a “how to” guide for safety, they do offer great insight into the thinking behind the safety measures. This in turn makes it easier to apply them in other industries, using a similar approach but ending up with an industry-specific implementation.

Episode Notes

The findings of this research point to the importance of staff buy-in and a team-driven approach to safety.

 

Topics:

  1. Commitment to safety and improvement
  2. Staff improving working processes
  3. Technical competence supported by formal training and informal learning
  4. Teamwork, cooperation, and positive working relationships
  5. Reinforcing safe, ethical behaviors
  6. Systems and processes designed for safety, regularly reviewed and optimized
  7. Effective coordination and the ability to mobilize quickly

 

Quotes:

“The forces that create positive conditions for safety in frontline work may be at least partially invisible to those who create them.” - Dr. David Provan

“Unlike last time, we’re now explicitly mentioning patients’ families, so last time it was ‘just do patient feedback’, now we’re talking about families being encouraged to share their experience.” - Dr. Drew Rae

“These seven [Safety Findings] may or may not be relevant for other domains or contexts but the message in the paper is - go and find out for yourself what is relevant and important in your context.” - Dr. David Provan

 

Resources:

Griffith University Safety Science Innovation Lab

The Safety of Work Podcast

Seven features of safety in maternity units - Research Paper

The Safety Of Work - Episode 14

Feedback@safetyofwork.com

Episode 75 - How Stop-Work Decisions are Made

Episode Transcription

David: You're listening to the Safety of Work podcast episode 77. Today we're asking the question, what does good look like? Let's get started.

Hi, everybody. My name is David Provan and I'm here with Drew Rae, and we're from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work podcast. In each episode, we ask an important question in relation to safety of work or the work of safety, and we examine the evidence surrounding it. Drew, what's today's question?

Drew: David, today we're going to be talking about a paper that, for a refreshing change, actually specifies its research question right upfront. The paper says that it's about what good looks like in healthcare. Listeners who've been with us along the whole journey may remember that back in episode 14, we discussed high-reliability organizations. 

At that time, we reviewed a 2005 paper that talked about what are the characteristics of high-reliability organizations in healthcare. I went back and had to look at our notes and I can see, David, that we weren't very complimentary about the method they applied in that paper. It was a bit of a retrospective where people from the organization were looking back and patting themselves on the back for what a good job they had done in producing HRO theory. 

I think that's a bit of a risk when we come to try to identify good. Obviously, the people who want to say that their organizations are good are not exactly the most impartial people. People trying to pick out good organizations don't always have easy ways of telling who is good and who is not unless they already have some fixed idea in their head of what good looks like. 

Your thoughts, David? I don't know if you can remember the episode. 

David: I absolutely can. I think also the authors were writing the paper almost because they were critical of the changes that had been made in the organization over the previous 18 months, where performance in their eyes had deteriorated. They were almost reminiscing about the good old days when their organization was high performing and the things that were different from today. That's my recollection of that episode, Drew. 

Drew: As a general rule when people tell you that this organization was great when we were there and bad when we weren't, they could well be right. But it's not exactly a rigorous research methodology. I'm hoping that the paper that David found for us today is a bit more rigorous, but it still follows that general HRO principle, which is that we do research by finding an organization that seems to be managing safety well. We try to describe and understand what's going on, what is producing the apparently good results, and what is producing the apparent high performance.

David: More recently, Drew, in episode 74, we discussed the capacity index. We were somewhat critical of that paper as well, so hopefully new listeners don't form the view that we're critical of the resource in every episode that we do. 

This paper actually does what we suggested in episode 74, which is the starting point for monitoring the presence of safety is to actually understand those capacities, characteristics, or features of frontline work that create and maintain safety. Then work backward from there in terms of your measurement, your activity, and your safety management systems once you know what it takes to create safety on the frontline. The article that we are going to talk about today that you've mentioned is aiming to do just that. 

Drew: Let's jump right into the paper. It's called Seven features of safety in maternity units: a framework based on multisite ethnography and stakeholder consultation, which is an incredibly boring but really quite honest title. The paper does what it says on the tin and reports what it claims in the title. It's got a huge number of authors, so I'm not going to go through all of them. 

In fact, it's got about nine listed authors and then the last author is a thing called the scaling authorship group, which has another 20 or so authors. I think this is a really promising trend given some of the problems with peer review. What we've got is basically a pre-publication review from a large number of people. Some of them were involved actually during the writing of the paper in critically looking at how the authors were interpreting their transcripts and their data, and looking for contrary evidence to improve and flesh out the theories. 

Then some of them are more of a large group of people that the paper was sent to after it was written to provide commentary and feedback. We complain sometimes about the fact that things slip through peer review when they just have two or three people reviewing it. Here’s where the authors have, in fact, selected a large group of qualified people to be involved in that polishing process before they even submitted it. 

David, I actually forgot to write down where this was published. I think it was probably BMJ Quality & Safety. 

David: Yeah, it was BMJ Quality & Safety, and Professor Mary Dixon-Woods was the corresponding author. She seems to be coordinating the grant funding, or perhaps the whole piece of work. 

Drew: Professor Dixon-Woods is head of the research center that did this work. She's an expert in healthcare quality and improvement. She's got a strong focus on qualitative research methods. 

The first author of the paper, who I gather did a lot of the practical work in putting the paper together, is Dr. Elisa Liberati, who's an experienced postdoc ethnographer in healthcare. There seems to have been a multi-year funded project to do this ethnography. A lot of Dr. Liberati's recent research, not surprisingly, is ethnography of healthcare during COVID. This work was actually done before that other publication. 

David: Yeah, Drew, the data collection was, I think, 2014–2017. It was almost as if the first part of the study had already been published in 2019 and then extended. It might have been PhD research that was then followed up in a postdoc into a much broader study or something like that, because the data collection ran over a very extended period of time. 

Drew: In terms of ethnography, this is exactly the type of thing that we want to see if you're trying to describe what's going on and where the good is coming from. David, you've included a little bit of a quote just at the start of the method, I just wonder if you'd like to give us the quote and talk about why you picked that one out? 

David: Yeah, so I picked it out because I liked the way the authors had suggested, or understood at least, that the forces that create positive conditions for safety in frontline work may be at least partially invisible to those who create them, because they remain a tacit or habitualized way of working. You have to have some kind of structured study to surface them. 

If you just ask people, for example in a questionnaire, what are the things that you do to make sure that your work is safe every day? They won't necessarily, maybe unconsciously, even be aware of all of those things that they do in relation to communication, observation, interaction, planning, or anticipation. 

All of these things that they do throughout their day through experience or routine, without maybe consciously thinking, I'm doing this so that I can be safe. You might end up getting scripted responses instead that say, I follow the procedure, I draw on my training, or I report hazards. This study, I suppose, was framed around what they already knew, which was that the only way to describe something like this is to have a large ethnographic type of design. 

Drew: The ethnographers in this case are outsiders. Their authorship group is made up of people who are sociologists of healthcare. They're in universities, they're not in teaching hospitals. They've got a good understanding of how healthcare works, but they're not themselves closely enough linked to the particular hospitals. They're not just cheerleaders for the sites that they're talking about.

There are basically three steps to the research. The first one involves a thing they call an index site. This is going to be the main example of good. They used a few different ways of saying that this is a place that is doing safety well. Some of them are just in terms of outcomes statistics, this is a maternity ward, I should say. 

They've got a really low rate of birth complications. They've got a number of outcome measures for babies that they have sustained improvements on. They've got other statistical things like staff survey results about teamwork, culture, and job satisfaction. 

Also, this place has been recognized by other places. In fact, there's a particular training program that's been developed based on this particular maternity ward that has been rolled out to other hospitals. That's a good sign that there's a number of reasons to genuinely believe that this is a high-performing organization. Step one is the index site, we're going to do ethnography there. 

Step two is branching out to five further sites. Now, these ones haven't been particularly selected as being high performing, but they're all ones which are trying to copy the original site. They've adopted the training program that was produced based on the first site. You would expect to see some features in common, some features different, and use that as a good way of understanding what is actually special about the index site, what can be successfully copied from the index site and replicated elsewhere?

Then the final step they've labeled as stakeholder consultation. This is basically feeding back to people what they think that they found from the research presenting it to them tentatively to get feedback. From the data gathering stage, we have observations, just watching what's going on. We have semi-structured interviews, and then the stakeholder consultation is based on a larger number of interviews and a focus group. 

Through those three steps, we're basically following a process in qualitative research known as constant testing. You start by forming a theory of what you think is what's going on and then you just collect more and more data to test and refine what it is that you're finding, constantly looking for what have I not quite got right, what have I thought I've seen but haven't actually seen? Until eventually, you're producing something that the people that you're studying agree, yes, this describes us correctly. 

David: There was a lot of data gathered. The initial work at the index site collected more data than the follow-up five sites, but they still had a total of over 400 hours of observation data and 33 interviews across those sites. The stakeholder consultation wasn't a small exercise either. They conducted 65 interviews and one focus group, and they went to a whole diversity of professional roles. 

They interviewed users of the services, people that had given birth inside those maternity wards. They spoke to middle and upper management. They spoke to policymakers from the NHS, the National Health Service. They spoke to frontline clinicians—nurses, midwives, and doctors—from other sites that weren't in the initial six that were observed, as well as professional bodies like the Royal College of Midwives, and things like that. 

They're really trying to cast a net with two purposes, I suppose. One, to do that member checking which is like, does this make sense outside of the sites that we collected the data from? The second is to just understand the ways that different stakeholders talk about these particular features that create safety for the purpose of developing a framework that uses more plain language or more generalizable language so that it can be understood by more stakeholders. 

Drew: Just to give an indication, 400 hours of observation and 100 interviews in total is huge for a qualitative project. There are plenty of quite good PhDs that are based on 20 interviews, or at least 20 participants, as their core data. So having 100 interviews, just even conducting those interviews, analyzing them, doing the observation, that's a huge amount of data that's building in. And it seems to be just one paper out of this work so far. David, your thoughts on it? 

David: Absolutely. You've got the logistics and administration of coordinating the interviews. I think in this paper, they reported a 54% participation rate among the people they reached out to. So to get those 100 interviews, they would have had to approach around 200 people. Then you've got to conduct them, probably 30–45 minutes each for a semi-structured interview. Then you're right, Drew, there's the transcribing, but also the analysis.

To do the analysis technique they mentioned, which is that progressive comparison or constant comparative method where you read one, then you read the second one, what's the same or different about the first one, what's the new information? Now I need to go back to the first one and check if that information is in there or not, then go to the third one, then go back to the second, back to the first. You need to go through each transcript, maybe a dozen times as you're building out your themes. That's a huge analytic effort.

I think most of it was done by about two or three people. That's a full-time research team, I think, maybe for 18 months on this study.

Drew: I realized I made a small mistake, in that they have, in fact, published on this work before. They published a preliminary paper, I think, based on their study of the index site before they did the comparison. So we can see what they found from the index site and then how they evolved and changed that as they continued to work through the rest of the project, which is a good example of showing your work. 

You can see how the ideas evolved and that they didn't just lock into a single theory, but that they have actually tested it, refined it, changed it based on the remainder of the work. 

David: Absolutely. Drew, are we ready to go to the findings now? 

Drew: Sure. The way you've got it listed in our notes, David, is talking about what they thought they'd seen from the index site first and then expanding out to the final version. Do you want to just talk through it? 

David: I thought it was really interesting because it goes to a couple of things. It goes to how it changed once they went beyond the index site to not just confirm what they were doing at the index site but to actually deepen what they learned at the index site. The second thing is to see how the language changed when they did the broader study, as well as all of the stakeholder consultation. 

When they went to this first site that Drew had mentioned—good outcome statistics, good climate results, good recognition across the industry as being a high-performing site, particularly around patient safety outcomes—they identified six features or themes of what was going on to create safety. 

I'm just going to read through these six, and we can read them like the five principles of high-reliability organizations or something like that. These were their six features. The first was collective competence. The second was the insistence on technical proficiency. The third was monitoring, coordination, and distributed cognition. The fourth was clearly articulated and constantly reinforced standards of practice, behavior, and ethics. The fifth was monitoring multiple sources of intelligence about the unit’s state of safety. The sixth was a highly intentional approach to safety and improvement. 

Those are the six features, Drew. The good thing about this paper, it goes on to explain what it means by each of those six, but your thoughts on those initial findings?

Drew: Let's dive into the descriptions of each one because I think there are some really interesting details once we flesh them out from just motherhood statements into descriptions. 

The first one is collective competence. It's interesting that in the longer text for these, it sounds very HRO. They say professional boundaries are managed flexibly with deference to expertise rather than hierarchy: collegial behaviors, strong social ties amongst staff, interdependency. That's very similar to the HRO ideas of deference to expertise and sensitivity to the frontline. Instead of a hierarchical view, we've got this idea of interdisciplinary routines with mutual respect.

David: Yeah, Drew. The technical proficiency one talked about a very high standard of proficiency expected in tasks, supported by high-fidelity structured training combined with informal learning and mentoring. It talks a lot about what we know about simulation-based training and training that really matches the real-world tasks, but it also talked about how, in this index site, there was lots of informal mentoring that went on in the lunchrooms that were observed. 

People would ask others' professional opinions in the lunchroom and allow themselves to be taught, mentored, and coached, and that was a common social interaction that went on in the hospital informally. 

Drew: Yeah, to be honest, David, I'm reading this and I'm thinking of the horror that might come from someone trying to deliberately replicate this culture of proficiency. It sounds like this is almost a generationally transmitted habit of the way that they include people in legitimate peripheral tasks while more difficult things are going on, to build up their competency.

They model having these conversations and sort of informal pretask reviews and after-action reviews happening in the lunchroom, which all sounds fantastic unless an organization was trying to do those deliberately and introduce a process or competency scheme for managing this. In which case, I imagine it would all fall apart very quickly. 

David: Yeah, I think we will talk about some of those challenges in the practical takeaways, I suspect. In the third one, about monitoring, coordination, and distributed cognition, they talked about having mechanisms to maintain a shared awareness of the situation across the maternity unit, how staff play coordinating roles in controlling the function of the entire ward, and how that information is shared and understood by others. That's also a description of very informal practices. 

Drew: David, could you give a little bit more of an explanation of some of the content in the paper about that one, because I find this very interesting. At a surface level, it seems almost like you've got this emergent system where they've put in place people with deliberate functions of watching everything that's going on. 

David: I think there were also references made to things like the use of whiteboards. Every maternity ward had a whiteboard for patient monitoring and updates. It was more about just how many different people had contributed notes on individual patients on the whiteboards, and things like that, to show this shared coordination and distributed cognition that was going on for each individual situation.

Drew: Any healthcare unit would have to have this same shared awareness. The difference in this place is that they've put in place these explicit tools for facilitating it. The whiteboard practices, and having people in those coordinating, control-room-style roles, remind me a little of how the school manager in our university department tends to sit at the reception desk instead of hidden away in an office. Just that little step of having someone who is visible and in the center, watching what's going on, helps information flow much more easily around the department. Just small structural steps that help distributed cognition. 

David: Drew, the fourth one there is clearly articulated and constantly reinforced standards of practice, behavior, and ethics. This is about how collectively, all members of the team articulate and reinforce through role modeling the norms and expected practices of everyone in the team. The collective lack of compromise on important professional routines and actions.

Drew: David, let me just give you a quote from this one and you tell me whether you think it describes a high-reliability organization or the most draconian place ever to work. “Staff make positive use of social control mechanisms to ensure that other people behave in a way that is aligned with the unit standards.” That could either be like really, really good or really, really bad depending on exactly how well it happened. 

David: Although social control mechanisms is a really interesting choice of language there, Drew. It reminds me of a case study that I read about prison guards and maintaining order within prisons. There's no way that prison guards can control prisoners through rules and procedures. The study talked about how prison guards provide minor concessions, things like cigarettes, to prisoners to make their life just a little bit easier and a little bit better, in return for those prisoners following the rules and not creating really difficult situations for the guards. 

It's like a social exchange type of theory going on here, and maybe that's a little bit the same here. Maybe high-reliability organizations don't seek those standards of practice from enforcement of rules and requirements, but they seek those standards of practice through this kind of collective social influence or expectation that gets followed. It gets followed because everyone expects it and everyone does it. 

Drew: Yeah, it's amazing how much hangs on one little phrase—social control mechanisms—you could replace that bit with behavioral control mechanisms.

David: Procedural control mechanisms or disciplinary control mechanisms. 

Drew: The next one is monitoring multiple sources of intelligence. They talk about having many different forms of data used to sense problems: fairly standard use of routine clinical data, making it available to all staff rather than just someone sitting in an office monitoring the statistics, and using soft intelligence such as patient feedback and staff's on-the-ground knowledge to learn and improve safety. I think that's a big one.

In a lot of the healthcare scandals, particularly in the UK where this study was conducted, one of the things that has come out is just how much patients and families were trying to raise issues and being ignored by the medical staff. They're talking here about explicitly using that feedback, not as a public relations issue, but as a source of intelligence, as a source of data to identify problems. 

David: We see here, Drew, the use of the term psychological safety as well in this one, which is that openness and listening that we all know to be important for safety within all organizations. 

The last one here, Drew, is the highly intentional approach to safety and improvement. Commitment to safety is collectively pursued and it's also socially legitimized. Everyone in the team values and prioritizes this intentional approach to safety and operational improvement. I think here, Drew, we're seeing this not being about the bureaucracy and the compliance, but it's this socially legitimized expectation that we do things for a purpose and we do them to a level of quality.

Drew: They haven't used the word resilience here, I didn't actually do a search, but I don't recall them using the word resilience throughout the paper. They certainly haven't applied a resilience theory lens at all. What they're talking about here sounds very much like a very conscious attempt to be resilient. 

Doing things like identifying in advance that something's going to be a risky situation and then making sure that we're ready for that situation. Doing things like monitoring for small signs that things are deteriorating and taking early action to put extra resources or more experienced staff into those places to sort out the problems early. Very much this idea of preparing for disruption, recognizing early signs of disruption, and recovering back to normal. 

They're doing that very explicitly in the way they're managing things, including with some form of risk management processes built around it, not just expecting resilience to be there as a grassroots thing. 

David: Yeah, Drew. Look, that matches what we know from one of the principles of HRO, the commitment to resilience, or, say, Erik Hollnagel's potentials in resilience engineering around anticipating and responding. They do mention resilience engineering in the conclusion, where they say the features of safety they identified have strong conceptual ties to the HRO literature and the resilience engineering literature. They make that acknowledgment in the conclusion. 

Drew: David, should we move on to how they refined these things once they got into the later studies? The ones we've given so far were from that first study looking at the one place. By going to the extra places, they can work out how much of those things, I guess, are real, but also see what's actually important. Because they can see where other organizations have tried to replicate some of those things, look at where it hasn't worked well, and work out which features of the original one are the most important to replicate, or what they might have missed in their initial description.

I think a good example of that is the very first one, where they've really expanded on what they meant by competence. In fact, moving from six to seven, it's not quite as simple as just splitting one into two, is it? These are really seven new principles. Competency is now spread across a few different principles. 

David: Yeah, Drew, you're right. They have really sliced and diced this. When we talked through those descriptions just before, there were three or four different components to each of those six themes, or six features if you like, and now they've got seven. We're seeing the emergence of things at a higher level that were previously buried inside a different one, like this idea of teamwork becoming a feature in itself rather than sitting within that monitoring, coordination, and distributed cognition theme. 

We're seeing this ability to mobilize quickly that you mentioned there at the end being pulled right up into a top-level theme. They have really recut the whole thematic analysis and reemerged with seven new features. You can see the family resemblance through it all, but it's a very new set. 

Drew: Should we just go through this new set one by one again, David? 

David: Yeah, let's do that. I don't know how much detail you want to go into, Drew. The good thing about this paper, which we haven't mentioned, is it is open access. We'll be able to link it in the show notes and anyone can go and read the whole paper, which I would strongly recommend anyone involved in safety to do. 

Let's go through these seven, Drew. Should we pause on each one and have a brief conversation about it? 

Drew: Well, there are certainly some things here I'd like to highlight. The first one is a commitment to safety and improvement at all levels. There are some gems buried here. Within this commitment to safety and improvement is the idea that there is an authentic commitment to learning from risky situations and adverse events. 

I think the authentic there does a lot of work because almost every organization has got processes for learning from risk and learning from adverse events. But very often those processes don't translate into actual learning.

David: I think also, Drew, in here buried in this description and people, when you do read this article, you'll see all of the detail right down to the activity levels. In this, they talk about also having risk management processes like audits and risk assessment processes that are known, trusted, and used by all staff. Sometimes words about having processes that are trusted are really important for that collective commitment. 

Drew: Another one that's in there is staff investing in making the unit better. This is just non-management staff finding ways to improve working processes. They talk about small-scale, easily actionable ideas. The fact that staff feel free to do this, get praised for doing it, and feel that it's part of their job is a sign that people are going well beyond procedures and rules and taking the initiative to improve safety. 

David: The second one is now technical competence supported by formal training and informal learning. Rather than collective competence, we're going, okay, let's talk about the technical competence of the whole unit. This competency is not just, oh well, we've got training programs. It's about having opportunities for all staff to be debriefed and to ask questions after experiencing complex clinical situations, and having social spaces accessible to all staff that support informal knowledge sharing, as we mentioned earlier, Drew. We could very easily be [...] into saying, technical competence, we do that in our organization. I encourage you to read the details here. 

Drew: David, my favorite part about this one is that the coffee room, which you mentioned earlier but was buried within the text of the paper, has now been elevated to be part of the formal description of this feature. It doesn't have to be a coffee room; they say any communal social space accessible to all staff. I just encourage listeners to imagine your own organizations: do you have that space? 

Immediately I'm thinking of my own offices where we don't. We've got three separate little coffee cubicles and where you hang out depends on which one your office is closest to, and different groups of staff will congregate in their own spaces. I guess we fail on that one immediately. Just the lack of a shared communal space that everyone uses. 

David: Then I suppose, Drew, even if you built a really nice space, like in some organizations and some offices that we see now, those spaces are just empty all day. One need is to have the space, but the second need is that people want to spend time there, and that when they are spending time there, they engage in these informal learning and sharing activities. That's one part having the infrastructure, but probably a bigger part having the climate that creates that.

The third is teamwork, cooperation, and positive working relationships. You want to highlight anything in that description?

Drew: Most of the stuff here I think is pretty much what you'd expect. The idea that by working and training together, people are aware of each other's roles, aware of skills and competency. That ability to access the competence of other people increases the overall organizational competence. There are words there about respecting each other and valuing other people's contributions, looking after each other. 

Some explicit words about recognizing disruptive or bullying behaviors and managing them effectively. It's caring about staff, and caring about how staff care about each other. 

David: Also in here is that, when deciding who should perform a task, skill and experience are more important than seniority. Also, any differences in opinion between professions or roles are settled openly through reference to shared goals. Bring the team back together, recognizing the value of diversity of thought and alignment around shared goals. 

Drew: This one flows, I think, fairly naturally into the next one, which is about reinforcing safe, ethical, and respectful behaviors. I think we already talked about there was quite a similar one in the previous list of six.

David: Yeah, there was.

Drew: This now spells out explicitly what it means by social reinforcement of these things. It mentions, for example, that errors are considered both as problems and as opportunities for learning. People are encouraged to discuss errors openly, but still to surface them and make sure that steps are taken to prevent them recurring. 

It talks about how newcomers are treated. How they're socialized to the unit's high standards, but also how their own previous experiences are respected. They're not just treated as newcomers. It's like you're welcome to our place, look at the standards here. But also, if you've got something from your last place that you'd like to bring here, then, by all means, tell us about it.

David: The fifth one here is having multiple problem-sensing systems used as the basis for action. There's a bit here about speaking up for safety and people having the confidence that any concerns they have will be heard and action will be taken. We've got this reference to psychological safety, which is carried through from the first set of six for the index site. 

Drew: Unlike last time, we're now explicitly mentioning patients' families. Last time, it was just your patient feedback. Now we're talking about families being encouraged to share their experience both in real-time while they're there, and also afterward. Seeing that feedback from families is key to improving care.

David: Number six, Drew, systems and processes are designed for safety and regularly reviewed and optimized. We didn't hear too much about formal systems and processes other than having the value and the trust in those risk assessment and audit processes. This seems like we've got a bit more formal safety going on in the analysis of the broader sites.

Drew: Yeah, I find it interesting which systems they talk about here, David. What you might imagine when you talk about systems and processes. Then you look at the detail here, it's about making sure that equipment and the physical environment are designed consistent with human factors and ergonomics principles, making sure that we have simulated new systems and processes to check that they actually work the way they're supposed to work before we implement them.

It's easy to just jump on the idea of, oh, we've got systems and processes. But in fact, the key here is whether those things are fit for purpose and the human factors testing of all the systems and processes. The processes which are most important are the ones which in turn test the human factors of the equipment and of other processes.

David: You're right, Drew, there's not much talk here of the quality and safety processes. The examples include the scheduling process for the operating theatre, the availability of medical supplies, the usability of the technology and equipment that's procured, and those sorts of things. That really is about work as done and the tools and processes that are involved in the planning and execution of work.

Drew: The final one is fairly similar to the last one; it's again highlighting resilience. They've called it effective coordination and the ability to mobilize quickly. They've talked about having well-functioning IT systems, and the whiteboards get a special mention now as something in place to share information quickly. They talk about things like structured handovers and safety huddles in order to make sure that we're sharing information about what's going on. 

Talking about having those control room settings again, special individuals who've got specific responsibility for patient flow and coordination between the different care settings. Anything else you want to highlight there David?

David: I suppose just the planning and preparation that goes into the ability to sense, respond, and mobilize quickly—simulation-based training, structured emergency protocols. I like the way it said allowing staff to be both competent and confident in responding to any crises. That investment ahead of time is recognized as a strong feature of safety. 

Drew: David, as we move out of the list of seven, I've got a general observation, which is how many of these things are very healthcare-specific. I could imagine our listeners hearing this list and saying, yeah, but that's not me because I don't do healthcare. Including patients' families? What's that got to do with operating on a construction site?

What we're identifying as good here isn't just a general, abstract picture of what good safety looks like. It really is what good safety looks like in a maternity ward. In fact, some of the assumptions here about how things work may not even be generic to healthcare; they may be very specific to this type of healthcare. 

David: Yeah, Drew. The authors address this topic a little bit. They don't make any sort of strong digs at people or other theorists, but what they say is that what is really important is to produce descriptions of what good looks like that are specific to particular domains and contexts. Even in their case, specific areas of care. 

They're not even claiming that this is a healthcare set of features. They're saying this is a potential set of features for maternity care facilities within the broader healthcare sector. They're thinking that this is more effective than operating at a level of generality where we might say, okay, well, the resilience engineering potentials are to monitor, anticipate, respond, and learn or the HRO five principles. 

I think that's really good that they've done that because if listeners remember a discussion that I had with Jop Havinga in episode 75 on stopping work. We had that discussion about the difference between maybe how stopping work happens in say the aviation sector versus how maybe stopping work happens in the utility sector.

These seven may or may not be relevant for other domains or contexts, but the purpose or the message in the paper is to go and find out for yourself what is relevant and important in your context.

Drew: I agree with that, absolutely. I think the more that you generalize, the more you create something that's easy to spread around and dare I say, to sell as a consultancy service or sell as an idea. The ability to spread and generalize doesn't make it useful to all of those places that are receiving that wisdom. Whereas the more specific it is, the more directly useful it is for other organizations. 

David: Yeah, this is a little bit different from our more common approach in safety and in large organizations, which is standardization and generalization. Well, it's a different approach. Let's just call it a different approach. 

A few more things, Drew, and then we'll do some practical takeaways. This is not a checklist; the authors actually say that these seven features are really tightly coupled, interdependent, and mutually reinforcing. We can't look at any of these seven things in isolation from the other six because they all create safety at a system level. We need to look at all of those features being present. 

What they concluded was that, to a large extent, all of those seven features were present in the index site. But across the other sites, they were only present to various levels and in various combinations across those seven features. 

The authors also cautioned about ignoring those structural factors like staffing levels, the availability of equipment, and the physical environment, which is the discussion that we had around the systems and processes being designed for safety. Maybe that's why that made its way into a top-level theme as well, because the authors know it's not enough just to create these team factors if people are using poor equipment with not enough resources. 

Drew: If there's one weakness with this ethnographic approach, it's that we're describing what good looks like, not how to get good. There's really, I think, a dangerous and difficult step in translating from this is good to this is what other people should do.

The authors crossed that line a little bit. They do actually suggest that you could use some of these ideas for purposive improvement. In other words, you could take some of this stuff and try to make yourself better by following it. They've referenced a couple of citations, which they say are evidence that this can be done. I actually tracked down the papers because I was a bit skeptical, and both of those other papers weren't nearly as good as this one. 

They were basically talking about attempts to improve culture in healthcare organizations. The evidence at best was if you try to improve culture, you can improve culture. Which is very different from saying if you try to become like this organization, you can become like this organization with the same safety improvements. 

The analogy I'd use is like looking at a good football team and saying, what makes this a good football team? It's the fact that the guys are all tough. Okay, so I'm going to make people tough, without ever realizing that in fact, maybe a good diet was what got them to be good [...]. You can't see any of that good diet. 

Maybe what made this organization display all of these things is the fact that senior management had extra staff in its organization, and with that extra staff and time, people had the attention to devote to all of these things. Maybe the reason other organizations don't replicate it is they simply don't have the time to do all of these extra things for safety and for information sharing. 

David: It's like observing the output and then all you can do is make assumptions a lot of the time about the inputs in the processes that have created those outputs. 

Drew: Yeah, and I think sometimes those assumptions will be valid and sometimes those assumptions will be well off base. An example I'd draw out is some of the stuff about how people in this organization share information.

Absolutely, you could look at that and say, maybe we could deliberately and tentatively try to do some of this in our own organization. We noticed that people are writing stuff up on whiteboards to share stuff. We look at our own organization and there just is no space for people to share information. Maybe we could give them one. But maybe in your organization, a whiteboard would be an incredibly silly idea. Just giving everyone whiteboards because this organization uses whiteboards wouldn't make sense.

David: Then the whole social psychology and change aspects of having those whiteboards done to people versus done with people and did they choose the whiteboards? Do they even value the use of them? Is it a compliance activity, or is it a value-added work activity? 

There's lots of complexity in it. It's one thing to observe something performing well and then describe the things that you observe and that you think are creating it to work well, but it's a whole different thing to then try and make something work well that's not currently.

Drew: Yes, that's not a reason not to do this sort of work. I think this type of work is incredibly valuable. It's not a reason to not take this work and try to see what you can learn from it for your own organization. 

It's just that any step once you start translating from describing to trying to do yourself should be done just as humbly and as tentatively as the original work was done. We don't want to translate this into seven principles for how to be a good maternity ward, that then becomes the seven principles of safety, that then instantly become seven principles for managing safety on construction sites with no thought for how the ideas adapt and change.

David: Let's talk about some practical takeaways then. One general caution, a bit following on from what you said: in every domain and every context, there are going to be features and capacities that create and maintain safety on the frontline. We've got broad theories, like the ones referenced in this paper, resilience engineering and high-reliability organization theory, that give us some broad categories and high-level descriptions of what some of these features might be. 

This study demonstrates that it may be possible to pick very specific settings like maternity care within the healthcare sector and identify the features of safety right down to these things like the placement and use of whiteboards for communication and things like that. 

That should practically give us a lot of inspiration and motivation to look at our own organizational settings and try to understand what those features on the front line might be creating reliable performance. 

Drew: Another takeaway is that this is a sort of call for safety professionals to reflect on how well you understand what safety looks like in your own organization. David, I think you've noted here that only two of the seven features are things that are directly influenced by the safety management system. The other five are really things that exist in that frontline ecosystem. They exist in the spaces, in the people, in the processes that are below the level of things which are formally managed.

David: Yeah, Drew. Just having another look at the practical takeaways, I was starting to think about what our listeners might do with this. Two of those seven features might be influenced by the management system. One is technical competence, which is at least in part shaped by it, although not the informal sharing of information and coaching. The other is the systems and processes: the usability, human factors, fit-for-purpose equipment, and resourcing levels. 

All of the other features that are claimed to be the creators of safety are social, are about climate, are about team dynamics, and sociological type factors. Those things aren't things that you write down and create through safety management systems. They're created very differently in organizations. 

Drew: I think it's an interesting question, what safety should do with this sort of information. Is the job of safety to try to support these things? Is the job of safety to just ignore them and say, well, these are great, but they're not our responsibility, let's not touch them? Is the job of safety to try to bring them into the formal systems?

I'm asking those as rhetorical questions because I think the answer would be different on a very case-by-case basis, but it is something that we should stop and think about. This is important for safety. Do I want to touch it? Do I want to help it? Do I want to try to own it? On a case-by-case basis, think about what value we add as safety people. 

I think for a lot of these things, maybe there are structural ways that we can support them. For others, it's very dangerous to tinker with them at all.

David: Drew, that's a great area to make an invitation to our listeners. For those who want to contribute to this discussion, have you done anything to understand the frontline features of safety in your organization, and are you doing anything to try and enhance the tacit knowledge and informal ways of working that are creating reliable and safe performance day in and day out? Are you doing anything specific in your context in your domain? I would love to hear about it.

Drew, if I fire the last question to you, the title of this episode is "What does good look like?" Your answer, what does good look like?

Drew: Good is not something that we should over-generalize about, David. We have an open-access paper, it's easy to read, and the results are in tables. Go ahead and look for yourself, either in your own organization or, if you're interested in maternity wards and healthcare, in this particular paper.

David: Perfect, Drew. Good for you might be different than good for me. That's it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Send in any comments, questions, or ideas for future episodes at feedback@safetyofwork.com.