On today’s show, we discuss the practical characteristics of a high-reliability healthcare organization.
To frame our discussion, we use the well-known theory of High Reliability Organizations (HROs). Several researchers have written on this topic, and we draw on their work during our chat.
Quotes:
“A number of organizations and industries have been linked to HRO theory over the years for maintaining somewhat error-free operations over an extended period of time.”
“The technical name we use when talking about the position of the researcher, compared to the research they’re doing, is ‘reflexivity’.”
“It’s what model lets us make use of the local expertise and the professional expertise...as we’ve shifted to the model that gave primacy to the physicians, we lost that teamwork…”
Resources:
Roberts, K. H., Madsen, P., Desai, V., & Van Stralen, D. (2005). A case of the birth and death of a high reliability healthcare organisation. Quality and Safety in Health Care, 14(3), 216-220.
David: You're listening to the Safety of Work podcast episode 14. Today, we're asking the question “What are the practical characteristics of a high-reliability healthcare organization?” Let's get started.
Hey, everybody, my name's David Provan and I'm here with Drew Rae. We're from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work podcast. If this is your first time listening, then thanks for coming. The podcast is produced every week and the show notes can be found at safetyofwork.com. In each episode, we ask an important question in relation to the safety of work or the work of safety and we examine the evidence surrounding it.
Drew, what's today's question?
Drew: The question for this episode is: what are the practical characteristics of a high-reliability healthcare organization? We figured that we're likely to do a number of episodes that look at high-reliability organization theory, or HRO theory. We didn't want to just ask, “What is an HRO?” because that would be a bit broad. We want to be specific with the question and the research.
We also thought it would be a good chance to look at a theory that I don't think we've touched on yet, and an industry sector, healthcare, that we haven't talked about a lot either. So we're putting them together: high-reliability organization theory and the healthcare industry. We'll say a little more about HRO theory later in the episode, but for now, it's a theory that comes mainly out of the 1990s, and it tries to explain why organizations that operate in high-risk, complex environments seem to have fewer accidents than their situation might suggest.
There are a number of authors associated with it; the ones you may have encountered most often are probably Weick and Sutcliffe. We'll mention a few of the other HRO authors later in the episode.
HRO theory presents different characteristics of organizations. Just for the introduction, the Weick and Sutcliffe version says that these are organizations which are preoccupied with failure: they're always looking to find the little things that go wrong and fix them, rather than assuming that everything is going well. They try to be very sensitive to operations, listening to how the frontline is working. They have a commitment to resilience, the ability to adapt and cope when little things go wrong. They're reluctant to simplify: they don't accept straightforward, dumbed-down quick explanations; they want to know what's really going on. And they defer to expertise, which is meant to be in contrast to deferring to formal management and hierarchy.
Each individual principle varies from author to author, but I think together they give a sense of where we're heading here, which is looking closely at the frontline of the organization as the key to safety.
David: Yeah. A number of organizations and industries have been linked to HRO theory over the years for maintaining somewhat error-free operations over an extended period of time. These include commercial nuclear power companies, passenger aviation companies, the US Federal Aviation Administration's air traffic control system, US Navy aircraft carriers, and some high-speed rail organizations around the world. These are organizations that operate not just for years but sometimes for decades in very high-risk and complex environments without so much as a major incident.
Over the last 20 years, and based on a lot of this writing, many organizations have attempted to make their operations more reliable, with some specifically implementing and following an HRO strategy. Some of our listeners may be involved in an organization that has programs built around these principles as an explicit part of its safety and operations strategy. I know that for a while after Texas City, BP had quite a significant HRO implementation program going on.
During this same time, researchers have continued to deconstruct these organizations and tried to flesh out explanations for why they might be so highly reliable.
Drew: Yeah. I suspect that one of the original researchers, Todd LaPorte, would be horrified at where HRO has gone. One of the things he was very afraid of was that people would pick this up and use it as a solution, when all he wanted to do was describe. In fact, he originally argued against the name HRO. He thought maybe they should be called something like "reliability-seeking organizations" to indicate that this wasn't a prescription, just a description of how some particular organizations operated.
People have certainly taken HRO as a prescription. I have to admit, I'm personally a fan of the principles. I think they create quite a good recipe for organizations that are looking to improve safety; there are worse ways to go than trying to follow the HRO principles. But researching what makes an HRO is a really difficult thing to do, because how do you know what a high-reliability organization is? You can't just study one organization for 10 years. An ideal research program, if you had infinite time and infinite money, would be to study 50 organizations for 10 years, then take the ones that didn't have an accident and go back through the data to see what made them special.
David: It is quite hard to research an HRO when you've got different explanations and theories saying what an HRO is. You've also got the passage of time over which an organization would need to demonstrate reliability before it could be considered one. As a researcher, you end up in this situation, and I think this has been one of the challenges for the applied nature of HRO research: you have to find these organizations, then justify why they are HROs and how they're operating.
Like we've talked about in other episodes, even if you can convince other people that these organizations are highly reliable, you still have to show that the things you're observing and reporting are actually the things linked to them being reliable. It's a very difficult space to research, but I did find a paper. I found it while preparing to present at a conference last year for the Australian and New Zealand College of Pediatric Intensivists.
I went along and spoke about different types of safety theory, particularly newer safety theory like HRO, Safety Differently, and Safety-II. The paper is titled "A Case of the Birth and Death of a High Reliability Healthcare Organisation". It was published in the journal Quality and Safety in Health Care in 2005. The four authors are Karlene Roberts, Peter Madsen, Daved Van Stralen, and Vinit Desai.
Drew, do you want to give us a bit of a 101 of the HRO history, particularly in reference to some of these authors?
Drew: Sure. To understand who Karlene Roberts is and just how important she is to HRO, we need to go back quite a bit in time to when HRO theory was first established. There were three people involved at the start: Todd LaPorte, a professor of political science who had done a lot of military service; Gene Rochlin, a physicist who was moving into social science and was very interested in rigorous research methods in social science; and Karlene Roberts, a professor of organizational behavior.
What was revolutionary about these three people coming together was that they were all interested in studying successful organizations and programs rather than accidents. That wasn't what anyone else was doing in safety; safety was a game of taking a big accident and explaining it. I think it's also worth pointing out a bit of political background, because you need to understand this to be able to interpret HRO.
David, contradict me if you disagree, but I think it's fair to say that most of sociology and the humanities are a bit left-wing, and along with that comes an innate skepticism of big business and technology. That's particularly true if you ever actually read the full book Normal Accidents. You can read it as an explanation of accidents, or you can read it as a diatribe against nuclear power.
David: Yeah, and against the politics and economics surrounding the energy industry as well. I agree with you, Drew. I think sociologists sometimes aren't quite objective about the organizations they're looking at when they're trying to understand what's going on.
Drew: Yeah. I don't want to pretend that LaPorte, Rochlin, and Roberts were objective. What I'm saying is that they didn't come from the same theoretical background as Turner and Perrow, or even some of the people who did HRO work later. Because of their backgrounds outside sociology, they were very optimistic about the ability of organizations to master complex technology and handle difficult situations. They were fans of the American military; rather than talking about the military-industrial complex, they looked at American military machinery as a key example of complex organizational machinery capable of undertaking big projects successfully.
While most of the focus in HRO has been on the ideas, what an HRO is and what its characteristics are, I really think the real innovation is not the theory. It's the collection of attributes that come out of it, and it's the methods that are exciting: this focus on close ethnography, on studying normal organizations rather than studying disasters or times of change.
What made it particularly exciting was just how much access they had. I worked for the Navy as a civilian for two years and I didn't get to go on a ship. These authors were spending weeks at sea on aircraft carriers, multiple times, to do their studies. It was fantastic. The criticism they faced is that they weren't being skeptical enough, that they were going native, selling out to the people who gave them access, and buying into the success narratives that organizations tell about themselves. It's a strength and a weakness.
David: You can understand how that can happen. If you aren't quite sure exactly what you're looking for, you're not trying to deconstruct an accident but to understand an organization in its normal operations, you're inside the US military-industrial complex, and you're listening to all the proud, patriotic narratives and the can-do attitude in that organization, then it would be easy for outsiders to criticize the researchers for just hearing the stories, taking them as given, and replicating them.
But these authors and researchers, like you said, were quite rigorous, and they've continued to publish over the last several decades. You mentioned Karlene Roberts; she is one of the world's foremost experts on HROs. She has published consistently on HROs for the last 20 years, and anyone else you can think of who's written about HRO, she's most likely published with them. She also led the move away from military and aerospace organizations towards industries such as healthcare, like we're talking about today.
Her co-authors have also continued to publish extensively on topics of high reliability and organizational learning. One of the authors, Van Stralen, is now a professor in the Department of Pediatrics at a university in California. These authors are very credible researchers of high reliability in the healthcare environment, but the methods they followed in this paper are not that clear.
Drew: Yes. The paper can best be described as a retrospective case study. It's written entirely in the past tense, and it tells the story of what happened in an organization as it tried to be an HRO, then as the people who championed that movement left and the organization went back to being a more traditional hierarchy. I think maybe the motto of this podcast should be "all methods have limitations."
This paper has strengths and it has limitations. We don't have the original data; we just have the interpretations of the authors. This is always the trouble when there's no carefully explained methods section saying what data they collected and how they analyzed it, and when the data isn't available for anyone else to look at. The question is: how much do you trust the authors' story about what happened?
There are a few things I look for. If you lay out your methods, how you gathered the data and how you interpreted it, that's the first thing I need before I can start trusting you. The second thing is being open about your own relationship to the research: how you are connected to it. Particularly with this one, the story talks about two doctors who led the changes in the organization, and there are several doctors among the co-authors of this paper. It doesn't say whether these authors are or are not the people who tried the change. I find that a bit disingenuous.
David: I think it's difficult. Having done some ethnographic research and qualitative analysis myself, I was really, really curious as to why they were in this organization at the time, what triggered the research, whether one of the authors was an employee or a past employee, or whether one of their students just said, "Hey, this is what's going on in this hospital, you should have a look at it."
It's really important, I think, for credibility and trust. On the one hand, like you said, we've talked about the authors and the journal the study's published in; on the other hand, there's being very transparent about why that research method was chosen, and then letting readers form their own view about what you're saying and how generalizable it might be. I was a little disappointed that we didn't get more information about that.
Drew: Yeah. The technical name we use when talking about the position of the researcher relative to the research they're doing is reflexivity. The key for non-academic readers is just to remember that no researcher can ever claim to be objective or unbiased. That's not what we're looking for. What we're looking for are researchers who think and talk about how they're positioned with respect to the research; it's that reflexivity that gives us grounds for trust.
You trust someone who says, “Okay, I was there, maybe I'm biased, but this is my point of view, this is what I saw,” rather than someone who just writes in third-person as if they magically got the information through a crystal ball.
David: Yeah. As an aside, it wouldn't be a bad practice for organizational incident investigators to do that as well. When organizations claim that their incident investigator from the Safety Department is independent, maybe that investigator should write a page at the front of the investigation about their relationship to the activity and the organization, their beliefs about safety, and the work involved, before they start writing anything about the incident itself.
Drew: Yeah. I’d love that to happen, but let's skip forward and talk about the context of the research and what it found.
David: Yeah. First, let's tell the story of the research. The research was performed in a single pediatric intensive care unit within the Back Bay Children's Hospital in Boston in the US. The authors describe an 11-year period, as best I can work out between 1993 and 2004, during which they claim it operated as a high-reliability organization, and during that time it treated more than 2.5 million people.
It was one of the largest pediatric intensive care units in the USA, with 1704 admissions in 1996. That's almost five admissions per day of kids in really serious health situations. The industry-average mortality rate for these types of facilities was 7.8%; Back Bay Children's Hospital had a mortality rate of 5.2%, so much lower. But within those numbers, obviously, when you just quote a rate or a percentage (like you said), you hide the numerator and the denominator, and they performed many higher-risk services that no other hospital in the region was able to provide.
It actually took transfer patients from other intensive care units in particular situations that other hospitals wouldn't treat. So while it had a lower mortality rate, it was arguably treating far more serious cases far more frequently.
Drew: There is a bit of plausibility here to their claim. At the very least, this is a high-performing unit. They're doing the most dangerous stuff, and they're doing it with fewer fatalities over this period than people who are doing less dangerous stuff.
David: Yeah. It appears this all started off the back of some incidents in 1993, after which the organization specifically focused on becoming an HRO. The authors go on to describe some specific characteristics of the hospital ICU during the period it was operating as an HRO. I'd like to list these practical characteristics, because the authors steered away from the high-level elements and got quite specific about what they believed to be the characteristics of the organization.
They focused on supporting the bedside caregiver. In most cases, this meant the caregiver had a whole lot of freedom in how care was given to the patient they were there to look after. They fostered goal-directed team formation: they didn't form teams by status, role, structure, or anything else. They looked at problems and built teams around them, based on the teams having the capability and experience necessary to deal with the problem they were facing.
Naming, shaming, and blaming were not accepted after a bad outcome. They derived their care and their plans for patients through problem-solving. They all understood that there were many solutions to particular situations, so rather than follow protocol, policy, or algorithm, they solved problems day in, day out on their own, almost in isolation from the protocols or policies. They developed specific objectives for each patient, so they didn't have standard care templates for different types of conditions or situations. Every patient was treated uniquely and specifically, [...] treatment plans were built around that.
The caregivers had the freedom to try specific interventions for particular patients. If the nurse or any other caregiver felt that something was necessary based on the situation, they had the freedom to do it and build a care model around it, and there were no "mistakes": whether a particular treatment turned out to be helpful or not, all that did was give more information, making it more likely that the next treatment would be helpful.
A couple more, Drew, and then I'll get your views. There was always attending-physician support, in person, within 20 minutes. Any deficiencies in care were discussed early and used as teaching opportunities, because people didn't feel shamed or blamed for what was happening. So there's a list of eight specific things. Drew, what do you think about that list of practical characteristics of an intensive care unit within a hospital?
Drew: What I find really interesting about this list is that often, when people tell the story of their heroic attempts to improve safety, they give a list of things that they did, and that list is obvious. You're left half thinking, "Well, of course you did these things," and half thinking, "Why hadn't you already done these things?" Whereas in this list of eight, there are some quite specific, conscious choices that they've made. That's most obvious when you look at the alternatives.
Lots of people would say that they operate a just culture in investigations. Here, they've got a specific policy that you cannot name the person who was involved in the incident. You can't call them out and say that was a mistake. For lots of organizations, that would be a really hard thing to do. You're giving up your opportunity to hold someone specifically to account. That's a choice you're making.
Deriving care from problem-solving rather than protocol or algorithm sounds great, but in fact you're making a conscious choice not to have a system-focused healthcare service. You're saying, "Let's move away from having a system, away from having it predictable, away from having it reliable in the traditional sense, in return for making it case-by-case, in a way that some people would call ad hoc." It's not unquestionably good; it's a conscious choice, made in the belief that making this choice is both brave and worth doing.
David: Yeah. We'll come back to that choice a little later, because your comment that some people might see it as ad hoc becomes very relevant to the rise and fall, if you like, of this organization as an HRO, at least in the eyes of the researchers. The researchers do go into more detail on each of these characteristics, and they conclude that this feature, decisions migrating to the best person to make them, which in this case is usually the bedside caregiver, most often a nurse and very rarely the attending physician, is one of the most important things in an HRO for getting this type of performance outcome.
They go on to talk about the authority gradient between surgeons, doctors, and other hospital staff, which can lead to tragedy in healthcare. We've seen the same thing talked about with flight crews in aviation, or in any other organization where hierarchy silences open discussion and decision-making about the operational situation people face.
Drew, how does this sit with other theories, this idea of decisions migrating to the best person to make them and taking effort to reduce hierarchy and authority gradients? We see that in a lot of theories these days.
Drew: We do see it a lot more often. I think this is what causes a lot of the vehement arguments in safety, often between people who disagree without understanding each other. It's very telling that the overall statement is something everyone can agree on. They say, "Decisions migrate to the best person to make them." Who would disagree with that? Every decision should be made by the best person.
What's interesting is that they see the best person to make the decision as usually being the bedside caregiver. That's the bit that's controversial. It's a decision that, in effect, sacrifices command-and-control, sacrifices predictability (as we'll see later on), and sacrifices the ability to put in place systemic improvements through evidence-based care, in return for giving authority to the bedside caregiver and promising not to get mad at that person if they get it wrong. You can see both how this could be very effective and how it could be very disconcerting to the people managing the organization.
David: That's exactly right, and that's exactly what happened. So what happened after these 11 years in which this hospital intensive care unit was operating as a high-reliability organization? Two of the attending physicians, who were the advocates and supporters of HRO operations, and who, being senior in the hierarchy, were able to influence the way the operation ran, left the organization within 12 months of each other. They were replaced by several more physicians, and a number of additional physicians and resources were added, perhaps because the hospital had been so successful that it expanded the services the intensive care unit provided.
Following these changes, the intensive care unit reverted to what the researchers called a "medical model", which treated the physicians as the leaders and everyone else as followers of the treatment decisions made by the physicians. Teams were formed by status, role, and roster. Protocols and algorithms were put in place to treat patients following initial diagnosis.
All decisions were made by the physician, who held all the essential authority over what happened. They adopted evidence-based medicine as the basis for treatment; the logic being that in most situations this treatment has produced the best outcome for a patient, so this is the treatment we are going to follow, regardless, to some extent, of the individual patient's presenting condition.
Drew, that's starting to sound much more like a "normal", in inverted commas, organization.
Drew: Yeah. I think it's pretty interesting that in HRO theory we're supposed to defer to expertise. You could argue that both the HRO version and the normal version defer to expertise; there's just a real contrast in who counts as an expert. When the unit was in what they called HRO mode, the physicians were in charge, but they exerted their leadership by having the confidence to delegate authority for making decisions, to the point where some of those decisions were outside their immediate control. They balanced that a little by making sure they were always within 20 minutes. They were sacrificing some degree of control but still able to get there and help out if needed.
Whereas the model then moved to one where the physicians were still in charge but exerted that authority much more tightly, to the point where other people were not allowed to make decisions and had to refer them back to the physicians. It's a really difficult question: who's the expert? Is it the physician, who understands the disease, probably has better and more up-to-date knowledge of the evidence, and is probably doing more professional reading about a particular diagnosis and particular treatment modes? Or is it the nurse, who has much closer knowledge of the particular patient, access to the family, and all of that local information?
I think part of the problem here is that we're not framing the question very well. It's not a choice of one or the other; we've got both available to us. The question is: what model lets us make use of the local expertise and the professional expertise? The suggestion is that as we shifted to the model that gave primacy to the physicians, we lost that teamwork, which meant we kept the physicians' knowledge but lost the ability to access the nurses' local knowledge.
David: Yeah, that's a good point. Like you said, you've got access to both, and it really depends on what support you wrap around the way your organization operates. What happened here was that it wasn't just that decision-making, or the responsibility for decision-making, reverted to the hierarchy and the physicians; some of the support, discussion, and teamwork fell away at the same time.
The authors concluded that the new physicians in the organization considered the previous model of open decision-making by the bedside caregiver to be unsafe. From then on, the nurses and others could no longer suggest treatments; if they did, they were criticized for making suggestions outside their role responsibilities. Physicians often provided assistance by phone rather than in person, so bedside staff were left feeling unsupported with unstable and deteriorating patients.
At that conference where I presented and talked about this paper, the pediatric intensivists' conference, the regulator was also presenting cases of mortality, mistreatment, or misdiagnosis within intensive care units. One of the cases they presented was a lady who'd passed away from a deteriorating condition, a spleen rupture, at about three o'clock in the morning. This was an intensive care unit, in Queensland actually, that only had physicians on site during daylight hours and an on-call physician overnight.
The physician would do their last rounds at five or six o'clock, go home, and be on call, but the customary practice for the nurses was: don't call the physician unless you absolutely have to. What happened was they'd waited, waited, waited, and finally made the call at two or three in the morning. At that point, the physician said, "I'm going to be in in a couple of hours, so just do this and I'll be in." The patient passed away.
The regulator was actually feeding back a lot of the same things you just mentioned about this paper: the culture, the support, and the proximity of that expertise and decision-making… It's funny that one of the actions out of that was actually, "Sorry, a 24/7 physician is required at that particular hospital if you're going to take those types of patients and operate in that type of decision mode."
Drew: I can think of nothing scarier than the job of a young nurse, or even a more junior physician, wondering whether to make that call when you don't know whether things are under control or not. They seem to be getting worse, you know you'll get yelled at if you make the call too early, and someone's life is in jeopardy if you make the call too late. That's just a terrible situation for people to be stuck in.
I think the other thing about that case, and this is something that crops up far too often, is when the family is trying to tell people that something's wrong and they're not being listened to, because the family is having such brief encounters with each healthcare practitioner that the practitioners can't see that this is someone with genuine information to give, not just a family member panicking. This is genuinely something important they're trying to alert you to.
David: Yeah, absolutely. The authors conclude that the result of these organizational changes was that the organization reverted, or transitioned, from being a high-reliability organization to being, in their own words, a low-reliability organization. They supported this claim by pointing to significant performance deterioration in statistics like infant mortality, re-admittance to the ICU within 48 hours, and the number of days patients spent in intensive care.
They asked how this situation could persist when all of these performance indicators were deteriorating so significantly. They concluded that it was because the organization looked normal to its members: the new people who came in had said, "This is the way an organization runs," and progressively, the people who had been involved in the organization for a long time left and went elsewhere because they were not satisfied with the way the organization had transitioned.
We've talked about organizational and institutional memory before, but once you get a change of people and a change in the mode of operations, organizations can actually change very, very fast. In this case, within 24 months the organization was, in the authors' opinion, almost unrecognizable.
Drew: That's the part of the story that I find difficult to know how to interpret, and I think this comes back to what we can and can't practically take away from a story like this, which has a little bit of a hero narrative to it: while we were in charge things were going well, then other people took over, changed it, and it got worse, so the takeaway is you should do it our way. I think there are practical lessons to be learned from both the success and the transition back. We need to understand both of them in order to understand what we can do in our own organizations.
David: You’re right. For the transparency for our listeners, we don't have the exact data, we don't know exactly how much that performance deteriorated. We don't know exactly over what period of time and we don't know exactly the relationship of the authors to the organization that was being researched.
Notwithstanding that, let's talk about some practical takeaways. The authors, back in 2005, predicted that the world would need to become full of HROs as our technology and society became more complex and we expanded our risky technologies; they were talking about things like commercial spaceflight and what we now know as the Internet of Things, autonomous vehicles, drones, et cetera. We're increasingly living with these complex and high-risk technologies, business models, and organizational arrangements.
They provide three of what they call their clear takeaways, and this was 15 years ago. The first is that you need to provide frontline workers with the flexibility to meet changing situations, relying on their experience rather than needing to follow a procedure or ask a supervisor. The second is that you need to encourage teamwork and form teams to address specific challenges. And the third is that you have to avoid naming, shaming, and blaming in the organization.
Drew, what do you think of this list, autonomy, teamwork, and psychological safety? It's a fairly contemporary list for 2005.
Drew: Yeah, it's definitely very forward-looking. I think there's very little to challenge in that list, although people will still sometimes be uncomfortable with providing frontline workers with autonomy. I would suggest that one does need to be situation-dependent; it depends on where the skills and expertise are.
But one thing I don't think people did predict in 2005 is that one of the ways the world has changed is not just the complex systems and risky technologies, but just how much skill we have, and how many expectations we place, on our frontline workforce; how much more resource they have in terms of educational background compared to 20 or 30 years ago; how much of our institutional experience and knowledge is in those brains; and how much technology gives us the capacity to support them at the frontline.
We don't need hierarchies to make decisions when everyone has such ready, direct access to information. It's so much easier to get a physician within 20 minutes of anywhere when we have robots and video cameras.
David: Yeah, exactly. Those changes, I think, are really important for our listeners to consider in their own organizations now. The authors do acknowledge that the processes they've described for a high-reliability organization are very costly in terms of time, effort, and energy. They talked in detail about having redundancy in the organization, having multiple people crossing over roles, and having two caregivers for really critical patients.
When hierarchy and efficiency came back into it, many of these time-intensive and effort-intensive things were re-engineered or redesigned. The authors also said that these processes can very easily fail and revert: if there's not a constant push in the organization to drive for them, then normal organizational operations will revert it back to the typical organization we might all be experienced in working within.
Drew: I think something we can take away for any organization, not just healthcare, is the focus they had on creating and maintaining the capacity for teamwork. Their safety team wasn't focused on procedures, and it wasn't focused on structures; it was focused on developing and keeping those skills.
David: Finally, the researchers also acknowledged that different parts of an organization might need different approaches. The traditional medical model, they conclude, does not suit emergency medical situations, where every case is unique, decision-making is very time-critical, and lots of confounding factors are at play, so you need to treat every situation on its merits.
But the authors do conclude that that's not necessarily the case in all aspects of healthcare. In the many parts of healthcare without the time pressure and with more standard diagnoses, they acknowledge that evidence-based medicine, with diagnosis and treatment by protocol, is probably an effective and efficient way to run that particular aspect of healthcare. Our listeners might want to ask: does your organization take a one-size-fits-all approach, or do you need to think differently about how you manage your operation depending on the different types of operations you run?
Drew: This is, I think, the first time we've run seriously overtime on one of our episodes, which shows how much you and I have to say about HROs. Maybe the first invitation to our listeners from this episode is that we're interested to hear from you: how much active interest is there in HRO theory? For all the online debate at the moment about Safety-I, Safety-II, and the new view, I think HRO still has a lot to offer. If people are interested, I'd certainly appreciate the opportunity to talk a little bit more about it.
The other thing is: what are you doing in your own organizations to promote and enable local autonomy and flexible decision-making? Or, if you're not doing that because you disagree with it as an approach, we're interested in why that is, what your thoughts are, and where you think the balance sits between autonomy and centralization. Where do you feel the sweet spot is?
David: Drew, today we asked the question "What are the practical characteristics of a high-reliability healthcare organization?" Since we're out of time, we'll point you back to that practical list of characteristics earlier in the podcast: there were eight points that were quite clear about what was going on in this organization when the authors claimed it to be a high-reliability organization.
That's it for this week. We hope you found this episode thought-provoking and ultimately, useful in shaping the Safety of Work in your own organization. Send any comments, questions, or ideas for future episodes directly to us at feedback@safetyofwork.com.