On this episode of Safety of Work, we discuss whether the Dreamworld water park accident was a foreseeable outcome.
“When I was reflecting after this incident, I don’t remember a lot of safety conversation at all.”
“There was a number of operational incidents associated with these rides; to do with, kind of, like, spacing and separation of rafts on the ride.”
“I think in this particular case, we can almost see the way that hindsight bias is causing the selectivity.”
Coroner's Inquest into Dreamworld Incident
Hawkins, S. A., & Hastie, R. (1990). Hindsight: Biased judgments of past events after the outcomes are known. Psychological Bulletin, 107(3), 311.
David: You're listening to the Safety of Work podcast episode 21. Today we're asking the question, how foreseeable was the Dreamworld accident? Let's get started.
Hey, everybody. My name’s David Provan. I’m here with Drew Rae, and we’re from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work podcast. If this is your first time listening, then thanks for coming. The podcast is produced every week and the show notes can be found at safetyofwork.com. In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it.
Today, we've got another slight deviation in our regular format. We're hoping our listeners might get some hybrid between Drew's old podcast, DisasterCast and now our current podcast, the Safety of Work all wrapped up into one episode. We're going to be talking about the accident that happened on the Thunder River Rapids ride at Dreamworld, which is south of Brisbane in October 2016.
It's very likely that some of our listeners, particularly in Australia, may know some people who were involved in the accident or its aftermath over the last couple of years. In addition to the families of the victims and the people involved, there are also a lot of eyewitnesses, lots of employees of Dreamworld and its parent company, the ambulance service, the police service, and the Office of Industrial Relations, as well as people who've been supporting the coroner's inquest. We can't imagine how distressing this may have been. We definitely don't want to add to that distress.
If you have been affected by the incident or something similar, then please feel free to hit skip on your car console, or your podcast player. Go and listen to something more enjoyable. We don't expect you to keep listening.
Drew, this question, how foreseeable was the Dreamworld accident?
Drew: Just before we start, I've got a couple of disclaimers that I need to throw in, just because it's hard to find anyone who's not connected somehow to the Dreamworld incident. I've got a couple of peripheral personal connections I need to declare. I happen to know counsel representing one of the families, and I co-supervised a PhD candidate with one of the expert witnesses. I don't think either of those are big conflicts of interest, but I always think it's worth being open about those things. David, I think you actually have a connection to this one as well.
David: Yeah, when I thought we should talk about this because it was in the media, we seem to have a good response when we talked about the Brady report a month or so ago. I actually worked on this Thunder River Rapids ride as a ride operator when I was doing my undergraduate Workplace Health and Safety degree. So, between 1998 and 2000, I was a ride operator on weekends and university holidays on this exact ride at this theme park.
Drew: Before we talk about the accident itself, one of the reasons David and I wanted to talk about this is that it was very topical and interesting. I was a little bit reluctant until I read through the coroner's report. I thought there were a few things in there that really illustrate something that's important for all of us to talk about: hindsight bias.
For this part of the podcast, I'll be drawing on a few different papers. If you want to read more about hindsight bias, we'll cite those papers in the show notes. I would recommend that those of you who are a little bit interested, do chase some of this stuff up because hindsight bias gets thrown around a lot when we're talking about accidents and interpreting them afterwards. It's a commonly misunderstood phenomenon. The biggest part is it's not an insult to say, "Someone has hindsight bias."
Hindsight bias isn't just an accusation you throw to criticize someone who is looking at an event that's already happened. Hindsight bias is a universal psychological phenomenon. We all experience it. There's no escaping it. Everyone has hindsight bias. This was known about for quite a while before anyone gave it a name.
Before anyone gave it a name in the psychological literature, there were two observations. The first one is to do with probabilities. People overestimate the probability of a thing after it's happened. So, if you ask 100 people, "What's the chance of it raining tomorrow?" they might say 20%, because they don't know the outcome. If it does actually rain, and you ask 100 people again to put themselves in the frame of mind they were in yesterday (given the information people had back then, what was the chance of it raining?), they won't say 20%. They'll give a much higher number.
That's related to another phenomenon which is that people overestimate the knowability of a thing after it's happened. Often to the point where they claim that they always knew it, even though they never did anything to prove that claim that they always knew it.
To find good examples of this, you really need to find common pop culture references. I'm hoping most of our listeners have heard of either the Jimmy Savile or Rolf Harris cases. If you haven't, both of these were very popular celebrities. Both later in their lives were credibly accused of sexual assault. Jimmy Savile was already dead when the accusations became widely discussed. Rolf Harris was convicted after the accusations. In both cases, lots of people said that they always knew that Rolf Harris and Jimmy Savile were just obviously creepy guys, but that's after the accusations were widely known and accepted.
Most of those people had never said anything like that before the accusations became known. Really importantly, some of these were the exact same people who had heard earlier accusations and dismissed them, saying, "There's no way. That's not credible." Afterwards, everyone knew, but beforehand, not only did no one know, they dismissed it. They said it's not possible.
The guy who's usually credited with putting all this together as a properly researched thing is Baruch Fischhoff. He is the one who said that this is a phenomenon that needs to be investigated and researched. He's set up lots of experiments, both to demonstrate that it exists, and to rule out alternative explanations so that we know what's going on.
Like most researchers, he wasn't great at naming, so he had a few goes. He started off calling it “creeping determinism,” which didn't really stick, then he talked about a bunch of “hindsight effects,” and then finally came the name “hindsight bias.” He introduced all three of those names. It's hindsight bias that seems to have stuck.
If you want to know the history of how the experiments were done, in particular all of the people who came afterwards—just confirming, extending it, pinning it down—there's a paper from 1990 by Hawkins and Hastie that gives a good history of those experiments. Anytime you think, "Well, actually, maybe this is what's happening," go and look at Hawkins and Hastie’s paper. Someone has done an experiment to check that. According to Hawkins and Hastie, there are probably four different mental processes that we use when we try to work out how foreseeable something was. So, after an accident, you're trying to work out, "Was this foreseeable?" There are four things that you might be doing.
The first one is where you already had an opinion about it before the event. What you're doing there is you're searching your memory to try to remember what you'd previously thought. If you've only done that once, that can be very resistant to hindsight bias—if you've just once had to give an opinion about something. Particularly if you've written that opinion down, then there's a good chance you can remember. You can say, "Of course, this was foreseeable, because I foresaw it. Look, here's where I wrote down my prediction." That's nice and ironclad, but that doesn't work as well if you had a range of opinions over time. That's where the phenomenon of creeping determinism comes in.
Ask yourself, did you think that Donald Trump had a good chance of winning the 2016 election? Now, I admit, for some people, they just don't think about that at all. If you thought about it, then probably you thought about it multiple times and you had a different opinion each time as you got more information. Our memories are terrible at knowing which judgments we held at particular times. Probably there was a time you thought it was impossible, and it was just a joke idea. Then you thought there was a time that it was possible but highly unlikely; then a time when it was possible, but still an outside chance. Then it looked like it was probably going to happen, and then it definitely happened. That's a normal process of learning. We update our brains with more information, but we can't unlearn stuff to work out what we knew at particular times. Our memories aren't designed to help pin down what different opinions we had at different points in time.
The second process is anchoring, where we start with the post-event belief, and we go backwards. We know that the actual likelihood of the accident is 100% because it happened. And then we say, "Okay, what did people know beforehand? Let's start adding in uncertainty." You come down from 100% as you add in uncertainty, but you never know all of the sources of uncertainty, so we never add enough. We end up with an estimate higher than if we had to reconstruct the likelihood from scratch.
I'll jump ahead to the fourth process, which is just where we want to look good. We want to present ourselves in a favorable light. We want to say that we knew it all along, or that we wouldn't have made those same mistakes. But the really interesting one is the third process, which is when we do actually try to reconstruct: we try to use the facts and events at the time to work out what the likelihood is. It's putting our minds in the past. We do that by assembling a new judgment of probability, both from our own long-term memories and from information that we can find in the world. But because we know the outcome, we tend to find more evidence to support rather than contradict the outcome we already know. We tend to evaluate the strength, credibility, and relevance of that evidence based on knowledge of the outcome.
I'm conscious that I'm basically giving a lecture on hindsight bias here. I'm looking at David nodding and smiling on a video camera at the other end. So, throwing over to you, David: has this matched your own thinking about understanding hindsight bias and how it works?
David: I'd have to admit that I didn't have a very good working understanding of hindsight bias until you prepared this. I was familiar with it as a concept through just the normal investigation of incidents. We talked a lot about it. We also talked a lot about counterfactuals. That's something that I've closely associated with hindsight bias, whether or not it really is closely associated with hindsight bias. It's very hard when something's happened for anyone to say, "Oh, gee. That was impossible to predict." I don't think that's comforting for organizations. I don't think it's comforting for family members. It's a big challenge for us in safety: how we look forward using the information that we've got, and then, I suppose, how we assimilate information about the things that have happened at the same time.
Drew: That's a really good point. One thing that makes it particularly hard is that the whole investigation process is trying to explain to people what happened. We're deliberately trying to find all the evidence that points towards what could have caused the accident. That's exactly what we should be doing: finding evidence that shows us what was leading towards this accident so that, hopefully, we can learn how to prevent it next time.
That whole process leans away from finding evidence that points away from what caused the accident because it's not relevant to that main task. It also leans away from interpreting evidence in ways that show it to be inconclusive or irrelevant. We don't like throwing away evidence and saying, "This doesn't say anything about the accident." That's good for finding out what caused the accident, but it's bad when we go to that next step of trying to understand what people could or should have known about those causes in advance. We've already biased all the information we're collecting and how we're interpreting that information, even leaving aside the mental bias, which is the hindsight effect.
The important thing here is that this isn't people being hypocritical. It's not people trying to make a good impression. It's not people face-saving. This happens to people making a genuine effort to be fair and to be really realistic about what people might or might not have known in the past. They still suffer from just as much hindsight bias. A few snippets from the literature that we know really well: we know that it's easy to manipulate things to make people exhibit hindsight bias. We can make anyone exhibit this effect. It's not something that some people are immune from, or some people are particularly bad at.
Unfortunately, it's been heavily tested using real accidents and real judgments about negligence. There are some psychological effects where you can say, "Oh, that's really interesting, but that's in the lab. It's going to disappear when it comes out into the real world." Hindsight bias doesn't. If anything, it gets worse in the real world than it does in the lab. One of the reasons for that is the particular things that make hindsight bias stronger. It's stronger when you're talking about negative events than when you're talking about positive events. The more severe the outcome, the stronger the effect is. Everyone suffers from hindsight bias, but investigating a fatal accident triggers exactly the conditions that make it worst.
Two other things that are frustrating are hindsight bias can affect other cognitive processes, including memory. It can literally make you remember your own historical judgments incorrectly, to the point of getting people to forget who they voted for in elections based on how that politician has turned out in government. Again, that's not face-saving. That's genuinely rewriting people's memory of their own judgments.
Basically, you just have to learn and accept that there are some types of judgment which are always going to be hopelessly contaminated. There are some things that we know a lot better in hindsight; we know history because it's happened. There are some things that we know worse, which are things like historical judgments. If we're careful, we can work out, "What is a statement about what happened?" and, "What is a statement about what people should have known?" and separate out those things.
You can even (if you're really careful) separate what people did know from what they should have known. You've got to be really careful with that one, so that you don't make assumptions about what they did know based on your hindsight judgment of what they should have known.
There's only actually one thing we know that helps with hindsight bias. It's a particular mental trick, which we'll talk about later in the episode. Most attempts to reduce hindsight bias, including carefully explaining to people what hindsight bias is, don't actually reduce hindsight bias.
There's one final thing I want to throw in, which is again very frustrating. I have to be forgiving about it because it is a demonstrated part of the psychological phenomenon. It's not people being stupid. A common feature of hindsight bias is that people deny that they're influenced by it. This happens in laboratory experiments: people, even when you show them that they've exhibited hindsight bias, still insist that knowledge of the outcome didn't influence their judgment. It seems to be just part of the self-defense mechanism of our brains that we have to believe that we're thinking clearly and consistently. You give people evidence that their brains aren't working, and they're forced to reject that information.
There's a rule I really love. It's called Helen Lewis's law—based on a UK journalist called Helen Lewis—that says that the internet comments on any article about feminism justify the need for feminism. I'd like to propose Rae's law, which says that most comments about hindsight bias and accident reporting are illustrations of hindsight bias.
David, I don't know if you noticed the recent LinkedIn thread I started when I was doing the research for this episode. I don't really want to pick on anyone here because as I said, it's not a personal judgment. It's actually the way hindsight bias works as an effect. You're guaranteed when someone says, "It's not hindsight bias. Look, here is the proof." The proof is almost guaranteed to be a very clear illustration of hindsight bias.
David: Yeah, Drew. That's why we felt we'd open this up with a discussion about hindsight bias, because it's something that's really important for us to understand as we approach investigating things that have happened in our organizations. I really liked the way you described that framework, Drew: you're describing what happened, and you sometimes conflate that with describing what should have happened.
Maybe describing what should have happened is just entirely unhelpful. Maybe all we need to do is describe what happened and then work together to figure out what we do going forward. Talking about what people should have done in the past, when we think about counterfactuals, is just an exercise in wasted effort. Is that the way you maybe think about it?
Drew: I'm inclined to agree with that, certainly when we come to some of the proposed solutions that are based on what people think should have been done, or would have prevented this. I think we need to be really clear about what is a good recommendation because we think it is a general principle going forward, versus what we think is a good recommendation because we're frustrated about what people did in the past and wish we could have made them think differently.
David: Let's move on to Dreamworld and get stuck in. The ride we're talking about is the Thunder River Rapids ride. It's a very common theme park ride worldwide. The first one was also called Thunder River. It was made for Astroworld. The idea of this ride is to provide a safe simulation of riding a raft down a set of river rapids. There are lots of variations on a very similar design around the world. When you looked at the Dreamworld ride, you didn't really know how old it was because it was actually part of the Gold Rush Town. The whole design and features of the ride were made to look like an 1800s type of area, but it was actually built in 1986.
By 2016, it was genuinely an old-fashioned ride. It was 30 years old, with quite a lot of wear and tear. The setup of the ride is that the main body of water goes through a narrow trough with a gentle downhill gradient of a few percent. You can see it if you're walking around Dreamworld; the actual ride itself goes underneath walkways and through and around certain areas on this gentle downhill grade. And then it flows into a pool at the bottom, the water gets pumped back up to the top, and it just circles through the ride.
The passenger sits inside a circular fiberglass seating area (the raft), so they're in a circle. They put their bags in a little basket in the middle. They sit across from each other. That fiberglass seating area sits on top of a large inflated rubber ring. The rubber rings float down through the water. It bumps into underwater obstacles. It hits little rapids based on the different speeds of the water and people make their way through the ride.
Where the rafts get to the very bottom, they need to get back up to the top, where the passengers get unloaded and reloaded. These rides have a couple of different designs for doing this. Dreamworld had a wooden conveyor system, which was like a big travelator. The rafts come in at the bottom, a constantly running travelator picks the rafts up, and they move back up to the top of the ride. It's not that different to a supermarket checkout belt or any other conveyor system.
Basically, if you're a passenger, it works like this. You start in the loading area. My job, Drew, wasn't actually as the ride operator, because they didn't let the university students actually operate the ride. You had to be a little bit more experienced to press the buttons. My job was just to unload and load the passengers. Loading and unloading happened in the same place, and you had to line the raft up, because in these seating configurations, where the seats were two, two, and two, there were little gaps between the pairs of seats for people to step in and out.
The number one job was to make sure that, as the raft slowly spun in, you either kicked it or pushed it in a way that made one of the exit points line up with where people could get off. Otherwise it was very hard to get people off, because you'd be making them climb over seat backs to the docking area. It was always the number one priority to make sure that the rafts lined up. Then you let the group of passengers off, and they walked down one way.
Then you got the new people on, then the ride operator would push a button, and the raft would move into a holding area where they would wait for the separation between the rafts in front, then release the rafts at a regular interval on the way through, and just keep the ride moving in that way.
The ride operators themselves had CCTV of every single part of the rapids ride. There weren't people physically watching the whole ride, but the operator could see every part of the ride through the CCTV system.
Drew: Thanks for that, David. That gives a pretty clear description of what it was like to be on the ride or operating the ride. Based on a couple of things in the report, which we might touch on a bit later, did you have a sense of how demanding the job was that you were doing, or that the operator was doing?
David: The operator carried the responsibility to operate the ride. When I say operator, the person pushing the buttons at the console. For me, it was very monotonous work but one that was always under some time pressure because you had to load and unload the raft, and then move that raft on before the next raft came in. The task variety wasn't demanding, but the production line process of the ride was always quite demanding. Particularly if you had a few people that needed assistance in and out of the raft, or things weren't lined up properly. You could end up in a situation where you had rafts starting to come down behind. It was always very clear to us as operators that the ride had to keep circulating.
Drew: Were you taught to think of yourselves in a safety-critical role, where the point of doing things was all you need to do it this way because it's safer that way? Or was it more about keeping the customers happy and keeping the ride flowing?
David: Given that we're talking about hindsight bias, and this was 20 years ago, it's going to be very hard for me to know what I thought at the time. When I was reflecting after this incident, I don't remember a lot of safety conversations at all. I definitely don't remember a lot of conversations about emergency stops. The brief was the operator will operate the ride. That's the operator's job, pushing buttons. My job was not something that I saw as a safety-critical role. My job was just getting passengers on and off the raft. I didn't have definite tasks and decisions that were safety-critical in a sense of watching out for water levels. I actually couldn't control the ride. I couldn't press any buttons. All I had to do was basically get people in and out of those rafts.
Drew: I always find it interesting when there are these little details of jobs that make them challenging and skillful that aren't mentioned anywhere in the procedures. One thing I noticed is that nowhere in the report did it mention that trick of spinning the raft around so the little gaps between the seats lined up neatly, to help people get on and off quickly without having to clamber.
David: The main thing for me is that you actually had to make a judgment. You had to look at the rotation of the raft and make an estimate of where that rotation was going to end up by the time the raft came to the unloading area. Then I'd take four or five steps down before that happened and give it a kick, either to speed up the rotation or to slow it down, to get this nice little spin so that when the raft arrived and stopped, one of its three exit points was neatly lined up with where people could just step straight off. That would happen every minute for eight hours: walk up and make that assessment about how the raft was rotating.
Drew: It's that little bit of judgment and skill that makes the job fun to execute and gives you that little sense of satisfaction each time you get it right.
David: Absolutely right. Yeah.
Drew: Before we tell you about what went wrong with the ride, we'll try, at least for people who don't know what happened at Dreamworld, to give you a little bit of this hindsight experience. Now, obviously, this isn't going to work for people who know what happened. For those of you who know that something happened but don't know the details, maybe this can give you a chance to assess how foreseeable this was by looking at what had gone wrong before.
So, river rapids rides, as David said, have been built all around the world, and they've been around for 30 years. We've got a lot of apparent data on how safe or unsafe they are. There have been hardly any accidents. That makes it a little bit difficult when it comes to knowing what the real safety risks are, because there have certainly been incidents.
But where there are incidents, we don't have a lot of historical detail, unless there were fatalities and a lawsuit or a big investigation. I suspect that there were many similar incidents that we don't know about, just because they weren't considered serious at the time. The passengers were okay. They were given a free t-shirt and free ticket to come back to the ride. That was the end of things. Not saying anything about Dreamworld, I'm talking about hypothetical theme parks all around the world. They might investigate them internally, but there's no reason to make it public and to create bad publicity if nothing bad has happened.
To create these descriptions of the incidents, I've had to hunt down news reports about the ones that did make it into the press, or where the ride manufacturers issued official safety bulletins afterwards. I can't actually promise the accuracy of this information. It may, in fact, have been distorted by how the incidents were reported, but this is the best I can do to reconstruct the history of the serious ride incidents. I suspect that anyone who was operating the ride might have been in a similar boat. They may have had a bit of an inside track of people sharing information, but a lot of what they were relying on is what other people chose to share and chose to put out in bulletins.
The earliest one I can find is in 1991, at Kennywood Park in Pittsburgh. A raft flipped over just after it'd left the starting point. All I've got on that one is the news story of six teenagers who had a scary experience; it made it into the local paper, and that's it. I actually suspect, reading between the lines, that this sort of thing was fairly common in the first couple of years after the first Thunder River. They tried to make them all a bit too exciting, and by making them a bit too exciting, they were learning lessons about what makes a ride stable. Probably flip-overs, bumps, and jams were fairly common in those first couple of years. Reading between the lines, there weren't fatalities. So not a lot of detail, but probably a number of wet passengers who were thrilled to be the first people to flip over on the ride.
1999 is when things started getting serious. Within five years, we're going to have five incidents. Three of them within the space of a couple of months. The first one was a fatality that occurred at Six Flags Over Texas. Inside those rubber rings, they're not just a single continuous tube. They've got a bunch of different bladders and several of the bladders on one side of that tube deflated. The raft is running low on one side and high on the other side. It got stuck against a pipe that was running along the edge of the trough. So, the raft is stuck there. It's filling up most of the channel. The water is pouring down behind it, buffeting it, and building up. Eventually, the raft flipped over, spilling people into the water. The fatality was someone who drowned after they were tipped into the water and stuck under the raft.
Later that same year, there was an incident at Riverside Park in New England. This one is now also a Six Flags park, operated by the same people but run quite independently at the time. The raft became unbalanced. In this case, it wasn't deflation; it was because there were several heavy people all sitting on one side of it. One of the people actually offered to swap places, but the attendant was really keen that no one took off their seat belt and everyone stayed properly seated. The raft went down the ride very lopsided.
It hit a rapids section where they had a ramp going up to a weir. The idea is that the raft goes up and then gives a heavy jolt, but it got stuck on the way up. Again, water backed up behind it and pushed it over the weir, and it flipped rather than just bumping.
The third incident was at Visionland Park in Alabama. This park has changed names several times since, but I'll stick with the name at the time. In that one, two rafts were released with only a very short time between them. David, you said that's something you were always fairly careful about, making sure there's enough time between each raft. In this case, they were released just a few seconds apart. One caught up to the other. They bumped, bumped, bumped, bumped, and then got to a section of the ride where that bumping was enough to tip one over.
There's another one in 2000 shortly afterwards. This one I have zero detail about, except that I know it happened: a raft flipped over on the Renegade Rapids ride in Maryland. Visionland Park had another one in 2001. I'm not sure how much we can count that one. It happened at a time when the employees were playing around on the ride. They were deliberately trying to see if they could flip one of the rafts, and they succeeded. Not sure how much you can call that one an accident.
The final couple I could find were at Efteling Park in the Netherlands, which has a version of this called the Piraña river ride. They've tried to be quite inventive with that one; I think at one stage they even tried to put a whirlpool into it. There's one section where the ride goes between waterfalls on either side. There were two rafts: one of them got stuck in this waterfall section, and another one bumped into it. The second incident happened in that same area of the ride with the two waterfalls, where a raft got stuck, but I don't have details other than that it got stuck and flipped over.
That's basically the history of what had gone wrong, and it gives an indication of where people might or might not have been able to foresee the danger of these rides. I know what happened, and that risks being misleading. I'm not going to try to create a story out of these about what the pattern is, but I invite listeners to hear those stories and think, "Okay, so what is the pattern here?" See if you can work it out for yourself. What's the pattern of danger? Where do you worry? Where do you not worry about the rides?
David: With this type of design of ride, there have clearly been some incidents around the world, and Dreamworld wasn't without its own. In the interest of time, I'm going to move quite quickly. I do want to talk a little bit about the history of this ride and some of the incidents, and then we'll get stuck into what actually happened in 2016.
The coroner, the investigators, and the expert witnesses all had access to the full Dreamworld internal reports for all of these incidents. We don't have that; we've only got what they wrote about in the final coroner's report. For those that are reading it, this is discussed in paragraphs 218 to 234.
It started in 2001. Basically, the ride operators were starting up the ride in the morning: starting up all the pumps and putting all of the rafts through a full cycle, which they'd normally do. What had happened was the second operator had arrived; like I said, there's an operator on the buttons and a second operator doing loading and unloading.
The first operator, who's meant to be pushing the buttons, got distracted while talking to some of the guests, and five rafts banked up in the unloading area. They all got pushed together because there was nowhere for them to go. Like I said earlier, you had to keep the ride moving. There was actually nowhere for the rafts to safely bank up: they couldn't bank up at the bottom of the conveyor, and there was no room at the top of the conveyor near unloading for them to go. The rafts got tipped all over the place, and one of them ended up completely upside down.
In 2004, a passenger being unloaded was the last person hopping off when the next raft came up behind. The passenger lost their balance and fell into the water. Raft bumping happened quite a lot. Drew, you know, sometimes you didn't quite get everyone out before a bump. What you'd do is watch the raft coming; you'd know the passengers weren't going to get out before there was an impact, so you'd make sure people remained seated until that other raft bumped in from behind, and then you'd get them out. This person was maybe trying to get out before the raft bumped, didn't quite make it in time, and fell into the water.
In 2005, there was a leak in one of the rafts. It was riding really low in the water and had trouble getting under the conveyor, so two other rafts came up close behind it. No problems; the operators intervened there.
In 2008, there was a problem with one of the sensors at dispatch, so three rafts had banked up in the dispatch area. To clear this backlog, they let the bunch of three empty rafts go through close together, and then another four rafts, one with passengers. Basically, there was only about 20 or 30 meters between the unloading area and the dispatch area, and you never really wanted more than one raft in that spot. Like I said, there was nowhere for these rafts to safely bunch up anywhere on the ride.
There were a number of other incidents, but I just wanted to make the point that between 2001 and 2014 or so, there were a number of operational incidents associated with this ride to do with spacing and separation of rafts.
Drew: Probably just in the interest of time, let's summarize here: in 30 years, there have been eight rafts that we know of around the world that have flipped over with passengers in them. Dreamworld had at least five incidents over that time, but the worst of those involved a passenger ending up in the water. Only one of them involved the dangerous scenario that we're worried about here, where a raft gets stuck in the rapid section of the ride at risk of flipping over. If you don't know what actually happened at Dreamworld, you've got a small advantage when it comes to avoiding making assumptions here.
What I want everyone to realize is that the data we've just given you is already heavily cherry-picked. If you can see the problem with that, it might help you extrapolate into areas where none of us can see how much our own minds are messing with us. We told you about the river rapids rides. Why did we pick those ones? Why didn't we give you the full history of amusement park rides? And what isn't dangerous about an amusement park ride?
Fairly obviously, because we know the accident happened on a Thunder River Rapids type ride, we've zoomed in on those. We've told you about incidents where rafts flipped over. Why did we pick those ones? Why not all the ones where people bumped their heads or fell off? Because the Dreamworld incident involved a raft flipping over, we know those incidents are relevant.
There are a few exceptions to this: a couple of incidents we mentioned that don't involve a raft flipping over. David told you about incidents where rafts bumped into each other near the conveyor. Now, why do we know about those? Why would those ones be in the coroner's report? Why did we bother to tell you about them? The reason is we know the accident involved rafts bumping into each other. When we go back and look, those are the things that we dredge out, the things that we find, the things that we pay attention to. Not only that, but we pay attention to the parts of those reports that matter.
One thing that I think is really interesting is this 2014 incident that does talk about the possibility of a raft tipping over. It does talk about the conveyor belt and rafts bumping into each other, but it talks about two separate rafts: the one that might flip over is the one floating around down the bottom, and the one that bumps is the one up the top. It doesn't link the bumping to the flipping over. The thread that connects all of these things together is our knowledge of the outcome, which makes the story seem like it's headed towards an inevitable conclusion. Let's put in the details that now take us right up to the outcome.
As David described, there's this conveyor system that picks up the rafts from the bottom and takes them up to the unloading area. After that conveyor, and during the whole time of unloading, there are underwater rails that prevent the rafts from tipping over. When people are standing on one side as they hop off, the raft can't tip over from all that weight on one side, because of this underwater support. You can't see it when you're on the ride, but you know it's there because the raft isn't tipping too much. There's a 40-centimeter gap between the top of the conveyor and where those rails start. That's about to become really significant.
On the day of the accident, just after 2:00 PM, one of the two water pumps that lift the water up through the ride stopped. The ride needs both pumps working all the time. With one stopped, the water levels started dropping. As a result of that, one of the rafts—technically, it's called raft number six—got stranded in the unloading area. The water fell away beneath it, and it was just sitting there on the support beams.
Fifty-three seconds after that, another raft, raft number five, comes up the conveyor belt and bumps into the back of raft number six. Now, as far as I can tell, this is something that has happened many times with various levels of severity, depending on how many rafts came up and got stuck, how severe the bump was, or whether the bump caused a passenger to fall over. It's not a desirable thing to happen, and they knew it wasn't, but something pretty freaky happened in this particular case. The back end of raft number six and the front end of raft number five didn't just bump into each other. They pushed each other upwards, so that the back end of raft number five went down and slipped into that small gap between the top of the conveyor and the start of the supporting rails.
Now the conveyor is pulling that part of the raft underneath. The wooden slats on the conveyor are acting like teeth on a cog, slamming into the raft and pulling it down further until the whole raft is standing up vertically. The victims were killed either because they were shaken out of their seats into the conveyor mechanism, or because they were hit directly by the conveyor slats as the conveyor kept running.
I really recommend to people, even if they've read the report, to look at the pictures. If you look at what happened to the raft, it looks impossibly freakish for this to have happened. The raft doesn't even fit through the gap that it ended up in. The police investigators spent a lot of time just trying to make this happen, to prove the circumstances of the accident, prove the mechanism. They couldn't make it happen. They tried it with loaded and unloaded rafts, with the water there and with the water not there. They eventually tried just forcing the raft into position with the conveyor. They still couldn't make it happen like it did in the accident.
That's the basic description of the facts. How foreseeable was that? David, do you want to take over for a little bit?
David: Yeah, that's the question for today. How foreseeable is that, when, as you describe it, this was a mechanical system where the failure mode couldn't even be replicated after knowing how it happened?
A number of expert witnesses and the coroner went through the evidence, and there's been a lot of commentary in the media and in professional circles about this. A lot of people have made it very clear that in their opinion, this was predictable, inevitable, and able to be managed if certain things had been done prior to the incident, including safety processes to do with risk assessment, and engineering solutions or design changes relevant to the event itself.
Drew: To be fair to these opinions, I should say that I haven't found anyone who's claiming that the precise mechanism or circumstances or severity of the accident were predictable. What they're saying is that there was enough evidence from the previous incidents and from the design of the ride that any reasonable person should have predicted it enough to have put in place the right fixes that would have prevented it. Not that someone would have predicted precisely what happened.
I've cut these down a bit. I'm going to refer to some specific paragraphs in the report. You can have a look at those paragraphs for yourself. I'll just pull out some of the keywords to give you a flavor of the type of hindsight statements that get made.
Paragraph 988 of the report. "It's clear from the expert evidence that at the time of the accident, the design and construction of the system—its conveyor and unload area—posed a significant risk to health and safety. Each of these obvious hazards poses a risk to the safety of patrons and would have been easily identifiable to a competent person."
At paragraph 989, "The experts reached their opinions independently, and were all in basic agreement as to the combination of causes. They were highly qualified to do so based on the evidence presented and were not influenced by hindsight bias in reaching their conclusions. There was ample evidence of the potential for disaster of this nature occurring."
Paragraph 994, "Previous incidents—particularly in 2001 to 2014—should have alerted Dreamworld to the hazards present on the ride."
At 996, "While the exact scenario may not have been able to be replicated during testing, this is of limited relevance. It doesn't render the identification of the risk unpredictable without the benefit of hindsight. The hazards and risks which cause the rafts to collide at different points on the ride, in particular at the end of the conveyor, were present, known, and should have been identified.”
2014, "While the ride had operated fatality-free for around 30 years, at the time of the incident, it's clear the design and construction of the conveyor unload area posed a significant unidentified risk. Properly documented history with appropriate risk assessments would have identified and eliminated the serious risks."
1037, “In response to this finding, some of the parties raised the issue of hindsight bias. I have previously rejected this argument. It ignores the Australian standard. It ignores the history of four previous incidents extremely similar in nature. It ignores the well-known danger presented by the numerous and regular pump failures. This danger was well known to the operators with prescribed responses set out in the operator's manual."
Finally, 1039, "In terms of hindsight bias as to the hazards present in the ride, it's clear that while the maintenance and operational staff, as well as auditors and inspectors who went over the site, may not have identified such hazards, this was not because the hazards weren't obvious."
David, I'm interested in your opinion here, but I think this is actually almost a contradiction. They're saying that the hazards were obvious, and the evidence that the hazards were obvious is the claim that the hazards were known about. But then they say that Dreamworld should have engaged a competent person to identify them, which implies they didn't know about the hazards.
David: Those are very clear and direct statements about what Dreamworld should have known in relation to this. We'll get on to it, but I'm just not convinced that even with all of the information they say Dreamworld should have had, anyone would have assessed this as a credible risk that needed something done about it.
They talked about that 40-centimeter gap, but the bladders and the fiberglass seating arrangement on those rafts are double the width of that gap, so 80 centimeters to a meter. A simple engineering assessment would have concluded that there's no way a raft is going to go through that 40-centimeter gap. And that gap at the conveyor isn't there to keep rafts apart; it's there for separation between the fixed rails and the moving conveyor system. It would have been a designed tolerance.
I can't see how it would have been predictable, but maybe I've jumped in too far. Reading how direct those statements by the expert witnesses and the coroner are, I felt the need to say something equally direct in the opposite direction.
Drew: I think in this particular case, we can almost see the way that hindsight bias is causing the selectivity. There are two separate hazards here that I do agree were known about. This isn't a case of unknowability, because people did know about them and were doing something about them. The first one is the general idea that rafts tip over on these rides; it's a well-known hazard. But it's entirely in the context of the rafts going downhill through the whitewater experience.
That definitely includes the idea that rafts bumping into each other is dangerous. It includes the idea that rafts getting stranded is dangerous. It includes the idea that if you've got obstacles on the course, you need to be careful that a raft can't get stuck on those obstacles. And it includes the idea that you've got to keep the rafts well maintained and balanced, to avoid unexpected interactions with the obstacles.
That's well known, and Dreamworld was doing a lot to manage it. That's why the operator is sitting there with CCTV cameras that can see every part of the course, and why they've got instructions to hit the emergency stop button if a raft gets stuck. That hazard is not just knowable; it is known and being actively managed.
The second hazard that is known about and actively managed is the idea that rafts bump into each other in the unloading and loading area. They know that's dangerous because someone has fallen over when it's happened. It's not just known; Dreamworld obviously took it very seriously. They implemented engineering controls. They put in an extra gate to hold back rafts coming into the unloading area, to protect the raft that's being unloaded. They did a hazard assessment to work out the right number of rafts to have going around the ride.
This is a pretty big deal. They originally had 12 rafts going around. They reduced that number and put limits on the number of rafts depending on how many operators were working that day, just for the purpose of making sure that the rafts didn't queue up. That was identified and the hazard managed. The bit that's only obvious in hindsight is putting those two hazards together and saying, "These might be talking about the same thing."
All of the evidence says the biggest hazard on the water side is a raft tipping over, and the biggest hazard on the other side is rafts bumping. There's nothing putting them together and saying, "There's a risk of a raft tipping over from bumping in the loading and unloading area." The only possible connection is to cherry-pick that 2014 incident, which talks about both but is talking about different rafts at different times; or to pay lots of attention to the accident 15 years earlier, in 2001, where the operator let five rafts pile up in the unloading area before the ride was even open. If you have five rafts pile up, then yes, one of them tips over. That's not a scenario that anyone ever imagined happening with guests, and it isn't the scenario that caused the accident.
Let's jump back then to hindsight bias. After an event, people know what the outcome is, and they think it was more predictable than it actually was. They're unaware that that's happening. That's why you get all these statements in the report that know about hindsight bias but still show hindsight bias. They're saying, "People have raised hindsight bias. It's not hindsight bias. We can legitimately say what people should have known. It's not hindsight bias, because this really was predictable." That's what hindsight bias is. It gives you that false sense of strong confidence that you can say it was predictable.
Basically, the rule is: if someone says they're not being influenced by hindsight bias, that should be the clue that they're making the type of judgment which gets influenced by hindsight bias. You need to look not at their claim, or try to judge for yourself how foreseeable it was, but at whether this is the type of judgment that hindsight bias applies to. If it's a judgment about how predictable something was, or a judgment about what someone should have noticed or should have seen, or even a judgment about the relevance of evidence, that's a hindsight judgment. And because it's a hindsight judgment, it's automatically affected by hindsight bias.
David: These things that people supposedly should have known, these predictable hazards: the spacing of the slats on the conveyor, the gap between the conveyor and the support rails, the effects of pump failures on water levels and what that would mean for the ride, and then the lack of an emergency stop for the conveyor on the control panel. How foreseeable were these things, in terms of the situations that could have led to this type of event?
Drew: There was a part of the preparation where I talked through each of these in turn, looking at how they could have been known. For the sake of time, let's jump straight to the one about the water level. The coroner's report takes for granted that maintaining the water level was critical for safety. It's basically saying, "The big thing here was everyone knew that if the water levels started going down, that's really dangerous. It should be automatically detected and trigger an emergency stop, or at the very least, it should be seen as a really dangerous thing." I think that is cherry-picking, or mixing together two hazards.
After the 1999 incidents, water level got flagged as a safety issue. That's actually the main evidence in the report that Dreamworld knew it was an issue, because it came from instructions from a ride manufacturer, O.D. Hopkins. It went from those instructions into the Dreamworld guidance for operators.
We actually got a copy of this bulletin. It's got nine separate items, and all of them are to do with the idea of a raft getting stranded while it's going down the rapids. That's why it talks about water level: if the water level is too high or too low, that increases the chance of a raft getting stranded, either grounded on the bottom or stuck on an obstacle at the side. The recommendation is, "Water level is important because of these things. Operators should check water levels several times during the day."
When the report says, "Dreamworld should have known that water levels were important," that's the context in which Dreamworld should have known it. They should have known that it's important for operators to check that the water levels aren't chronically too high or too low. The logic was: the danger is a raft getting stuck, so if a raft gets stuck, you have to stop; water levels matter because water levels could cause rafts to get stuck. There was absolutely no warning that the moment one of the water pumps trips, you're in a dangerous situation and have to stop immediately. I think the trick of making the argument the other way, arguing that Dreamworld shouldn't have done something about it, is what helps you avoid hindsight bias.
Actually advocate that they should have done nothing; it's the only way to really get around the bias. And I think there is a strong argument you can make that Dreamworld should have done nothing more. They recognized that water level was a hazard of that type. They recognized that where water level is really critical, you've got to put in sensors, and in fact, they did that on other rides. We don't have time to go into it.
Dreamworld has another ride called the Log Ride, where the water level is part of the braking system. It is really important that the water level doesn't go down on the Log Ride, so in response, they put in sensors.
At the same time, on the River Rapids ride, they saw rafts getting stuck at the bottom of the conveyor as an issue, and they put in sensors there. That seems like a pretty strong case that Dreamworld was capable of recognizing water level as a hazard and capable of responding to hazards. They just had no way of recognizing that water level was a really time-sensitive hazard for the Thunder River Rapids ride.
David: I suppose what we're struggling to agree with are the comments in the coroner's report, which all point towards this being a very foreseeable and obvious hazard that Dreamworld should have done something about. We've said, "Well, actually, no. There's a big context around this." I'm personally of the view that I'm not sure Dreamworld could have foreseen this. Even if they had identified this particular accident scenario, I'm just not sure it would have been seen as more important than all of the other things in the park that they were spending their resources on. Do you want to talk a little bit about some of the suggestions in regards to doing "proper" risk assessments?
Drew: I think that's important, because the reason hindsight judgments matter is that they can lead us down the path of very unhelpful recommendations. We use hindsight judgment to say, "People should have known." Because we can't understand why they didn't know, we think we can fix it. The typical way we try to fix it is by making people do risk assessments. To cut a long story short, there are two types of recommendations in the report. One set of recommendations I vehemently agree with, because they don't come from this hindsight bias judgment. The other set are going to be really expensive, really useless, and a distraction from the good recommendations. The really good recommendations are general principles of safety that we would like to see going forward.
I don't know about you, David, but I always assumed that rides were under some engineering control, so that really shocked me. I've always assumed that with a ride like a roller coaster, you carefully model the physics of everything and make it physically impossible for the most dangerous things to happen; then, for the things you can't make physically impossible, you put in place interlocks to stop the human operators from doing anything dangerous. If something does go wrong, you've got automatic detection that stops any dangerous thing and shuts the ride down safely. The operators are basically there to push start/stop and to stop little kids somehow managing to circumvent all of that really clever safety system. That's pretty standard for public transport. I just assumed that amusement park rides are like trains, but pretending to be scarier.
David: In 2017, the year after this incident, I was actually at a conference with a colleague of mine. It was in Munich, a hazards conference, and there was a regulator panel discussing this. He actually put his hand up and said, "I really think we need to move towards some safety case regime for amusement park rides." It was laughed off as a complete waste of time to apply all these engineering design principles to merry-go-rounds, and Ferris wheels, and things like that. It's interesting now, a couple of years later, that the report has come out. I think it's going to send that industry down that very path.
Drew: I'm not necessarily certain that a safety case regime is necessary to get what I'm asking for here, but I'm not going to say no to it either. I think it's a reasonably logical and sensible extrapolation. Certainly, there's this idea that every ride should have an engineer who designed it, that the design principles are carried through the lifecycle of the system, and that there's a responsible person who holds the license and approves changes in accordance with the original design intent.
I think that's actually quite cost-effective and really sensible, but there's another side that comes from the hindsight bias. This is the idea that says, "We didn't predict it. We should have predicted it. If we make people do risk assessments, then they're going to predict it." That ignores the circular reasoning: the reason they think risk assessments weren't done in the first place is that Dreamworld never found this hazard. It ignores all of the evidence suggesting that risk assessment was done; in fact, it just seems to have been really badly documented.
There clearly was a lot of risk assessment going on through the life of this thing. Creating more risk assessments and more documents isn't going to increase the chance of finding out the unfindable. Have a system that puts in place good design principles and assures them, but don't put in place a system expecting it to miraculously discover mistakes that it's not capable of discovering.
David: Yes, there's a huge amount of faith in those processes. In your DisasterCast episode about Longford, you talked about the people who suggested after the event that things like HAZOP processes would have identified the specific operational scenario and failures that were evident at Longford. Whether it's HAZOP or normal risk assessment, it's a bit of a cop-out recommendation just to tell people to do risk assessments (like you said) for things that they should have known, in the hope that if they do the risk assessments this time, then they will know them and do something about them. I think that's just a cop-out for a particular specific incident.
Drew: I don't think it's at all evidence-based. The coroner would like the Office of Industrial Relations inspectors to regularly audit amusement parks. There is no evidence that the OIR is capable of building up a competent team for this purpose that's going to be any better than the existing commercial people, who were all recognized as competent and were regularly auditing and inspecting the amusement park. There is just this hope that if we tell people to do it better and make it more official, the task is somehow going to be done better. They'd like annual risk assessments done, despite the fact that there's no evidence that risk assessment would have detected these problems.
This is interesting. They'd like those assessments to include all possible control system functions and variations, as well as a detailed examination of the ride during all modes of operation and possible emergency conditions. Think about how much it takes to do a risk assessment like that. We're talking about functional analysis, fault trees, and failure modes and effects analysis done every year. That's multiple months of engineering work for each assessment.
More than that, these rides are supposed to be foolproof and failsafe, and they want us to deliberately build in a way of introducing emergency conditions just so that we can do annual tests. If you think about good engineering principles, that's actually quite dangerous: deliberately simulating emergencies just so you can check that the ride can handle emergencies. It's like disabling the brakes on a roller coaster to check that it still comes to a stop. You'd have to build in a function for deliberately disabling the brakes on the roller coaster.
David: I don't know, Drew, if you were having too much fun or just had too much time on your hands when you were preparing this episode. We won't go through it, but you put some effort into trying to calculate the cost of the safety department that would be required to administer a system at this scale: all of the assessments, the documentation, the competency management system, the overarching safety management system, and responding to all the regulator requirements. It'd be millions of dollars to put together a safety department with all of the roles and capabilities at the scale required for the park to do this. And the whole exercise would be about the paperwork.
Drew: I don't want anyone to think that I'm forgiving Dreamworld for what happened here. I am never going on one of these rides again. But we are talking literally about millions of dollars' worth of safety management, when what we really wanted was for someone to have realized that they needed a $50,000 PLC control system in place. We wanted the PLC control system. We're not sure that the extra money on safety management would have alerted people to the need for it.
That's the fundamental problem with hindsight bias. We would love someone at Dreamworld to have been lying awake worried that something was going to go wrong at the top of that conveyor. We'd have loved them to be able to go into a management meeting and say, "This hazard, above all other hazards, is the one that needs prioritizing." We would love everyone else to have agreed with them and dismissed all the evidence that said there were other more important things to worry about.
Most importantly, we'd love to believe that if that was us, we would have got those things right. We want to believe that there's a magical process that will make those things okay, so that we can hop on the ride, hop in the car, hop on the train, and know that everything's going to be okay, because life is just really scary otherwise.
David: I think that's true. These amusement park rides are very dynamic environments, using physics and nature: moving water interacting with mechanical components. They're designed to be very dynamic experiences. But the big takeaway for me is about the system failing safe, even during normal operations.
A PLC with a remote stop for the conveyor might have been able to stop the accident sequence as it played out, if the operator had been able to take control. Maybe that's the one big takeaway from the recommendations here: whether it's a roller coaster or a merry-go-round, how does it all come to a stop safely when it needs to? From my own experience of the Thunder River Rapids ride, there wasn't a way to bring the whole ride down to a safe state quickly and smoothly.
Drew: I think that's a fair observation. There was a considerable effort put into making sure that it was safe as it was operating, but very little consideration about how to bring it to a safe stop and recover from there.
I think we've moved on to practical takeaways already. The other takeaway that I would like to throw in is about what we can learn from this for our own investigations. This is something one of my field researchers had some really sage advice about. He said that when you're studying work, it's actually okay to have opinions about how things are working and how they should work. We can't really remove that part, exerting our normative judgment, but the trick is to hold those opinions lightly: to form an opinion, but then go looking for stuff in the real world that contradicts it, rather than stuff that helps build the case for the idea we're going for. I think that's really important.
It's okay to hold an opinion about what happened. It's even okay to hold an opinion about what should have happened but recognize that those are the least useful opinions. Be ready to give them up in favor of looking for what's actually going to be a good way of managing things further.
I can really understand that when you ask for this evidence from Dreamworld and they can't produce proper risk assessments, can't even produce proper records of the decisions they made or how they responded when they learned about previous incidents, you form the opinion that they haven't done risk assessment properly and that's what needs to be fixed.
But the evidence doesn't support holding that opinion strongly. If you actually go looking for evidence that they learned from incidents, it's there. If you look for evidence that they tried to identify what could go wrong, it's there. If you look for evidence that they prioritized what they thought was most important and most dangerous, you can find that too, even in the material the coroner has put forward.
If what you're doing is just demanding that there's nice, neat paperwork documenting all of those things, then what you're saying is that the paperwork is more important than getting the decisions right. Or you're claiming that the paperwork would have made the decisions go right. That's just not true. The bottom line: be true to what you believe only to the extent that you've got strong evidence for those beliefs, not to the extent that you can build a nice case.
David: I had a conversation with a colleague in Practical One, which is also an organization in Queensland, a very large organization that operates in a hot environment. They had a heat illness incident, and the person who investigated it declared to me that they had no risk controls for managing heat illness. I said, "Well, hang on a minute. You might have no formal documented policy about heat illness, but for the last hundred years your workers have been working in the Queensland heat day in and day out. I guarantee you there's a whole lot of active risk controls for heat illness in place within your organization. You're just disappointed because you were looking for a document and couldn't find one."
Drew: Yeah, that's a great parallel. We like to pose questions to our listeners, and I suspect this one is going to start another argument on LinkedIn, where people point out to me all of the evidence in the report that they weren't suffering from hindsight bias, including the bits where we know they weren't suffering because they said they weren't suffering. I think that's inevitable. I don't want those people to get the impression that I'm defending Dreamworld, or that I think they're the good guys here.
I actually agree vehemently with the recommendations in the report, particularly to the extent that they basically say that thrill rides should be treated like any other safety-critical engineering design project. Now, we can agree or disagree about exactly the right methods, what role risk assessment should play in that process, and what role safety cases should play, but there's no good excuse for separating them out and saying, "Okay, a train has to have this level of safety, but a roller coaster doesn't, because a roller coaster is meant to look dangerous."
David: Yeah, I think that's a good takeaway, Drew. I look forward to the discussion about hindsight bias. It's really important for us because we spend so much time in safety looking at events in the past and talking about what should have happened. It's one of the more important things for us to understand as safety practitioners, so that we can work out how to navigate that space.
I like your earlier advice that the opinions of the investigator, or the initial opinions of the safety practitioner, are the least useful opinions. I think you also said to me once, Drew, that if you get to the end of an incident investigation and can't list out a long list of really interesting things you learned, then you haven't really been investigating with an open mind.
Drew, that's it for this week. We hope our listeners found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Another departure from our standard format and maybe a personal record for us when it comes to the links. Send any comments, questions, or ideas directly to us at firstname.lastname@example.org.