The Safety of Work

Ep.53 Do parachutes prevent injuries and deaths?

Episode Summary

On today’s episode of Safety of Work, we discuss whether parachutes prevent injuries and deaths.

Episode Notes

Given that the last two episodes were about theories, we wanted to get back to something more concrete. Hence, the topic of parachutes. Parachutes are often used in military operations but are rarely required in civilian aviation. Let’s look at this discrepancy and discuss whether parachutes actually prevent injury or death.

Join us for this interesting and somewhat surprising discussion.

 


Quotes:

“...They hide a few key considerations. One of the big ones is that it’s not really a choice, at the point you have to jump out of a plane, of whether to wear a parachute or not; it’s things like, do we make laws that all planes should carry parachutes just in case?”

“So it’s not just that more research is needed, it’s that more research is almost guaranteed to reverse the result of this bad study.”

“Very often, when it comes to the practicality of how we investigate this within an organization, we’ve decided that an experiment is not the best use of our time and resources.”

 

Resources:

Parachute Use to Prevent Death and Major Trauma Related to Gravitational Challenge

Does Usage of a Parachute in Contrast to Free Fall Prevent Major Trauma?

Parachute Use to Prevent Death and Major Trauma When Jumping From Aircraft

Feedback@safetyofwork.com

Episode Transcription

David: You're listening to the Safety of Work Podcast, Episode 53. Today we're asking the question, do parachutes prevent injuries and deaths? Let's get started. 

Hi, everybody. My name is David Provan. I'm here with Drew Rae and we're from the Safety Science Innovation Lab at Griffith University. Thanks again to all our listeners for continuing to follow along as we sailed through the first year of the Safety of Work Podcast. We've got something a little bit interesting this week that Drew has prepared. Drew, how about we just go straight in? What is today’s question?

Drew: David, on the podcast, we try to sometimes explore fairly broad theories, but the problem with broad theories is they've got vague answers. We'd like to mix it up a bit with some more specific interventions or practices where we can be definitive about the evidence. 

Given that the last few episodes have been on things like climate and leadership, I wanted to get back to something that was a bit more concrete. We're going to talk about parachutes.

I personally love parachutes. I've got a picture up on my wall of all these ancient designs of parachutes. They're interesting for safety because you can see them as life-saving personal protective equipment, but most often they actually get used as life-risking recreational equipment. It's also really inconsistent who does and doesn't wear a parachute. In military aircraft, they almost always have parachutes, yet there are very few civil aviation situations where parachutes are required.

I thought it was a good one: just look at that discrepancy and ask, should parachutes be used as a safety device? Is there any evidence that parachutes work for preventing injury or death? We're going to look at a few papers that try to answer that question.

David: Drew, have you ever personally used a parachute to save your life?

Drew: I have never personally used a parachute either to save or to risk my life. David, how about you?

David: No, absolutely not. I’ve never seen a parachute in the flesh, so to speak. I imagine, from a risk management point of view, that eliminating the need for a parachute is probably going to be better than using the parachute. I'm looking forward to the papers that you dug out today, Drew.

Do you want to introduce the first paper that we'll talk about, from people who may have seen parachutes more than we have?

Drew: The first paper is called Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials.  It's published in The British Medical Journal, which is a very respectable journal, in 2003. The authors are Gordon Smith and Jill Pell. Professor Smith is a gynecology and obstetrics researcher at Cambridge University. Professor Pell is an expert in public health, looking at things like heart disease. Very, very qualified authors, but not necessarily the people you'd expect to be writing an informed study about parachutes. 

The paper starts off with some initial observations, like most papers do, based on literature reviews. They start off by pointing out that using parachutes is dangerous. Parachutes sometimes fail. Sometimes they have adverse side effects, not just by failing but by actually causing death through the operation of the parachute itself.

They also point out that some people who jump out of aircraft without parachutes survive—they give a few citations—and some people who jump out of aircraft with parachutes survive. They say there's a lack of systematic research into whether and how much the parachute contributes to survival. That's what the paper sets out to study.

David: Drew, we've done a few systematic reviews such as this. I'm not sure if any of our listeners have worked out what's going on here. They developed some criteria, they looked for any studies that included a fall of 100 meters or more, and a parachute as the intervention. Then, they tried to look at death or serious injury as an outcome within those studies. They excluded any study that didn't include a control group. 

Drew, the result of this literature search is they found zero studies of suitable quality, measuring the effectiveness of parachutes. Very, very uncommon to see a literature review that defines the criteria and then says, well, we actually haven't found any papers.

Drew: The authors are quite blunt about this. They say that this is pretty extraordinary that we've got these claims about parachutes and a lack of evidence. You're probably guessing that there's something a little bit more going on here, particularly given the journal, the authors, and the topic.

What is the crossover between gynecology, heart disease, and randomised controlled trials of parachutes? It turns out there is a very specific intersection point of those three things, which is hormone replacement therapy. 

Smith and Pell are making an argument against the idea of evidence based medicine. One of the central ideas behind evidence based medicine as a movement is that the history of medicine is filled with interventions that seem obviously effective, where there's observational evidence that they appear to work.

When those interventions are properly tested, such as with randomized controlled trials, they turn out not to be nearly as effective as people thought they were. Some of them are just totally ineffective. This seems to make sense, but it really annoys a lot of people. Particularly, if you're the one who's been promoting the thing which the randomized controlled trial has disproved. 

You've really got three choices. You can change your beliefs about the intervention, which is what evidence based medicine says you should do: here's a new trial, update your way of thinking. Or you can try to understand what's going on: how come your past observations and the trials disagree? Is it more complicated than either realized? Or you can attack the idea that randomized controlled trials are the best form of evidence. That's what this paper, all about parachutes, is trying to do.

It's actually very highly cited in the medical literature and in arguments about when we should and shouldn't rely on evidence.

The reason why we've got a cardiac and a gynecology person is that hormone replacement therapy was one of those interventions that initially seemed to be a good idea. Once they started conducting large scale, high-quality trials, a lot of the evidence in favor of it evaporated despite the fact that lots and lots of people believed that it was obviously a good thing to do. 

Now, the first thing we need to say here is don't take medical advice from a podcast. Particularly, don't take medical advice from this one. I'm not a doctor, David.

David: Well, actually, Drew, you are a doctor.

Drew: But not qualified to give any sort of medical advice.

We also should point out that there's been 15 years of research since this argument started. In that time, the evidence for and against hormone replacement therapy has just become more and more complicated, to the point where I couldn't even work out what the current consensus is, and I don't want to try to say what it is. It started off as something that seemed obviously a good idea. Then the evidence just made the situation really complicated.

David: I think, Drew, the important thing for us here, and what we do each week on the Safety of Work Podcast, is that Smith and Pell were trying to use parachutes to show that some interventions are so obvious that randomized controlled trials, in their words, may be a waste of time, and that at the very least we shouldn't discount what we observe in the real world just because we are unable to replicate it in a laboratory setting.

Drew, their discussion section has a subtitle, Evidence based pride and observational prejudice. The conclusion from the paper is worth quoting in full. Do you mind if I read the quote from the paper?

Drew: Please do that.

David: This is the quote from the discussion section. “Only two options exist. The first is that we accept that, under exceptional circumstances, common sense might be applied when considering the potential risks and benefits of interventions.”

“The second is that we continue our quest for the holy grail of exclusively evidence based interventions and preclude parachute use outside the context of a properly conducted trial. The dependency we have created in our population may make recruitment of the unenlightened masses to such a trial difficult. If so, we feel assured that those who advocate evidence based medicine and criticise use of interventions that lack an evidence base will not hesitate to demonstrate their commitment by volunteering for a double blind, randomised, placebo controlled, crossover trial.”

Drew: Basically, they're saying that sometimes we should just use common sense instead of demanding strong evidence and anyone who believes otherwise should jump out of an airplane without a parachute. 

David, this parachute paper and the metaphor that goes with it get used a lot in debates about evidence based medicine. The debates really come down to this: some people think that when an intervention makes strong logical sense and there's observational evidence suggesting it works, it would be unethical to delay offering that intervention while waiting for better evidence.

Other people say that's the exact problem that evidence based medicine is trying to fix. There are too many things that make strong logical sense and seem to work, and later on we find out that not only do they not work, they cause serious harm instead.

We've got these two sides of the argument: one set of people trusts what's obvious, and the other says, no, what is obvious is often actually misleading us.

David: Drew, before we move on to the next paper, I really like the matter-of-fact tone of this paper. They pose a question that maybe our listeners can think about: would you want a parachute or not if you were jumping out of a plane? There are no randomized controlled trials which say that the parachute is going to give you a better outcome than not wearing one, and they point out that people have survived falls without a parachute from anywhere up to 10,000 or even 30,000 feet.

People, like you said, Drew, die with parachutes on. The balance of these risks and benefits, the basis for using parachutes, is purely observational.

Continuing the example, they further say that anyone who decides not to wear a parachute and volunteers to fall or jump out of a plane has probably got some pre-existing psychiatric morbidity associated with their own personal condition. The people who use parachutes are probably different in some really important demographic factors, such as income and even cigarette use.

They then make this statement: “Individuals who insist that all interventions need to be validated by a randomised controlled trial need to come down to earth with a bump.”

I like the way they carried that example all the way through to make that point.

Drew: They do make, I think, the most persuasive case for that point of view. But in making that case, they hide a few key considerations.

One of the big ones is that it's not really a choice, at the point you have to jump out of a plane, of whether to wear a parachute or not. It's things like, do we make laws that all planes should carry parachutes just in case, given the massive expense that would entail to create and maintain the parachutes, to ensure they're always in perfect condition, and to train people to use them? And how does that compare to other interventions we could give people that might be more effective, such as having more reliable planes in the first place?

Do we move on to our second paper?

David: Yeah, let's do that. It’s also [...].

Drew: This is a lighter paper published in 2018. Given that the previous one said that there are no randomized controlled trials of parachutes, you might be pleased to hear that this one is Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial. This one has a long list of authors. If you look at the affiliation, some of these authors are skydiving instructors, so we do have a little bit more credibility here. 

This is, in fact, a study that did directly involve skydiving. The lead author is Robert Yeh, who is, again, a heart disease expert at Harvard, which might be a little bit of a clue to you as to where this paper is going as well. 

In the study, they approached people who were traveling on an aircraft and asked them whether they would be willing to participate in a study which involved jumping out of the aircraft, either with a backpack or with a parachute. Once those people had agreed to participate in the study, they then randomized which one they were given. 

They started off doing this on commercial flights, by approaching people who were sitting near them and asking them to participate in the experiment. They didn't get enough people for a statistical sample that way, so they also started asking their friends to participate in the study and extended it beyond commercial aircraft.

Now, when you're doing this sort of experiment, you need to calculate an appropriate sample size. If you've got a really large effect, you only need a small sample. They expected that, based on a typical parachute jump from around 4,000 feet, or a bit over a kilometer in the air, 99% of people without a parachute would die and less than 5% of people with a parachute would die.

When you've got that sort of extreme expectation, you only need a very small sample. Still, they struggled to find participants and had to keep varying the procedure to get enough people to agree.
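To make that concrete, here's a minimal sketch (ours, not the authors') of the standard two-proportion sample-size calculation, plugging in the effect the authors expected. The function name and the use of scipy for the normal quantiles are our assumptions.

```python
# Back-of-the-envelope sample size for comparing two proportions.
# Expected effect from the paper: ~99% mortality without a parachute,
# less than 5% with one. The code itself is our illustration, not theirs.
from math import ceil, sqrt

from scipy.stats import norm

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants needed per arm to detect p1 vs p2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # quantile for the desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

print(n_per_group(0.99, 0.05))  # -> 3: a huge expected effect needs only ~3 jumpers per arm
```

On numbers like these, the 23 people they eventually recruited were more than enough for statistical power; recruitment, not sample size, was the hard part.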

In total, they asked 92 people to take part. Only 23 agreed to participate. They got 12 people to jump out of a plane with a parachute and 11 people to jump out of a plane with just a backpack. The study concluded that there was no significant difference between the survival rates for people wearing the parachutes and people in the control group.
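The "no significant difference" conclusion is easy to reproduce. A minimal sketch, assuming (as the paper reported) that nobody died in either arm:

```python
# 2x2 table of [deaths, survivors] for each arm of the trial.
from scipy.stats import fisher_exact

parachute = [0, 12]  # 0 deaths among the 12 parachute jumpers
backpack = [0, 11]   # 0 deaths among the 11 backpack jumpers

_, p_value = fisher_exact([parachute, backpack])
print(p_value)  # 1.0: when nobody dies in either group, no test can tell the groups apart
```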

David: Drew, for our listeners playing along at home, that means a randomized controlled trial: two groups, one with a parachute and one with a backpack, jumping out of planes, and no significant difference between the fatality rates of the two groups. What's going on here? What's buried in the data?

Drew: Even though they say that they recruited people across a pool that included people traveling mid-flight on commercial airliners, the average participant in this study—the average person who agreed to participate—was jumping from a height of 60 centimeters, from an aircraft traveling at 0 kilometers per hour.

Now, those figures are cleverly hidden within statistical graphs, pluses and minuses, and error bars. Basically, everyone was jumping off a stationary biplane parked on the ground.

David: I mean, they buried the statistics, but there's also a fun photo in the article of one of the participants jumping off the wing of a plane.

Drew: Yes. When I say buried, I'm exaggerating a little bit. 

David: Yeah. Drew, it's another study making some fun of this idea of the necessity for evidence based medicine. I suppose in this case the authors are very much in favor of evidence based medicine, but they're pointing out the dangers of just reading the headline conclusions of a study.

I suppose the key message is that before using the results of research, it's really important to look at how the research was conducted and whether it matches the real world situation it's meant to shed light on. I think we do that, hopefully, on the Safety of Work Podcast. You particularly, Drew, take some time to look at how the research was designed, what we can infer, and what we should ignore in terms of our real practice inside organizations.

Drew: Sometimes people finish off a paper by saying what more research is needed. Sometimes that's just a token thing. Sometimes it is a genuine admission of just how weakly a study applies to the real world. 

This happens a lot in both medicine and safety. In medicine, we see a lot of these very weak studies, undertaken in very narrow circumstances, that are more analogies to the real world thing they're trying to test than direct tests of it. The example everyone will be familiar with is that everything causes cancer in rats when you give them a large enough dose of it. That doesn't necessarily tell us what is and isn't dangerous for humans to eat.

David: I think, Drew, we've seen these narrow studies in psychology for 100 years. As we look at more complex settings now, where psychology is struggling with its replication crisis, I just think it's really hard to design an experiment and understand what's going to happen once you get out of the narrow confines of that experiment into your complex, real world setting.

Medicine, with its biological systems, is maybe one such case, and safety, with its social systems, definitely is.

Drew: David, I think we as a research community are a little bit guilty of that. We sometimes talk about how we could have better evidence, but then say, this is the best evidence we have, as if the best available evidence is automatically good enough to act on. That's why they've written this paper and that's why they're using the parachute example again.

It's obvious that the truth is exactly opposite to this preliminary finding, that once you take this finding at 60 centimeters and you extend it up in the air, the results are going to reverse. It's not just that more research is needed, it's that more research is almost guaranteed to reverse the result of this bad study.

David: I think, Drew, the paper we're going to talk about now does just that. It reverses the result, still with a randomized controlled trial design, but a different way of randomizing the design of the study. The first two papers are somewhat parody papers. This paper is, I think, genuinely important for this question about parachute use and safety.

Do you want to introduce the final article for the episode?

Drew: Sure. Let's not pretend that this isn't also deliberately written as a fun parody article, but it's not written to be misleading as to the results it provides. I actually forgot to write down the journal for this one. It's a fairly obscure spinal injury journal, but still a respectable medical journal, making a point again about evidence based medicine. The study is called Does usage of a parachute in contrast to free fall prevent major trauma?: a prospective randomised-controlled trial in rag dolls. The lead author is Patrick Czorlich, and all the authors are surgeons or coroners in Hamburg.

David, you want to talk through the method or shall I?

David: Yes. I just looked it up and you were right, Drew, it's the European Spine Journal. It is quite an obscure, or more niche, medical journal. Look, what the authors did here is they acquired a rag doll—a dummy human—one that's used to provide basic medical education for young children. The brain, the internal organs, and the spine are represented with a whole series of air balloons, water balloons, and Lego brick structures. You can teach young kids about anatomy, I guess.

What they did is they threw this doll off a building 50 times—25 times with a parachute on the doll and 25 times without a parachute. They measured the injuries that this rag doll experienced as a result of these falls.

Drew: When we say they measured the injuries: they actually put it through things like whole body scanners, took cross sections of it, and did all sorts of things that they'd normally do. That's why they had a coroner on the team: to deliberately treat this like an accident investigation.

David: Very, very detailed injury classifications, so probably a very reliable assessment of severe injuries. The control group received a very wide variety of very serious injuries. The parachute group received a much smaller number of injuries, and the injuries were generally less severe.
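To see why 25 throws per arm is plenty when the effect is this large, here's a sketch of the kind of two-arm comparison the design supports. The counts below are placeholders we made up for illustration, not the paper's data.

```python
# Hypothetical counts, NOT the paper's data: [severe injury, no severe injury]
from scipy.stats import fisher_exact

free_fall = [23, 2]   # placeholder: most unprotected drops cause severe injury
parachute = [4, 21]   # placeholder: far fewer severe injuries with a parachute

_, p_value = fisher_exact([free_fall, parachute])
print(f"p = {p_value:.1e}")  # tiny p-value: a real effect of this size is easy to detect
```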

Drew: They are trying to say that even though you can't ask humans to jump out of an aircraft without a parachute, and even though it's obvious that if you change the circumstances to an aircraft on the ground you won't get valid results, that still doesn't mean you can't do a randomized controlled trial of the effectiveness of parachutes.

It's not just this paper. There is, in fact, a wide body of literature that studies parachute designs and has all sorts of methods for studying how parachutes work, what's the best design of parachute for different circumstances, and what makes them effective or not. Even what are the trends of injuries of people using parachutes in particular ways under particular circumstances? 

The really important point here is this: we can start off by saying that some things are apparently so obvious that it doesn't make sense to do trials of them, and that some types of trials don't make sense to do because they don't generalise to real world situations. But with enough ingenuity, the right equipment, and a well-stuffed rag doll, it is possible to design experiments to test almost anything, so long as you've got the willingness and the resources to do it.

David: Drew, hopefully this episode isn't confusing our listeners. From my perspective, I think we're very strong or at least I'm a very strong advocate of research and evidence for safety. I suspect that you are as well, Drew. 

I think what we're saying is that the real world doesn't necessarily listen to what we design and what we find in the lab. We need to have a whole range of different ways of trying to observe and trying to study the real world. 

As I was reading through your notes for this episode, Drew, I thought it was a good point for more recent listeners who are looking through the back catalogue: in Episode 20, we talked about our manifesto for reality based safety science, which talks about how we should think about describing the real world and how we should think about designing reality based research within organizations. That might be a good stepping-off point if people are looking for something to go back to.

Drew: David, I think that's something that both you and I, at various stages in our careers, have thought fairly seriously about: we've wanted to do large scale experiments to test out our ideas. We've had that sort of mindset that only a true experiment with randomization and control groups would really give a thorough test of things.

Very often, when it comes to the practicality of how we investigate this within an organization, we've decided that an experiment is not the best use of our time and resources. One of the limitations is that experiments really only answer a very, very well focused question. If you haven't done the ground research to work out what would make that perfect question, then you may be wasting your time with an experiment: it comes back with a negative answer, but that negative answer doesn't tell you anything or prove anything.

David: Yeah, I remember when I first put together my PhD research proposal. My hope was to do a randomized controlled trial with safety professionals, with some of them performing the safety professional role from a more traditional safety one perspective and a cohort of safety professionals performing their role from a safety two perspective: designing those different types of roles, randomly assigning professionals into those roles, and then observing what happened at their sites as a result.

Then we started getting all sorts of complexities: how does it change if the person doesn't believe that what they're performing is the right thing for them to be performing? How much can we really mess around with people's jobs?

Randomized controlled trials, I suppose in my experience, become very hard if we're looking at anything beyond something that's quite transactional, quite simply cause and effect. Maybe in safety there are limited applications, for the things that we're interested in, for these traditional randomized controlled trials where you put people into different cases.

Drew: David, I've learned a really important lesson from your PhD work because I've always wanted to do that big trial that you originally proposed. I've wanted to take different organizations, do something fundamentally different between those organizations, and compare them. I was so excited when that's what you wanted to do as your project.

When I look back now, the ethnographic research that you did tells us that that experiment would have failed. You found out through your ethnography that the safety practitioners have much less discretion than we assumed that they'd have when we proposed that experiment. We almost certainly know now that the result would have been either a lack of difference or just a confusing mess of differences that showed no clear pattern. It's only the ethnographic research that we did that could explain why we would have got that useless result. I'm glad now we didn't do the experiment and it sort of dampened some of my desire for the big grand experiment in safety.

David: Look, I think there might be an opportunity for some bigger randomized controlled experiments, but not necessarily with people as the randomized group. I think that's what's been done with other studies through the lab, in terms of randomising sites for different types of, I suppose, cluster experiments and things like that. I think some of those work where you're not trying to change the way people think about the world and how they perform their role.

I just think it's hard in real organizational settings to even figure out how you separate out the person from the activity, from the measurement.

Drew: Which is why I think that we probably do need more experiments on things that have very direct causal mechanisms, where there is a specific activity that's intended to achieve a specific result. Definitely, that's the sort of thing we should devise experiments for and test. Where it's something much more nebulous, like the role of the safety practitioner or the grand theory of safety that you subscribe to, we're probably not going to be able to design experiments that can test it directly.

I really don't think there's ever going to be an experiment that proves that safety differently works, or this is the experiment that proves that safety differently doesn't work, or this is why you should have safety one and safety two. Those are just not the types of questions which are resolvable through experimental work.

David: No, not any that I can think of. Not any designs that I can think of anyway. Drew, some practical takeaways now.

It's an interesting question that we asked today, and we'll reflect on it shortly. You've gone and targeted three papers. The purpose of all three papers, and I suppose our purpose in talking about them, was to challenge ourselves, challenge our listeners, and others to think in more sophisticated ways about evidence based practice. What should our listeners take away from this?

Drew: I think the number one takeaway is that the basic principle of evidence based medicine applies to safety practice, too. That principle isn't about randomized controlled trials. It just says that wherever possible, we should use the best available evidence, and we should apply that evidence to the situation in front of us based on the local conditions.

David: Drew, secondly, even when there appears to be evidence, don't just take people's word for it. Lots of people claim to have good evidence for certain safety practices until you look closely at exactly what the evidence is and isn't.

I think we've talked about this on the podcast various times in relation to safety culture, behavioral safety, and safety differently. The people involved in all of those approaches claim to have really strong evidence. I think what we've shown on the podcast is that for all of those areas, that's rarely the case.

Drew: The third one is that we shouldn't be totally dogmatic about what counts as suitable evidence. Randomized controlled trials are pretty good. They're usually the best form of evidence for testing a specific intervention with a specific outcome in mind. There are lots of situations where doing that sort of trial just isn't practical or isn't appropriate. 

The best example of that is that if we only accepted randomized controlled trial evidence, we'd never try improving safety at an organizational level. We'd always be doing small tactical things that we could run trials for.

David: Like, whether people should wear hard hats or not wear hard hats and other smaller interventions. I think we just demonstrated that with our discussion about the role of safety professionals as well, Drew. 

I think, as the rag doll study shows, there are more things that are amenable to randomized controlled trials than we might initially assume, with a bit of ingenuity in how we think about setting up our research design. You might not be able to randomize people. You might not be able to get people to participate in everything that you'd like. But how can you think about testing your hypotheses and your research questions in some more obscure but potentially relevant kinds of ways?

Drew: David, one last observation. Just keep that metaphor of a parachute in the back of your mind, and the three different things that parachutes represent. They represent that sometimes doing a trial is not appropriate or applicable. Sometimes they represent that the trial we're doing isn't providing evidence that maps onto the real world situation that we care about. And sometimes they represent ingenuity: that there are clever ways we can test things that might seem hard to test.

David: Yeah, I think they're great. I also wrote a little note here to myself about a few principles to remind people of. It ties into other things that we've said most weeks on the podcast. Whatever you're doing in your organization, always be clear about what the direct mechanism is that you're trying to influence. Don't just say improve safety or reduce injuries; get clear on that mechanism.

Ask yourself the question: do I have a baseline? If you don't have a baseline before you start doing something, you'll never know what's happened as a result. And where you can, try to find some kind of comparison group or control, some way to test.

There are definitely some research principles that should apply to any kind of organizational intervention we undertake. We shouldn't think of them as having to be either a randomized controlled trial or nothing.

Drew: Thanks, David. I agree with all of that. A message for our listeners, and something we'd like to hear from you: let's have a bit of a discussion about what the parachutes represent in safety. What are those things that we think are so obvious that it's not even worth trying to test whether or not they work?

David: I'm looking forward to that discussion, Drew. I'm looking forward to seeing what our listeners think. Be brave. Tell us and tell the community what the things are that you don't feel the need to have evidence for, because you've observed throughout your entire career that they have an impact on improving the safety of work.

Drew, our question for this week was, do parachutes prevent injuries and deaths? The answer?

Drew: I think the answer is no, no, yes. No, if you're only going to accept randomized controlled trials on humans. No, if you're conducting your study from a parked plane. Yes, if you're interested in protecting the life of a poor little rag doll that's been cut open, filled with water balloons and Lego, and thrown off a building.

David: That's it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. 

Join us on LinkedIn and engage in the conversation. Send any comments, questions, or ideas for future episodes to us at feedback@safetyofwork.com.