The Safety of Work

Ep. 13 Are there more accidents on Friday the 13th?

Episode Summary

On this episode of Safety of Work, we wonder if there are more accidents on Friday the 13th.

Episode Notes

To frame our discussion, we decided to reference a few papers. The papers we use are Females Do Not Have More Injury Road Accidents on Friday the 13th, Much Ado About the Full Moon, and Moon Phases and Nighttime Road Crashes Involving Pedestrians. Tune in to hear our chat!

Quotes:

“The idea is that if it’s a robust result, it should apply regardless of the decisions you make…”

“It’s becoming increasingly common now for researchers to publish their raw data alongside their publications, so that other authors can actually make their own assessment of the papers…”

“We’re pretty sure that accident-proneness is really a symptom of confirmation bias or statistical artifacts.”

 

Resources:

Näyhä, S. (2002). Traffic deaths and superstition on Friday the 13th. American Journal of Psychiatry, 159(12), 2110-2111.

Radun, I., & Summala, H. (2004). Females do not have more injury road accidents on Friday the 13th. BMC Public Health, 4(1), 54.

Redelmeier, D. A., & Shafir, E. (2017). The full moon and motorcycle related mortality: population based double control study. BMJ, 359.

Rotton, J., & Kelly, I. W. (1985). Much ado about the full moon: A meta-analysis of lunar-lunacy research. Psychological Bulletin, 97(2), 286.

feedback@safetyofwork.com

Episode Transcription

David: You're listening to The Safety of Work Podcast, episode 13. Today we're asking the question: are there more accidents on Friday the 13th? Let's get started.

Hi, everybody. My name is David Provan. I'm here with Drew Rae and we’re from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work Podcast. If this is your first time listening, then thanks for coming. The podcast is produced every week and the show notes can be found at safetyofwork.com. In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it. So Drew, what's today’s question?

Drew: Well, David, we were going to talk about behavioral economics today, but then I noticed it was the 13th episode, so I thought we might have a bit of fun and I’ll ambush you with a bunch of different questions, mostly about significant dates.

In the academic literature, dates like Friday the 13th or the full moon are called calendar effects. The idea is that there are particular dates that are lucky or unlucky, and maybe there is some purported mechanism because of tides or movement of the planets or lunar cycles, or maybe it's just pure superstition. I think the way you most commonly encounter this is the idea that emergency rooms have lots more people showing up on full moons, or that more accidents happen on Friday the 13th. That's our question for the episode: are any of these calendar effects real?

David: Drew, we do sometimes hear about calendar effects in safety. We might talk about more accidents happening on a Friday because workers are already thinking about the weekend, or during the December-January period around Christmas we hear stories about increasing incident rates. But let me go to Friday the 13th, Drew. I was actually born on Friday the 13th, and it was actually the same day that the first US space station (Skylab) crashed into Western Australia. Depending on which way you look at it, that's either one bad thing and one good thing on Friday the 13th, or two bad things.

Drew: You stole the joke off me there, David. I was born on the 11th of the 11th, which has no significance except that it was the day that Whitlam got sacked.

David: And it's also Remembrance Day as well.

Drew: Yes, a minute of silence every time it's my birthday. So rather than pick just one paper as we usually do, we're going to look at quite a few papers in this episode, to try to get an idea of the spectrum of things that are out there.

We'll start with Friday the 13th, and a 2002 paper in the American Journal of Psychiatry. This paper looks at traffic fatalities in Finland from 1971 to 1997, and it found that for men, there was no difference between Friday the 13th and other Fridays, but for women it found a big effect. So big, in fact (and this is the authors, not me), they reckon that 38% of the female traffic deaths on Friday the 13th happened because it is Friday the 13th. Your immediate reactions, David?

David: I'm really curious about that claim; I'm really curious as to how the researchers reached that conclusion.

Drew: Their conclusion comes from a common statistical technique for working out how much of the variability is due to a particular factor. They're sort of saying that once you account for other factors, 38% of the difference between that day and other days comes down to the only factor that's different, which is that it's Friday the 13th.

Before we take that too seriously, the real paper I got this from is actually a 2004 paper, which is a response by the authors Igor Radun and Heikki Summala. This is going to be a bit of a recurring pattern. This skeptical paper has less than a quarter of the citations of the 2002 paper, but it's called, very pointedly, Females Do Not Have More Injury Road Accidents on Friday the 13th.

The paper's a bit of a combination of a takedown and its own analysis. It begins by pointing out a number of features of the original article. The first thing it says is that, while the original article is supposedly about driving, it includes deaths in water accidents and in air accidents. It also includes passenger deaths regardless of the gender of the driver, which was kind of interesting. It does things like controlling for weather using the reported temperature in one particular location, but Finland's a big place and it doesn't control for the weather in the location of the accident. So apparently it matters whether it's raining in Helsinki, but not whether it's raining where the accident happens.

The original article excludes Good Friday, which is a problem because it's the only holiday in Finland that can occur on the 13th of the month, while it includes the other holidays. The general point here is not so much that each of these individual decisions is bad; it's that they show just how many different options authors have, when they do statistical analysis, to include things and leave things out.

David: Or they might not have many options at all, Drew. When I was listening to the way that you described what they included, it sounded to me like a case of where they got their data from. I know, for example, that in Australia the Australian Bureau of Statistics keeps fatality records across the general population, and they categorize those records.

One of those categories is generally referred to as accidental death. It's very common for that category of fatalities to include drowning, air accidents, road vehicles, and a number of other miscellaneous kinds of situations. I'm assuming these authors got that category of statistics, got the individual dates, went to the bureau of meteorology for the country, got the weather on those dates, and pooled all that data together for their paper.

Drew: That's certainly the case with things like the gender of the driver, because the death record shows the gender of the person who died, not who was sitting in the driver's seat. But it's not the case with things like whether they include holidays or not, or which holidays they counted in and out. That's the researcher's decision, and that points to a general issue worth talking about.

It's a thing called researcher degrees of freedom. There are lots and lots of choices you have to make when doing statistical analysis, and even if the choices each individually seem sensible, if you made different choices, you might get different results. The other name for it is the garden of forking paths. You're wandering down all of these decisions and you get to an answer. Have you got there because it's the answer, or have you got there because of all the individual decisions you've made along the way? The idea is that if it's a robust result, it should apply regardless of the decisions you make. If it's a shallow result, then tiny tweaks in the decisions make a difference.
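To make that concrete, here is a minimal sketch in Python of what probing researcher degrees of freedom can look like. The data and the inclusion choices are entirely invented for illustration; the point is just that several defensible-looking analyses of the same data can give different answers.

```python
# A minimal sketch with invented data (not the Finnish records) of
# "researcher degrees of freedom": the same question analysed under
# every combination of a few defensible-looking inclusion choices.
import itertools
import numpy as np

rng = np.random.default_rng(42)

n_days = 3000
is_friday13 = rng.random(n_days) < 0.004             # roughly 12 Friday-the-13ths
is_holiday = rng.random(n_days) < 0.02
has_water_or_air_death = rng.random(n_days) < 0.10
deaths = rng.poisson(2.0, n_days)                    # no real date effect built in

estimates = []
for drop_holidays, drop_water_air in itertools.product([True, False], repeat=2):
    keep = np.ones(n_days, dtype=bool)
    if drop_holidays:
        keep &= ~is_holiday
    if drop_water_air:
        keep &= ~has_water_or_air_death
    diff = deaths[keep & is_friday13].mean() - deaths[keep & ~is_friday13].mean()
    estimates.append(diff)

# Four "reasonable" analyses of the same data, four different answers.
print([f"{e:+.2f}" for e in estimates])
```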

David: I think, particularly, Drew, when you're trying to rebut or challenge a previous research finding, it's incumbent on you as a researcher, to the extent possible, to make your research decisions consistent with the original studies that you're trying to either replicate or challenge.

Drew: Yeah, that is sometimes true. There's also sometimes a risk that if they've made dodgy decisions, then just repeating those exact same dodgy decisions doesn't necessarily mean that you're getting a more robust replication.

In this new article, one of the things they did was say, "Okay, let's just make one simple observation." The explanation given in the original article was, roughly, that maybe females have more stress and are more likely to fear Friday the 13th, and therefore drive more badly. It's kind of dodgy, but the new article said, "Well, okay, let's just accept that that's a possibility." If that's the case, then the effect ought to apply to injuries, too. If women are really getting into more accidents, then it shouldn't just be the fatalities that go up.

The new article is a lot more careful and consistent in how it does the statistics, and it checks for both fatalities and injuries. Once you do the study carefully, it finds no difference at all between Friday the 13th and other Fridays.
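As a rough illustration of the kind of comparison involved, here is a minimal sketch in Python. The counts are made up, not the Finnish data; it simply asks whether the crash rate on Friday-the-13th days is out of line with ordinary Fridays.

```python
# A minimal sketch (with made-up counts, not the Finnish data) of the
# kind of check involved: is the crash rate on Friday-the-13th days
# out of line with the rate on ordinary Fridays?
from scipy.stats import poisson

crashes_f13, n_f13_days = 120, 48          # hypothetical totals
crashes_other, n_other_days = 7150, 2800   # hypothetical totals

rate_other = crashes_other / n_other_days
expected = rate_other * n_f13_days   # expected count if the 13th is just another Friday

# Two-sided tail probability of a count at least as extreme as observed.
p_high = poisson.sf(crashes_f13 - 1, expected)   # P(X >= observed)
p_low = poisson.cdf(crashes_f13, expected)       # P(X <= observed)
p = min(1.0, 2 * min(p_high, p_low))

print(f"expected {expected:.1f} crashes, observed {crashes_f13}, p = {p:.2f}")
# With these numbers p is large: no evidence the 13th is any different.
```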

David: Do we conclude then, Drew, that there are not more accidents on Friday the 13th than on other days or other Fridays from these research papers?

Drew: That's a pretty consistent finding. There are isolated papers out there that claim various Friday the 13th effects, but it's pretty much always the case that each of those isolated papers has some sort of follow-up that points out the mistakes in it. The more rigorous the study, the more the effect just goes away.

David: So our listeners and safety practitioners don't have to be any more fearful of what might occur in their organizations on Friday the 13th than they would be on any other day. What about the full moon effect then, shall we talk about that?

Drew: The full moon is particularly fun because even people who believe that the moon is a big problem totally disagree about what the effect actually is. You get a lot of authors hedging their bets, saying that they don't really believe, but they sort of do believe. They say things like, "Well, it's not us, it's lots of other people who believe that the full moon causes weird human behavior. We're not going to say whether that's true or not. What we can say is that because lots of people believe it, maybe that belief changes the way people behave," so believing in the full moon causes people to behave weirdly, like a self-fulfilling prophecy.

Just in case, let's check it out. There are a lot of papers like that, where the purported mechanism is that the full moon has an effect because people believe the full moon has an effect. Then you get the really weird ones that say it's tidal effects, gravity effects, magnetism, or magic.

David: Yeah, I think it's one of those things where the full moon is actually something physical, so there is the potential for those sorts of claims to be made: that there are actual differences in environmental conditions across different phases of the lunar cycle that may impact behavior or the environmental conditions where work or activities are occurring.

Drew: Well, hold that thought, David, because we're going to encounter one actual, genuine effect the moon definitely has on safety. Before we get into the genuine stuff, a lot of statistical research has this general problem that it claims to be impartially checking stuff out. The authors claim that they neither believe nor disbelieve the original effect; they're just doing statistical analysis.

The trouble with that is that even good statistics can produce false positives. If lots and lots of people supposedly impartially check out nonsense, some of them, just by the odds, by rolling the dice, are going to find that it's statistically true. To get around that, David, how would you work things to make sure that you weren't just riding statistical luck?

David: Well, I think in one of the other episodes we talked about spurious correlations. If you look at enough variables, over enough time, in enough different situations, then you're going to find relationships. But I think what you're asking, Drew, is whether those relationships are actually the result of an underlying phenomenon or whether, like you said, it's just luck.

The first thing you need to do is check if there's an effect at all. There are a lot of different statistical methods and techniques to examine the size of the effect and how the effect matches the situation that you're studying. You'd want to understand whether there is actually an effect at all and how big that effect is.

I think what we've said in a number of episodes is that the bigger your study, the bigger the groups involved, and the bigger the data set, the clearer you would expect a real effect to become.
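Drew's point about rolling the dice is easy to demonstrate. Here is a minimal sketch in Python, with entirely simulated data: a thousand "studies" of an effect that does not exist, of which roughly five percent will come out statistically significant anyway.

```python
# A minimal simulation of the "rolling the dice" point: a thousand
# studies of an effect that does not exist, tested at the usual 5% level.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(13)
n_studies, false_positives = 1000, 0

for _ in range(n_studies):
    # Both groups come from the SAME distribution: there is no effect.
    group_a = rng.normal(size=100)
    group_b = rng.normal(size=100)
    _, p_value = ttest_ind(group_a, group_b)
    if p_value < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_studies} null studies came out 'significant'")
# Expect roughly 50. A literature that mostly publishes significant
# results will therefore always contain some lunar "effects".
```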

Drew: If the effects start to fade away and disappear in those more rigorous studies, that's definitely a time not to believe. The other thing is that you've got to have at least one possible mechanism, and that mechanism gives you extra things that you can check. If you think that the full moon effect works because people believe in the full moon, then you would expect people who believe the full moon is a problem to have more accidents. If you think that the full moon has an effect because it's a bright, scary light, then you wouldn't expect that effect to happen on cloudy nights. We can use things like that to put in that extra level of check that the mechanism is actually operating.

David: Yes, good point. Good point, Drew. How would you go about doing that?

Drew: You add in those extra factors as part of the regression analysis. You expect the things that you claim to matter to matter, and you expect the things that you claim not to matter to not matter. A real effect should change as your mechanism gets bigger, and it shouldn't change according to the things that you think are irrelevant.

If you think that the effect is because of brightness, then when it gets brighter, even if it's not a full moon, that should still have a partial effect. If you think it's because the full moon causes bad weather, then you should expect the effect to exist in bad weather regardless of whether it's caused by the full moon or not.
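Here is a minimal sketch in Python (using statsmodels, with made-up nightly data) of what putting the mechanism into the regression looks like. If brightness is the real driver, the brightness coefficient should carry the effect and the full-moon indicator should shrink toward zero.

```python
# A minimal sketch, with made-up nightly data, of adding the purported
# mechanism (night-sky brightness) to the regression alongside the
# full-moon indicator. The simulated crashes depend only on brightness.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_nights = 2000
full_moon = rng.integers(0, 2, n_nights)                     # 1 on (hypothetical) full-moon nights
brightness = 0.8 * full_moon + rng.normal(0, 0.3, n_nights)  # moonlight minus cloud cover
crashes = 5 + 2.0 * brightness + rng.normal(0, 1.0, n_nights)

X = sm.add_constant(np.column_stack([full_moon, brightness]))
fit = sm.OLS(crashes, X).fit()
print(fit.params)  # [intercept, full_moon, brightness]
# Expect the brightness coefficient near 2 and the full-moon
# coefficient near 0 once the mechanism is in the model.
```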

Anyway, let's look at a particular paper with a fun title: Much Ado About the Full Moon: A Meta-Analysis of Lunar-Lunacy Research. The authors are James Rotton and I.W. Kelly. (Unfortunately, I couldn't find out what I.W. stands for.) Rotton's an interesting author because he publishes mainly about how the environment affects behavior.

Most of it is sensible, plausible stuff. He looks at whether weather makes antisocial behavior better or worse. What effect does global warming have on crime rates? How does the way people commute affect their mood during the work day? That sort of thing, where there is a plausible effect.

As best I can tell, he and Kelly seem to use lunar research not because they believe in it, but because it's a good way of pointing out common statistical mistakes that people make in other, more reputable research.

In this paper, they look at the history of the academic debate about full moon effects and all of the different ways people think the moon might change behavior. There's a genuine effect in that lunar cycles do affect the weather. There's a genuine effect in that the amount of light at night changes through the lunar cycle. There's a possible effect in that the way people sleep and their circadian rhythms change through the month.

There are possible psychological effects from the amount of light that people get. That one's not really plausible given the small amount of light that comes off the full moon. Then there's the weird, semi-magical stuff, like the idea that the water in our bodies works like the tides, or that there are magnetic effects from the moon. Those just aren't scientifically plausible, but they might have an effect because people believe in them and that changes their behavior.

David: In a previous episode, Drew, we talked about meta-analysis. What the authors did here is they pulled out all of the research on lunar cycles and their impact on behavior, and they pulled these different possible mechanisms, I suppose, out of all of that research. What did they do next?

Drew: They're not combining the studies like you might in a meta-analysis. They're really just doing a critical, systematic literature review, looking at the methods in the studies. They actually went to the extent of redoing a lot of the calculations to check if they were done properly, because the first thing they found was lots of really simple computational errors. They checked the graphs, they checked the statistics, and found that people had just made blatant mistakes.

The next thing they found was lots of effects that they called fickle. What they mean by fickle is that even among the studies that find an effect, the size, the direction, and the important factors keep changing.

Sometimes the weather seems to matter, sometimes it doesn't. Sometimes the full moon seems to cause more positive behavior and sometimes it causes more negative behavior. Sometimes it's the full moon, sometimes it's the new moon. Sometimes it's the day before and after, sometimes it's just that particular day. The effect keeps swinging around and being different whenever different people look at it.

The other thing they found was what we already talked about: lots of researcher degrees of freedom, with people making different statistical choices and no real justification for why different authors had made different choices.

This is where they did something a bit interesting. They went and got hold of as much of the original data as they could. They wrote to the authors and said, basically, "Can we have your Excel spreadsheets, please?" They did every study again, using the same choices, to see whether the results held up consistently.

David: That process, Drew, is that common in research? In my experience, I haven't heard too much about actually going and seeking out the data. It's becoming increasingly common for researchers to publish their raw data alongside their publications, so that other authors can actually make their own assessment of the papers before incorporating them in their own research, or maybe including them in this type of study, but you don't see too many published papers where the reanalysis has been done by the new authors.

Drew: It is increasingly common for people to publish their data, but most often the criticisms don't come out in new papers. They come out in letters, or in blogs, or in other forms of peer review. It is rare to actually redo the analysis rather than just criticize the calculations. So that's kind of interesting.

The other thing that's interesting is that there is actually some evidence for some statistically significant relationships. Having redone the analysis, they said, "Look, some of these studies aren't mistakes." The studies, at least as individual studies, do find an effect, but they said this isn't enough to believe that the effect is real.

The first reason is that for every study that finds an effect, there's a similar study that finds the opposite or none at all, so there's no replicability. Secondly, there's not really statistical significance. The idea is supposed to be that for a real effect, the evidence should be strong enough to make you throw away the idea that there is no effect. In this case, the evidence really just says there might be something there, but we cannot reject the idea that there's nothing there. The third one they call predictability; I'd actually call it effect size. This is the problem that even if there is an effect, it's so small that it doesn't really change people's behavior.

They said the moon probably can't explain more than 1% of any day-to-day variation in behavior, and that's too small for you to actually do anything about a full moon effect. You wouldn't even put on an extra nurse in your ER based on a 1% change.
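As a back-of-the-envelope illustration (the staffing numbers here are invented, not from the paper), here is what 1% of explained variance amounts to for an emergency room:

```python
# A back-of-the-envelope look at what "1% of variance explained" buys
# you. The staffing numbers are invented for illustration.
import math

mean_arrivals = 200   # assumed average ER presentations per night
sd_arrivals = 30      # assumed night-to-night standard deviation

# If lunar phase explains 1% of the variance (r^2 = 0.01, so r = 0.1),
# the shift in the nightly mean it can produce is roughly r * sd:
lunar_shift = math.sqrt(0.01) * sd_arrivals
print(f"about {lunar_shift:.0f} patients per night either way")
# ~3 patients against routine swings of 30 or so: far too small to
# justify rostering an extra nurse for the full moon.
```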

David: It's one of those areas where even if the effect is statistically significant, and even if you can make a claim, if it's not really useful for informing the real-life decisions that people make, then perhaps you haven't found something useful, even though you might have found something that exists.

Drew: Exactly. Let me just throw in a couple of extra quick papers that they didn't cover in this study, which get to what you were saying before, David. The first one isn't about weird effects; it says, "Motorcycle accidents go up during the full moon," because motorcyclists get distracted by this big bright object in the sky. What do you think of that one?

David: Well, it's a strange conclusion. It's one of those things where you can probably go, "Yeah, maybe under certain situations," but the thing that surprised me when you brought up this paper was that it was in the British Medical Journal. It's one of the most reputable journals in any scientific field, so that really sparked my interest much more.

Drew: Yeah, I have to admit that I spent a good 45 minutes reading through the paper thinking, "Maybe this is something genuine," and then I noticed it was published in their Christmas edition, which I should have noticed to start with. This has genuinely caught doctors out before, so I'm not too embarrassed about it, but the BMJ has got into a bit of a habit in recent years of having a Christmas edition full of weird joke papers. They all look like normal papers and the edition looks like a normal edition, but they're not really intended to be taken seriously. I thought I had something there, but then I saw the date. The Christmas edition is kind of their equivalent of April Fools'.

Another one, and this one you can tell is reputable (I think) by the ultra-boring and specific name of the journal: the Journal of the Illuminating Engineering Society of North America. The study is called Moon Phases and Night Time Road Crashes Involving Pedestrians. It's a well-conducted study. It's pretty robust to statistical effects and researcher degrees of freedom, and it points out what (I guess) was probably going to be fairly obvious to start with, which is that on darker nights, pedestrians are more likely to get hit by cars. There you have it, there is a full moon effect. It's not that the full moon is dangerous; it's that on a full moon you can see the pedestrians.

David: There you go, that's actually an effect in the other direction, which means a full moon might actually be a slightly safer time to be in and around the road environment than other times in the lunar cycle.

Drew: Yeah. There are actually a few papers that claim positive lunar effects. There seems to be a weird trend of papers about cooperative behavior claiming that people cooperate more on full moons, but that's not really any more reputable than the lunar-lunacy stuff.

This has been a bit of fun, but I think there are some real, practical takeaways we can get from this, and that's why I wanted to go into it. One of the things that we do on the podcast is that we tend to pick out one particular paper to talk about, and the risk with one paper is that you can find a single paper claiming some really, really strange things.

David, I wonder if you might want to say a bit on how we pick papers, and on the practical takeaways people can have when reading the literature.

David: This is one where I think we're somewhere in the middle. We do often talk about one paper or two papers, and what we've got to do is make a judgment through our own understanding of the literature, if it's in an area of literature that we're more familiar with, or do a bit of extra reading, just to see how consistent that individual research paper is with the rest of the field.

We probably don't do that as much as we'd like to because we're trying to get an episode out every week, but we do make a judgment. We try to share with each other what degrees of freedom and what choices we've made, like we've talked about in this episode, in how we arrived at whether we're going to talk about a particular paper and what we're going to say about it. The main thing is to try to present individual papers to our listeners in a way that, to the best extent we can, represents the overall conclusions in a particular area.

Drew: That's something we'd encourage for you, too: don't take conclusions from single papers, but see if you can get a sense of how representative a paper is of the field of stuff that's published on that issue. There are lots of ideas, and there's always a paper that supports each of them, but it's really about the balance of evidence.

David: I think that's also the case practically in your organization. If you're trying to do something across the organization and you think you're getting a really good outcome in one area, and then you do the same thing in a different area and get a very different outcome, that's not that different from reading two separate research papers that researched the same thing and found very different things. That should really spark curiosity as to why something isn't working everywhere in your organization.

Drew: I think there's a lesson for us all in just the prevalence of ideas like the one that emergency rooms have more people turn up on full moon nights. You'll get people who work in emergency rooms who'll tell you that it's absolutely true. You'll get papers that tell you it's absolutely true. And I can tell you pretty confidently, from people who have studied this really rigorously, that it is not true.

The point is that it's really easy, from the evidence in front of our own eyes, to come to an almost magical way of thinking: we're doing something and it seems to work for us. If we actually stop and think about it, there's no plausible mechanism by which it should be working, but it's worked for us in the past, so it's hard to give it up.

David: These ideas, Drew, are somewhat ritualistic and very common across particular professions. Like you said, even in the healthcare sector there are people who genuinely believe that to be the case about emergency rooms and the full moon. In safety, there's a whole lot of safety practitioners who believe a lot of things too. Do you want to talk about some of those that might fit into the same category?

Drew: Sure. One of the prevailing ideas that was around in the scientific literature for a long time was the idea of accident-proneness: that there are some people who seem to get hurt more often. You can definitely still find people who believe that today about particular people, but we're pretty sure that accident-proneness is really a symptom of confirmation bias or statistical artifacts. It's not the case that you can do something useful by finding out who those people are and singling them out. What I actually want to do, David, is throw that back on you and ask, what superstitions do you believe in?

David: In relation to safety, Drew?

Drew: Yeah, in relation to safety. I mean, I think we've all got superstitions that we stick to when we probably shouldn't.

David: I'm a little bit obsessive-compulsive, so I'd be very worried about my day if a particular thing that I wanted to finish was unfinished. So I suppose I have some superstitions in the sense of just normal day-to-day tasks, like getting emails done, and things like that where I get quite anxious and quite distracted if I haven't been able to finish something up. I don't know if that counts, but that's about as close as I can get off the top of my head. I'm not afraid of black cats, I'm not afraid of walking underneath a ladder. I'm afraid of being out on top of a ladder a couple of stories up without fall protection, but that would be about it.

Drew: Yeah, that's less of a superstition than good advice, I think. For me, I tend to anthropomorphize computers and electronic devices. I get really annoyed when people swear at a piece of technology, not because I think it's pointless, but because I think that you should be polite to computers. If you're going to make threats, don't swear at them; make a very, very credible, specific threat. I say, "Thank you," to my Alexa. When my computer is not working, I describe in great detail what I'm going to do to it and what it's going to be replaced with.

David: I did read quite a long debate on whether or not you should say, "Please," and, "Thank you," to Siri when you're asking your device for help.

Drew: Yeah, that's definitely something that I'm in the habit of even with my devices that don't talk back.

David: There you go.

Drew: The final, serious takeaway is that behind every statistical effect there is some sort of mechanism, and we really should seek that mechanism out. That's what statistics are great for: indicating that there might be something there. We've got to go a step further, think about what that something is, and go looking for it.

When we say things like "subcontractors are at higher risk," that's not useful to us until we think about what the mechanism is. If we think subcontractors are at higher risk because they've been given more dangerous work, that's a very different problem from their being at higher risk because they're taking greater risks, or because they're not as well protected. The statistical claim tells us there might be something there, but it's the mechanism that tells us what would be a good solution to the problem.

David: Yes. Getting even closer to the topics we've talked about today, Drew: if in your organization it's a common discussion that, leading into Christmas at the end of the year in December, we have more accidents or work is less safe, you really need to put your money where your mouth is, if you like, and claim some kind of mechanism there, whether that's workers being distracted or extra productivity pressure to finish things off. And don't wait for accidents to happen. See whether those mechanisms exist. Is there more pressure? Are people more distracted? Measure and test those things rather than sitting back and seeing the relationship after it's already occurred.

Drew: Yeah, and from the evidence in this podcast, looking at mechanisms: if you don't believe us and you think that there really is a full moon effect, the most plausible thing is that it's something to do with light, so turn the lights on and you'll probably deal with the problem.

David: Very good. And in relation to that positive finding that you did find there, Drew, with pedestrians being more visible to drivers during a full moon, there's also a whole lot of research, which we haven't had time to pull out today, about pedestrians wearing high-visibility clothing being less likely to be hit. If you're a pedestrian or a cyclist, that's a good thing as well.

Drew: We'd like to finish each episode with invitations to our listeners. We'd love to hear from you about your best examples of magical thinking in safety. What do you think are superstitions? Or what are the things that you've heard or wondered about that you're not sure whether they are superstitions, and you'd like us to check out? I certainly found looking into these fun, and I'd love to be given a couple more similar things to have a look at.

David: We have played around a little bit with safety moments in the past, so if anyone wants to start talking about safety moments as far as rituals and magical thinking are concerned, then we're happy to take that on as well.

That's it for this week. We hope you found this episode thought-provoking and ultimately useful in checking the safety of work in your own organization. Send in any comments, questions or ideas for future episodes to feedback@safetyofwork.com.