The Safety of Work

Ep.49 What exactly is a peer reviewed journal paper?

Episode Summary

On this episode of the Safety of Work podcast, we discuss peer reviewed journal papers. We dig into what they are and how they function. Whether you are using them for research purposes or setting out to write your own peer-reviewed paper, this conversation should prove useful.

Episode Notes

This topic was a request from one of our listeners. Join us as we dig into this frequently asked question and let you know all about academic journals and what you can take away from findings therein.

 

Quotes:

“I still sort of think fondly...of doing my PhD and...you could look up the catalogues online. So, you could sit at your desk and find a reference to the paper, but then you’d need to wander the shelves and find the right volume and pull it down and take it to the photocopy machine.”

“Sometimes if a paper hasn’t advanced satisfactorily between reviews, then the editor will just make a call…”

“You know that you’re going to get peer reviewers that think that research is quantitative.”

 

Resources:

feedback@safetyofwork.com

Episode Transcription

David: You're listening to the Safety of Work podcast episode 49. Today, we're asking the question, what exactly is a peer-reviewed journal paper? Let's get started.

Hi, everybody. My name's David Provan and I'm here with Drew Rae. We’re from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work podcast. In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it.

We’ve got something a little bit different today. Drew, what's today's question?

Drew: David, today's question comes courtesy of listener Tony. I was chatting with Tony offline. He pointed out that even though we try to base each episode on one or more peer-reviewed journal papers or journal articles, we actually (I think) did an episode on how to find and access the papers, but we never really explained what a peer-reviewed journal paper is.

That's really what this episode is. We just thought we'd do a frequently-asked-questions episode about academic journals: how they work, how stuff gets published, and what practitioners should read into stuff they find, based on the publishing process.

David: Drew, are we going to dive straight in and start talking about what a journal is?

Drew: Yes. We'll try to make it an evidence-based discussion as we usually do. I haven't actually pulled out any particular bits of research here, but there is some research on the quality of peer review that we’ll mention part way through.

Let's start with the absolute basics. What is a journal? Journals are a really archaic part of academia. To understand them, you've got to go back to when there were only a few gentlemen scientists talking to each other. The way they published stuff was by sending each other letters or by getting together as groups. Someone would keep minutes of the society meetings and publish those.

David, do you reckon your letters and papers are ever going to make it into the scientific record?

David: No, I don't think so. As much as we like to think now that we're creating new knowledge and understanding new things about the world, there were certain periods during the scientific revolution when it would have been quite interesting to be involved in some of those society gatherings.

Drew: Some of the names of people who were there at the same time and place, just getting together to chat about their work would have been amazing.

The trouble is, even at that time, there were just too many people doing different types of science. As the number grew, they had to put in place a more formal process of sharing information with people who weren't at the meetings. Rather than everyone coming along to the meeting, and the people who didn't attend reading the minutes, we got a process where people submit papers in—they mail stuff into the meeting—and then it gets mailed out again like a newspaper telling you about the latest work that's been sent in. Only some people would actually get invited to present their work at the meeting.

It was actually really controversial the very first time someone asked for a second opinion and said, we're not going to send this out to everyone—I'm going to get someone else to look at it first. That was the first instance of peer review. The scientists got absolutely up in arms that the editor would dare send their work for checking. It was like inhibiting free speech to not be able to broadcast to everyone.

Obviously, as we get more and more people doing it, we've got to have some sort of quality process in place. In particular, the number grows beyond the point where we can have something like the Royal Society be the one place where all science happens. We have to have specialist societies for each topic; then they break into specialist societies, and then those break into specialist societies again. Think of where safety sits. It's like the fourth tier of an engineering hierarchy, or the fourth tier of a social science hierarchy, that has split and split and split until safety became a topic that had its own journals.

Obviously, once you get lots of journals, you can no longer (as an individual) just subscribe to the journal for your society, because the work you need could have been published by any number of different societies. Instead, what we have are whole libraries that collect the journals together.

If you go to a university library, the journals are often in a part that's no longer publicly accessible, called the stacks—rows and rows of shelves filled with these journals that get sent out each month, which the library collects into binders. It used to be that most libraries had their own binding service to collate their monthly journals together. You can go to, say, April of 1980, and there's a binder for each journal published in that month.

David: Yeah, Drew. I think it's been a very long time since I've pulled out a physical issue of a journal—probably not since undergraduate psychology in the 90s. Now, I suppose you don't actually physically pick up journal issues very often—well, I don't anyway. Is that something you still do? Do you still get mailed hardcopy journals?

Drew: Sometimes. When I publish a paper, they'll send me just that issue of the journal that my thing got published in. But yeah, I still think fondly—just because it kind of dates me in the academic genealogy—of doing my PhD, when any time I found a paper, you could look up the catalogues online.

You could sit at your desk and find a reference to the paper. Then you'd need to wander the shelves, find the right volume, pull it down, and take it to the photocopy machine. Often these were dusty binders that no one else had opened since they were bound. It was quite a traditional experience, I guess, but very time-consuming to find each individual paper.

David: And now—I suppose for our listeners—they're all just a web search away. We spoke about how to find and access research in (I think) episode 34. Now you can look at databases or even Google Scholar, find the references, go straight to the website of the journal, and get access to the paper all in one step.

Drew: Yeah. Very often you don't even need to know which journal it's in. You just follow a link from one of these databases and the paper appears in front of you. Pretty much every journal has its own website, and its website is like a factory. It's got a space for authors to put stuff in. It's got a space where the journal pushes stuff back out again and makes it accessible to the databases. Then it's got a whole lot of back-end software that runs all of the checking and processing, which turns things from author submissions into published articles.

David: You mentioned that there are many, many journals now in all sorts of different disciplines—even in safety, we've mentioned probably four or five major journals, but there are dozens of minor and open-access journals as well. Who owns and runs all of these journals?

Drew: All of the old journals started off as owned by Royal Societies. It's really expensive to run just one journal and it's very profitable to run a thousand journals. 

What happened was some big publishing houses set about offering it as a service: here, we'll take it off your hands and run it for you. Societies said, great—our journal still works, it's still got the same editorial staff, just someone else is looking after all of the expenses for us. This is fantastic.

Then over time, that process got to the point where it became a really big business. There are one or two publishing houses that own most journals now. I'm not going to particularly pick on names, except I will say that most of the safety ones happened to be owned by a company called Elsevier. When people are complaining about big publishing houses, Elsevier tends to be the name that they pick on. There's not really much to choose between them.

David: I think the main complaint (I suppose) would be just the cost of accessing that scientific knowledge, particularly for practitioners who maybe don't have subscriptions and access. I think that's probably the major complaint, I'd suggest.

Drew: I'm going to be honest and say that most of the complaint doesn't really come from practitioners. It comes from academics complaining about the fact that they do the work to produce the papers, they get asked to do the work of being editors and peer reviewers, and then their universities need to pay to access the papers that they have just written. Very often all of that money comes from government grants. This is stuff that really should be owned by the people, because it's being paid for by the people, but is in fact owned by the publishing house.

David: I think we've seen a very similar thing in the last decade or so with access, here in Australia, to Australian Standards—that's obviously the Australian version; our listeners overseas might think of ISO standards, [...] standards, or API standards, or anything like that. We've only just now started to look at democratizing access to some of those Australian Standards. Whereas traditionally, it was like you said: people sat on committees, it was all funded by the government, and then there was a private company that was able to charge for access to them.

Drew: Yes. There are two movements that I think are fairly relevant to practitioners. The first one is the idea of open access. That's where the author pays the journal to make the stuff available. Once it's published, anyone can access it for free. That's just a shifting of cost from the end-user to the person producing it. David, I think you've done that with a couple of your papers—actually paid for them to be made available to the public.

David: Yeah, I think we definitely did that with one. It's not a cheap process as an author. It might have been a couple of thousand US dollars, $3000 or so. That just means anyone can access it. It can be used for any purpose. 

It's funny, but even as the author of a paper, as you said, once you sign over the rights to a journal to publish it, then you're really, really restricted—even as an author—in what you can do with that paper. Paying for open access is just a way of making sure that it can be freely shared and accessed by anyone.

Drew: The other thing that's been happening—this is why we talk about reputable journals—is that there are so many authors, particularly very young authors, sort of new to their studies, who are wanting to get things published. There are a number of publishing houses that have been set up basically just to prey on those young people.

They charge them a fee in order to publish their work, but their work doesn't receive proper peer review or publishing services. The really big publishing houses are, in a sense, predatory too—they're sitting on top of the system, taking money—but they are providing a genuine service, and your stuff genuinely does get edited and peer-reviewed. Then there are a lot of less reputable journals that don't provide any of those services but still try to charge authors money to publish their work.

David: I suppose we've been through what a journal is in a bit of detail, but you just started there talking about the high-level process of publishing. Let's talk about that publishing process. How do things get published in journals? What does that process look like?

Drew: The first thing, which a lot of people don't actually understand, is that there are no limits on who can submit stuff to journals. Any of our listeners today could submit a paper to Safety Science. You don't have to belong to a university. You don't have to have an academic approve it. There aren't any rules about it. You just go to the website for Safety Science and click on the link for authors. There's an online form that you fill out, you attach your paper, and it's submitted to the journal.

Sometimes you'll find really old journals where you have to email it to a person, or you have to submit it in a very specific file format with particular types of referencing. In most places now, you format the paper however you like, you use whatever referencing system you like, and you just submit it.

That goes to the journal staff, who check that you've filled out the form correctly. They throw out stuff that is total rubbish—totally the wrong language, or something that doesn't even look like an academic paper and is really just spam. It then goes to the editor, who does a genuine quality check. They'll usually read some of the paper and make sure that it's written in decent English. They'll check that it matches the journal and hasn't been sent to the wrong place by mistake. They'll then send it to an associate editor.

That's the job I have. I'm an associate editor for a journal called Safety Science. The associate editor reads the paper in its entirety. That's the first level of peer review—an associate editor will read it.

They'll either do a thing called a desk reject—basically saying, based on this one peer review, I'm not going to let it through to the journal; the associate editor has a fair bit of power to say what goes in or what stays out—or they'll send it out to reviewers.

Those reviewers are just people picked by the associate editor as people who would make good reviewers for that paper. Sometimes it'll be because the associate editor knows them or knows their work, or we've got our own sort of secret search engines for potential reviewers that different keywords go into. They're AI systems that try to identify who would make good reviewers based on the content of the paper.

David: Drew, you mentioned desk rejects. As an associate editor at Safety Science, what's the ratio of papers that you reject at first pass versus send out for further peer review?

Drew: Safety Science would get between 2000 and 3000 submissions every year. Of those, about 10% (I think) get rejected by the editor, and the rest go to the associate editors. It then depends a little bit on the associate editor—we each have our own domain, and you get different ratios of good stuff to unpublishable stuff in each domain. Typically, somewhere between 10% and 20% of papers get sent out for peer review.

David: So of everything that comes in to Safety Science, somewhere between 10% and 20% actually makes it through those first two filters and goes out for peer review.

Drew: Yes. The others come back with a short message to the author saying that it doesn't really fit the scope of the journal, or would be better sent to a conference than a journal, or with some brief reasons why it would not make it through peer review.

David: So you do your secret search in your database for safety culture, or one of your associates might search for something like safety professional and turn up my name or something like that. Then you send it out. How many reviewers do you send it to, and what's the process they follow?

Drew: Different journals have different numbers of reviews that they need in order to make a decision. The standard at something like Safety Science, which is fairly common across safety, is to aim for two decent-quality reviews. Very often, we'll need to solicit many more reviews than that.

Reviewers can say, no, I'm too busy, and not review. Or reviewers can say, yes, I'll do it, but then never get back to you. Very often, reviewers send back reviews that are too short or too nasty to be useful, so we need to send it out again. The aim is that you get two or three people who have put decent, reasonable comments on each paper.

David: For our listeners, these are typically double-blind processes in peer-reviewed journals. The reviewers don't know who the authors are, and when the reviews come back, the authors don't know who the reviewers were.

Drew: Although there's a lot of guessing that goes on. Safety is a fairly narrow community. You can very often guess who has written a paper based just on the topic. We often have stabs at who we think the peer reviewers are. Dave, how often do you think you've been pretty sure who's been making the comments?

David: A few times. Not so much, because I don't know the academic network as well, Drew, but one of the dead giveaways is when a reviewer comes back and tells you there are three or four things you should cite, and all three or four of those things have a common author. That might give you a bit of an idea that the reviewer thinks you should be upping their citation count.

Drew: Yeah, it's always a tricky one. Obviously, if it's been sent to the expert in the field and you haven't cited them, it's fairly legitimate to say, look, maybe you haven't cited the experts in the field. More often than not, though, it's a little bit of a game—people are offended that you haven't referred to their work.

Each reviewer makes a bunch of comments. In most journals, they are asked to give an explicit recommendation ranging from reject to major revisions, minor revisions, or accept the paper as is.

I'll tell you, as an editor, every time I send a paper to a reviewer and they come back saying accept as is, I basically need to throw out the review and find another one. There's no piece of work that you can read thoroughly and not at least find some reasonable things to say.

David: Yeah. Drew, then they all come back. You'll get your two reviews back as an associate editor, and you'll have to make a recommendation. You might have two reviews where one says major corrections and one says minor corrections. You've still got to make a decision about what you do with that paper.

Drew: Yes. I'll read the reviews and I will forward those to the author. If there's something that's a bit confusing—like if one of them has said, this is fantastic, and the other has given three pages full of comments—then I'll send a little bit of a steer to the author saying what to focus on. Or if there has been a contradiction, I'll tell the authors, hey, there is this contradiction here, and give them some guidance as to what I expect from them. I might tell them, you need to address both of these, but you obviously can't agree with both of them. Or I might tell them, look, I agree with this person. Sometimes a reviewer will say something that's fair but still a bit harsh. I'll say, look, I'd like you to answer this, but it's quite acceptable if you answer just with no.

David: I suppose there's a rinse-and-repeat thing. The comments go back to the author, the author makes all of the updates to the paper, and then typically they'll have a table at the front of their resubmission with, here are all the comments, and here are the things we have done as authors to address each of them. That process can go (I suppose) round and round and round until there's a conclusion. That can happen many, many times.

I think we talked about a paper in one of the earliest podcasts, maybe episode 10, on the authority to stop work. I think that paper, with Dr. Weber and Sean MacGregor, took us about two years from first submission to the journal to publication.

Drew: Yeah, it's not supposed to take that long, but it can. Sometimes, if a paper hasn't advanced satisfactorily between reviews, the editor will just make a call and will tell the reviewers, look, I need you either to accept this paper or reject it—you can't just keep making comments on it.

Sometimes you just have a fundamental disagreement between the authors and the reviewers that can't be reconciled. The authors aren't obliged to accept every review comment. They can just write back to the reviewers and explain why they don't agree or why they're not going to do anything. If the reviewers don't accept that, and the editor isn't willing to step in and make a call either way, the paper can just get stuck. Eventually, either the authors need to withdraw it or the editor needs to reject it.

David: In the peer review process we've talked about there, you mentioned having a database—you know the domain that papers get sent in under, and you know who the people are who can review certain topics. How does the peer review process work for the reviewer? What's in it for the peer reviewer? Who gets involved in peer review?

Drew: Well, David, you've done peer review. What sort of big rewards did we give you when you reviewed papers?

David: There are a couple of things. There's the intrinsic reward, which is that you are supporting the maintenance of a certain level of quality in papers that are being published into what's perceived to be the evidence base in journals. Doing your part to uphold a standard of quality in research (I think) is the important intrinsic motivator.

I'm not sure there are too many extrinsic ones. You do get sent a really nice thank you from Elsevier, with a one-month subscription to the database if you don't have university library access. If you do a peer review, you can go in over the next 30 days and download a whole lot of papers from them for free. That in itself might be a good reward for someone doing a couple of peer reviews a year.

Drew: Yeah. I think there are some people who just do one peer review a month to maintain their journal access from Elsevier or Springer. I have to say that doing a service for the community is a tricky one. I've spoken to a number of other editors about this, and I've met no one who enjoys playing gatekeeper. We all feel really guilty about rejecting papers, because every paper involves lots of work from the author. We feel really uncomfortable with the fact that to keep the system working, we have to have the power to just say yes or no and stop things from even going to peer review. It's what the system needs to work, but it really is incredibly unfair.

Someone spends six months on a project, it comes to my desk, I spend 30 minutes on it, and I have the power to just say, sorry, no, it's not going in this journal. I know some reviewers feel like that, too. When something goes to peer review, it's much more likely to eventually get published. The real service there is for the authors of that paper—giving them feedback, improving the paper. It's probably going to eventually get published, so you're putting the work into it.

David: Yeah. Drew, you mentioned there that once it goes out for peer review, it's probably likely to get published. It's been through two editorial filters before it goes out for peer review. Does that mean people can trust peer-reviewed articles, having gone through a couple of editorial rounds and a couple of peer review rounds? Is that enough for practitioners to be able to trust what's in them?

Drew: It definitely is something. By something, I mean that every peer reviewer is looking at the work trying to say, are the conclusions of this paper justified? It's really important, when we're doing academic work—particularly when we're making conclusions that are going to impact safety practice—that we at least are willing to have other people check our work before it gets put out into the world. We're making claims about what is the most important risk on a construction site, claims about what methods work and don't work.

I think the fact that we know that peer review is happening definitely improves papers. David, you and I have had long conversations about papers we've published together—about what the peer reviewers are going to say—and we adjust the paper to get it ready for peer review in ways we wouldn't for a blog post.

David: No. For example, you mentioned before we started some of the different types of comments we've had. Given that all of my research has been qualitative research—and we've talked a lot about qualitative research—occasionally you know that you're going to get peer reviewers who think that research is quantitative, that research needs to involve numbers and surveys. Then you say, I've done interviews with eight people, and here's my paper. You know that review is going to say the sample is not big enough, and how can you make anything generalizable from that.

We know in those papers that we need to put a big section in the introduction that explains qualitative research and just why this particular research design is appropriate for this research question. You're right, Drew. You wouldn't put that in a blog post, but you know you need to spend a couple of hundred words in a journal paper just to prevent that sort of peer review rejection.

Drew: The existence of peer review means that anything that's going out for peer review has been revised and edited many more times than anything that doesn't go out for peer review. That editing process definitely improves it, but the quality of the peer review step itself is very, very variable.

There's the whole online meme—I'd recommend looking it up if you haven't—about reviewer number two. Every academic has had the experience of getting review comments back that are just terrible. Terrible because they're nasty, because they're unfair, because they don't really understand the paper, because they ask the authors to do lots of kowtowing to a particular theory or to cite a particular piece of work. Basically, they ask the authors: why did you write this paper and not this other paper I would have preferred you to write?

So there's the Facebook group, Reviewer 2 Must Be Stopped. There are picture memes about killing reviewer number two. There's a Downfall video—I think the Downfall video is called Reviewer Number Three. There's even a whole piece of peer-reviewed research about whether reviewer number two is, in fact, the bad one, which came to the conclusion that, no, it's more often reviewer number three.

David: I haven't seen that before, Drew. That's going to give me something to look up later on. Cool. Anyone can be a peer reviewer and get involved. So we've got a paper now; it's been peer-reviewed; it's about to be published. Authors have a choice of where they send their papers. For any given paper that we write, there are potentially a dozen different journals whose scope the paper would fit within. Does it matter which journal you send your paper to?

Drew: It 100% matters to us. Academics keep score based on where things have been published and how many times they've been cited. Some journals are more prestigious than others. You can generally assume that if something's been published in one of the more prestigious journals, it was harder to get it in. Harder to get in doesn't guarantee that it's better, but it does mean that the authors thought this was their best work and that it should go to that place. It also means you know the editors aren't scrambling to find things to publish, so they would have had no worries about rejecting it, whereas a struggling journal will accept stuff just because they need things to publish.

Within the pool of reputable journals, you should treat everything pretty much the same. Just because Safety Science is a more prestigious journal with a higher impact factor than Cognition, Technology & Work doesn't mean that something published in CTW is a bad paper.

David: Drew, you mentioned the impact factor there. Do you want to give the 30-second overview? That's something that journals take fairly seriously. I attended a meeting of the Safety Science associate editorial board with you, and the movement of the impact factor was something that was taken very seriously by the editorial board. What's an impact factor?

Drew: Impact factors are total bunk. I just need to make that very clear before I explain how they are calculated. Impact factors are like university rankings. They are the most nonsense of nonsense. 

Here's how you calculate the impact factor of a journal. You take the number of things that have been published in that journal. You take the number of times things in that journal have been cited. Then, you divide the total number of citations to the journal by the number of things that have been published.
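Written out, the calculation Drew describes is just:

$$\text{impact factor} = \frac{\text{total citations to the journal's articles}}{\text{number of articles the journal published}}$$

(For what it's worth, the official Journal Citation Reports version restricts both counts to a two-year window, but the structure is the same.)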

David: Drew, if I see a journal with an impact factor of five, does that just mean that, on average, every paper in this journal gets cited by another paper five times?

Drew: No, it does not. Here’s the thing. Doing average calculations assumes all sorts of things. It assumes that we can't manipulate the numerator or the denominator, whereas we absolutely can. 

Journals can manipulate how many things in their journal make it into the count, trying to lower the number of things that get counted. Journals can also manipulate the number of citations their papers get—for example, by having policies where they reject submissions that don't cite the journal's own papers heavily enough.

Now, if a journal is too blatant about this, then they lose their impact factor. That's just really if they get caught being too blatant. Every journal tries to manipulate their impact factor. The big, big one that everyone knows about is that literature reviews get cited far more than any other type of article. A journal can manipulate its impact factor through what is a totally legitimate publishing practice of encouraging lots of literature reviews and discouraging other types of articles that are less likely to get cited. 

There is no way to use that sort of metric to judge the quality of a journal. The biggest thing about impact factors is that the majority of papers get one or zero citations, so the impact factor does not describe a typical article in any journal. That's the biggest thing: it says nothing about any particular paper.
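As a quick illustration of that last point, with made-up numbers: suppose a journal publishes 100 papers, of which 5 literature reviews collect 90 citations each and the other 95 papers collect just 50 citations between them. Then

$$\mathrm{IF} = \frac{5 \times 90 + 50}{100} = \frac{500}{100} = 5$$

so the journal advertises an impact factor of five even though the typical paper in it was cited once or not at all.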

David: This might be the wrong thing to do in your opinion, Drew, but even when I'm looking for papers for us to talk about on the podcast, or when I'm doing my own research, I'll be in a database or Google Scholar or wherever. Google Scholar is actually really good: for every single paper, it says how many citations that particular paper has. If you see a paper with 700, 800, 900 citations, you think this could be a fairly useful paper in this particular area, as opposed to seeing something with 2 citations that's 5 years old.

Drew: I'd say citations are perfect for selecting things for the podcast, because the number of citations a paper has received is an indication of how much attention it has gathered. If we're looking for an interesting thing to talk about, something that lots of people have already talked about is probably interesting—there are things worth saying about that paper. But you also know that very often we find things that have been cited heaps, we look at them, and we think, this is useless, just because the quality of the paper is so low.

David: Or it's an easy cite for people. It's already been cited a lot of times, it's on this topic, so without doing the grunt work, people just keep citing it in relation to that topic.

Drew: The authors put a sentence in the abstract that says something that lots of people need a citation for.

David: I haven't thought of that. Chasing numbers doesn't do anything for my career anyway, so I don't need to worry about that yet.

Drew: David, I don't know if I've mentioned this on the podcast. My particular specialty when I was doing my PhD was a technique called fault trees. There is a foundational paper on fault trees that was written in 1965 and has been cited a huge number of times. I had never—until very recently, when someone put it online—found a copy of that paper. All that time that there were hundreds of people citing it, there were only three copies of that paper in existence, all in libraries in obscure places that did not do interlibrary loans. None of the people citing it had even read it.

David: Drew, you mentioned a 1965 paper. Does when something gets published matter to us?

Drew: It matters a lot, for a number of reasons. Things that would have been published back in 1965 will get rejected now. The number of papers being submitted is going up, and they're supposed to be building on what's gone before. Something that was published earlier has very likely been superseded by things published later. Something that is published today can be presumed—not guaranteed, but presumed—to be advancing the field beyond where it was previously.

Very recent papers are going to have many fewer citations than very old papers, but they're more likely to be the latest state of the art. If they're done well, they'll have a literature review that summarizes all of that earlier work anyway and puts the new work into context.

The general rule is we aim to cite and to read things that have been published in the past five years, and beyond that we want to be very careful. You may have noticed—I'm talking to the listeners rather than you, David—that on the podcast, if anything is older than five years, we'll usually mention that, and mention that we've done a check to make sure it's still the state of the art or still up to date.

David: I suppose there are a couple more things, then we'll dive into some practical takeaways. Does everything eventually get published? I haven't had to do it, but I'm aware there's a bit of journal shopping that goes on. Someone might get rejected, and rather than think about fixing the paper, they might go, I'll try this journal next, and then this journal. They might try four or five journals before they get around to admitting, it's obvious now that I have to fix this thing. Does everything eventually get published, and how much journal shopping happens?

Drew: Journal shopping definitely happens. There is more pressure on some people to publish than on others. For example, a lot of Chinese academics have a number of papers they have to publish every year. A lot of Indian students have to publish two papers or they're not given their master's degree. Those people have huge pressure to get something published. If it gets rejected from one place, they will immediately send it to another place.

For authors with less pressure, usually if something gets rejected from one place, you will at least edit it before you send it somewhere else—unless you think that it was rejected unfairly, in which case you might just say, that journal didn't want it, let's try somewhere else. If you get no feedback—you just get desk-rejected—you're very likely to send it somewhere else.

David: I suspect the most blatant version of journal shopping—which I have seen at least once—is when the cover letter that goes to the journal still has the previous journal's name on it.

Drew: The most blatant version of journal shopping that I've seen was when I sent a paper out to a peer reviewer who came back and said, this paper has plagiarized my work. I rejected the paper and sent a complaint off to the author's university. Then I got a very irate email from the reviewer two weeks later, because the person had sent the plagiarized paper straight off to another journal, which had sent it to the same peer reviewer, who was now seeing their work plagiarized for the second time.

This is the danger in journal shopping. There's a limited pool of good peer reviewers out there—it's not like peer reviewers get paid, and it's not like they belong to one journal. For people who journal shop, there is a real risk, particularly in small subfields, that the paper will just go out to the same peer reviewers. That's sort of the kiss of death on a paper. If a reviewer says, I have already rejected this for another journal and it has not been changed, the editor will just reject the paper and possibly blacklist the authors.

David: Yeah. Drew, is there anything else you want to talk about before we get to practical takeaways? You had a note here about academics keeping score. We've mentioned number of papers published, we've mentioned citations, we've mentioned impact factors. I suppose, as a career academic, how do you get judged by the university in relation to publishing and journal articles?

Drew: Universities expect their academics to publish. It's one of the things on our KPIs as employees. It has a big impact on things like when we apply for grant money from government agencies—having publications, and having citations for those publications, matters. It's a big thing in promotions: how many papers we've published since our last promotion, how many times they've been cited. For that reason, a lot of our publishing we do for ourselves. We don't publish because we want something to get to an audience. We publish because we would like to have a publication.

That's (I guess) important, because that perverse incentive means that for a lot of the stuff that gets published in journals, no one actually cares whether practitioners read it or not. That's one explanation for why papers often don't have clear, practical takeaways in them, why they're written badly, and why the whole system doesn't give easy access to papers. It's not that academics don't want their stuff to be read or used—those are KPIs as well. Very often, though, the publication is its own reward. You have a gold star: you got a publication in Safety Science. I mentioned university rankings before; those are often based on the number of publications in certain very prestigious journals.

David: Drew, we've talked about what a journal is, how anyone can be an author and submit through the editorial filtering process and peer review. I've got my paper; it's been published now; I want to be looking at its recency and where it's been published. Hopefully it's been interesting for our listeners to understand a bit more about the journal publishing process. What are some practical takeaways for our listeners?

Drew: Let's try to think about what would be genuinely useful for practitioners to know. I guess the first one is about using publication as a way of telling who is an expert and who isn't. Obviously, we've aired a little bit of dirty laundry here. Think of publication in a peer-reviewed journal as the minimum standard, not as a very high bar. There are lots of reasons why there might be something which is good work that hasn't been published yet.

But if anyone is consistently holding themselves out as an expert and they're not ever willing to take that step of submitting their work to the rigorous scrutiny of their peers, it doesn't matter how many other people they cite in their work. If they're not going to have their own work put up for peer review—studied, scrutinized—and accept that feedback, accept that criticism, revise, and respond, then you should be very suspicious of them holding themselves out as an expert.

David: I think we talked way back—it might have been episode one or two, when we talked about behavioral safety—about how that was one of the very early conclusions we drew: for all the debate and discussion that people have in that space, who the actual authors are and the number of papers and studies doesn't match up for a lot of the people holding themselves out as experts, one way or the other.

Drew: Yeah. You can criticize the academic ecosystem all you like and criticize academics living in an ivory tower. It's one of the really good disciplines that academia has. We test our stuff. We put it out there. We scrutinize each other. We're willing to be criticized.

David: Very much. The papers that I (I suppose) put years into during my PhD and worked up—as you said, they're long processes, 10,000–12,000 words. You send it out, someone strips your name off it so that no one knows who it's written by, and sends it out to two generally very experienced academics. It comes back with pages and pages of comments. You've kind of got to suck it up and make changes. For every single peer-reviewed paper that I've published, I've felt like the paper's been better at the end of the peer-review process.

Drew: Yeah, I would absolutely agree with that with my own work. Second takeaway is that beyond a certain point, the way academics keep score is really irrelevant to professionals. Just because we take it seriously doesn't mean that you have to. 

By the way, we keep score on things like the number of papers that we've published and the number of times they've been cited. That's really just our own point-keeping. I don't know what the sweet spot is. In safety, anyone with more than about 10 papers I would probably consider an absolute expert. Anyone with a couple of hundred citations is basically considered an expert in their field.

When people have more than that—when they say, I have 50 papers, 100 papers, 200 papers—that really just says they're good at manipulating the system and keeping score. It doesn't mean that they're any more of an expert. If anything, it means that they're very focused on publishing rather than on making sure each individual piece of work is really interesting and of high quality. David, you'll notice I very carefully drew that number so that I fall on the right side of it.

David: I think you very carefully drew that number so that I'm pretty close to falling on the right side of that as well. I think in episode 24, we had Dave Woods on the podcast. He's someone with hundreds of papers and 30,000 or 40,000 citations or something like that. I think there are some people in our field who have been at this for decades and have got a lot of work under their belt.

Drew: Yeah. People who have got those huge numbers are the rock stars of our field. Often, the way to get there is to run a successful lab and have lots of other people working for you, doing the work. Most rock stars run their own labs, have researchers working for them, and PhD students working for those researchers. There's a little bit of a pyramid scheme going on, where the people who sit at the top certainly get the citations.

The next takeaway: I mentioned peer review is the minimum bar. It's a very low bar. It's not hard to get something published, and stuff slips through the cracks all the time. You can't just assume that because it's passed peer review, it's one of the good ones rather than one of the ones that slipped through the cracks. That's why you should never totally trust a single paper.

There are two strategies. One of them is that you scrutinize the individual papers really carefully. The other is just don't trust single papers, full stop. Do things like looking for literature reviews that summarize the body of work rather than relying on any single paper.

David: Yeah, I even find that when we look at papers for the podcast here as well—when we looked at topics like safety culture a couple of episodes back and things like that—I can look at a single paper on a study like that, see six or seven pages of statistics, and not really make head or tail of how significant something is.

I find I'm much better off going and picking up a literature review on safety culture broadly, just seeing how the narrative runs through the literature review and what the bulk of the studies suggest, rather than relying on an individual study.

Drew: I know we've got some academic and student listeners, so I'll give one bonus takeaway. For students, a good way to do it is: if you find a paper you like, search for something that says the exact opposite. If you find something that says the exact opposite and looks just as good as the one you were going to cite, then that's when you know you've got to dig deeper. You've now got two reputable things pointing in both directions, and you've got to work out what the true situation is.

David: Which happens a lot in our field, Drew, because it happens a lot in social science, I think.

Drew: Yeah, even where there is a consensus on a topic, if there are two good papers pointing either way, it's definitely a sign that the answer is not as easy as you thought it was the first time.

Final one—this gets into an invitation to the listeners as well. I haven't actually checked before the podcast whether you agree with this one, but I think that as professionals, we should all be in the habit of reading at least some original research. It doesn't have to be a lot, but it's a good idea and a good discipline to always be looking at the cutting edge of what's being published.

David: I definitely agree with that, and I think it's happening increasingly now. There were various parts of my career over the last 20 years where I went years and probably didn't read any original research. But now, I'll do a webinar with someone and mention a couple of our papers, for example, and I'll have dozens of lengthy messages asking for copies of the papers. For our listeners, it's perfectly fine, as an author, for me to share a private version of a paper. Lots and lots of people reach out in the interest of actually reading the original research.

You mentioned reading clubs and journal clubs there. I've also joined a couple of virtual journal clubs lately to talk about some papers. I know there are groups of professionals getting together, reviewing papers within their organization or across organizations, then coming together and talking about them.

Drew: Invitation to the listeners. We're really quite curious just how many people do that. We obviously know when people directly reach out to us. We'd be interested in any of you who are out there making a point of reading some of the papers we talk about on the podcast either by yourself or as part of a reading club.

For academic or student listeners, we’re very keen to hear about your good and bad experiences with journals and peer review. Certainly, don't hesitate to tell me why you don't publish in Safety Science because we take too long to read stuff.

David: Drew, that's it for this week. Next week, we're at episode 50. For our listeners, tune in next week—we're going to talk about the original paper, Safety Work Versus the Safety of Work, which was the inspiration for this whole podcast thing that we've been at for about a year now. Did you think we'd get this far?

Drew: To be honest, I'm amazed we've managed to keep up the every-single-week schedule so far.

David: Yeah, it's been pretty close a few times, but we've done well. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Join us on LinkedIn or send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com.