The Safety of Work

Ep.4 What is the relationship between trust and safety?

Episode Summary

On today’s episode we are going to discuss the relationship between trust and safety.

Episode Notes

Topics:

Quotes:

“...It’s not as simple as ‘trust is a good thing’ and ‘distrust is a bad thing’...when we trust people too much, we take their word for things, even when we shouldn’t.”

“The happy medium...you get good communication and you get good checking behavior.”

“We actually can’t really make predictions about what these findings mean in real-world organizational settings, once all of those variables become reintroduced.”

Resources:

Conchie, S. M., & Burns, C. (2008). Trust and risk communication in high‐risk organizations: A test of principles from social risk research. Risk Analysis: An International Journal, 28(1), 141-149.

Conchie, S. M., & Donald, I. J. (2008). The functions and development of safety-specific trust and distrust. Safety Science, 46(1), 92-103.

Feedback@safetyofwork.com

Episode Transcription

Drew: You’re listening to the Safety of Work podcast Episode Four. Today we are asking the question: What is the relationship between trust and safety? Let’s get started. 

David: Hey, everybody. My name is David Provan and I'm here with Drew Rae. We are from the Safety Science Innovation Lab at Griffith University in Australia. Welcome to the Safety of Work podcast. If this is your first time listening, then thanks for coming. The podcast is produced every week and the show notes can be found at safetyofwork.com. In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it. Drew, what’s today’s question? 

Drew: The question for this episode is what’s the relationship between trust and safety? Specifically, we want to know whether trust is overall a good thing or a bad thing for safety. As we’re going to find out, that’s not obvious, and it’s a more complicated question than it sounds, because trust itself is really quite a complicated thing to define.

A lot of people, including some pretty big name safety experts, throw around the word “trust” as if it’s an obvious part of safety culture and obviously a good thing. For example, James Reason said that, “If workers trust the organization, they're more likely to communicate openly about safety.” George Watson said that, “If workers trust their supervisors, that will give them a positive perception of workplace safety and they’ll behave better.” Other researchers have said that, “If workers trust each other, that creates a good shared safety climate, and that’s good for safety.” There’s work that says that, “Trust, or the lack of trust, affects whether a safety initiative is likely to be successful or not. If workers don’t trust the motives of their managers, then safety improvement initiatives aren’t likely to succeed.”

All of that treats trust as this obviously good thing, and that’s only part of the story. That’s what we’re going to look at in the two papers I’ve chosen for this episode. Both papers we’re looking at come from the same researcher, Dr. Stacey Conchie. I always like to quickly look at who the authors are when we look at a paper. This isn’t an argument from authority thing. It’s more a question of trust.

If someone publishes a lot of papers on the same topic, and they’ve got a reasonable spread of different research methods and different journals, then it’s fairly safe to assume that they’re an expert. What I mean by that is it doesn’t mean that you take for granted everything that they say. It just means they get the benefit of the doubt when they talk about things like what the literature says or what’s the consensus in the field. If they’ve been peer reviewed in a lot of different places, then I’m pretty confident that they’re not misrepresenting the overall feel of things.

One particular thing I look for is how connected the researcher is to both safety and to their parent discipline. Most topics in safety are really specialized subtopics of a discipline such as psychology or organizational studies. It gives me confidence, when someone is talking about something like trust, which is a psychological concept, that they’ve been published not just in safety but in respected psychology journals as well.

Dr. Conchie is a psychologist. She is an expert in trust and she’s got a particular interest in how you apply the idea of trust to high-risk and high-reliability work. The first paper we’re going to look at was published in the journal Safety Science in 2008. It’s called The Functions and Development of Safety-Specific Trust and Distrust.

The paper’s a bit of a mixed set: sort of half literature review, half analysis of some interviews with offshore oil and gas workers, and then a discussion which blends the existing literature with what they found in the interviews. The authors for this paper are Stacey Conchie and Ian Donald.

Dave, you’ve done a bit of this combination of literature review to get the foundations and then interview work for a specific example. Can you say a little bit about what that’s like as a researcher, to do and to publish?

Dave: These are really interesting projects and I think you can come at them one of two ways. You either read something in the literature that raises all sorts of questions in your own mind about a specific context or the application of those ideas in a slightly different way or a different domain, and then you take what you got out of the literature and try to conduct some interviews and research to expand, clarify, or help explain what you’ve read.

I’ve also been involved in studies that come at it the other way, where we have done the interviews first, found some really interesting things, and then you try to figure out where in the literature there are frameworks, explanations, or theories that match what you’ve found in your data. This data-theory match can happen before or after. It makes for a really useful and interesting paper when there’s a really strong literature review component combined with a specific research component. They’re good papers to read and I think they’re really useful, like this one.

Drew: I certainly enjoyed reading it and I learnt some stuff myself. This isn’t really a conclusion of the paper, they treated it as something that was fairly obvious and well-known in psychology, but it was new to me: the idea that trust and distrust aren’t just opposites of each other. It’s not that your trust goes up and distrust goes down, or distrust goes up and trust goes down. The absence of trust isn’t distrust, it’s just a neutral willingness to be convinced. If someone doesn’t trust you, that’s okay, because they can learn to trust you. Whereas if someone distrusts you, that’s quite a different thing. You get into this cycle: because they distrust you, they behave like they distrust you, and so you distrust them. When psychologists measure trust, they don’t just measure a single linear scale. They measure different aspects of trust and different aspects of distrust.

The second thing that comes more from the safety side of things is the idea that it’s not as simple as trust is a good thing and distrust is a bad thing because when we don’t fully trust people, that can lead to quite good behaviors. You don’t trust someone, so you check up on them, and that’s good for safety. Whereas when we trust people too much, we just take their word for things even when maybe we shouldn’t. 

This was something that had been mentioned by a few authors in safety before Stacey Conchie and Ian Donald did this work, but it hadn’t really been explored. For example, it’s something that Hale mentions a few times in his work, but not as the direct topic of the research and not as something that’s been well-evidenced in the literature. This is something that they wanted to check out further, and that’s why they moved from the literature review into doing field work, in this case, interviews with oil and gas workers. They reached the conclusion pretty quickly that the idea of distrust covers multiple things. There’s a functional type of distrust where workers know that other humans are fallible, so they routinely check each other’s work.

There’s a great quote in the paper that comes from a supervisor. He basically says that when he comes back from shore leave, he doesn’t trust himself. So he tells his own lads to keep an eye on him. That kind of functional distrust is almost a positive thing. It’s a kind of respect, and it’s very different from say, workers being suspicious of the motives of their supervisors and their managers. 

The argument that Conchie and Donald make is that for safety, you need a moderate level of both trust and distrust. Sort of not too hot, not too cold, somewhere in the middle. Too much trust leads to a lack of personal responsibility and mistakes slipping through the cracks. Too much distrust leads to people behaving really badly towards each other and reduced attention to the core of the work. Whereas at the happy medium, you get the good sides of both: you get good communication and you get good checking behavior.

Dave: I think trust is one of those things where it’s very easy to take that oversimplified view that trust is good for safety and the more the better. Like many things in safety, when we look into the data, it’s far more nuanced and complex than that. We need to understand trust and distrust, too much and too little, and different types of trust and different types of distrust, to really form a view of this.

For me, I recall a paper that Karl Weick and his colleagues wrote, titled Organizing for Collective Mindfulness. They introduced this idea that organizations, and the people in organizations, need to be emotionally ambivalent. What they meant was that organizations needed to have equal amounts of hope and fear. They needed to have hope, faith, and trust that what they and others were doing for safety was right and that they were on track, but also equal amounts of fear and a level of unease that maybe they weren’t doing everything completely right, so they had to continually scrutinize what it was they were doing.

I think these more balanced and nuanced views of the way that organizations need to think in relation to safety are much more useful than just polar positions like chronic unease. That’s something that’s never sat very well with me, because it’s very hard to be at one end of the spectrum all the time.

While it feels a bit strange to say it in our modern organizations that always try to value the best of everything, yes, I think a level of distrust is good for safety.

Drew: One of the things that immediately leaps out to me is the idea of things like safety cases, which are almost deliberately set up as an adversarial system for safety. One person designs a system and makes an argument that it’s safe. Someone else comes in quite aggressively as an independent assessor to try to check up on that. It’s a weird thing where there’s an almost collegial relationship between the assessor and the person they’re assessing, and at the same time there’s this process that works better if they do actually get a bit skeptical and distrustful. People who’ve criticized that process say that the trouble is that there’s too much of the friendly colleagues, too much taking people’s word for things.

Dave: Maybe we’ve been seeing that recently with the Boeing 737 MAX, the software changes and the regulatory oversight of that. I think it’s been shown before in a safety case environment that too much trust, and therefore not enough checking and challenging, may lead to things being overlooked or not managed as well as they could be.

Drew: Yeah, but on the other hand, that doesn’t mean that we should deliberately start injecting distrust. I wouldn’t advocate that Boeing starts lying to the regulator to create a more distrustful relationship between them.

Dave: No. We’ll talk a little bit later about functional and dysfunctional trust, and functional and dysfunctional distrust. That’s definitely dysfunctional distrust. 

Drew: Yeah. I like the distinction. It’s like what we were talking about before: playing the ball, not the person.

Dave: Yeah. We’ll get into that in the findings and the practical applications of these papers, because I think there are some really, really clear messages for how people think about trust in their organizations. Particularly, what I’m really interested in is the language that we use to describe certain things and the emotions that different types of language create in people within organizations.

Drew: There are some interesting ideas here, but this study doesn’t involve a lot of people. They interviewed 14 workers, so Conchie and Donald are fairly cautious about trying to draw very clear, strong findings. They give some ideas for what creates trust and distrust. Their examples for distrust are mostly around dysfunctional distrust: things that create problems, that cause people to be suspicious of each other. Particularly things that managers and supervisors can do to lose the trust of workers, and vice-versa.

They give this model with functional behaviors and dysfunctional behaviors, and both trust and distrust can lead to the functional behaviors and the dysfunctional behaviors. There’s a message that we want to encourage the good sort of trust and the good sort of distrust. They reach this conclusion that the good sort is a kind of happy medium, but some of the data and examples they give almost say that there are multiple types of trust and multiple types of distrust. It’s less about the amount and more about the type that you have.

Moving on to the second paper. Same first author, Dr. Stacey Conchie, but now we have Calvin Burns as the co-author. This is a paper specifically about how to create trust, and this time it’s based on an experiment. It’s got a fairly traditional experiment structure. You take a bunch of people, you randomly split them into two groups, and you give the two groups different information or change one of the groups somehow. In this case, the two groups are student nurses who are heading off to do work placements in hospitals. Each group was given different information, and that creates the different conditions used for the rest of the experiment.

Dave, I’m going to test your voice acting here and get you to read out what each group is told, starting with the first group.

Dave: This is a great opportunity for our listeners to put themselves in the position of research participants. You can picture yourself in a room, and this is what you’ve been told. The first group was told this: “Staff and students at the hospital that you’ve been placed in have raised concerns about violence and aggression from the general public. As a result, the hospital holds monthly sessions with each department where staff, students, and management discuss staff and student concerns. At these sessions, the hospital also discusses with staff and students decisions that they have taken as a result of previous incident reports. The hospital has just announced that it will release quarterly statistics about violence and aggression at the next session.”

Drew: And then the second group was given different information. 

Dave: “Staff and students at the hospital that you’ve been placed in have raised concerns about violence and aggression from the general public. Despite these concerns, which were raised almost one year ago, the hospital has yet to meet with staff and students to discuss their concerns. The hospital has an incident reporting system, but it does not provide staff and students with feedback when incident reports are made. The hospital has not released any statistics about violence and aggression.”

Drew: Now, obviously, the idea here is to make the first group trust the hospital more, because it’s being more open in its communication, and the second group trust the hospital less. Psychologists rarely test the obvious thing, though. What they’re doing is deliberately creating this really obvious difference in order to measure some other, more subtle things inside it.

The first thing they wanted to look at was how you measure trust. Having created the two different groups, what different answers do the different groups give when you ask them questions about things like what they believe about the hospital, how they would act towards the hospital, and what they would expect from the hospital?

They found fairly clearly that trust isn’t just one thing. There are two separate things inside it that they called trust beliefs and trust intentions. The first one is what you think about the hospital. The second one is what you would plan to do as a result of that. They found that the two don’t move in complete lockstep. It’s easier to move one than it is to move the other, for example.

The second thing they did was test this interesting property of trust, which is that it’s easier to make trust go down than it is to make it go up. That’s fairly consistent across psychology research. Trust in the group with the openly communicating hospital went up a little bit, but in the group with the very secretive hospital it went down a lot. It’s interesting that it’s not the same in either direction: one goes up a little bit, the other one goes down a lot.

But all of that was really just the preliminary to the main thing that they wanted to test in this study. What they really wanted to know is, based on whether you trust someone or not, how are you going to interpret what they tell you? Their working hypothesis is that even if it’s bad news, you’ll take it better from someone that you trust. Whereas if someone is secretive and it’s bad news, that’s going to be really, really bad.

They gave each person a new piece of information saying, either that violence in the hospital had gone up or gone down by 50%. Instead of two groups, we’ve now got four groups. There’s the openly communicating hospital with good results, the openly communicating hospital with bad results, and the secretive hospital with good or bad results. Unfortunately, this part of the experiment turned out to be a total bust. After carefully setting up these two groups and making sure that one group trusted the hospital more, there was no statistical difference between how the two groups interpreted that new information. 

You might wonder why I picked a study where the results are, well, none really. But what I love about the study is its integrity, and also that it highlights how difficult and subtle something like trust is to measure. You’ve got a really carefully controlled lab environment, and you’ve carefully created two groups. The two groups are identical in almost every way; the only difference is that one group is primed to be more trusting. Even under those perfect lab conditions, trust doesn’t make a difference in how people treated good or bad news.

Dave, your thoughts on this, almost like a lab experiment for measuring the psychological forces in play?

Dave: Yes, I think psychology has a long history of doing these lab-based experiments, and this is well-designed academic research. There are nearly 400 participants, 3 separate universities, a really clear research question and hypothesis, and very well-designed conditions, as you mentioned. It’s only possible to do this type of research in such an artificial setting. You could never isolate people and variables in this way in the workplace. We talked about the opposite extreme of research in last week’s podcast, a very broad workplace study.

But the benefit in this case is that you’re able to design and test the very specific questions that you’ve talked about, and you can isolate almost all of the other confounding variables. There are a lot of positives in that, because we can really get specific around proposing and refining theory. But the downside is that we actually can’t really make predictions about what these findings mean in real-world organizational settings once all of those variables become reintroduced. There’s obviously an important place for both, and I believe the best overall approach for important research questions such as this one about trust is what Stacey Conchie has seemingly done over several years: quite an extensive research program that combines lab experiments in these artificial settings with broader workplace-based research as well.

Drew: That’s, I guess, one of the limits of this podcast where we talk about just one or two papers: we need to be careful to set the context that this is part of a much larger picture. We need you, as an audience, to trust us that we’re not cherry picking, that we’ve actually read the stuff around it, and that we’re giving you stuff that fits appropriately into a bigger picture. That’s definitely the case with Stacey Conchie’s research.

Dave: Yeah. To our listeners, we propose a research question to each other each week. When Drew sent me “what is the relationship between trust and safety,” I immediately thought of another of Conchie and Burns’s papers, one published only months after the second paper that we’ve just been talking about, which was a workplace-based study that looked at using trusted information sources to communicate about risk.

What they did in that study was ask 131 construction workers how much they trusted information about safety behaviors given by different people. They picked four high-risk construction hazards and some clear behaviors around those hazards, for example inspecting slings before using them for crane lifts, or bending the knees when lifting, things like that.

My working assumption throughout my career was that the workforce, when being talked to about safety, trusted their workmates the most, their supervisors the second most, and then safety professionals, maybe, listened to the least. It is only one study and a small study, so we need to be careful with generalizing the results, but what they found quite clearly is that workers actually trusted safety information coming from the safety professionals more than from any other group of people in the workplace. The reason comes back to some of those other dimensions of trust: they believed that safety people knew more about safety than other people, and they believed that safety people didn’t have other intentions like productivity and profit. So they reported that they were more likely to follow safety information provided by a safety professional than the same information provided by a manager.

That finding is almost in contrast to quite a lot of things that we promote as safety practitioners in terms of [...] communication. It was interesting, Drew: as soon as you mentioned the question, it brought up that line of research. You mentioned two papers from the same author and I immediately thought of a third one from the same author as well.

Drew: Yeah. I guess that shows that even though every individual paper has limits, and we should be careful of drawing wrong conclusions from each one, when you’ve got a program of work like this, with lots of different studies, each one with a different angle, a different method, different types of participants, and a different mix of doing things in the lab or out in the field, together they build a fairly strong, consistent picture.

Compare that with something like a lot of the safety climate work, where instead of building a consistent picture like this, the different studies don’t really build on each other. There’s something I’ve noticed recently that really annoys me, actually: this Conchie work is very delicate and careful, and it builds up these really important things like the asymmetry and the fact that trust can be both good and bad. Then you see safety climate papers that cite this exact work, and then go straight back to saying, “Trust is a part of a positive safety climate, and we’ll just measure that as increasing trust means better safety climate equals better safety.”

Dave: Yeah. I think that kind of cross-citing of papers and losing some of those detailed findings in translation happens a lot in safety research. Like you said earlier, Drew, while we are talking about individual papers, that’s also the advantage of this podcast: by talking about one or two individual papers, we can really show the merits of the individual research findings before they get grouped up into larger, more easily digestible general communication about topics such as climate and culture.

What does it mean for practice, these findings about trust?

Drew: I think there are a few clear things that we can take away. The first one is a little bit of a callback to last week, which is the importance of specificity when we want to do something to improve safety. Something like “improve culture” or “improve leadership” is really just too vague and doesn’t give you enough guidance on how to do it, which is why people end up with these very generic measurements and very generic programs. If you look at something more specific like trust, then you can really work out what it is, what you don’t like about how things are now, and what direct actions you need to take in order to improve it.

In this case, focusing on trust, let’s think really about what sort of relationships we want. What is the ideal relationship between a supervisor and the workers? Clearly, it’s not that they unquestioningly like each other, trust each other, and never check up on each other. It’s going to be something more towards this very functional trust and distrust, where you can check up on each other without getting suspicious about it. That’s not the same as just unquestioningly having a lot of trust or a lot of distrust.

I think there’s also something we can take from the experimental results: this asymmetry that keeps showing up. It may seem fairly obvious, but it’s something we need to be reminded of, that lots of very carefully built-up positive behaviors can get wiped out by a very small number of bad interactions.

As far as workers are concerned, you can send the same consistent message for years, but it just takes a couple of examples that they think reveal what managers really think and really care about, and those distrust stories perpetuate.

Dave: Yeah. I referred to that third point that you mentioned a number of times before I was even familiar with this research, because I’ve seen through my experience in organizations that, like you said, lots and lots of effort had gone into trying to improve safety climate and safety leadership behaviors through a whole raft of programs, and then it was almost completely eroded by one ill-planned organizational change project or something like that, where the workforce went, “Okay, we were right all along, the organization doesn’t care.”

I think I’ve said to a lot of people, when they’ve been thinking about visible leadership programs, to really think them through, because with the best of intentions, if you make bad leadership visible, then you can really negatively impact safety. Sometimes it might be better to leave some people in their offices if you haven’t carefully supported and trained them in how to do those things really well.

I couldn’t quite find exactly who it was originally attributed to, but sometimes it’s better to be thought a fool than to open your mouth and remove all doubt. I think that applies to trust, communication, and interaction in the workplace.

Drew: Yeah, that’s a good example. So where possible we like to take the research and think about how you could immediately apply it and take it away. We’re not always going to be able to do that every week, but in this case, I think there are some takeaways that can be turned into immediate action.

Think about what trust means for you in your workplace at the moment. One thing that a safety practitioner can do really well is to find examples of things that are going on and reflect those back to management so they understand what’s going on. Look for examples of open communication. Ask workers who they can rely on to give them a straight answer about what’s going on in the business, or who gets things fixed, who gets stuff done when it really needs to get done.

Think about what functional distrust means as well. Look for examples of professional people checking each other’s work without feeling micromanaged or checked up on. Look at who’s doing that, how they’re doing it, and how those positive relationships work, and share those examples around.

Dave: Like I said earlier, I think it’s important that people really think about the language that they’re using when they’re talking about trust and distrust in organizations, because it will be emotive. People will believe that trust is always good and the more the better, and that distrust is always bad and we need to eliminate it.

You can use many other terms for concepts around trust, like challenge, or critical review, or double checks, and work to help your organization understand that there is a difference between dysfunctional distrust, which promotes secrecy and noncompliance, and what we’re talking about: functional distrust, which promotes active monitoring and is helpful for safety.

One example of that language, and where functional distrust can be useful, is the difference between a safety practitioner saying, “I don’t trust the safety audit process because the auditors might miss something,” versus saying, “I don’t trust the auditors.” That’s playing the ball, not playing the person. I think as safety practitioners, we need to get far better at using language well in our organizational narratives around safety.

Another thought for you is about safety training programs. One of the findings in the papers that we spoke about today is that training initiatives that promote interactive skills, helping people communicate openly and honestly, will be more effective for safety than purely technical training. It would be worthwhile reviewing your organizational safety training programs to see how much of what you’re doing is based on communication skills, non-technical skills, giving and receiving feedback, or crew resource management.

Drew: We pay a lot of lip service to the idea of soft skills training, communication training, and leadership development. I think very often we don’t follow through on that. We talk about how important it is, but we don’t give people the support that they need to genuinely develop. It’s not the sort of thing where you can give someone a one-hour PowerPoint on how to be an open communicator; it requires practices like coaching, mentoring, and feedback over time to develop those skills. You’re really investing in practical leadership.

Dave: I agree, Drew, and as an extension of that, what I’d advocate, and I think we both would, is that the more specific you can be about almost any topic or any matter in the organization, the better. When people talk to you in your organization about safety culture and safety climate, that’s an opportunity for you to ask them to be more specific.

When your manager says, “We want to improve safety culture,” the first question should be, “What do you mean by that? What specifically do you think needs to be improved?” The more specific you can be, the better chance you’ve got of actually designing an intervention that will impact the specific thing that you’re trying to improve, like trust as opposed to climate. And like we’ve said, even trust is potentially too broad; looking at the mechanisms, say, interactions between workers and supervisors, is getting even more specific.

Drew: That’s one of my favorite questions, both for research and for getting along with other people: just, “What do you mean by that?” “Tell me more” is a fantastic way to go from surface-level ideas to things that are more specific and actionable.

Dave: Great. That’s it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com.