After a long unplanned break, we’re back! During the break, we were very happy to see our inboxes fill up with topic ideas from our listeners, and in this episode, we’ll be digging into one of those suggestions - “The Seductions of Clarity” by C. Thi Nguyen from the University of Utah.
Just because concepts, theories, and opinions are useful and make people feel comfortable, doesn’t mean they are correct. No one so far has come up with an answer in the field of safety that proves, “this is the way we should do it,” and in the work of safety, we must constantly evaluate and update our practices, rules, and recommendations. This of course means we can never feel completely comfortable – and humans don’t like that feeling. We’ll dig into why we should be careful about feeling a sense of “clarity” and mental ease when we think that we understand things completely – because what happens if someone is deliberately making us feel that a problem is “solved”...?
The paper we’re discussing deals with a number of interesting psychological constructs and theories. The abstract reads:
The feeling of clarity can be dangerously seductive. It is the feeling associated with understanding things. And we use that feeling, in the rough-and-tumble of daily life, as a signal that we have investigated a matter sufficiently. The sense of clarity functions as a thought-terminating heuristic. In that case, our use of clarity creates significant cognitive vulnerability, which hostile forces can try to exploit. If an epistemic manipulator can imbue a belief system with an exaggerated sense of clarity, then they can induce us to terminate our inquiries too early — before we spot the flaws in the system. How might the sense of clarity be faked? Let’s first consider the object of imitation: genuine understanding. Genuine understanding grants cognitive facility. When we understand something, we categorize its aspects more easily; we see more connections between its disparate elements; we can generate new explanations; and we can communicate our understanding. In order to encourage us to accept a system of thought, then, an epistemic manipulator will want the system to provide its users with an exaggerated sensation of cognitive facility. The system should provide its users with the feeling that they can easily and powerfully create categorizations, generate explanations, and communicate their understanding. And manipulators have a significant advantage in imbuing their systems with a pleasurable sense of clarity, since they are freed from the burdens of accuracy and reliability. I offer two case studies of seductively clear systems: conspiracy theories; and the standardized, quantified value systems of bureaucracies.
Discussion Points:
Resources:
The Safety of Work on LinkedIn
David: You're listening to The Safety of Work Podcast, episode 96. Why should we be cautious about too much clarity? Let's get started.
Hey, everybody. My name is David Provan and I'm here with Drew Rae. We're from the Safety Science Innovation Lab at Griffith University in Australia. Welcome to The Safety of Work podcast. In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it.
Now, before we get too far into things today, we should probably give you a brief update, since this has been our longest break between episodes in our little over two and a half years of podcasting. It was unplanned. So Drew, what have you been up to?
Drew: David, I always feel bad when this sort of thing just drops in there. We could have scheduled ourselves a two-month holiday and it wouldn't have been a problem, but instead we just disappeared off the airwaves. From my point of view, it wasn't the end of COVID, because COVID is still rampaging everywhere. But as some of the restrictions started lifting, some of the other activity that was in the background started flooding back in. People who'd been forgiving for a couple of years suddenly started expecting the things I'd promised to actually get delivered and done.
I've been to New Zealand. I got to go and visit Auckland. I got to see my first carpet factory and talk to the good folks at Bremworth. I got to see the City Rail Link project and have a bit of a look at the dig there. We've got some fun projects starting up. I can't really talk about all of the details of them at the moment. But yeah, we've got researchers back out in the field visiting places, interviewing people, and coming up with new stories to tell.
David: Yeah, great. I've been out and about a little bit as well, Drew, but I was talking to one listener—it's quite a funny story—when I was in the US. He kind of said to me, every single time he starts listening to one of our podcast episodes, he's just waiting for that episode where Drew goes, this is it, this works, do this. I was going to make a joke when we got on to this episode that we spent the last two months trying to help you find something that you think works.
Drew: David, if that was the story, then I'm afraid the answer is still, yeah, we tried really hard. But I'd like to think we do come up with concrete suggestions at the end of every episode, things that people can try and things that people can do that will make their lives better as safety practitioners. But that holy grail of a safety practice where you can just say, the evidence behind this is beyond controversy, you should just do this? You're not going to find it.
It's the same in teaching. We spend all of our time at universities teaching, learning about teaching, and trying to improve our teaching. There are very, very few things where someone can come along and just say, here is the hard and fast evidence that this is exactly the way you should do it.
David: Yeah. We're going to talk about that a little bit in this episode today, Drew. You did the prep work and pulled out this paper. What's today's question all about?
Drew: Okay. This is a listener-suggested paper. One of the great things about taking a break is that our inbox has filled up with people saying, oh, if you're taking a break because you don't have any papers, here's one you should try.
This one was recommended by listener Perman Sherman. My apologies if I've mispronounced your name, Perman. It resonates with a topic that I think about a lot, not just in my own teaching but in life more generally, which is how much uncertainty plays a role in what I choose to do and not do. Uncertainty is something that sort of helps drive people forward, but it's also something that holds people back, and I experience that a lot myself.
When it comes to learning, some amount of uncertainty is absolutely necessary. If you think you know the answer to a question, you don't go looking for better answers to that question. Lots of what we do in a classroom is we start off the lesson by provoking the students into discomfort. We get them to recognize the limits of their current understanding. We ask them questions that make them go, I hadn't thought about that, or we give them a problem to solve that their existing tools don't help them solve. That sort of gets people into that curious learning state of mind.
Too much uncertainty is disabling. If you're too uncertain, you become risk averse, you start rejecting new information, and you start refusing to take any actions that would add to the amount of uncertainty that you're already facing.
This is me after COVID, spending all my time just sitting in my little office where I don't have to go outside and don't have to meet people, so there are fewer things adding to that uncertainty. I think for all of us, there's a sort of sweet spot where you've got enough uncertainty that you want to go out and find new things, but not so much uncertainty that you want to hide in your bubble and stop learning new things.
David: Yeah, Drew. We might use how organizations think about and respond to incidents a few times throughout today's episode, because of what you said there: if you think you already know the answer to the question, you don't go looking for better answers. In one of our earlier episodes on incident investigation, I think you said, Drew, that one of the evaluation criteria for an incident investigation may be how much the investigator actually learned, in terms of new understanding about that particular situation, as opposed to going into the investigation with an already-formed view of what the answer should be.
Drew: This is something that I say a lot, and we might come back to it later in the episode, David. It's this idea of what investigations are for. From an organizational point of view, I think often the accident creates uncertainty, and the investigation is trying to remove that uncertainty again. If it removes the uncertainty by learning and stepping forward, then that's a good thing. But if it does it just by trying to quickly find an answer, find a label, and close off the uncertainty as rapidly and cleanly as possible, that's preventing learning rather than creating a learning opportunity.
David: For every complex question, there's a simple answer that's wrong.
Drew: Yeah, I like that one. Although I suspect some people would say that for every very, very simple question, there's an academic answer that's complex. I think that's one of the reasons there's this sort of clash between academic safety people and safety practitioners. I'm an academic. I get to operate in a world that has a huge tolerance for uncertainty, basically because we never have to take any action. All we need to do is research and teach, so we can be as uncertain as we like.
That lets me be as curious as I want to be, and I love it. I can go through every day, and when someone asks me a question, I can say, that's a good question. What do you think? I can get away with that. That's what a good teacher does. But if you're a safety practitioner, if you're a safety manager for an organization, you can't respond to questions from your CEO with, that's a really good question. We should think some more about… You've got to take action in the world.
When you're working for an organization, you've necessarily got to have less tolerance for that uncertainty. Sometimes I think that's a useful tension we have. It's the job of people like me to go through the world trying to tell people to be a little bit less certain. It's the job of other people to try to find that certainty.
If we have a positive dynamic, then together we sort of come to a good place in the middle. But otherwise, we've got this risk of too much uncertainty being disabling. The paper today is arguing that that sense of clarity people are looking for might itself be a trap.
In particular, and we'll get into this, he argues that it might be almost like a deliberate trap that some people create. They offer you this promise of clarity or feeling like you understand the world that leads you into thought traps. What we're going to be talking about is sort of evaluating the paper and saying, is that something we should be worried about? If we should be worried about it, how should we be worried about it? What should we be doing about it?
David: Yeah, Drew. There are a few nice parallels between safety science and some of the things that we talk about as safety practitioners. The opening of this paper is the quote, "Here is a worrying possibility. There is a significant gap between our feeling that something is clear and our actually understanding it."
For me, I straightaway thought of work-as-imagined versus work-as-done. [...] I think that's what we're going to go through: what do we think that we know about the world, how clear is that, and then how does that actually relate to the world itself?
Drew, maybe I'll just do an intro to the paper. The title of the paper is The Seductions of Clarity. The author is C. Thi Nguyen. He is an associate professor of philosophy at the University of Utah. In his own bio, he says that he's interested in the ways in which our rationality and agency are socially embedded: how our ways of thinking and deciding are conditioned by features of social organization.
Drew: David, I'm interested in what you think about this. There's sort of a spectrum of philosophers. You've got some philosophers who spend all of their time engaging with the work of other philosophers, so they're in this very abstract thought space, and then you get these other philosophers, and I think C. Thi Nguyen is in this camp, who spend a lot of time studying the emerging results from other fields.
In this case, psychology, social psychology, and evolutionary psychology. They think about, okay, what are the implications of those empirical results for the philosophy we've established so far? They're very engaged with other, less philosophical research.
I think it's great because often, philosophers bring this interesting perspective. They think more about other people's research than maybe those people think about it themselves. They think, what are the implications of this? What are the categories that are emerging? How are these people actually doing research? It gives a really interesting perspective.
Also, I always worry that there's the risk that they're not experts in the research they're reporting on. We've got to be very careful when they describe empirical research. This guy is a philosopher, he's not an empirical psychologist. To what extent is he, not deliberately, but sort of accidentally, cherry-picking or interpreting the results in a way that those results don't actually afford? The new perspective is interesting, but if you asked the original researchers, they might say, no, you can't interpret it like this, and here's why you can't.
David: Yeah, it's a good point, Drew. I think when we did our work on safety work and the safety of work, we borrowed heavily from the institutional work and institutional logics disciplines, and that was definitely new to me. I spent months and months trying to make sense of the empirical findings and the different philosophical perspectives in that field.
As we're going to talk about in a moment, we didn't even know the journals, we didn't know the researchers, and we didn't know the history of the field. It would have been very easy for us to just take a superficial understanding of that body of work and then draw some abstract relationships to what we were talking about in safety.
Drew: Yes. This is more by way of analogy than a direct example, but you'll see people do the same thing with quantum physics. People who don't understand physics at all use quantum physics as a metaphor for other things, which is fine if it's just a very, very loose metaphor. But the moment they start making claims about it, that's when you begin to worry that this is pseudoscience.
David: Drew, this paper was published in The Royal Institute of Philosophy Supplement. We have definitely not reviewed a paper from that publication before. Do you want to just talk a little bit about it?
Drew: I'm going to go a step further and say that this is not a journal that I've heard of before. I thought it might be a good opportunity to talk about how you deal with papers in journals you haven't heard of. The first thing I do is use a tool called Scimago, which is a journal ranking service. Journal ranking is [...] stuff. It means nothing, really. But it gives you a sense of who in the field considers things to be reputable or not reputable.
At the very least, if you search something up in Scimago and it comes up with zero hits, you think, oh, okay, what's going on here that this is not even ranked as a journal? Then you go to the journal's homepage and look at who publishes it. Oddly enough, you don't check the editorial board, because all of the pseudo-journals give themselves editorial boards with important people who don't even know they've been listed as editors. Yeah, the end result of this process is that this is a reputable journal.
What it is is a place that publishes the proceedings of conferences. The idea is you'll have an American Psychological Association or Philosophical Association meeting, and then the papers from that meeting will go into a special issue of The Royal Institute of Philosophy Supplement rather than into their main, heavily peer-reviewed journal.
The takeaway then, and remember why we talk about authors and journals, is not that this tells you whether a paper is good, but that it gives you a hint of who's doing the work and what sort of peer review it's getting, and therefore how well it represents other work. This paper is obviously interesting enough to be published at a conference. It's been presented to other philosophers, but it's never been peer-reviewed by, for example, psychologists.
We've got to be careful, because a philosopher who is representing the work of psychologists, without being peer-reviewed by other psychologists, could make mistakes that never get caught. We need to be a little bit more careful than usual in checking whether the research he's using is representative, whether there's other stuff that's missing, and whether he's misinterpreting the research that he's applying.
We can't just trust the author as much as we would normally trust an author talking about their references. None of that says anything directly negative about the author or their work; it's just about the degree to which we need to scrutinize it.
David: Yeah. Thanks, Drew. We've got this paper title, The Seductions of Clarity. Do you want to give us just an overview of the overall messages of the paper, and then we'll just kind of go through some of those and create some connections to safety management?
Drew: Sure. This is a really well-written paper. I'd recommend having a look at it. Like all well-written papers, it tells you upfront what the paper is and what the paper does. Nguyen tells you both in the abstract and in the introduction, here is my overall argument, and if you don't believe me, I'm going to extend it and support it through the rest of the paper.
Here are the main claims or points. He says that humans use a sense of clarity as a way to tell us that we've finished thinking about something. You're sort of confused and then things start to become clearer. Okay, I don't need to keep worrying about this. I don't need to keep trying to understand it because I do now understand it. But that's not necessarily a reliable clue as to whether we should stop thinking about something.
If we experience that sense of clarity when we don't understand something, then we might stop thinking about it even though we don't understand it. Or worse, we might stop thinking about it even though it doesn't make sense. It's sort of that, ah, I get it now. It stops us from realizing, hold on, there's something wrong here that we should be thinking harder about.
The next step is, what if someone tried to deliberately do that? What if someone, whom he calls either a hostile force or an epistemic manipulator, tries to give us ideas that produce those aha moments, that give us that sort of pleasurable, oh, that's interesting, that really makes sense? They do that because if we have that feeling, then we don't scrutinize too hard whether we might be being misled, lied to, or given an idea that's not quite right.
Then his promise is that he's going to talk about two case studies in the paper of things that might both do this to us, conspiracy theories and bureaucracies, which are a really interesting pair to juxtapose.
David: Yeah, Drew. I'm going to throw two practical examples at you and you can tell me if they fit in the context of this paper. The first one might be, say a company has an incident and a particular thing is wrong at one of their workplaces, creating some uncertainty about, oh, I wonder if we've got this problem in any of our other workplaces.
We might just ask all of those other workplaces to do a check, come back and tell us that it's done, everything's fine, and then we don't have to worry about that issue in our company anymore and can kind of move on. I know it's a very tactical thing, but would that be a little bit how we might think about this idea of, oh, once we become clear, we don't worry about something anymore?
Drew: I wouldn't say that's exactly the same. That's certainly an example of deciding that we can stop thinking about something. The particular mechanism he's talking about, though, is a moment of understanding that tells us to stop thinking about it. What I would suggest is to take that same example, but throw in a psychological concept that we encounter while we're doing our investigation. Say we're trying to look at what's going on, why that person did that. Then it occurs to us, oh, they lost situational awareness. And we've got an explanation that suddenly makes sense.
If we think about this person as having lost track of where they were in space and time, suddenly it all makes sense. Why did they do it? How did this happen? We get that pleasurable feeling: oh, okay, we've explained it, we understand it. And now we don't need to investigate further. We don't need to, for example, see if the procedure was wrong or the equipment was wrong, because we've had that moment of clarity where we feel we fully understand it.
David: Great, thanks. I like that. We might save the second one for a little bit later. Drew, you've got a reference here to Google "Studio C Practical Philosopher" on YouTube for an opposite effect.
Drew: David, I did exactly the same as you did. I read this summary and immediately tried to think of all the examples I could that fit this nice, neat pattern. One that I'll refer our listeners to is the exact opposite effect: the creation of uncertainty as a way of paralyzing people. There's a comedy group called Studio C, and I'll just give you a very brief introduction to the sketch. A group of bank robbers breaks into a vault, and they hit the last layer of the security system, which is a philosopher sitting at a desk.
David: I'll leave it at that.
Drew: We'll leave that to you to go and look up the rest of the sketch. I was immediately thinking, there are lots of those things in life where we sort of hit a moment of uncertainty and it stops us taking action. We're not sure what to do next. Then we have those other moments where we have that really blinding sort of, aha, I get it now. That means we can move forward, but it also means we've closed things off, that we've stopped investigating.
Is it Dekker who, in one of his human error books, talks about finding your final clue in the investigation? We think we've found that missing piece, we've found the thing that says, this person did it. It's not just that we think we've finished, it's that we get that almost pleasurable moment of understanding that really tells our brain we're finished.
David: I think it might have been Sidney in Understanding 'Human Error' or his Drift into Failure book where he talks about almost the same thing as the title of this paper, that seduction of the root cause: the idea that I found the broken part of the system, I fixed it, and perfect, the world is now exactly as I want it to be.
Drew: I'll just put in a direct quote from the paper to finish off this discussion. This is in the author's own words. He says, "Our sense of clarity and its absence plays a key role in our cognitive self-regulation. A sense of confusion is a signal we need to think more. But when things feel clearer to us, we're satisfied. A sense of clarity is a signal that we have, for the moment, thought enough." That's really what he's getting at with this first idea of clarity. He's going to expand on it further later in the paper, but we'll leave that expansion for now.
David, I'll throw in one more example that I was thinking of when I heard this. This is a set of research they did trying to work out whether good lecturers result in better students. The way they basically did this study is they had students watching a really polished lecturer and students watching someone who was bumbling, stuttering, reading from their notes.
I do want to acknowledge that this is limited in the scale of the research they did. I don't think that these can be considered finalized findings. Basically, they found that student satisfaction scores are sometimes inversely related to how students do in future classes.
The idea is that the really polished person leaves the students going away thinking, oh, that was great, I really understand that, that was crystal clear. But then they stop thinking about it. Whereas with the confusing teacher, the student goes away thinking, I didn't quite get that. They do their own work, they do their own thinking, and they end up better students and better informed for it.
David: Drew, how do the performance evaluations go at the university now? When you get one out of five on your student evaluations, you can just go to the faculty and say, I'm just making better students for the future.
Drew: David, I'm a little bit embarrassed about this. I'm a pretty good public speaker. I've got no idea what I'm like as a teacher, because we don't have good ways of measuring it. But I know I'm good at that sort of glib, hey, Drew's putting on a show thing. Things seem clear, so I get really good student satisfaction scores.
I'm in that embarrassing position where the university hierarchy comes down, pats me on the back, and says, Drew, you had a couple of courses where you got perfect student satisfaction scores, well done, here's an award. I've sort of got to write an email back saying, look, thank you for this, I acknowledge what you're trying to do, but these scores are bunk, don't do this. Yeah, there's that tension of someone recognizing you for something that you know is just [...]. It means nothing.
I have to throw this in here because it is important. It's not just that student satisfaction scores don't correlate with student performance. It's that we know student satisfaction scores are systematically biased against women and minority teachers. Students, in their qualitative feedback, describe white males as smart and intellectual. And they forgive us for bad teaching because we sound really smart.
But they expect female and minority teachers to be kind, compassionate, and helpful, holding them to these impossible standards of "this person will look after me", and they mark them lower.
Basically, I can get higher scores for doing less work to actually help the students than someone else. They're not just bad reflections of teaching, they are really, really bad ways to evaluate how much someone is contributing to their own students across all of the things that are expected of teachers. That's getting a little bit off-topic. Since you joked about it, I had to throw in the real message there.
David: I think systematic biases are really important, particularly in relation to this paper about the seductions of clarity. It's very easy for us to just think that those teacher evaluations are good and true reflections of the quality of the teaching. I guess when you were giving that overview, I was just reminded of incident rates in safety and a lot of the things that we do in organizations. I think that's directly relevant to what we're talking about today.
Drew: A good point. We might actually refer back to this when we talk about the bureaucratic side of it because he does talk about some of the reasons why numbers like student satisfaction scores are so attractive to bureaucracies.
David: Yeah. Drew, this idea of aha moments. These points of clarity when we make sense of the world, and particularly if we might have been thinking hard about it or struggling with it, it's like Eureka, we've solved it.
Drew: This is the first point where I became a little bit suspicious of the paper. I'll give the full message up front just so I'm not sending mixed messages. I loved this paper and I love the ideas in this paper. I think that's when you've got to be most careful about whether something is actually empirically supported. My overall conclusion is that these are good ideas that I generally agree with, but we've got to be really careful about exactly how much of what they say is and isn't supported by the evidence.
What he's doing here, I think, is mixing up two different types of psychological research. I'll just give you an example of how the research works. One of the things researchers do is give people what they call insight problems. These are thinking-outside-the-box type puzzles: the games where you've got to convert the word six into five using matchsticks, and the actual solution is to break one of the matchsticks in half, or puzzles like that.
The key thing about those insight puzzles is that once you find the correct solution, it's obvious, but until you find the correct solution, it's not obvious at all. Those puzzles are used to explore and demonstrate a few different psychological ideas.
One of them is this idea of an aha moment, when you transition from seeing the puzzle one way to seeing it the other way. It actually gives your brain a pleasure spike, like an injection of dopamine. It's something that you can directly measure, and it feels good. But that's not the same as the idea of feeling that you've completely understood something. That's a particular thing that goes with insight problems.
The more general idea is about that feeling that you understand something, that it makes sense. That's not associated with that spike of pleasure; it's more associated with a feeling of truth or a feeling of satisfaction. I think both things can be true at once. There are some types of things where you get a spike of pleasure from understanding, but that's actually genuine understanding. That's solving an insight problem.
Then there's this more nefarious one, which is that feeling that something makes sense, and thinking that because it feels like it makes sense, it must therefore be true. Researchers get this feeling. You're wrestling with a problem, and suddenly it sort of seems to make sense to you. Does that mean you've settled on the right answer or not?
David: Yeah, it's good. Sorry, I was a little bit sidetracked when you were saying that. It's not quite solving an insight problem, but there's that spike of dopamine when you're doing something like reading a Where's Wally book, or you're looking at one of those Magic Eye pictures and then you suddenly see what you're looking for.
Drew: I think that's a perfect example. I think that works in exactly the same way. That feeling that you've solved an insight problem is the "I found Wally". Once you've found Wally, you can't unfind Wally again. If you see him there, he's there. With a Magic Eye, once you work out how to see it, you can always do it. But until you do, your brain is really struggling, it really hurts, and then everything comes into focus.
We've already sort of done the reference to safety that we've had the same thing with accidents. Until we have that feeling that it makes sense, we're always going to keep looking. But once we get a feeling, oh, we've got an explanation, that explanation makes sense. It feels satisfactory.
That's sort of one of the things that safety theorists do. They try to create discomfort with those traditional explanations. You could argue that one of the things both Dekker and Hollnagel do in their work is tell you, don't be satisfied with an explanation of human error. If they tell you that enough and you believe them, then when you get to that explanation, or build it, you don't get that feeling of satisfaction. You feel guilty about it, so you keep looking for a better explanation.
David: Yeah. Drew, it also goes a little bit further about exaggerated senses of clarity. Do you want to just briefly touch on that?
Drew: Okay. This is the next step, where he's extending beyond that basic concept and trying to say something deeper. He says, "Okay, we've got this common phenomenon that we know about, but there are certain belief systems, certain ways of seeing the world, that create an exaggerated and false sense of clarity." He says it happens particularly when we quantify things, when we turn things into numbers. And he says it can be done either deliberately by hostile actors or by accident, through the creation of what he calls epistemically hostile environments.
I actually wrote off to Nguyen and asked him about this, because I reckon he is a fan of a series of books by a guy called Charles Stross, called The Laundry Files. The books include a character who introduces himself as a combat epistemologist. That is such a specific term that I find it hard to believe two people would come up with the exact same term, but that's the term Nguyen uses in this paper without attribution to Charles Stross.
He says, "These people are doing combat epistemology. They're using epistemology as a weapon to manipulate the belief systems of other people." It's a pretty strong claim.
David: Isn't that what marketing is all about, though?
Drew: Marketing is about manipulating the beliefs of people. This is about trying to put in place a whole belief system, so possibly some types of marketing do.
David: This is the link back to conspiracy theories.
Drew: Yes.
David: Okay, got it.
Drew: I'm really reluctant to mention brands here, but I can think of a few brands that might fit into this pattern. I'm just going to leave it to our listeners to think of maybe particular types of phones or particular types of computers.
David: At the risk of offending, having just spent a bit of time this year in the US, I would say the whole big pharmaceutical advertising machine, with its message about the role of medication in human health as a general solution to everything, maybe fits, without going too deep into conspiracy theories. Drew, maybe it fits that whole idea of putting in place a belief system that we can manage our health through just more and more medication.
Drew: Yeah. We might actually be able to test that against some of the more specific claims he starts to make to see whether that fits.
David: Okay.
Drew: I suspect it might. This is sort of the point where I expect most of our listeners, and certainly I myself, agree with the paper so far. Now the question is, how much do you buy him going a little bit further? What he says is, okay, this isn't just about oversimplification.
Lots of people will get to the point of saying, okay, oversimplifying things is irrational. There's always a bit of an appeal to oversimplification. There's a lot of motivated reasoning, trying to remove the discomfort of uncertainty or cognitive dissonance, that leads us towards wanting simpler explanations. He says this is something more. I think that's the test: do you reckon that he's telling you something more, beyond just that trap or appeal of oversimplification?
David: Which I guess is one of the five HRO principles for complex systems, that reluctance to simplify as a way of actually getting a better understanding of situations. Let's see what else we go into. In a lot of parts of this paper, I was drawn to Erik Hollnagel's quote, I think from 2015, where he said that people need to feel safe and be safe, and sometimes the former gets in the way of the latter. We focus our effort on feeling safe, which is this point of clarity around safety in our organization, and maybe we stop doing the work we need to do to actually be safe.
Drew: Yes. I think a lot of safety theories that have talked about that need to move toward discomfort. We've got that one about reluctance to simplify. I've lost the phrasing, David, but I'm sure you can grab it. Oh, constant unease or...
David: Chronic unease, which was Andrew Hopkins' view of it, and which I guess followed on from preoccupation with failure and some of those things. Yeah, chronic unease. Let's just call it that.
Drew: Yeah, David. Thank you for the reference. The first idea we're going to dive deep into is clarity as a thought terminator. Why is it that feeling clearer stops you thinking further? We start off with that same thing. It's almost the same distinction that Hollnagel makes between feeling safe and being safe: a distinction between genuine understanding and a feeling of understanding.
They're separate things. You would hope that they would happen at the same time, but they can happen separately. You can genuinely understand something or you can feel that you understand it, feel clarity. Maybe you feel clarity because you genuinely understand or maybe you just feel clarity and you don't understand.
There are two broad strategies that people use to manipulate other people: epistemic intimidation, where you make them feel afraid or uncomfortable to think differently, and epistemic seduction, where you make their own brains send positive signals for thinking in a particular way. That's the first point where I actually thought, hold on, does this even need to be a distinction? What is pleasure but the removal of discomfort?
Removing cognitive dissonance feels good. We know that. I'm not certain that there's a difference between seduction and intimidation, really. I don't think you even need that distinction for this paper. But David, you've thrown in a couple of examples here.
David: I was just thinking about positive mental signals. I don't think it quite fits anymore. I guess this is where the author, Nguyen, was leaning on psychology, talking about structural weaknesses in human cognition and things like that. I got a little bit concerned that the paper was going quite a long way into psychological phenomena without really supporting them very well.
Drew: What makes it a little bit difficult, and you'll see this if you read the paper yourself, is that he's weaving together two arguments. One is the empirical argument: here are the psychological effects that cause this. The other is almost his own conspiracy theory: look how bad this would be if someone knew this was true and tried to deliberately manipulate it.
He switches back and forth between these arguments in a way that makes you feel that he's right, as opposed to knowing that he's established that he's right. I almost thought for a moment that he was deliberately illustrating his own theory, trying to generate this sense of understanding through epistemic seduction in his own paper.
There are three sorts of lines of empirical evidence, three different effects. I'll say upfront that all of these things are real psychological effects. The first one is the idea of cognitive fluency. This is the fact that people are more willing to accept ideas that are easier to understand. You get this weird set of social psychology experiments that do things like show you the same text written in a bad font and ask you, do you agree with it or disagree with it?
The harder you make it for someone to deal with an idea, even just by writing it upside down, writing in a bad font, or with bad spelling, the less willing they are to accept it. People don't like you deliberately making them work hard with their brains. They get annoyed at you if you do. If they get annoyed at you, they don't accept your idea. That's cognitive fluency.
David: Drew, is that why we love PowerPoint so much? No one wants to read a 50-page technical report. Show me it in five slides with two bullet points on each slide, please, and make it look pretty and colorful.
Drew: PowerPoint has been used as a direct example of this, as an explanation of why people are more willing to accept the five-page PowerPoint than the 20-page report, when clearly the 20-page report has more evidence and more detail than the PowerPoint. People more willingly accept the PowerPoint because they want ideas to be made simple for them. Their brains see that as more acceptable.
I think there's an underlying heuristic here. Sorry, I'm actually speculating beyond the evidence myself here linking things. But a separate piece of evidence is, how do we know when people are lying? Liars tend to throw in more detail.
One of the tricks is that when someone starts to give you unnecessary detail in a story, that's a sign that they're trying too hard to convince you, and they may be deceiving you. People know this, so people's brains go the other way. If someone's putting a bit too much detail in, they're probably lying. If someone's telling the idea nicely and simply, they're probably telling the truth. That's one of the reasons our brains are more willing to accept simple information than complex information: we trust the teller who wants to tell it to us simply.
You get slogans like, if you fully understand an idea, you can explain it in three sentences.
David: I don't know that quote. I think there's a quote that if you can't explain it to a five-year-old, you really don't understand it yourself.
Drew: Yeah. Those are reasonable heuristics. It is a sign of truth, or a sign of understanding, that we can explain things simply. But it leads us into traps, where people distrust ideas like relativity because we can't wrap our brains around them. If we can't wrap our brains around it, it can't possibly be true. That's where it can lead us astray. That's one idea.
The second idea is the one we've already introduced, insight problems and the aha moments that give us a positive feeling. The third one, and this is the one that I personally find most interesting, is a thing called cognitive facility. The idea here is that if we learn something, and that something gives us power to mentally manipulate the world, then we see that usefulness as evidence of its truth. If it's an idea that lets us generate other ideas and link together other things, so that we're seeing the world in a new and powerful way, that gives us a sense of truth.
David, I immediately jumped to a lot of our own research, because I feel that's one thing I try to do for other people. I try to give them new labels or distinctions that help them make sense of the world. Safety work versus the safety of work, safety clutter. They're concepts that help you do other things with them.
That might actually create a misleading idea about the truth of those ideas, about whether they're actually real distinctions or not, because they let you do things. They let you count things, they let you measure things, and they let you make sense of things. It leads people into thinking, oh, this idea is useful, I really want it to be true, so we think it's true.
David: Drew, these three empirical ideas, cognitive fluency, aha moments, and cognitive facility, I think they're broad psychological phenomena, representations of how we deal with the world. I guess he's linking them together as these seductions of clarity, all pulling us towards feeling much clearer about the world that we live in. Do you want to go on to talk about echo chambers?
Drew: David, just before we do that, I want to pin down precisely what the new claim is that Nguyen is making, because I think there are two things we need to be clear on here. The first thing he's doing is bringing these three different ideas together and saying, let's call them all the seduction of clarity. We need to ask ourselves, is that a useful move? Do we gain some new understanding by linking the three ideas and calling them one thing, rather than just treating them as three ideas?
The second thing is, remember the very original claim that he's making, which is that humans use the idea of clarity as a way to stop thinking. Has he actually got evidence that this happens? The answer is actually, no. None of these three effects actually demonstrate that particular claim, which is that humans use clarity to know when to stop thinking. That's something that we don't have evidence for.
You would think that if it was a real thing, people would have researched it in psychology and we would have psychological evidence, because it's a really interesting question: how do people know when to stop worrying about something? Why do we not have the evidence for that? Actually, all of the relevant research is about whether that feeling of clarity is good evidence, not about how much humans actually use it.
That's the bit: it sounds really plausible, it seems to make sense. We've given lots of examples ourselves in our discussion, but we don't actually have evidence that it's true, that clarity does actually stop people thinking. That's something that is untested and unproven.
Anyway, I just wanted to really clarify the limits of the evidence here because it sounds so convincing, and it makes so much sense to us. That's exactly the thing that Nguyen is warning us about. We should be careful. Is it actually true or not? We don't know.
David: It is a great example. I guess maybe because it did feel clear, I didn't think too much about it. The idea that we stop thinking about something when we understand it and it becomes clear to us seems like it could make sense, but it does seem like something that needs a lot more critical thought, particularly if there is little or no research around it.
Drew: Yeah, and particularly if the claim is broader than any of those three individual effects, which there is very good evidence for.
David: There are also the limitations of some of the psychological research in a real-life context, Drew. It's typically lab-based studies where you might give someone a problem, they think hard about it, like the matchstick example, and then they solve it. You might find in that research setting that they don't think too much about that matchstick problem anymore.
But when we think in organizations about safety management and there's a point of clarity, I think that's a much harder thing to research and understand. Do organizations stop thinking about safety just because an aspect of safety becomes clearer? I don't know if that makes sense.
Drew: It's a very long stretch to go from, hey, I've solved a matchstick problem, to, hey, I've explained an accident and now the organization doesn't have to worry about it. That is a really long stretch, from changing a feeling that you have in a lab to the organization itself actually stopping doing something. Let's move on to talk about echo chambers.
Now, I should point out that he makes this interesting distinction, first off, between thought bubbles and echo chambers. He says that, as far as he's talking about them, they're different things. A thought bubble is just where you are in a social media environment where you never hear the other side. He says that's something that can happen. You only ever hear one side of an issue, you never hear the contrary evidence.
He says, "An echo chamber is something specific, which is where the reason why you're in that bubble is because you're explicitly rejecting all of the other information. You don't trust it." The information is there, it's available to you, you just refuse to listen to it. I guess the difference would be a thought bubble is someone who only watches Fox News and never hears that other stuff is going on in the world. But to make it into an echo chamber, when people try to tell you about that other information, you say I'm not listening because Fox News tells me not to listen.
David: Got it.
Drew: I didn't just decide to pick on Fox News. That's the example he uses in the paper. Although, happily, I'll pick on Fox News, if I get the excuse.
David: Very good.
Drew: Okay, he sets out his argument. He's talking particularly about, you've probably heard of this guy, Rush Limbaugh, the American talk radio commentator. He's very filled with conspiracy theories. The first thing is that it offers the sensation of epiphany: Limbaugh is revealing secrets to you, and those secrets make sense of the world. Because why is it that we can have these sincere people who disagree about things? That's not the way the world should be.
Limbaugh comes along and says, the reason why that is is because one side is good and one side is evil. The other side's not actually sincere. They're just pretending to be sincere because they want to steal your kids. Ah, okay, if I see the world like that, that makes sense.
You could see the same thing in someone who tells you that climate change is all a hoax. It's a hoax because they want to get money out of you. They want to destroy your world and replace it with their own much more woke world. Okay, suddenly it now makes sense. Why are all these scientists talking about climate change? Why are these people advocating for change? Because they're part of this one big conspiracy, that explains it to you.
The second thing is this idea of cognitive facility. It's not just that you have that one moment of, ah, now I see the secrets, I understand the world as it really is. You've now got this intellectual tool and you can use it. You can use Limbaugh's way of seeing the world to look at a new person speaking and say, ah, now I understand why they're saying what they're saying. You can generate your own conspiracy theories that fit within the pattern, because that way of thinking lets you suddenly interpret other things.
When we have contradictions, when people tell us stuff that doesn't make sense, we now know that's because they're lying, because they're evil. We can resolve the contradictions now because we've got an explanation for why people would say those things.
We've sort of got this self-reinforcing view that gives us a belief that we're an insider, it tells us not to trust the outsiders, it tells us that any contradictions are the fault of the outsiders trying to deceive us, and it makes us feel clever because we can now generate our own theories, our own thoughts, and our own ways of interpreting things.
David: Drew, at risk of going too far into the safety world a little bit. But this idea of, I guess, not as a conspiracy theory, of course, but the idea of Safety Differently or something where we've got this worldview that seems to enable people to make sense of all of the things that they see in the world of safety and they've experienced throughout their career.
Now they've got a way of seeing things, and then we can use that to, I guess, make a whole lot of other different types of claims about safety more broadly, reject things, and accept other things. That, I guess, is a practical worldview in a safety space.
Drew: Yeah, I think that ticks all of the boxes. I think most people, when they encounter ideas like Safety Differently or Safety-II, have that moment of pleasurable epiphany. It's like, oh, this explains things I've been struggling to explain and now it feels good. It lets you explain away contrary evidence. The theory itself tells you this is what you should think about lost time indicators.
If someone comes along and says, hold on, I'm doing safety the old way and it's reducing incidents down to zero, the theory tells you how to make sense of that information. It explains away the contrary evidence. It says, don't trust that evidence, don't trust the people saying that. I guess I should throw in the ultimate qualifier here.
At this point, we're not claiming that this tells you whether a theory is true or not. It's just telling you that the truth or otherwise is irrelevant to whether you accept it. You could do this evilly, in the case of QAnon conspiracy theories. Or you could do it in the case of a genuine theory that explains a new part of quantum physics, and both would have the same properties. Both give you the same positive feelings, both give you the feeling of being able to do new things in the world. But those feelings don't tell you whether the theory is true or not.
That's then the next claim that he's making. What if people do this deliberately? What if people are deliberately using their knowledge of psychology, their knowledge of the fact that you will accept these things, and trying to manipulate you by getting you to believe in this worldview? It makes sense that he picks conspiracy theories as the first example, because we know, we've got the evidence, that a lot of these conspiracy theories are not set up by true believers themselves. They're set up by people who are deliberately trying to make money out of creating true believers.
David: Drew, that's a bit of a strong claim for the Church of Scientology.
Drew: That's a factual historical claim for the Church of Scientology.
David: I wonder how many people have been offended so far with this seduction of clarity.
Drew: To the extent that we've talked about safety theories, I'm sure we've made a couple of people uncomfortable.
David: And religion.
Drew: I'm not sure how many Scientologists or QAnon believers we got listening to the podcast, David.
David: I don't know, but in higher education bureaucracy, we haven't left much untouched.
Drew: Okay. Dear listener, if you are a member of the Church of Scientology in senior management at a university and running your business on the side selling drugs through a multi-level marketing scheme, then you probably think that we're lying anyway. Don't worry about it.
David: I guess Nguyen's claiming that people could use this knowledge, that we as people will feel good when the world is clear for us, when the world makes sense, and use it against us to further their own interests in some way.
Drew: Yeah. His goal here is to use the idea of conspiracy theories, where it's easy to think about this as being a deliberate evil force doing this to us, to then say, okay, let's move on to things like bureaucracy that don't require a deliberate hostile act of doing it to us. It just needs an environment that is accidentally hostile. So we could create a work environment that does those same things to us, that creates those seductions, that creates those hostile epistemologies as part of the environment we're in.
He's particularly picking on quantified bureaucracy, and drawing pretty heavily on two other works, which I have to admit I haven't read yet, but as a result of this paper, I've gone out and ordered copies. One is a book called Trust in Numbers by Theodore Porter, and the other is The Seductions of Quantification by Sally Merry. He's saying that we're doing that same thing, setting up those same cognitive forces, when we try to turn things into numbers.
He actually does directly hook into that higher education theme a little bit. Just a quick quote here. He says, "It's much easier to do things with grades and rubrics than it is with qualitative descriptions. We can offer justifications: I averaged it according to the syllabus' directives, I applied the rubric. We can generate graphs, we can make quantitative summaries. That sense of facility is even stronger in large-scale institutions, where the use of numbers has been stringently regularized." I'm thinking immediately there, David, of lost time indicators.
David: I just think it's graphs, tables, and data in general. All of our listeners, I suspect, will experience that in their organizations every day, which is to show me the numbers. What's the metric? What's the measure? That's what we do with risk assessments. It's what we do with audit scores, it's what we do with counting safety work, it's everywhere, and it makes us feel good and it makes us feel in control.
Drew: Yeah, and particularly, it makes us feel powerful. The seduction isn't just that it's simplified; it's that this is a tool we can manipulate. We can use this number to track trends. We can use this number to create a graph. We can use this number to draw a comparison. The fact that it's useful to us in those ways convinces us of its underlying truth, even though that shouldn't be a reliable guide, because we can do those things with untrue numbers just as much as with true numbers.
David: Drew, do you want to make any general thoughts about the paper, and then we'll see if we can make some practical sense of this?
Drew: Okay. I guess the most immediate one is that I like the paper, because it points out some really interesting ideas. It draws links between some things that I hadn't seen the connections for. I think I've already mentioned, I've ordered a couple of these references because I want to go back to the original source and read them. He's found interesting stuff to talk about.
I'm a little bit suspicious of the way he tries to tie all of these different ideas under the single concept of the seduction of clarity. I can agree with the individual ideas, but I don't buy the central factual claim tying them together. If it's new, then it's not well supported. To the extent that it's supported, it's not really new; it's just linking together things that work quite separately. I really think that to the extent that it's making new claims, we need a proper empirical investigation. We can't just rely on arguments to link them together.
David: Yeah, Drew. I quite enjoyed the read. It's not that difficult to read, and the preprint is about 38 pages. There's a lot going on in this paper: quantification, gamification, manipulation, echo chambers, combat epistemology, and thought terminators.
I'm a little bit like you, Drew. Pulling all of those ideas together and pointing them all toward this idea that clarity makes us feel good and then stops us thinking is one of those things: it feels true and simple, but I think it needs to be unpacked a lot more than that.
Drew: I do think we can draw some useful takeaways for safety, and I do think some of these claims are both quite clear and well supported. David, I'm interested in the extent to which you agree with these as I go through them.
David: Yeah, let's do it.
Drew: The first one I'd say is that just because an idea is useful for making sense of other things, that doesn't mean that the idea itself is correct. Concepts like the distinction between safety work and safety clutter provide you with ways of seeing the world and making sense of other things, but that's not a reason to think that the ideas themselves are accurate and reliable.
I think that's a neutral judgment; it's not positive or negative. It's just: don't use the sense of clarity as your way of judging. Find some other way of judging an idea than that it helps you see the world in a new light.
David: I think, Drew, of one of the Safety Differently principles: safety is not the absence of negative events, it's the presence of positives. That idea helps us see the world in a certain way and approach safety differently, but it doesn't follow that the absence of negative events actually means something's unsafe.
Drew: Yes. Sometimes, with claims like that, you go to unpack them and you realize that they are unprovable, so they are actually just philosophical claims, not empirical ones. Another one that I think of is people are not the problem, people are the solution.
David: Sometimes people are the problem.
Drew: I think it's really not even that. It's a philosophical position, a way of choosing to see the world. It's not true or false; you can decide it either way for yourself. It's not a truth that we should try to impose on other people by saying, you should believe this because it's the only way to see things. It's not. It's just a position that you choose to take.
The second thing is that it's important to be able to live with a bit of cognitive discomfort. The world's not actually supposed to make perfect sense. It's a complex, confusing place, so it's very understandable that you want everything to fit neatly together. But I'll go further: forcing everything to fit neatly doesn't just fail to help your future learning, it hurts it.
I encounter this all the time when people are given all these different theories of safety and they say, I want a picture that links them all together. Trying to draw that picture is really, really hard, and you should still try to draw it. That effort, that cognitive difficulty in trying to fit everything together, is worth it. But whatever you come up with at the end is not going to be perfect and shouldn't feel perfect. It should leave you feeling unsatisfied. It should leave you wanting to learn more. Any time things seem perfectly clear, you're probably not fully understanding them.
David: Yeah, I like that, Drew.
Drew: The third one, and I think this is a big, important one, not just in safety but as a general worldview: be wary of any type of thinking that gives you an excuse to reject or ignore certain other sources of information. I'm not saying that you're obliged to spend time listening to nasty and unpleasant people. No one is obliged to spend time watching Fox News or reading right-wing blogs, exposing themselves to just nasty, unpleasant stuff.
What you shouldn't do is jump to the idea that everyone who disagrees with you is evil and not worth listening to. In safety, it's probably less that we think they're evil and more that we think they're stupid. Don't jump to the idea that someone is stupid and incompetent just because they disagree with you. That's probably where the best opportunities for learning are. Find the smart people who disagree with you and find out why they think what they think. Don't use your own labels as a way of rejecting and ignoring that other information.
David: I like that, Drew, that idea of extreme curiosity. If you, for example, have a strongly Safety Differently or Safety-II oriented worldview, and someone in your organization comes along and says, we need more audits, we need more rules, don't just dismiss that. Actually be extremely curious as to why that person might see that as the way forward. If we're very clear ourselves, it's very important that that clarity doesn't lead us to reject and ignore things that don't fit inside our clear view of the world.
Drew: David, I'll give you a really quick example of that. Jop and I just published our Take-5 paper. I think our views in that paper are pretty strong. We've got lots of feedback. A lot of that feedback is positive, a lot of that feedback is negative. It's very easy to dismiss the negative feedback.
I'll be honest, a lot of the negative feedback we do dismiss, because it's clearly stuff that we've already considered in more depth than the people making the criticisms. But we had a guy who has an app doing Take-5-like stuff who asked for a meeting with us. It was so tempting to just say, oh, this is just another app doing Take-5, more of the same stuff we've already covered about people not being reflective.
We decided we'd just take the meeting and listen to what the guy had to say. It was an hour of just learning interesting new stuff from someone who had really thought about the same problem that we had thought about, but thought about it from a different direction.
It's not whether someone agrees with you or disagrees with you that makes their ideas worthwhile. It's how much they're willing to think about their ideas and to have a conversation that's open to an exchange of ideas. As a result, I hope we're going to have a follow-up to the Take-5 paper that won't contradict the existing paper, but will have some interesting thoughts about what you do next.
David: Great. All right. We tackled Take-5 in Episode 95, the last episode, so maybe in a while we'll get to revisit it. Drew, I added an extra practical takeaway here. I've been doing a bit of reading, and I want to bring a few Kurt Lewin papers to the podcast, because I think his work is very under-referenced in the safety science literature.
In the early '40s, he basically said that there's no research without action and no action without research. Reading this paper about the seductions of clarity stopping us thinking, it's really easy to assume that the safety management practices and activities in our organizations are doing the things we think they're doing and that they're working.
I guess what I took practically is this: for everything you're doing in your organization for safety, don't be so sure that it's actually doing what you think. It needs to be constantly evaluated and interrogated, and we need to keep trying to understand exactly what's going on.
Drew: And acknowledging that that is a more uncomfortable way of seeing the world.
David: Yeah, absolutely right.
Drew: It does hurt your brain a bit more when you think about things like that.
David: It does. Drew, the question we ask this week was, why should we be cautious about clarity?
Drew: I think we have an answer for once: simply, the feeling that something makes sense to us is an unreliable way of telling whether it's actually accurate and reliable.
David: Nice one, Drew. This idea of the seduction of clarity, it was a longer episode for us, but it was a long paper. That's it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your organization. Send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com.