The Safety of Work

Ep.89 When is the process more important than the outcome?

Episode Summary

Our discussion today centers around the intriguingly titled paper, “The fetish of technique: methodology as a social defense,” by David Wastell.  Although it was published in 1996, its basic tenets are still useful and relevant today.  We will examine how safety methodology and processes within organizations are often relied upon for “relieving anxiety” rather than leading to successful or intended outcomes.

Episode Notes

Wastell, who has a BSc and Ph.D. from Durham University, is Emeritus Professor in Operations Management and Information Systems at Nottingham University in the UK. Professor Wastell began his academic career as a cognitive neuroscientist at Durham, studying the relationships between brain activity and psychological processes. His areas of expertise include neuroscience and social policy: critical perspectives; psychophysiological design of complex human-machine systems; information systems and public sector reform; design and innovation in the public services; management as design; and human factors design of safe systems in child protection.

Join us as we delve into the statement (summarized so eloquently in Wastell’s well-crafted abstract): “Methodology, whilst masquerading as the epitome of rationality, may thus operate as an irrational ritual, the enactment of which provides designers with a feeling of security and efficiency at the expense of real engagement with the task at hand.”

 

Discussion Points:

 

Quotes:

“Methodology may not actually drive outcomes.” - David Provan

“A methodology can probably never give us, repeatably, exactly what we’re after.” - David Provan

“We have this proliferation of solutions, but the mere fact that we have so many solutions to that problem suggests that none of the individual solutions actually solve it.” - Drew Rae

“Wastell calls out this large lack of empirical evidence around the structured methods that organizations use, and concludes that they seem to have more qualities of ‘religious convictions’ than scientific truths.” - David Provan

“I love the fact that he calls out the ‘journey’ metaphor, which we use all the time in safety.” - Drew Rae

“You can have transitional objects that don’t serve any of the purposes that they are leading you to.” - Drew Rae

“Turn up to seminars, and just read papers, that are totally outside of your own field.” - Drew Rae

 

Resources:

Wastell’s Paper: The Fetish of Technique

Paul Feyerabend (1924-1994)

Book: Against Method by Paul Feyerabend

Our Paper: Safety Work vs. The Safety of Work

The Safety of Work Podcast

The Safety of Work on LinkedIn

Feedback@safetyofwork.com

Episode Transcription

David: You're listening to the Safety of Work podcast episode 89. Today we're asking the question, when is the process more important than the outcome? Let's get started. 

Hi, everybody. My name is David Provan. I'm here with Drew Rae, and we're from the Safety Science Innovation Lab at Griffith University. Welcome to The Safety of Work podcast. In each episode, we ask an important question in relation to the safety of work or the work of safety, and we examine the evidence surrounding it. 

Following on from the last episode where Drew and I discussed a paper from outside the safety field that we felt could provide us with useful insights about organizational life and particularly as it relates to safety. The paper that we will review today is also from outside the safety field, but it was instrumental—at least in my thinking—about the work that we were doing a couple of years ago on defining safety work versus the safety of work and safety clutter.

I'm sure that this paper, Drew, is referenced in both of those papers of ours as well as a few others. Drew, what does software project management have to do with safety management and when did you first come across this paper? Because you introduced it to me.

Drew: David, I know what our listeners sort of imagine university academic life is like. It's a lot more office management than you might think. A lot of our research is not done sitting in an office trying to think deep thoughts. It's out in companies watching what's going on, or interviewing people.

But there's a small part of my life that involves wandering into a professor's office and just having deep conversations. That's particularly with Dr. Rob Alexander at the University of York. He's even got one of those big professor chairs that he swivels around and you sit and chat. He gives me homework the same way that I give my Ph.D. students homework.

This is a paper that he gave me to read when I was talking about some of the background ideas that ended up eventually in our safety of work paper. He's also read through the draft of a book of mine, which I've never published—I don't know if I'll ever get around to finishing it off—called False Assurance. But I was chatting with Rob about the ideas in this book, particularly about the way people use things like risk assessment and safety cases more to allay their own feelings of uncertainty and their own anxiety than as actual engineering techniques. But we use them as if they are engineering techniques. Rob gave me this paper to read.

To understand why it's relevant to safety if you're reading the paper for yourself, every time you hear the phrase structured system analysis, just replace it with risk assessment or safety case. I think this paper makes exactly the same sense now as when it was originally written.

David: Yeah, I agree.

Drew: It references a bunch of other interesting stuff as well. It's one of those papers where you read it and then you want to go to the reference list and read all of that as well, to find out where it got all of these interesting ideas from—because they're not all his own. He's gathered together a bunch of these ideas and put them together for this circumstance.

David: Exactly, Drew. It's one of those papers that I remember reading for the first time and going, oh yeah, that's exactly what happens in safety. That's exactly what's happening in safety. It's a good one to get hold of. I might as well mention now, it's quite easily available, open access. You just chuck the title into Google Scholar and you'll be able to pick it up.

Drew, I know how much you love a good title on a paper, but this paper goes beyond just having a good title and actually has a whole bunch of really cool subheadings involving lions and teddy bears, amongst other things. I'm sure that's at least part of the reason why you like the paper so much. Do you want to introduce the paper to us?

Drew: Okay. The paper is called, The fetish of technique: methodology as a social defence, which is a title jam-packed with interesting ideas just in the words of the title. It's published in a journal called Information Systems Journal, which is one of those hybrid interdisciplinary journals that’s somewhere between computer science and management. All about the way in which electronic and information systems get used within organizations. Some of the papers are very technically geeky, some of the papers are very social science-y.

One of the big themes—which is also a theme of the author's research career—is the success and failure of information systems projects: when they go well, when they don't go well, and trying to explain why organizations get themselves into such big messes with IT projects. It was published in 1996. The author, David Wastell, is kind of interesting. David, did you look him up as well? Anything you want to say about his career?

David: Yeah, absolutely. A single author paper and 25 years old, like you said, Drew. David Wastell is currently an Emeritus Professor. He's a retired professor in Operations Management and Information Systems at Nottingham University in the UK. He had various phases of his career researching different aspects of social and organizational life.

He started as a cognitive neuroscientist really looking at brain activity and psychological processes. Then he started looking—I suppose in the early to mid-90s we had the early part of the dot-com boom and lots of information technology and the emergence of the internet. He started looking at technological innovation and collaboration within the industry.

He was appointed Professor at Salford University in 2000, just after this paper was published. He came full circle at the end of his career: he went back into neuroscience and social policy, particularly looking at child protection, and tried to merge a lot of what he learned over his career in both neuroscience and sociotechnical systems to look at human factors design of safe systems for child protection and things like that.

I just looked through, Drew, what else he has published. This is a great paper from our perspective, but also one that's only been cited just over 250 times. Not a very widely cited paper, but jam-packed full of really useful organizational lessons.

Drew: One of those people who's written interesting stuff that goes a little bit undiscovered. Hopefully, we can get a few more people reading and citing the paper by talking about it today. 

David: Drew, I want to start with the premise of the paper, because basically, the way that I summarized the premise is: there are lots of processes used in software development project management. The structure of the research we're going to be talking about today was very similar to our papers on safety work versus the safety of work and on safety clutter.

It was these patterns identified through multiple observations of case study research within, in this case, four separate organizations that were all attempting to implement the same structured IT project management methodology. This specific methodology was increasingly popular in the early 90s, and it was actually mandated for use on UK government information systems projects. These case studies involved just observing and interviewing people in these four organizations as they worked to implement these systems over multiple years.

The premise of the article is that methodology may not actually drive outcomes. These are the open questions that Wastell had: the methodology may not be driving the outcomes that we're trying to get from it, so why do we remain so committed to certain management processes in the absence of the outcomes that we're trying to achieve?

His central hypothesis in this paper was that we're so committed to these management processes because following these methodologies acts as a social defense. A set of organizational rituals with the primary purpose, like you said in the introduction, of containing anxiety and uncertainty as opposed to necessarily reliably generating outcomes. 

Drew: We probably should take a little bit of a quick detour here because I know we do have some software development listeners who might have a few assumptions about where the paper is going. 

This paper was written in 1996, which predates the Agile Manifesto and the Agile Software Development movement. There was a big interest around 2000 and slightly afterward in trying to fix a lot of the problems with software development, because the methodology had become too heavyweight. In other words, people were trying these really, really formal structured approaches to both software development and project management.

We have things like your PRINCE and PRINCE2 in management, which are all about having very, very detailed planning processes, very, very structured timelines, Gantt charts, flowcharts, milestones, gateways, and ways of keeping projects in check by having big methods. There was a reaction to those approaches that said, look, this is just too top-heavy. We're putting so much into management and not enough into actually just writing the software. We need approaches that are more flexible, more streamlined, and more able to cope with user requirements varying day to day—get things written, get things out there, get them tested.

But I think for the purpose of this paper, if you're thinking, sure, I agree that those processes are bad, that's why we have Agile—Agile itself fits exactly what they're talking about when they talk about methodology in this paper. Any process—whether it's a heavyweight or a lightweight process—still raises the same concerns, the same interesting questions.

This paper—even though it was written in 1996—could also explain why Agile wasn't as perfect a solution as people thought it was going to be. Why people trying to fix the methods by coming up with new, better lightweight methods run into exactly the same problems. I think this paper can explain that. 

David: Yeah. That's a good summary, Drew. I want to start with a quote from the abstract, because we read probably a lot more abstracts than we do papers, and when authors put careful consideration into crafting their abstract, it makes you want to read the paper. I thought, Drew, we could just read the last couple of sentences of the abstract. Do you want to start with that quote, or would you like me to read it out?

Drew: No. Absolutely, David. I was hoping we were going to grade the abstract. Some people's abstracts are just copied from the paper, but other people really craft their abstracts around the story of the paper.

"The grandiose illusion of an all-powerful method allows practitioners to deny their feelings of impotence in the face of the daunting technical and political challenges of systems development. By withdrawing into this fantasy world the learning processes that are critical to the success of systems development are jeopardized. Methodology, whilst masquerading as the epitome of rationality, may thus operate as an irrational ritual, the enactment of which provides designers with a feeling of security and efficiency at the expense of real engagement with the task at hand."

David: I think, Drew, I've maybe even lifted a direct quote from that in papers before as well. Get your hands on the paper—or even just the abstract—read those last couple of sentences, and reflect on them in relation to safety management in your organization, your industry, and your country. Then you'll see what's fueled a lot of the work that we've done on safety work and safety clutter.

Drew, the paper starts with a long case study that's actually titled A Cautionary Tale—shout out to another great podcast, Cautionary Tales by Tim Harford. It just tells the story of one of the four case studies. I might run through this quickly, Drew, in point form, because it sets up the framing for the rest of the paper.

In the early '90s, there was a very successful company. It relied heavily on IT systems for inventory management, marketing, distribution, sales, finance, and other organizational functions. The company became increasingly concerned with the growing size of the central IT team and the general frustration of long lead times for new software. The feeling across management was, we've got all these IT people yet we're not getting IT software that's functional and quick into our organization.

The problem was that the internal IT team was spending 80% of its time maintaining these old, outdated legacy systems. There was a lack of user orientation, and so the organization did what organizations do—engaged consultants to help them develop a new strategy. This new strategy recommended a structured software development methodology, which was increasingly popular at the time.

The organization did a pilot project and sent all 60 IT staff on a two-week training course. There were lots of early problems with the implementation. The processes were complicated and difficult to follow, but there was a lot of organizational attention on the project—steering committees and management oversight. There was a big focus in the IT teams on getting the documentation and all of the diagrams right and making sure things were all done in the correct order.

Further problems developed. Projects were slowed down even more by following the methodology, and people were seen to be following it in a blind and somewhat mechanical way. People were investing even more time in the diagrams and the presentations than in the actual project work. In some people's minds, the project teams had lost sight of how the system that they were actually designing was going to be useful in the organization.

Drew: David, can I jump in for a moment? Because there's a particular part of this process where I just loved the paradox, or revenge effect. One of the problems that spurred the introduction of the methodology was this feeling that the IT department was not responsive to users. All of these new systems were being developed without thinking about how people were actually going to be using them, and without engaging with the users about how the existing processes worked. This is something the new method was going to solve—it was a method structured around user engagement.

But the biggest thing that people were complaining about with this new process was just how mechanized the interaction between the developers and the users had become. The developers were using the process as a way to avoid engaging with users. Instead of talking to them, they were sending them diagrams for review. The users then had to learn how to interpret these user engagement diagrams and put comments on them, which of course, they didn't have time to do. Other departments specially allocated people to work with the software team.

It became your job to be a liaison between the users and the software team. Those people in the liaison roles didn't understand what either the users or the software people were doing. The software people were getting fed up with these pseudo-users, and still, we weren't actually getting stuff that was fit for use by the users.

David: Yeah, Drew, it's a great point. This idea that we're trying to solve a problem of engaging with people by creating administrative processes and, like you said, sending things out for consultation and sign-off. On the surface, it looked like a whole lot more engagement, because all these requirements were signed off, all these designs were signed off, and all these emails back and forth were occurring. But users felt that the politics of schedule and pressure were just forcing them to sign things off to keep projects on time and not be holding things up.

Interestingly, Drew, after a lot of frustration and challenge in the year after the late implementation of this process in this case study organization, two further projects were implemented using the process in [...] and were successful. But it's interesting that, on observation and review, these projects were actually only paying lip service to the methodology. They were actually implementing an entirely different, more engaging, user-focused approach and just ticking the boxes on the process along the way.

Interestingly, within two years, the process was completely abandoned, the head of information technology for that organization was replaced, a new consulting engagement was undertaken, and a new approach was designed. We don't know the story of that new approach.

Drew: One more detail I just want to throw in, David, which just rang so true to me from work in large organizations. It was a multi-year project. It's running six months late. The project still hasn't finished and the organization decides to conduct a review of the process. As well as trying to finish the project, everyone is involved in a review of the process used to deliver the project. Of course, what everyone is saying is, of course, stuff hasn't been written yet because the whole point of this process is not to get straight into writing the code. It's to properly understand the requirements first and to make sure that we've got things structured upfront. 

Everyone is complaining that they're not writing code, and everyone is defending the process by saying that the whole point is to delay writing code until you've got things right, instead of just jumping in, which is what we always did before. Everyone is viewing the success and value of the process basically based on their prior assumptions about whether they like the approach that the process is following or not.

I think it's pretty clear, reading between the lines, that this implementation review was designed to kill the new process, because people hated it so much. It was like sabotaging the project midway and never giving it a chance to succeed. It possibly deserved it, though.

David: Going back to last week's paper on the garbage can model of organizational choice, the review created an organizational choice opportunity. The first main part of this theory-building section is called Methodology: The Lionization of Technique. I think, Drew, we talk in safety sometimes about weaponizing safety management processes and tools, and that's really what this section is about.

It starts by talking about this general belief in organizations today—when I say today, I'm talking 25 years ago, but I still think a lot of what's said in this paper is very relevant now—this great belief in the power of methodology, especially in software development, and I'd argue in safety as well. I mean, hey, we have endless safety work in our organizations and industries. For every aspect of software development, there lurks a cunningly devised implement or tool of some kind that a practitioner can use and follow.

This belief in the magical qualities of methodology is a chimera. I didn't even know what that word meant; I had to Google it. Apparently, it means that thing which is hoped for but is illusory or impossible to achieve. A methodology can probably never give us, repeatably, exactly what we're after.

Drew: This idea of the magical powers of method just falls right into a couple of real-world things that I've run into a few times. The first one is like just doing risk assessment. We've got this real need for people to understand the risks of work that they're about to do. We're so frustrated that people don't reliably do that. It would just be fantastic if there was a process that we could give people that would ensure that they always properly appreciate the risk, but there is no such process. 

There are plenty of people willing to invent new variations on it, and so we have this proliferation of solutions. But the mere fact that we have so many solutions to that problem suggests that none of the individual solutions actually solve it. But we've just got such a desire.

The other similar thing is in research, people want to know how to turn the handle and do research on something that fundamentally just requires you to be innovative, to think really hard, and to come up with new ideas. You see that even in things like literature reviews. People want a formula for doing a literature review that will let them know that they've done it correctly and let them just follow the process and do it. 

Whereas in fact, doing it well just requires this much more uncertain process of never knowing if what you're currently reading is relevant or not, never knowing if you're complete, and never knowing if there's another paper out there that maybe you should have read or should have cited.

David: Drew, I'm going to keep my powder dry [...]. I want to use that later in this episode as an example when we actually get a few more ideas out of this paper because I think they all come together nicely with that example. 

There’s a quote right here in this section that says, "Real problems require engagement with the task at hand. Turning to methodology can be an elaborate way of avoiding problems and not solving them." Almost like saying, if we want to understand and improve the safety of work as done, it requires real engagement with work as done. There's no abstract safety work process that we can use, follow, or apply to actually solve that problem. 

It goes on, I suppose, Drew, a little bit like our manifesto paper on Reality-based Safety Science that we talked about in episode 20. The question is kind of like: where's the evidence that all these methodologies we rely on, depend on, and believe in so heavily in organizations actually have any efficacy in doing what we want them to do?

Wastell calls out this large lack of empirical evidence around these structured methods that organizations use, and concludes that they seem to have more qualities of religious convictions than scientific truths. However, the research that Wastell does refer to says that practitioners report very favorable attitudes to methodology. Safety people are strong advocates of safety management systems. IT teams are strong advocates of Agile project management methodologies.

There's this unanswered question in this paper—I suppose Wastell answers it in his own way, but it's also a big unanswered question—why do practitioners have such belief in these methodologies? He does answer it in the next couple of sections, Drew.

Drew: One thing he does point out is the hypocrisy that these methods themselves are designed to look and sound scientific. It's really weird that the technique demands that you have a process to follow, but the development and implementation of the technique don't follow an evidence-based process. He doesn't call out any of his colleagues in the software engineering or IT academic community. But I think you could very well do that.

They're supposed to be academics, they're supposed to be scientifically minded, and they're producing these things that look scientific, but they're not themselves following a scientific process of test and evaluation—checking that the things actually do what they say on the tin. We see the exact same thing in safety: we have these methods that look rigorous, but no one's applied any rigor to developing and testing the method itself.

I think a great example here is behavioral safety, which is designed to look, sound, and feel scientific. That's why it's so attractive. But no one has done the science around testing whether behavioral safety works.

David: I think, Drew, there was a very early episode we did on behavioral safety—it might have been episode 1 or thereabouts. And then I think somewhere in the mid-50s we did an episode on fads and fashions in management practice. There are a few episodes in the back catalog where we talk about this organizational belief in some of these methods and what drives it.

Drew: In fact, one is actually called scientific management. People like thinking that they're being scientific. People just don’t like actually being scientific. 

David: Yes, sometimes. Anyway, in this case study in this paper, there's a quote that says, "By March, we had been doing this for nearly two months. I couldn't see how it was helping us in any way, but the manager kept reassuring us that it was. He said we should all trust the methodology and then it would all work out in the end. Of course, it didn't." [...] The effort of implementing processes in an organization is enormous. It often involves training hundreds of staff. In this paper, only one of the four companies that were reviewed was still using this methodology two years after its implementation.

All of that effort in reorganizing, upskilling, retooling, and redefining business processes is a huge sunk cost in something that never did what it was promised to do. I think, Drew, that leads to the next question in the paper: why do companies continue to invest so heavily in these processes when there's little evidence that they're working? Drew, I was going to turn to where Wastell proposes the answer. Are you ready to do that yet?

Drew: Absolutely. So we're talking about why people are not scientific.

David: This is where, I suppose, Wastell had the advantage of being a cognitive neuroscientist and understanding psychoanalytic theory. He leant on this idea of social defense. He proposed that, in relation to this issue, methodology, far from being a rational tool to facilitate systems development, in practice often functions as an elaborate device for avoiding the painful challenges posed by information systems projects. He's sort of saying that if we've got a method, if we can align on the method, and if we can believe in the method, then it kind of stops us worrying about just how difficult, complex, and uncertain this project actually is.

Drew: David, I think just before we go further, we’re using a lot of examples out of safety, but it's probably worth just saying, what is the actual painful challenge for this organization in the case study? I think one of the key things to carry through here is that we've got a genuine problem that is very hard for a large IT development organization to engage effectively with the people who are using its products even when they're inside the same umbrella organization. 

They don't speak the same language, they've got different interests, and they've got different concerns. The organization is too big for every developer to go and spend time actually working with the users. The users are inconsistent. The users don't want all the same things. The users complain about different things. One user says do one thing and then the next week someone else says another thing. This is a genuine painful difficulty and uncertainty faced by trying to develop a software product.

Users' requirements are inconsistent. They are changeable. They are hard to discover. We would love our process to be able to formalize that to get us a way of getting those user requirements properly understood, locked in, and explained in a language which makes sense to the software developers so they can just go away and develop. That's ultimately what the goal is, that's what the desire is, that's what the chimera is.

Now the question is—given this complexity, given this difficulty—why do we think that the process is going to fix that?

David: Drew, that carefully described problem that this organization in the case study was facing is very aligned with the challenge safety practitioners face with frontline workers, who are exposed to the real risks of the work they perform every day—the constantly changing requirements, the difficulty of not being able to talk to everyone in the organization, and the different languages that frontline people and safety practitioners speak. I assume that was a carefully considered description of that case study?

Drew: David, it wasn't actually, but I think that's just a universal about the sources of uncertainty in organizations. That is one of the big things that everyone is anxious about, whether it's safety, management, or HR, is just how unknowable frontline workers are and how inaccessible it is to people who need to design big systems and big products to support that frontline work. It is a genuinely hard problem that all organizations face even if you’re not in safety. 

David: The way that we turn to it even now, a lot of the time in our organizations, is the same way that it was seen to fail in these case studies: send out a finalized document to the user group, give them a week to review it, have them sign it off and tell you that they've got no problems with it, and then keep moving forward. That's a little bit how we still approach safety management systems in some organizations today.

Drew, this section—and I suppose it's the important section of the paper—is about methodology as a social defense. Wastell said that software development is far from a deterministic engineering process. I suppose everywhere we go through here, where we say software development or information systems, it would be fair—if you're a listener to [...] and safety, which I assume most people are—to just substitute that for safety management.

Drew: I think it's worth not drawing a binary between a deterministic engineering process and a [...], because the work they're drawing on basically says there's a whole bunch of different metaphors you can apply, and none of those metaphors are perfect. They each give you a bit of insight. One of the metaphors you can apply to software development or safety is as an engineering process. It's like a machine that you need to design and run. It can explain some aspects of it. It's a useful metaphor sometimes.

Certainly, when we look at things like STAMP, feedback loops, and hierarchical control processes, that is a useful way of looking at it. But there are lots of other metaphors you can deploy. I love the fact that he calls out the journey metaphor, which we use all the time in safety.

He also talks about warfare metaphors—it's all about conflict—and a zoo metaphor, a family metaphor, a game metaphor. He's drawing on another paper which talks about understanding organizations by using all these different metaphors to look at what's going on. Then he adds in one extra metaphor, which is the metaphor of a psychic prison.

David: Gareth Morgan in the mid-80s referred to organizations as a psychic prison. I guess this was an early exploration of the challenges of psychological safety within an organization and what's acceptable and not acceptable. There's a quote here that says, "Many aspects of group organization and group dynamics need to be seen first and foremost as an elaborate system of social defenses erected to contain anxiety, their apparent rationality and task orientation is a facade, a rationalization." He's saying that many, many processes, methods, and interactions in our organizations are performative rather than instrumental.

Drew: I think that idea of an elaborate system of social defenses erected to contain anxiety works for everything from performance reviews to trying to talk to the boss at the Christmas party.

David: Leadership visits, behavioral observations, audits, incident investigations—there are lots of things we do in safety that our listeners can reflect on as they read through this paper, as we've been able to. By social defense, the author means a persistent, institutionalized pattern of social activity that primarily serves to reduce feelings of anxiety and provide a feeling of warmth and security.

Drew, we cited Erik Hollnagel in 2015 in our Safety work versus the safety of work paper, where he said, "Organizations need to both feel safe and be safe. However, often the former gets in the way of the latter." I think that's exactly what Wastell's trying to bring into these organizational dynamics from psychoanalytic theory here: that the feeling of safety is the primary concern of people within organizations, rather than necessarily being safe.

Drew: Yeah. We're stuck with the multiple meanings of the word safe here. I think Hollnagel is actually using both feel safe and be safe with the same meaning. Whereas with the social defense, it's not about feeling free from accidents. It's feeling free from social criticism. We've been talking about this in the lab for the last couple of years, trying to get the language right, because a lot of organizations use the idea of legal defense almost like a metaphor.

You'll see people doing safety activities and they say, this is so that we don't get sued, or so we have evidence. It doesn't make sense as a legal defense. People do things which are irrational from a legal point of view that they say are for legal reasons.

I think that the legal is a proxy. It's not that they actually fear being prosecuted or actually fear being sued. They fear something more nebulous, which is like doing the wrong thing. It's that feeling of safety—the feeling that you're doing the right thing, the feeling that you're not mismanaging things, that you're doing things appropriately, that's what people are trying to manage.

David: Yeah, that's a great point. I had in my summary here that—maybe it's the cynic in me—it's a very locally rational priority for people in organizations to feel like they're managing well. Their own anxiety and their own personal security may be of more primary importance to them than actually waiting for their management outcomes to tell them whether they are or are not managing well.

Drew: I think that's exactly right. Even if there are no negative outcomes—no rational fear of your manager criticizing you, of being caught, or of something going wrong—you can still feel anxious about whether you're doing things properly or appropriately.

David: There's a lot of sense in this paper, so we'll just share another quote here, which says that for these reasons, these methodologies become really elaborate, really large bureaucracies.

"Bureaucracy, the epitome of the rational organization, is all too often a highly sophisticated social defense. Bureaucratic systems, while parading as efficient procedures, actually waste resources. Excessive paperwork helps contain the anxiety of face-to-face communication; excessive checking and monitoring reduce the anxiety of making difficult decisions by diffusing accountability." It's: I'm doing this because the audit requires it, I'm okay because the audit said I'm okay, I'm okay because I've got all of these procedures in the organization, or I've got these processes so I don't need to actually talk to my people. I guess that's what this is…

Drew: This wasn't just me deciding something. This was approved by multiple people through a system of checks and balances therefore it was a good decision. 

David: There are a few papers that seemed like a bit of a series. There's a 1993 paper, Drew. This is the '96 paper, and there's also a 1999 paper where Wastell actually provides his own process for how software systems should be managed inside organizations. 

In the 1993 paper, they're arguing that systems development is a really intensely stressful process. There are exacting technical and political pressures on practitioners. It sets up this chain of anxiety amongst actors. It’s argued that this anxiety chain plays a critical role in shaping the dynamics of the development process. 

This is the idea that there was lots of pressure on these IT projects at the time, lots of uncertainty around them, lots of jobs and reputations on the line—so there's a lot of push to follow a process, so that the process can be blamed rather than the people.

Drew: David, he doesn't say this directly in the paper, but I was wondering with the analogy with safety whether what they have in common is these really long feedback loops. If this project is going to take two years to develop, then ultimately, at the end of the project, it is going to be a success or failure and that's stressful. But what's most stressful is that it is two years away. How do you know what you're doing now is safe? How do you know what you're doing now is appropriate? Or how do you know now that you're developing a good system and it's not going to be a waste of time? 

You can't get feedback against the overall result, but you can get feedback against "I'm following the process correctly now." You see that with safety people and their resumes. You can't say, I developed a system and for 10 years it didn't kill anyone. You can say, I developed a system and it achieved regulatory approval. I followed the process. The process was successful in that much tighter feedback loop.

David: I think there are those proxy measures, Drew. I think you're exactly right. In safety, because the outcome is so uncertain—do we even know how to prevent the next disaster? Do we know where the next fatality is going to occur?—it's very rational for people to go, I don't know if I can achieve these injury targets. I don't know if I'm going to have a fatality or not.

If I've done all these leadership visits, all these audits, I've got all these procedures, I've got all these meetings, all this training, and all these safety people, and I've done all this time, then no matter what the outcome is, I'll be seen to have been doing the right things. Whether that's a conscious or an unconscious view, I think that's all part of it. The activity itself becomes a proxy for good management in the absence of actually knowing what the outcome is going to be.

Drew: It's one of the things we see with Ph.D. students as well. David, you had a fairly stress-free Ph.D. process, but I don't know how much you experienced just how uncertain it is whether you're on track or doing the right thing. One of the most common questions Ph.D. students ask is just, am I doing okay? That's one of the reasons why we have things like milestones.

I can't promise you two months in that you're on track, or that in three years you're going to produce a good Ph.D. that's going to pass. But I can tell you that your next milestone is your early candidature milestone, and yes, you are following the right process and producing the right things to make it through that milestone. The existence of a methodology gives us all of these proxy things that we can measure against. In fact, Wastell goes on to talk psychologically about how to think about these proxy things.

David: Drew, I think that's right. I think a Ph.D. is a little bit like safety in organizations, in that you're told you need to submit something in four to eight years' time and it needs to deliver an outcome. There are two little stepping stones on the way, which are, again, probably more ritual than instrumental in your progress. It's very much one of these organic processes where knowing whether you're doing enough, and whether you're doing it well, is really hard.

Coming back to social defense, Wastell pulls from psychoanalytic theory and says the literature points at three forms of social defense against anxiety: basic assumption behavior, the covert coalition, and organizational ritual.

I wasn't going to go too much into the first one, but basic assumption behavior is like fight or flight, a stimulus-response thing. You can have a social defense by either attacking something or running away from it. The covert coalition is like: I'm going to create a social defense by creating this little hidden mafia inside the organization, so I know someone will protect me—it's like family. That will be my social defense. Then the third one, which is the focus of this paper, is organizational ritual. Do you want to talk any more about the other two, or just stick with the paper?

Drew: No, I think both of them are really interesting. If you're reading the paper, I'd recommend you take a little bit of time to think about them and apply them. I recognize elements of both in my own work, and even in people's responses to the methodology going wrong, I can see some of those other social defenses coming into play. Methodology is just one way of creating a social defense. There's also blaming other people, which is the covert coalition, and that's what people do when the methodology fails. But organizational ritual is the main focus, so let's just keep following that.

David: I think we also need to remember the timing of these papers. This is before the whole body of literature on institutional logics. Some of these ideas around organizational ritual have been expanded quite significantly in safety science in the institutional logics space. We did an episode—I think it might have been somewhere in the mid-30s—on institutional logics.

The original work that this drew on involved nurses—an observation of nurses objectifying patients, a seemingly heartless and impersonal execution of nursing protocol. It would be like, can you go and check on the leg in bed number 10, rather than using the patient's name. The research concluded that the psychodynamics of this approach enabled nurses to deal with the intolerable anxiety of working with sick and terminally ill people day in and day out.

There were these linked mental processes, categorized as splitting, projection, and introjection. Without going into any more detail, I suppose, Drew, Wastell lifted this whole psychoanalytic category of social defenses and dropped it onto the patterns he was observing in these IT project teams.

Drew: Yes, for the nurses, it was things like: I can't stop the patient from being in pain, but I can appropriately follow the pain management protocol, and that makes me feel okay about doing my job. Then for the IT teams it's: I can't actually give the users what they want, but I can follow the user consultation process. It turns something that's impossible and unachievable—but still anxiety-inducing and worrying—into something you can actually manage.

Nurses don't like seeing people in pain. Software people don't actually like giving users stuff that the users hate, but you can't do anything about that. You can do something about making sure you properly follow the process. 

David: I think the way that applied in the IT projects was: splitting, which is "the responsibility for doing this is not part of me, it's not my decision"; projection, which is "if something fails, the methodology is responsible"; and introjection, which is "the methodology says that things must be done this way." It was all about separating the individual from the responsibility for the way things were done, and also from the outcomes of the way those things were done.

Drew: Don't yell at me for sending you these emails. The process tells me I'm supposed to send you emails at this particular point in time. 

David: We get it sometimes in safety inductions. You'll get it when a site manager gets you out to the site and goes, I need a few moments of your time—the management system requires me to take you through the safety induction, I'm sorry. That's the sort of introduction you get. Okay, well, this is going to be a compelling introduction to safety at the site.

Drew, another section now. Are we ready to move on? We've got another chunk, and I think we've run a bit short of time, so we might speed up a little. The heading of this section is The Psychodynamics of Learning: Teddy Bears and Transitional Objects. Drew, do you want to get us started with these transitional objects and teddy bears?

Drew: For the sake of time: when you're growing up, you're attached to your parents. You can't stay attached to your parents forever; you've got to detach and become independent. A transitional object is something that you use in between being attached to your parents and being independent—like a teddy bear. You become attached to the teddy bear instead of the parent, and that lets you be a bit more independent. Eventually, you get rid of the teddy bear.

The trouble is you can get stuck and you can find yourself 25 years old still clinging on to your teddy bear. The transitional object has become permanent, and that's one way of looking at methods. David, I think we can just jump straight to talking about take 5s now.

David: Maybe the way I'd give this example is to say there's a belief that if a person does a pre-task hazard identification and planning activity, then the reliability, safety, and efficiency of that task are improved. If I go, okay, I need to change this light bulb—what do I need? I need a light bulb, I need a ladder. Where am I going to position the ladder? How am I going to carry the light bulb? What am I going to do with the old one? What are the hazards? How am I going to solve all of this?

There's no question—in sport, work, or any kind of task—that some sort of mental rehearsal or planning process can improve safety and task performance. So we go, okay, we want our people to do this before every task, but we're not confident that people are capable of doing it, so we give them a checklist. We don't know if you know what all the hazards are. We don't know if you know how to do a good planning process, so here's a checklist.

Then we have people do the checklist for 10, 15, or 20 years and wonder why they're treating it as a compliance activity. Drew, that's my starting point for this example. Do you want to step off from there?

Drew: The thing that we're forcing people to do is supposed to be a transitional object. It's supposed to be a framework that takes them from not mentally rehearsing to mentally rehearsing, but it's possible to get stuck on the transitional object. When you do, it often stops fulfilling its purpose. If you're 25 years old and still carrying your teddy bear, it hasn't served its purpose of giving you full independence. It's just substituting one type of dependence for another.

The same thing with the risk assessment framework. If you're still using the framework, then that's almost evidence that you never actually achieved the purpose, which was to start spontaneously, independently thinking through the task before you do it. I honestly don't know whether this is true of teddy bears or not, but you can have transitional objects that don't serve any of the purpose they're supposed to be leading you toward.

You can be doing that framework, using the transitional object, and getting none of the risk assessment, none of the mental rehearsal. It's just become totally detached. The transitional object becomes split from the thing it was supposed to help you toward in the first place.

David: To be really concrete with that example, I think we see that in lots of safety processes, where we confuse whether the safety process is designed to build capability, expertise, and competence in the people undertaking it, or whether the safety process itself is designed to produce the outcome. Processes can't produce the outcome. I think that's the point here: it's either a skill-building activity or it's a compliance activity.

This links back to some of Rasmussen's work on rule-based and skill-based work. When we have novice workers and new workers, rule-based supports—processes, handrails, and checklists—can be useful. Our goal there should be capability building. Our goal shouldn't be about creating outcomes.

Once we've got that capability, then the question becomes whether we should remove the process. Some sites—based on my advice, or maybe just my musings—now only require written take 5s to be done for a person's first six months on that site. After that, they can choose whether or not they want to use the form. They're experimenting with whether that creates an intense focus on capability building for the first six months, and then relies on expertise and mental processes from there on.

Drew: David, I probably shouldn't mention at this point that [...] now have a deadline for our take 5 paper. It's due in a couple of weeks, so this episode recording is well-timed. I don't think we've got a good solution in the paper yet, but we do point out that the problem is that this social defense thing is real.

Getting people to use it for the first six months as a learning tool solves that problem of the transitional object, but one of the problems is that there is this need for a social defense around risk assessment. If you take away the take 5s, yes, you've taken away something which is not achieving anything for safety, but it is absolutely helping people with that organizational ritual and social defense. That is a real genuine psychological safety need, even if it doesn't do anything for physical operational safety.

David: I think it's worth having either an answer for that need, or a way of removing the need for that social defense at those very proximal work processes. Drew—and I promise you, listeners, I will keep on Drew's case until that paper is published, and we will get that take 5 episode done in the very first recording session after it's published—is there anything more you want to say about that before we start working towards wrapping up?

Drew: No, I think the paper then moves on to quoting one of my least favorite scholars of research methods, but he's very, very quotable. This is Paul Feyerabend. People who are deep into social science will love him. People who are much more proper in structured research methods will hate him. 

This is a quote from the paper and then there's a quote from Feyerabend. From the paper, it says, "All stories have good and bad guys, heroes and villains. Methodology is clearly the villain of this piece. The aim, however, is not to prove that methodology is always bad, but rather to argue that it has the potential to operate malignantly, and to discuss how this corrosive influence is exerted in terms of cognitive processes and organizational dynamics. The author admits to some partisanship and readily acknowledges that his position is, on the whole, 'anti-method'." Just quickly, before you do the quote: Paul Feyerabend's book is called Against Method.

David: There you go, Against Method. The quote says, "It is clear that the idea of a fixed method rests on too naive a view of man and his social surroundings. To those who look at the rich material provided by history, it will become clear that there is only one principle that can be defended under all circumstances and in all stages of human development. It is the principle: anything goes."

It's interesting that there's also a quote in this paper that we haven't mentioned, from 1829, the early industrial revolution, which essentially says: that's the end of people getting to make decisions in organizations—there's a tool, a technology, and a procedure for everything. That was almost 100 years before scientific management. This methodology has been burned into our organizational psyche for a long time.

Drew, takeaways? I've dropped a couple of incomplete conclusions here. Do you want to kick us off with some things that you think might be some takeaways?

Drew: Okay, I think it's important to start with the non-takeaway takeaway, which is that it's really easy in this work, which critically analyzes methodology and bureaucracy, to come away with an anti-method, anti-bureaucratic attitude. We certainly know and have done episodes about authors who use this work as ammunition in their war against safety bureaucracy. 

The reason why I was interested in the safety clutter paper and the reason why I like this work is I see this sort of critical analysis as really very sympathetic to organizational ritual. It doesn't exist for the reason we think it exists, but it exists for really good, really interesting reasons. This is not a rant against software development methods. This is an explanation for why we have software development methods and why people liked them so much even though they don't work. 

I think if we want to make good use of this, then we need to crusade against dysfunctional methods and dysfunctional things in organizations using this insight—the understanding that people don't follow the method because it works. Telling them, why are you following a method that doesn't work, isn't going to help. You've got to understand that people follow the method because it reduces their anxiety.

If you want to fix the method, if you want to improve the method, you've got to do it sympathetically to that anxiety. I think this is where we're going with take 5s, because you can't just rip out take 5s because they're nonsense. Yes, they are nonsense. But you can't just rip them out. It doesn't work. They'll just come back. People will use other forms to write down the same information. You've got to go into removing them sympathetic to the social defense role that they play.

David: I agree, Drew. I think the practical takeaway is about the complexity of our organizations, the dynamics, and the decision-making processes. If your organization isn't psychologically safe enough for people to be independent, then they'll find another teddy bear. They'll keep finding more teddy bears every time you take one away. I think that's what this paper says: understand the connection between your methods, your outcomes, and psychological safety.

One way of testing this is just to ask a few operational managers in the business what they really think of things like incident investigations, audits, and risk assessments. The organizational narrative will be: oh, it was really great to have the auditors in. They told me some things about my business that I didn't know. They gave me some really meaningful things—that was a great program. That'll be the narrative the operations managers tell senior management and the board.

I suppose it's a good test of your relationship with them to see if they tell you a different story. Do you really think we learned anything from those audits? Do you really think we need to improve anything? Then you can sit back and go, okay, like Drew said, these things are serving a purpose, but it's not creating safety of work outcomes in my business.

Drew: David, I've got to tell you another story, but I promise you this is going towards that takeaway. All of my kids have had transitional objects. One that two of them had in common was a blanket. I won't tell you which kids—I've got three, so we'll leave it a little bit anonymous. Two of my kids had blankies, and we learned from the first one just how dangerous it is to have a kid reliant on a blanket. What if you lose the blanket? What if you forget the blanket? What if the blanket's not available? What if they go somewhere where it's embarrassing for an 18-year-old to have a blanket?

The solution we came up with was micro-blankie. The idea is that while the kid is still using the blanket as a transitional object, you introduce a couple of very similar but much more lightweight objects that serve the same purpose and can be substituted, and you train the kid to swap blankie for micro-blankie.

I think that works for organizations as well. The problem with these processes is not that they exist. The problem is that they're so cumbersome that they take your time, attention, and effort away from what we're actually trying to achieve with them. If we could satisfy the social defense with something that's just easier, less work, and less frustrating—in the understanding that it's only there as a social defense, not because it helps—then we can do both. We can satisfy the need, and we can let people spend their time and attention on things that are important.

It may be less about removal and more about substitution—making sure the substitution meets the need rather than adding extra processes. Otherwise, we just risk substituting one process for another: you take away people's structured software development process, and now they're having scrums every day, talking about their agile processes, going on agile training, and complaining about how they're spending so much time on their agile process that they're not actually engaging with users.

David: Drew, to make it practical and carry on the take 5 example: people could tick one box instead of 15—a box that just says, I did my mental pre-start risk assessment today—tick the box, initial, date, done, and get 15 days on a page as opposed to one day on a page.

Drew: Yep, and it may in fact fill the same psychological purpose for less work.

David: Perfect. For practitioners, another call-out: be really careful about this whole idea of form over substance. Paying attention to the safety of work won't be new for our listeners, but I think this paper really carefully disconnects methodology from organizational outcomes in a lot of ways. So it's really important for us not to be the ones going around the organization screaming about the importance of following these methods to create safety outcomes, unless we've got some confidence in how those methods are being applied and what they're actually doing in our business.

Drew: One more takeaway, David, that I'll throw in just for our research listeners. Turn up to seminars and read papers that are totally outside your own field, because if nothing else, this paper shows how they provide useful metaphors that you can apply and translate. Wastell's work is a great example of using metaphors from cognitive psychology and psychoanalysis to better understand organizational rituals.

I've seen similar work that uses metaphors from software development to understand organizations. Just reading left-field work—reading stuff about nurses and their patients—can give you different ideas to bring in, to help you understand and challenge your assumptions about how you think of things.

David: Institutional logics, organizational dynamics, and social patterning—these things aren't unique to the safety world; they show up in other fields like IT and project management as well. Safety is a genuinely transdisciplinary science, and we'd probably argue that sometimes we can get more from papers in different fields. At least we'll continue to do a few of them on the Safety of Work podcast.

The last point I was going to make is that transitional objects are there to build capability and independence. That's what transitional objects do. If they persist after that competence and independence are established, then they'll be a compliance activity and nothing more. There's maybe another paper we can do at some point, one I stumbled across, called The Corruption of Role and Task, which talks about how organizational dynamics will always corrupt intended roles and intended tasks inside organizations.

Drew: Great takeaway, David. The question we asked this week was, when is methodology more important than the outcome? The short answer is?

David: I guess methodology becomes more important than the outcome when we're genuinely using a method to build capability in a particular activity within our organization, and methodology also becomes very important in low psychological safety environments, for those purposes you mentioned earlier, Drew. Beyond that, the methodology doesn't seem to be important for generating the outcomes that methods are ostensibly there to generate.

Drew: That's it for this week. We hope you found this episode thought-provoking, and hopefully somewhat useful in shaping how you think about the safety of work in your own organization. As usual, you can contact us on LinkedIn or send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com.