The Safety of Work

Ep.88 Why do organisations sometimes make bad decisions?

Episode Summary

In this week’s episode, we tackle a topic that may or may not change the way you think about solving problems in an organisation. We delve deeper into an interesting paper on organisational decision making called A Garbage Can Model of Organizational Choice, written by Michael D. Cohen, James G. March, and Johan P. Olsen.

Episode Notes

While this paper was written over half a century ago, it is still relevant to us today - particularly in the safety management industry, where we are often responsible for offering solutions to problems, and implementing those solutions requires decisions to be made by top management.

This is another fascinating piece of work that will broaden your understanding of why organisations often struggle with solving problems that involve making decisions.

 


Quotes:

“Decisions aren’t made inside people’s heads, decisions are made in meetings, so we’ve got to understand the interplay between people in looking at how decisions are made.” - Dr. Drew Rae

“Incident investigations are a great example of choice opportunities.” - Dr. Drew Rae

“It’s probably a good reflection point for people to just think about how many decisions certain roles in the organization are being asked to be involved in.” - Dr. David Provan

 

Resources:

Griffith University Safety Science Innovation Lab

The Safety of Work Podcast

The Safety of Work LinkedIn

Feedback@safetyofwork.com

A Garbage Can Model of Organizational Choice (Wikipedia Page)

Administrative Science Quarterly

Episode Transcription

David: You're listening to the Safety of Work, episode 88. Today we're asking the question, Why are organizations sometimes bad at making decisions? Let's get started. Hey, everybody. My name is David Provan, I'm here with Drew Rae, and we're from the Safety Science Innovation Lab at Griffith University. Welcome to the Safety of Work podcast. 

In each episode, we ask an important question in relation to the safety of work or the work of safety and we examine the evidence surrounding it. For the last few episodes, we've been discussing some older papers that we think are still very useful for understanding current safety science. 

Today, we're going to talk about one of Drew's favorite older papers. Drew, I remember when I first started my Ph.D. you gave me about five or six papers that had nothing to do with safety, or so I thought at the time. One of those papers is the one that we'll discuss today, and another we'll discuss in the next episode. I'm not quite sure why you picked those five or six papers because they were very left of field at the time for me, but on reflection, I think it must have been to expand my worldview and maybe test out how open to new perspectives I was.

Drew: David, I think every paper in that list is actually something that Rob Alexander made me read. I think it's just that when you find a good paper that tells you something interesting about the world, you want other people to read it. And the more different ways of looking at things you've got, the more power you've got yourself to both write interesting stuff and to avoid copying other people's ideas and just rehashing the same old things.

David: Yeah, and also Drew, I think for me with this paper about organizational decision making, I'd spent 17 or 18 years immersed inside organizations and decision making processes without really ever stepping back and trying to question, understand, or make sense of how those decisions got made. I was just usually happy or upset at decisions inside the organization. Let's jump in, and I'm not sure if this paper is open access, but I'm sure it wouldn't be too hard to get your hands on. Let's jump right in and discuss the paper.

Drew: Sure. The paper is called A Garbage Can Model of Organizational Choice. Right off the bat I just love the title. Anything with a good title is going to get me to read it. You can find it online. It's not technically open access, but if you just search for the title and PDF, there are lots of places where it's easily available without going through a paywall. It's published in a journal called Administrative Science Quarterly.

I've never had something published there myself, but it's one of my career goals. I'd love to get something published in ASQ. It's a top-tier journal. It's really interesting for safety researchers because you see stuff published in ASQ and 20 years later, someone puts safety in front of the name and it gets published in Safety Science.

David: I remember, Drew, we had a crack at ASQ with my professional identity paper, which was really interesting applied research. It was a new field in the sense of a new profession that the theory was being applied to. The associate editor didn't even send it through peer review, just came back and said there's nothing new in this. It's a new application of theory, but there's no new idea here, so maybe try a safety journal.

Drew: Yeah, and I think that's the way with a lot of safety researchers. We're just applying ideas that have been previously discussed in other fields sometimes. This one was published in 1972. The authors are Michael D. Cohen, James March, and Johan Olsen. As far as I can tell, all three of these authors are famous because they were authors of this particular paper. They've all got their own Wikipedia pages, and the paper got its own Wikipedia page and has been cited 13,000 times, which is a lot. David, anything else you just want to say about the paper itself?

David: No, I think it's a different sort of paper, as we'll see on the way through. It's a mix of theory and modeling, which is a little bit different from our normal safety papers. Drew, do you just want to talk a little bit about that format, because it's something that I don't think we've had in a paper on the podcast yet?

Drew: No, we haven't done a modeling paper really on the podcast. You don't get a lot of them in safety, but you see them a lot in economics and sometimes in criminology. The basic idea is you have this theory for how the world works, and you turn the theory into as close to a mathematical model as you can. You then run the model and if the model produces results which match what you see in the real world, then you can claim that your theory and your model are reasonably representative. Then you have fun with it. You put in new parameters, new situations, and you see what happens. 

A lot of economics is like that. When people do economic forecasts, they've got these models that successfully predict the past, because we know what the past looks like. Then they use them to say what they think things are going to be like if we change the interest rates, or what things are going to be like if we introduce this new tax rebate. We've got a model that gives us the correct answers so far, and we think, therefore, it'll give us the correct answer in new situations. 

One of the reasons why this paper was so famous is because in 1972, most people didn't know what a computer was, and these people were taking their mathematical models and creating a computer program. It's written in a fun computer language called Fortran, which I had to learn for my engineering degree, but which is mostly a dead, archaic language these days. The novelty was using computer programs to do your modeling. Today, the computer program itself is woefully out of date and seems very quaint and antiquated, so we're going to focus on the theory in our discussion today.

David: Drew, it's half a century ago, but it's maybe a nice tie in to the discussion last episode, Episode 87, on cybernetics that we had within the systems theory discussion about can we create a computer that can replicate decision making? In this situation, it was a kind of organizational decision-making. 

Drew, I thought we'd give a bit of the backstory here, because it's always interesting how these collaborations come about, particularly for such a famous paper grounded in observations of the real world. We'll talk about all of this a little further throughout the podcast, but at the time, a lot of decision-making theory was being done in psychology and economics. It was sort of at this intersection of, what's the rational choice? 

The economists looked at rational choice around decision making, and the psychologists at the motivation or emotion around decision making, and both were very much about individual decision making. But organizations are not people. Sometimes I think our listeners might reflect and go, I'm not even sure who in the organization made this decision, but suddenly, a decision has been made and we've got a way forward, and it can be very hard to pinpoint who made that decision. 

How this paper came about was that Johan Olsen was a doctoral student at the University of Bergen in Norway. He came across to the University of California, Irvine, as a visiting scholar for a couple of years in the late ‘60s, and James March was both the Dean of Social Sciences and a Professor of Psychology there through the end of the 1960s. 

Coinciding with this visit, all of these scholars were present at the right time, in the right school, and in the same lab to witness the university conduct a search process to hire a new dean. What happened was, ultimately, the search process ended, none of the potential candidates were chosen for the role, and the head of the search committee took the position of dean. 

During an interview, Olsen described the chaotic decision-making process that he was observing at the university throughout this search, and how it served as the foundational experience for the three scholars to later collaborate and produce their model. Drew, I imagine for the replacement of a position as important to the university as the dean, there's a lot of water-cooler conversation happening amongst the academics around the search process and the choices that people in the organization are making around this decision.

Drew: I've seen a bit of this when we were doing our safety clutter research. On the one hand, you're sitting doing this research that sees the world as very formal and modeled and described by theory. On the other hand, you're working inside an organization that just appears to be utterly chaotic. These are three researchers in decision-making theory. They're experts in making decisions and in how people make decisions.

They are sitting in the office doing research and then leaving the office and they're part of a university that has got this massively important decision that is just getting made in the weirdest ad hoc process. So they thought, okay, we've got to have a theory of decision making that can explain how you can have meetings where some of the people who are supposed to make the decision don't show up to the meeting and some of the people who were there decide something that's totally different from the previous meeting. 

Then the next meeting reverses the decision again, and eventually, no decisions are made and we come up with something that doesn't even follow the processes. They had to have a theory that could account for just how anarchic a university is.

David: Yeah, and I think organizations are just the way you described it there. I'm sure many of our listeners can see that situation in their own organization. The authors sort of said, previous decision-making theories saw individual actors in the decision-making process as reasonable, rational, and following a process that was quite repeatable.

What they found in this situation was that all of that went out the window and there was lots of other messiness: variables going on, compromises, and trade-offs. Then there's, I'm actually tired of this decision, I just want it to go away, and I want to move on to the next thing. They were also observing things like interactions between people, nonverbal communication, and misinterpretations of other people's positions. Really, I suppose Drew, they were just left by this example, like you said, with a whole bunch of open questions that the existing decision-making theory couldn't account for. 

By 1972, when this paper got published, the three authors had moved on to Stanford University in various positions, and then they published this paper, A Garbage Can Model of Organizational Choice. At the time, they were using version five of their Fortran program, which you mentioned. What you see in the paper in terms of all of the models and the data comes out of version five of their model. 

Drew: Just for clarity, I should say that if you've heard of the idea of bounded rationality, that already existed at this time. These weren't the first researchers to say, hey, decisions are made under uncertainty with limited information, and with people only having a certain amount of attention span to focus on decisions.

What they did is they took decisions outside of people's heads and put them in the organization and said that we can't just deal with the fact that individuals have limited capacity to think carefully about things. The decisions aren't made inside people's heads, decisions are made in meetings. We've got to understand the interplay between people in looking at how decisions are made. Should we dive into the text of the paper, David?

David: Yeah, let's do that. How about you start us by stepping off from there in terms of the background of the paper. 

Drew: We've basically broken it up here into sort of three bits. We've got a bit of background, we've got the basic theory that they came up with, and then some of the implications of that theory. The background is they're describing what they call organized anarchies, which sounds a bit like a contradiction. 

The idea is that all organizations are organized anarchies at least part of the time. We'll describe in a moment what an organized anarchy is, but there are some organizations that are predominantly or almost totally organized anarchies. In particular, organizations which are public, like the public service; educational organizations, particularly universities; and informal or what they call illegitimate organizations. They don't go deeply into that, but I'm thinking of things like standards committees, where it's very ad hoc how the organization comes together and breaks apart. 

There are sort of three properties to look for to know you are in an organized anarchy. The first one is what they call problematic preferences. The idea here is that most formal models of decision-making assume at least that we know what people want. But in organized anarchies, people don't. If you look at a method like the analytic hierarchy process or the House of Quality, you're supposed to make decisions by working out what people's values are, working out what weighting they associate with those values, weighing up the various different options against each of the values, and coming to a weighted decision. 

But what if people aren't even consistent about what they want and can't articulate that even with some sort of process? Then your values will shift depending on who's in the meeting and what they care about today, so that's problematic preferences.

David: I guess that's a bit like shifting goalposts, which I suppose practitioners talk a lot about in their organization. 

Drew: Yeah, and we'll get into some examples later. I've been in plenty of meetings where I've just sort of been, what are we actually even trying to do here? What do people want? Tell me what you want and I'll do it, but we don't seem to have something we want. We're just sort of here to gripe or to express our feelings.

David: It reminds me a bit of the risk assessment conversations that we have on the podcast, Drew. One of the points that you always make, which I liked, is you should be doing a risk assessment when there's an immediate decision that needs to be made and can be made through the risk management process.

Drew: Yeah, if there's no decision, then it's very hard to do. That's what they say. They say that you understand people's values based on the decisions that get made. You can't make decisions by first working out what people's values are. 

The second thing they say is unclear technology. The word technology there, I think they're really talking just about organizational processes. They say that the organization's processes aren't understood by its own members. People know how things work by trial and error, experience, or other people telling them. 

David, I don't know what this is like for organizations you've worked with, but absolutely both at university and in the public service, there's a formal policy and procedure for everything. If you want to know how something gets done and how you're actually supposed to do it, it's pointless to go and read the policy or the procedure. It'll always be there's a particular form you need to give to a particular person. 

The only way you know that is either someone tells you, or you hand in the wrong form and someone tells you it's the wrong form or you've given it to the wrong person. Eventually, you work out who the right person is or what the right form is. That's what we mean by trial and error.

David: Yeah, I agree. I think something that's written down in black and white has already been turned into something that's very gray inside the organization. You've got to find out what that is. I agree with you, in most of the organizations I've been in—I mean, with some exceptions around the policy. But yeah, for the vast majority, it's finding out how something actually gets done.

Drew: The third thing, and this is kind of key to the whole model, is fluid participation. People vary in the amount of time and attention they can devote to any sort of particular activity. Whether they get involved in a decision or not has less to do with what their formal job is and more how much other stuff they've got going on. It might be you’re part of a committee, the committee meets every month, but you don't go every month. Sometimes you put in an apology, sometimes you send your deputy.

If you're at that meeting every month and you look at the membership, it's a different group of people who show up based on the time of year, who really cares about that particular meeting, and for whom it's not a priority. That's fluid participation. Your decisions are made by the people who happen to be in the room at the time, but that can change.

David: Drew, with these three properties, these problematic preferences, unclear processes, and fluid participation, there are three real-world phenomena that the paper is trying to explain. Do you want to take us through those as well?

Drew: I'm taking this directly from the paper, and I'm not 100% certain on what each of these three things means. They go through this fairly briefly. The first question they have is what counts as intelligent decision-making? I think what they mean is, you can easily see in rational decision-making theory when you've got it right. There's an optimal decision. 

But what counts as good decisions when things are this close to anarchy? You could imagine it as a sort of barter system where everyone's got their own values, they trade off with each other, and they negotiate. If that was the case, you could measure good decision-making just by, over time, asking people how happy they are with decisions, and the market of ideas and decisions takes care of it. But that's obviously not what's going on. There's no horse-trading around these meetings. It's not like I vote for you if you vote for me. It's whichever one of us shows up casts a vote.

The second one is how do you then factor that into decision processes? How do you have a model of how decisions are made that takes into account that not everyone cares about every decision and people drop in and out of the processes? They say in fact, the most important decision each person has to make is which decisions do you spend your time on? Which decisions do you get involved in? We've got to have a way of modeling that decision. The decision about which decisions you care about.

And then the third one is just, what does management look like inside anarchy? Because all of these organizations do have management structures, they do have formal processes. What are those things doing and how do we make them do it well? Because a good theory of decision making should help us make better decisions, and that means we've got to have management structures that work with instead of against anarchy.

David: We've got to think about the time this paper was done, again, half a century ago. Now, we hear a lot of our practitioners say, I don't leave a decision to chance when I go into a meeting. I make sure I know the way that that decision is going to get made beforehand by doing a whole lot of stakeholder management, knowing who the power brokers are on particular topics in particular organizations, and making sure they're understanding and onside. 

I think it'd be fair to say today inside organizations, there's a whole lot of horse-trading that goes on outside of the meeting where the decision gets made, or there may well be.

Drew: That horse-trading relies on you knowing what decisions can be made inside each meeting. I think one of the things about anarchy is often you don't even know what the decision is or what the problems are that people are trying to solve until you get to the meeting.

David: I think that's a really good point that we'll get to. A great point.

Drew: It's like the anarchy prevents you from having that sort of strategic approach to it that you'd like to have to make decisions.

David: We've said this paper tries to come up with a model that can generally explain how decisions get made in these anarchies that you're talking about, and hopefully gives some insight into how different organizational structures might influence things for better or for worse. The basic idea here is that traditionally, we think of a decision being made in quite a linear fashion. I think you mentioned this briefly at the start of the podcast.

We know that we need to make a decision. There's a problem or an opportunity right in front of us. We come up with a range of options. We evaluate those options compared with our goals, objectives, constraints, and other factors. Then we pick the best option, and we move forward with that. That's not what this model says.

Drew: No. What Cohen and his colleagues say is that the organization always has a number of different things just floating around loosely. The first thing is you've got choice opportunities looking for problems to solve. An example of a choice opportunity might be, say, a budget meeting or a hiring decision. This is a chance to make a decision.

You've also got issues and feelings looking for decision situations in which they can be aired. These are people who are annoyed about something, aggrieved about something, or want something that happens. They're looking for meetings where they can complain about those things. Then you've got solutions looking for issues that they might be the answer to. So you've got solutions wandering around looking for problems, and you've got decision-makers looking for stuff to do.

I'll just quote this directly from the paper. "One can view a choice opportunity as a garbage can into which various kinds of problems and solutions are dumped by participants as they're generated. The mix of garbage in a single can depends on the mix of cans available, on the labels attached to the alternative cans, on what garbage is currently being produced, and on the speed with which garbage is collected and removed from the scene." 

Every meeting is in fact a little garbage can, and people bring to each meeting whatever problems and solutions they think belong at that meeting, even if other people don't think they actually belong. In formal terms, we've got four variables, and each one of these things changes over time. What their model simulates is the generation of each of these four things. 

The first one is you've got a stream of choices. This is the one thing the organization has got control over. Your choices get defined by the organization. Usually, they happen at fixed times, and usually, there's a list of eligible participants, the people who are allowed to be involved in those decisions. You might be setting the annual budget, it might be having a regular meeting to approve new programs or new spending, or it might be following the process for hiring a new staff member. Pretty much anytime you have a meeting, you've got some sort of choice opportunity.

David: I think, bringing it into our safety realm, it could be a health and safety committee meeting, it could be the monthly safety section of the management meeting, it could be a quarterly executive safety committee. We've got all of these choice opportunities. I think the name of the paper, thinking of each of these meetings as garbage cans, is a nice thing to have in your mind when you think about these meetings. People might feel good about calling their meetings garbage cans.

Drew: The second thing you've got is a stream of problems. Problems are not as well defined as choices because the thing is that in any organization and particularly in places like universities or the public service, there's an almost infinite number of things that you could be worried about because things can come from inside the organization and outside the organization. A problem could be anything from social pressure to reduce the carbon footprint, to Jim's frustration that there's no place to get a good coffee.

Everyone's got things that they care about. It might be that someone's come back from a training course and now they're really excited about safety management systems. Anything that someone's worried about becomes a problem. Every problem has got a time when it becomes visible, and it's got a limited set of choice opportunities where it's legitimate to talk about it.

Not every problem is legitimate to talk about in every meeting. You probably can't go into a hiring meeting and hijack the meeting to talk about the lack of a good coffee place on campus. People may in fact do that, but you're not supposed to. 

But on the other hand, say you've got a meeting about the travel budget. Someone can come into that meeting concerned about the carbon footprint, and it really is quite legitimate for them to raise that as a problem and something people should be thinking about and solving. Is the university or the organization's carbon footprint something you should be talking about when you're setting the travel budget? 

Any problem and solution pair takes a certain amount of energy to match together. That's not the total resource to solve the problem. It's just the amount of energy that's needed to agree that this problem matches this solution.

The third thing you've got is you've got a stream of solutions. The bit that I personally really love is the fact that under this model, problems and solutions don't just go in one direction, they go in both directions. Everyone who's got a problem is wandering around looking for a solution, but there are also people with solutions who are going around looking for problems.

A good example of that is one they have in the paper, and I think it still works now: having a computer or having an automated workflow. Those aren't problems needing to be solved, those are solutions. The person who is really keen on automated workflows is going around looking for processes that can be improved by adding in an automated workflow. The person who's bought a new computer is saying, well, we could use this for budgeting, we could use this for scheduling, we could write a program that does seating allocations. Either way, it's a solution in need of a problem.

I think safety is actually full of these things. We've got management systems, we've got policies, we've got written procedures, we've got risk assessments, we've got training courses, and we've got staff surveys and culture measurement instruments. They're all solutions looking for a problem to deploy them against.

In the paper, they've got a matching coefficient, how well does each solution deal with each problem? You don't need a perfect match. But if it's not a good match, then it's going to take you more energy to agree that the solution matches the problem. If someone's problem is lack of time and your solution is to send them on a training course, it's going to take lots of energy in the meeting for everyone to agree that that's actually a good solution.

On the other hand, if your problem is workflows are too clunky and your solution is an automated workflow, that might be a nice good match, easy for people to agree, the problem matches the solution, and move on. Then the fourth variable is the amount of energy that each participant has. You can think of this as how long people are willing to sit in meetings talking about things.

Everyone's got a fixed amount of energy. Once they're out of energy, they stop showing up to meetings. People can't go to every meeting; if they're sitting in a meeting talking about supplying coffee for too long, that will use up all of their energy and their capacity to get involved in making other decisions.
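As a rough illustration of how these four variables fit together, here is a minimal sketch of a garbage-can-style simulation in Python. It is a hypothetical toy, not the authors' 1972 Fortran model: the meetings, people, problems, solutions, and match scores are all made up, and the matching rule is far simpler than the paper's.

```python
import random

random.seed(0)  # make the toy run repeatable

# Choice opportunities: each meeting is a "garbage can" that problems
# and solutions get dumped into.
choices = ["budget meeting", "hiring panel", "safety committee"]

# Streams of problems and solutions floating around the organization.
problems = ["clunky workflow", "no good coffee", "carbon footprint"]
solutions = ["automated workflow", "training course", "travel policy"]

# Matching coefficient (0..1): how well each solution fits each problem.
# Pairs not listed are assumed not to match at all.
match = {
    ("clunky workflow", "automated workflow"): 0.9,
    ("carbon footprint", "travel policy"): 0.8,
    ("no good coffee", "training course"): 0.1,
}

# Each participant has a fixed stock of energy for sitting in meetings.
energy = {"Alice": 2.0, "Bob": 1.0, "Carol": 0.5}

decisions = []
for meeting in choices:
    # Fluid participation: a random subset of people who still have
    # energy shows up to each meeting.
    attendees = [p for p in energy if energy[p] > 0 and random.random() < 0.7]
    if not attendees:
        continue
    for prob in list(problems):
        for sol in solutions:
            fit = match.get((prob, sol), 0.0)
            cost = 1.0 - fit  # poor matches take more energy to agree on
            available = sum(energy[p] for p in attendees)
            if fit > 0.5 and available >= cost:
                # A plausible match plus enough energy in the room:
                # a decision gets made, and the problem leaves the stream.
                for p in attendees:
                    energy[p] -= cost / len(attendees)
                decisions.append((meeting, prob, sol))
                problems.remove(prob)
                break

print(decisions)
```

The dynamic this sketch tries to preserve is that decisions depend on who happens to show up and how much energy they have left, not on any optimal pairing of problems to solutions; badly matched pairs, like the coffee problem and the training course, simply never clear the energy bar.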

David: Yeah, I think that idea of the stream of solutions and energy is really important, because as you were describing the stream of solutions and the matching coefficient, I was thinking that it's also about knowing what problem the solution is being attached to. We might think that some of our recommendations from incident investigations don't really match the problem of what contributed to the incident in the first place, but maybe those recommendations really do match the problem of getting the incident closed out and getting the board off management's back.

It could be really interesting to also think about, is every person in that meeting actually solving the same problem when they're talking about solutions?

Drew: I didn't have it in the notes, but I think incident investigations are a great example of choice opportunities. It's actually fairly poorly defined what you're trying to achieve when you're doing an incident investigation. We want to make safety better. But some people come along to incident investigations with particular problems they want solved. So they see the organization as having a culture problem, having a training problem, or having a frontline supervision problem, and they want that problem to be solved as part of the incident investigation.

Other people have got solutions. They want a new training package approved. They want to hire some consultancy that they've been wanting to hire for ages. They want to put in place some metrics that they think are really good metrics, so they come along to the meeting trying to sell solutions. If you've got enough energy to match the people who care about culture with the people who think culture is solved by doing a survey, then bingo, we've got a decision made.

We now have a recommendation that goes into the incident report, institutes a culture survey, and we've made a decision. But if we're having trouble with the matching, then we're going to have hours of argument with people just talking about their own problems, talking about their own solutions, and draining people's energy without ever finding a good match between a problem, a solution, and the decision opportunity.

David: Yeah, I must admit, Drew, the incident investigation example was one that I thought of a lot: choice opportunities, solutions looking for problems, problems looking for solutions. Maybe this year, the 50th anniversary of this paper, we could dust it off and try to write something that applies it in that context, to make it easy for people to understand.

Drew: Yeah, that'd be fun. Like a garbage can model of incident investigation, or incident investigations as organizational choice opportunities.

David: Yeah, there you go. In total anarchy, anyone can show up to any meeting, talk about any problem, and potentially make any decision. But organizations aren't total anarchies. Organizations are, at least in the description in this paper, organized anarchies. There are always some constraints about who can attend which meetings, which meetings can make certain types of decisions, and, like you said earlier, what's socially legitimate to talk about at these individual meetings.

If you put all this together, decisions get made at meetings. I believe that's still the case now. Lots of decisions get made as something of a consensus within whatever the appropriate forum is inside an organization, and not everyone who can show up actually will show up.

The people who do show up bring a combination of issues that they care about and solutions that they'd like to see implemented, and, like you said, when there's enough energy among the people who are in the room that it seems like there's a problem that matches a solution and there are no objections, a decision will get made. Do you want to run through a couple of real examples that might help this make sense, in addition to the incident investigation example?

Drew: Sure. The one that immediately sprung to mind for me was standards committee meetings. David, I don't know if you've ever been involved in writing any standards.

David: Not industry standards. No, I haven't and no real desire to either.

Drew: I made a decision myself a long while ago that no matter how legitimate it might seem, they are not worth the energy involved. I think the garbage can model gives me a good rationale and explanation for why I take that position.

They're a classic example of organized anarchy because, in principle, anyone can get involved in writing a standard. If you ever try complaining about a standard, everyone will tell you that you had plenty of opportunities to complain through the standard-writing process: we sent it out to all of these consultative committees and you didn't give us any feedback, so it's your own fault the standard's bad.

It's a classic example of, do you have the energy to get involved in trying to be heard through this long drawn-out process? You have to sit through multiple meetings of multiple people all airing the problems and solutions that they want addressed. Most standards don't have a clearly defined or clearly agreed goal. For most standards that I hate, my very first question is, why is there even a standard about this? It's a standard because someone wanted a standard.

You can think of standards as decision-making opportunities. Some people have got particular problems they want solved by the standard. If you look at new safety standards, often people are saying it's because the existing standards don't cover my industry, they don't sufficiently cover security, they don't sufficiently cover privacy, or they cover safety but not wellbeing. People have got particular interests they want addressed.

You've got other people who want particular solutions. They've got a particular technique that they've invented or a particular metric that they want included in the standard. You've got this mix of decision-making opportunity, vested interests about the problems, and vested interests about the solutions. That's why standards seem so random and chaotic: it's all a negotiation between the problems and the solutions based on who turns up to which meeting.

You can have the same organization producing a series of standards on the same topic. They're inconsistent because different people have shown up to the meetings for each of the different sub-standards.

David: The second example that you've got here is enforceable undertakings. Just to make that clear for all of our international listeners outside of Australia: an enforceable undertaking is part of the legal framework in Australia where, if you're an organization that's had a breach of the safety legislation or regulations, typically following a serious or fatal incident, you have the opportunity to enter into an agreement with the regulator to avoid legal proceedings by making a commitment to improve safety.

It's usually quite a sizable financial commitment, say 5 or 10 times what the potential fine might be. It's attractive to the regulators because they can evidence safety improvement within the industry, and it's attractive to organizations because they avoid the legal process. Drew, have I adequately described what this enforceable undertaking process is? I thought it might be good to do that before we dive in.

Drew: Yes, thank you for that. I think that's a good explanation. The key thing is you've got this big potential pot of money that needs to be allocated in a way which is vaguely related to what went wrong in the first place and that provides some sort of public benefit for safety. The other key thing with enforceable undertakings is that it can't just be a benefit to the company that's spending the money; it has to be of benefit to the broader community. But beyond that, there's a huge amount of freedom.

Anyone who's involved in the process might have particular things that they want addressed. Very often, the regulator's interested in producing guidance material that is helpful for the general industry, and then you've got other people who've got particular solutions.

Our lab's been involved a number of times in enforceable undertakings. Each time we've been dragged in, it's not that someone has clearly defined a problem and said, hey, Griffith University is the right organization to solve this problem. It's more that they're thinking, hey, it'd be really cool to get Griffith involved in this. They're basically looking at us, and the work we do, as a solution, and asking, how can we pay for some of this using the enforceable undertaking?

The process of coming up with the undertaking is then a negotiation between the people who've defined the problems and the people who want us as a solution, arriving at a description of a problem that we are a good solution to. You can see how the garbage can model applies. The garbage can has a clear label based on the type of incident and the type of industry, but then everyone's throwing into the bucket whatever problems or solutions they think are a good way to use that money.

David: Yeah. I think the last example here before we move on is just the end of the financial year budget. Budgeting time is a time where there are lots of solutions floating around, there are lots of problems floating around, and there's a certain pocket of cash. It's just lots of people, lots of energy for a short period of time in the organization to try to match these problems with solutions and get things approved until all the cash is gone.

I think, again, having a mental picture in your head of a garbage can of all of those hands going into the can trying to grab cash with their own problem and their own solution is a nice way to think about it. How can the organization then influence this process? What are the things that organizations can do?

Drew: If you're reading the paper, I think this is the bit where it gets fairly hard to follow because some of the things they're talking about are basically parameters in their mathematical model. Some of them are more things that you might have some sort of conscious control over. One of the things that you don't have total control over is just how much overall energy there is. They talk about high energy and low energy or high load and low load organizations.

In theory, every problem can be matched with a solution, but in a high-load organization, only if you're totally efficient. Other organizations have got plenty of time, plenty of energy, and they're comfortable with making decisions slowly. Eventually, every problem is going to find a solution and every solution is going to find a problem. But you probably don't get to choose how much energy you have.

What you might have some control over, though, is how you decide who can make decisions. There are three extreme cases, and most organizations are actually hybrids of the three. You could be totally unsegmented, where anyone can get involved in any decision; there would just be chaos. You could have a hierarchical structure, where the more important decisions must be made by more important people. Or you could have an extremely specialized structure, where each particular type of decision has one person who's got control over it.

Each person controls one decision and each decision gets controlled by one person. The reality is that we tend to be a hybrid. Most organizations have some sort of hierarchy. Often, the more important you are, the more decisions you can be involved in, so the more you get to choose which decisions you actually spend your energy on.

David: Yeah, with high energy organizations and low energy organizations, when we think back to Rasmussen's modeling of risk management in a dynamic society, we talked about this constant drive for organizations to be efficient and the constant push for people to get things done as efficiently as possible. I think we see that in individual people and which decisions they can be involved in, but also in the expectations of individual roles in organizations, just how many different, let's say, choice opportunities individual roles are expected to be involved in.

That's probably a good reflection point for people: just think about how many decisions certain roles in the organization are being asked to be involved in, and then maybe don't be offended when they don't show up to every meeting.

Drew: Once you start running the model, what happens is that in each of these decision situations, usually, there has to be a decision made. If an organization's got a budget, you've got to set a budget. If you've got to allocate the money, you've got to allocate the money. If you've got to hire someone, you've got to hire someone. But that doesn't mean that every problem and solution that gets brought along to that decision gets matched and solved.

Often, all you need is just one match and that's enough to make a decision. Then the rest of them need to have something else happen to them. Maybe they get deferred until next time. Maybe they go off and they find a different decision to get attached to. One of the things they say is that sometimes we can't make a decision until most of the problems and solutions give up and go somewhere else, which then allows us to make a decision with what's left. That gets to the conclusions once they start running the model.
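The dynamic described here, streams of problems and solutions showing up to choice opportunities, with matches made only when enough energy is available and the rest deferred, can be caricatured in a few lines of Python. This is only an illustrative toy sketch, not the authors' original simulation: problems and solutions are represented as plain numbers, and the "difficulty" of a match is just their numeric distance. Those representations are made-up assumptions for the example.

```python
def simulate(problems, solutions, meeting_energies):
    """Toy garbage-can dynamics. Each entry in meeting_energies is one
    choice opportunity (meeting). All pending problems and solutions
    show up to every meeting; a (problem, solution) pair is matched when
    the difficulty of the match (here, just numeric distance) fits within
    the energy the meeting has left. Unmatched problems carry over."""
    decisions = []
    pending_problems = list(problems)
    pending_solutions = list(solutions)
    for energy in meeting_energies:
        # Iterate over copies so we can remove matched items safely.
        for problem in pending_problems[:]:
            for solution in pending_solutions[:]:
                cost = abs(problem - solution)  # how hard this match is to agree
                if cost <= energy:
                    energy -= cost              # agreeing consumes meeting energy
                    decisions.append((problem, solution))
                    pending_problems.remove(problem)
                    pending_solutions.remove(solution)
                    break                       # this problem is dealt with
    return decisions, pending_problems

# A zero-energy meeting can only resolve the perfect (cost-zero) match;
# the harder problem is deferred until a later, higher-energy meeting.
decisions, unsolved = simulate(problems=[3, 10], solutions=[5, 10],
                               meeting_energies=[0, 5])
print(decisions)  # [(10, 10), (3, 5)]
print(unsolved)   # []
```

Even this crude version reproduces the behavior described in the conversation: low-energy meetings only make the easy decisions, and the remaining problems wander from meeting to meeting until they find one with enough spare energy.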

In the first one, they say that decision-making processes often don't solve problems. They make the decision, but the problems go off and get attached to different decisions. Often, the more important a choice is, the more people will want to attach their problems and solutions to that choice, making it impossible to make the choice.

A good example of that might be a really high profile incident. Everyone who's got something to say about safety is going to want it associated with that particular incident. They're going to be very dissatisfied if we're not willing to talk about culture, or the management systems, or risk assessment, or whatever thing they want to talk about in association with that incident.

It's not until people sort of realize, we're not getting anything done here. If I want something done about culture, maybe I'd be better off taking that to the budget meeting and getting a budget allocation for it, or maybe I'd be better off just going and doing it myself. Once those people give up and go away, that's when we get the final recommendations written, when we've got a small enough number of choices left that we can just make the decisions.

A good example might be the one that prompted this paper, a really important hiring decision. Everyone wanted something out of the new dean. Maybe one person wanted someone who was young and exciting, another person wanted someone who had a good public persona, and another wanted a good researcher who'd bring prestige to the department. You can't do all of those things at once.

Ultimately, the people who are looking for prestige realize, okay, I'm not going to get that done through this decision; I'm going to get prestige in the department somewhere else. They go away and they try to solve their prestige problem by getting a grant that can hire someone. The person who wants someone young and exciting realizes, oh, I can get this done better through the Equal Employment Opportunity policy; I'll go and talk about it at that meeting. Then eventually, we can just make a decision to hire someone, once we don't have to solve everyone's problem with that decision.

David: The second conclusion there is that decision-makers and problems tend to track one another through a series of these choices or choice opportunities, sometimes without seeming to make any progress. For example, say a person is concerned with the poor onboarding process for new employees, because maybe they've had some new employees come into their department who they felt haven't come up to speed on what they needed them to do.

They take the problem to the budget meeting to try to get some funds for the onboarding process. They then go to the training and development workshop for the competency system to try to get something changed in the training system. They then go to a safety brainstorming day because maybe they can tack it onto the back of safety induction training. And they can sometimes find themselves trapped in dealing with the same problem, even when they change roles themselves, because they're attached to this problem now, having been so involved in trying to get a decision made about it for such a long time.

Drew: Yeah, I think we've all known those people, where it doesn't matter what the meeting is: if that person is at the meeting, they're going to be talking about their personal hobby horse. I think we've all been that person, where there's an issue we're trying to get solved and we just have to find opportunity after opportunity after opportunity to try to get it solved, until eventually we find a time and a place where someone is actually willing to make a decision about it.

David: Isn't it true, if you hear a colleague saying, I'm just going to get on my soapbox for a minute, typically it'll be something you've already heard before at a different meeting?

Drew: Absolutely. The final thing they say is that most problems do eventually get solved; this is not a description of dysfunction. It might seem chaotic, and problems don't get solved the first time they show up. A lot of energy often gets wasted with problems and solutions showing up to the wrong meetings, or showing up without there being a match at the right time.

Eventually, when the problem is in the right place with the right solution available and no other things are competing for people's attention, that's when the problem gets solved. I think we've all experienced that one as well: it's often not at the important meetings but at the small meetings that are very limited in scope, where there aren't other issues clamoring for attention, that important things sometimes do get solved, because everyone who's there only cares about that one particular thing.

David: The model accounts for that by saying that in that smaller meeting, with fewer problems and maybe fewer solutions, and maybe people with more individual energy, there's more energy available to create the match between the problem and the solution in that environment or in that choice opportunity.

Let's talk about some practical takeaways and wrap this one up. Do you want to get started with what you think is good for us to learn and take away?

Drew: Okay. The first one isn't really a takeaway. The point about these models is that the models themselves are not always perfect and they're very sensitive to the exact internal variables. Don't take this away as a recipe for plugging in the parameters of your own organization and seeing how decisions are going to get made. It's more about the insights it gives you into the way the world works. Once you've got the model in your head, you can see it happening.

The first really useful insight, I think, is this decoupling of choices, problems, and solutions. Once you begin to see that there are actually three separate things, choice opportunities, problems that need to be solved, and solutions looking for problems, it takes away a bit of the frustration that you feel when you see decisions going nowhere.

You realize actually, my job here is not to get this problem solved. This meeting needs to come to a decision, that's our job here. My problem just needs to find a decision where it can be solved. Maybe it's not this one, maybe it's the next one, and finding the right place to take that problem rather than being frustrated that it's not being dealt with through this particular decision.

The second thing, and particularly this year I'm really feeling this one, is that it gives you a bit of permission to be strategic about your own decision-making energy. You don't have to be part of every decision, particularly if the decision isn't going to help solve the problems that you care about. It's okay to be strategic and take part in decisions with more focused goals, or ones that have got lots of resources to deal with multiple problems at once.

David: I think, particularly, Drew, there's a lot of demand on safety practitioners' time, because organizations know that there are lots of things that can have potential safety implications. Lots of managers in organizations, who perhaps lack a bit of psychological safety, really want to make sure that there's a safety person involved in every decision, in case it's ever questioned down the track why safety wasn't involved in this or that decision that led to some compromise in safety.

I know practitioners, and I've experienced this myself, who are asked to be involved in almost every decision. You end up coming along to a meeting going, why am I here? But no one wants to hold a meeting without having a safety person in the room.

Drew: Yeah, I've experienced the same thing with industry-focused research. Every time someone's got a meeting about work-integrated learning or industry research, they want people from our team there because they think, hey, you guys know about the industry. That doesn't mean that those meetings are actually going to help us with any of our problems or that we've got any particular solutions to the problems that people are bringing to those meetings.

David: I think the third takeaway here is being clear about whether we have a problem looking for a solution, a solution looking for a problem, or a paired problem and solution that's looking for a decision to be made. I think sometimes we aren't honest with ourselves about which one of these three it is, and we can end up wasting a lot of time and energy.

Throughout my career, I've definitely at times been walking around organizations with a set of solutions looking for problems to attach themselves to, and maybe sometimes with a paired problem and solution looking for a decision to be made. I think what's probably really important to take away here is actually understanding what all the problems are in the organization first; the solutions and the choice opportunities come after that, I would say.

Drew: I was thinking about a workshop I was in a few weeks ago to do with metrics. I think safety indicators and metrics are a classic example of a solution, not a problem. Very often, no one has defined what is the problem that you're trying to solve by creating metrics. We're arguing about what is the best metric to use in the absence of any actual problem for which a metric is a good answer.

David: If the problem is understanding safety, then go and talk to people involved in hazardous work, go and observe hazardous work, and understand safety. Don't sit and wait until the end of the month for some unhelpful data to flow through. I think you're right. There's a quote attributed to Albert Einstein that comes up a lot in learning teams training: "If I had an hour to solve a problem, I'd spend 55 minutes thinking about the problem and then 5 minutes coming up with solutions." I think it's a pretty good reference point to carry in your mind.

Drew: I think there are some people, though, for whom it's kind of our job to have solutions. I certainly feel this as an academic. Some good advice for young academics looking for ways to make their research relevant is to recognize that what we have is a solution, and what we should be doing is being conscious that there are multiple problems that that solution works with. You might think that what you've discovered is safety, because you've done your Ph.D. in safety, but safety is usually a problem, not a solution.

What you've actually created in your Ph.D. might be an approach that might be useful for a whole range of different problems. You ought to go out and find people with problems, listen to their problems, and then find ways to adapt and apply this stuff that you've come up with instead of just assuming that because you started with one particular problem, that's the only problem that your work is relevant for.

David: Yeah, great point, Drew. Do you want to wrap us up here on the practical takeaways?

Drew: These aren't hugely useful takeaways because most of this is just about a way of looking at the world and understanding how you see the world.

David: I think the takeaway for me is just that frame of reference for how decisions get made. There are these decision-makers, there is this stream of choice opportunities, there are sets of problems, there are solutions, and there's a certain amount of energy. I think in safety, that's not a bad framework to think about. 

What are the choice opportunities I've got? How well do I understand what all the problems are? How well have I thought about the solutions and the matching of those solutions to problems? Then how am I strategic around the organization's energy to invest in agreeing on the matching between the solutions and the problem? I actually think that, for me, would have been a nice mental model to carry through my practitioner career.

Drew: Actually, I do have one very practical takeaway to attach to that, David, which is that I think for a safety manager particularly, it is important to always have a drawer with several different solutions in it. These are things that you would like implemented in your organization for safety, but you don't have to push them all the time. You can just have them in your drawer, waiting for a problem for which one of them is the solution.

Maybe that problem will be a particular incident. Maybe it will be some external pressure. Maybe it will be an organizational change program, where you can be the hero by just opening the drawer and saying, look, here is a carefully thought out plan for solving this problem. I didn't know it was this problem I was going to be solving, but I've had the solution ready, just waiting for the right problem, and here's the opportunity to create the match. The goal is to make the match, not to always shove your solutions down people's throats.

David: Yeah, I think that's a great practical takeaway. The question we asked this week was, why are organizations sometimes bad at making decisions?

Drew: Hopefully, the garbage can model gives you at least an interesting new way to think about that.

David: Yeah, and sometimes you just run out of energy to match a solution to a problem. That's it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Join us in the conversation on LinkedIn, or send any comments, questions, or ideas for future episodes to us at feedback@safetyofwork.com.