In today's episode, David and Drew look at how to design automated systems that work effectively with human operators, drawing from Klaus Christoffersen and David Woods' 2002 chapter "How to Make Automated Systems Team Players." The hosts challenge the traditional binary thinking about automation problems, where solutions focus either on replacing humans with more automation or reducing automation to give humans more control. Instead, they explore a systems approach that considers human-automation coordination as the primary design challenge.
The discussion centers on two key design principles: observability, which ensures humans can understand what automated systems are doing and why, and directability, which allows humans to steer automation rather than simply turning it on or off. Using examples from aviation incidents like Boeing's MCAS system and emerging AI technologies, the episode demonstrates how these 25-year-old principles remain relevant for contemporary automation challenges in safety-critical systems.
Discussion Points:
Quotes:
Drew Rae: "The moment you divide it up and you just try to analyze the human behavior or analyze the automation, you lose the understanding of where the safety is coming from and what's necessary for it to be safe."
David Provan: "We actually don't think about that automation in the context of the overall system and all of the interfaces and everything like that. So we, we look at AI as AI and, you know, deploying. Introducing ai, but we don't do any kind of comprehensive analysis of, you know, what's gonna be all of the flow on implications and interfaces and potentially unintended consequences or the system, not necessarily just the technology or automation itself."
Drew Rae: "It's not enough for an expert system to just like constantly tell you all of the underlying rules that it's applying, that that doesn't really give you the right level of visibility as understanding what it thinks the current state is."
David Provan: "But I think this paper makes a really good argument, which is actually our automated system should be far more flexible than that. So I might be able to adjust, you know, it's functioning. If I know, if I, if I know enough about how it's functioning and why it's functioning, and I realize that the automation can't understand context and situation, then I should be able to make adjustments."
Drew Rae: "There's, there's gotta be ways of allowing all the animation to keep working, but to be able to. Retain control, and that's a really difficult design problem."
Resources:
The Safety of Work on LinkedIn
[00:00:00] Drew Rae - cohost: You are listening to the Safety of Work podcast, episode 131. Today we're asking the question, how can we make automated systems into team players? Let's get started.
[00:00:26] Drew Rae - cohost: Hey, everybody. I'm Drew Rae. I'm here with David Provan, and we are from the Safety Science Innovation Lab at Griffith University in Australia. Welcome to the Safety of Work Podcast. In each episode, we ask an important question in relation to the safety of work or the work of safety and examine the evidence surrounding it.
[00:00:45] Drew Rae - cohost: And our important question, David, this week, comes from a listener.
[00:00:50] David Provan - cohost: Yeah, a listener asked us about various topics around automation and increasing automation, and whether or not automation is the solution for safety improvement in sort of sociotechnical systems, and, you know, some of the research around that.
[00:01:04] David Provan - cohost: So I guess, Drew, as we reflected, I think we've only done one other episode closely related to automation and humans, but it was more about replacing human functions with automated functions. So I thought this might prompt us to do a few episodes in and around this space, Drew. It reminded me of a paper that we'll introduce shortly. It's a little bit old now, but it starts to frame how we might think about people and automation working together.
[00:01:32] Drew Rae - cohost: Yeah, there's a few of these kind of classic papers in automation that seem to get published around the early 2000s or late 1990s, but when we look back at them now, they actually say a lot about, I think, how to think about the introduction of AI today. I think largely because a lot of the applications were conceptualized back then.
[00:01:56] Drew Rae - cohost: They just kind of weren't quite ready. So a lot of the autonomous systems we are looking at are things that were imagined and people were thinking about their safety at the turn of the millennium.
[00:02:07] David Provan - cohost: Yeah, absolutely Drew and I was thinking just around the time, 'cause the paper we'll introduce is a sort of 2002 publication, but it looks like late nineties research and thinking as well.
[00:02:16] David Provan - cohost: It reminded me of the Y2K bug around 2000. And I guess just reflecting on our current state of automation and software, obviously a lot's changed in 25 years, but I think in this paper we can get some good ways of thinking about introducing new automation into complex systems.
[00:02:37] Drew Rae - cohost: Okay, so the title is How to Make Automated Systems Team Players. The authors are Klaus Christoffersen and David Woods. I think David Woods has come up in the podcast before. In terms of genealogy, I believe Woods was one of Hollnagel's students, and Dekker was one of Woods' students. Is that the way it follows?
[00:03:02] David Provan - cohost: Not sure about Woods and Hollnagel, but definitely that's the lineage. And the other author, Klaus Christoffersen, did his PhD at Ohio State, it looks like in the late nineties. So he would've been just starting as Sidney was finishing his PhD. So, you know, a lot of continuation of the thinking behind human error and complexity.
[00:03:23] David Provan - cohost: And there was some big resilience projects sponsored by NASA at the time. This paper that we're talking about, or this chapter in this book that we're talking about today sort of seems to come outta a lot of that thinking that was sort of going on at the time.
[00:03:36] Drew Rae - cohost: Yeah, so, so we're talking about sort of research that comes from that, cognitive systems engineering, uh, human factors resilience space.
[00:03:43] Drew Rae - cohost: As Dave just mentioned, it was published as a book chapter in a series called Advances in Human Performance and Cognitive Engineering Research. In 2002, and like most academic book chapters, this is kind of a mix of literature review and scholarly argument. So it's got a lot of the original thoughts of the authors, but all of those thoughts, particularly when they refer to evidence, are backed up by citations to academic work.
[00:04:13] David Provan - cohost: So no original research in this book chapter, and not really a systematic literature review, but like I said, Drew, I read these sorts of chapters a little bit, like you said, more like an essay, you know, trying to put together a description of the current state and what it might mean moving forward.
[00:04:33] David Provan - cohost: So I like the style of research, but it's just good for listeners to know there's no original research being done here. It's just pulling together and extending the existing literature.
[00:04:43] Drew Rae - cohost: Yeah, it's, it's more kind of an act of sense making about what we currently know and what are useful ways to think about it.
[00:04:49] Drew Rae - cohost: And that's why we've drawn out this particular paper for the topic is it's a useful way of thinking about the safety of automation.
[00:04:58] David Provan - cohost: So Drew, the first section of the book sort of starts with the title, Human-Automation Cooperation: What Have We Learned? And it's sort of a little bit about looking backwards before we look forwards, about what do we think are the known truths when it comes to combined human and automated systems.
[00:05:12] David Provan - cohost: So do you wanna sort of talk a little bit about some of the key contextual pieces in this section?
[00:05:17] Drew Rae - cohost: So there's nothing hugely new here. It's just sort of the basics of where do we start when we're increasing automation: we are doing it usually for economic benefits; often, but not always, it comes with opportunities for increased safety, but we are always a little bit wary of it from a safety point of view because we are creating new ways for things to go wrong.
[00:05:40] Drew Rae - cohost: And that's kind of what people's biggest fear is, I think. We are automating it, but what's gonna catch us out? What's gonna surprise us that we haven't thought of and haven't successfully managed the safety of? And they sort of point out that most of what we've learned about this comes from natural experiments, which is a very diplomatic way of saying, yeah, we tried it out and things went wrong and we learned from it.
[00:06:02] David Provan - cohost: And I think, Drew, you sort of made a point in our run-through here about, you know, it's sort of a little bit what we're seeing with the fear around AI now, or the realistic fear, which is like, we know the benefits, but kind of like, what's the downside? We don't really know the downside. So it's sort of like a real-life natural experiment.
[00:06:19] David Provan - cohost: This paper's kind of referring to that now, except we don't quite know the answer yet, whereas at the time, based on some aviation incidents and some nuclear-type incidents, those were the types of natural experiments that, you know, allowed the authors to look back on how it went wrong, in sort of maybe an unpredictable kind of way.
[00:06:42] Drew Rae - cohost: Yeah. It's worth noting that at the time, pretty much the same thing was happening as is happening today, which is people were taken a little off guard by how quickly some of these things were introduced. We went from drones being this very sort of high-tech, remote, speculative technology to, suddenly, they were commercially available and people were using them, and safety was catching up, and airspace regulations were catching up to the fact that the drones were already out there and already being flown.
[00:07:14] Drew Rae - cohost: We sort of had to regulate in hindsight and manage the safety in hindsight, and very much the same thing is happening at the moment with large language model style AI: people are just adopting it in organizations faster than we anticipated, and suddenly we need to sort of work out, oh gosh, what are the safety implications?
[00:07:32] David Provan - cohost: And then this paper sort of argues that that is the approach we take to all automation, even in organizations, even technologies: we tend to just think about the automation itself. And that's one of the core arguments of this paper. We actually don't think about that automation in the context of the overall system and all of the interfaces and everything like that.
[00:07:52] David Provan - cohost: So we look at AI as AI and, you know, deploying and introducing AI, but we don't do any kind of comprehensive analysis of, you know, what's gonna be all of the flow-on implications and interfaces and potentially unintended consequences for the system, not necessarily just the technology or automation itself.
[00:08:12] Drew Rae - cohost: Yeah, I quite like the way they highlight sort of two main ways people tend to react or argue about automation, and once you hear these, you just start to recognize one version of this argument everywhere. So the first one is saying that when automation goes wrong, it's really due to inherent human limitations.
[00:08:36] Drew Rae - cohost: And the way we can fix that is with more automation; that, you know, it's really the remaining human gap that's the problem. And you see this in people talking about ChatGPT: it's not ChatGPT's fault, people are just uncritically using it, they're using it on the wrong problems. And then the second one is that the problem is the automation.
[00:08:55] Drew Rae - cohost: We are over-automating, we are using too much and we need to go backwards a little bit. We don't need to get rid of all the automation, just stop using quite so much of it; add in a little bit more human control and a little bit less automation. Don't reach too far, too fast.
[00:09:09] David Provan - cohost: I think we see this with like autonomous vehicles, right?
[00:09:12] David Provan - cohost: This debate, people fall on one of those two sides. We've got people who want to take the driver out of the car and let the automation, you know, basically do more and more and more. And then people who say, well actually no, at the end of the day we still need people driving cars. You know, and we actually probably should make sure that they're focused on driving by not giving them all of these kind of like automation sort of compensating systems.
[00:09:33] Drew Rae - cohost: David, if you don't mind, there's a passage from one of my favorite books that this is just a great place to sneak in, which I'd like to do whenever I can. Bear in mind this is written in the 1960s, but I think it's exactly the same sort of argument we're talking about here. It's from a book called Red for Danger.
[00:09:54] Drew Rae - cohost: By L.T.C. Rolt, and Rolt says: each railway accident represents but one brief incident in a much larger, though less spectacular drama that's been going on ever since the first steam engine moved on rails. I refer to that struggle to eliminate the possibilities of human error which has engaged the ingenuity of generations of engineers.
[00:10:17] Drew Rae - cohost: The railway disaster may represent the engineer's failure, but never his defeat. Always he has resolutely counterattacked. From the terrible lessons learned in the flaming wreck or the heap of shattered, telescoped coaches have come all those safety devices which combine to make the modern railway one of the most complex,
[00:10:36] Drew Rae - cohost: but at the same time most successful, organizations which man has ever evolved. To this long struggle for perfection there can be no end, for the goal is unattainable. No mechanical devices and no precautions, however painstaking, can be altogether proof against mistakes; man must ever remain a fallible mortal.
[00:10:57] Drew Rae - cohost: Now, the very male-centric language there I think is a sign of its times, as is the slightly overdramatic language. But this is, I think, the heart of the first version of that thinking, right? Which is: humans make mistakes, and what we are striving to do is basically use automation and AI and autonomous systems to engineer out the possibility of humans stuffing up.
[00:11:21] Drew Rae - cohost: And, you know, the ideal world is the one where we've got rid of all of the humans and therefore all of the human mistakes.
[00:11:28] David Provan - cohost: Yeah, and I think that's sort of interesting about the timing of that, because, you know, it's thinking that's quite similar to a Charles Perrow or a James Reason at the time, with sort of the human contribution.
[00:11:41] David Provan - cohost: But, you know, none of that work quite pointed directly at automation being the solution. In fact, they were sort of saying that automation is actually causing us more problems; at least in Charles Perrow's case, he was probably saying that, you know, the high-risk, complex technologies and increasing automation are just actually giving us a whole bunch of new ways to kill people.
[00:12:00] David Provan - cohost: So that's interesting. So nice passage. Yeah.
[00:12:03] Drew Rae - cohost: Yeah. Just as Rolt is kind of the exemplar of that first one, I think Perrow is the exemplar of the second one. Yeah, there are some technologies that are just too complex to ever trust, and we ultimately need things that humans can retain that final control over.
[00:12:19] Drew Rae - cohost: So
[00:12:20] David Provan - cohost: towards the end of this first section, they also sort of make this argument that, you know, we seem to be locked into this mindset of thinking that the technology and the people are sort of independent components of the system. So, like, either it was the electronics or the automation that failed, or it was the human that failed.
[00:12:39] David Provan - cohost: And I guess, again, from a systems thinking perspective, this paper sort of introduces this idea that it's not about the technology or about the person, but it's about the interfaces and the sort of, I guess, teamwork, to borrow from the title of this paper, you know, to make the automation and the people kind of like team players.
[00:12:56] David Provan - cohost: And on the same page and effective at the interfaces.
[00:13:01] Drew Rae - cohost: Yeah, and you can see that almost still, whenever there's an aviation accident, people are still sort of in this binary of: was it human error or mechanical failure? And I think we've got out of that; ultimately, most investigators tend to take a more sophisticated approach to that.
[00:13:19] Drew Rae - cohost: But it's still the narrative we like to cast over things going wrong: which was it? And they're sort of encouraging us here to get away from thinking one or the other and just combine the two. It's always the system or the interaction that's gone wrong. The question is, how has that interaction gone wrong, and how do we make that interaction go well?
[00:13:42] David Provan - cohost: And so they introduce this idea of the substitution myth, and it's this idea that when we think about human and automated systems, all we're doing is replacing certain things. So we're just adding extra automation to substitute for previously human actions, or maybe removing automation
[00:14:05] David Provan - cohost: and adding in kind of like operator actions. So there's sort of this direct substitution, where the functioning of the system is kind of the same; all we're doing is substituting out the person and substituting in the technology. And they're sort of saying that it doesn't really work like that.
[00:14:20] David Provan - cohost: Drew your thoughts on that kind of argument?
[00:14:22] Drew Rae - cohost: I buy the conclusion of the argument, but the logic that they apply here I think is a little bit incorrect. And I think because the logic isn't quite right, that sort of leaves room for misunderstanding. You know, ultimately, if you think about it, if you could replace a human with a perfectly automated version of a human, yes, you could actually substitute them.
[00:14:45] Drew Rae - cohost: The problem isn't fundamentally this idea of substitution and how interdependent the system is. The problem is that we are trying to replace the human for a reason, which is that we expect the automation to actually behave differently from a human. And when you've got a very complex system and lots of interaction, those small differences between a human and the automation, even if it's just that the automation is more reliable or more consistent,
[00:15:15] Drew Rae - cohost: can stuff up the whole system, because in what seems to be a mostly like-for-like behavior, actually the difference really, really matters.
[00:15:23] David Provan - cohost: Yeah, I think that's a good point there. I was going to say that the automation is never flexible enough, and, you know, one of the examples I was thinking of, which we haven't mentioned yet in relation to this paper, is the obvious example of the Boeing MCAS system, which is like, oh, we'll just introduce some automation to manage
[00:15:41] David Provan - cohost: the pitch and attitude of the aircraft so that the pilots don't get it wrong, right? You know, like overcompensate or undercompensate in different ways. We'll just let the software take care of it. And so, you know, that's an argument where the assumption is we're just gonna add in the automation to kind of replace the actions of the pilot.
[00:15:57] David Provan - cohost: And then we sort of see how, you know, that particular natural experiment kind of unfolded.
[00:16:05] Drew Rae - cohost: Mm-hmm. David, notice how even then you sort of said, you know, the automation's not as flexible as the human. And I think whenever we make those sorts of assumptions, it actually almost like weakens the argument.
[00:16:17] Drew Rae - cohost: 'cause in future automation becomes more flexible. And we think, okay, we've solved that now. You know, and, and it, it's not that the automation is like more flexible or less flexible or smarter or less smart than the human, you know, in different contexts. It can be all of those things. But what really matters is we are changing the nature of this interaction.
[00:16:38] Drew Rae - cohost: You know, if we make the automation super flexible, that doesn't fix the problem; now we make it harder for the human to predict the automation. If we make the automation inflexible, then we've got the problem that, you know, the human has to be able to somehow direct it. And we'll get onto it a little bit later, I think, where they really sort of almost predict MCAS in talking about what actions the human operator is able to take in response to the automation.
[00:17:03] Drew Rae - cohost: But the sort of key thing here is that, yeah, the moment you divide it up and you just try to analyze the human behavior or analyze the automation, you lose the understanding of where the safety is coming from and what's necessary for it to be safe. And then they break this down in the next sections into some sort of design principles, almost, for what we should be looking at to know whether it's safe.
[00:17:25] David Provan - cohost: Yeah, Drew. So I think it's a good segue into that section, because the paper doesn't really argue in any way whether we should have more automation or less automation. What the paper's arguing is how do we design for the coordination, irrespective of the level of automation or human involvement.
[00:17:43] David Provan - cohost: When you've got, you know, humans and automation working together, then you have to design for that, and the paper would suggest the way to design for coordination is through two means: one is observability and the second is directability. So should we talk about those two things as sort of more of these principles for the design of coordination between humans and automation?
[00:18:03] Drew Rae - cohost: Yeah, so observability: they use the human analogy because it's sort of the easiest way to think about it to start with, which is, if you've got two humans working together, then they've got a lot of common ground that lets them coordinate with each other. And they mention a couple of specific things.
[00:18:21] Drew Rae - cohost: One of them is they've got a shared representation of the problem state. So they both know what it is that they're working on. And then the other is they've got representations of the activities of the other person so they know what the other person is doing. So you can imagine like two surgeons in an operating room, they can both see the patient, so they've got a shared representation of what they're dealing with.
[00:18:42] Drew Rae - cohost: The same patient. And they're talking and they can watch each other's hands and each other's body language and where their eyes are. And so you can work together by watching what the other person's doing and fitting in around them. And then as things get more complicated, you need to have a better understanding, actually, look, what's the other person trying to achieve here?
[00:19:00] Drew Rae - cohost: What are their goals? What are their priorities? Is what they're currently doing working? Do they need help? Do they need to be left alone? You know, is the problem that the patient's getting worse and they need to be rescued? Or is the problem that they're going fine, they just need not to be interrupted?
[00:19:14] Drew Rae - cohost: All of those are things that are easy for humans to coordinate because humans have a pretty intuitive understanding of other humans, and even if we don't directly read their brains, we watch their eyes, their body language, we talk to each other. All those things help us to coordinate.
[00:19:29] David Provan - cohost: I think the follow-on from that argument is that, you know, that's an open type of environment, and so that information and coordination comes at a relatively low cost, because I'm, you know, watching what the other person is doing, and they're a person and I'm a person, and I can infer, you know, their next steps and anticipate and all of these things.
[00:19:50] David Provan - cohost: And then it goes on to say, but when you have people and automation working together, then that ability for common ground, and for the person to actually understand what problem the automation is working on and how it's going about what it's doing and what it might do next, is sort of less observable than a human-to-human kind of coordination activity, which, you know, makes sense.
[00:20:13] David Provan - cohost: You know, I dunno what my computer's working on in the background at the moment. And I have no idea what it's gonna do next.
[00:20:21] Drew Rae - cohost: Yeah. David, you do a lot more traveling than I do, so this is only something that I've seen recently, but maybe it's been around for a while: little robots running around hotels doing room service.
[00:20:33] Drew Rae - cohost: So, yeah, at multiple hotels I've been to recently, they've just had these robots wandering around instead of humans going to the hotel rooms. And one thing that these robots do is they constantly announce their own behavior. So the robot says, I'm waiting for the elevator, or I'm pausing until the path ahead of me is clear.
[00:20:52] Drew Rae - cohost: I'm trying to navigate down the corridor now, and it's trying to do that thing that humans do much more openly, which is we use our body language and the direction of our eyes and so forth just to communicate our goals and intentions so that we just don't run into other people in the corridors. Yeah, a robot doesn't have a face or eyes, so it's gotta just constantly announce what it's doing so you can understand why it suddenly stopped.
[00:21:16] Drew Rae - cohost: 'cause it doesn't see the path clear, you know? If it didn't tell you, you'd wonder: is it broken down? Is it waiting for me? Am I waiting for it? And you'd just sort of both stand there at an impasse.
[00:21:25] David Provan - cohost: So this idea of observability is kind of about knowing what the other agents, either machine or human agents, in the system that I interface with through my tasks and activities, are doing and how they're doing it, so I can observe that quite easily and then anticipate what they might do next and how they're gonna interact with what I'm doing.
[00:21:47] David Provan - cohost: Again, in this kind of shared problem, shared goal type of space. And the paper talks about how data availability does not equal informativeness. So this idea that it's not just about knowing what the machine did, or even what it's doing right now, because those are kind of really simple presentations, like, you know, data logs of
[00:22:09] David Provan - cohost: what the automated system did. They're saying we actually need far more comprehensive working information about what's going on and how the system operates.
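As a rough illustration of that informativeness point (this is our own sketch, not anything from the chapter, and the robot, fields, and messages are all hypothetical), here's a small Python example of an automated agent exposing an interpreted status, what it's doing, why, and what it expects to do next, rather than a raw data log, a bit like the hotel robots Drew described:

```python
from dataclasses import dataclass

@dataclass
class AutomationStatus:
    """A human-oriented status report rather than a raw data log."""
    current_goal: str    # what problem the automation thinks it is solving
    current_action: str  # what it is doing right now
    rationale: str       # why it chose that action
    next_step: str       # what it expects to do next

class DeliveryRobot:
    """Toy automation that announces its own state, hotel-robot style."""

    def __init__(self) -> None:
        self.destination = "room 214"  # hypothetical task

    def status(self) -> AutomationStatus:
        # Expose interpreted state, not just sensor readings or rule traces.
        return AutomationStatus(
            current_goal=f"deliver an order to {self.destination}",
            current_action="waiting for the elevator",
            rationale="the path ahead is blocked",
            next_step="resume navigating the corridor once the path is clear",
        )

if __name__ == "__main__":
    s = DeliveryRobot().status()
    print(f"I am {s.current_action} because {s.rationale}; next I will {s.next_step}.")
```

The only point of the sketch is that the interface reports the automation's goal, rationale, and next step in terms a human partner can act on, not that these particular fields are the right ones.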
[00:22:20] Drew Rae - cohost: Yes, so remember this is 2002, so the state of the art of AI at the time was expert systems, which are fairly different to the way we do most AI today.
[00:22:32] Drew Rae - cohost: And so they're using the example that it's not enough for an expert system to just constantly tell you all of the underlying rules that it's applying; that doesn't really give you the right level of visibility or understanding of what it thinks the current state is. And I think that's only become even harder with, you know, a lot of the AIs we use at the moment, particularly LLMs, which are based on machine learning models that are basically just, underneath, a whole set of
[00:22:57] Drew Rae - cohost: networks and weights. So there is no intuitive or obvious internal state for them to share with us. And you may have noticed, if you've ever asked ChatGPT to explain why it made its decision, it doesn't actually search its own memory and tell you why it did something. It's forward looking and just recreates a plausible explanation.
[00:23:18] Drew Rae - cohost: That is not the truth at all. So, you know, a simple example: you just ask ChatGPT to give itself a name, and then you ask it, why did you pick that name? It'll fabricate an amazing explanation. And then you say, I don't believe you, and it'll give you an entirely new explanation. You know, it's not even capable of being honest about its own reasoning and thinking and the state that it's in, which makes it really hard to work with as a partner when things start to go wrong.
[00:23:46] Drew Rae - cohost: You ask, why did you just do that? It doesn't know, so you can't tell it how not to do it next time.
[00:23:51] David Provan - cohost: And I think with AI, you know, it's like, why did that automation do that? Even if it's a mode error or something like that, with a hardware kind of system you can actually look at the design and, you know, the construction, and go, ah, well, it was programmed like this, or it was wired like this, or,
[00:24:08] David Provan - cohost: you know, the limitations of the sensor were this, and you can probably find the sort of failure mode. Whereas I think with AI, it doesn't sort of have engineered failure modes, right? So I think that's a really interesting insight, that you actually can't know what it's doing and how it's doing it.
[00:24:25] Drew Rae - cohost: Yeah. So, so it's really hard to predict what sort of mistakes it might make. Or when it might make those, or how it might react to your own actions.
[00:24:33] David Provan - cohost: So one way to design for these systems is observability: how are the different agents in the system gonna know what the others are working on, and how they're doing it, why, and what next?
[00:24:44] David Provan - cohost: And then the second way to design for it is this idea of directability, or the section heading in the chapter is Directability: Who Owns the Problem? And Drew, for me, this is basically who gets to decide, ultimately. Is it the human, in, I'm gonna say, control, or in charge, or responsible for overseeing the functioning of the whole system?
[00:25:07] David Provan - cohost: Or is it the automation that's actually taking control and is responsible for overseeing the functioning of the whole system? And I think this, this paper argues that, you know, that needs to be, you know, thought through.
[00:25:18] Drew Rae - cohost: Yeah. I feel a little bit picked on by this paper, because I think actually, at about the same time that this was being written, I might've had one of my own papers about operators in the loop and who makes the final decision.
[00:25:33] Drew Rae - cohost: But what they're criticizing here is that it can't just be a question of who has final control, which is what I at the time thought the question was. But they said it's not enough to just say, oh, the human can always turn the automation off, or the human can always override it, because we are talking about systems that are operating quickly and in complex environments. The human's gotta be able to give better direction than simply no to the automation.
[00:26:01] Drew Rae - cohost: It's gotta be able to give it instructions that change its behavior. You know, if we are gonna have cars that rely on highly automated systems, the choice can't be either trust the system or turn off all the driver assistance. There's gotta be a middle ground of allowing both to have input, where we get the benefit of the automation
[00:26:22] Drew Rae - cohost: but also the ability to steer it.
[00:26:25] David Provan - cohost: Yeah, I think that's a really good point, and it's a nuance that I hadn't thought too much about, because, like you, it was like, well, if I don't understand this automated function, or it doesn't seem to be doing what I think it should be doing, then I can just flip it off.
[00:26:40] David Provan - cohost: And I think, you know, some things are designed like that, like an emergency stop or a bypass or a, you know, shutdown. But I think this paper makes a really good argument, which is that actually our automated systems should be far more flexible than that. So I might be able to adjust, you know, its functioning.
[00:26:56] David Provan - cohost: If I know enough about how it's functioning and why it's functioning, and I realize that the automation can't understand context and situation, then I should be able to make adjustments for context without just having to kind of press the red button.
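To make that directability idea a little more concrete (again, this is an illustrative sketch of ours, not a design from the paper; the autopilot-style parameters and constraint names are made up), here's a toy Python example where the human can tune or constrain the automation's behaviour instead of only having the red button:

```python
from dataclasses import dataclass, field

@dataclass
class SteerableAutomation:
    """Toy automation that can be steered, not just switched on or off."""
    enabled: bool = True
    target_altitude_ft: float = 10000.0
    max_pitch_deg: float = 15.0  # a hypothetical limit the human can tighten
    constraints: dict = field(default_factory=dict)

    def disengage(self) -> None:
        # The binary option the chapter warns against relying on alone.
        self.enabled = False

    def direct(self, **adjustments) -> None:
        # Directability: the human adjusts how the automation behaves,
        # injecting context the automation cannot sense for itself.
        for name, value in adjustments.items():
            if hasattr(self, name):
                setattr(self, name, value)      # tune an existing parameter
            else:
                self.constraints[name] = value  # record a new constraint

if __name__ == "__main__":
    autopilot = SteerableAutomation()
    autopilot.direct(max_pitch_deg=5.0, avoid_nose_down_trim=True)
    print(autopilot.max_pitch_deg, autopilot.constraints)
```

Disengaging is still available, but the direct call is the middle ground being described here: the human injects context, for example a gentler pitch limit, without throwing the automation away entirely.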
[00:27:13] Drew Rae - cohost: Yeah. And, and this is where we sort of get back to examples.
[00:27:15] Drew Rae - cohost: Like MCAS: when we get to the point where there are so many different systems that are providing input into the orientation of the plane, the solution can't be, when something goes wrong, okay, we've gotta switch things off until we've made sure we've switched off the right system to retain manual control.
[00:27:34] Drew Rae - cohost: And similarly, the solution when we're using something like an LLM can't be just, oh, if you don't trust ChatGPT, don't use ChatGPT. You know, people are using it. We've gotta be able to find ways that we can give it instructions, and that we can trust it to follow those instructions reliably.
[00:27:54] Drew Rae - cohost: Just, you know, declaring it to be untrustworthy doesn't solve the problem. And similarly, asking humans to just act as a monitor for the AI (you know, the driverless car is safe as long as you are there, constantly paying attention, ready to take over control) is an unrealistic ask of drivers. What's the point of having it
[00:28:11] Drew Rae - cohost: driverless if we rely on the human being just as vigilant as they were before?
[00:28:16] David Provan - cohost: And you mentioned that this paper sort of predicts a little bit the Boeing incident we mentioned. There's a section, I'll just read a sentence out, where they talk about pilots in highly automated commercial aircraft that have been known to simply switch off
[00:28:30] David Provan - cohost: some of the automated systems in critical situations because they have either lost track of what the automation is doing or cannot reconcile the automation's activities with their own perception of the problem situation. I think that's exactly the MCAS situation, which is, well, the plane is doing this right now, and that's not what I think the plane should be doing.
[00:28:48] David Provan - cohost: You know, diving and losing altitude, you know, shortly after takeoff or something. So clearly I can't reconcile that, that's not right, I'm just gonna switch it off, right? As a way of reclaiming their understanding and control of the situation. And they're sort of talking about those sorts of
[00:29:04] David Provan - cohost: human-automation interfaces as kind of completely uncooperative, and it's not the way that we should be designing for, you know, human-automation coordination.
[00:29:15] Drew Rae - cohost: Yeah. Particularly if, you know, we've gone from a middle ground of some degree of support for the human.
[00:29:21] Drew Rae - cohost: And we are shifting it now into a binary where most of the time the human gets heaps of support, but we're expecting, in an emergency, for them to drive totally manually. It's like, you know, can you imagine teaching someone to drive a car today with all of the current driver assist, but the instruction is, oh yeah,
[00:29:37] Drew Rae - cohost: and if ever you're out on the highway and something goes wrong, you're gonna have to learn how to drive stick shift in the next five seconds, 'cause we've taken away everything. Yeah. There's gotta be ways of allowing all the automation to keep working but to be able to retain control, and that's a really difficult design problem.
[00:29:54] Drew Rae - cohost: Nothing in this paper pretends that it's easy. It's just basically warning us against simplistic interpretations of the situation.
[00:30:01] David Provan - cohost: Yeah, it's kind of saying, you know, at a high level, this idea of observability and directability is: make sure, if you're designing these systems, that you provide, at least on the human side, as much transparency and insight into what the automation's doing and why and what next
[00:30:19] David Provan - cohost: as you possibly can, and then provide the humans in the system with the opportunity to flexibly adjust, you know, their interface with the automation based on where they think there's divergent problem understanding or, you know, situation assessment or whatever the case may be.
[00:30:38] Drew Rae - cohost: So it's not a long paper, David, and we've pretty much got towards the end of it. Is there anything else you wanted to say about the content of the paper before we get to takeaways?
[00:30:47] David Provan - cohost: No, I think only that there is some further talk about design, and just sort of that cautionary tale of, you know, when you're designing an automated system, it's not just about designing the automated system, right?
[00:30:56] David Provan - cohost: Like, you can't just say, oh, I'm gonna put a light curtain and an automatic guard on this piece of equipment, and come along and design that. They're saying, you know, the design process has to be about designing for team play and integration. So you have to think of the design process as being about more than just
[00:31:13] David Provan - cohost: designing the technical automation component.
[00:31:17] Drew Rae - cohost: Yeah. So it's basically saying we've gotta invest not just in the automation part of it, but invest in the playing-together part of it.
[00:31:27] David Provan - cohost: So should we do some practical takeaways, Drew?
[00:31:29] Drew Rae - cohost: Sure. So I think the first one, and this is really the heart of this paper, is just to avoid the trap, when talking about automation, of talking either just about the automated part of it or just about the human part of it.
[00:31:43] Drew Rae - cohost: We can have better, clearer conversations when we always make sure we've drawn the boundary so that it includes both. So, you know, there's no point in talking about are driverless cars better than human drivers? Because there's no such thing in isolation. The question is, what's the safety of our overall traffic system that has a mix of cars with different levels of automation in it, working alongside each other?
[00:32:08] Drew Rae - cohost: You know, driverless cars are not driving in isolation. They're on the same roads as entirely human-driven cars and cars with driver assist, and it's that interactive system where everyone needs to be able to play well with others. You know, similarly, you can't isolate, you know, is ChatGPT better than a human at doing something.
[00:32:24] Drew Rae - cohost: Humans are never doing stuff without assistance, and ChatGPT is never doing stuff without a human. The question is, you know, how does a human using ChatGPT compare to a human using a different tool?
[00:32:34] David Provan - cohost: And recognizing that that human-plus-ChatGPT system is a completely new system from a human-plus-human system. So, you know, we can't just look at the change of the ChatGPT component.
[00:32:45] David Provan - cohost: We have to look at, you know, how the system's changed. Should we keep going?
[00:32:49] Drew Rae - cohost: Yep. Second one is that when we are designing or purchasing automation, we need to deliberately consider those team player aspects, and in particular the two properties of observability, so can the human understand what the automation is doing, and directability:
[00:33:08] Drew Rae - cohost: can the human steer the automation rather than just having a binary choice of using it or turning it off?
[00:33:14] David Provan - cohost: And so the third one here, you've got, Drew, is, like, following on from that: you know, just having the ability of a person to turn off automation, or to override it, or to ignore the technology, doesn't make it safe.
[00:33:27] David Provan - cohost: So this idea of, well, if this system plays up, the pilots can just turn it off, you know, doesn't make it safe without the observability and directability component. So we still never want to have an automated system where we haven't designed how our people are gonna effectively work with it.
[00:33:45] Drew Rae - cohost: Yeah.
[00:33:46] Drew Rae - cohost: One of the, unfortunately, still common things that happens is people talk about automated systems as advisory only, as if the human is going to selectively and correctly ignore the automation whenever it's wrong and follow it whenever it's right. That's just putting humans into an impossible position.
[00:34:05] Drew Rae - cohost: If we expect a system to give humans advice or to work with humans, then it has to be trustworthy to the level that we expect people to use it.
[00:34:13] David Provan - cohost: Great. Drew, any other practical takeaways that you can think of?
[00:34:16] Drew Rae - cohost: Uh, I think that's it from me. As I said, it's not a long paper. It's just, I think, a useful framework to
[00:34:22] Drew Rae - cohost: help avoid some of these naive argument traps that lead us into very binary questions about automation, and it gives us a slightly more sophisticated way of thinking about it.
[00:34:32] David Provan - cohost: Yeah, I like that too. That was my sort of takeaway is that, you know, the answer isn't, or the question isn't even, should we have more automation or less automation?
[00:34:39] David Provan - cohost: The question that this paper is more asking and trying to answer is: how can we design for more effective human-automation coordination? And, you know, I think if we assume that the reality of our world going forward is more AI, more automation, more technology, then changing over to that other question of, you know, how can we design to make this all go well, is I think the work that this paper does.
[00:35:06] David Provan - cohost: And it's kind of nice that, you know, the principles and ideas from almost 25 years ago are sort of still playing out for us now, and there's still some good guidance here.
[00:35:16] Drew Rae - cohost: Yeah, I think we'll probably, in the next few episodes, pull up a couple of more current papers. So these ideas of observability and directability I think are useful, but people have done a lot more work in expanding and pinning down those properties of a good autonomous system that plays well with others,
[00:35:35] Drew Rae - cohost: into more than just those two overarching concepts.
[00:35:38] David Provan - cohost: Yeah, absolutely. So Drew, the question that we asked this week was sort of the title of the paper: how can we make automated systems good team players?
[00:35:46] Drew Rae - cohost: At least from this paper, the short answer is: make sure that the automation is observable, and make sure that it's directable.
[00:35:54] David Provan - cohost: That's it for this week. We hope you found this episode thought provoking and ultimately useful in shaping the safety of work in your own organization. Join in the conversation on LinkedIn or send any comments, questions, or ideas for future episodes to feedback@safetyofwork.com.