On today’s episode, we discuss whether we can prepare for automation by studying non-automated systems already in place.
We use the paper, Observation and Assessment of Crossing Situations Between Pleasure Craft and a Small Passenger Ferry, in order to frame our discussion.
“So, the rationale for a lot of the waterway rules, is about what different vessels are capable of.”
“Even if the automation can solve for the navigation, can it actually solve for the rest of the system properties, as well?”
“...When I look at a system like this, that we’ve explained, in a dynamic environment...I’m just not sure if it’s a system that you could automate.”
Observation and Assessment of Crossing Situations Between Pleasure Craft and a Small Passenger Ferry
Send us your experiences with automation and its unintended consequences to Feedback@safetyofwork.com
David: You're listening to the Safety of Work podcast, episode 38. Today, we're asking the question, can we get ready for automation by studying nonautomated systems? Let's get started.
Hey, everybody. My name is David Provan and I'm here with Drew Rae. We're from the Safety Science Innovation Lab at Griffith University. In each episode of the Safety of Work podcast, we ask an important question in relation to safety of work or the work of safety and we examine the evidence surrounding it. Drew, what's today's question?
Drew: David, today we're basically just going to be talking about the safety of automated and autonomous systems. The example that most people would be familiar with is the idea of driverless cars. A common problem with this automation is that we don't always really understand the current role of humans in the system.
Humans do all sorts of things as part of their job that never get formally recorded, never get written down, never get noticed until the human doesn't happen to be there, and then all of a sudden, we regret the absence of the human.
What some researchers advocate is that before we leap into introducing artificial intelligence or autonomous systems, we should do the basic work of understanding humans doing their jobs, so that we understand what it is we're trying to automate, or what it is we're trying to replace when we remove the human from the loop.
That's the setup for this episode. Is that a good idea? Can we get ready for automation by studying non-automated systems and the people inside those systems?
David: Drew, I really like the question and the paper that you've picked for this week, because I'm not that familiar with the literature around automated systems, automation, and artificial intelligence. But I'd consider myself more familiar with the literature around how people do their work in complex sociotechnical systems, on the people side.
I really like this study because it's basically answering the question that you've posed. We'll talk about it shortly, but there's a small passenger ferry in Norway that is proposed to be replaced with an autonomous ferry.
So this study says, well, let's understand what the system looks like before we introduce automation. Without that, the people involved would have been left scratching their heads about how to approach the automation process.
Drew: Yes. I think that they thought that they were getting in for something a little bit less complex than they discovered. The paper we've chosen is called, Observation and assessment of crossing situations between pleasure craft and a small passenger ferry. And as we're gonna find out, this is a specific small passenger ferry. It even has a name.
The paper was published in 2020 in the WMU Journal of Maritime Affairs. This is a little bit outside our normal stomping grounds, but a number of my international collaborators are in the maritime space. It's a fairly sophisticated subfield of looking at risk assessment.
The authors in this particular case (my apologies if any of them are listening and I totally mangle their Norwegian names) are Øvergård, Tannum, and Haavardtun. They were all from the University of South-Eastern Norway, and it's a collaboration between professors; there are no junior researchers in this case. The first author is an expert in work and organizational factors, and the other two are maritime experts, so we've got a mix of study expertise and subject matter expertise.
Basically, they wrote this paper as part of what I assume is an industry project that was contracted to the university. There's this small passenger ferry called Ole III. Ole III just has one job: it goes back and forth across a 100-meter-wide stretch of water, and does it dozens upon dozens of times every day. It's a strait between mainland Norway and a small island, with fairly busy traffic.
The journey only takes about two minutes, and so the captain on this ferry just goes back and forth, back and forth, back and forth. Probably not the world's most exciting job for an experienced captain. So the plan is to replace poor little Ole III with some sort of automated craft.
David: Yeah, I like this. When I was looking at the passenger ferry, I was thinking about roll-on/roll-off ferries and all this sort of stuff. There's a photo in the paper of this little yellow boat with two little bench seats in the front and a single driver looking at a little windscreen. I think that will be a fantastic job for two months of the year in summer, which is when the study was done. The study was done between June and August. For the rest of the year, it might not be so pleasant.
Drew, interestingly, they talk about the rules that govern the interactions of vessels in Norwegian waters. This is the starting point, because they wanted to know how this vessel should interact with other vessels. Basically, they say, well, there's a very clear rule that vessels should give way to vessels on their starboard side. If a vessel is on your right, you slow down and cross after that vessel. Pretty straightforward.
But then there's a complication: the rules go on to say that, as far as possible, vessels should stay out of the way of commercial vessels. So pleasure craft should stay away from commercial vessels. And this tiny vessel, I think three meters by a few meters, is actually a commercial vessel. But the captain is going, well, are other people going to realize that I'm a commercial vessel, or are they going to treat me like a normal vessel?
And then there was a third rule around what should happen as well, where the authors in a couple of sections just say it's actually not really clear. It could be interpreted in very different ways by the different people involved in this system as to what they should do.
Drew: Yes. The rationale for a lot of the waterway rules is about what different vessels are capable of. It's very easy for a little sailing boat to tack back and forth and get out of the way of an oil tanker. It's really hard for the oil tanker to give way to the sailing boat, regardless of which side of the oil tanker the sailing boat is on.
The rules turn on which side you are on. But then they say, okay, commercial vessels basically get priority, typically because commercial vessels can't dodge and can't stop. But Ole can. Ole is just a tiny little boat. Ole can even go backward if he needs to get out of the way of other boats. And the assumption that the sailing vessels have more capability assumes that the person sailing the sailing boat is sober.
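As a rough illustration of the give-way logic Drew is describing, simplified to just the two rules mentioned so far, and with invented function and parameter names, you could sketch it like this:

```python
# Hypothetical sketch of the two give-way rules discussed above
# (heavily simplified; real collision regulations have many more cases):
# 1. pleasure craft should, as far as possible, keep clear of commercial vessels;
# 2. otherwise, give way to a vessel on your starboard side.

def must_give_way(own_is_commercial: bool,
                  other_is_commercial: bool,
                  other_on_starboard: bool) -> bool:
    """Return True if our vessel should give way to the other vessel."""
    if not own_is_commercial and other_is_commercial:
        return True   # pleasure craft keeps clear of the commercial vessel
    if own_is_commercial and not other_is_commercial:
        return False  # the commercial vessel gets priority
    # Between similar vessels, the starboard rule applies.
    return other_on_starboard

# A pleasure craft should give way to Ole III (commercial) even when
# Ole III is on its port side:
print(must_give_way(own_is_commercial=False,
                    other_is_commercial=True,
                    other_on_starboard=False))  # True
```

The captain's dilemma is exactly that the first branch only works if the other boat recognizes Ole III as a commercial vessel in the first place.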
David: Drew, you threw that in there casually, but the study did talk about a number of human factors, or factors associated with the other people in the system. What the study reported was that the test for getting a boat license in Norway only applies to people born from 1980 onwards. So for something like 40% of the people in Norway who hold a boat license, no one's really got any idea what the navigational capability of those pilots is.
Drew: You're being generous, David. I've read the paper. We know what the navigational capabilities of some of these pilots are.
David: There was another study that this paper referenced that said 45% of pleasure craft pilots, or captains (I'm not quite sure what to call them), report drinking immediately before or while driving their vessel. So your little throwaway about them being sober means that, generously, for half the people we're not sure about their navigational capabilities, and for half the people we're not sure whether they're sober.
Drew: I don't know what this says about summertime in Norway, but this does sound like a very relaxed waterway to be on. But it is very busy and potentially dangerous. It's not shallow. The channel gets to six meters deep. If you go overboard and you hit your head, or you can't swim, you're going to be in serious trouble.
Collisions are actually a serious safety issue. You were talking, David, about summer jobs: the data for this paper was collected over two months, from 10:00 AM to 8:00 PM every day, and it was the summer job of two second-year students who took it in turns to go on every single crossing during those hours.
David: I liked that, Drew. The data ran from the fourth of June till the fourth of August, so very much summer holidays. Like I said, 10:00 AM to 8:00 PM every day, and nearly 5000 two-minute crossings of this vessel, with a student alongside the captain taking note of all the other craft and all the situations that little Ole III found himself in.
Drew: The two students kept track of who was on the ferry and how many other vessels were crossing the path of the ferry from each side. The side matters because of the navigation rules. If they're crossing from one side, they officially have right of way. If they're crossing from the other side, they definitely don't have right of way, with the slight confusion that pleasure craft are supposed to be giving way to this commercial vessel either way.
The researchers kept track of every time the navigation regulations weren't followed and what the captain of Ole III did about it. And then they had the professors check this data to make sure the students were correctly interpreting and classifying situations based on the navigational rules.
David: Drew, like I said, there were 4802 crossings and zero accidents. I suppose the absence of incidents makes this a completely safe system? But underlying that, even though things went well all 4802 times, there were in fact some unsafe situations that Ole III found himself in.
So 3152 of these crossings involved other vessels. About two-thirds of the time, there was another craft involved that either responded to Ole III or that Ole III had to respond to. A total of nearly 7500 interactions with other vessels.
For those interested in keeping track: a bit over 6000 passengers, 1300 of those kids; around 60 times people needed assistance on or off the ferry, mostly very young kids or disabled people; and 4000 bikes. A lot is going on for the captain of Ole III in just making these two-minute crossings of this little strait in Norway.
Drew: It's important to point out that the purpose of this study is to decide how we're going to automate the ferry. One immediate thing we notice from that data is the captain wasn't just piloting the vessel and dodging other vessels. He was helping little kids onto and off the ferry. As soon as you start to automate it, you think, well, what's going to happen those 60 times when people, mostly little kids, need assistance?
David: And the same goes for the bikes and other situations. Where do people sit? What's the rescue process? Where do the bikes get strapped on? There's a lot going on in that system, like you said, Drew. Even if the automation can solve the navigation, can it actually solve for the rest of the system properties as well?
Drew: Over this time, there were 279 occurrences that got classed as incidents, or as deviations from the rules. In the vast majority of those cases, 229 out of the 279, even though Ole III has the right of way, the captain is slowing down or doing something so that he passes behind the other vessel. They classified these into very minor incidents, dangerous incidents, and critical incidents.
David: I think, Drew, in those 229, just to add some color for our listeners: once Ole III starts its journey, if it's on the starboard side of any vessel proceeding down that strait, that vessel should just slow down and wait until Ole III crosses its path, and then keep going. In these situations where Ole III slowed down, it might have been 50, 60, or 75 meters away from the other vessel when the pleasure craft just shoots across in front.
Depending on how aggressive the pleasure craft was, that probably determined how much Ole III had to slow down just to keep some separation.
Drew: Yeah, and it sounds like in most of these cases, it's not even that the vessel is shooting across in front; it just looks like it's on a heading that will cross his, and doesn't look like it's going to turn. The captain just preemptively, slightly adjusts his speed to make sure that he never gets close.
There were 39 incidents that were counted as more dangerous. Some of these were because the other vessel was going very fast. The other benchmark they used was if the captain had to do something more drastic and active to avoid a collision, like deliberately and suddenly throttling back to let the other vessel go past, or changing course.
And then there were 12 incidents that they counted as critical, where the captain had to very closely dodge to avoid a collision. Typical things in those cases were that he had to sound his horn, or had to actually reverse thrust and go backward to get out of the way of something else.
The paper uses a bit of odd language to talk about these, David. I don't know what the terms passive and active usually mean to you, but the way they use it in this paper is a passive strategy is an avoidance strategy. It may involve an actual active action by the captain, but it's not asserting or following the rules. It's just keeping out of the way of other boats.
It's like being timid, whereas an active strategy is more aggressive. Where the captain follows the rules and clearly signals his intentions. It often actually involves him almost like trying to bully the other person into following the rules, or at least to be very clear about what he's doing.
David: Yeah, I didn't quite follow the labels passive and active when I read the paper either, Drew. Because normally, I'd think about them as someone doing something or something is happening in the background.
But I think the way that you've just explained it is whether it's more assertive or submissive. Passive being the submissive action: even though I'm right, it doesn't matter to me, I'll slow down a little bit. And active being assertive: actually, I really don't want to reverse thrust here, I really don't want to swing this thing around, so I'm going to sound my horn, I'm going to speed up, I'm really going to be assertive. I don't know if that's the same interpretation, Drew, that you just reflected, but it was an interesting label, anyway.
Drew: Yes. Maybe let's go through when he uses each strategy, and that might give some sort of indication of what they're like. The idea is most of these other boats are pleasure boats. They're just people out on the water for fun. There are a few weird ones. I think there's a Viking replica ship that came across at one stage. A couple of times, there were people just fishing in the middle of the channel, and there was a kayak. Mostly, they're just people out on the water having fun.
They're behaving a bit erratically. They might not be paying attention. They might be going fast. The person operating them might be drunk or a little bit tipsy. There's some indication that a lot of these people either don't know the navigation rules, or certainly don't apply the rules that treat Ole III as a commercial vessel.
Basically, even though the captain has the right of way, these other vessels, for all these reasons, simply don't respect that right of way. You can think of that as them violating the rules of the road. But the main strategy used by the captain is basically to defuse the effect of these errors by other people.
So rather than strictly following the rules himself when they're breaking the rules, what he does is basically make sure that their not following the rules doesn't matter. If they're cutting ahead of him, he'll just slow down a bit and let them pass. If they look like they're going to get too close, then before he gets close to them, he'll just head the other way.
All those strategies get used when it's not particularly dangerous to make sure that it never gets to a dangerous state. But when it does get dangerous, the captain uses a very different strategy when he's really close to another vessel. The last thing he wants to do is be unpredictable. Because the risk is both boats try to dodge and they both hit each other because they've dodged in the same direction.
In fact, if he is about to collide with someone, he keeps his heading and speed constant so that it's very clear to the other boat what he's doing. And he sounds his horn to make it very clear to them that he’s steady, and it's their job to get out of the way. His way of avoiding danger is to be passive and timid. His way of dealing with danger is to be predictable and loud.
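The captain's two-mode strategy, as Drew describes it, could be caricatured as a simple decision rule. The distance thresholds here are invented for illustration; the paper doesn't give exact numbers for when the captain switches modes:

```python
# Invented-threshold caricature of the captain's strategy: defuse conflicts
# early and timidly, but once danger is imminent, be predictable and loud.

def choose_action(distance_m: float, on_collision_course: bool) -> str:
    if not on_collision_course:
        return "hold course"
    if distance_m > 75:
        # Far away: quietly make the other boat's rule-breaking not matter.
        return "slow down and pass behind"
    if distance_m > 30:
        # Getting close: more active avoidance.
        return "throttle back or change course"
    # Imminent: don't dodge unpredictably; hold steady and signal loudly.
    return "hold heading and speed, sound horn"

print(choose_action(100, True))  # slow down and pass behind
print(choose_action(20, True))   # hold heading and speed, sound horn
```

The counterintuitive part is the last branch: at close range the safest move is not evasion but predictability, so both boats don't dodge into each other.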
David: That's the challenge for automation there, because there were 39 times that were maybe dangerous, requiring some evasive action, and 12 times that required reverse thrust and sounding the horn. I don't know exactly, but say there are 50-odd occasions that I would think would be classified as active, and then these 229 deviations, which were more passive, just slowing down or something.
But those are just two categories. I wonder how much gray exists between them, and when the captain of the vessel changed strategies, from being passive to suddenly having to become active. I suppose it's an interesting coding situation for the researchers: whether it was clear to them where the boundary was in how the captain was deciding between avoiding danger and dealing with danger.
Drew: Yes. And I think that's one of the, well, I don't want to call it a shortcoming, but it's a weakness of this style of research. We're using low-skilled, inexperienced researchers so that we can collect lots and lots of data, but they can only record what they recognize and notice.
Most of what gets recorded and classified is in these broad categories. We don't, for example, have a narrative from the captain about what he's thinking as he goes back and forth: what he's got his eyes on, why he's doing particular things, what he's worried about, what he's not worried about.
David: Nevertheless—going back to the question for this episode—our listeners could just be thinking in their mind now, how would I design an automated system to make these decisions?
Drew: Yes. We're about to get onto that exact question. Before we do, I just want to point out one interesting point the paper makes. When I was reading it the first time, I thought I was really going to have to get my head around these Norwegian navigation rules. A lot of the paper refers to particular subsections of particular navigation codes, and [...] goes through how you interpret these rules in this situation.
But then it points out that actually, this only matters after a collision has happened. After a collision happens, that's when it really matters who had right of way. Before a collision happens, what really matters is, practically, what situation is the captain faced with, and what's the best action in those circumstances?
We know for sure that the best action is not to follow the rules because if you follow the rules while everyone else is dodging about, you are going to get hit, or you're going to hit them. We're relying on the captain as the skilled operator who makes things safe for everyone, even though rules are getting broken all around him.
David: Yeah. And our listeners would see a lot in that comment you just made, Drew, about work-as-imagined versus work-as-done: following the rules and getting injured is not as good as not following the rules and not getting injured.
Drew: Yeah. Let's get onto the automation strategies. The paper lays out three options. David, I don't know what your impression is, but I got the idea that the paper didn't really like any of the options they had. They certainly didn't come to a clear recommendation.
David: Look, I'm just not sure it's something our current automation can do. Like I said, I'm not an automation expert, and I'm sure there's a great deal of smarts in our automation. But when I look at a system like this that we've explained, in a dynamic environment with the weather, pleasure craft, people getting on and off and going back and forward, and people not really having clear rules about how the system works, I'm just not sure it's a system that you could automate.
Drew: The first option they give is: what does it mean if we just replace the captain with a system that is trained to know the rules and to follow the rules? The way the navigation rules are set up, if everyone follows the rules, everyone is safe. As long as you correctly classify what is and isn't a commercial vessel, and everyone agrees on that classification, then everyone following the rules is safe.
But we know from this study that that is just a ridiculous assumption under these circumstances. We cannot guarantee that the other vessels know the rules. We can't guarantee that they're willing to follow the rules. And we can't even guarantee that they're capable of following them. For example, this Viking replica vessel didn't have the visibility it needed to see Ole III or to navigate around it. We just have these varying capabilities. There's no way a fishing vessel at anchor knows how to dodge or is capable of giving way.
David: Drew, I definitely agree with the authors on this one that just replacing the captain with an autonomous vessel—assuming that everyone follows the rules—is actually going to be a really really dangerous situation.
Drew: Yeah. And I imagine here a driverless car on the road that just follows the road rules and thinks, well, I've got right of way. I'm not going to care about whether someone else is on the road and just how bad that would be.
The second option they give is: look, this strategy the captain is using is actually really effective, so what if we design something that can mimic the strategy? The trouble is, we'd basically need to make a decision to be conservative: essentially always in that passive mode, assuming that no one else is going to follow the rules and giving way to everyone. That would work for safety, but this is a busy waterway.
And if you've got little Ole III giving way to everyone, he's never actually going to get to the other side. He has to make some assumptions about what's reasonable and assume that some people are going to give way to him. Otherwise, it's just going to be hopelessly inefficient, dodging around to stay clear of everyone instead of focusing on what's supposed to be a straight-line, two-minute trip over and back.
David: Yeah, it's a scenario that would probably never happen in practice, but you could imagine a vessel just sitting there with sensors all around it, where anything it recognizes as a vessel within a couple of hundred meters results in Ole III not moving at all. You can just see this vessel never, ever leaving the shore on one side.
Drew: I imagine two college students on a kayak working out that they can chase Ole III right out of the waterway and down the coastline.
David: Okay. You've got a bit more of a fun-seeking or trouble-seeking mind then, Drew. It would be interesting to see what the automation would actually do. Would it just stop, or would it actually try to maintain separation distances? Could you actually herd Ole III into some kind of channel or onto rocks? Or would it decide the rocks on the far side are actually a vessel and never leave anyway?
Drew: The third thing they say is that what we really need here, and this is what vessels do in practice, is a way for vessels to communicate their intentions to each other. At the moment, that happens really informally. Some of it gets done by a basic understanding of the rules; that gives you some idea that helps you predict. And some of it just comes from watching the other vessel: you see whether they're paying attention, you see how they're reacting, and that conveys their intention.
Then some of it is done more deliberately, like when the captain holds a steady course and sounds his horn: he's clearly signaling his intentions and where he's going. They say that for the automated system to work, you'd need to replicate that informal signaling formally, and basically make the environment friendly for automation, where every vessel can signal its intentions to other vessels.
David: And Drew, this could be the strategy with unmanned aerial systems in commercial airspace and things like that where all of the aerial vehicles have got the ability to communicate with the others—basically their location, heading, and things like that. I suppose the basis of autonomous vehicles on the road would be that every vehicle can communicate with every other vehicle in some way that's smarter than just location. But even then, Drew, does that solve the kayaker?
Drew: No, and that's exactly the point they make. They say this system is basically the equivalent of every vessel filing a flight plan, or at least broadcasting the sailing equivalent of a flight plan to everything around it.
The trouble with that sort of thing is that the moment you have a pleasure craft that doesn't have the special equipment it needs to be part of that environment, it becomes really dangerous for that lone outsider to the system.
Sure, we can put a sophisticated system on Ole III. Sure, we can put it on all the other water taxis. Maybe we can even mandate that every new motorboat that gets built has one. But then someone's going to tow an old kayak down to the water, or pull out a sailboat that hasn't been fitted with the beacon, and suddenly it's invisible to the automated system.
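That "invisible to the system" problem can be stated very simply: an intention-broadcasting scheme only sees vessels fitted with a beacon. Here is a minimal sketch, with a hypothetical data model, of how the un-equipped kayak drops out of the picture:

```python
from dataclasses import dataclass

@dataclass
class Vessel:
    name: str
    has_beacon: bool  # fitted with the hypothetical intention-broadcast beacon

def visible_to_automation(traffic: list[Vessel]) -> list[str]:
    # The automated ferry only "sees" vessels that broadcast their intentions.
    return [v.name for v in traffic if v.has_beacon]

traffic = [Vessel("water taxi", True),
           Vessel("new motorboat", True),
           Vessel("old kayak", False)]

# The kayak simply vanishes from the automated picture:
print(visible_to_automation(traffic))  # ['water taxi', 'new motorboat']
```

The human captain's sensor suite (his eyes) has no such enrollment requirement, which is exactly the gap the paper is pointing at.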
David: Yes. So there are three approaches there to automation in the workplace. The first is just automating one component of the system, so just Ole III, the ferry, and assuming that you understand the rules of how the system operates.
The second is having a really passive strategy, which compromises the chance of the system actually delivering what it's designed to do. The third is trying to automate the entire system, so the entire system communicates. But then you're talking about changing an entire system just so you can remove one person from one ferry. Is that even an economically sensible thing to do?
For me, even with these three options, it just sounds like a big stalemate. And that was my comment earlier: I'm not even sure that you could, should, or need to automate this particular part of the system.
Drew: Yes. I don't actually know what happened. In fact, this is only a very recently published paper. I don't even think a decision has been made yet about what's going to happen to this particular ferry. But I think we've encountered examples of different systems that do adopt these different strategies.
I'm thinking, for example, some warehouses decide, okay, it's dangerous to have automated forklifts around with humans. We'll just get the humans out of there entirely, and this can be a robot friendly environment. The danger with all of those is people routinely get killed in humanless factories because they're really hostile environments when a human does have to go into them.
David: Drew, I can't help but think a pedestrian and cycle bridge might be the best approach here, if you want to remove Ole III and his captain from the system. But maybe the Viking ships [...]. Drew, practical takeaways from this paper. Are we ready to do them?
Drew: Yeah, I think so. The first one I'd like to throw in is just this: what they tried to do here was demonstrate the usefulness of studying humans in order to understand what we need to do with automation.
David: Yeah. You might think it's a simple environment: one little boat, back and forward across a 100-meter-wide section of water, every two minutes, for 10 hours a day. On the one hand, you think, back and forth, back and forth, back and forth; yep, okay, we can automate that, or we can change that. But it just shows you, when you study any system involving people, just how quickly it gets very complex.
Drew: Yeah, absolutely. I'm always fascinated by what appear to be small, simple jobs, and what we can learn by seriously paying attention to what they're really like.
David: So the second one, Drew, is the importance of the role of skilled and experienced humans in making complex environments safe.
Drew: Yeah. They make this point in the paper that we can think about this situation as caused by how dangerous humans are. That we've got these humans who are drunk, who are making mistakes, who are deviating, and who aren't following the rules.
But we can equally examine this situation as one human who is skilled at his job making the situation safe, and the importance of having that expertise: being able to have a system that is tolerant of mistakes, where his way of operating is basically friendly, polite, and compassionate toward the mistakes happening around him.
David: Yeah. Absolutely, Drew. Nearly 5000 crossings in this two-month period, and something like 270 or 280 times where he (I assume it's a he, but it could be a she) kept the system safe and made the system safe. And that's what people do in our organizations every day.
Drew: Then the third takeaway, which I've thrown in a little bit facetiously, is that we are in no way ready for driverless cars yet. If we can't get poor little Ole III going back and forth safely, how do we deal with an urban environment full of automated vehicles?
David: Drew, is there anything you'd like to ask our listeners in relation to this week's episode?
Drew: I've just got a simple one. I'd love to hear your stories about introducing automation. It doesn't have to be full automation, just any case of taking a job that was done by a human and computerizing it, or using technology to support it, with unintended consequences. What happened? And how does that fit with the kinds of things we've noticed in this paper?
David: That's it for this week. We hope you found this episode thought-provoking and ultimately useful in shaping the safety of work in your own organization. Please tell a friend about the podcast, leave us a review, or contact us on LinkedIn and let us know what you think. Send any other comments, questions, or ideas for future episodes directly to us at feedback@safetyofwork.com.