On today’s episode of Safety of Work, we wonder what regulators are supposed to do if safety emerges from frontline work.
We use the paper, How Institutions Enhance Mindfulness, to help frame our discussion.
“So, they talked about this collective mindfulness as emerging out of the five principles of high-reliability organization theory.”
“I was trying to interpret how much of this was down to national culture and how much of it was down to the research itself. And it certainly appears that in this situation, the primary regulator...the government regulator is the police.”
“Initially operators must learn and follow the rules. But to function effectively as operators, they can’t mindlessly follow the rules, because the rules are sometimes irrelevant or unhelpful, leading to necessary violations.”
How Institutions Enhance Mindfulness
David: You're listening to the Safety of Work Podcast episode 32. Today we're asking the question: if safety emerges from frontline work, then what are regulators supposed to do? Let's get started. My name is David Provan and I'm here with Drew Rae. This week we're going to jump straight into our discussion. Drew, what's today's question?
Drew: David, today's question comes from one of the big gaps in safety theory. We've got lots of theories that say that in complex operational work, safety is an emergent property that comes out of patterns of interactions between frontline workers and other things around them, like technology.
We can think of a surgical team that’s working together to do a difficult operation, or a line crew repairing downed power lines during a storm, or a flight deck crew helping to land and refuel a fighter jet. What keeps people safe isn't individual behavior or the organization or even something vague and nebulous like culture. It's the routines they have that are flexible enough to respond to circumstances but stable enough not to drift into danger.
The word often used for that comes from HRO theory and it's called Mindful Organizing. David, do you want to give our listeners just a quick reminder about HRO?
David: Yeah. Thanks, Drew. We spoke a few times on the podcast about HRO and people might recall episode 16 that we did on The Brady Report regarding the application of HRO or the recommendation to apply HRO in the mining industry. This idea of mindful organizing really seems to emerge with Karl Weick and Kathleen Sutcliffe as they expanded the HRO Theory in the 1990s.
They talked about this collective mindfulness as emerging out of the five principles of High-Reliability Organization Theory. This wasn't original thinking, Drew. In the early 90s, Westrum talked about generative organizations where people have this license to think, where they don't just follow rules and procedures. They actively seek out new information and they inquire deeply when their work circumstances change.
They also used terms like the protective envelope of human thought, which people would wrap around their work together. But then, Weick and Sutcliffe went on to publish a whole other paper on organizing for collective mindfulness, and they defined this collective mindfulness as simply the capacity to discover and manage unexpected events.
Even some of this school of thought wasn't new then. Go back to the early 1970s and you'll find talk about work being safe not because of these orderly systems and processes that people always follow, but sometimes in spite of these routines and procedures, because work is different every day.
Drew: The problem is that if we accept this as a way of looking at safety (and it sounds really nice that safety comes from people who are committed, thinking, spotting problems, and fixing problems), what does that leave for the rest of us who aren't doing the frontline work? How do we promote mindful organizing? What does a system that encourages mindful organizing look like? What do our safety processes look like?
In particular, what do our regulators look like? On the one hand, we've got all these safety theories that say safety comes from great frontline behavior, but then we've got our management theory that seems to be trending towards thinking that regulators regulate the systems and the systems regulate the work. That's a big contradiction. We don't have a good model for what a regulator looks like operating under an HRO model of safety.
David: Yeah, and I think it becomes very hard as you become removed from the frontline work to both understand it and to impact on it in a meaningful and constructive way. In the case of regulators, when I was reflecting on their role, they're only ever getting a snapshot in time into the work and the workplace that they're looking at. Their picture or their model of the work is always going to be very, very narrow.
Generally, they're always going to start by looking at a system or a safety case and then trying to match it to work through inspections and compliance activity, or they're going to be responding to an incident and making some hindsight-based normative conclusions about the way that work should be done. When we think about safety emerging from these spontaneous and complex interactions between people on the front line, it becomes very hard to understand how the regulator can (a) understand that, and (b) do something about it.
Drew: That's basically our question for today, and the paper we've chosen has that pretty much exactly as its title. It's called How institutions enhance mindfulness: Interactions between external regulators and front-line operators around safety rules. It's a very recent paper, from 2020, from a special issue of the journal Safety Science that was all about mindful organizing, and the authors are Ravi Kudesia, Ting Lang, and Jochen Reb.
The first author, Ravi Kudesia, is an early career researcher who's already got quite a respectable body of work on the topic of mindfulness. I got interested in his stuff not because of the mindful organizing bit, but because I was searching for work about mindfulness (which I find really interesting). A lot of the existing work is new-agey and ill-defined; it's not exactly clear what the philosophy is or what the mechanism is.
I love the fact that Kudesia links mindfulness and mindful organizing. He takes a really strong interdisciplinary research approach and his emphasis is on real-world qualitative investigations. Since he comes from a methodological approach that I like, using qualitative ethnographic methods that I like, I was expecting to agree with everything in his paper.
But actually, this is one of those papers where you read the introduction, you read the method, you think hurrah, then you look at some of their data and you think, what the heck is going on? We thought that since I agreed with the method, it was only fair that we talk about the paper.
David: Drew, the research was done in an explosive demolition firm in China. It looked like it was part of a broader project looking at interactions between frontline operators and regulators. The assumption behind this body of research was that, like we said earlier, frontline behaviors and interactions are shaped by the bigger system they're part of. If you can understand the relationship between the regulators and the workers, then you can maybe see some of that shaping in action.
This makes sense from a social theory perspective, where we talk about Giddens’ Theory of Structuration: systems simultaneously shape behavior, and that behavior in turn shapes the system, so we see this constant dynamic between the conditions around work and how the work feeds back into changing the conditions for the work next time. It makes sense that if a regulator interacts with a worker, then work can be shaped differently after that interaction has occurred.
Drew: For our listeners who have an opportunity to pick up the paper, it has a really good introduction and literature review. It doesn't use a lot of this social theory language; it talks about it in very plain, simple safety language. It presents this idea of mutual shaping between systems and behaviors.
After that, the method is very straightforward. It's something we're quite familiar with. They're doing interviews as their primary form of data collection, then they're doing some observation and document analysis to supplement what they find from the interviews.
David, it's a little bit top-heavy. They interviewed 15 people in total which is a reasonable number, but they started off interviewing the senior management and then the next layer of management. By the time they got down to the frontline, they actually only interviewed three blast crew and two regulators. It looks like most of the observational data were from following one of those regulators around and talking to them.
David: Yeah, Drew. I think we spoke about this in one of our earlier episodes about why people break rules, I think episode 2: sometimes you need to start by interviewing management to get the bigger picture of the organization, the context, the workforce, and the regulatory environment, so that makes sense. They actually spoke about trying to understand these complex interactions because of where the research was done, in China.
It appeared as though the city level regulators interacted with management, and those city level regulators control town level regulators, and those town level regulators were the ones that went out to the site and interacted with the operators. They're actually trying to understand these interactions at a company-type level and at a site-type level, but it did get pretty thin pretty quickly.
Fifteen interviews are okay for a case study, but because they actually had quite a lot of different levels of interaction, you're right. It was pretty thin on the ground for what we're interested in, which is the operator-level interaction with regulators.
I want to also say here, Drew, as we go forward for the rest of the podcast, I was reflecting on some of our listeners who probably are in China. I don't know what all the other countries around the world are like, but at least in Australia, frontline operators do not have very many interactions with regulators. In fact, some in their entire career may never see a regulator. Some would only see them once a year maybe for inspection activities.
I also thought, as I read through the study, you can sometimes substitute thinking about the regulator with actually the safety professional. Just for our listeners who go, our operators don't ever interact directly with the regulators, just think about how your operators might directly interact with your safety organization, and (I think) some of these findings apply.
Drew: I think that's a fairly fair model. I was trying to interpret how much of this was down to national culture and how much of it was down to the research itself. It certainly appears that in this situation, the government regulator is the police, and they very often have police coming onto the site. But then, the people we're talking about now as the regulators don't appear to be independent government employees; they appear to be employed by the organization. Sometimes the police come, and sometimes the police just check in with the company’s own regulators to see what's going on.
It seems to be sort of risk-informed. The more the police trust the regulator—the company-employed safety professional—the less the police come. The less that they trust the company to regulate itself, the more they use police as the regulators.
David: Drew, at this juncture in the paper, we've talked about mindful organizing and HRO, and now we're talking about regulation and police, which takes us to rules and compliance. I got to this point in the paper and, as I said to you before the podcast, I got lost in what the researchers were trying to do: whether they were looking at rules, rule enforcement, and the application of rules to work, or whether they were still trying to understand mindful organizing. Maybe I'll just ask you the question I asked you before: what was the research question the researchers were trying to answer?
Drew: I don't like to assume that I know what's going on inside a research team unless I've had a chance to speak directly to the people involved. But these are people across different countries. We don't know what the relationship was when the paper started, when the data was collected, or when they realized there was a special issue on mindfulness coming up and thought, maybe we can take some of this data and publish it as a mindful organizing paper.
There certainly seems to be a real disconnect between the layer of data collection, which is very straightforward and naive, asking people about rules, and a separate layer of analysis that reinterprets the data they have using the theory of mindful organizing. Regardless of what actually happened, I think a fairly useful way to track the paper is to start with this first bit about rules as a straightforward presentation of the data, then go to a phase two, which is how we would reinterpret this data using the theory of mindful organizing.
David: Thanks, Drew. I think that was useful for context, because we're just about to go and talk about rules for a minute and I didn't want our listeners to get lost. You might be thinking, I thought I was going to hear about mindful organizing and now you're talking about rules, but I promise we'll bring it back around before the end of the episode.
Drew, do you want to talk about the rules and how they talked about the four different key activities around rules? I actually don't mind some of that even if it's not in the context of mindful organizing.
Drew: This stuff is fairly heavily supported by data and direct quotes from their participants. It presents a model where the regulators are in charge of the rules and the blast crews are in charge of the activities, and it's less that the regulators’ job is to create and enforce rules, and more that rules are this framework of understanding that flows back and forth between the regulators and the people doing the work, in order to keep the rules always at the front of people's minds.
They talked about four key activities, and we'll explain each of these. They've labeled them as encoding, reinforcing, reinstating, and learning. For each activity, you've got to consider two things. One is content: how much people understand the rule and know what the rule is. The other is salience: how much the rule is at the very front of people's minds. The regulator's job is both to make sure that people understand the rule and to make sure that they're thinking hard about the rule.
The first process is encoding. This is how the operators first encounter the rules. When you first become a junior blast crew operator or a junior blast crew engineer, you've got to know what the rules are. You go through a formal set of training and then you get assessed on whether you understand the rules. This is mainly about content transmitting from one person's brain to another, to make sure everybody knows what the rules are. Sorry, David, did you want to jump in?
David: I just thought that's really about explaining to people how their work should happen. All things being equal, this is how work should occur.
Drew: Yes, precisely. Reinforcing is about checking up on people once they've started work to make sure that they follow the rules. This has got both content and salience. You can't guarantee that someone comes out of training always knowing exactly what they're supposed to do, particularly if they haven't done the work before, they haven't encountered the real-world context.
Also, you immediately get this counter-pressure after you come out of training about what the work is actually like. Reinforcing is making sure that as people experience other pressures of time, clients, and different ways of doing things, they still keep thinking about the rules they were trained in. Of course, sometimes that doesn't work. The rules get broken, and there's got to be a process for reinstating, putting those rules back into place.
There's a big emphasis in the paper, David, about punishment and about this idea that reinstating is not just about correcting people but also about escalating the consequences of doing things wrong.
David: Yeah, and I think some of this might be down to the case study and the cultural context of the research being conducted in China. We know that the cultural context there is very hierarchical and rule-driven, so given they were asking general questions about safety and rules, it didn't surprise me at all that this idea of reinstating was oriented more towards punishment and escalation than towards understanding, learning, and other approaches to reinstating that might be more typical in other organizations or cultural contexts.
Drew: Although, I think we need to be a little bit careful, even though they've translated it as punishment. I'm not an expert on Chinese culture, but it's clear from the data in this paper (and it matches other stories I've been told) that they have a very restorative approach to punishing people.
When we talk about punishing someone after an incident, it will be something like writing a letter of apology to everyone in the organization, or going in front of a big safety meeting of everyone on site and saying, I stuffed up.
After that, you just go back to normal. You've done your penance. You've admitted your fault. You apologized to everyone else, then you are just restored back to your position. It's very much about forcing people to take individual responsibility for what's gone wrong. After that, there don't seem to be lasting consequences.
David: The fourth area is learning, which we just touched on there. It comes after the reinstating phase and is about improving the rules themselves. The authors in this paper suggest that it's about adapting and refining the rules based on operational knowledge. What some of us would think of as looking at work-as-done and operators' experience, and then feeding that back into improving the rules for work going forward.
Drew: This is the point where I began to suspect that there are really two stories going on in this paper, because we've got the voice of the first author saying that learning is about the regulators learning. The regulators are actually finding out what's wrong with the rules, changing the rules, adapting the rules where they don't fit the organization.
But you look at the quotes in the paper, and the quotes are about the police reading books about blasting and asking questions of the operators while referring to the books. If the operators can't give the correct answer that's in the book the police have just discovered, then there are consequences. That's a very different sort of learning, so we've got this theory being superimposed on data which has a much blunter view of what learning is.
David: Drew, because we've got these two stories running through this paper, one about rules and rule compliance and one about mindful organizing, it gives a good opportunity to talk about these two different views mixed up in one paper. There's a quote there that you pulled out. Do you want to just talk about that?
Drew: I'd like to read the quote directly. This is the transitional point in the paper between rules and organizational mindfulness. It goes like this: “To even enter the operational realm, both literally through certification checking and more abstractly in terms of their competence, initially operators must learn and follow the rules. But to function effectively as operators, they can't mindlessly follow the rules because the rules are sometimes irrelevant or unhelpful, leading to necessary violations. But not all violations are of this type. Because of the physical intensity of this work, some violations reflect failures in self-regulation and dangerous shortcuts. This reveals a system in which regulators neither possess the wisdom to craft perfect rules nor do operators possess the expertise and single-minded dedication to safety to not need rules.”
Now, that presents a binary, and I think sometimes we treat it as just a choice between one of these things. You can either be a new-view person who thinks that regulators don't have the wisdom to craft perfect rules, or you can be an old-view person who thinks that operators don't possess the dedication to follow safety rules. David, your thoughts?
David: That's a really well-written paragraph, and I think it explains the messy middle, which is where we're always going to live, like you say, Drew. We're never going to have perfect rules, but we're also never going to have a situation where our workers have sufficient experience and perfect system-level information to always know how to make the best decision in the interests of their work and safety.
We are today, and in my opinion we're always going to be, in this messy middle, with these paradoxes that we spoke about in episode 30. We need people to display dependable role performance, which is rule-following, routines, and reliability, so that we can integrate our organization and know how people are doing their jobs. But we know that's never going to work in all situations, so we also need people to demonstrate spontaneous initiative and adaptability.
That's really the central theme of mindful organizing, or collective mindfulness: we can do our work as planned, we're going to be able to tell when we need to deviate from the plan, and we're going to be able to correct and maintain our operational performance before it's too late. That's always going to be a case-by-case, situation-specific thing, and in my opinion, Drew, that's our biggest central challenge for safety management: how do you create the conditions to get those decisions made well in your organization?
Drew: The argument the rest of the paper makes, which we're going to go through now, is that you have to have some sort of regulatory force for the collective mindfulness, the collective organizing, to happen. You can't just go totally hands-off, because then there's a strong risk of drift towards optimizing for properties other than safety.
Let's just go back through those four steps of rules, as the paper does, and reinterpret them in terms of mindfulness. If you start at the encoding stage, the very naive reading is that this is very top-down. The regulators are making the rules, and before we can let people work, we need to teach them what the rules are and make sure that they understand them. It certainly seems to violate the principle of deference to frontline expertise. But when you think about it, how do frontline experts get to become experts in the first place? They don't just magically know what is safe, particularly when they're starting off as junior people, when they're not experts yet.
The paper's argument is: sure, a lot of the stuff they need to know, they're going to learn from other expert frontline operators. They learn from doing, and they learn from being around people who are the experts. But they've got to start off with something, and the something they need to start off with is that form of encoded institutional knowledge. That's basically what rules are: encoding as much as we can into simple things that people can learn. That gives people a good framework to start off with, so that they're building their knowledge on that framework.
David: Drew, I like that. Like I mentioned, I've been reading more of the HRO literature recently and when that theory talks about deference to expertise, it actually says expertise rarely resides in one person. It resides between people and in groups of people.
In this situation, it's not deference to frontline expertise, it's deference to expertise, and that expertise sits across the institutional knowledge, which could be delivered in the form of trainers of new people, and also across the experienced operators. Thinking about that expertise sitting across both the institution and the experienced individuals is a good way to think about it.
Drew: You can certainly imagine two totally different models of training. One model has no deference to that expertise: it's basically taking the rules out of the textbooks and trying to train people with them. The other is the organization and the regulators capturing the knowledge that exists between people and the existing systems, and turning that into a simplified, codified form that they can teach new people with.
David: I remember, Drew, we did some work at one point with safety induction training, and we had taken a lot of care, in a regulatory-type role if you think about safety professionals that way, to design what we thought was the perfect induction training. We took it to a group of experienced operators and said, this is what the safety induction training is going to look like. I won't say what the feedback was (I'd have to put a little E symbol on the episode), but it was essentially a score of about one out of ten.
The feedback was: nothing you're going to tell people in that course is going to be useful to them in actually doing their job safely. Then we went through this process with experienced operators of completely redesigning the safety induction training for the particular site through the lens of the experienced operators. I think that's an example of what we're talking about here: what's the relationship between your institutional knowledge, or your ruleset, and your experienced operators?
Drew: I'm not going to give details here, but there was a set of inductions we were looking at on one of our projects that got mixed scores. The mix was pretty much this: 50% of the induction focused on some particular hand signals for moving vehicles and loads around on-site, and people loved having those hand signals in the induction, making sure that everyone knew how to signal to a spotter and how the spotter communicates with the person in the vehicle.
The other 50% was company standard information, which people hated because they thought it was useless and repetitive. That shows you that part of the induction was absolutely necessary. When you're the spotter desperately pumping your signal, you definitely want the other person to recognize what the signal is. That standardization and coordination is important for people on site.
Let's move on to reinforcement. Again, we start reframing this idea of blind reinforcement of the rules into the more HRO idea that operators are always making a trade-off between safety and productivity. The problem is that productivity always has a visible presence. You have an obvious sign of how fast the work is going and how much you're getting done. You've often actually got a physical client, or at least a personified client, that you can point to and say the client wants this, or the client can speak for themselves and say what they want.
Whereas safety is more nebulous, unless you have a regulator. The regulator or the safety guy creates this visible presence, a real person who represents the safety goal. That levels the playing field between productivity and safety. They're not just competing abstract forces; they're forces that have a voice in a person and their physical presence on site.
David: Yeah. That's a reflection not just on reinforcing at the individual operator level, the rule and compliance with the rule; it's about reinforcing system properties that create the conditions for rule salience and rule compliance.
Drew: David, I don't know what you think about this next one, about reinstating. I really think this one didn't fit at all, and it was a real stretch to try to say that the reinstating was mindful organizing. One of the HRO properties is preoccupation with failure: not just treating problems as local, but treating them as potentially representative of things wrong with the system that need to be corrected. Which is, sure, very HRO-type language, but none of the examples in this particular paper actually involve correcting problems in the system or the organization. The only examples are where someone makes a mistake and we make sure that everyone else on-site knows about it in case they're having similar problems, which is less about correcting systems and more just about making sure people are aware of the potential for error.
David: I don't think the way they talk about reinstating the rules matches the HRO principles. Reluctance to simplify, one of the HRO principles, would talk about going deep into the system to understand it, and clearly that wasn't what the quotes demonstrated; nor did they demonstrate any sort of commitment to resilience, which is another of the HRO principles that would normally fit in this sort of space.
I think this was just about dealing with non-compliance with the rule at the operator level. Even if we go back to the fourth element of learning and how that might fit, I think these two stories hit a bit of a dead-end at the end of the paper, where they didn't really quite get reconciled the way they needed to.
Drew: That's probably a good point to start shifting on to what we think about it in practical takeaways. Let's take the overall message of the paper as being about this interaction between regulators and frontline workers, something that mainly uses the language of rules for the interactions: training about the rules, reinforcing the rules, and finding and correcting when the rules don't get followed.
Then we look at that in an HRO context and ask: how does this help people achieve this collective mindful organizing? What can we learn if we're in an organization that is trying to have mindful organizing, or thinks that it has mindful organizing, but also feels a need to have this regulation as well? What can we learn about how to make that regulation fit in with the mindful organizing approach?
David: I think organizations need to think about what they're actually doing to create the conditions for mindful organizing. There are a few practical ways we can think about this. The first is presence: when we think about regulators, or like we said, safety professionals, needing to be present in the operation to keep that physical representation of the safety goal and to look at work and operations through this safety lens, I think we need to think about how often our safety organization interacts directly with frontline workers.
I think (at least in my research) there's been some drift towards the safety organization only interacting at the line management level, down to the supervisor, and really leaving the interaction with the frontline workers to be done by the supervisor. That can create a confused presence, with the safety message and the production message mixed together, particularly if there are some strong production pressures in that organization.
One of the things I've been reflecting on is that this paper would strongly suggest that safety organizations need to be observing and interacting with the frontline workers, to understand their work and to balance the messages that the workers are getting.
Drew: One thing I took away, which I've been thinking about a lot recently, is that there's really no obligation to be a purist in your safety philosophy, to grab hold of one view of safety and have to interpret everything through that lens. In particular, abandoning the position that all problems are human error doesn't mean you have to pretend that people never take shortcuts.
I think a lot of people who are in a new view of safety feel almost obliged to deny the fact that people make mistakes and actually do actively go out and do things that they would say themselves that they shouldn't do. The reality is that most people are much better at self-regulation than a lot of us admit. Most people can be trusted just to get along with the job and do it safely, but we are all better at regulating ourselves when there's some degree of accountability.
I am happy being left to get my work done, but I would actually like my supervisor to check in and make sure I'm making progress every now and then. I would like to be checked up on to make sure that I'm teaching well and not getting slack in the way I'm teaching. I think everyone does a little bit better with some of that checking up to help them self-regulate.
David: Yeah, there was actually a more recent paper, with Karl Weick as last author, I think from 2015, that talked about two other traits of organizations for mindful organizing. One was prosocial motivation and one was emotional ambivalence. The one you just spoke about there is this idea of prosocial motivation, which is my commitment to others and others' commitment to me. Where there are shared commitments around safety, mindful organizing between people is more likely.
Drew: And getting that balance right is really, really hard, because if you check up on people too much, they lose their individual motivation and just feel surveilled. It's something that anyone doing management or supervision can't take for granted; they have to be constantly working at it and reflecting on how well they're doing. Am I getting the balance right between letting my people just go ahead and do stuff, and sending the message that I'm providing that regulation function?
David: Drew, do you have a final practical takeaway?
Drew: The other takeaway would be for the regulators. I think regulators sometimes feel a bit bound by the framework of their jobs. One thing this paper shows is that even if you're in a position where your job is explicitly about rules—making rules, teaching rules, enforcing rules—that doesn't mean you can't hold a progressive view of safety and use it to interpret and improve your own work.
David: I have an invitation for our listeners, and some things I'd like to know. We talked a lot all the way through this episode about the paradox, the balance, the trade-off between rules, rule-following, and mindful organizing, or organizing for collective mindfulness. I'd love to know if any of our listeners have this as part of their safety management program: work happening to explicitly try to manage this balance or, as you said, Drew, intentional action to try to promote mindful organizing. I'd love to hear about what you're doing, how you're doing it, and how you're measuring and monitoring the impact that it's having.
Drew: That's a great question, David. I'd be very interested in the answers our listeners have. The question for this episode was if safety emerges from frontline work, then what are regulators supposed to do? At the end of the paper, David, what do you think the answer is?
David: I'm going to give you my thoughts, which I think are reasonably consistent with what's in the paper. Regulators need to understand the work, understand the risk associated with the work, and then work back from that to match it to the framework and the rules they apply. I suppose from there, the role of the regulator is to try to understand whether there's agreement and alignment around whether the risk that the operators are facing is acceptable or not.
That may match a rule or a regulatory framework, or it may not, or the activity may deviate from a rule. But if it doesn't create risk, then part of the regulator's mindset needs to be flexibility around those rules, because ultimately I think regulators want what organizations should want, which is risk being managed and people not being hurt.
Drew: Thanks, David. That's it for this week. We hope you found this episode thought-provoking and useful in shaping the safety of work for yourself and your own organization. As always, you can contact either of us on LinkedIn or you can send any comments, questions, or ideas for future episodes to firstname.lastname@example.org.