Operator Training: When the Subtask Is the Whole Task
Throwing operators into full simulator scenarios sounds thorough, but it can mask the one critical subtask they actually need to master. Human factors engineer Dave Strobhar argues that effective operator training starts by identifying which subtasks carry the highest consequences — loss of containment, asset destruction, major downtime — and drilling those specifically before integrating them into broader scenarios. Using real examples of failed steam-system isolations and a misallocated $500,000 simulator budget, Strobhar makes the case for focused, measurable training objectives over checkbox exercises. The goal isn't to simulate everything. It's to ensure operators get the one decision right when it counts.
Transcript (edited for clarity)
Traci: Welcome to the Operator Training Edition of Chemical Processing Distilled. This podcast and its transcript can be found at chemicalprocessing.com. I'm Traci Purdum, editor-in-chief of CP, and joining me again is Dave Strobhar, founder and principal human factors engineer at Beville Engineering. Dave is also the founder of the Center for Operator Performance and the Operator Training columnist for Chemical Processing. Hey, Dave.
Dave: Hey, Traci. How are you doing today?
Traci: Doing well, thanks. I'm looking forward to diving into today's topic. We're going back to one of the training guidelines we examined a few episodes ago. You mentioned that there's a belief that training should focus on whatever task needs to be performed — throw the student into the simulator and have them do it. However, there may be a subtask that is the critical element for success, and that's where training time should really be spent. When operators are thrown into a full-simulator scenario right away, what are the most common failure points you observe, and how often do they trace back to a neglected subtask?
Full-Simulator Failures
Dave: What you tend to see is that operators fail on that subtask, and the whole scenario unravels from there. When you put someone in the simulator and run them through the full task, they may get a small subtask wrong but still pass overall — the trainer might say, "You got most of it right." The problem is, that one piece may be the only piece they had to get right.
I can point to two separate events where a utility plant was suffering a steam failure. In both cases, the utility operator controlling the boilers had the ability to isolate their section from the rest of the plant. The idea is: keep the boilers up and running, let the rest of the plant collapse if it must, and preserve the equipment you'll need to restart. In both instances, the operators failed to isolate in time. As a result, the entire complex came down — all the boilers tripped, no steam was available, and they couldn't restart without bringing in a portable package boiler. A complete mess.
That one subtask — isolating from the rest of the complex when pressure drops below a certain threshold — was the piece they had to get right. And they didn't, twice.
Identifying Critical Subtasks
Traci: So how do you identify which subtasks are critical? How do you distinguish critical from merely important?
Dave: There are a couple of ways. The primary one: look at the consequence of failure. You see this in process safety analysis, alarm management, and human behavior research — the most revealing question is simply, "What happens if they don't do it?"
That surfaces the real cost of operator error. If the answer is, "The entire complex shuts down," that's your critical subtask. If the answer is, "The pump might cavitate and possibly damage a seal," that's only a probability. Failing the isolation task brings the complex down with certainty.
We use a rating scale: Will a failure result in a loss of containment or environmental incident? Will it destroy an asset? Will it cause significant economic loss, say from damaged catalyst? Is it a production delay? Or is it simply an efficiency issue? Working through that hierarchy identifies where to focus training.
In my steam example, if you have the operator repeat full steam-failure scenarios over and over, most of that repetition is spent on routine tasks — not on the isolation decision. That's the piece that needs the dedicated time and attention.
The Air Force and aerospace industry solve this with part-task trainers. Instead of spending money on a full-scope simulator to teach one specific task, you build a focused simulation of just that task and the variables around it. It doesn't even need to be fully interactive, since what you're really testing is the operator's decision-making process. Once they've mastered the subtask, then you integrate them into the full simulator.
Traci: My brother-in-law is a pilot and just completed three weeks of simulator training on a new aircraft. He's been flying for years, but he said it was grueling. Is that comparable to what chemical plant operators go through?
Dave: Unfortunately, most of the time it isn't. The challenge is that many organizations don't fully understand what skills operators actually need. There's a tendency to say, "We'll throw them in the simulator for a steam failure," without designing the training rigorously. As a result, it doesn't feel as grueling — because a lot of it is fairly routine and not tightly focused.
What pilots go through is intensive precisely because it focuses on critical tasks. Time that might otherwise go to lower-stakes activities is stripped out. That's training efficiency — don't spend time on things that won't actually improve performance. Focus on what will make or break plant safety and reliability.
What's the Trigger to Full Training?
Traci: At what point do you move a trainee from subtask drilling into the full integrated scenario? Is there a measurable trigger, or is it a judgment call?
Dave: It should always be a measurable trigger. Every subtask needs a learning objective — and by "objective," I mean something you can actually measure. In the steam isolation example, the metric is probably time. The operator has a defined window to make the correct decision. If they miss it, they fail. That creates a clear pass-fail threshold.
Most organizations don't operate this way. They'll say, "I want you to know how to isolate from the steam header," but "know" is vague. Knowledge is infinite — what does it mean for someone to know something? If you have a simulator, use it. Don't rely on gut feelings like "he seems to have it." Define the standard: "You need to initiate isolation within 10 seconds of the trigger condition." Did they meet it? If not, keep training. Tell them what went wrong.
You'll often find that operators hesitate because they're misreading signals — "That header pressure looked like it was coming back up, so I thought I had more time." The expert can step in and explain why that reading was misleading. That kind of debrief is exactly what focuses training on the subtask. Three Mile Island is a sobering example of what can happen when operators misinterpret instrument readings at a critical moment.
The same measurable standard should apply to the total task. "Respond to a steam failure" should have an objective endpoint: stabilize the steam header within 15 minutes of a major boiler trip. That gives a clear benchmark. Did they do it in time? If not, what slowed them down?
Getting Management On Board
Traci: How do you convince plant management and the people controlling the budget to invest in this kind of focused training, when they often just want to run through the scenario and check the box?
Dave: The check-the-box crowd is hard to reach — they're going to do what they're going to do. But for everyone else, I can point to a real example.
A plant allocated $500,000 to build a simulator of their facility. When they approached vendors, they were told the full plant would cost two to three times that. They scaled back to a critical section — still too expensive. They scaled back further to a non-critical section. At that point, operations dropped out of the conversation entirely: "If it's not the section we care about, we don't want it." But management still spent the half-million on something that wasn't going to move the needle.
When you break down what they actually needed to train, it came down to one subtask: during a specific emergency, an automatic system would divert feed out of the reactor, and the operator could not override the diversion. There were four possible state combinations — does the situation actually require feed diversion or not, and does the operator agree with the automatic system's response or not? Each combination carries very different consequences. One wrong call could mean an unnecessary shutdown costing millions. Another could mean a safety or environmental incident.
That's a handful of variables in a small section of the plant. You do not need a high-fidelity, full-scope simulator to train that decision. A part-task trainer built specifically for that scenario would cost a fraction of what they spent.
That's how you make the case to management. You can say: "We're trying to save you money. We're going to focus your training dollars on what actually matters — safety, environmental risk, asset protection, production continuity. We'll get there faster, operators will qualify sooner, and we don't need a million-dollar replica of the plant to do it."
The alternative is what happens when operators aren't trained on the right things. The entire complex shuts down because no one hit the isolation switch in time. That's real downtime. Those are real dollars.
Traci: Appealing to the bottom line tends to get attention.
Dave: It does — though I'm always careful about the order. Safety and environmental risk come first, then assets, then production. But realistically, different people in management weight those differently. And on safety specifically, you'll sometimes hear, "That'll never happen to us." It's a hard thing to argue against — until it does.
Traci: Is there anything you'd like to add?
Dave: Just one thing. The assumption underlying all of this is that the plant is at least trying to train operators on the right tasks — and that's actually more than a lot of plants are doing. So I don't want to be dismissive of the effort.
But the real lesson is to stay anchored to the question: why are we training this? When you push on that, you tend to land on the subtasks — because the broad task, "respond to a steam failure," contains a lot of material that operators already know or that isn't going to determine the outcome. What we want training to do is sharpen performance on the things that actually matter.
Traci: Well, Dave, thank you as always. To stay on top of operator training and performance, subscribe to this podcast via your favorite platform. And visit chemicalprocessing.com for more tools and resources. On behalf of Dave, I'm Traci, and this is Chemical Processing Distilled — Operator Training Edition. Thanks for listening.
Dave: Thanks, Traci.
About the Author
Traci Purdum
Editor-in-Chief
Traci Purdum, an award-winning business journalist with extensive experience covering manufacturing and management issues, is a graduate of the Kent State University School of Journalism and Mass Communication in Kent, Ohio, and an alumna of the Wharton Seminar for Business Journalists at the Wharton School of Business, University of Pennsylvania, Philadelphia.
Recent Awards:
2025 Eddie Award for her column "Lax Regulations Burn Rivers"
2024 Jesse H. Neal Award for best podcast Process Safety with Trish & Traci