Podcast: Turn Training Hopes Into Measurable Success
In this episode, Traci and Dave focus on training evaluation as the final component of instructional system design. Dave explains that evaluation has two aspects: specific (assessing whether students learned what was taught in a particular course) and global (determining if training improves actual job performance).
The key insight is that evaluation methods should align directly with learning objectives. If objectives are correctly written to be objective and measurable, they define how students should be evaluated. Most companies fail at proper evaluation, relying on subjective assessments rather than objective testing.
High-fidelity simulators and process data can measure actual performance improvements in real-world transfer evaluations. However, evaluation should focus on system performance rather than individual blame.
Transcript
Welcome to the Operator Training Edition of Chemical Processing's Distilled Podcast. This podcast and its transcript can be found at chemicalprocessing.com. You can also download this podcast on your favorite player.
I'm Traci Purdum, Editor-in-Chief of CP, and joining me is Dave Strobhar, founder and principal human factors engineer for Beville Engineering. Dave is also the founder of the Center for Operator Performance.
Hey, Dave, what have you been working on?
Dave: Oh, a lot of different things, Traci. Trying to stay on top of all the changes in the industry.
Traci: And there's a lot going on, isn't there?
Dave: Oh, yeah. The whole AI thing, and everybody's trying to understand what that means, and yet still try to do the same thing we've been doing for years.
Traci: Well, you and I are wrapping up our series of podcasts that examine flaws in industry training. We have discussed instructional system design, job analysis, learning objectives, practice, time to train, and now we are going to talk about evaluations. So I want to have you explain to us what the evaluation phase means and what it consists of.
What Is Training Evaluation?
Dave: Well, there are two aspects of the evaluation. There's a specific and a global.
So the specific would be in relation to a particular course or a particular training effort that you're doing. And the evaluation is basically to say, "Did you succeed?" Too often, training is given, and it's just open-ended. "Well, I gave training and therefore they should know what it is that I taught them," without ever bothering to find out whether they really learned what you intended them to learn. So that's the specific.
The more global is, "Am I seeing an improvement in performance on the job?" So yes, they may have learned it, assuming you've actually had that evaluation phase for the specific training program, but did it transfer? There's a whole field of study called Transfer of Training, and a lot of effort is put into trying to understand whether the training makes a difference in on-the-job performance. Because you can make very good trainees, and they may not make good operators. And I think I shared this anecdote with you before, Traci.
I had a friend, he was a flight instructor, and he said, "I turn out good students." And I said, "Well, it's probably more important that you turn out good pilots."
It's the same thing. Are we doing what we set out to do? That is a critical piece that is generally missing across the board. I know every company would say, "Oh, well, we test our students' training modules."
But they very rarely use objective testing. It tends to be subjective. You get an expert to come in, and the student will explain some aspect of what they were supposed to have learned, and the expert says, "Yes, you've learned that." But there's rarely an objective test to say, "Yes, you've actually accomplished that." And there's even less of an attempt to assess how my training program is improving the operator performance.
And it's an interesting dichotomy when you ask the training individuals what their success rate is, and you ask operations management what their success rate is. You get very, very different answers.
Traci: It's a big circle of evaluating the evaluation, right?
Dave: Right.
Align Evaluation Methods with Learning Objectives
Traci: And how do you align evaluation methods with the learning objectives? Is there a better way to do so?
Dave: Well, that's where the learning objectives come back in. Remember, one of the key things was that they should be objective and not subjective. If you truly make the learning objective objective, for example, "The student will be able to identify the four major components in distillation."
Well, you've pretty much said what your evaluation is going to be, right? That's my objective. So did they achieve that objective? So I give them a test, and I either have them fill in the blanks or answer multiple-choice questions, whatever, but they should go directly from one to the other.
Learning objectives should, almost in and of themselves, define the evaluation that you're trying to do. And if they don't, then there's probably a problem with your own learning objective, that it's not stated in a way that I can easily measure it. And that becomes, obviously, a key part of it. And we'll use the control adage that you hear all the time: "if I can't measure it, I can't control it."
That applies to human performance in general and training in particular. If you can't measure the training, then you can't really control it. You're just doing stuff, and things happen. And you're not controlling it, you're not driving it to its most efficient point. That's actually what started the systems approach to training that the military undertook in the sixties and seventies, which eventually became instructional system design.
We need to be able to control this in the specific sense: we want to make sure the students are, indeed, learning what we have tried to teach them. But then also in that larger sense: are we seeing improvements? Are we getting better pilots? Are we getting better soldiers? And if we aren't, then let's make some adjustments and see. Does that make it better or does that make it worse?
You should have that connection all the way through the process. That's why it was called the systems approach to training. This is a system, and the feedback, the evaluation phase, is a critical part of it to ensure that what you're doing is both working and efficient.
Evaluating Learning Transfer
Traci: Now you mentioned, are we getting better soldiers, or better operators, or better pilots? How do you evaluate that learning transfer and the real-world application beyond the immediate training outcomes?
Dave: So, it's actually becoming easier to do with the use of high-fidelity simulation.
So, I can have various programs to teach Console Operators how to make adjustments to this column, how to respond, all of those would be driven by those learning objectives that we talked about. But then I can put the student trainee into the simulator and give them a very novel problem to deal with, not related to the specific learning objectives, and measure their performance.
So they got there, they did well on the learning objectives and the testing on the simulator for that course. Now we're going to put them in there, and the easiest measure is that you initiate some upset in the process and ask: how long does it take you to stabilize the process?
Because that's really one of the key areas, "Okay, I lost a compressor, or I lost a pump of some kind, and so now I have an upset, and I'm able to stabilize it within 10 minutes, 15 minutes, 20 minutes." Very objective measure. And then you can record that over time. I can do that before they go through the training. I can do it after the training. And so there I'm getting a far more real-world evaluation of what is occurring.
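To make the time-to-stabilize measure Dave describes concrete, here is a minimal sketch of how it might be computed from simulator or historian data. The variable, setpoint, tolerance band, and hold time below are illustrative assumptions, not anything from the episode.

```python
"""Minimal sketch of a "time to stabilize" metric: after a simulated upset,
measure how long it takes to bring a key process variable back inside its
normal band and keep it there. All values here are illustrative."""

from datetime import datetime, timedelta


def time_to_stabilize(samples, upset_start, setpoint, tolerance, hold=timedelta(minutes=5)):
    """Return the elapsed time from upset_start until the variable first stays
    within setpoint +/- tolerance for a continuous `hold` period, or None if it
    never stabilizes in the data provided.

    samples: list of (datetime, float) pairs, sorted by time.
    """
    window_start = None
    for ts, value in samples:
        if ts < upset_start:
            continue
        if abs(value - setpoint) <= tolerance:
            if window_start is None:
                window_start = ts          # variable just re-entered the band
            if ts - window_start >= hold:  # stayed in band long enough
                return window_start - upset_start
        else:
            window_start = None            # excursion: reset the clock
    return None


if __name__ == "__main__":
    # Illustrative column-pressure samples around a simulated compressor trip.
    t0 = datetime(2024, 1, 1, 8, 0)
    samples = [(t0 + timedelta(minutes=m), p) for m, p in enumerate(
        [10.0, 10.1, 14.5, 13.2, 12.0, 11.0, 10.4, 10.2, 10.1, 10.0, 10.0, 10.1, 10.0, 10.1])]
    result = time_to_stabilize(samples, upset_start=t0 + timedelta(minutes=2),
                               setpoint=10.0, tolerance=0.5)
    print(result)  # elapsed time until pressure held steady, here 0:04:00
```

Recording a number like this for each trainee before and after a course gives the before-and-after comparison Dave mentions.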
The other option is that with the distributed control systems now, we're collecting a wealth of data on the process, both in steady state and upset conditions. The problem here is that you're trying not to focus on the individual but on the performance as a whole. So you can go in and look at, "Well, what did they do during this particular upset?"
Recently, I saw there was a plant, and they have a procedure for steam shedding. So, if they lose a boiler, they have to cut back on steam consumption, and everybody gets upset. And so they've done that a number of times. So it's not a one-off event, but you can look at, well, what was the performance over those different events? Was it better in any one than any other? And then you can dig into the training and say, "Well, why was this one better? Why were they able to stabilize sooner?"
We've talked in the past about how some of it may be things not directly related to the control system. It may be things like communication with the Field Operator, or troubleshooting skills that one team possesses, and you can say, "Oh, they're probably better at that. Team A, which has gone through this particular communication and troubleshooting training, did better than Team B, which hasn't gone through that."
So now we have some idea of, "Well, what was the output or the change that was affected by that particular training?" As well as just manipulating the control system itself and taking the right actions over time.
Now, the real problem in that though, of course, is that, as I was trying to emphasize, this isn't about the individual. Too often, one of the pitfalls in management is that they say, "Ah, the problem was the individual. I just need to get rid of that person." So, needless to say, any union environment would be very hesitant to allow these evaluations to occur, and you have to be very careful that it doesn't become this club that you're going to beat the operators up with.
And that's an old school way of management, but it's still out there. "We're going to punish the guilty. We're not going to try to learn from this." It's counter to all the efforts in HOP, Human and Organizational Performance, but it can still happen.
Traci: It definitely does still happen, and that's an interesting point you bring up. It's not the individual; the organization needs to look at itself, figure out how to do better in training, and make sure operators have the tools.
You can't train for every upset, but you can train them to figure it out on their own, and I think that's important.
Dave: Exactly. And that's where one of the criticisms of human factors is like, well, you never hold the individual accountable. It's always, "They didn't get enough training, or displays weren't right, or whatever." And there are times when the individuals do need to be held accountable, but there are also a lot of times where, "You're criticizing my performance, but you never trained me how to do it." And actually, that would be advantageous for the operators. "No, you tested me. I passed these tests. So it wasn't my lack of training, it was something else in the system that was going on."
And that's where the approach needs to be: to say, "This didn't go as smoothly as we would like. What do we need to do better? Maybe it's training, maybe it's the interface, maybe it's the organization itself that needs to stand in front of the mirror and ask how we improve the team performance."
This isn't about an individual. Yes, people make random errors, and it happens, but how do you look at it? This is not a random error; this is what we would call a design-induced or systemic error. Let's fix it. And then not only have we fixed it for that individual, we've fixed it for everybody.
How Critical is Evaluation?
Traci: How critical is evaluation to this whole instructional system design process?
Dave: Well, it is absolutely critical. And as I said, probably no one is really doing it, or they're not doing it very well. Because the whole ISD approach is one about measuring and control. So it gets back to basic systems theory.
And right now, training is this open-loop system. We do things and we hope that they are successful. We've talked before about an industry survey we did: one company spent two weeks training their Console Operators, another company spent 26 weeks training their Console Operators, and they both thought they had good Console Operators. So either one is delusional or the other is wasting a lot of time. And if you don't have this evaluation phase, you have no idea if you're wasting your trainers' time or your students' time, or whether you've even achieved what you set out to achieve.
I have certainly sat with Console Operators when something unusual happens, and I can see that horrified look in their face of, "Uh-oh, I don't know what to do, and this person is watching me." And that's not the sort of situation you want to get in, because those are the sort of things that lead to incidents from minor to major that you scratch your head with and say, "How can that have happened?" And "Oh, well, we trained them."
Well, you spent a lot of time on training, but that doesn't mean you actually trained them. How do you know that your operators actually possess the skill and knowledge that you want them to possess?
And if you say, "Well, it's because we put them through a lot of training," well, no, they could sleep through that if they wanted to. The question is, "Do I have some assessment process in place for either the training itself..."
Again, it has to be objective. It can't just be some subject matter experts saying, "Oh, yeah. I think they know it." Or for their actual performance on the job, saying, "Your metrics for responding to upsets or for making rate changes are all good," or "Your time to get through a crossover period is good."
So when a plant is changing grades of a material and they go from grade A to grade B, there's a thing called a crossover period where they're not really making either one; it's just off-spec product. You want to minimize that time, so that's an easy thing to measure: how fast do they get through that crossover period? And "Hey, you're doing great." Or, "You're not quite where the other guys are. Let's look at what you're doing." And maybe we need a little supplemental training to bring that individual up to that level of performance.
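As a rough illustration of the crossover measurement Dave mentions, the sketch below computes how long product is off-spec between the last on-spec grade A sample and the first on-spec grade B sample. The quality property and spec limits are made-up examples, not plant data.

```python
"""Minimal sketch of measuring a grade-change crossover period: the time during
which product meets neither the outgoing nor the incoming grade spec."""

from datetime import datetime, timedelta


def crossover_duration(samples, spec_a, spec_b):
    """Elapsed time from the last on-spec sample for grade A to the first
    on-spec sample for grade B, or None if the transition is incomplete.

    samples: list of (datetime, float) quality readings, sorted by time.
    spec_a, spec_b: (low, high) spec limits for the outgoing and incoming grade.
    """
    last_on_spec_a = None
    for ts, value in samples:
        if spec_a[0] <= value <= spec_a[1]:
            last_on_spec_a = ts                # still making grade A
        elif spec_b[0] <= value <= spec_b[1] and last_on_spec_a is not None:
            return ts - last_on_spec_a         # first on-spec grade B sample
    return None


if __name__ == "__main__":
    # Illustrative melt-index readings every 30 minutes during a grade change.
    t0 = datetime(2024, 1, 1, 6, 0)
    readings = [(t0 + timedelta(minutes=30 * i), v) for i, v in enumerate(
        [2.0, 2.1, 2.3, 3.0, 3.8, 4.6, 5.2, 5.4])]
    print(crossover_duration(readings, spec_a=(1.8, 2.2), spec_b=(5.0, 5.6)))
    # here 2:30:00 of off-spec production for this transition
```

Tracking that duration per transition, and per crew, is the kind of objective measure that can flag where supplemental training might help.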
But without the assessment, plants are just doing things and hoping that their operators are gaining that skill and knowledge. I don't think most companies want to bank their future on hope; they want to be able to say, "We know, definitively, that our operators have the skills and knowledge necessary to successfully perform the job."
Traci: Well, we know that hope is not a sound strategy. And, Dave, you always help us at least understand how to achieve what we are setting out to achieve, so I appreciate that as we wrap up our six-part series on flaws in training.
Want to stay on top of operator training and performance? Subscribe to this free podcast via your favorite podcast platform to learn best practices and keen insight. You can also visit us at chemicalprocessing.com for more tools and resources aimed at helping you achieve success.
On behalf of Dave, I'm Traci, and this is Chemical Processing's Distilled Podcast: Operator Training Edition.
Thanks again for listening, and thanks again, Dave, for giving us all this knowledge.
Dave: Thanks for having me.