Operator Training: One Grain of Rice Away from Calamity

Complexity theory explains why industrial disasters seem inevitable in hindsight and invisible in the moment — and what operators can do about it.
April 8, 2026
5 min read

Key Highlights

  • Processing plants and people are complex systems where small changes can trigger major, unpredictable outcomes.
  • Complex systems undergo sudden phase shifts and rarely fail the same way twice.
  • Adding safety features or new technology can unintentionally increase complexity and create new risks.

I recently asked a roomful of experienced industry professionals what they knew about complexity theory. The answers were surprisingly thin. That is a problem, because complexity theory isn’t an academic abstraction — it is the hidden architecture of every processing plant we operate, and of every human being working within one. Both people and process plants are complex systems that interact in ways that amplify each other's unpredictability. Understanding that may be the difference between a bad day and a bad year.

Here is a way to think about it. Imagine standing in front of a table with grains of rice falling from a fixed dispenser above. A mound slowly builds. You watch it grow; you have a general sense of what will eventually happen, and then — one grain of rice triggers a collapse. The mound before the collapse looks little like the one after. Prior to the critical grain of rice, could you have predicted what was going to happen on the next grain? Probably not. Every new grain changes the force vectors for all the grains. The system looks stable right up until it isn't. This is what complexity theorists call an emergent phenomenon — a small input producing a disproportionately large change in the system.
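The rice pile is more than a metaphor; it has a standard computational cousin, the sandpile model from the self-organized-criticality literature. The sketch below is a minimal Python version of that idea, with a grid size and toppling threshold chosen purely for illustration (nothing here models a real pile or plant). Every input is the same single grain, yet the responses range from nothing at all to cascades spanning the grid.

```python
import random

N = 20          # grid side length (illustrative choice)
THRESHOLD = 4   # a cell topples once it holds this many grains

grid = [[0] * N for _ in range(N)]

def drop_grain(row, col):
    """Add one grain at (row, col), then topple until the pile is stable.
    Returns the avalanche size: how many topplings that single grain caused."""
    grid[row][col] += 1
    avalanche = 0
    unstable = [(row, col)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < THRESHOLD:
            continue                          # already relaxed by an earlier topple
        grid[i][j] -= THRESHOLD               # the cell sheds its grains...
        avalanche += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < N and 0 <= nj < N:   # grains pushed off the edge are lost
                grid[ni][nj] += 1
                unstable.append((ni, nj))     # ...and may destabilize its neighbors
    return avalanche

random.seed(1)
sizes = [drop_grain(random.randrange(N), random.randrange(N)) for _ in range(50_000)]
print("grains that caused no toppling at all:", sizes.count(0))
print("largest avalanche from a single grain:", max(sizes))
```

Nothing about the pile's appearance before a given grain tells you which outcome you are about to get, which is exactly the position an operator is in when a plant is hovering near its critical point.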

Processing plants are complex systems, and they share this behavior. Complex systems have two particularly important attributes. First, they undergo phase shifts — the way a system performs one minute can look nothing like the way it performs the next. Second, they resist prediction. A system can hover in a critical state, the equivalent of the moment before that last grain of rice, for a long time before something gives way. And when complex systems do fail, they rarely fail the same way twice.

I saw this firsthand while working at Three Mile Island after the 1979 accident. Enormous effort and resources went into preventing the same accident from happening again — including relocating the indicator for the reactor coolant drain tank (RCDT) from a back panel to front-and-center on the control board. But the exact sequence of events from March 1979 was never going to repeat itself anyway; everyone now knew to monitor the RCDT level. Complex systems don't rerun the same script.

More Safety, More Complexity

What makes a system complex in the first place? Complexity arises from interactions between components. The more components, the more interactions — and some of those interactions will be nonlinear, meaning the behavior of the whole cannot be predicted simply by understanding the parts. Individual components may be straightforward. Combined and interacting, they produce something far more difficult to anticipate. Safety and regulatory systems add yet another layer, introducing new interactions that the original designers never modeled. The utility I worked for when I was first out of college had three nuclear reactors. The oldest one, from 1969, was a beast. It would handle whatever was thrown at it. The newest reactor, from 1978, was Unit 2 at Three Mile Island, with all the latest in safety technology and mandated features. More safety features meant more interactions, and more interactions meant more ways for the unexpected to emerge.
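One rough way to see how quickly interactions pile up is simple counting: n components have n(n-1)/2 possible pairwise interactions, so doubling the component count roughly quadruples the pairs a designer has to reason about. The snippet below is only that back-of-the-envelope count, not a model of any particular unit:

```python
# Potential pairwise interactions among n components: n * (n - 1) / 2.
# Doubling the component count roughly quadruples the pairs to consider.
for n in (10, 20, 40, 80):
    pairs = n * (n - 1) // 2
    print(f"{n:3d} components -> {pairs:5d} possible pairwise interactions")
```

And pairs are only the floor; three- and four-way combinations grow faster still, and it is in those higher-order interactions that the unexpected tends to emerge.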

The same principle applies to new technology introduced into existing systems. A new component may be perfectly stable in isolation, but what it does to the broader system is often unknown. When radar was introduced into harbor navigation, the expectation was that fewer collisions would occur. Instead, ships began traveling faster and closer together because the technology inspired confidence. Collisions did not decrease as much as anticipated. The unintended consequence is the child of complexity.

An audit of Texas City or Three Mile Island in the moments before their major incidents would have flagged areas for improvement. But like the instant before the critical grain of rice, there would have been no clear signal that catastrophe was imminent. This is not a counsel of despair — but it does demand a shift in how we think.

Managing Complexity in Processing Plants

So, what can we do? The most important first step is simply to recognize what we are dealing with: a network of interactions whose behavior we cannot fully predict, where small changes in one area can produce large changes elsewhere. From that honest starting point, a few practical principles follow. Processing plants need slack — resilience built into their design so that unanticipated interactions don't reverberate through a tightly coupled system with catastrophic results. New analytical tools, such as functional resonance analysis (which maps how performance variability can combine unexpectedly) and systems-theoretic approaches (which model the relationships between components rather than just the components themselves), are becoming available to help identify when a system may be approaching its critical point.
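To make the idea of slack concrete, here is a toy illustration of my own, not a method drawn from any standard or analysis technique: a serial line of processing stages in which each stage's buffer is the slack, and a brief upstream outage either dies out in the buffers or propagates the full length of the line. The stage count, time steps, and outage window are arbitrary.

```python
def run_line(buffer_capacity, stages=6, steps=30, outage=(5, 8)):
    """Toy serial line. Each step, every stage pulls one unit from the buffer
    in front of it; stage 0 is fed from outside except during the outage.
    Returns how many stages ever starved (had nothing to process)."""
    buffers = [buffer_capacity] * stages       # slack sitting ahead of each stage
    starved = set()
    for t in range(steps):
        feed = 0 if outage[0] <= t < outage[1] else 1   # brief upstream hiccup
        buffers[0] += feed
        for s in range(stages):
            if buffers[s] > 0:
                buffers[s] -= 1                # stage processes one unit
                if s + 1 < stages:
                    buffers[s + 1] += 1        # and passes it downstream
            else:
                starved.add(s)                 # the disturbance reached this stage
    return len(starved)

for slack in (0, 1, 3):
    print(f"buffer of {slack} per stage -> {run_line(slack)} of 6 stages disrupted")
```

With no buffers, the hiccup reaches every stage at once; a little slack at each stage confines it; enough slack and the rest of the line never notices. Analytical tools like the ones named above are, in effect, ways of asking that same question about real plants, where the couplings are far less obvious than in a toy chain.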

Complexity will not be eliminated. But it can be respected, designed around, and — with the right tools and the right awareness — managed. You want to train the bear, not poke it. If you want to go deeper on this subject, Sidney Dekker's “Drift into Failure” is an excellent starting point. Given how little most of us know about this topic, it is an overdue read.

About the Author

David Strobhar

David Strobhar founded Beville Operator Performance Specialists in 1984. The company conducts human factors engineering analyses of plant modernization, operator workload, and alarm/display systems for BP, Phillips, Chevron, Shell and others. Strobhar was one of the founders of the Center for Operator Performance, a collaboration of operating companies, DCS suppliers and academia that researches human factors issues in process control. He is the author of "Human Factors in Process Plant Operations" (Momentum Press) and was the rationalization clause co-editor for ISA SP18.2, "Alarm Management for the Process Industries." Strobhar has a degree in human factors engineering, is a registered professional engineer in the state of Ohio and a fellow in the International Society of Automation.
