Learn from a Safety Guru

Take advantage of the insights in the books of a renowned expert

By Dirk Willard, Contributing Editor

The first anniversary of the passing of Trevor Kletz, who died on October 31, 2013, provides a good opportunity to re-emphasize how much we still can learn from him. Frequently called the father of process safety (see: “Trevor Kletz Bequeaths Better Process Safety”), he wrote many books on the topic; some sit on a shelf in my study. Kletz directed some of his attention to the managerial malaise that explains why past accidents recur. One of my favorite quotes seems to strike at the heart of the problem: “If you think safety is expensive, try an accident.” So, let’s explore some of his comments on process safety. Pardon my paraphrasing.

Let’s begin with his thoughts on design and construction. Changing procedures to improve safety is far less effective than changing the design itself, which is why design is so important. Avoid dissimilar parts that appear interchangeable, e.g., piping with different ANSI ratings for a compressor inlet and discharge. Make construction foolproof: use the same rating throughout, or clearly distinguish the parts so one can’t be substituted for the other. On p. 307 of the 4th edition of “What Went Wrong?,” Kletz suggests a useful mnemonic for selecting materials of construction: SHAMROCK, where S stands for safety; H for history (go with what you know); A for availability (spare parts); M for maintenance/maintainability (check cost savings carefully); R for reparability (training, experience and time to repair); O for oxidizing/reducing nature of the fluid; C for cost (lifetime costs); and K for kinetics of the corrosion mechanism, i.e., understanding of corrosion.

Make a job simple or the people doing it will find shortcuts. Keep isolation valves as close as possible to the equipment being blocked off. Make maintenance easy by retaining line of sight, e.g., by providing local instrument readouts. Whenever possible, use instruments that measure parameters directly; inferred values may not be reliable, an issue that plagues pH meters and some flow meters. A magnetic flow meter is a perfect example: it measures velocity, so if the bore partially plugs, the velocity rises and the inferred flow rate reads too high. The same argument applies to complex controls. Instead of closing valve A and then valve B, why not close them together?

With downsizing and consolidation, too many companies rely on contractors, often hired on low-bid lump-sum contracts, to provide the engineering knowledge of the people they fired. Kletz notes this problem has been going on for over a hundred years; it wouldn’t surprise me if the pharaohs chose the low bidder to build the pyramids!
Poor construction management has ruined many good designs, but stellar construction practices have improved lots of mediocre ones.
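The magnetic flow meter point is simple arithmetic worth seeing once. A mag meter senses fluid velocity and multiplies by the pipe's nominal cross-section; if deposits shrink the open bore, the fluid speeds up and the inferred flow overreads. The sketch below is illustrative only (the function, names and numbers are my own assumptions, not from the article):

```python
# Illustrative sketch (not from the article): why a magnetic flow meter's
# inferred flow reads high when the pipe partially plugs. A mag meter
# measures velocity but multiplies by the NOMINAL bore area.
import math

def inferred_flow(actual_flow_m3s: float, nominal_diameter_m: float,
                  open_fraction: float) -> float:
    """Flow the meter would report given the true flow and the fraction
    of the bore still open (1.0 = clean pipe)."""
    nominal_area = math.pi * (nominal_diameter_m / 2) ** 2
    open_area = nominal_area * open_fraction      # area left after plugging
    velocity = actual_flow_m3s / open_area        # fluid speeds up
    return velocity * nominal_area                # meter assumes a full bore

actual = 0.010                                    # true flow, m^3/s
print(inferred_flow(actual, 0.10, 1.0))           # clean pipe: reads true
print(inferred_flow(actual, 0.10, 0.8))           # 20% plugged: reads ~25% high
```

With 20% of the bore blocked, the reading is 1/0.8 = 1.25 times the true flow, which is exactly the kind of quietly wrong inferred value Kletz warns about.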

Now, let’s consider day-to-day operations. Never start up a system after an accident or serious near-miss, when equipment could be damaged, without a thorough explanation of the cause of the incident and an inspection. Many companies inspect their process pipe but ignore utilities, bypassed process pipe and, especially, hoses and expansion joints. During commissioning, test any process steps or equipment that must be operated quickly, such as emergency equipment, and retest periodically after that; ideally, walk through the unit with operators before finalizing the design and again after purchasing. Have a backup plan in case such equipment is out of commission, and test that plan, too, at least by a walk-down. Inspection errors and failure to periodically test emergency equipment and similar procedures have been reported in historical events going back hundreds of years and, unfortunately, remain common.

Communication problems afflict both design and operations. Be redundant until it hurts. On p. 105 of the 4th edition of “What Went Wrong?,” Kletz relates how 4,600 calves died because a Dutch company ordered a chemical from a U.K. supplier by number alone; in the U.K., that number corresponded to a poison. The Dutch firm should have ordered the chemical by name, number and description. Redundancy also works in communication: e-mail and call, then keep calling. Another repeated root cause he identifies is role confusion. This contributed to the explosions at Esso’s Longford, Australia, gas plant in 1998 and BP’s Texas City, Texas, refinery in 2005, as well as other accidents dating back decades, perhaps centuries. In addition, Kletz warns of the dangers of siloing information. This is particularly true when the engineer who programs the distributed control system isn’t the one who uses it; he suggests the same person should have both roles. Lastly, he cautions that a good record of minor lost-time accidents doesn’t indicate you’re safe from a major catastrophic process accident. How prophetic. Perhaps by reading some of his books you can help us avoid repeating history.


DIRK WILLARD is a Chemical Processing contributing editor. He recently won recognition for his Field Notes column from the ASBPE. Chemical Processing is proud to have him on board. You can e-mail him at dwillard@putman.net.
