Executing Alarm Management

To succeed in advancing your strategy while keeping the peace, you must meet many challenges, from motivating personnel to juggling the integration of changes. The solution is better plant-wide communication and understanding of the alarm philosophy.

By Roy Tanner and Rob Turner, ABB, and Jeff Gould, Matrikon


  1. Fault — usually it’s the instrumentation;
  2. Process change; or
  3. Minor project.

After pondering this subject in your not-so-spare time, you develop a sound hypothesis for the return of the nuisance alarms — plant changes.

In the time it took to complete some of the alarm reduction projects for your facility, other changes were taking place. There were projects for replacing older plant equipment as well as an expansion or two.

Of course, most of the changes that are giving you nuisance alarms aren’t projects at all; they’re a result of the inevitable drive to squeeze more efficiency and profitability from existing plant equipment. “Let's push that setpoint just a little higher” moves a noisy reading closer to its alarm threshold, or a change to controller tuning destabilizes something else and causes an alarm downstream.
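To see why a small setpoint move can turn into a nuisance alarm, it helps to look at how much measurement noise separates the new operating point from the alarm limit. Below is a minimal sketch of that check in Python; the three-sigma headroom rule and every tag value in it are illustrative assumptions, not figures from this article.

    # Sketch: flag tags whose operating point has crept too close to an alarm limit.
    # The 3-sigma margin rule and all tag data are illustrative assumptions.
    SIGMA_MARGIN = 3.0  # require at least 3 noise standard deviations of headroom

    tags = [
        # (tag name, new setpoint, high-alarm limit, noise std dev, all in eng. units)
        ("FIC-101.PV", 98.5, 100.0, 0.8),
        ("TIC-205.PV", 310.0, 325.0, 2.0),
    ]

    for name, setpoint, hi_limit, noise_sigma in tags:
        margin = (hi_limit - setpoint) / noise_sigma
        if margin < SIGMA_MARGIN:
            print(f"{name}: only {margin:.1f} sigma of headroom, expect a chattering alarm")
        else:
            print(f"{name}: {margin:.1f} sigma of headroom, OK")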

The problem is that the people executing the DCS (distributed control system) configuration work or moving those process variables were unaware of the ongoing alarm-management efforts. While reviewing the modifications, you notice that all the new changes have been implemented at the highest alarm priority.

When you call, the design engineer is emphatic that the project is of the highest importance and deserves the operator's immediate attention. The only solution to this challenge is plant-wide adoption of an alarm philosophy. This ensures that new projects and modifications follow the same guidelines and that the work done to date continues long after your promotion to another job.
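A periodic check of the configured priority distribution against whatever split the alarm philosophy specifies is one way to catch this kind of drift before the operator display fills with "highest priority" alarms. The sketch below assumes an 80/15/5 low/medium/high target, a split often cited in industry guidance rather than a figure from this article, and the sample data is invented for illustration.

    from collections import Counter

    # Sketch: compare configured alarm priorities against the philosophy's target split.
    # The 80/15/5 target and the sample priorities are illustrative assumptions.
    TARGET = {"low": 0.80, "medium": 0.15, "high": 0.05}
    TOLERANCE = 0.05  # allow a 5-percentage-point deviation before flagging

    configured = ["high", "high", "medium", "low", "high", "low", "high", "low"]
    total = len(configured)
    counts = Counter(configured)

    for priority, target_share in TARGET.items():
        actual = counts.get(priority, 0) / total
        if abs(actual - target_share) > TOLERANCE:
            print(f"{priority}: {actual:.0%} configured vs. {target_share:.0%} target, review against the philosophy")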

Remember all those instrument faults you fixed when you did the original alarm reduction project? Guess what — they’re back. Moreover, some of them are new. You remembered to empower maintenance to diagnose and fix faults that cause nuisance alarms, as well as create a procedure for shelving the alarms while they’re undergoing repair, didn't you?

Then you begin to wonder whether change management procedures are a contributing factor. In spot-checking some of the problem alarms you thought you had corrected earlier, you find that several alarm settings no longer match the alarm rationalization documentation. You try to find out why, but the changes most likely happened soon after you set them the first time. Putting the settings back in place will probably take time for approvals. Perhaps excluding those pesky process engineers while you trained operations was not such a good idea.
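A small audit script that diffs the live DCS alarm settings against the rationalization documentation turns this kind of spot check into a routine job instead of an unpleasant surprise. In the sketch below, both dictionaries stand in for exports from the master alarm database and the DCS; the tag names and values are assumptions for illustration.

    # Sketch: diff current DCS alarm limits against the rationalized (documented) values.
    # Both dictionaries stand in for system exports; all tags and values are assumptions.
    rationalized = {   # from the alarm rationalization documentation
        "LIC-310.HI": 85.0,
        "PIC-412.HI": 12.5,
        "TIC-205.HI": 325.0,
    }

    current = {        # from a DCS configuration export
        "LIC-310.HI": 85.0,
        "PIC-412.HI": 14.0,   # drifted after rationalization
        "TIC-205.HI": 325.0,
    }

    for tag, documented in rationalized.items():
        configured = current.get(tag)
        if configured is None:
            print(f"{tag}: missing from the DCS export")
        elif configured != documented:
            print(f"{tag}: configured {configured}, documented {documented}; raise an MOC")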

Best way to eat an elephant

The only thing you seem to have learned from this exercise is that you’re the only one who cares about alarm rationalization.

Operations and maintenance personnel already have enough to do. The operations supervisors’ main goal is making production numbers, not reducing nuisance alarms, and no one wants to be the scapegoat for not making quota.

Everyone sees the problem with alarms but nobody knows how to approach it. The reality is everyone is doing some alarm-management work today, but they’re not doing it as efficiently as possible. Shift supervisors review inhibited alarms at the beginning of each shift. Operators scream about nuisance alarms but don’t have the tools, or the time, to identify which one is causing the most grief. DCS technicians respond in a patchwork manner to the various requests to make the necessary changes. And, maintenance desperately tries to squeeze in the requested instrument adjustments or equipment maintenance to address some of the issues. The problem is that for the most part, this is all gut feel.
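The gut-feel step is also the easiest one to replace with data. A few lines run against the alarm event journal will rank the bad actors so the weekly review starts from facts; the file name alarm_journal.csv and its "tag" column in this sketch are assumptions, not a real export format.

    import csv
    from collections import Counter

    # Sketch: rank "bad actor" alarms by activation count from an alarm event log.
    # The file name and its "tag" column are illustrative assumptions.
    def top_bad_actors(path, n=10):
        counts = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                counts[row["tag"]] += 1
        return counts.most_common(n)

    for tag, count in top_bad_actors("alarm_journal.csv"):
        print(f"{tag}: {count} activations this week")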

What’s necessary is a program that effectively deals with nuisance alarms — with a minimum of bureaucracy.

The trick is as simple as ingraining very basic alarm-review procedures into the existing work culture. Soon operations, maintenance, and even process engineering will realize how much easier life can be with an alarm management strategy. But first, you must put the facts in front of them (Figure 4): poor alarm management is everybody’s problem. By identifying the top one or two alarm problems every week, and by simplifying the internal Management of Change (MOC) process so regularly required changes go through more seamlessly, your alarm system will be humming for years to come. The best way to eat an elephant is one bite at a time, right? Especially if you invite some friends over for dinner.

With things running smoothly, your operations team should be recognizing problems early. Because they have been empowered, they will selectively refine alarms to identify the root cause of an abnormal condition. The maintenance team will already be dealing with those problems. The engineers will be designing with alarm management in mind, and when they don’t, they will get a prompt and focused push from the operations team. The operations supervisor will be periodically reviewing the status of alarms and the performance of the management system, making sure it’s working smoothly. Maybe that leaves you enough time to look at the next level of operability; perhaps the EEMUA targets don’t look that far away after all. Of course, the team will need new tools to make it to the next level.
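If the team wants a concrete number to aim at, the usual starting point is the EEMUA 191 benchmark of roughly one alarm per operator per 10 minutes during steady operation. The sketch below computes that average from a list of alarm timestamps; the timestamps are invented, and the benchmark figure is the commonly quoted one, so check it against your own copy of EEMUA 191.

    from datetime import datetime, timedelta

    # Sketch: average alarm rate per 10-minute window, compared with the commonly
    # quoted EEMUA 191 steady-state target of about 1 alarm per 10 minutes.
    # The timestamps are illustrative assumptions.
    TARGET_PER_10_MIN = 1.0

    alarm_times = [
        datetime(2008, 5, 1, 8, 1), datetime(2008, 5, 1, 8, 4),
        datetime(2008, 5, 1, 8, 7), datetime(2008, 5, 1, 8, 22),
        datetime(2008, 5, 1, 9, 45),
    ]

    windows = max((max(alarm_times) - min(alarm_times)) / timedelta(minutes=10), 1)
    rate = len(alarm_times) / windows
    verdict = "within" if rate <= TARGET_PER_10_MIN else "above"
    print(f"Average rate: {rate:.2f} alarms per 10 minutes ({verdict} the EEMUA 191 target)")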

Software platform standards

The solution is to get the entire facility involved with alarm management without impeding production and while minimizing the effort required. One way to do this is to streamline work processes through seamless integration of the various systems needed to carry on your alarm-management strategy. In the past, integrating a DCS with third-party software was either a) impossible or b) expensive and hard to maintain. Much as you realize your car is a lemon once you’re on a first-name basis with your auto mechanic, you know the solution wasn’t a true integration once you total up how much money went to maintenance at “Joe's Software House,” the firm that implemented it. This situation wasn’t always the integrator’s fault: the project followed an end-user specification, faced limited infrastructure, and involved systems that were never designed to integrate with yours. In any event, there were dependencies on the integrators for modifications, routine maintenance, and upgrades.
