Managers at process plants continually ask, "How many alarms can an operator handle?" Because dealing with alarms is a large part of a console operator's job, this is an important question. A research study of 37 operator consoles conducted by the Abnormal Situation Management consortium found a monthly average alarm rate of 2.3 per ten-minute period for normal operations, with a median of 1.77. A study by the U.K. government's Health & Safety Executive reported five alarms per ten-minute period; for 95% of the consoles in that study, the average monthly peak alarm rate was 31–50 alarms per ten minutes. Guidelines published in 1999 by the Engineering Equipment and Materials Users' Association (EEMUA) state that less than one alarm per ten minutes should be the goal, two per ten minutes is manageable, and five in ten minutes likely will pose excessive demands on operators. The guidelines describe fewer than ten alarms in the first ten minutes following an upset as manageable and more than 20 as "hard to cope with." The International Society of Automation (ISA) adopted largely similar values in the ISA-18.2 standard: one to two alarms per ten minutes in steady state and fewer than ten in any ten-minute period.
While plants generally accept the EEMUA/ISA values, are they correct? This is the question the Center for Operator Performance (COP), a Dayton, Ohio, consortium of operating companies and automation vendors (www.operatorperformance.org), asked itself. To answer it, the center commissioned Louisiana State University (LSU) to conduct a series of studies over the past two years.
The first study involved LSU engineering and construction management students and used five alarm rates on a pipeline simulator. The alarms were presented either in order of actuation with priority color-coded or grouped by priority. At one, two, five and ten alarms in ten minutes, there was no difference in the time taken to handle an alarm. At 20 alarms in ten minutes, response time increased by a statistically significant amount, and the display with alarms grouped by priority yielded significantly better response times. So, somewhere between ten and 20 alarms in ten minutes, response time degraded, though less so when alarms were grouped by priority. At 20 alarms in ten minutes, the students achieved better response times for higher priority alarms by sacrificing time on low priority ones.
The limitations of this study are obvious. It used students rather than operators. The alarm rates were sustained for only ten minutes at a time. The alarms were distributed evenly across priorities, so there were as many high priority alarms as medium or low priority ones. However, the implications were significant enough to prompt a second phase addressing some of these limitations.
The second phase exposed actual refinery operators and pipeline controllers to sustained alarm rates for 60 minutes. Because the first study saw no effect below ten alarms in ten minutes, higher rates were used: 15, 20, 25 and 30 alarms per ten minutes. The alarms were distributed as EEMUA and ISA suggest: 5% high, 15% medium and 80% low priority.
The results contained a number of surprises. The professionals averaged 19 seconds per alarm at the lowest rate and almost 26 seconds per alarm at the highest. At 30 alarms per ten minutes, a queue of alarms began to develop because the operators had to spend more time assessing new alarms. They responded in about the same amount of time to high and medium priority alarms by spending less time on lower priority ones.
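The queue buildup follows from simple arithmetic: 30 alarms per ten minutes means an alarm arrives, on average, every 20 seconds, while handling one took almost 26 seconds. The sketch below illustrates this with a deliberately simplified model (evenly spaced arrivals and a fixed handling time are assumptions; only the rates and the 26-second average come from the study):

```python
# Illustrative back-of-the-envelope model of alarm queue growth.
# Assumes evenly spaced arrivals and a constant handling time,
# which is a simplification of real operator behavior.
def queue_length(arrival_interval_s, handling_s, duration_s):
    """Alarms still waiting after duration_s seconds of steady operation."""
    arrivals = duration_s // arrival_interval_s  # alarms raised
    handled = duration_s // handling_s           # alarms an operator can clear
    return max(0, arrivals - handled)

# 30 alarms per 10 min = one every 20 s; handling averaged ~26 s.
# Over the study's one-hour exposure, a backlog accumulates:
print(queue_length(20, 26, 3600))  # → 42 alarms queued
```

At the 19-second handling time seen at the lowest rate, the same arithmetic yields no backlog at all, which matches the finding that the queue only emerged at the highest rate.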