The Center for Chemical Process Safety's "Guidelines for Safe and Reliable Instrumented Protective Systems" (CCPS IPS) considers reliability a key factor for process safety instrumentation. This book advises treating reliability as equal in importance to IEC 61511's safety integrity for instrumented systems.
11. If the SIS is configured to alarm on detected failure, the process is safe as long as repair is completed within the assumed mean time to repair. An inherently safe design configures detected failures toward the trip condition and uses redundancy to achieve reliability. Configuring an SIS to alarm instead often increases the risk of loss of containment by at least an order of magnitude, because a detected fault leaves the SIS degraded (if it is fault tolerant) or disabled (if no fault tolerance is provided).
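The trade-off can be sketched with the IEC 61508-style simplified 1oo1 unavailability equations. All rates and times below are illustrative assumptions, not data from the article or any vendor:

```python
# Illustrative 1oo1 SIF unavailability, simplified-equation style.
# Every number below is an assumption chosen for the sketch.
lam_DU = 1.0e-6   # undetected dangerous failure rate, per hour (assumed)
lam_DD = 9.0e-6   # detected dangerous failure rate, per hour (assumed)
TI = 8760.0       # proof-test interval, hours (annual proof test)
MTTR = 72.0       # assumed mean time to repair, hours

# Detected faults drive a trip: only undetected failures contribute.
pfd_trip = lam_DU * TI / 2

# Detected faults only alarm: the repair window adds a lam_DD * MTTR term.
pfd_alarm = lam_DU * TI / 2 + lam_DD * MTTR

# On a simplex (no fault tolerance) SIS, the instantaneous PFD while an
# alarmed fault awaits repair is 1.0 -- far above the healthy average.
risk_multiplier_while_disabled = 1.0 / pfd_trip
```

With these assumed numbers, the average unavailability rises modestly, but during the repair window itself the process runs with no SIS protection at all, which is where the order-of-magnitude risk increase the text describes comes from.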
IEC 61511, ISA-TR84.00.04, and CCPS IPS discuss the use of compensating measures to temporarily manage the risk. Recognize that the fault repair and compensating measures often result in higher occupancy and a greater likelihood that ignition sources are present. This increases the potential consequence severity should an incident occur during repair — so conduct a hazards and risk assessment to review and approve the choice to configure detected faults to alarm.
Safe operation demands covering the risk gap from the time a failure is discovered until the fault is corrected and the equipment returns to service. You must address continued process operation with a degraded or disabled SIS via procedures that define compensating measures sufficient to maintain safe operation.
12. If a process hazard analysis (PHA) indicates that risk criteria have been met, the process is safe. The hazardous events identified during the PHA are limited by the evaluation team's assumptions, which are restricted by their collective experience, knowledge and available information. Any risk estimate is only as good as the quality assurance and the feedback process that ensure the data remain relevant in the real world; bias can creep in.
A 2005 analysis of incident data by the U.K.'s Health and Safety Executive determined that more than one in four hazardous events were attributed to poor hazard and risk assessment and follow-up. A 2003 analysis by the U.K.'s Health and Safety Laboratory found that in more than one in three incidents, the process deviations from normal operation that caused them hadn't been adequately considered as potential hazards or causes of equipment failure.
Quality design and management practices are essential to achieve real risk reduction and incident prevention. Without continuing effort, latent conditions appear over time, causing failures in the safety layers like holes in Swiss cheese. Without proactive action, the holes may eventually align to present a challenge to safe operation when a process deviation occurs (Figure 3).
13. An acceptable probability of failure on demand average (PFDavg) is sufficient proof of a safe system. The PFDavg is only as good as the model of the safety system and the data used for calculations. Most safety professionals can perform the calculations with a good tool, but in many cases experience and expertise are needed to see the forest for the trees. Failure rate data often carry large uncertainty. Correctly modeling the functions and applying the data require good engineering judgment. The usual, almost-exclusive focus on the sensor, logic solver and final element leads many to discount other potentially important contributors to failure. The PFDavg calculation must include all equipment that can cause a failure to function as specified.
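The effect of discounting contributors can be shown with the simplified 1oo1 approximation, where each element contributes roughly λDU × TI / 2 to PFDavg. The element names and failure rates below are assumptions for illustration only:

```python
# Simplified 1oo1 PFDavg: each element contributes lambda_DU * TI / 2.
# All failure rates (per hour) are illustrative assumptions, not vendor data.
TI = 8760.0  # proof-test interval, hours (annual proof test)

elements = {
    "sensor":                2.0e-6,
    "logic_solver":          1.0e-7,
    "final_element":         3.0e-6,
    # contributors the usual three-element focus tends to miss:
    "solenoid_valve":        1.0e-6,
    "instrument_air_supply": 5.0e-7,
}

pfd_avg = sum(lam * TI / 2 for lam in elements.values())
core_only = sum(elements[k] * TI / 2
                for k in ("sensor", "logic_solver", "final_element"))
```

In this sketch the often-overlooked solenoid and air supply raise the calculated PFDavg by roughly 30% over the three-element result, enough to move a function across a SIL boundary.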
Vendor data alone don't suffice to determine PFDavg. Vendor failure rates assume perfect operating conditions and perfect MI, ignoring the contribution of the process and operating environment to equipment degradation and failure. Actual failure rates depend strongly on the operating environment and MI, and can be orders of magnitude higher than vendor-reported rates. Consequently, you should base reliability data on field feedback; the less the feedback, the greater the uncertainty in the data.
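One standard way to express that uncertainty is a chi-squared confidence bound on a constant failure rate estimated from field hours. The sketch below uses published upper 95% chi-squared quantiles; the observation counts and device-hours are hypothetical:

```python
# Upper 95% confidence bound on a constant failure rate from field data:
# with r failures observed in T device-hours, the point estimate is r / T
# and the bound uses the chi-squared quantile with 2r + 2 degrees of freedom.
# Quantile values below are the standard published chi-squared 95% points.
CHI2_95 = {2: 5.991, 4: 9.488, 6: 12.592}

def lambda_estimate(r, T):
    """Return (point estimate, 95% upper bound) in failures per hour."""
    point = r / T
    upper = CHI2_95[2 * r + 2] / (2 * T)
    return point, upper

# Sparse feedback: zero failures in 1e6 device-hours still leaves an
# upper bound near 3e-6/h (the "rule of three").
sparse_point, sparse_upper = lambda_estimate(0, 1.0e6)

# Rich feedback: two failures in 1e7 device-hours gives a much tighter bound.
rich_point, rich_upper = lambda_estimate(2, 1.0e7)
```

The point of the sketch is the article's closing claim: with little field feedback the bound dwarfs the point estimate, so a PFDavg built on sparse data carries far more uncertainty than the single number suggests.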