Delve Deeper into “Premature Failures” of Rupture Discs

Inadequate data often conceal that the protective devices operated appropriately

By Eric Goodyear, Oseco


Today’s processing plants generally include complex networks of sensors, monitors and computer processors. We’ve learned to believe what our monitors tell us and to rely on the data they report. Usually we should — but in some instances the data can mislead us. Attributing the bursting of a rupture disc to “premature failure” exemplifies this. In most cases, the disc actually has operated as intended to relieve an overpressure event, but the available data aren’t good enough for us to see that such an event occurred.

This underscores the importance of understanding the limits of what our sensors can tell us, and what we can do to protect ourselves from the things they can’t. Our networks are powerful, but bad things still happen to good systems. We still get caught unawares by overloads and can’t open a valve fast enough or disable a feed line in time to prevent damage. How can we avoid such issues?


Measurement Basics

The ability of a system to react properly depends upon having adequate and appropriate data available. Sensors provide these data to control systems. So, let’s quickly review some key facts about the sensor, sampling and resolution, and data reporting.

The sensor. This is simply a device that responds in a predictable way to some phenomenon. Various factors affect its performance. For instance, a sensor requires a certain minimum time to get a reading, which limits its speed of response — in the case of a typical thermocouple, it may take a number of seconds to respond to changes in temperature. Changes occurring faster than the sensor can read will produce skewed results or an average rather than true data.
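The averaging effect described above can be illustrated with a minimal simulation. This sketch models a slow sensor as a first-order lag (the time constant of 5 seconds and all signal values are hypothetical, chosen only for illustration) and shows how a brief excursion can be largely invisible in the recorded readings:

```python
def first_order_lag(signal, dt, tau):
    """Simulate a sensor whose output approaches the input exponentially.

    dt  -- time between samples (s)
    tau -- sensor time constant (s); larger means slower response
    """
    alpha = dt / (tau + dt)  # smoothing factor for the discrete lag
    y = signal[0]
    readings = []
    for x in signal:
        y += alpha * (x - y)  # output moves only a fraction toward the input
        readings.append(y)
    return readings

# Hypothetical process: steady at 100 units, with a 2-second spike to 500.
true_signal = [100.0] * 10 + [500.0] * 2 + [100.0] * 10

# Sampled once per second by a sensor with a 5-second time constant.
readings = first_order_lag(true_signal, dt=1.0, tau=5.0)

print(max(true_signal))            # the real peak: 500.0
print(round(max(readings), 1))     # the sensor reports far less than that
```

Here the recorded maximum is well under half the true peak, which is exactly the situation in which a rupture disc can burst on a genuine overpressure event that the data never clearly show.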

Another limiting factor of many sensors is accuracy, i.e., the closeness of the reported value to the true one. The sensor’s construction and calibration can affect its accuracy. Construction usually sets sensing range and repeatability. For example, consider a pressure sensor that has a membrane whose deflection is measured to indicate pressure. A very thin membrane will be quite responsive to small changes in pressure, deflecting a large amount (typically in a less-than-linear fashion). This can improve accuracy but restricts the upper pressure the sensor can handle. On the other hand, a thick membrane might deflect very little over a relatively large pressure range, creating an accuracy issue when the differences in pressure are small.

So, pay attention to the sensor’s accuracy specification if one is provided, and note whether it relates to the entire range or the reading. (To get a sense of the impact of the different bases, see: “Pressure Gauges: Deflate Random Errors.”)
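The difference between the two accuracy bases is easy to quantify. This sketch uses a hypothetical gauge (0–1,000 psi range, ±1% spec) to show how the same percentage means very different things at a low reading:

```python
def error_full_scale(full_scale, pct):
    # ±pct of full scale: the same absolute error anywhere on the dial
    return full_scale * pct / 100.0

def error_of_reading(reading, pct):
    # ±pct of reading: the absolute error shrinks at low readings
    return reading * pct / 100.0

reading = 50.0  # psi, near the bottom of the hypothetical 0-1,000 psi range

print(error_full_scale(1000.0, 1.0))     # +/-10 psi -- 20% of this reading
print(error_of_reading(reading, 1.0))    # +/-0.5 psi -- 1% of this reading
```

A ±1%-of-full-scale gauge read at 50 psi can be off by ±10 psi, a 20% relative error, while a ±1%-of-reading gauge is off by only ±0.5 psi at the same point.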

Make no mistake, a sensor will report some value. However, it’s important to understand exactly what that value means in the context of your system. In the end, the output of most sensors is a signal already laden with caveats as to what its amplitude really reveals.

Sampling and resolution. Something must sample the signal generated by the sensor and record the data values. Sampling involves two elements: the rate and the resolution. Sampling rate is simply the number of times a signal is read and recorded over a certain interval. Resolution indicates the smallest incremental value that you should treat as meaningful. So, for instance, for a pressure gauge with a resolution of 2 psi, don’t worry about differences less than that. Most systems will round the value to the nearest meaningful increment to prevent “false interpretations.”
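The rounding-to-resolution idea can be sketched in a few lines. The 2-psi increment and the sample values here are hypothetical, matching the gauge example above:

```python
def quantize(value, resolution):
    """Round a raw reading to the nearest resolution increment."""
    return round(value / resolution) * resolution

# Hypothetical raw readings jittering by less than the 2-psi resolution.
raw_samples = [101.3, 100.9, 101.8, 102.4]

reported = [quantize(s, 2.0) for s in raw_samples]
print(reported)  # sub-resolution jitter disappears from the reported values
```

Differences smaller than the instrument's resolution are noise, and quantizing the record this way keeps them from being read as real process changes.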

As discussed above, the sensor itself constrains the sampling rate. However, the device doing the sampling, such as an analog/digital converter, also imposes constraints — both through its own conversion speed and because of downstream devices, such as controllers or recorders, and their associated needs.

Data reporting. Once data are sampled, the reporting signal itself may exhibit some cyclic characteristics. In addition, you must consider the way in which the data are compiled, graphed and analyzed. For instance, let’s compare daily weather temperature cycles to vehicle motor rpm. The daily temperature cycle obviously is slow, so a sample rate of only once per hour suffices for relatively detailed data and results. In contrast, a small car engine may run at thousands of rpm, requiring a much higher sample rate for ample data coverage. In cases like this, systems typically sample in the 1–2 times/sec range and then may average the readings before reporting. So, adequately monitoring the engine involves analyzing a massively greater volume of data. Moreover, interpreting these data requires properly accounting for significant interactions that can impact engine rpm; small changes in conditions can affect the reported values drastically if sample rates aren’t taken into account. Various industries use different sampling and reporting strategies to manage this.
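The engine-rpm comparison above comes down to matching the sample rate to the speed of the signal. This sketch (all frequencies and amplitudes hypothetical) samples the same fast-cycling signal at two rates and shows how the slower rate misses the peaks entirely:

```python
import math

def sample(f, rate_hz, duration_s):
    """Sample a signal function f(t) at a fixed rate over a duration."""
    n = int(rate_hz * duration_s)
    return [f(i / rate_hz) for i in range(n)]

# Hypothetical "true" signal: a value oscillating 10 times per second
# between 50 and 150 around a mean of 100.
signal = lambda t: 100.0 + 50.0 * math.sin(2 * math.pi * 10 * t)

fast = sample(signal, rate_hz=1000, duration_s=1.0)  # resolves each cycle
slow = sample(signal, rate_hz=2, duration_s=1.0)     # far too slow

print(round(max(fast), 1))  # close to 150 -- the real peak is captured
print(round(max(slow), 1))  # near 100 -- the peaks are invisible
```

Sampled well below the signal's cycle rate, the record shows a nearly flat line at the mean value, which is precisely how a real pressure excursion can fail to appear in the data while still being severe enough to burst a disc.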
