Order up better measurements

Second-order instruments can provide improved accuracy and reliability for electrochemical and other sensors.

By Larry Berger


Faced with a need to monitor the accuracy or reliability of a particular instrument, engineers at some plants improvise a secondary sensor to help keep the primary sensor “honest.” These practitioners, usually without realizing it, are using second-order instrumentation, that is, instrumentation whose purpose is to measure or indicate the condition or performance of another instrument. The terminology may not be familiar, but it aptly describes an often-overlooked way of dealing with the pervasive problem of sensor error.

Measurement untrustworthiness has been around as long as there have been instruments. But there are remarkably few ways of dealing with it.

Sometimes a reading is so far off that it clearly stands out as suspect. When this happens, it often is a simple matter to repeat the measurement, recalibrate the instrument, or repair or even replace it.

Even if a single reading alone doesn’t flag that something is amiss, reliability can be improved by the time-honored technique of redundancy, either in time or space. Redundancy in time simply means repeating the measurement, while redundancy in space entails using multiple instruments in parallel. Either way, redundancy is a powerful technique, though it seldom gets the attention it deserves — perhaps because it seems so obvious.
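Redundancy in time can be sketched in a few lines. The following is a hypothetical illustration, not a prescription: `read_sensor()`, the sample count, and the tolerance are all assumptions standing in for a real sensor interface and plant-specific limits. The idea is simply that repeated readings taken close together should agree; if they don’t, the reading is flagged rather than trusted.

```python
# Hypothetical sketch of redundancy in time: repeat the measurement and
# flag the result as suspect if the spread exceeds a tolerance.
# read_sensor(), n, and tolerance are illustrative assumptions.

import random
import statistics

def read_sensor():
    # Stand-in for a real sensor read: a nominal value plus small noise.
    return 100.0 + random.gauss(0, 0.1)

def measure(n=5, tolerance=1.0):
    """Take n readings; reject if they disagree, else return their mean."""
    readings = [read_sensor() for _ in range(n)]
    spread = max(readings) - min(readings)
    if spread > tolerance:
        raise RuntimeError("readings disagree beyond tolerance; suspect sensor")
    return statistics.mean(readings)
```

Averaging the agreeing readings also reduces random noise, which is a side benefit of redundancy in time that redundancy in space does not automatically provide.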

An instructive analogy comes from computing. A modern computer works so fast that it’s hopeless to try to verify the results of a complex series of computations by comparing them with the results of manual computations done in parallel. So, how is it possible to spot if some malfunction in the computer’s microprocessor is skewing the results? The answer is simple: Give the same problem to two computers and then compare their results. Carrying this thinking one step further, assigning the same problem to three computers in parallel can pinpoint when there is a glitch and which computer is the faulty one. (This concept, incidentally, is not what is usually meant by the term “parallel computing,” which refers instead to dividing a computational task into sub-tasks, each assigned to a different computer, then combining the results.)
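The three-computer scheme amounts to majority voting. A minimal sketch, with the three “computers” simulated simply as three values of the same computation (the function name and return convention are illustrative assumptions, not any standard API):

```python
# Hypothetical sketch of triple redundancy with majority voting.
# Given three results for the same problem, the majority value wins,
# and the dissenting channel, if any, is identified as the suspect one.

def vote(results):
    """Return (majority_value, faulty_index); faulty_index is None if all agree."""
    a, b, c = results
    if a == b == c:
        return a, None          # all three channels agree
    if a == b:
        return a, 2             # channel 2 disagrees -> suspect
    if a == c:
        return a, 1             # channel 1 disagrees -> suspect
    if b == c:
        return b, 0             # channel 0 disagrees -> suspect
    raise RuntimeError("no two channels agree; result indeterminate")

# Example: channel 1 returns a bad value, and voting pinpoints it.
value, faulty = vote([42, 41, 42])
print(value, faulty)  # -> 42 1
```

With only two channels, a disagreement tells you something is wrong but not which unit is at fault; the third channel is what makes the fault locatable.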

Unfortunately, many applications do not lend themselves to redundant instrumentation for reasons of cost, space, weight, inaccessibility for servicing, etc. These kinds of constraints are becoming more commonplace.

A better approach
Computing again offers insights, this time on a more sophisticated approach. Even where computational accuracy is deemed mission critical, the use of multiple computers working on the same problem in parallel is considered to be an expensive luxury, except in the case of manned spacecraft. Most other mission-critical applications instead rely upon some regimen of computer self-diagnostics: The greater the importance of the computation, the more elaborate the diagnostics.

The idea is to exercise the microprocessor via a series of test computations that are thorough enough to turn up any malfunction and whose correct results are known in advance. This technique might be considered to be a computational analog of second-order instrumentation, where the diagnostic software plays the role of a secondary sensor whose job is to monitor the performance of the primary. The analogy breaks down, however, unless the self-diagnostic computations are “multi-tasked,” i.e., performed in parallel or at least alternated with the actual “working” computations. The essence of second-order instrumentation is that the secondary instrument reports on the status of the primary in real time.
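To make the analogy concrete, here is a hypothetical sketch of multi-tasked self-diagnostics: known-answer test computations are alternated with the actual working computations, so a processing fault is caught as the work proceeds rather than at some later scheduled check. The test vectors and the job functions are illustrative assumptions.

```python
# Hypothetical sketch: alternating known-answer self-tests with working
# computations. Each self-test exercises the processor on a problem whose
# correct result is known in advance; a mismatch halts further work.

SELF_TESTS = [
    (lambda: 2 + 2, 4),
    (lambda: sum(range(10)), 45),
    (lambda: 7 * 8, 56),
]

def process(jobs):
    """Run each working computation, interleaving a self-test before it."""
    results = []
    for i, job in enumerate(jobs):
        check, expected = SELF_TESTS[i % len(SELF_TESTS)]
        if check() != expected:
            raise RuntimeError("self-diagnostic failed; halting computations")
        results.append(job())
    return results

print(process([lambda: 3 * 3, lambda: 10 - 4]))  # -> [9, 6]
```

The interleaving is the essential point: run the diagnostics only at startup or on a nightly schedule and the monitoring is no longer real-time, which is exactly where the analogy to second-order instrumentation would break down.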

The best way to cope with the unreliability of a sensor — and especially with its degradation over time — depends upon the consequences of an inaccurate measurement and the accessibility of the instrument in question. If there is a dearth of options in general, the shortage is especially acute in those “hard” cases where the need is greatest. When the consequences of an inaccurate measurement can be catastrophic, and when it is impractical or very costly to remove an instrument from its location for servicing, second-order instrumentation often affords the best alternative.

However, instrumentation engineers seldom consider second-order instrumentation, perhaps because they are unaccustomed to the concept of a sensor for a sensor. Examples of this approach exist, even in consumer products. A household smoke detector, for instance, does not need great measurement accuracy, but it must have outstanding reliability because instrument failure can be catastrophic to the user and pose liability issues for the manufacturer. That is why most smoke detectors are equipped with LEDs to indicate that the units’ batteries have enough charge. The smoke detector, the primary or first-order sensor, has a second-order sensor to monitor its condition.

The plant context
Let’s consider a situation common at chemical plants that use electrochemical sensors. Sooner or later, every such device fails because the electrochemistry of the measurement process itself degrades key elements of the sensor. En route to the inevitable failure, measurement accuracy deteriorates. However, the optimum time for intervention, i.e., replacement of contaminated or consumed elements, often cannot be predicted. In such cases, the best strategy depends upon (a) the consequences of a faulty measurement; (b) the accessibility of the sensor for servicing; and (c) the economics of preemptive action, i.e., preventive maintenance before the sensor starts operating outside its nominal range.

For the least-demanding applications, the simplest strategy — preventive maintenance — works best because routine scheduled service usually is less costly than the effort, and especially the risk, of trying to squeeze more life from degraded components.

What about production or waste-treatment processes requiring sensors that are hard to access, such as those located in the flow stream of a hazardous fluid, or whose servicing requires costly shutdowns? For instance, measurements upon which regulatory compliance or batch quality depend are not always made in convenient locations. Sometimes, access to the sensor necessitates suiting up to go into a hazardous environment in 100°F weather, or a sensor in high-pressure service must be extracted and reinserted in cramped quarters.