Make the Most of Historical Process Data

The past can provide critical insights for the future.

By Dane Overfield, Exele Information Systems, Inc.



Tips and Traps
Underappreciated issues can undermine an implementation. Pay particular attention to four areas:

1. Scan rates. A historian can’t store a value that wasn’t scanned. Data collection nodes for a historian are responsible for retrieving new measurement values, typically by polling measurements at a configured interval; you may be able to scan groups of points at different intervals. The scan rate must allow the historian to accurately reproduce the measurement waveform. A common mistake is setting the scan interval too long, which increases the likelihood that the collection node will miss important values. For example, if the collection node scans a voltage measurement every 5 seconds, a 3-sec. voltage spike may never be scanned or recorded in the historian. So, during configuration, it’s imperative to involve people familiar with the measurement behavior; someone who understands the process and the measurements being collected would know that a 3-sec. voltage spike could be an issue.

When selecting scan rates, ask: “What is the minimum period in which a significant value change can occur?” Too often people apply blanket settings that aren’t appropriate for all measurements. For example, scanning temperatures once per minute is fine for a large vessel where a significant change (0.5°) takes tens of minutes, but not for an exhaust air temperature that can change 10° in less than 5 seconds; at one-minute scans you will miss significant events. If you don’t know the appropriate interval, check with a person who does.
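To make the trap concrete, here is a minimal Python sketch with a simulated signal; the 480-V baseline, the spike timing and the two scan intervals are illustrative assumptions, not data from any particular system:

```python
# Minimal sketch: a simulated 3-second voltage spike polled at two
# scan intervals. All numbers are illustrative assumptions.

def voltage(t):
    """Simulated source signal: nominal 480 V, spiking from t=12 s to t=15 s."""
    return 540.0 if 12 <= t < 15 else 480.0

def scan(signal, interval, duration=30):
    """Poll the signal every `interval` seconds, as a collection node would."""
    return [(t, signal(t)) for t in range(0, duration + 1, interval)]

# A 5-second interval polls at t = 0, 5, 10, 15, ... and never lands
# inside the 12-15 s window, so the spike never reaches the historian.
coarse = scan(voltage, 5)
print("5-sec scan max:", max(v for _, v in coarse))  # 480.0 -- spike missed

# A 1-second interval lands inside the window and captures the spike.
fine = scan(voltage, 1)
print("1-sec scan max:", max(v for _, v in fine))    # 540.0 -- spike recorded
```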

2. Data retrieval. Most process data historians record measurement values along with a status and time stamp. These values may be stored sporadically, depending on the data collection and filtering settings for the measurement point. Reports and analysis tools may offer a standard set of retrieval functions, and the data consumer must understand them. Users like to build reports by retrieving evenly spaced samples of the recorded values because such calls return a known number of values. Yet sampled retrieval often will miss important recorded events that occur between the evenly spaced samples, resulting in inaccurate reports.
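As an illustration, here is a minimal sketch; the tag data, time stamps and the previous-value (step) interpolation rule are hypothetical assumptions, not any particular vendor’s API:

```python
from datetime import datetime, timedelta

# Hypothetical archive for one tag: sparse (timestamp, value) records,
# as a compression-enabled historian might store them. A short
# excursion to 97.0 is recorded at 08:02:30 and clears by 08:03:10.
recorded = [
    (datetime(2024, 1, 8, 8, 0, 0), 50.0),
    (datetime(2024, 1, 8, 8, 2, 30), 97.0),
    (datetime(2024, 1, 8, 8, 3, 10), 50.0),
    (datetime(2024, 1, 8, 8, 9, 0), 51.0),
]

def value_at(archive, t):
    """Previous-value (step) interpolation: last recorded value at or before t."""
    earlier = [v for ts, v in archive if ts <= t]
    return earlier[-1] if earlier else None

# Sampled retrieval: one value every 5 minutes. The row count is known,
# which is convenient for reports, but the excursion vanishes entirely.
start = datetime(2024, 1, 8, 8, 0, 0)
samples = [value_at(recorded, start + timedelta(minutes=5 * i)) for i in range(3)]
print("5-min samples:", samples)   # [50.0, 50.0, 51.0] -- no sign of 97.0

# Recorded-values retrieval: everything actually stored in the window.
window = [v for ts, v in recorded if start <= ts <= start + timedelta(minutes=10)]
print("recorded values:", window)  # [50.0, 97.0, 50.0, 51.0]
```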

3. Aggregates. Data historians may offer a multitude of aggregate functions that return calculations performed on the historized data. Many of these aggregates have similar names, or names easily confused with everyday usage, so understanding the details of each function is imperative. For example, a “total” may return an integration or a summation; an “average” may be time-weighted or a simple mean of the recorded values. If possible, retrieve the historized data, compute the result yourself, and compare it with the historian’s aggregate function result to verify the expected operation.
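Here is a minimal sketch of that verification step, assuming previous-value (step) interpolation between records; the numbers are illustrative:

```python
# Why a time-weighted average and a plain mean of recorded values
# disagree: the tag sat at 10.0 for 90 s, then 100.0 for 10 s.
records = [(0, 10.0), (90, 100.0)]  # (seconds into period, value)
period_end = 100

def mean_of_records(recs):
    """Arithmetic mean of the recorded values, ignoring persistence time."""
    return sum(v for _, v in recs) / len(recs)

def time_weighted_average(recs, end):
    """Weight each value by how long it persisted (previous-value interpolation)."""
    total = 0.0
    for (t0, v), (t1, _) in zip(recs, recs[1:] + [(end, None)]):
        total += v * (t1 - t0)
    return total / (end - recs[0][0])

print(mean_of_records(records))                    # 55.0
print(time_weighted_average(records, period_end))  # 19.0
```

If the historian’s “average” aggregate returns 19.0 here, it is time-weighted; if it returns 55.0, it is a simple mean of the recorded values.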

4. What time is it? Time-stamped process data in a historian presents many opportunities for errors and confusion.

• Collection nodes may time-stamp the scanned data based on the current time of the data source (e.g., PLC, DCS), which may differ from the current time of the historian or other data sources stored in the same historian.
• Users may be in a different time zone than the data historian computer, so a request for “the value at 8:30” may return different values for different users. Can users request data according to their own time zone, or must they use the time zone of the historian?
• Clock changes for daylight saving time produce one 23-hr day and one 25-hr day each year. What happens to a daily report on those two days?
• Some “events” occur within the confines of continuous time and require additional processing and correlation tools. Such events have a defined start and end time; they include traditional batch and sub-batch operations, equipment startups, environmental emission violations, and equipment failure or downtime. Tracking them requires storing the event start and end times along with the ability to correlate the historical process data within the event window. Traditional absolute-time-based tools may not work well for event-based analysis, which demands tools that provide event querying (when did the events occur?), correlation of the process data within the event, and removal of absolute time in favor of “time-into-event” analysis, as in the sketch below.
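Here is a minimal sketch of time-into-event analysis. The event list, the tag name TIC-101.PV and the recorded_values stand-in are hypothetical; a real implementation would query the events and the process data from the historian:

```python
from datetime import datetime, timedelta

events = [  # (event ID, start, end) -- e.g., two batch runs days apart
    ("BATCH-101", datetime(2024, 3, 4, 6, 0), datetime(2024, 3, 4, 8, 0)),
    ("BATCH-102", datetime(2024, 3, 5, 14, 30), datetime(2024, 3, 5, 16, 30)),
]

def recorded_values(tag, start, end):
    """Stand-in for a historian recorded-values call; returns (timestamp, value).
    Here it fabricates a simple ramp so the sketch is self-contained."""
    return [(start + timedelta(minutes=15 * i), 20.0 + 5.0 * i) for i in range(9)]

for event_id, start, end in events:
    for ts, value in recorded_values("TIC-101.PV", start, end):
        # Strip absolute time: re-express each sample as seconds into the
        # event, so the two batch profiles can be overlaid and compared.
        seconds_in = (ts - start).total_seconds()
        print(f"{event_id}  t+{seconds_in:6.0f}s  {value:5.1f}")
```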


Dane Overfield is product development lead at Exele Information Systems, Inc., East Rochester, N.Y. E-mail him at dane@exele.com.
