Any discussion about the use of plant operating data with plant management, operational staff or suppliers of plant information management systems will quickly spawn one of these clichés: “We need to make better use of data,” “We need easier access to the data,” “We need to get the right data into the hands of the right people, so they can make the right decisions,” “Don’t just give me more data, give me more knowledge.” The clichés are right. We do need to improve our ability to make efficient use of our automation and plant information data system investments. But the questions remain: how, and where, is the value truly being delivered to the organization?
This call to action is being driven by reductions in resources, the desire to maximize capacity utilization, the need to optimize operational performance, and the need to ensure compliance with company goals, targets and corporate responsibilities. To put the challenge in perspective, Figure 1 shows the number of refineries in North America and their crude processing capacity. Clearly, we are being asked to do more with less. Data management is an essential element of the solution to this challenge.
Figure 1. Number of refineries and their capacity from 1949 to 2004
Data to knowledge
Over the last 20 years, the chemical, oil and gas industries have invested heavily in automation and plant information systems, and the data are now accessible. As a result, we should be able to put them to productive use. Or can we? The challenge with raw data, no matter how accessible, is that they are just data, and data still require a lot of work before they can be turned into knowledge. In most cases, the data need to be validated, analyzed and converted into knowledge that is actionable. And this can still require a significant investment of time and resources.
The key performance indicator (KPI) has been the first step in putting data into a context that is more aligned with organizational goals. Every plant functional group has high-level objectives and targets, and if the raw operational data can be converted in real-time or near real-time into these KPIs, then non-compliance with operational targets can be quickly identified and decisions can be made (see CP, October, p. 37). But while converting these data into contextualized KPIs is a necessary first step, this alone does not guarantee the desired operational improvements. If the KPIs themselves are not managed effectively, companies often simply transform the problem of “data overload” into the problem of “KPI overload.”
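The core of this conversion is simple: roll a raw measurement up against its target and flag non-compliance. The sketch below illustrates the idea; the KPI name, target value and tolerance are invented for illustration, not taken from any real plant.

```python
# Illustrative sketch: turning a raw reading into a target-aware KPI.
# The KPI name, target and tolerance below are hypothetical examples.
def kpi_status(name, actual, target, tolerance=0.02):
    """Flag a KPI as compliant if actual is within tolerance of target."""
    deviation = (actual - target) / target
    compliant = abs(deviation) <= tolerance
    return {"kpi": name, "actual": actual, "target": target,
            "deviation_pct": round(100 * deviation, 1),
            "compliant": compliant}

# A yield running about 4% below target would be flagged immediately.
print(kpi_status("reactor_yield", actual=0.91, target=0.95))
```

Run across every target in real-time or near real-time, this kind of check is what turns a historian full of raw values into a compliance picture the organization can act on.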
Consider the application of Control Asset Performance Management (CAPM).
In the chemical, oil and gas industries, 75% of a plant’s physical assets are under some form of automation or process control. Companies are now focused on optimizing control performance to improve plant performance. Typical results are tempting: capacity can be increased by 3% to 5%, with little or no additional capital investment. What is required instead is an investment of intellectual capital. The objective of the CAPM program is to automatically collect the raw data from the distributed control systems (DCSs) and convert these raw data into higher level KPIs. Most CAPM programs will convert real-time measurements of controller operating mode, present value, set point, and output into daily KPIs such as variance index, oscillation index, valve stiction (stickiness) index, utilization index, economic performance index, etc. (Figure 2). As a result, it is now much easier to tell whether the control system is performing optimally by monitoring these high-level utilization and performance-based KPIs.
Figure 2. Raw data to performance metrics.
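As a rough illustration of this raw-data-to-KPI conversion, the sketch below computes two such daily indices from controller samples. The field names and index definitions are simplified assumptions for illustration, not a CAPM standard.

```python
# Hypothetical sketch: rolling raw DCS controller samples up into daily KPIs.
# "Utilization" and "variance" index definitions here are simplified.
from statistics import pvariance

def daily_kpis(samples):
    """samples: list of dicts with 'mode', 'pv' (present value), 'sp' (set point)."""
    n = len(samples)
    # Utilization index: fraction of samples where the loop ran in automatic mode.
    utilization = sum(1 for s in samples if s["mode"] == "AUTO") / n
    # Variance index: variance of the control error (PV - SP); lower is better.
    errors = [s["pv"] - s["sp"] for s in samples]
    return {"utilization_index": round(utilization, 3),
            "variance_index": round(pvariance(errors), 3)}

# One (very short) day of samples for a single loop, invented for the example.
day = [
    {"mode": "AUTO", "pv": 50.2, "sp": 50.0},
    {"mode": "AUTO", "pv": 49.7, "sp": 50.0},
    {"mode": "MAN",  "pv": 51.0, "sp": 50.0},
    {"mode": "AUTO", "pv": 50.1, "sp": 50.0},
]
print(daily_kpis(day))
```

A real CAPM system would compute many more indices (oscillation, stiction, economic performance) over thousands of samples per loop, but the principle is the same: many raw points in, a handful of daily numbers out.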
The consolidation of raw data into KPIs or performance metrics is a necessary first step. If not managed carefully, though, it will simply change the nature of the problem. If we consider the CAPM example above, a plant faced with the challenge of monitoring and sustaining the performance of 1,000 control loops may find it difficult to act on the results of 1,000 KPIs per day. The transformation of data into KPIs alone seldom delivers the true improvements we’re looking for.
The need for visualization
The visualization layer is an essential element of getting value from any KPI-based monitoring system. We have all seen the promises of the “digital dashboard” and speedometer-like displays of plant efficiency delivered in real-time through a web-based environment. But the true power of the visualization layer is its interactive ability to quickly sort and display the consolidated performance metrics; high-priority requirements are highlighted and guidance is provided on the actions required.
The visualization layer is built on a combination of filtering, sorting and drill-down analysis techniques, which together form an automation layer between data collection and presentation. More sophisticated visualization techniques, such as treemap technology, are now available. These techniques allow users to visualize hundreds of assets in a single view and rapidly identify the key focus areas. Treemaps, and similar systems, represent a step change in our ability to rapidly act on the information presented within a KPI-based environment.
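The filter-and-sort step is what turns 1,000 KPIs per day into a short worklist. The sketch below shows the idea on a few invented loops; the loop tags, index values and threshold are all hypothetical.

```python
# Hedged sketch of the filter-and-sort step: surface the few loops that
# need attention out of hundreds of daily KPIs. All values are invented.
loop_kpis = [
    {"loop": "FC101", "oscillation_index": 0.82, "stiction_index": 0.10},
    {"loop": "TC202", "oscillation_index": 0.15, "stiction_index": 0.71},
    {"loop": "PC303", "oscillation_index": 0.05, "stiction_index": 0.04},
    {"loop": "LC404", "oscillation_index": 0.91, "stiction_index": 0.66},
]

def shortlist(kpis, threshold=0.6):
    """Keep loops where any index exceeds the threshold, worst first."""
    worst = lambda k: max(k["oscillation_index"], k["stiction_index"])
    flagged = [k for k in kpis if worst(k) > threshold]
    return sorted(flagged, key=worst, reverse=True)

for k in shortlist(loop_kpis):
    print(k["loop"])
```

A healthy loop like PC303 never reaches the engineer’s screen; only the loops breaching a threshold do, ranked by severity, which is exactly the triage a dashboard or treemap view automates.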
CAPM studies have shown that well-designed KPIs combined with powerful visualization techniques can allow plant personnel to double the rate at which high-priority automation problems are identified. More importantly, they can complete the task in less than 10% of the time required when using traditional analysis techniques. Figure 3 shows examples of both the sorting/filtering and treemap visualization layers applied to CAPM.
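To make the treemap idea concrete, the sketch below computes a minimal “slice-and-dice” treemap layout: each asset gets a rectangle whose area is proportional to a weight such as economic impact. This is the simplest of several layout algorithms (commercial tools typically use squarified variants), and the loop names and weights are invented.

```python
# Minimal slice-and-dice treemap layout: partition a rectangle into strips
# proportional to each asset's weight. Names and weights are hypothetical.
def slice_and_dice(weights, x, y, w, h, horizontal=True):
    """Return {name: (x, y, width, height)} rectangles within (x, y, w, h)."""
    total = sum(weights.values())
    rects, offset = {}, 0.0
    for name, weight in weights.items():
        frac = weight / total
        if horizontal:  # split along the x axis
            rects[name] = (x + offset, y, w * frac, h)
            offset += w * frac
        else:           # split along the y axis
            rects[name] = (x, y + offset, w, h * frac)
            offset += h * frac
    return rects

# Three loops weighted by, say, estimated economic impact.
loops = {"FC101": 5.0, "TC202": 3.0, "PC303": 2.0}
for name, rect in slice_and_dice(loops, 0, 0, 100, 60).items():
    print(name, rect)
```

Color each rectangle by a health KPI and the result is the familiar treemap view: hundreds of assets in one screen, with the biggest, reddest tiles drawing the eye first.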