Air Products is a world-leading industrial gases company that provides atmospheric and process gases and related equipment to a variety of manufacturing markets. With over 750 production facilities, 1,800 miles of industrial gas pipeline and operations in more than 50 countries, the company clearly understands that asset safety, reliability and efficiency are key to value generation. To this end, Air Products has increasingly focused on extracting further insight about asset performance from process data.
The fundamental operational questions that data-driven decision-making can potentially address lie at the intersection of asset optimization and reliability. Often, optimizing asset efficiency can detrimentally impact reliability and vice versa. The only way to objectively determine the optimal tradeoff is through careful analysis of the data. Some representative questions plant managers often pose, and that data-driven decision-making ideally should answer, include:
• Should I run this fixed-bed reactor at higher temperatures to achieve a greater conversion? If so, how will this affect my catalyst activity as well as the overall mechanical integrity of the reactor?
• When should I service the intercoolers of my multistage compressor given the tradeoff between maintenance cost and the potentially degrading isothermal efficiency of the compressor?
• Can I extend the time between plant outages without increasing my unplanned maintenance cost?
The list goes on. The groundswell of interest throughout industry in “big data” suggests that objective, agile and insightful answers to these operational questions are now potentially within reach. Certainly, digital control systems, data historians and additional sensor monitoring points made possible through advancements in the Industrial Internet of Things (IIoT) provide manufacturing companies with an unprecedented amount of data. However, these high-dimensional datasets often suffer from a challenging signal-to-noise ratio and a high degree of correlation/redundancy, and are non-causal in nature (i.e., a change in a single sensor reading doesn’t always support a causal conclusion about altered process conditions — reaching such a conclusion instead requires examining a combination of factors). The result is an indirect, incomplete representation of overall asset performance. Consequently, companies paradoxically can have increased process observability yet still remain unable to translate the acquired data into a deeper understanding of evolving asset efficiency and reliability.
For highly integrated processes with feedback and recycle components, a single univariate sensor profile seldom suffices for anticipating or diagnosing process interruptions or equipment failures. Predicting or spotting deviations in these complex process operations typically requires understanding the interdependence of several factors, many of which are individually identified by discrete process sensors. However, the interdependence of such sensor information, and its contribution to a process deviation, frequently is neither obvious nor feasible to monitor through human observation alone.
Advanced analytics in the form of state-of-the-art statistical multivariate techniques can de-noise process signals as well as capture the correlation among these signals, transforming huge plant datasets into actionable information. The Computational Modeling Center within Air Products’ Global Technology organization has developed ProcessMD, a patented, web-based, predictive-monitoring and fault-diagnostic platform, to provide foresight and insight about asset performance.
“Data is a core, strategic asset, and we have built the digital solution under our ProcessMD brand in order to unlock value from these data streams,” explains Brian Farrell, Air Products Global Technology Director.
At the foundation of the platform are predictive models built from multivariate techniques such as projection onto latent structures (PLS) and principal component analysis (PCA). These machine-learning techniques perform a variable transformation that robustly handles sensor redundancy/correlation, measurement error and more — enabling effective detection of defects that might go unnoticed with classical, univariate statistical quality control techniques.
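To illustrate the idea (this is a generic sketch, not ProcessMD’s actual implementation), consider two redundant sensors that track the same underlying process signal with independent noise. PCA collapses the redundancy into a single latent score that is less noisy than either raw channel:

```python
import numpy as np

# Hypothetical example: two redundant sensor channels measuring the same
# underlying process variable, each with independent measurement noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=500)             # true underlying process signal
X = np.column_stack([
    latent + 0.1 * rng.normal(size=500),  # sensor 1
    latent + 0.1 * rng.normal(size=500),  # sensor 2
])

# PCA via eigendecomposition of the covariance matrix
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]         # sort components by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# The first principal component captures nearly all the variance:
# the redundancy between the two sensors collapses into one latent score.
explained = eigvals / eigvals.sum()
scores = Xc @ eigvecs[:, :1]              # de-noised latent representation
print(f"variance explained by first component: {explained[0]:.3f}")
```

Because the noise in each channel is independent while the signal is shared, the first component aligns with the common signal and the second component absorbs mostly noise — this is the variable transformation that makes the downstream monitoring robust.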
In some cases, a process may not perform properly even though all process variables have values within the expected ranges. For example, the data represented by X1 and X2 in Figure 1 fall within accepted upper and lower control limits. However, a composite variable T, defined through a multivariate model that captures the underlying positive relationship between X1 and X2, indicates when that correlation has been violated, leading to an off-spec scenario. These data-driven, auto-adaptive models provide advanced alerts of subtle abnormalities and deviations from expected behavior across a whole plant or its specific components. The predictive analytics focus on diagnostics that enable proactive intervention.
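A minimal sketch of this scenario (assumed data and limits for illustration only; Hotelling’s T² stands in for the composite variable T): X1 and X2 are positively correlated in normal operation, and a fault point stays inside each univariate 3-sigma limit while breaking the correlation. The multivariate statistic flags it; the per-variable checks do not.

```python
import numpy as np

# Hypothetical normal-operation data: X2 tracks X1 with small noise,
# so the two sensors are strongly positively correlated.
rng = np.random.default_rng(1)
x1 = rng.normal(0.0, 1.0, 1000)
X = np.column_stack([x1, 0.9 * x1 + 0.3 * rng.normal(size=1000)])

mean = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

def hotelling_t2(point):
    """Squared Mahalanobis distance of a point from normal operation."""
    d = point - mean
    return d @ cov_inv @ d

# Classical univariate 3-sigma control limits per sensor
lo, hi = mean - 3 * X.std(axis=0), mean + 3 * X.std(axis=0)

# Fault point: each value is inside its own limits, but the positive
# correlation between X1 and X2 is violated (one high, one low).
fault = np.array([2.0, -2.0])
passes_univariate = bool(np.all((fault >= lo) & (fault <= hi)))
t2_limit = np.quantile([hotelling_t2(p) for p in X], 0.99)

print(passes_univariate)                  # True  -> no univariate alarm
print(hotelling_t2(fault) > t2_limit)     # True  -> multivariate alarm fires
```

The T² limit here is taken as the empirical 99th percentile of the training data; in practice a theoretical F-distribution limit is commonly used, but the point is the same: only the multivariate statistic sees the broken correlation.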