Production performance ratings can provide crucial insights

Key performance indicators may lead the way to cost savings and capacity increases without large capital expenditures.

By David Emerson


To make it easier to identify golden and brown batches, the normalized rating value can be used as a filter criterion in a batch historian when selecting batches for analysis and reports.

Coming up with a rating
Calculations for determining a PPR can vary in complexity. A straightforward calculation (Figure 2) may find a value for each input KPI based upon a comparison to a target value or to the mean and standard deviation of its peers. This value is then weighted to give the individual KPI inputs different degrees of influence on the final rating. The calculation is performed using a default percentage as a starting point, and the result is normalized to 0-100%. This method can easily be expanded to include new KPIs.

More complex formulas may be used so long as they adhere to three key principles:
1. Compare an input KPI to a target or its peers.
2. Weight each input according to its importance to production performance.
3. Normalize the result, restricting it to 0-100%.
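
To make the arithmetic concrete, the sketch below (in Python) applies the three principles to two hypothetical KPIs, cycle time and yield. The function names, target values, worst-acceptable values and weights are illustrative assumptions, not the formula from Figure 2; an actual implementation would use the KPIs, targets and weights appropriate to the process.

def kpi_score_vs_target(value, target, worst_acceptable):
    # Score one KPI on a 0-100% scale against a target value:
    # 100% at (or better than) the target, 0% at or beyond the
    # worst-acceptable value, linear in between. Illustrative only.
    if worst_acceptable == target:
        return 100.0
    fraction = (worst_acceptable - value) / (worst_acceptable - target)
    return max(0.0, min(100.0, 100.0 * fraction))

def production_performance_rating(kpi_scores, weights):
    # Weight each KPI score and normalize the result to 0-100%.
    total_weight = sum(weights[k] for k in kpi_scores)
    weighted_sum = sum(weights[k] * kpi_scores[k] for k in kpi_scores)
    # Dividing by the total weight keeps the rating between 0% and 100%
    # and lets new KPIs be added without reworking the formula.
    return weighted_sum / total_weight

# Example with hypothetical targets and weights:
scores = {
    "cycle_time": kpi_score_vs_target(7.2, target=6.0, worst_acceptable=10.0),  # 70%
    "yield": kpi_score_vs_target(94.0, target=98.0, worst_acceptable=85.0),     # ~69%
}
ppr = production_performance_rating(scores, {"cycle_time": 0.6, "yield": 0.4})  # ~69.7%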

When comparing a KPI to its peers, a simple rule-of-thumb can be used as a starting point. This rule (Figure 3) draws upon concepts from Six Sigma by valuing low variability. If a batch’s KPI, for example its cycle time, falls within the upper and lower standard-deviation limits of its peer batches, this is good and should increase the rating. If the cycle time is outside the standard-deviation limits, this is poor performance and should decrease the rating. To reward batches showing improvement, those with cycle times below the mean and above the lower standard-deviation limit should receive a slightly higher rating.

This rule-of-thumb is intended to reward batches that deliver consistent cycle times, with a slightly higher reward for those trending below the mean. If a cycle time falls below the lower standard-deviation limit, it should be considered “too good to be true.” It may reflect a breakthrough in production performance, but that judgment should be withheld until the shorter cycle time is repeated often enough for the mean and standard-deviation limits to shift and these batches move into the green (best performance) zone.
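
A minimal sketch of this rule-of-thumb follows. The zone scores (100, 90, 75 and 40) are placeholder assumptions that would be tuned for the process, and the “too good to be true” case is simply held back from the best score until the peer statistics catch up.

from statistics import mean, stdev

def cycle_time_zone_score(cycle_time, peer_cycle_times, n_sigma=3.0):
    # Score a cycle time against its peer batches using the rule of thumb.
    # n_sigma = 3 standard deviations is the common starting point.
    mu = mean(peer_cycle_times)
    sigma = stdev(peer_cycle_times)
    lower, upper = mu - n_sigma * sigma, mu + n_sigma * sigma

    if lower <= cycle_time < mu:
        return 100.0   # below the mean, inside the limit: best (green)
    if mu <= cycle_time <= upper:
        return 90.0    # above the mean, inside the limit: good
    if cycle_time < lower:
        return 75.0    # "too good to be true" until the peer statistics shift
    return 40.0        # outside the upper limit: poor

peers = [7.1, 7.4, 6.9, 7.3, 7.0, 7.2]
score = cycle_time_zone_score(7.0, peers)   # 100.0: just below the peer mean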

The number of standard deviations to use depends upon the industry, company and process. In most cases three standard deviations is a good starting point. However, in processes with low variability, this may not provide sufficient differentiation and so a lower number may be desirable.

Peer batches need to be defined for each application. In the strictest sense, all batches based upon the same master recipe version are peer batches. If the master recipe is revised, the batches based upon the new version should be grouped separately from previous batches. In other applications batches from multiple master recipes may be considered peers if the recipes are similar enough.
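
In code, the strict peer definition amounts to grouping batch records by master recipe and version. The batch records and field names below are hypothetical stand-ins for what a batch historian would return.

from collections import defaultdict

# Hypothetical batch records as they might come from a batch historian.
batches = [
    {"batch_id": "B-101", "master_recipe": "Recipe A", "version": 3, "cycle_time": 7.2},
    {"batch_id": "B-102", "master_recipe": "Recipe A", "version": 4, "cycle_time": 6.8},
    {"batch_id": "B-103", "master_recipe": "Recipe A", "version": 4, "cycle_time": 7.0},
    {"batch_id": "B-104", "master_recipe": "Recipe B", "version": 1, "cycle_time": 9.1},
]

# Strict peer definition: same master recipe and same version.
peer_groups = defaultdict(list)
for batch in batches:
    peer_groups[(batch["master_recipe"], batch["version"])].append(batch)

# A looser application might key on the master recipe alone, treating
# all versions (or similar-enough recipes) as peers.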

As more peer batches are produced, the mean and standard deviation values will drift as new data points appear. At some point, the PPR for all peer batches will need to be recalculated to provide a level comparison. Whether this should be done each time a batch completes, periodically or upon demand is an application-specific decision.
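
The recalculation itself is straightforward. The sketch below re-scores every peer batch against the group’s current mean and standard deviation whenever it is triggered; the linear score is an illustrative assumption, not a prescribed formula.

from statistics import mean, stdev

def recalculate_peer_ratings(peer_batches, kpi="cycle_time", n_sigma=3.0):
    # Re-score every peer batch against the group's *current* statistics
    # so that all peers are compared on a level basis.
    values = [batch[kpi] for batch in peer_batches]
    mu, sigma = mean(values), stdev(values)
    for batch in peer_batches:
        deviation = abs(batch[kpi] - mu) / sigma if sigma else 0.0
        batch["ppr"] = max(0.0, 100.0 * (1.0 - deviation / n_sigma))
    return peer_batches

# Trigger this after each batch completes, on a schedule, or on demand,
# depending on the application.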

Depending upon the process and products involved, it may be necessary to create different PPR formulas for each master recipe or group of master recipes to customize them for differences in the processes or products.

Batch versus unit recipe ratings
So far we have looked at batch-level PPRs. While the batch-level rating is the most visible, the unit-recipe-level rating often can provide a more accurate reflection of key performance areas.

One weakness with batch-level metrics, such as cycle time, is that one key unit recipe, say, that for the reaction, can represent the true bottleneck or the high-cost/high-risk portion of a batch. Other unit recipes, such as those for preparation, mixing, post-reaction processing and drying, can exhibit variability that is actually caused by another batch’s reaction or other key unit recipe. If this occurs, calculate unit-recipe PPRs and use them as the primary method for comparing batches. Alternatively, a key unit recipe’s cycle time, rather than the batch cycle time, can serve as a KPI input to the batch PPR.
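
One simple way to express this is to weight the unit-recipe ratings when rolling them into the batch rating, with the key unit recipe (here the reaction) weighted most heavily. The unit-recipe names, ratings and weights below are hypothetical.

def batch_ppr_from_unit_recipes(unit_ratings, weights):
    # Roll unit-recipe ratings into a batch rating, normalized by total weight.
    total = sum(weights[u] for u in unit_ratings)
    return sum(weights[u] * unit_ratings[u] for u in unit_ratings) / total

unit_ratings = {"prep": 88.0, "reaction": 72.0, "drying": 95.0}    # hypothetical
weights = {"prep": 0.15, "reaction": 0.70, "drying": 0.15}         # hypothetical
batch_rating = batch_ppr_from_unit_recipes(unit_ratings, weights)  # ~77.9%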

While this could be carried down to the operation and phase level, it may provide diminishing returns. However, such calculations may be valuable for very specific periods of a batch’s execution.

A powerful use of PPRs is the ability to roll up results. Roll-ups can be performed on many levels; the table provides some examples. Rolling up ratings can provide a quick indication of trends. Figure 4 shows the PPR for batches rolled up by master recipe version and by master recipe. This roll-up enables quick comparison of the rating for each version of Recipe A or B. When rolled up to the recipe level, comparisons between Recipes A and B can be made, assuming the same formula was used for both recipes.
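
A roll-up of this kind is just a summary of the batch ratings at each grouping level; averaging, used in the sketch below with hypothetical data, is one simple choice of roll-up function.

from collections import defaultdict
from statistics import mean

# Hypothetical batch ratings: (master recipe, version, PPR).
ratings = [
    ("Recipe A", 3, 78.0), ("Recipe A", 3, 82.0),
    ("Recipe A", 4, 88.0), ("Recipe A", 4, 91.0),
    ("Recipe B", 1, 70.0), ("Recipe B", 1, 74.0),
]

by_version, by_recipe = defaultdict(list), defaultdict(list)
for recipe, version, ppr in ratings:
    by_version[(recipe, version)].append(ppr)
    by_recipe[recipe].append(ppr)

version_rollup = {key: mean(values) for key, values in by_version.items()}
recipe_rollup = {key: mean(values) for key, values in by_recipe.items()}
# Comparing Recipe A to Recipe B is meaningful only if the same PPR
# formula was used for both recipes.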

A potent tool
PPRs can be used as metrics for dashboards, but they also provide a powerful indexing tool for production analysis. Using a batch’s and unit recipe’s ratings as a filter, it is easy to find top- and bottom-performing batches for a number of criteria such as production lines or units, time of day, products and material lots. These sets of batches then can be analyzed using the KPI inputs to gain a better understanding of what causes production problems and higher costs.

When used with a batch historian, these data can be analyzed for trends over time, not just on a recipe or product basis but also to detect other correlations, such as whether certain operations, phase classes or units are commonly associated with high or low ratings.

For instance, an engineer may find all batches of a product in the last quarter that had performance ratings below 50% and then drill down to determine the root causes. Or the engineer may use the mean and standard deviation of the performance ratings for all batches of a product to compare different products and see which have the greatest variability. In this case, peer comparisons can be made using the batches’ PPRs (Figure 5).
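
With batch records pulled from the historian, both queries reduce to a filter and a per-product summary. The record layout and values below are hypothetical.

from statistics import mean, stdev

# Hypothetical result set from a batch historian query for last quarter.
batch_records = [
    {"batch_id": "B-201", "product": "P-100", "ppr": 42.0},
    {"batch_id": "B-202", "product": "P-100", "ppr": 81.0},
    {"batch_id": "B-203", "product": "P-200", "ppr": 67.0},
    {"batch_id": "B-204", "product": "P-200", "ppr": 95.0},
]

# Bottom performers to drill into for root causes.
poor_batches = [b for b in batch_records if b["ppr"] < 50.0]

# Mean and standard deviation of ratings per product, to compare variability.
variability = {}
for product in {b["product"] for b in batch_records}:
    pprs = [b["ppr"] for b in batch_records if b["product"] == product]
    if len(pprs) > 1:
        variability[product] = (mean(pprs), stdev(pprs))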

Go for the gold
PPRs provide a composite KPI that can be used to identify top- and bottom-performing batches. The ratings can be used to expand the concept of golden batches from one exemplary batch or unit recipe to a set of excellent top-performing batches, pinpointing common traits and characteristics that lead to the best results. The concept also can be expanded to include “brown” batches, to identify root causes of production problems so that corrective actions can be taken to prevent them.


David Emerson is a senior systems architect for Yokogawa in Carrollton, Tex. E-mail him at Dave.Emerson@US.Yokogawa.com.

This article is based on a paper that was originally presented at the World Batch Forum North American Conference 2004 and is copyrighted by WBF.
