Digital twins are being adopted at a high rate, with excellent reported benefits. They are typically managed as an engineering project with a scope and timetable for implementation. Unfortunately, little attention goes to tools for managing the twin after implementation, when the engineering team disbands. The good results are often lost as processes drift or change, staff grow frustrated with false alerts, and the twin goes unsupported.
Peter Drucker, management consultant and author, wrote, "You can't manage what you can't measure." This also applies to digital twins. Consider adopting the confusion matrix for monitoring and managing the performance of your digital twins.
Measuring Digital Twin Performance
The current state of digital twins lacks a key performance indicator (KPI) to manage their effectiveness. A review of industry writings on digital twins found no KPI or other measurement of the effectiveness of the twin itself. Each digital twin affects the KPIs of the asset or process it models, but those are secondary measures. This report addresses KPIs for the performance of the digital twin itself.
Confusion Matrix Applied to Digital Twins
A confusion matrix provides a specific table layout for visualizing the performance of a digital twin algorithm, including twins built on first-principles models, machine learning, or both. It measures alerts generated by the twin in a way that lets all stakeholders easily interpret the twin's accuracy and respond appropriately.
The four-cell matrix contains true positives and negatives, and false positives and negatives. This matrix provides a dashboard for measuring false positives and false negatives (errors) by the digital twin. Performance is managed by monitoring the false readings and continuously improving the twin by reducing them.
Performance Digital Twin Confusion Matrix
The most common twin is a performance digital twin for predictive maintenance (PdM) in the operate and maintain portion of an asset’s lifecycle. PdM is used for the examples in this Insight. The concepts can be applied to other types of twins since they involve simulation for predictions that can be compared with actual conditions.
• True Positive (TP) occurs when an alert generated by the model is confirmed by a maintenance planner or technician to be valid.
• True Negative (TN) has no entries, since the PdM twin generates no alert when there is no indication of a problem.
• False Positive (FP) records alerts for which no actual problem was found.
• False Negative (FN) records the failures without a corresponding alert.
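The tallies above can be sketched in code. A minimal Python sketch, assuming hypothetical work-order fields for whether the order originated from a twin alert and whether the planner or technician confirmed a real problem:

```python
from dataclasses import dataclass

@dataclass
class WorkOrder:
    # Hypothetical fields: did the order originate from a twin alert,
    # and did the planner/technician confirm an actual problem?
    from_twin_alert: bool
    problem_confirmed: bool

def confusion_counts(closed_orders):
    """Tally TP, FP, and FN for a PdM twin from closed work orders.

    TN is not tallied: the twin generates no alert when nothing is wrong.
    """
    tp = sum(1 for wo in closed_orders if wo.from_twin_alert and wo.problem_confirmed)
    fp = sum(1 for wo in closed_orders if wo.from_twin_alert and not wo.problem_confirmed)
    # A failure worked without a corresponding alert is a false negative.
    fn = sum(1 for wo in closed_orders if not wo.from_twin_alert and wo.problem_confirmed)
    return {"TP": tp, "FP": fp, "FN": fn}
```

The field names are illustrative; in practice they would map to the checkbox and origin fields configured in the EAM system.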
Integration with EAM System for Data Gathering and Reporting
For end users with in-house maintenance staff, these metrics can be gathered seamlessly. Automate the data collection by integrating the digital twin with the enterprise asset management (EAM) system where the technicians' work orders are processed and managed. First, alerts generated by the PdM twin are transferred to the EAM system. The workflow then automatically creates a maintenance work order for the maintenance planner to review, triage, approve, and schedule. Modern EAM systems provide application programming interfaces (APIs) for this function.
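The alert-to-work-order handoff might look like the following sketch. The endpoint URL, payload fields, and alert schema are all hypothetical, since they vary by EAM vendor:

```python
import json
import urllib.request

# Hypothetical EAM endpoint; real EAM systems expose vendor-specific APIs.
EAM_API = "https://eam.example.com/api/workorders"

def work_order_payload(alert):
    """Map a twin alert (illustrative schema) onto an EAM work-order request."""
    return {
        "asset_id": alert["asset_id"],
        "description": f"PdM twin alert: {alert['message']}",
        "origin": "digital_twin",      # lets reports filter twin-generated orders
        "status": "awaiting_triage",   # planner reviews, triages, approves, schedules
    }

def submit_alert(alert, url=EAM_API):
    """POST the work order to the EAM API (error handling omitted for brevity)."""
    req = urllib.request.Request(
        url,
        data=json.dumps(work_order_payload(alert)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

Tagging the order with an origin field is what later lets the confusion-matrix dashboard separate twin-generated work from routine maintenance.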
A dashboard can be created to display the confusion matrix for each digital twin – preferably associated with the EAM system where asset management KPIs are monitored. One approach is to add a checkbox field in the work order to obtain the needed data from the planner or technician. For those with just a few digital twins, this checkbox is likely best handled by the planner.
As the quantity and maturity of the digital twins grow, this role may shift to technicians. With mobile devices, technicians can process work orders – including a false-positive checkbox – while doing their work.
Modern EAM systems have a means to add fields in work orders to collect the needed data from the technicians and/or planner. These newer systems also allow for the creation of custom reports or dashboards for tracking the “confusion matrix” for a digital twin.
Managing Digital Twin Performance
The two leading indicators of a problem with a digital twin are:
• False negatives, which let unplanned downtime occur.
• False positives, which waste technician effort and erode compliance with the alerts.
False Negatives and Unplanned Downtime
The prime benefit of deploying a performance digital twin for PdM is preventing unplanned downtime. Each false negative should trigger a review of the failure modes and effects analysis (FMEA) to assess the twin's coverage. The twin will likely need added fidelity to cover the detected failure mode. These changes continuously improve the scope and robustness of the twin.
False Positives and Technician Compliance
Alerts for a problem that does not exist gradually erode the maintenance staff's confidence in the digital twin. A high rate of false positives will eventually cause them to ignore the alerts and fall back to the old way of doing things. Tracking false positives provides a means to manage and continuously improve the twin's trustworthiness; once technicians lose confidence, the digital twin will likely suffer an ungraceful death. A rule of thumb is that keeping the rate of false positives under 5 percent will sustain confidence in the twin.
Measuring Training Progress for Machine Learning
The ratio of false positives to all alerts [FP/(FP+TP)] provides a metric of the twin's maturity. Guidelines for the workflow for processing the alerts are:
• High rate of false positives – perhaps over 25 percent: Send the alerts to engineering for evaluation and triage. Use the data to improve the fidelity of the digital twin.
• Moderate FP rate – perhaps 5 to 24 percent: Send alerts to the maintenance planner. Incorporate a business process where the maintenance work order is automatically generated for the planner. Periodically, generate a report of FP and FN from the EAM system for use by engineering to improve the twin.
• Low FP rate – under 5 percent: Have the technicians check for FP and FN in the work order with room for text notes on their mobile device. Continue to report FP and FN to engineering with the notes.
These suggested percentages for workflow processing depend on several factors and your percentages will be different. The factors to consider include available resources, institutional knowledge of digital twins, and willingness of the maintenance organization to support technology adoption.
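The ratio and the threshold-based routing above can be expressed compactly. In this sketch the 25 and 5 percent thresholds are the rough guidelines from the text, exposed as parameters so each organization can tune them:

```python
def fp_rate(fp, tp):
    """False-positive ratio FP / (FP + TP); None if no alerts have fired yet."""
    total = fp + tp
    return fp / total if total else None

def route_alerts(fp, tp, high=0.25, moderate=0.05):
    """Route new twin alerts based on the twin's false-positive rate.

    Thresholds are illustrative defaults, not fixed rules.
    """
    rate = fp_rate(fp, tp)
    if rate is None or rate > high:
        return "engineering"   # high FP rate: evaluate, triage, improve fidelity
    if rate > moderate:
        return "planner"       # moderate: auto-generate work orders for planner review
    return "technician"        # low: technicians flag FP/FN directly on mobile devices
```

An unproven twin (no alert history) defaults to engineering review, which matches the intent of starting new twins under engineering supervision.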
Consider adopting the confusion matrix for monitoring and managing the performance of each digital twin. The associated KPIs provide a means to measure continuous improvement and drive toward a robust digital twin with lower unplanned downtime for critical assets.
• End users should track the performance of their digital twins using the confusion matrix. Use it to continuously improve the twin and to identify drifts as the production processes change.
• EAM technology providers should review the capabilities of their software to ensure that it can support KPI tracking of digital twins. Perhaps add a software wizard for adding data collection and dashboard display.
• Digital twin technology providers should consider adding this confusion matrix to their twins for data collection and performance tracking within the twin itself. This would be most appropriate for assets that are highly critical and disconnected from the user's maintenance system – like an OEM monitoring its equipment for customers.