The digital twin concept encompasses use cases both for product lifecycle management (PLM) and for real-time streaming analytics in live systems. The latter is becoming ever more important as our society increasingly relies on a fast-growing population of IoT devices. With billions of smart devices constantly generating data, the task of tracking their telemetry and extracting real-time intelligence has become extremely challenging. Applying the digital twin concept to real-time analytics for IoT promises to address this challenge by revealing real-time insights, enabling immediate responses to emerging issues, and maximizing overall situational awareness.
Consider a software telematics application that tracks a nationwide fleet of trucks to ensure timely deliveries. Dispatchers receive telemetry from IoT-connected trucks every few seconds detailing their location, speed, lateral acceleration, engine parameters, and cargo viability. In a classic needle-and-haystack scenario, dispatchers must continuously sift through telemetry from thousands of trucks to spot issues, such as lost or fatigued drivers, engines requiring maintenance, or unreliable cargo refrigeration. They must also intervene quickly to keep the supply chain running smoothly. Digital twins can implement real-time analytics that help track these devices and tackle the seemingly impossible task of automatically sifting through telemetry as it arrives, analyzing it for anomalies needing attention, and alerting dispatchers when conditions warrant.
To analyze telemetry streams from IoT devices, such as truck engines or refrigeration units, algorithms for real-time analytics must be able to identify unusual changes that deviate from normal patterns and indicate the need for intervention to avoid expensive, unscheduled downtime. While analytics code can be manually crafted in popular programming languages like Java and C#, creating algorithms that uncover emerging issues hidden within a stream of IoT telemetry can be daunting or, at a minimum, complex to devise. In many cases, the algorithm itself may be unknown because the underlying processes that lead to anomalies and, ultimately, device failures are not well understood. The same challenge applies to telemetry that reflects human behavior, such as tracking lateral accelerations of a vehicle to detect a fatigued or erratic driver.
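To make the challenge concrete, here is a minimal sketch of the kind of hand-crafted analytics the article describes: a rolling z-score check on one telemetry parameter. All names, data values, and thresholds are illustrative, not from the article; the point is that even this simple rule requires hand-tuned windows and thresholds, which is exactly what becomes daunting for poorly understood failure modes.

```python
# Hand-crafted anomaly check for one telemetry parameter: flag readings
# more than three standard deviations from a rolling baseline.
from collections import deque
import statistics

class ZScoreDetector:
    def __init__(self, window=100, threshold=3.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def is_anomalous(self, value):
        if len(self.readings) >= 10:
            mean = statistics.fmean(self.readings)
            stdev = statistics.pstdev(self.readings)
            anomalous = stdev > 0 and abs(value - mean) / stdev > self.threshold
        else:
            anomalous = False  # not enough history to judge yet
        self.readings.append(value)
        return anomalous

# Illustrative refrigeration temperatures: steady near 4 C, then a spike.
detector = ZScoreDetector()
for temp in [4.0, 4.1, 3.9, 4.0, 4.2, 4.1, 4.0, 3.9, 4.1, 4.0, 12.5]:
    alert = detector.is_anomalous(temp)
print(alert)  # True: the final spike stands out against the rolling baseline
```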
In cases like these, the fast-maturing science of machine learning (ML) can come to the rescue. Instead of trying to devise code that analyzes complex, poorly understood fluctuations in telemetry, application developers can instead train an ML algorithm to recognize abnormal patterns by feeding it thousands of historic telemetry messages that have been classified as normal or abnormal. After training and testing, the ML algorithm can then be put to work monitoring incoming telemetry and alerting personnel when it observes suspected abnormal behavior. No manual analytics coding is required.
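The training step described above can be sketched with a deliberately tiny stand-in for a real ML model: a one-feature decision stump that learns a classification threshold from historical messages labeled normal or abnormal. The data, labels, and function names are illustrative assumptions; a production system would train a richer model on thousands of messages, but the workflow (label, train, then classify new telemetry) is the same.

```python
# Learn a decision threshold from labeled historical telemetry readings.
def train_stump(samples):
    """samples: list of (reading, label), label is 'normal' or 'abnormal'.
    Picks the threshold that best separates the two classes."""
    best_threshold, best_correct = None, -1
    for t in sorted(r for r, _ in samples):
        correct = sum((r > t) == (label == "abnormal") for r, label in samples)
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

# Illustrative labeled history: cargo temperatures in degrees C.
history = [(3.9, "normal"), (4.1, "normal"), (4.3, "normal"),
           (8.7, "abnormal"), (9.5, "abnormal"), (4.0, "normal")]
threshold = train_stump(history)

def classify(reading):
    """Monitor incoming telemetry with the trained model."""
    return "abnormal" if reading > threshold else "normal"

print(classify(10.2))  # abnormal: the learned threshold flags warm readings
```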
The ML algorithm needs to run independently for each data source, examining incoming telemetry within milliseconds after it arrives and then logging events or alerting personnel when required. A new type of digital twin called a “real-time digital twin” provides a powerful way to accomplish this. Running as a software component within an in-memory computing platform, each real-time digital twin processes telemetry from a unique physical data source, such as a truck within a fleet (or any IoT device). Instead of modeling behavior as a PLM digital twin would, it can host an ML algorithm or other real-time analytics algorithm and can generate alerts when it detects anomalies needing attention.
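The structure of a real-time digital twin can be sketched as a class holding per-device state plus a pluggable analytics algorithm invoked on each message. The class and method names here are hypothetical, not a specific platform's API; real in-memory computing platforms expose an equivalent message-handler model.

```python
# One twin instance per physical data source, created and invoked by the
# hosting platform as telemetry arrives.
class RealTimeDigitalTwin:
    def __init__(self, device_id, detector):
        self.device_id = device_id
        self.detector = detector  # ML or hand-coded analytics algorithm
        self.alerts = []          # per-device state retained between messages

    def process_message(self, telemetry):
        """Called by the platform for each incoming telemetry message."""
        if self.detector(telemetry):
            self.alerts.append(telemetry)
            return f"ALERT {self.device_id}: {telemetry}"
        return None

# Trivial stand-in detector: flag cargo temperature above 8 degrees C.
twin = RealTimeDigitalTwin("truck-17", lambda t: t["cargo_temp_c"] > 8)
normal = twin.process_message({"cargo_temp_c": 4.2})   # returns None
alert = twin.process_message({"cargo_temp_c": 11.0})   # returns alert string
print(alert)
```

Because the detector is passed in as a callable, the same twin shell can host a trained ML model or hand-written rules without changing the platform-facing interface.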
In this way, digital twins can harness the power of ML to provide predictive analytics that automates the discovery of anomalies that are otherwise difficult for humans to detect. Once an ML algorithm has been trained on historic data, it can be deployed to run independently in each digital twin. Real-time digital twins examine incoming telemetry immediately after it arrives. This technique provides much faster responses than would be possible using offline, big data analytics hosted in a data lake.
Thousands of real-time digital twins run together to track incoming telemetry from their data sources and enable highly granular real-time analysis that assists in timely decision making. The in-memory computing platform routes incoming telemetry to its corresponding real-time digital twin and processes it within a few milliseconds. Each digital twin independently runs its own ML algorithm to spot anomalies from a given data source. By mapping all digital twins to a cluster of physical or virtual servers, the in-memory computing platform can ensure real-time results at scale.
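One common way a platform can route each message to its twin and spread twins across a server cluster is consistent assignment by hashing the device ID. This partitioning sketch is illustrative, not a specific product's implementation; the twin state here is reduced to a message count where real analytics would run.

```python
# Route telemetry to per-device twins, pinning each twin to a server
# chosen by hashing its device ID.
import hashlib

NUM_SERVERS = 4
twins = {}  # device_id -> twin state

def server_for(device_id):
    digest = hashlib.sha256(device_id.encode()).hexdigest()
    return int(digest, 16) % NUM_SERVERS

def route(device_id, telemetry):
    twin = twins.setdefault(
        device_id, {"server": server_for(device_id), "count": 0}
    )
    twin["count"] += 1  # in a real platform, the twin's analytics run here
    return twin["server"]

# Simulate 1,000 messages from 50 trucks.
for i in range(1000):
    route(f"truck-{i % 50}", {"speed_mph": 60})

# Every twin stays pinned to one server and sees exactly 20 messages.
print(all(t["count"] == 20 for t in twins.values()))
```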
In addition, the platform can continuously aggregate state information from all real-time digital twins and visualize the results to help personnel boost situational awareness. While the digital twins sift through telemetry and generate dynamic information about the state of the data sources, fast in-memory computing creates an aggregate picture that pinpoints strategic issues and trends. For example, the telematics application can quickly spot regions where unusual delays are occurring due to weather conditions or highway blockages, enabling managers to reroute trucks and minimize additional delays.
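The continuous aggregation step can be sketched as a roll-up of per-twin state by region, surfacing the kind of strategic trend described above. The region names, fields, and snapshot data are illustrative assumptions; a real platform would compute this incrementally across thousands of in-memory twins.

```python
# Roll up per-twin state by region to spot where delays are clustering.
from collections import Counter

# Snapshot of per-twin state as the platform might collect it.
twin_states = [
    {"truck": "truck-1", "region": "midwest", "delayed": True},
    {"truck": "truck-2", "region": "midwest", "delayed": True},
    {"truck": "truck-3", "region": "southeast", "delayed": False},
    {"truck": "truck-4", "region": "midwest", "delayed": False},
]

delays_by_region = Counter(s["region"] for s in twin_states if s["delayed"])
worst_region, delay_count = delays_by_region.most_common(1)[0]
print(worst_region, delay_count)  # midwest 2: where managers should reroute
```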
Even as digital twin technology has advanced rapidly over the last several years, the incorporation of real-time analytics with ML into this technology creates important new opportunities for the digital twin concept. With the explosion in telemetry from billions of IoT devices and other data sources, the task of finding small anomalies that can impact the operation of live systems has become more difficult than ever. Because digital twins can encapsulate state information about individual data sources and run analytics code in real-time, they are particularly well suited to meeting this challenge. They can unlock new capabilities and enhance situational awareness that enables fast, informed decision-making and better insights than ever before.
About the Author
Dr. William Bain
CEO and founder of ScaleOut Software