I’m not going to waste much space here writing about the broad, general impact of the Internet of Things. Countless column inches have already been spent detailing the number of devices, the amount of data, and the potential size of the market. We get it: it’s huge, still growing, and transformative.
One of the most interesting facets of the Internet of Things is the complex flows that result from simple events. A single out-of-spec sensor reading can trigger a cascade of machine and human actions. Deviations from known patterns of data might mean nothing, or might signal some kind of catastrophic event. How can we tell the difference? How can we orchestrate a response to conditions signaled by our IoT data streams in a consistent way, whether or not human intervention is required? How can we look at these responses in aggregate and find ways to handle them more efficiently, find opportunities for further automation, or discover exceptions that our current process does not cover?
This is not a new problem. We have been dealing with the challenge of coordinating activities triggered by signals or messages at scale for years. While the source of the messages (high-volume IoT event streams) may be new, the techniques for responding to them are not. We can leverage many of the same tools and patterns that have been used successfully in other spheres to coordinate actions that arise from IoT events. Specifically, we can take advantage of a scalable, high-performance workflow engine to consume the output from IoT devices, decide which messages or message patterns indicate that an action is required, and then execute a process in response. The beauty of this approach is that it gives us a clean separation between the underlying data and the process design, using tools and concepts that are already well understood by business and technical users alike. Our process engine can intelligently interact with other IoT devices using automated workflow tasks (covering many machine-to-machine, or M2M, use cases), tasks that require human intervention (solving for many machine-to-person, or M2P, use cases), or both in the same process. Finally, this approach gives us access to detailed analysis of both in-flight and completed processes without having to reinvent anything. In short, IoT and BPM seem destined to connect and, in some ways, converge.
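To make the idea concrete, here is a minimal sketch of the "decide, then execute a process" step described above. Everything in it is an assumption for illustration: the temperature limit, the sensor reading values, and the `overheatResponse` process key are all hypothetical. In a real deployment the decision would hand off to the workflow engine (in Activiti, for example, by starting a process instance for the matching process definition) rather than just returning a key.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: map incoming sensor readings to a workflow trigger.
// The spec limit and process key below are illustrative assumptions, not
// part of any real device profile or process model.
public class SensorRouter {

    static final double TEMP_LIMIT_C = 75.0; // assumed out-of-spec threshold

    // Decide which process (if any) a reading should start.
    // Returns a process definition key, or null when no action is required.
    static String decide(double tempCelsius) {
        if (tempCelsius > TEMP_LIMIT_C) {
            return "overheatResponse"; // assumed process definition key
        }
        return null; // in-spec reading: no process started
    }

    public static void main(String[] args) {
        double[] readings = {70.2, 71.0, 80.5}; // sample stream
        List<String> triggered = new ArrayList<>();
        for (double reading : readings) {
            String processKey = decide(reading);
            if (processKey != null) {
                // A real implementation would start the process here,
                // e.g. via the engine's runtime API, passing the reading
                // as a process variable.
                triggered.add(processKey);
            }
        }
        System.out.println(triggered); // [overheatResponse]
    }
}
```

The point of the separation is that the routing rule stays trivial while everything that happens next, whether automated M2M steps or human tasks, lives in the process model where business users can see and change it.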
This is just the first in a series of articles. Over the course of the series we’ll explore this idea in more depth, discussing how a number of IoT protocols can play in the BPM world, how we can structure bidirectional communication between our process engine and IoT devices, where we may need new components, and how to use the insights BPM analytics gives us to make sense of trends in our IoT data. When we need to build specific examples, we’ll use Alfresco Activiti. Activiti is well suited for this kind of work: it is lightweight, fast, and scalable, and it comes with analytics baked in. Most importantly, it is open source and easily extensible, which we’ll need to build some of our examples. It’s a perfect fit.
Stay tuned for the next article in this series, “Activiti and IoT, Choosing the Protocol Stack(s)”.