By 2020, the Internet of Things (IoT) market is expected to exceed US$1.29 trillion[1], connecting more than 30.7 billion[2] objects and producing around 500 zettabytes of data[3]. No top manager can ignore how this technology will redefine the product design, manufacturing, sales, purchasing and other business cycles of his or her industry. The difference between good and great leaders, however, will come from their ability to design an organization that not only anticipates these changes but also finds ways to extract value from the massive data streams generated by Machine-to-Machine (M2M) communication. This is where AI solutions can help manage the complexity of the generated data and provide valuable business insights.
Here are a few recommendations that should help top managers create added value for their companies.
Design processes and open systems embracing Big Data
When organizing their business processes and defining the underlying IT infrastructure, managers will need to address two critical system-design questions: what software architecture, and what network topology, should be implemented to maximize the benefits of future IoT data streams? Even if some vendors promote proprietary solutions on data-security grounds, it will become increasingly important for top managers to choose open solutions (e.g., Apache Hadoop). Companies adopting such an approach will not only ensure that new sensors and machines can be added to the network seamlessly, without tedious API hard-coding; they will also reap the benefits of data consistency and upward system compatibility, resulting in productivity gains and cost reductions.
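As an illustration, the plug-and-play benefit of an open approach can be sketched with a vendor-neutral message format. The field names below are hypothetical, not a standard; the point is that any device emitting the common shape joins the pipeline without device-specific API code.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SensorReading:
    """Hypothetical vendor-neutral reading envelope: any new device that
    emits this shape can join the pipeline without per-device hard-coding."""
    device_id: str
    sensor_type: str   # e.g. "temperature", "vibration"
    timestamp: float   # Unix epoch seconds
    value: float
    unit: str

def ingest(raw: str) -> SensorReading:
    """Parse a reading from any device that speaks the common format."""
    return SensorReading(**json.loads(raw))

# A brand-new vibration sensor joins the network with zero custom code:
msg = json.dumps(asdict(SensorReading(
    device_id="press-07", sensor_type="vibration",
    timestamp=1_700_000_000.0, value=0.42, unit="mm/s")))
reading = ingest(msg)
```

The same envelope could just as well travel over MQTT or land in a Hadoop data lake; the openness lies in the agreed format, not in any particular transport.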
Push for a distributed architecture
Selecting such a system strategy is obviously not neutral and has implications for the network topology to be implemented. A centralized architecture has clear benefits: centralized systems are easier to create, implement and even maintain, since everything is managed in one place, although that one place is also a single point of failure. An IoT-centric system, however, requires a more distributed approach, which brings its own advantages and drawbacks. Distributed networks are highly scalable, extremely stable (any harm is contained to the problematic node) and can evolve quickly. However, they require more maintenance and are more complex to implement, as details such as which resources and data are to be shared and communicated must be precisely defined. This definition is especially crucial if companies are to benefit fully from M2M data streams.
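To show how precisely the shared surface must be defined, here is a minimal sketch of per-node manifests; the node names and fields are hypothetical, but the idea is that each node declares exactly what it publishes and consumes, so wiring errors can be caught mechanically.

```python
# Hypothetical per-node manifests: each node declares which data streams
# it publishes and which it consumes, making the shared surface of the
# distributed system explicit rather than implicit.
NODE_MANIFESTS = {
    "camera-12": {"publishes": ["motion_events"], "consumes": []},
    "nvr-1":     {"publishes": ["incident_reports"],
                  "consumes": ["motion_events"]},
}

def validate_links(manifests: dict) -> list:
    """Return consumed streams that no node publishes (i.e., wiring errors)."""
    published = {s for m in manifests.values() for s in m["publishes"]}
    return [s for m in manifests.values()
            for s in m["consumes"] if s not in published]

errors = validate_links(NODE_MANIFESTS)  # an empty list means every link is defined
```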
Manage your systems’ intelligence
Unlike in consumer markets, where the mere fact of being IoT-connected can generate extra sales, the implementation of IoT or M2M technologies within a B2B environment must add intelligence to the system. This can only be achieved by enabling faster and better decisions, and by providing a coherent data stream that can be harvested. Computing strategies that increase the speed and quality of decisions, and processes that detect patterns in Big Data, are not necessarily aligned. Indeed, pushing intelligence to the lowest level, that is, into the components, only makes sense if decisions can actually be taken at that level, without precluding the consolidation of information into a meaningful output.
Pushing intelligence to the edge can be tricky, but applied well it can generate the highest productivity gains.
Build an edge computing strategy
A few years ago, I contributed to a collaborative book on CCTV[4] in metro environments, in which I argued that the best security systems result from pushing intelligence as close as possible to the data source being monitored. I believed then, and still do, that integrating data processing within the cameras themselves, to account for local conditions (e.g., changes in metro lighting or vibrations), is much better than putting all the data analytics in the Network Video Recorder (NVR). Such an edge computing strategy (enabled by a distributed architecture) splits the data processing between two or more network levels, according to where it is most meaningful. In this case, video analytics could run in parallel streams at both the camera and NVR levels. Such an approach does, however, introduce the critical issue of time scale, a problem that is not always easy to manage. This is especially true with distributed storage across quasi-independent systems, which have developed their own decision cycles and different data storage frequencies.
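A minimal sketch of such a two-level split follows; the class names are hypothetical and a deliberately simple brightness analytic stands in for real video analytics. The edge level adapts to slow lighting drift and forwards only events, while the central level merely correlates what it receives.

```python
from collections import deque

class CameraEdge:
    """Edge level: runs a cheap analytic on every frame, adapting its
    baseline to slow lighting drift, and forwards only events upstream."""
    def __init__(self, window: int = 50):
        self.baseline = deque(maxlen=window)  # recent brightness levels

    def process_frame(self, brightness: float):
        avg = sum(self.baseline) / len(self.baseline) if self.baseline else brightness
        self.baseline.append(brightness)
        # Flag a frame only if it departs sharply from the local baseline,
        # so gradual lighting changes in the metro do not raise alarms.
        if abs(brightness - avg) > 30:
            return {"type": "motion_event", "delta": brightness - avg}
        return None

class NVR:
    """Central level: collects events from many cameras on a slower cycle."""
    def __init__(self):
        self.events = []
    def receive(self, event):
        if event is not None:
            self.events.append(event)

cam, nvr = CameraEdge(), NVR()
for b in [100, 101, 99, 100, 160, 100]:  # one abrupt spike at 160
    nvr.receive(cam.process_frame(b))
```

Only the spike crosses the network; the steady frames are absorbed at the edge, which is precisely the bandwidth and decision-latency benefit the strategy aims for.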
Consider Complex Event Processing (CEP) technologies
To account for the various events that can happen concomitantly, system designers must introduce what is known as Complex Event Processing (CEP, often paired with automated rule generation), a technology that analyzes and consolidates these different events, producing decisions and potentially triggering further events. CEP also accounts for a hierarchy of events (e.g., a security breach is judged a more serious event and must be dealt with first). CEP thus introduces prioritization, enabling the production of higher-level events from lower levels of abstraction. Other key advantages of CEP are that it can automatically identify rare but important relationships between seemingly unrelated streams of events and speed up timely responses. It also reduces operating costs by controlling the system's end-to-end performance and helps fine-tune business processes.
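To make the consolidation and prioritization concrete, here is a hedged sketch of a single hypothetical CEP rule; the event names, time window and priority values are invented for illustration, not taken from any CEP product.

```python
# Hypothetical rule: if a "door_forced" and a "camera_offline" event occur
# within 10 seconds of each other, consolidate them into a single
# higher-level "security_breach" event that outranks both.
PRIORITY = {"security_breach": 0, "door_forced": 1, "camera_offline": 2}

def correlate(events: list, window: float = 10.0) -> list:
    """Consolidate low-level events into higher-level ones, then order by priority."""
    out = list(events)
    doors = [e for e in events if e["type"] == "door_forced"]
    cams = [e for e in events if e["type"] == "camera_offline"]
    for d in doors:
        for c in cams:
            if abs(d["t"] - c["t"]) <= window:
                out.append({"type": "security_breach",
                            "t": max(d["t"], c["t"]), "parts": [d, c]})
    return sorted(out, key=lambda e: PRIORITY[e["type"]])

stream = [{"type": "camera_offline", "t": 100.0},
          {"type": "door_forced", "t": 104.0}]
decisions = correlate(stream)  # the consolidated breach is now first in the queue
```

Production CEP engines express such rules declaratively over sliding windows rather than in hand-written loops, but the principle, lower-level events feeding prioritized higher-level ones, is the same.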
However, there are three main problems with CEP.
- Computer scientists can create user-defined rules only when the complexity remains reasonable, particularly regarding how the information to be processed is split between network levels.
- The identifiable signs that suggest meaningful events often consist of low-level events hidden in irregular temporal patterns (from sub-seconds to months).
- CEP cannot predict future events or calculate the system parameter values for future events.
This is where the properties of AI can be useful.
Use AI to optimize the M2M and IoT data flow
More specifically, two AI technologies are well adapted to CEP: Artificial Neural Networks (ANNs) and Deep Learning techniques. ANNs are computational models loosely inspired by the human brain, which is made of billions of neurons interconnected by synapses. Several of their key features can handle the complex modeling required by IoT applications, that is, the Big Data generated by sensors exhibiting complex (temporal) patterns. Moreover, ANNs excel at cross-modality learning, a characteristic well matched to the multiple data modalities found in IoT.
By adding Deep Learning technologies to ANNs, we can add as many hidden layers as needed, providing the mechanism to train the network in a supervised or unsupervised fashion. Deep Learning running on an ANN therefore improves the system by providing the capacity to learn from the past. Furthermore, as training improves pattern identification, in the long run it enables the prediction of future events and their parameter values.
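As a minimal illustration of these ideas, the following sketch trains a small ANN with one hidden layer by backpropagation on the XOR pattern, which no network without a hidden layer can learn; the architecture and hyperparameters are arbitrary choices, not a recommendation.

```python
import numpy as np

# Tiny feed-forward ANN (2 inputs, 8 hidden units, 1 output) trained by
# backpropagation on XOR, the classic pattern a hidden layer makes learnable.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)        # hidden layer
    return h, sigmoid(h @ W2 + b2)  # output layer

_, out0 = forward(X)
initial_loss = float(np.mean((out0 - y) ** 2))

lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    # Backpropagate the mean-squared error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0, keepdims=True)

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))  # loss drops as the net learns the pattern
```

Stacking more hidden layers, the "deep" in Deep Learning, follows the same mechanics; what changes is the network's capacity to extract the kind of layered temporal patterns an IoT sensor stream produces.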
All the cognitive computing techniques and technologies I have just described will force companies and their system engineers to design by learning, rather than by defining complex systems and their interactions a priori. Indeed, using CEP and ANNs with technologies such as Deep Learning will enable system tweaking during unsupervised learning. Most likely, this means that intelligence will first appear in smaller systems before being extended to others and, finally, to the complete system. As a consequence, the more the design emerges from experience, that is, from the incoming data, the more important the definition of what the final output should be becomes. This is where great managers can play a key role.
[1] IDC spending guide covering the 2015-2020 forecast period.
[2] IHS 2016 forecasts.
[3] Cisco.
[4] T. Kritzer, S. Van Themsche et al., "CCTV: A Tool to Support Public Transport Security," 2010.