By Bill Schmarzo, CTO, Big Data Practice of EMC Global Services
What is the Intelligence Revolution's equivalent of the ¼” bolt?
I asked this question in the blog “How History Can Prepare Us for Upcoming AI Revolution?” when trying to understand what history can teach us about technology-induced revolutions. One of the key advances of both the Industrial and Information revolutions was the transition from labor-intensive, hand-crafted solutions to mass-manufactured ones. In the Information Revolution, it was the creation of standardized database management systems, middleware and operating systems. In the Industrial Revolution, it was the creation of standardized parts – like the ¼” bolt – that could be used to assemble, rather than hand-craft, solutions. So, what is the ¼” bolt equivalent for the AI Revolution? I think the answer is analytic engines, or modules!
Analytic Modules are pre-built engines – think Lego blocks – that can be assembled to create specific business and operational applications. These Analytic Modules would have the following characteristics (a code sketch of such a module follows the list):
- pre-defined data input definitions and a data dictionary (so the module knows what type of data it is ingesting, regardless of the source system).
- pre-defined data integration and transformation algorithms to cleanse, align and normalize the data.
- pre-defined data enrichment algorithms to create higher-order metrics (e.g., reach, frequency, recency, indices, scores) necessitated by the analytic model.
- algorithmic models (built using advanced analytics such as predictive analytics, machine learning or deep learning) that take the transformed and enriched data and generate the desired outputs.
- a layer of abstraction (perhaps using the Predictive Model Markup Language, or PMML[1]) above the predictive analytics, machine learning and deep learning frameworks that allows application developers to pick their preferred or company-mandated standards.
- an orchestration capability to “call” the most appropriate machine learning or deep learning framework based upon the type of problem being addressed. See Keras, a high-level neural networks API written in Python that is capable of running on top of popular machine learning frameworks such as TensorFlow, CNTK or Theano.
- pre-defined outputs (APIs) that feed the analytic results to downstream operational systems (e.g., operational dashboards, manufacturing, procurement, marketing, sales, support, services, finance).
In short, Analytic Modules produce pre-defined analytic results or outcomes while providing a layer of abstraction that enables the orchestration and optimization of the underlying machine learning and deep learning frameworks.
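To make this concrete, here is a minimal sketch of what an Analytic Module interface might look like, assuming a Python stack with pandas and Keras; the class names, method names and `FailureRiskModule` example are hypothetical illustrations, not an established standard.

```python
# Hypothetical sketch of an Analytic Module interface; names are illustrative.
from abc import ABC, abstractmethod

import numpy as np
import pandas as pd
from tensorflow import keras


class AnalyticModule(ABC):
    """A self-contained, reusable analytic 'Lego block'."""

    # Pre-defined data input definition: required columns and their types.
    input_schema: dict = {}

    def run(self, raw: pd.DataFrame) -> pd.DataFrame:
        """Execute the module end to end: validate, transform, enrich, score."""
        self._validate(raw)
        clean = self.transform(raw)    # cleanse, align and normalize
        enriched = self.enrich(clean)  # higher-order metrics (indices, scores)
        return self.score(enriched)    # run the algorithmic model

    def _validate(self, df: pd.DataFrame) -> None:
        missing = set(self.input_schema) - set(df.columns)
        if missing:
            raise ValueError(f"Missing required inputs: {missing}")

    @abstractmethod
    def transform(self, df: pd.DataFrame) -> pd.DataFrame: ...

    @abstractmethod
    def enrich(self, df: pd.DataFrame) -> pd.DataFrame: ...

    @abstractmethod
    def score(self, df: pd.DataFrame) -> pd.DataFrame: ...


class FailureRiskModule(AnalyticModule):
    """Hypothetical at-risk component failure scorer backed by a Keras model."""

    input_schema = {"temperature": float, "vibration": float, "age_days": float}

    def __init__(self, model_path: str):
        # The underlying framework (TensorFlow/Keras here) sits behind the
        # module interface, so application developers never touch it directly.
        self.model = keras.models.load_model(model_path)

    def transform(self, df: pd.DataFrame) -> pd.DataFrame:
        return df.dropna(subset=list(self.input_schema))

    def enrich(self, df: pd.DataFrame) -> pd.DataFrame:
        df = df.copy()
        df["age_index"] = df["age_days"] / df["age_days"].max()
        return df

    def score(self, df: pd.DataFrame) -> pd.DataFrame:
        features = df[["temperature", "vibration", "age_index"]].to_numpy(np.float32)
        df = df.copy()
        df["failure_risk"] = self.model.predict(features).ravel()
        return df  # pre-defined output, ready to feed a downstream API
```

The point of the abstraction layer is that the Keras dependency could be swapped for another framework (or a PMML-backed scorer) without changing how the module is called.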
Monetizing IoT with Analytic Modules
The BCG Insights report titled “Winning in IoT: It’s All About the Business Processes” highlighted the top 10 IoT use cases that will drive IoT spending, including predictive maintenance, self-optimized production, automated inventory management, fleet management, and distributed generation and storage (see Figure 1).
But these IoT applications will be more than just reports and dashboards that monitor what is happening. They’ll be “intelligent” – learning with every interaction to predict what’s likely to happen and prescribe corrective action to prevent costly, undesirable and/or dangerous situations – and they’ll form the foundation for an organization’s self-monitoring, self-diagnosing, self-correcting and self-learning IoT environment.
While this is a very attractive list of IoT applications to target, treating any of these use cases as a single application is a huge mistake. It’s like a return to the big-bang IT projects of the ERP, MRP and CRM days, where tens of millions of dollars were spent in the hope that, two to three years later, something of value would materialize.
Instead, these “intelligent” IoT applications will be composed of analytic modules integrated to address the key business and operational decisions that each application must support. For example, think of predictive maintenance as an assembly of analytic modules addressing decisions such as the following (a composition sketch in code follows the list):
- predicting at-risk component failures.
- optimizing resource scheduling and staffing.
- matching technicians and inventory to the maintenance and repair work to be done.
- ensuring tool and repair equipment availability.
- optimizing first-time-fix rates.
- optimizing parts and MRO inventory.
- predicting component fixability.
- optimizing the logistics of parts, tools and technicians.
- leveraging cohort analysis to improve service and repair predictability.
- leveraging event association analysis to determine how weather, economic conditions and special events impact device and machine maintenance and repair needs.
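As a sketch of what this “assembly” might look like in code (continuing the hypothetical Python interface above, with invented module names), the application simply composes independently built modules:

```python
# Hypothetical assembly of analytic modules into an intelligent application.
import pandas as pd


class PredictiveMaintenanceApp:
    """Composes independent Analytic Modules into one intelligent application."""

    def __init__(self, modules: list):
        self.modules = modules

    def run(self, telemetry: pd.DataFrame) -> pd.DataFrame:
        results = telemetry
        for module in self.modules:
            # Each module consumes the progressively enriched frame and
            # appends its own pre-defined outputs as new columns.
            results = module.run(results)
        return results


# Illustrative usage; TechnicianMatchModule and PartsInventoryModule are
# placeholders for modules addressing the decisions listed above.
# app = PredictiveMaintenanceApp([
#     FailureRiskModule("models/failure_risk.keras"),  # at-risk prediction
#     TechnicianMatchModule(),                         # technician/inventory match
#     PartsInventoryModule(),                          # parts and MRO inventory
# ])
# work_orders = app.run(telemetry_df)
```

Because each module owns its own inputs, transformations and outputs, the same blocks can be recombined for a different intelligent application without rework.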
As I covered in the blog “The Future Is Intelligent Apps,” the only way to create intelligent applications is a methodical approach that starts the predictive maintenance hypothesis development process with the identification, validation, valuation and prioritization of the decisions (or use cases) that comprise these intelligent applications.
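A toy illustration of that prioritization step, with invented decisions and scores, might rank candidate decisions by business value and implementation feasibility:

```python
# Hypothetical decision prioritization; decisions and scores are invented.
candidate_decisions = [
    {"decision": "At-risk component failure prediction", "value": 9, "feasibility": 7},
    {"decision": "Technician and inventory matching",    "value": 7, "feasibility": 8},
    {"decision": "Parts and MRO inventory optimization", "value": 8, "feasibility": 5},
]

# Rank by combined score; the highest-ranked decisions become the first
# analytic modules to build.
ranked = sorted(candidate_decisions,
                key=lambda d: d["value"] * d["feasibility"],
                reverse=True)
for d in ranked:
    print(f'{d["decision"]}: {d["value"] * d["feasibility"]}')
```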
Read the source article in Data Science Central.