A smart building in Kawasaki, Japan, called the Smart Community Center has 35,000 sensor devices in it, making it perhaps the premier example today of the Industrial Internet of Things (IIoT) in action. Announced in late 2016, a partnership of Dell EMC and Toshiba is developing a testbed to make sense of the data from the many sensors. The Industrial Internet Consortium (IIC), a membership program dedicated to accelerating the IIoT, approved the testbed, its first deep learning platform.
Smart buildings aim to lower the cost of maintenance and operation, and to keep tenants happier with fine-tuned heating and lighting, for instance. The Smart Community Center generates 300 million data points per day, said Richard Soley, chairman and CEO of the IIC and the Object Management Group, in an interview with AI Trends. “Working with that much data is a big deal,” he said.
The system could potentially predict the failure of a key component before it happens, based on the maintenance history now available to it. Replacing a weak component before it fails lessens disruption and holds down overall costs.
Dell EMC is putting substantial effort into the deep learning testbed for use in the Smart Community Center. “The testbed is an enabler for the industry,” according to Said Tabet, Technology Lead, IoT Strategy for Dell EMC. “The test beds allow for better understanding end-to-end, enabling better business models and use cases.”
Sensors in the Smart Community Center are clustered in areas related to maintenance and energy consumption, including the heating and cooling systems. “Our experience is in learning from big data,” Tabet said. “Many systems are not yet ready to handle big data. So they are learning.” The real-time IIoT testbed system under development leverages deep learning for that.
Soley once worked for Symbolics, which in 1981 sold Lisp processors for $100,000 each; that was the price of access to AI then. Now racks of 200 processors have 100,000 times the processing power of that Symbolics hardware, and cost far less. GPU chips have helped enable this boost in the processing power needed for AI.
So with this incredible increase in power, what is the gating factor today? “Parallelizing the algorithms is the gating factor today,” Tabet said. Data collection may be happening at the edge while the inference engine runs somewhere else, resulting in latency. Real-time systems may not have time to send information to the cloud and wait for it to come back before an action is required.
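As a back-of-the-envelope illustration of that constraint (all of the numbers below are assumptions for the sketch, not figures from the talk), a simple latency budget shows why a tight deadline can rule out the cloud round trip:

```python
# Back-of-the-envelope latency budget; every number here is assumed.
DEADLINE_MS = 50           # assumed: the system must act within 50 ms
LOCAL_INFERENCE_MS = 5     # assumed: on-device model latency
CLOUD_ROUND_TRIP_MS = 100  # assumed: network round trip alone

options = {
    "edge": LOCAL_INFERENCE_MS,
    "cloud": CLOUD_ROUND_TRIP_MS + LOCAL_INFERENCE_MS,
}
for name, latency_ms in options.items():
    verdict = "meets" if latency_ms <= DEADLINE_MS else "misses"
    print(f"{name}: {latency_ms} ms {verdict} the {DEADLINE_MS} ms deadline")
```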
Finding qualified workers who can combine knowledge of deep learning and machine learning is another challenge. “There is not enough expertise out there right now,” Soley said. Dell EMC is doing in-house training for education and innovation in AI, Tabet said.
AI, Machine Learning and Real-Time IoT
The latency issue was also cited by Michael Alperin, an industry consultant with the data science team at Tibco, which provides analytics and event-processing software, during a panel at AI World on AI, Machine Learning and Real-Time IoT.
“In practice, real-time means insights derived from data are needed at the moment they are most useful. The exact requirement depends on the use and the data update frequency,” Alperin said. For maintenance, equipment sensor data can be combined with records of past machine failures. “Then you can intervene before the machine goes down,” in theory, he said.
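As a rough sketch of that pattern in Python, and not Tibco’s actual pipeline: the file names, column names, and 24-hour warning window below are all assumptions for illustration.

```python
# Hypothetical sketch: join sensor readings with a failure log, label
# each reading by whether a failure followed within a warning window,
# and train a classifier to flag at-risk machines.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

WARN_WINDOW = pd.Timedelta(hours=24)  # assumed warning horizon

# Assumed schemas: periodic sensor readings and a log of past failures.
readings = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])
failures = pd.read_csv("failure_log.csv", parse_dates=["failed_at"])

def fails_soon(row) -> int:
    """1 if this machine failed within WARN_WINDOW after this reading."""
    deltas = failures.loc[failures["machine_id"] == row["machine_id"],
                          "failed_at"] - row["timestamp"]
    return int(((deltas > pd.Timedelta(0)) & (deltas <= WARN_WINDOW)).any())

readings["fails_soon"] = readings.apply(fails_soon, axis=1)

features = ["temperature", "vibration", "pressure"]  # assumed columns
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(readings[features], readings["fails_soon"])

# A high predicted probability on a fresh reading is the trigger for
# intervening before the machine actually goes down.
risk = model.predict_proba(readings[features].tail(1))[0, 1]
print(f"estimated probability of failure within 24h: {risk:.2f}")
```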
The goal in many factories is to pull all the sensor data together to get a coherent big picture. Companies seek “the ability to take all of the data being generated in a factory and predict the final product quality. That’s what we see people doing today with supervised machine learning,” Alperin said.
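A minimal sketch of that kind of supervised quality model, again with assumed file and column names, might keep one row per production batch with pooled sensor summaries as features and the measured final quality as the target:

```python
# Hypothetical sketch: supervised regression from per-batch sensor
# summaries to the quality grade measured after the fact.
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

batches = pd.read_csv("batch_history.csv")         # assumed file
X = batches.drop(columns=["batch_id", "quality"])  # assumed columns
y = batches["quality"]

# A regularized linear model is a common first baseline before trying
# anything more complex.
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")

# Once fitted, the model can score in-progress batches from live data.
model.fit(X, y)
```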
David Maher, EVP and CTO of Intertrust Technologies Corp., a software technology company specializing in trusted distributed computing, is helping to process signals from offshore wind farms. They use predictive modeling to help with maintenance and to manage power distribution. “Most past models are obsolete; we need AI to help match supply and demand today,” Maher said. “It’s very sophisticated. We have solar, geothermal and wind power all combined.”
AI Chip Architecture at the Edge
The drive to put compute power at the edge is placing a burden on smaller processors, which is driving evolution in chip design. Much of today’s AI happens in the cloud; however, “Edge computing is changing to put the AI right in the processor on the edge,” said Dr. Shriram Ramanathan, senior analyst, Lux Research, moderator of an AI World panel on Evolution of AI Chip Architecture at the Edge.
Semiconductor Energy Laboratory Co., Ltd. (SEL) of Japan is in an interesting position, having designed a chip that consumes less power and generates less heat, which the company says positions it well for edge computing. “Our company deals with material science and enabling low-power devices,” said Shinji Hayakawa, in technical services with SEL. “The raw volume of data used for AI is enormous. As we send more processing to the edge, we think more people will need edge computing capability.”
Oskar Mencer is the founder of Maxeler Technologies, which offers a dataflow approach to processing, said to result in dramatic performance improvements. Mencer said he founded the company to give the industry an alternative to microprocessors. “With AI, we have an opportunity,” he said. “We have new chip architectures, and we will probably have to change all of computer science” to implement properly on them, he suggested.
Jeff Burns, director, systems architecture and design for IBM Research, said IBM has a strong emphasis on AI going forward. “When we talk about AI, getting more function in smaller form factors is a clear long-term trend,” Burns said.
The latency issues are driving innovations in edge computing, suggested Dinaker Munagala, CEO and Founder of ThinCI, a company working on deep learning and vision processing. “It’s not possible to get all the data we need out to the cloud,” he said. “Latency and bandwidth are issues in real-time systems.”
Mencer of Maxeler said, “The cloud is a great prototyping environment. We can change the software running on the cloud every hour if we want to. As we stabilize what the device needs to do, it makes no sense to send it to the cloud. It makes sense to do the processing locally.”
New chip designs “will help us push the computing industry forward,” Mencer suggested. The more data being generated, the greater the case for edge computing. “We will see more technologies deployed on the edge, which will be great for innovation,” he said.
Dr. Ramanathan asked the panel which analytics are appropriate to run at the edge, and which in the cloud.
Mencer said, “It’s not hard.” A high volume of data comes off the sensor, so a data reduction step is needed: you need to go from 1 TB to something you can send to the cloud, based on what the purpose is. “It’s about figuring out the use cases,” he said.
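A minimal sketch of such a reduction step, with an assumed sampling rate, might summarize each window of raw samples into a handful of statistics and send only those upstream:

```python
# Hypothetical sketch: collapse a high-rate raw sensor stream into
# compact per-window summaries before anything leaves the device.
import numpy as np

def reduce_window(samples: np.ndarray) -> dict:
    """Summarize one window of raw samples into a few statistics."""
    return {
        "mean": float(samples.mean()),
        "std": float(samples.std()),
        "min": float(samples.min()),
        "max": float(samples.max()),
    }

# Simulated raw stream: at an assumed 10 kHz sampling rate, summarizing
# once per second turns 10,000 readings into 4 numbers per window.
rng = np.random.default_rng(seed=0)
raw_second = rng.normal(loc=20.0, scale=0.5, size=10_000)
payload = reduce_window(raw_second)
print(payload)  # only this small payload is sent to the cloud
```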
Munagala of ThinCI suggested “purpose-built hardware at the edge” will be a fit for certain AI applications. Hayakawa of SEL said, “The data processing and memory need to come together for AI processing; the current model where data and processing are divided might not be sustainable.”
Mencer said software written over the last 50 years was written with little regard for hardware efficiency. “But chips are cool again, as was said here. Making your own hardware is acceptable now.”
- By John P. Desmond, from the 2017 AI World Conference in Boston