
Deep Learning, IoT Sensor Data…and Bats!

At the very center of Internet of Things excitement is the sensor. Not just one sensor, mind you, but a sensor that would normally send its data stream off to who knows where and that now has access to the information from another sensor measuring something completely different. Now imagine your entire office building awash with dozens, hundreds, or even thousands of light, temperature, humidity, water, motion, image, and other sensors. That is a staggering cornucopia of data pulsating across the network at any one time.

So what do we do with all that data? Several articles written on IoT from the CIO’s perspective tend to focus on the headache of attempting to store it all. In other words, it’s viewed as a Bigger Data problem. I think this line of thinking suffers from short-term memory loss about why someone would connect hundreds of sensors together in the first place. All those sensors should be interacting to solve a problem. For data geeks like me, all that data makes me smile, but Billy the Building Manager just wants a product that makes his life easier. So how do we move beyond ex post facto statistical analysis to real-time processing, decision making, and visualization that helps Billy reduce his energy bills?

A good place to start our hunt is computational neuroscience. Researchers in this field want to know how the brain functions, so they mix mathematics with neurobiology, psychology, cognitive science and computer science to produce models of how neural inputs lead to neural activity and, in some cases, produce an external behavior. Google has been talking a lot about this field with its big bet on deep learning being the future of machine learning and AI. Another popular term to look out for is the recurrent neural network, which basically points out the importance of feedback within a network of biological neurons. The act of local competition and feedback among neurons is what makes neural activity so complex and powerful.
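
To make that feedback idea concrete, here is a minimal sketch of a vanilla recurrent update in Python (NumPy only). The layer sizes, weights, and the fake sensor inputs are all hypothetical, chosen just for illustration; the point is simply that the hidden state loops back into the next step, so the network’s activity carries a memory of everything it has seen.

```python
import numpy as np

# A minimal sketch of recurrence: the hidden state h feeds back into the
# next step's computation, so past inputs influence present activity.
# All sizes and weights here are hypothetical, for illustration only.

rng = np.random.default_rng(0)

input_size, hidden_size = 4, 8                          # e.g., 4 sensor channels per time step
W_in = rng.normal(0, 0.1, (hidden_size, input_size))    # input -> hidden
W_rec = rng.normal(0, 0.1, (hidden_size, hidden_size))  # hidden -> hidden (the feedback loop)

def rnn_step(x_t, h_prev):
    """One step of a vanilla (Elman-style) recurrent update."""
    return np.tanh(W_in @ x_t + W_rec @ h_prev)

# Run a short sequence of fake sensor readings through the loop.
h = np.zeros(hidden_size)
for t in range(10):
    x_t = rng.normal(size=input_size)   # stand-in for one time step of sensor data
    h = rnn_step(x_t, h)                # feedback: h depends on the whole history

print(h)
```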

This may all sound well and good for processing visual, auditory, motor, or touch inputs, but the human brain has no carbon monoxide input processing system! The brain comes with processing machinery for its biological senses, so what about nonbiological inputs like carbon monoxide detection? This is where IoT sensor processing and information discovery can get creepy and/or interesting. Some of the basic principles developed in deep learning over the past several decades are the concepts of unsupervised learning, hierarchical neural networks, and recurrence. There are certainly genetic differences across brain regions, but it’s stunning how similar the fundamental mechanisms of neural computation are across the neocortex. If this is true (and there are those who take issue with it), then we can begin constructing models for nonbiological sensors in a fashion inspired by the mathematical models of computational neuroscience.
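
As one hypothetical illustration of what that might look like for a nonbiological input, here is a sketch of unsupervised, hierarchical feature learning on a made-up carbon monoxide stream: a tiny autoencoder layer learns features over short windows of the raw signal, and stacking further layers on its codes is what would extend the hierarchy. The data, layer sizes, and training settings are all invented for the example.

```python
import numpy as np

# A hypothetical sketch of unsupervised feature learning on a nonbiological
# sensor stream -- here, a fake carbon monoxide (CO) reading. One autoencoder
# layer is trained on short windows of the raw signal; stacking more layers
# on top of its codes is what gives the hierarchy.

rng = np.random.default_rng(0)

# Fake CO sensor stream: slow drift plus noise (purely illustrative data).
t = np.arange(2000)
co_ppm = 5 + 2 * np.sin(t / 50.0) + rng.normal(0, 0.3, t.size)

# Slice the stream into overlapping windows -- the "receptive fields" of layer 1.
window = 20
X = np.stack([co_ppm[i:i + window] for i in range(0, co_ppm.size - window, 5)])
X = (X - X.mean()) / X.std()

# One autoencoder layer: encode to a small code, decode back, minimize error.
code_size, lr = 5, 0.01
W_enc = rng.normal(0, 0.1, (code_size, window))
W_dec = rng.normal(0, 0.1, (window, code_size))

for epoch in range(200):
    H = np.tanh(X @ W_enc.T)          # codes (the learned features)
    X_hat = H @ W_dec.T               # reconstruction of the input windows
    err = X_hat - X
    # Plain batch gradient descent on the squared reconstruction error.
    grad_dec = err.T @ H / len(X)
    grad_enc = ((err @ W_dec) * (1 - H ** 2)).T @ X / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("reconstruction MSE:", float((err ** 2).mean()))
```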

Thankfully we don’t need to start from scratch. Take the bat, for example: the neural processing of target distance in echolocating bats is a great starting point for coding up sonar sensor processing. Once you have a number of sensors processing inputs in their own hierarchical deep learning networks, you then have to learn higher-level features when you shake it all up together. Enter multimodal deep learning. Hierarchical processing has shown great promise in image and language processing, yet the hurdles pile up exponentially when you start talking about multimodal processing. This is the wild, wild west of AI. And once we begin to discover which combinations of sensors produce interesting patterns and useful outputs, the realm of IoT sensor data processing will be poised to take a giant step forward. A rough sketch of that fusion idea follows below.
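
Here is a hypothetical sketch of that kind of multimodal fusion: each modality (a sonar echo, a window of CO readings) gets its own small feature hierarchy, and a shared layer sits on top of the concatenated features. The weights are random and untrained, and the sensor inputs are fake; only the shape of the architecture is the point.

```python
import numpy as np

# A hypothetical sketch of multimodal fusion: each sensor modality gets its own
# small feature hierarchy, and a shared layer learns joint features on top.
# Weights are random and untrained -- this shows the architecture, not a result.

rng = np.random.default_rng(0)

def encoder(x, sizes):
    """A tiny per-modality hierarchy: a stack of random tanh layers."""
    h = x
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.normal(0, 0.3, (n_out, n_in))
        h = np.tanh(W @ h)
    return h

# Fake readings from two very different sensors (illustrative only).
sonar_echo = rng.normal(size=64)      # e.g., samples from an ultrasonic ranger
co_window = rng.normal(size=20)       # e.g., a window of CO readings

# Modality-specific hierarchies...
sonar_feat = encoder(sonar_echo, [64, 32, 8])
co_feat = encoder(co_window, [20, 10, 8])

# ...then a shared "multimodal" layer over the concatenated features.
joint_in = np.concatenate([sonar_feat, co_feat])
W_joint = rng.normal(0, 0.3, (6, joint_in.size))
joint_feat = np.tanh(W_joint @ joint_in)

print("joint multimodal features:", np.round(joint_feat, 3))
```

The design choice worth noticing is that fusion happens on learned per-modality features rather than on raw readings, which is what lets very different signals (an acoustic echo and a gas concentration) meet in a common representation.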

Have a question? Reach out to Sean on Twitter; this article was originally posted at seanlorenz.com.
