In just a decade, artificial intelligence (AI) has gone from hype to a fundamental technological enabler of our online lives. Services such as Google Search, social media feeds, and online ads all crunch vast amounts of data in the cloud using AI algorithms to generate output personalized to each individual user. Dig a little deeper and you’ll find AI’s learning capabilities harnessed across the online world to optimize user experience, business processes, and technological solutions.
Over the same period, AI has found countless applications in connected devices. To see AI in action, look no further than your smartphone, which you can unlock using facial recognition, speak to after activating Siri or Google Assistant using your preferred wake word, or use to take pictures that make those snapped with a standard camera pale in comparison. None of these routine actions would be possible without AI. But rather than running the AI algorithms in the cloud, these use cases run them right on the device, at the so-called edge of the network.
The growing prevalence and maturity of AI and machine learning, a related technology, have led to their democratization. Today, more and more products are incorporating edge AI to improve their service and enable new use cases. While, typically, they build on features that have become standard in smartphones, such as voice and face recognition, these represent only a subset of the use cases that edge AI can improve.
Benefits of edge AI
To understand the benefits of edge AI, it’s useful to understand how it works. Like standard artificial intelligence, edge AI relies on mathematical models that are inspired by the neural architecture of the brain. What makes these neural networks special is that they can be trained to accomplish all kinds of tasks. Expose them to millions of images of traffic lights from the vast trove of pictures available online, for example, and they’ll become masters at recognizing them.
Training the AI algorithms is a computationally intensive process requiring vast amounts of data. The outcome, however, is a compact, capable AI model that can easily be deployed on any number of end devices. Provided the devices have sufficient computational resources to run them, the algorithms can operate without requiring any cloud connectivity.
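This train-once, deploy-everywhere workflow can be sketched in a few lines. The example below is purely illustrative (a single perceptron learning a toy AND function stands in for a real neural network, and none of the names come from any u-blox API): the heavy `train` step would happen centrally, while the compact model it produces runs locally with no network access.

```python
# Minimal sketch of the edge-AI workflow: train a compact model
# centrally, then run inference locally with no cloud connectivity.
# The toy task (a perceptron learning an AND gate) is an illustrative
# stand-in for a real neural network.

def train(samples, labels, epochs=20, lr=0.1):
    """'Cloud-side': fit perceptron weights on labeled data."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b  # the compact "model" shipped to end devices

def infer(model, x1, x2):
    """'Device-side': run the trained model locally, offline."""
    w, b = model
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

data = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
model = train(data, labels)
print([infer(model, a, b) for a, b in data])  # → [0, 0, 0, 1]
```

The key point is the asymmetry: training loops over the dataset many times, while a single inference is just one cheap pass through the model, which is why it fits on constrained edge hardware.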
This independence from the cloud gives edge AI several important benefits:
- Connectivity requirements: Because no voice or image data needs to be uploaded to the cloud, devices running the AI algorithms locally can save bandwidth for other needs or instead rely on low-bandwidth wireless communication technologies. That being said, cloud connectivity can be beneficial to update the AI models over the air when new, improved models become available.
- Latency: Running the AI algorithms directly on the device saves the time required to upload sensed data to the cloud and transfer the output back to the end device. This considerably reduces latency, leading to a smooth, lag-free user experience.
- Privacy: Even the most sophisticated connected devices run the risk of having their data intercepted by hackers, and consumer devices in low to medium price segments are particularly exposed to such threats. Because edge AI devices process all data (voice, image, or other) locally, the data never has to leave the device, protecting it against interception.
Edge AI use cases
There are a variety of edge AI use cases that are gaining traction.
Face recognition is being used for user authentication beyond the smartphone. While commercial access control solutions use facial recognition to ensure that only authorized employees are granted access to restricted locations, security cameras can use it to sound the alarm if they detect strangers entering a building. Similarly, facial recognition can be used to recognize returning customers at gyms, medical clinics, or commercial venues.
At the same time, voice user interfaces are becoming increasingly prevalent. After all, what’s more convenient than being able to speak to your smart device (and be understood)? While voice recognition technology, which can both authenticate the user and process the incoming voice commands, was perfected in smartphones and smart personal assistants, it is now finding applications in cars and smart home devices, as well as to increase accessibility for people who are unable to type due to disabilities.
And in the industrial realm, edge AI can be used to flag anomalous behaviors caused, for instance, when motors show early signs of failure or when rolling bearings begin to wear. In these anomaly-detection use cases, the AI models are trained using a dataset covering normal behavior. By detecting any deviations from the norm, plant operators can receive alerts informing them of potential degradation of machines, allowing them to address the issues before they cause costly downtime.
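The core idea, learn what "normal" looks like and flag deviations, can be illustrated with a deliberately simple statistical stand-in for a trained model. All sensor values, function names, and the threshold below are invented for illustration; a production system would use a learned model rather than a plain z-score.

```python
# Hedged sketch of the anomaly-detection idea: the "model" is just the
# mean and standard deviation of vibration readings recorded during
# normal operation; live readings that deviate too far raise an alert.
import statistics

def train_baseline(normal_readings):
    """Learn what 'normal' looks like from healthy-machine data."""
    return statistics.mean(normal_readings), statistics.stdev(normal_readings)

def is_anomalous(reading, baseline, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mean, stdev = baseline
    return abs(reading - mean) / stdev > threshold

# Vibration amplitudes logged while the motor was known to be healthy.
normal = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.08]
baseline = train_baseline(normal)

print(is_anomalous(1.03, baseline))  # typical reading → False
print(is_anomalous(2.5, baseline))   # early sign of bearing wear → True
```

Note that only normal data is needed for training, which matters in practice: failures are rare, so labeled examples of every possible fault are usually unavailable.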
Implementing edge AI with u-blox
Implementing edge AI use cases in wireless smart devices just got easier and more powerful. Here at u-blox, we just launched the NORA-W10 Wi-Fi 4 and Bluetooth low energy 5.0 module, which is designed to enable and accelerate edge AI applications. In addition to featuring a powerful open CPU for advanced customer applications, the module offers AI support for speech and facial recognition. AI vector instructions for edge neural network inference (8- and 16-bit models) deliver an extra performance boost, considerably speeding up the AI algorithms, reducing perceived latencies, and saving power.
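To see why 8-bit models help on constrained hardware, consider what quantization does: 32-bit float weights are mapped to 8-bit integers, shrinking the model roughly 4x and letting vector instructions process more values per cycle. The sketch below (plain Python, not u-blox firmware; the symmetric-scale scheme and all values are illustrative assumptions) shows the basic idea.

```python
# Illustrative sketch of 8-bit quantization: float weights are mapped
# to int8 via a shared scale factor, shrinking storage 4x while keeping
# the values approximately recoverable on the device.

def quantize(weights):
    """Map floats to int8 range [-127, 127] using a symmetric scale."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.52, -1.27, 0.08, 0.91]
q, scale = quantize(weights)
approx = dequantize(q, scale)

print(q)  # small integers, 1 byte each instead of 4
print(max(abs(a - b) for a, b in zip(weights, approx)))  # quantization error
```

In a real deployment the integer values are used directly in the arithmetic, which is where the 8- and 16-bit vector instructions mentioned above earn their speedup.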
Author: Magnus Johansson, Senior Product Manager, u-blox