NXP Semiconductors will unveil at Arm TechCon this week an AI strategy centered on software tools. NXP is debuting eIQ, an AI software development environment for the edge, along with customizable system-level solutions.
Calling the current AI landscape still in flux, Geoff Lees, senior vice president and general manager of microcontrollers at NXP, told us, “The first- and second-generation AI accelerators proved to be not scalable.” Although a host of AI SoC startups are developing new acceleration architectures, customers today want more scalable general-purpose processors to meet their AI needs, Lees noted.
GHz MCU in 2019?
The emerging trend today is the growing realization that AI at the edge will “require even more processing” than anticipated, observed Lees. Customers are seeking processors that can boost AI performance, security, and network connectivity. Lees hinted, “Don’t be surprised that NXP is getting ready with a GHz MCU for a 2019 launch.”
Linley Gwennap, Linley Group president and principal analyst, differs from Lees. He acknowledged that AI is moving edgeward, but Gwennap’s trend radar is homing in on AI accelerators rather than more powerful MCUs. “Even MCUs can have a small AI engine to offload the CPU (e.g., Abee, Eta, GreenWaves, QuickLogic),” Gwennap said. “Running the MCU at 1 GHz doesn’t make sense from a power or cost standpoint.”
ML as middleware
Setting aside the hardware debate, NXP’s announcement this week is focused on AI software. Lees explained that NXP is “treating machine learning as middleware.”
NXP’s eIQ includes tools necessary to structure and optimize cloud-trained ML models. The goal for NXP customers is to run those ML models in “resource-constrained edge devices for a broad range of industrial, IoT, and automotive applications,” explained NXP.
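The cloud-to-edge flow NXP describes — train a model in a mainstream framework, then convert and shrink it for a constrained device — can be sketched with the standard TensorFlow Lite converter, one of the targets eIQ supports. This is generic TensorFlow API, not eIQ itself; the toy model and output file name are purely illustrative.

```python
# Sketch of a cloud-to-edge conversion flow, assuming TensorFlow is installed.
# The tiny Keras model stands in for a real cloud-trained network.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert to a compact flatbuffer suitable for a resource-constrained device;
# Optimize.DEFAULT enables weight quantization to cut model size.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # returns the serialized model as bytes

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

An eIQ-based flow would take an analogous converted model and map it onto the inference engine available on the target NXP part.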
Describing eIQ as “a one-stop foundation for world-class machine-learning applications,” NXP noted that its AI software developments include:
- data acquisition and curation tools (e.g., vision, voice and audio front end, sensor);
- model conversion for a wide range of neural net (NN) frameworks and inference engines, such as TensorFlow Lite, Caffe2, CNTK, and Arm NN;
- support for emerging NN compilers like GLOW and XLA;
- classical ML algorithms (e.g., support vector machines and random forests).
Furthermore, eIQ includes tools to deploy models for heterogeneous processing, distributing the ML workload across computational blocks such as Cortex-A/M cores, DSPs, and GPUs on a range of NXP’s embedded processors.
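The classical ML algorithms in the list above are far lighter than deep networks, which is why they suit resource-constrained edge targets. A minimal sketch of one of them, a support vector machine, using scikit-learn (an assumption here — NXP does not name its classical-ML implementation) on a small public dataset:

```python
# Sketch: train a small SVM classifier of the kind suited to edge deployment.
# scikit-learn and the Iris dataset are illustrative stand-ins, not eIQ tooling.
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = datasets.load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# An RBF-kernel SVM: the trained model is just the support vectors and
# coefficients, a tiny memory footprint compared with a neural network.
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

After training offline, only the fitted parameters need to ship to the device, which keeps inference cheap on an MCU-class core.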