    Artificial Intelligence’s secret key is memory integration

    Big Data applications have already driven the need for architectures that put memory closer to compute resources, but Artificial Intelligence and Machine Learning have further demonstrated the critical role hardware architectures play in successful deployments. The key question is where the memory will reside.

    Research commissioned by Micron Technology found that 89% of respondents consider it important or critical for compute and memory to be architecturally close together. A survey conducted by Forrester Research found that memory and storage are currently the most commonly cited hardware constraints limiting artificial intelligence and machine learning, and more than 75% of respondents recognize a need to upgrade or re-architect their memory and storage to address those constraints.

    AI compounds the challenges already unearthed by big data and analytics because machine learning performs multiply-accumulate operations on vast matrices of data across neural networks. These operations are repeated over and over as more results come in, refining an algorithm that picks the best path and the best choice each time: it learns from working on the data.
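
    To see why those multiply-accumulate operations dominate, consider a rough illustration (a minimal NumPy sketch, with sizes chosen arbitrarily for this example, not drawn from the article): the core of a neural-network layer reduces to a matrix product, where every output value is a long chain of multiply-accumulate steps.

        import numpy as np

        # Hypothetical sizes, chosen only to make the arithmetic concrete.
        batch, n_in, n_out = 64, 4096, 4096

        x = np.random.rand(batch, n_in).astype(np.float32)   # input activations
        W = np.random.rand(n_in, n_out).astype(np.float32)   # layer weights

        # Each output element is a sum of n_in multiply-accumulate (MAC) steps:
        #   y[b, j] = sum over i of x[b, i] * W[i, j]
        y = x @ W

        # Total MACs for a single forward pass through this one layer:
        macs = batch * n_in * n_out
        print(f"{macs:,} multiply-accumulates")  # about 1.07 billion here

    Every weight involved in those accumulations has to travel from memory to the compute units, and training repeats the pass over and over, which is why the traffic between memory and processor, rather than the arithmetic itself, tends to become the limiting factor.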

    Because there’s so much data, said Colm Lysaght, Micron’s vice president of corporate strategy, a common solution for getting the necessary working memory is simply to add more DRAM. This is shifting the performance bottleneck from raw compute to where the data is. “Memory and storage is where the data is living,” he said. “We have to get it to a CPU and then back again, over and over again, as these vast data sets are being worked on.”

    Finding ways to bring compute and memory closer together means saving power because data isn’t being shuttled around as much, Lysaght said. “It’s increasing performance because more things can happen right where they need to happen.”

    There are a number of different approaches to creating better architectures, Lysaght said. One example is neuromorphic processors, which use a neural network internally and break the compute up into a larger number of smaller cores. “Because there is a large matrix of data that’s being worked on, having many more cores doing relatively simple operations over and over again is a better solution,” Lysaght said.
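
    To make the many-small-cores idea concrete, here is an illustrative sketch (an assumption-laden toy example, not a description of any particular neuromorphic device): the same large matrix product can be split into independent tiles, each small enough for a simple core to grind through repeatedly. The sequential loop below stands in for work that such cores would do in parallel.

        import numpy as np

        def tiled_matmul(x, W, tile=256):
            """Compute x @ W one tile at a time.

            Each column block of the output is an independent unit of
            work, so many simple cores could each own a block; this
            single-CPU loop just visits the blocks one after another.
            """
            batch, n_in = x.shape
            n_out = W.shape[1]
            y = np.zeros((batch, n_out), dtype=x.dtype)
            for j in range(0, n_out, tile):        # one column block per "core"
                for k in range(0, n_in, tile):     # accumulate partial products
                    y[:, j:j + tile] += x[:, k:k + tile] @ W[k:k + tile, j:j + tile]
            return y

        x = np.random.rand(64, 1024).astype(np.float32)
        W = np.random.rand(1024, 1024).astype(np.float32)
        assert np.allclose(tiled_matmul(x, W), x @ W, rtol=1e-3)

    The payoff on real hardware comes when each block’s weights sit in memory next to the core that uses them, so the repeated accumulations never have to leave the chip.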

    ELE Times Research Desk