Will Memory Constraints Derail Edge AI?
One of the 11 pillars of Industry 4.0 is artificial intelligence (AI). Often called the ‘game changer’ of manufacturing, AI gives companies methods to optimize production, save resources, and improve services. Edge AI, the combination of edge computing and AI, is the deployment of AI applications that rely on edge computing to process data. In the edge architecture (Figure 1), data is processed at the ‘edge’ of the network, close to where it is generated, rather than in the cloud. With a projected one trillion IoT devices on the market by 2035, companies stand to benefit from the real-time, actionable insights and improved performance that edge computing offers. However, memory and data constraints are slowing edge AI’s adoption.
Figure 1: Edge AI architecture
Source: NVIDIA
Surmounting Edge AI’s Data Volume
One of the forces driving edge AI’s innovation is also its main hindrance. As an adaptive technology, edge computing constantly generates massive amounts of data. Different edge devices also require different types of memory. Regardless of device type, once a device has fully transitioned to the edge, it continues to generate data as it learns to minimize processing power and energy consumption.
Solutions are emerging to manage the large volume of data produced by edge AI. Efforts are being made to offload continuously generated data to the cloud. Google’s federated learning model takes a hybrid approach: devices train a shared model on their local data and send only model updates, not raw data, back to the server. However, combining edge and cloud computing techniques also demands a large amount of power.
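The core idea behind federated learning can be sketched in a few lines. The snippet below is a minimal illustration of federated averaging, assuming a toy linear model (y ≈ w·x) and made-up per-device datasets; it is not Google’s implementation, only a demonstration of the principle that raw data stays on each device while only trained parameters are averaged centrally.

```python
# Minimal federated-averaging sketch: each "device" trains on its own
# local data, and the server averages the resulting weights. Only the
# weights ever leave the device -- never the raw data.

def local_update(weights, data, lr=0.1, epochs=5):
    """One device's local training: gradient descent on y ~ w * x."""
    w = weights
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_average(global_w, device_datasets):
    """One communication round: devices train locally, server averages."""
    local_weights = [local_update(global_w, d) for d in device_datasets]
    return sum(local_weights) / len(local_weights)

# Hypothetical per-device datasets, each roughly following y = 3x.
devices = [
    [(1.0, 3.1), (2.0, 6.2)],
    [(1.5, 4.4), (3.0, 9.1)],
    [(0.5, 1.4), (2.5, 7.6)],
]

w = 0.0
for _ in range(20):
    w = federated_average(w, devices)
print(f"global weight after 20 rounds: {w:.2f}")  # converges near 3
```

Because only the scalar weight crosses the network each round, bandwidth and privacy costs are decoupled from the size of the on-device datasets, which is precisely why the approach appeals to edge deployments.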
The Power Behind Edge AI Memory
Most of the power an edge AI device consumes is spent on memory. Google’s report, Google Workloads for Consumer Devices: Mitigating Data Movement Bottlenecks, found that about 60% of the power in a mobile system is used moving data between on-chip and off-chip memory. One approach to mitigating this is to keep all memory on a single chip, but current on-chip SRAM falls short in both density and power efficiency.
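A back-of-envelope calculation shows why off-chip traffic dominates. The per-bit energy figures below are illustrative assumptions (orders of magnitude commonly cited for SRAM versus external DRAM access; real values vary widely with process node, interface, and access pattern), and the 5 MB model size is hypothetical — none of these numbers come from Google’s report.

```python
# Sketch: energy cost of streaming model weights from on-chip SRAM
# versus off-chip DRAM. All constants are illustrative assumptions.

ON_CHIP_PJ_PER_BIT = 1.0    # assumed on-chip SRAM access energy (pJ/bit)
OFF_CHIP_PJ_PER_BIT = 100.0  # assumed off-chip DRAM access energy (pJ/bit)

def transfer_energy_mj(megabytes, pj_per_bit):
    """Energy in millijoules to move `megabytes` at `pj_per_bit`."""
    bits = megabytes * 8 * 1024 * 1024
    return bits * pj_per_bit * 1e-12 * 1e3  # pJ -> J -> mJ

model_mb = 5  # hypothetical model weights streamed per inference
on_chip = transfer_energy_mj(model_mb, ON_CHIP_PJ_PER_BIT)
off_chip = transfer_energy_mj(model_mb, OFF_CHIP_PJ_PER_BIT)

print(f"on-chip:  {on_chip:.2f} mJ per inference")
print(f"off-chip: {off_chip:.2f} mJ per inference "
      f"({off_chip / on_chip:.0f}x more)")
```

Under these assumptions, every inference that streams weights from off-chip memory costs two orders of magnitude more energy than one served entirely from on-chip memory, which is the intuition behind pushing denser memories onto the die.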
Internet powerhouses like Google, Facebook, Amazon, and Apple are racing to find edge hardware solutions that overcome the performance bottleneck. Magnetoresistive RAM (MRAM) is a promising candidate as it continues to evolve with improved energy efficiency, endurance, and yields. MRAM is three to four times as dense as SRAM, and because it is non-volatile it eliminates leakage power. Emerging memories such as Intel’s 3D XPoint, phase-change memory (PCM), and resistive RAM (ReRAM) are also gaining traction as technologies that could keep edge AI’s momentum going.
What is the Solution for AI at the Edge?
With Allied Market Research projecting the AI edge computing market to reach $59,633.0 million (roughly $59.6 billion) by 2030, it’s clear that demand for comprehensive memory solutions is growing. However, it is unclear whether the answer will be legacy technology, new technology, or a mixture of the two. With large companies snapping up small industry startups, the hunt for edge AI’s memory game-changer is on.
Looking for solutions to keep your company’s edge AI momentum on track? Contact Symmetry Electronics today!