Hardware

Ultra Low Power SRAM for AI Computing

Vaishali S.
November 13, 2024

Introduction

In today's fast-evolving world of ubiquitous computing, where devices are shrinking in size, growing smarter, and demanding ultra-low power consumption, memory plays a crucial role. Among these innovations, Static Random Access Memory (SRAM) stands out, especially in AI, IoT, and edge computing applications, where minimizing power use is essential.

This article highlights the 64KB ultra-low-power (ULP) SRAM—Ambient Scientific’s custom-designed memory that outperforms industry standards in power efficiency while boosting AI computing capabilities.

Key Features of 64KB Ultra-Low-Power SRAM

Our custom ultra-low-power SRAM was designed with several key outcomes in mind, reflected in the features below.

1. Ultra-Low Dynamic Power: The 64KB ULP SRAM boasts remarkably low dynamic power consumption across different cache configurations for low-power AI applications:

  • 64KB retention with a 32-bit data bus: 6.4 μW/MHz
  • 256KB unified L1 cache with a 32-bit data bus: 9.2 μW/MHz

These metrics underscore its efficiency, particularly for power-sensitive applications like tiny electronic products and battery-operated IoT systems.

2. Ultra-Low Leakage Power: Leakage current, a major cause of power loss when devices are idle, is dramatically minimized:

  • 64KB retention with a 32-bit data bus: 5.5 μW
  • 256KB unified L1 cache with a 32-bit data bus: 20 μW

This low leakage power is essential for IoT devices and edge computing, where devices frequently enter standby or low-power modes.
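
To put these numbers in perspective, below is a minimal back-of-the-envelope sketch in Python that combines the dynamic and leakage figures above for the 64KB configuration. The operating frequency, duty cycle, and coin-cell capacity are illustrative assumptions, not product specifications:

```python
# Battery-life estimate combining the dynamic and leakage figures quoted
# above for the 64KB configuration. Frequency, duty cycle, and battery
# capacity are illustrative assumptions, not specifications.

DYN_UW_PER_MHZ = 6.4   # dynamic power, 64KB retention, 32-bit bus (from article)
LEAK_UW = 5.5          # leakage power, same configuration (from article)

freq_mhz = 100         # assumed operating frequency
duty_cycle = 0.10      # assumed: active 10% of the time, idle otherwise

active_uw = DYN_UW_PER_MHZ * freq_mhz + LEAK_UW          # 645.5 uW while running
avg_uw = duty_cycle * active_uw + (1 - duty_cycle) * LEAK_UW

print(f"average SRAM power: {avg_uw:.1f} uW")            # ~69.5 uW

# Runtime if a 225 mAh, 3 V coin cell (assumed) powered only this memory:
battery_uwh = 225e-3 * 3.0 * 1e6                         # capacity in uW-hours
print(f"memory-only runtime: {battery_uwh / avg_uw / 24:.0f} days")  # ~400 days
```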

3. In-Memory AI Computing: As AI workloads shift to the edge, efficient data handling becomes crucial. The 64KB ULP SRAM is optimized for delivering three-dimensional operands to matrix compute engines, enhancing AI inference directly on the device.

4. Reduced Active and Leakage Currents: Compared to off-the-shelf memory, the 64KB ULP SRAM consumes 5X less dynamic power and 3X less leakage power, making it an ideal solution for edge computing and always-on AI applications.

Performance Comparison with Industry Memory

Below is a comparison of the 64KB ULP SRAM with leading industry-standard memory, summarized in the key takeaways that follow:

Key Takeaways:

  • Power Efficiency: The total average current of our 64KB ULP SRAM is 68% lower than industry-standard memory, enabling longer battery life and reduced heat dissipation.
  • Leakage Reduction: The SRAM achieves a 47% reduction in leakage current, minimizing idle power consumption, crucial for IoT devices and always-on AI applications.
  • Efficient Area Utilization: Despite superior power performance, the SRAM’s area is competitive with industry memory, optimizing silicon space without sacrificing efficiency.
  • Frequency Trade-Off: Although the industry standard supports slightly higher frequencies, the 200 MHz offered by our SRAM is more than adequate for AI, IoT, and edge computing tasks, prioritizing power savings.

Optimized Low-Power Design

The 64KB ULP SRAM emphasizes reduced power through low-power sense amplifiers and low-power swing techniques. These methods lower voltage swings on the bit lines, minimizing energy consumption during read and write operations. The result is a 5X reduction in dynamic power and 3X reduction in leakage power compared to industry memory, perfect for power-constrained environments like AI, IoT, and edge devices.
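
For intuition on why lowering bit-line swing saves so much energy, consider the standard first-order CMOS dynamic power model, P_dyn ≈ α·C·V_swing²·f: power scales with the square of the voltage swing. The sketch below uses purely hypothetical capacitance, activity, and swing values to show how a roughly 2X swing reduction alone yields a dynamic power reduction of the same order as the 5X figure quoted above:

```python
# First-order CMOS dynamic power model: P_dyn ~ alpha * C * V_swing^2 * f.
# Capacitance, activity factor, and swing values below are hypothetical,
# chosen only to illustrate the quadratic dependence on voltage swing.

def dynamic_power(alpha, capacitance, v_swing, freq_hz):
    """Approximate dynamic power (W) of toggling a capacitive line."""
    return alpha * capacitance * v_swing ** 2 * freq_hz

p_full = dynamic_power(0.5, 100e-15, 1.0, 200e6)   # full-rail swing (assumed 1.0 V)
p_low = dynamic_power(0.5, 100e-15, 0.45, 200e6)   # reduced swing (assumed 0.45 V)

print(f"full swing: {p_full * 1e6:.2f} uW")   # 10.00 uW
print(f"low swing:  {p_low * 1e6:.2f} uW")    # 2.03 uW
print(f"reduction:  {p_full / p_low:.1f}x")   # ~4.9x, the order of the quoted 5X
```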

Figure 1: Block Diagram of Low Power SRAM

Figure 2: Layout of Low Power SRAM

Von Neumann Bottleneck in AI Applications

In AI systems, particularly neural networks and deep learning, the frequent transfer of large amounts of data between memory and processors creates a significant performance bottleneck: the Von Neumann bottleneck. The problem arises because traditional computer architectures separate memory (where data is stored) from the processor (where computation occurs), requiring constant data movement that consumes time, energy, and bandwidth. This leads to increased latency, energy consumption, and bandwidth limitations:

  • High Data Transfer Demand: Neural networks often involve millions or billions of parameters (weights), and the frequent movement of these weights from memory to the processor creates latency.
  • Power and Bandwidth Limitations: Moving large amounts of data consumes substantial energy and strains bandwidth, making the system inefficient in power-constrained environments like IoT devices (a rough estimate follows this list).
  • Memory Latency: Traditional off-chip memory architectures contribute to delays because of the time taken to fetch and write data.
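
The scale of the problem is easy to see with rough numbers. The sketch below compares per-inference memory energy for on-chip SRAM versus off-chip DRAM access; the per-access energies and model size are illustrative textbook-style assumptions, not measurements of any specific device:

```python
# Order-of-magnitude illustration of why data movement dominates AI energy.
# Per-access energies are illustrative textbook-style figures (assumed),
# not measured values for any particular memory or processor.

PJ_PER_SRAM_ACCESS = 5     # picojoules per 32-bit on-chip SRAM access (assumed)
PJ_PER_DRAM_ACCESS = 640   # picojoules per 32-bit off-chip DRAM access (assumed)

n_params = 1_000_000       # weights fetched once per inference (assumed model size)

sram_energy_uj = n_params * PJ_PER_SRAM_ACCESS / 1e6
dram_energy_uj = n_params * PJ_PER_DRAM_ACCESS / 1e6

print(f"on-chip SRAM:  {sram_energy_uj:.0f} uJ per inference")    # 5 uJ
print(f"off-chip DRAM: {dram_energy_uj:.0f} uJ per inference")    # 640 uJ
print(f"energy ratio:  {PJ_PER_DRAM_ACCESS / PJ_PER_SRAM_ACCESS:.0f}x")
```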

How the 64KB SRAM Solves the Von Neumann Bottleneck

Our 64KB ULP SRAM offers several benefits that address these issues:

  • In-Memory Computation Efficiency: By using SRAM for on-chip memory storage, the data transfer between memory and the processor can be minimized, reducing the dependency on off-chip memory. This is especially useful for AI computing, where storage of frequently used weights, activations, or intermediate data in a nearby SRAM block allows for faster data access, reducing both latency and power consumption.
  • Energy-Efficient Data Access: The focus on low-power sense amplifiers and low-voltage swing techniques allows our SRAM to consume less power per access. This enables frequent access to memory with minimal energy overhead, making it ideal for low-power AI applications like edge computing and IoT devices, where power efficiency is critical.
  • Parallel Data Access: With its 32-bit data bus, the 64KB SRAM can handle substantial amounts of data in parallel, reducing the time needed for large memory transfers; the sketch after this list works out the resulting peak bandwidth. This helps alleviate the bandwidth limitation and accelerates processing in AI models that need to frequently read and write data.
  • Reduced Data Movement: With on-chip storage and low-leakage, our SRAM helps mitigate the frequent back-and-forth data transfer that contributes to the Von Neumann bottleneck. Instead of constantly moving data to external memory, the weights and activations can be stored close to the processor, enabling faster computation cycles.
  • Support for AI Acceleration: AI accelerators, such as matrix multipliers used in deep learning, often require fast access to multiple data operands simultaneously. By using our SRAM as a cache or local memory, the matrix multiplication process can be optimized, delivering faster AI inference with reduced memory-related delays.
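
As a concrete illustration of the parallel-access point, the sketch below works out the peak bandwidth of a 32-bit bus at the 200 MHz quoted earlier, assuming one word is transferred per cycle (an assumption about interface timing, not a published specification):

```python
# Peak-bandwidth back-of-the-envelope for a 32-bit bus at 200 MHz, the
# frequency quoted in the comparison above. One word per cycle is an
# assumption; the real interface timing may differ.

BUS_WIDTH_BYTES = 4        # 32-bit data bus
FREQ_HZ = 200e6            # 200 MHz operating frequency
SRAM_BYTES = 64 * 1024     # 64KB array

peak_bw = BUS_WIDTH_BYTES * FREQ_HZ              # bytes per second
cycles_to_stream = SRAM_BYTES / BUS_WIDTH_BYTES  # cycles to read the whole array
time_to_stream_us = cycles_to_stream / FREQ_HZ * 1e6

print(f"peak bandwidth:  {peak_bw / 1e6:.0f} MB/s")          # 800 MB/s
print(f"full-array read: {cycles_to_stream:.0f} cycles "
      f"(~{time_to_stream_us:.0f} us at 200 MHz)")           # ~82 us
```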

Applications in AI, IoT, and Edge Computing

The 64KB ULP SRAM is versatile and well-suited for:

1. AI Edge Devices: Optimized for in-memory AI computing, the SRAM reduces external memory access, improving power efficiency and performance.

2. IoT Devices: With ultra-low dynamic and leakage power, it’s perfect for always-on sensors and smart home devices.

3. Wearables: Extending battery life without sacrificing performance, it’s a strong fit for wearable tech.

4. Edge Computing: Reduced power footprint ensures efficient real-time data analytics and sensor data fusion at the edge.

With its ultra-low-power design, reduced leakage, and optimized in-memory AI computing, the 64KB ULP SRAM is a game-changer for the future of AI, IoT, and edge computing. As devices evolve, this SRAM will play an essential role in enabling efficient, high-performance, low-power AI applications.
