Samsung Unveils HBM4E and Deepens NVIDIA Collaboration, Accelerating the AI "Memory Race"


Amid the ongoing surge in demand for AI computing power, memory technology is becoming a critical bottleneck determining the performance of next-generation data centers. On Monday the 16th, local time in California, South Korea's Samsung Electronics and NVIDIA jointly unveiled the next-generation high-bandwidth memory chip HBM4E at GTC, NVIDIA's annual developer conference, highlighting their collaboration on AI computing platforms.

Samsung’s HBM4E is regarded as a key milestone in its next-generation AI memory roadmap: the product is expected to achieve a per-pin transfer rate of 16Gbps and a total bandwidth of 4TB/s, targeting future AI accelerators and hyperscale data centers. Industry observers widely see this announcement as opening a new phase in the coordinated upgrade of compute and memory within the AI chip ecosystem, while also intensifying competition in the HBM market among Samsung, SK Hynix, and other manufacturers.

First Public Display of HBM4E: AI Memory Bandwidth Reaches New Heights

At this year’s NVIDIA GTC conference, Samsung publicly showcased the physical HBM4E chip, which is Samsung’s seventh-generation high-bandwidth memory (HBM) technology. HBM4E is positioned as an upgraded version of HBM4, designed to provide higher bandwidth and lower latency for next-generation AI accelerators.

According to Samsung, HBM4E is expected to achieve:

  • Single-pin transfer speed of 16Gbps
  • Per-stack bandwidth of up to approximately 4TB/s
  • Targeted at next-generation AI and high-performance computing systems

Compared to previous HBM products, this performance level further enhances data throughput for AI model training and inference, and is considered a critical infrastructure supporting trillion-parameter models and the expansion of AI data centers.
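A quick back-of-the-envelope check ties the two headline figures together. Assuming an HBM4-class 2048-bit interface per stack (a JEDEC specification figure not stated in the article), 16Gbps per pin works out to roughly 4TB/s per stack:

```python
# Sanity-check the quoted HBM4E figures: per-stack bandwidth follows from
# per-pin speed times interface width.
# Assumption (not in the article): a 2048-bit interface per stack, per JEDEC HBM4.

PIN_RATE_GBPS = 16      # per-pin transfer rate, Gbps (from the article)
INTERFACE_BITS = 2048   # assumed HBM4-class interface width, in bits

bandwidth_gbps = PIN_RATE_GBPS * INTERFACE_BITS  # total Gbps across the stack
bandwidth_tbs = bandwidth_gbps / 8 / 1000        # bits -> bytes, GB/s -> TB/s

print(f"{bandwidth_tbs} TB/s per stack")  # 4.096 TB/s, matching the ~4 TB/s claim
```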

HBM achieves its performance by stacking multiple DRAM dies vertically and connecting them with through-silicon vias, which greatly increases memory bandwidth while reducing power consumption. It has become a core component of AI GPUs and accelerators.

