Samsung Launches HBM4E and Deepens NVIDIA Collaboration, Further Accelerating the AI Computing "Memory Race"
Amid the ongoing surge in AI computing demand, memory technology is becoming a critical bottleneck for the performance of next-generation data centers. On Monday the 16th (local California time), South Korea's Samsung Electronics and NVIDIA jointly unveiled the next-generation high-bandwidth memory chip HBM4E at the annual GTC developer conference, highlighting their collaboration on AI computing platforms.
Samsung’s HBM4E is regarded as a key milestone in its next-generation AI memory roadmap: the product is expected to achieve a single-pin transfer rate of 16Gbps and a total bandwidth of 4TB/s, targeting future AI accelerators and ultra-large-scale data centers. Industry experts generally believe that this announcement marks a new phase in the collaborative upgrade of “computing power—storage” within the AI chip ecosystem, while also intensifying competition among Samsung, SK Hynix, and other manufacturers in the HBM market.
First Public Display of HBM4E: AI Storage Bandwidth Reaches New Heights
At this year’s NVIDIA GTC conference, Samsung publicly showcased the physical HBM4E chip, which is Samsung’s seventh-generation high-bandwidth memory (HBM) technology. HBM4E is positioned as an upgraded version of HBM4, designed to provide higher bandwidth and lower latency for next-generation AI accelerators.
According to Samsung, HBM4E is expected to achieve a per-pin transfer rate of 16Gbps and a total bandwidth of 4TB/s.
Compared to previous HBM products, this performance level further enhances data throughput for AI model training and inference, and is considered a critical infrastructure supporting trillion-parameter models and the expansion of AI data centers.
HBM technology stacks multiple DRAM chips vertically through 3D stacking, significantly increasing memory bandwidth and reducing power consumption. It has now become a core component of AI GPUs and accelerators.
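As a back-of-the-envelope check, the two headline figures quoted above are mutually consistent if one assumes the 2048-bit per-stack interface width defined for JEDEC HBM4 (an assumption; Samsung's announcement does not state the interface width for HBM4E):

```python
# Sanity-check the announced figures: a 16 Gbps per-pin rate across a
# 2048-bit HBM interface yields the quoted 4 TB/s per stack.
PIN_RATE_GBPS = 16            # per-pin transfer rate, from Samsung's announcement
INTERFACE_WIDTH_BITS = 2048   # HBM4 per-stack I/O width (assumption for HBM4E)

bandwidth_gbs = PIN_RATE_GBPS * INTERFACE_WIDTH_BITS / 8  # bits -> bytes
bandwidth_tbs = bandwidth_gbs / 1024                      # GB/s -> TB/s (binary units)

print(f"{bandwidth_gbs:.0f} GB/s = {bandwidth_tbs:.0f} TB/s per stack")
```

The result, 4096 GB/s (4 TB/s), matches the announced total bandwidth, which suggests the 4TB/s figure is the aggregate across the full interface rather than a per-pin number.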
Risk Warning and Disclaimer
Markets carry risk; invest with caution. This article does not constitute personal investment advice and does not take into account individual users' specific investment goals, financial situations, or needs. Users should consider whether any opinions, viewpoints, or conclusions herein are suitable for their particular circumstances. Any investment made on the basis of this information is at the user's own risk.