AI ASIC and SSD demand soars! Marvell Technology (MRVL.US) rides the "AI inference dividend" with a 72% surge in operating profit

According to Bloomberg News, Marvell Technology (MRVL.US), which focuses on customized AI chips (AI ASICs) for large AI data centers and is one of the largest partners behind Amazon AWS's Trainium series of AI ASICs, announced after the US market close on March 6 results and an outlook that beat Wall Street expectations across the board. Marvell's latest strong performance and guidance, together with the explosive growth figures reported a day earlier by Broadcom (AVGO.US), the dominant AI ASIC player with an even larger market share, jointly highlight that as the era of AI inference fully arrives, cost-effective AI ASIC compute systems are mounting a serious challenge to Nvidia's near-90% share of the AI chip market.

Financial data show that Marvell's revenue for the fourth quarter of fiscal year 2026, ended January 31, was approximately $2.22 billion, a record high and a year-over-year (YoY) increase of over 20%, slightly above the Wall Street analyst consensus of about $2.21 billion. Adjusted non-GAAP earnings per share (EPS) for the quarter were $0.80, beating the consensus of about $0.79 and the prior year's $0.60. GAAP operating profit was $404.4 million, up a striking 72% YoY and above Wall Street expectations; net profit attributable to common shareholders was about $396.1 million, up nearly 97.9% YoY.
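As a quick sanity check on the reported growth rates, the prior-year figures they imply can be backed out from the quarter's numbers. This is illustrative arithmetic only; the dollar amounts are the ones quoted above, and the implied prior-year values are derived, not reported:

```python
# Back out implied prior-year figures from reported Q4 results and YoY growth.
# Inputs are the figures quoted in this article (in $ millions).

def implied_prior(current, yoy_growth):
    """Prior-year value implied by a current value and its YoY growth rate."""
    return current / (1 + yoy_growth)

op_profit_now = 404.4   # GAAP operating profit, up 72% YoY
net_profit_now = 396.1  # net profit to common shareholders, up ~97.9% YoY

print(round(implied_prior(op_profit_now, 0.72), 1))    # implied prior-year operating profit
print(round(implied_prior(net_profit_now, 0.979), 1))  # implied prior-year net profit
```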

Among these figures, the data center business, closely tied to AI training/inference super-systems, contributed about $1.65 billion, roughly 74% of total revenue, up about 21% YoY and 9% quarter-over-quarter on top of an already strong prior quarter. The company emphasized in its earnings statement that orders for its data center business are growing at a "record-breaking speed." After the earnings release, Marvell's stock surged over 15% in after-hours trading.

On the market's main focus, the outlook, Marvell's CEO expects YoY revenue growth to accelerate further this fiscal year. Management's guidance for Q1 FY2027 implies a revenue midpoint of about $2.4 billion, significantly above the analyst consensus of approximately $2.27 billion, a consensus that had itself been raised repeatedly since late January, when US tech giants such as Google, Amazon, and Nvidia reported strong results. Even so, Marvell's official outlook is stronger than the upwardly revised analyst expectations, underscoring the explosive demand for AI computing infrastructure driven by the global AI ASIC trend. Both Marvell and industry leader Broadcom are expected to continue the "Nvidia-style" explosive growth trajectory they have traced since 2024.

The company provided a non-GAAP EPS guidance range of $0.74 to $0.84, with a midpoint well above Wall Street analyst expectations, and a gross margin range of 58.25% to 59.25%, also higher than analyst estimates.

On the day before Marvell announced its strong results and outlook, Broadcom (AVGO.US) reported total revenue of $19.3 billion, up 29% YoY. Broadcom stated that AI-related revenue doubled to $8.4 billion, far exceeding earlier expectations, while Q1 semiconductor solutions revenue, which spans AI ASICs and smartphone RF chips, reached $12.515 billion, a 52% YoY increase.

Most notable in Broadcom's report, its CEO stated that revenue related to AI ASICs, i.e., "AI chips," will surpass $100 billion next year. This new target includes revenue from the "AI ASIC compute clusters" competing fiercely with Nvidia's dominant AI GPUs, as well as AI networking chips, namely high-performance Ethernet switch silicon. Together, the two leading ASIC companies, Broadcom and Marvell, are reinforcing the "bullish AI ASIC narrative": as cloud giants like Google, Amazon, and Microsoft push an "AI compute cost revolution" to accelerate AI ASIC adoption, the core competition in inference is shifting from "peak compute" to token cost, power consumption, memory bandwidth utilization, interconnect efficiency, and total cost of ownership (TCO) after hardware-software integration. On these metrics, ASICs tailored to specific workloads, with streamlined data flow, compilers, and interconnects, are inherently more cost-effective than general-purpose GPUs.

According to TipRanks, Wall Street analysts are highly optimistic about Marvell's revenue prospects from AI chips and SSD controllers: the consensus rating is "Strong Buy," with an average 12-month price target of $118, implying roughly 56% upside.

AI ASICs and SSD controllers jointly drive Marvell's rapid performance growth

Viewed against the current global AI infrastructure boom, Marvell's strong results are driven mainly by explosive demand for data center infrastructure semiconductors, especially customized AI ASICs, high-performance communication and control chips, and data-center-grade eSSD storage controllers. Marvell's recent revenue growth largely stems from its data center business, particularly products tailored for cloud service providers and supercomputing platforms, including AI ASICs, high-bandwidth network chips, interconnect solutions, and SSD controllers closely linked to AI training/inference platforms. Data center revenue continues to rise as a share of the total, with growth rates significantly higher than the company overall.

Over the long term, Marvell has focused on accelerating the iteration of customized AI ASIC technology, network processors (DPUs/NPUs), SSD controllers, and high-bandwidth interconnect products. Demand for these products in training and inference of large-scale AI models, and in massive data storage and processing, is expanding exponentially alongside global AI compute demand. Custom silicon for large-scale data center clients is no longer a peripheral business but a core growth engine for global chip companies.

Amazon AWS explicitly positions its Trainium/Inferentia AI ASIC compute clusters as dedicated accelerators for generative AI training and inference, with Trainium 2 offering about 30-40% better price-performance than comparable AI GPU cloud instances; Google has likewise announced that Gemini 2.0's training and inference run entirely on TPUs. These developments indicate that "in-house AI ASICs for core model training and inference" is no longer a proof of concept but is entering a replicable industrialization phase.
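A claim of "X% better price-performance" compresses cost and throughput into one ratio. As a hedged illustration (the 30-40% range is the figure quoted above; everything else is plain arithmetic), it implies a cost per unit of throughput of roughly 1/(1+X) of the baseline:

```python
# Convert "X% better price-performance" into relative cost per unit of work.
# Price-performance = performance / price, so for equal performance the
# relative cost is 1 / (1 + X). Only the 30-40% range comes from the article.

def relative_cost(price_perf_gain):
    """Cost per unit of throughput relative to the baseline (baseline = 1.0)."""
    return 1.0 / (1.0 + price_perf_gain)

for gain in (0.30, 0.40):  # the 30-40% range cited for Trainium 2
    print(f"{gain:.0%} better price-performance -> "
          f"{relative_cost(gain):.0%} of baseline cost per unit of throughput")
```

In other words, 30-40% better price-performance corresponds to paying roughly 71-77% of the baseline cost for the same throughput.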

The latest strong earnings from Broadcom and Marvell confirm that the unprecedented AI ASIC growth story is rapidly being validated by earnings evidence. The worldwide surge in generative AI has pushed cloud and chip giants to accelerate AI chip development as they compete to design the fastest, most energy-efficient compute clusters for advanced large-scale AI data centers. Both Broadcom and its largest competitor, Marvell, leverage their strengths in high-speed interconnect and chip IP to collaborate with cloud giants such as Amazon, Google, and Microsoft on AI ASIC compute clusters tailored to those companies' data center needs. This ASIC business has become a vital part of both companies' portfolios; Broadcom's collaboration with Google on TPU compute clusters, for example, is a textbook AI ASIC partnership.

Marvell's robust performance, combined with the tailwinds of the "storage supercycle" evidenced by recent results from the three major memory makers, Samsung, SK Hynix, and Micron, highlights that high-performance storage/SSD controllers remain the "hidden engine" of compute. In large-model training/inference systems, I/O bandwidth, persistent storage access efficiency, and memory-pool interconnect efficiency constrain overall training cost and performance just as much as raw compute. Marvell's SSD controllers, NVMe/CXL cache controllers, and high-bandwidth storage interconnect products are critical components of this demand wave. Though less conspicuous than the exponential growth of AI ASICs, these highly specialized control ASICs are vital for moving data through ultra-large-parameter AI models, directly boosting data center efficiency and service quality.

From the intersection of semiconductors and AI data center infrastructure, storage chips are "perfectly positioned" in the AI superwave: they benefit from the expansion of both training and inference, and they act as "universal toll booths" across platforms, architectures, and ecosystems. As the AI era shifts from training dominance to inference, agents, long context, and low-latency applications, system demands on capacity, bandwidth, power efficiency, and data persistence will only intensify. AI data centers rely heavily on storage systems that extend well beyond HBM; according to Morgan Stanley and others, as AI workloads shift from training to inference and HDD supply bottlenecks constrain nearline storage, enterprise NVMe eSSDs are seeing unprecedented structural growth.

Driven by strong AI data center demand, DRAM and NAND prices are expected to keep soaring. BNP Paribas recently published a research report predicting that DRAM contract prices will surge by 90% in Q1 2026 and that long-stable NAND prices could rise by as much as 55%, extending the upward trend that began in late 2025.

This optimistic view on storage prices is echoed by TrendForce, which raised its Q1 2026 DRAM contract price forecast from 55-60% QoQ growth to 90-95%, and its NAND flash contract price forecast to 55-60% QoQ, citing surging demand from North American cloud providers for enterprise SSDs, whose prices could rise 53-58% QoQ in the first quarter.

As the wave of AI inference sweeps the globe, the golden age of AI ASICs has arrived

Marvell CEO Matt Murphy stated in the earnings release that the company set a record for custom-chip customer orders in FY2026 and expects the trend to continue. Citing "continued strong growth in the data center business," Murphy said overall revenue for this fiscal year is expected to accelerate further YoY, adding that data center bookings are growing at a "record-breaking speed."

Nvidia's AI GPUs nearly monopolize AI training, which demands more powerful, versatile compute clusters and rapid iteration of the entire compute system; AI inference, by contrast, once cutting-edge AI is deployed at scale, prioritizes token cost, latency, and energy efficiency. Google, for example, explicitly positions Ironwood as a generation "born for AI inference," emphasizing the performance, energy efficiency, and scalability of its compute clusters. Meanwhile, Amazon's latest moves demonstrate that AI ASICs also have the potential to support large-scale training of big models.

Over the medium to long term, AI ASIC compute systems will undoubtedly continue to erode Nvidia's monopoly premium and market share, though not through a linear replacement of GPUs. The fundamental reason is that in the inference era, the core competition shifts from peak compute to token cost, power consumption, memory bandwidth utilization, interconnect efficiency, and post-integration TCO, metrics on which workload-tailored ASICs, with their streamlined data flow, compilers, and interconnects, hold an inherent cost edge over general-purpose GPUs. The likely future of AI data centers: cutting-edge training and broad cloud compute remain GPU-dominated, while ultra-large-scale inference, agent workflows, and fixed high-frequency workloads accelerate toward ASICs, ushering in a truly heterogeneous compute era.

In the frontier-training era, the decisive factors were versatility, software maturity, and rapid adaptation to new model architectures, all of which favor GPUs; but as the industry shifts from "training scarcity" to "inference at scale, agentification, long context, and low latency," the key KPIs move from peak compute to token cost, throughput per watt, and system-level TCO. This is the fundamental driver behind hyperscalers' collective acceleration of ASIC adoption.
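The shift from peak compute to token economics can be made concrete with a toy cost-per-token model. Every number below is a hypothetical placeholder, not a vendor spec; the point is only that amortized hardware cost, power draw, and sustained token throughput, rather than peak FLOPS, determine the comparison:

```python
# Toy model: dollars per million inference tokens for an accelerator.
# All parameters are hypothetical placeholders for illustration only.

def cost_per_million_tokens(hw_cost_usd, lifetime_hours, power_kw,
                            power_price_per_kwh, tokens_per_second):
    amortized_per_hour = hw_cost_usd / lifetime_hours  # hardware depreciation
    energy_per_hour = power_kw * power_price_per_kwh   # electricity cost
    tokens_per_hour = tokens_per_second * 3600
    return (amortized_per_hour + energy_per_hour) / tokens_per_hour * 1e6

# Hypothetical general-purpose GPU vs. a workload-tuned ASIC, both amortized
# over three years of continuous operation at $0.10/kWh.
gpu = cost_per_million_tokens(30_000, 3 * 8760, 0.7, 0.10, 2_000)
asic = cost_per_million_tokens(15_000, 3 * 8760, 0.4, 0.10, 2_500)
print(f"GPU:  ${gpu:.3f} per million tokens")
print(f"ASIC: ${asic:.3f} per million tokens")
```

Under these made-up parameters the ASIC wins not on peak speed but because a cheaper, lower-power part sustains comparable token throughput, which is exactly the system-level TCO argument the article describes.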

For example, Google explicitly bills Ironwood TPU as the "best compute cluster for the inference era," scalable to 9,216 chips; Microsoft positions its new AI ASIC, Maia 200, as a cloud inference accelerator, claiming 30% better performance per dollar than its latest hardware; and AWS describes Trainium 3 as a chip pursuing the "best token economics," with over 4x efficiency gains. Together these moves show that as cloud giants launch an "AI compute cost revolution" to drive AI ASIC penetration, concerns about Nvidia's growth prospects are justified.
