One Article to Understand the Profit Pools and Industry Landscape of the AI Storage Hierarchy

marsbit · Published on 2026-05-14 · Last updated on 2026-05-14

Abstract

AI storage architecture can be divided into six layers by proximity to the computing unit: 1) on-chip SRAM, 2) HBM, 3) motherboard DRAM, 4) the CXL pooling layer, 5) enterprise SSD, and 6) NAS and cloud object storage. In 2025, the total market for these layers (excluding embedded SRAM value) was approximately $229 billion, with DRAM constituting half, HBM 15%, and SSD 11%. The profit landscape is highly concentrated: in each layer, the top three players typically hold over 90% market share. These profit pools fall into three categories: 1) high-margin, oligopolistic silicon layers (HBM, embedded SRAM, QLC SSD); 2) high-margin, emerging interconnect layers (CXL); and 3) scalable, recurring-revenue service layers (NAS, cloud object storage).

Key layers:

On-chip SRAM: profits accrue primarily to TSMC via advanced-wafer sales for AI chips.

HBM: the largest AI-era profit pool, driven by AI accelerator demand. SK hynix (57-62% share), Samsung, and Micron dominate. HBM carries exceptionally high margins (e.g., SK hynix's 72% operating margin in Q1 2026) and is projected to grow at a ~40% CAGR to $100 billion by 2028.

Motherboard DRAM: the largest market by revenue ($121.8B in 2025), controlled by Samsung, SK hynix, and Micron. High profitability is sustained as capacity shifts to HBM.

CXL pooling layer: enables rack-level memory sharing for AI work...

Author: Godot

AI storage can be broken down into six layers:

1) On-chip SRAM

2) HBM

3) Motherboard DRAM

4) CXL Pooling Layer

5) Enterprise SSD

6) NAS and Cloud Object Storage

This hierarchy is ordered by location: the further down the list, the farther the layer sits from the computing unit, and the larger its storage capacity.

In 2025, the total market for these six layers was about $229 billion (SRAM is embedded in compute chips, so its value is excluded), with DRAM accounting for half, HBM for 15%, and SSD for 11%.
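As a quick sanity check, the percentages can be multiplied out against the $229 billion total (a minimal sketch; the figures are the article's own):

```python
# Back-of-envelope check of the 2025 market split cited above:
# ~$229B total, with DRAM ~50%, HBM ~15%, and enterprise SSD ~11%.
TOTAL_2025_BN = 229
SHARES = {"DRAM": 0.50, "HBM": 0.15, "Enterprise SSD": 0.11}

implied_bn = {layer: TOTAL_2025_BN * share for layer, share in SHARES.items()}

for layer, value in implied_bn.items():
    print(f"{layer}: ~${value:.0f}B")
```

The implied figures line up with the per-layer numbers cited later in the article: DRAM ~$121.8B, HBM ~$35B, enterprise SSD ~$26.1B.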

In terms of profit, each layer is highly concentrated, with the top three players typically holding over 90% market share.

These profit pools can be divided into three categories:

1) High-margin oligopolistic pools at the silicon layer (HBM, embedded SRAM, QLC SSD)

2) High-margin emerging pools at the interconnect layer (CXL)

3) Scale-compounding pools at the service layer (NAS, Cloud Object Storage)

The three types of pools differ in nature, growth rate, and moats.

Why is Storage Layered?

The CPU (responsible for control) and the GPU (responsible for computation) carry only a small temporary cache on the chip itself: on-chip SRAM. This cache is only large enough to hold temporary parameters and cannot accommodate large models.

Outside these two chips, larger external memory is needed to store the large models and the context for inference.

Computation itself is fast; the latency and energy consumed moving data between storage layers are the biggest issues.

Therefore, there are currently three main directions:

1) Stack HBM, placing memory next to the GPU to shorten the data transfer distance.

2) Use CXL to pool memory to the rack level, sharing capacity.

3) Integrate computing and storage on the same wafer, achieving compute-in-memory.

These three directions will shape the profit pool of each layer over the next five years.

The specific layers are detailed below:

L0 On-chip SRAM: A Profit Pool Exclusive to TSMC

SRAM (Static Random-access Memory) is the cache inside CPUs/GPUs, embedded in each chip and not traded separately.

The standalone SRAM chip market is only about $1–1.7 billion. Leaders are Infineon (~15%), Renesas (~13%), and ISSI (~10%)—a small market.

The profit pool here belongs to TSMC: fitting more SRAM into each generation of AI chips requires buying more advanced wafers.

Over 70% of the world's advanced process wafers are in TSMC's hands. The SRAM area of every H100, B200, TPU v5, etc., ultimately translates into TSMC's revenue.

L1 HBM: The Largest Profit Pool of the AI Era

HBM (High Bandwidth Memory) is high-bandwidth memory where DRAM (Dynamic Random-access Memory) dies are vertically stacked using TSV (Through-Silicon Via) technology and then attached next to the GPU via CoWoS packaging.

HBM almost single-handedly determines how large a model an AI accelerator can run. SK hynix, Micron, and Samsung have a near 100% market share.

As of Q1 2026, the latest market share breakdown is: SK hynix 57% to 62%, Samsung 22%, Micron 21%. SK hynix has secured significant procurement shares from companies like NVIDIA and is the dominant supplier.

Micron's Q1 FY2026 earnings call mentioned that the HBM TAM (Total Addressable Market) is expected to grow at a CAGR of ~40%, from about $35 billion in 2025 to $100 billion in 2028, reaching the $100 billion mark two years earlier than previous forecasts.
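The implied growth rate is easy to verify from the endpoints cited (a back-of-envelope check, using only the $35 billion/2025 and $100 billion/2028 figures above):

```python
# Verify the ~40% CAGR implied by growing from ~$35B (2025) to ~$100B (2028).
start_bn, end_bn, years = 35, 100, 2028 - 2025

cagr = (end_bn / start_bn) ** (1 / years) - 1
print(f"Implied HBM TAM CAGR: {cagr:.1%}")  # roughly 42%, consistent with ~40%
```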

The core advantage of HBM lies in its extremely high profit margins. In Q1 2026, SK hynix's operating profit margin reached a record 72%.

Reasons for high profitability:

1) The TSV process consumes wafer capacity that would otherwise produce conventional DRAM, keeping HBM in chronic supply shortage.

2) Advanced-packaging yield is hard to improve; Samsung's earlier market share drop from 40% to 22% was partly caused by yield issues.

3) Major suppliers have been relatively cautious in capacity expansion, and achieved a DRAM ASP (Average Selling Price) increase of over 60% QoQ in Q1 2026, demonstrating a clear seller's market.

Among the three giants, SK hynix, driven by strong HBM demand, achieved annual operating profit of 47.21 trillion KRW in 2025, surpassing Samsung Electronics for the first time in history. In Q1 2026, with a 72% operating margin, it even exceeded the profitability levels of TSMC (58.1%) and NVIDIA (65%).

Micron has high growth expectations, with Bank of America raising its target price to $950 in May 2026. Samsung, with the continuous progress of HBM4 mass production, has the largest room for market share recovery.

L2 Motherboard DRAM

This layer refers to what we commonly call memory modules (DIMMs).

Motherboard DRAM includes conventional memory products like DDR5, LPDDR, GDDR, MR-DIMM, etc. It is currently the part with the highest market sales share in the AI storage system. The global DRAM market reached approximately $121.83 billion in 2025.

Samsung, SK hynix, and Micron still dominate the vast majority of the market. According to the latest data from Q4 2025, Samsung ranked first with a 36.6% market share, SK hynix second with 32.9%, and Micron third with 22.9%.

The shift in production capacity towards higher-margin HBM has helped maintain high profitability and pricing power for memory. Although the single-product margin of conventional motherboard DRAM is not as high as HBM's, its overall market size is the largest.

L3 CXL Pooling Layer

CXL (Compute Express Link) allows DRAM to be "pooled" from a single server motherboard to the entire rack level.

With CXL 3.x and beyond, all memory in a rack can be shared and scheduled across multiple GPUs, allocated on demand. This addresses a core AI inference problem: KV caches, vector databases, and RAG indexes that do not fit in a single server's memory or are too expensive to move.
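To make the capacity pressure concrete, a rough KV-cache sizing sketch helps. The formula (2 for K and V, times layers, KV heads, head dimension, sequence length, and bytes per value) is standard for transformer inference; the model shape below is an illustrative assumption, not a figure from the article:

```python
# Rough per-sequence KV-cache size for transformer inference, to illustrate
# why long contexts spill out of a single accelerator's memory.
def kv_cache_gib(layers, kv_heads, head_dim, seq_len, bytes_per_value=2):
    # 2 accounts for storing both the K and the V tensor per layer.
    total_bytes = 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value
    return total_bytes / 2**30

# Hypothetical 70B-class shape: 80 layers, 8 KV heads, head_dim 128, FP16 values.
size = kv_cache_gib(layers=80, kv_heads=8, head_dim=128, seq_len=128_000)
print(f"KV cache per 128k-token sequence: ~{size:.0f} GiB")  # ~39 GiB
```

Tens of GiB per long-context sequence, multiplied across concurrent requests, is exactly the kind of capacity that pooled rack-level memory is meant to absorb.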

The CXL memory module market was only $1.6 billion in 2024, projected to reach $23.7 billion by 2033. It appears the oligopoly of Samsung, SK hynix, and Micron will continue.

In this layer, Astera Labs focuses on retimers and intelligent memory controllers between CXL and PCIe, holding about 55% of this sub-market. Latest-quarter revenue was $308 million, up 93% YoY, with a non-GAAP gross margin of 76.4% and net profit up 85% YoY: a highly lucrative position.

L4 Enterprise SSD: The Biggest Beneficiary of the Inference Era

Enterprise NVMe SSDs are the main battleground for AI training checkpoints, RAG indexes, KV cache offloading, and model weight caching. High-capacity QLC SSDs have completely pushed HDDs out of AI data lakes.

The enterprise SSD market was about $26.1 billion in 2025, with a CAGR of 24%, projected to reach $76 billion by 2030.
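The $76 billion figure is consistent with compounding the 2025 base at the stated CAGR (a quick check using only the numbers above):

```python
# Project the enterprise SSD market forward at the cited 24% CAGR,
# from ~$26.1B in 2025 to 2030.
base_bn, cagr, years = 26.1, 0.24, 2030 - 2025

projected = base_bn * (1 + cagr) ** years
print(f"2030 projection at 24% CAGR: ~${projected:.1f}B")  # close to the ~$76B cited
```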

The competitive landscape? Still dominated by the same three giants.

Market share by revenue in Q4 2025: Samsung 36.9%, SK hynix (including Solidigm) 32.9%, Micron 14.0%, Kioxia 11.7%, SanDisk 4.4%. The top five account for about 90%.

The biggest change in this layer is the explosion of QLC SSDs in AI inference scenarios. SK hynix's subsidiary Solidigm and Kioxia have already produced single-disk products with 122 TB capacity. AI inference KV cache and RAG indexes are spilling over from HBM to SSDs.

From a profit pool perspective, enterprise SSDs don't have the extreme gross margins of HBM but enjoy dual tailwinds of capacity-driven growth and inference expansion.

SK hynix (via Solidigm) and Kioxia are relatively pure plays. Samsung and SK hynix enjoy triple-layer benefits from HBM + DRAM + NAND, making them more comprehensive AI storage platform companies.

L5 NAS and Cloud Object Storage: The Compounding Pool of Data Gravity

NAS and Cloud Object Storage are the outermost layers for AI data lakes, training corpora, backup/archiving, and cross-team collaboration. In 2025, NAS was about $39.6 billion (CAGR 17%), and Cloud Object Storage about $9.1 billion (CAGR 16%).

Major vendors for enterprise file storage are NetApp, Dell, HPE, Huawei; for SMBs, Synology and QNAP. For Cloud Object Storage, using IaaS share estimates, AWS ~31–32%, Azure ~23–24%, Google Cloud ~11–12%, the three combined ~65–70%.

Profits in this layer mainly come from long-term hosting, data egress fees, and ecosystem lock-in.

To summarize:

1) DRAM has the largest market but the lowest gross margins (30–40%); HBM's market is only one-third of DRAM's, but its gross margin is double (60%+); CXL Retimers have the smallest market but the highest gross margin (76%+). The closer the layer is to computing, the scarcer and more lucrative it is.

2) Incremental profit pool growth primarily comes from three areas: HBM (CAGR 28%), Enterprise SSD (CAGR 24%), and CXL Pooling (CAGR 37%).

3) Each layer has different business barriers: HBM relies on technical barriers (TSV, CoWoS, yield ramp); CXL-type relies on IP and certification (single supply chain for Retimers); service-type relies on switching costs.

Related Questions

Q: How is the AI storage hierarchy structured, and what are the six layers mentioned in the article?

A: The AI storage hierarchy is divided into six layers based on proximity to the compute unit, with capacity increasing as distance increases. The layers are: 1) On-chip SRAM, 2) HBM, 3) Motherboard DRAM, 4) CXL Pooling Layer, 5) Enterprise SSD, and 6) NAS and Cloud Object Storage.

Q: Which layer in the AI storage stack is described as the most lucrative profit pool in the AI era, and what are the market share dynamics among its key players?

A: HBM is described as the largest profit pool in the AI era. As of Q1 2026, the market is dominated by SK hynix (57-62%), Samsung (22%), and Micron (21%). SK hynix leads, driven by strong demand from customers like NVIDIA.

Q: What are the three primary strategic directions mentioned for optimizing data movement between storage layers, and how do they impact future profit pools?

A: The three primary directions are: 1) stacking HBM to shorten data transfer distances, 2) using CXL for rack-level memory pooling and capacity sharing, and 3) integrating compute and storage on the same wafer (compute-in-memory). These trends will shape the profit pools across the storage hierarchy over the next five years.

Q: According to the article, what are the three distinct categories of profit pools in the AI storage landscape, and what are their key characteristics?

A: The three categories are: 1) high-margin oligopoly pools (e.g., HBM, embedded SRAM, QLC SSD) characterized by technological dominance and high margins; 2) high-margin emerging pools (e.g., CXL) driven by new interconnects; and 3) scale and recurring-revenue pools (e.g., NAS, cloud object storage) based on data gravity, long-term hosting, and ecosystem lock-in.

Q: Which company is highlighted as capturing the profit pool for on-chip SRAM (L0), and what is the underlying reason?

A: TSMC captures the profit pool for on-chip SRAM. Integrating more SRAM into each new generation of AI chips (like NVIDIA's H100/B200 and Google's TPU v5) requires more advanced wafers, over 70% of which are supplied by TSMC; the SRAM area translates directly into TSMC revenue.
