The era of digital abundance, when any enthusiast could build a home server capable of competing with the power of a small company, is coming to an end. Owning your own advanced hardware is increasingly becoming an elite privilege amid rising memory chip prices and longer pre-order queues.
In this new ForkLog article, we explore why graphics cards have become a resource for the AI industry, why Nvidia no longer favors gamers, and why freelance designers now have to rent computing power from cloud data centers. But the main question we aimed to answer is: how will the chip shortage affect blockchain decentralization, where SSDs and DRAMs often play a key role.
Techno-Feudalism or Temporary Difficulties
Recently, based on statements from AI industry leaders and memory chip manufacturers, it seems that the era of owning a powerful personal computer (PC) is gradually ending.
Much discussed is Jeff Bezos’s 2024 speech, in which he compared using a PC to running your own electric generator in the era of centralized power grids. Some in the community now see him as a prophet.
The latest hardware is becoming the main computational resource for training and running large language models (LLMs). AI demand is draining memory production capacity toward HBM microchips, capacity that previously served the consumer SSD and RAM segment. As component prices rise, the market may lose an entire class of budget devices this year.
In early February, TrendForce analysts raised their memory price forecast: they now expect consumer DRAM contract prices to jump 90–95% in Q1 2026 amid the AI boom, up from a previous forecast of 55–60%.
Additionally, training LLMs requires enormous amounts of data. The corporate sector has been buying up SSDs of 2 TB and above with high write endurance, and silicon manufacturers, for whom serving the AI industry is more profitable, are planning to reorganize their production capacity.
At the end of 2025, Micron Technology — previously one of the most active supporters of maintaining the desktop segment — announced the shutdown of its Crucial consumer line. Production will cease in Q2 2026 after nearly 30 years of the brand’s existence.
Micron also plans to increase production of HBM microchips. The company invested $9.6 billion in new facilities in Hiroshima, Japan.
On February 12, Samsung Electronics announced the start of shipments of advanced HBM4 chips to unnamed clients. The move aims to close the gap with competitors such as SK Hynix in critical AI accelerator components.
The world’s largest microchip manufacturer is in a difficult position: it is the main memory supplier for Nvidia and a leader in smartphones and consumer electronics. It is crucial for the company to maintain high-margin AI contracts without weakening its position in gadget manufacturing.
Last September, Samsung Semiconductor’s management tried to strike a balance: the company confirmed that its production lines for GDDR7, the memory used in top-tier graphics cards, can continue to serve gamers, content creators, and professional workstations.
These chips power Nvidia’s flagship gaming card, the GeForce RTX 5090. Announced in January 2025, it remains the undisputed leader, and the $1,999 launch price announced a year ago is now far from reality: at the time of writing, listings range from $4,000 to $5,000.
Source: Nvidia.
The highly adaptive Chinese market is leveraging the opportunity. According to Nikkei Asia, major Chinese memory manufacturers CXMT and YMTC are planning significant capacity expansions.
By 2027, they aim to launch factories in Shanghai and Wuhan, focusing primarily on DRAM and NAND, rather than HBM, as market leaders do.
Alex Petrov, former CIO/CTO of Bitfury Group and co-founder of Hyperfusion, believes there is no point in hoping prices will fall; instead, costs should be redistributed.
“Don’t wait, live in the here and now. If you need hardware for work, mining, or running a node — buy it now, accepting the high prices, and postpone whatever you can temporarily do without. Pent-up demand by 2028 could be huge and unpredictable, so it’s worth counting on older DDR3/4 and the release of new DDR6,” the expert said in a comment to ForkLog.
Why Graphics Cards?
Why were graphics cards, which let people play Quake III Arena in 2000 and Fallout 4 in 2015, first commandeered by PoW mining and then absorbed by the AI industry? The answer lies in the design of graphics accelerators, best explained by comparison with CPUs.
A CPU is a genius capable of solving any type of software task: writing poetry, calculating taxes, managing an operating system. But it performs actions sequentially on each core.
In contrast, a GPU is like a factory with thousands of simple workers. Each is less intelligent than a genius, but they can operate simultaneously.
To render a single frame in a game, the colors of millions of pixels must be calculated: millions of identical mathematical operations, repeated many times per second. The graphics chip was born for exactly this kind of parallel computation.
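As a rough illustration of the difference, here is a minimal Python sketch; NumPy’s vectorization stands in for the GPU’s per-pixel parallelism, and the frame data and brightness formula are invented for the example:

```python
import numpy as np

# A toy 1080p frame: about 2 million pixels awaiting the same brightness math.
height, width = 1080, 1920
pixels = np.random.rand(height, width)  # stand-in for raw pixel values in [0, 1]

def shade_sequential(frame):
    """CPU-style: one worker visits every pixel in turn."""
    out = np.empty_like(frame)
    for y in range(frame.shape[0]):
        for x in range(frame.shape[1]):
            out[y, x] = min(frame[y, x] * 1.2, 1.0)  # identical formula per pixel
    return out

def shade_parallel(frame):
    """GPU-style: the identical operation applied to all pixels at once."""
    return np.minimum(frame * 1.2, 1.0)

# Same result, radically different execution model.
assert np.allclose(shade_sequential(pixels[:64, :64]), shade_parallel(pixels[:64, :64]))
```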
A similar story played out with PoW mining on graphics cards. Mining is a kind of lottery in which the device tries random numbers billions of times per second, hunting for the correct hash. GPUs were perfect for this, triggering the first wave of shortages, which lasted until Ethereum switched to PoS in 2022.
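The “lottery” fits in a few lines. Below is a deliberately simplified Python sketch of the search loop, with toy header bytes and an easy difficulty rather than real Bitcoin consensus parameters:

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int = 16) -> int:
    """Try nonces until the double-SHA-256 hash falls below the target."""
    target = 1 << (256 - difficulty_bits)  # smaller target means a harder puzzle
    nonce = 0
    while True:
        data = block_header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # the winning "lottery ticket"
        nonce += 1

print(mine(b"toy-block-header"))  # typically tens of thousands of attempts here
```

Every attempt is the same cheap, independent hash, exactly the workload that thousands of GPU cores can grind through in parallel.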
Graphics processors then proved a real find for AI. Modern LLMs like ChatGPT or Gemini are essentially giant tables of numbers (matrices). Their training involves endless matrix multiplications to adjust the “weights”, the connections between neurons.
It turns out that the math creating water reflections in Cyberpunk 2077 is the same linear algebra underlying neural network training. But AI requires not only powerful computations but also colossal data transfer speeds. Ordinary gaming VRAM isn’t enough — it has been replaced by expensive, scarce HBM, which all tech giants are now fighting over.
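A minimal sketch of what that training loop boils down to, with invented sizes and random data; real LLMs do the same thing at billion-parameter scale on HBM-fed accelerators:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 512))           # a batch of 32 input vectors
W = rng.standard_normal((512, 256)) * 0.01   # the "weights" being trained
y_target = rng.standard_normal((32, 256))    # what the layer should output

for step in range(200):
    y = x @ W                      # forward pass: one big matrix multiplication
    grad = x.T @ (y - y_target)    # backward pass: another matrix multiplication
    W -= 1e-4 * grad               # nudge the weights toward the target

print("loss:", np.mean((x @ W - y_target) ** 2))  # shrinks as W adjusts
```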
Nvidia recognized this trend early and, starting with the Volta architecture, began adding “tensor cores” to its chips: units optimized specifically for AI tasks that multiply entire small matrices in a single operation.
A GPU by the Hour and the Loss of Offline
Over the next two years, content creators, video editors, designers, gamers, programmers, AI architects, and everyone else critically dependent on powerful hardware will face a choice: rent computing power online or pay significantly more to upgrade their PCs.
Given the shortage and queues for components, demand for subscriptions is growing, making cloud data centers more customer-oriented. Several companies offer flexible access to computing resources and GPUs for rent, such as Lambda Labs, Vast.ai, Hyperfusion, LeaderGPU, Hostkey, and others.
RunPod offers access to the scarce flagship RTX 5090 at $0.89/hour.
Source: RunPod.
The Shadow platform provides remote desktops with no restrictions on running games or professional software for engineers and designers. Similar services such as GeForce Now or Xbox Cloud Gaming don’t offer that freedom and are priced differently.
Source: Shadow.
Even now, with a stable internet connection, a home smart TV can be turned into a powerful workstation: just order the necessary hardware from a cloud provider. For many, this opens previously inaccessible opportunities, but responsibility for quality and uninterrupted operation shifts from users to data center owners, who may prioritize bigger clients or impose restrictions.
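At RunPod’s quoted rate, the rent-versus-buy arithmetic is straightforward. A rough estimate using the street prices cited above, deliberately ignoring electricity, resale value, and future price drift:

```python
card_price = 4500      # midpoint of the $4,000-5,000 street range for an RTX 5090
rent_per_hour = 0.89   # RunPod's quoted hourly rate

breakeven_hours = card_price / rent_per_hour
print(f"Break-even after about {breakeven_hours:,.0f} GPU-hours")
# Roughly 5,000 hours, i.e. about 21 months of daily 8-hour use,
# before buying the card outright pays for itself.
```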
Petrov noted that data centers guarantee 24/7 availability, backup power, connection redundancy, and proper maintenance.
“You can store some things at home or at work — but it’s often more expensive and less convenient,” he added.
He also mentioned that many designers, video editors, producers, and artists are already being displaced by AI. Beyond a certain level, they have to turn to specialized AI applications that “home hardware” cannot handle.
“While the requirements for LLMs grow exponentially, only small models can be kept on phones or at home. Expert-level large models require different scales, capacities, and speeds, which are provided by cloud data centers,” Petrov explained.
Bitcoin Back in Front
The entire IT sector depends on components, but for the blockchain industry, the microchip shortage poses a real threat to decentralization and power redistribution.
“Rising memory prices are a consequence of decisions made by certain commercial companies. Blockchain nodes are not the only ones affected; all devices with new DDR5 memory — smartphones, PCs, everything — are increasing in price. This also forces blockchains to become smarter and more economical, seeking different ways and solutions,” said the Hyperfusion co-founder.
He pointed out the paradox of the current situation, where PoS networks are struggling:
“Proof-of-Stake has reduced mining energy consumption but shifted the load from electricity to memory and disks for businesses and users. In conditions where components have become 3–5 times more expensive, PoS chains are caught in a ‘perfect storm’ of reality.”
In networks like Ethereum and Solana, the principle is “easy to create, but very costly to verify.” Given the number of nodes and the fact that verification takes seven to nine steps, the entry barrier for PoS validators is often low at deployment but expensive in ongoing operation.
Technical requirements for Solana node operators. Source: Solana Labs.
Petrov said that in Ethereum, each node must keep the entire database of accounts, contracts, and balances quickly accessible: tens of millions of objects that are constantly updated. Fast operation requires high-speed RAM and NVMe SSDs in a RAID array.
Nodes must process every block. In networks with short block times (400 ms for Solana, 12 s for Ethereum), verifying signatures and executing transactions demands enormous resources, as the back-of-the-envelope arithmetic below shows. Full archival nodes face even steeper requirements: an Ethereum archival node needs 128 GB of RAM and at least 12 TB of SSD storage.
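A quick calculation using only the block times cited above, with everything else ignored:

```python
SECONDS_PER_DAY = 86_400

solana_blocks_per_day = SECONDS_PER_DAY / 0.4   # 400 ms block time
ethereum_blocks_per_day = SECONDS_PER_DAY / 12  # 12 s block time

print(f"Solana:   {solana_blocks_per_day:,.0f} blocks/day")    # 216,000
print(f"Ethereum: {ethereum_blocks_per_day:,.0f} blocks/day")  # 7,200
```

A Solana validator therefore verifies roughly 30 times more blocks per day, each carrying its own signatures and transactions.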
The rising costs of components reduce validator profitability and create a new centralization risk. In January, the number of active Solana nodes dropped to 800 — the lowest since 2021. As support for small node operators diminishes, covering voting and infrastructure costs becomes harder without sufficient delegated stake.
At the time of writing, the Nakamoto coefficient of the network has fallen to 19 (it was 33 in 2023).
The Ethereum Foundation is already discussing initiatives to lower the infrastructure barrier. In May 2025, Vitalik Buterin proposed EIP-4444, which could significantly reduce disk space requirements: nodes would store only the last 36 days of transaction history while retaining the current state and Merkle tree structure. This approach reduces storage needs without compromising verification of the current blockchain state.
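Conceptually, the proposal amounts to pruning old block bodies while leaving the live state untouched. A hypothetical toy sketch of that idea in Python; the data layout is invented, and real clients handle history retention very differently:

```python
import time

RETENTION_SECONDS = 36 * 24 * 3600  # the 36-day window mentioned above

def prune_history(blocks, now=None):
    """Drop block bodies older than the retention window.

    `blocks` maps block number -> {"timestamp": ..., "body": ...}.
    The node's current state and Merkle roots live elsewhere and are
    untouched, so verification of the present chain state still works.
    """
    cutoff = (now or time.time()) - RETENTION_SECONDS
    return {num: blk for num, blk in blocks.items() if blk["timestamp"] >= cutoff}
```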
In the new “silicon curtain” reality, Bitcoin remains the “people’s blockchain.”
“Bitcoin doesn’t check the state, only UTXOs, which are easy to cache. The PoW creation stage requires ASIC farms and huge energy-efficient capacities, but validation remains extremely lightweight. Verifying a PoW result is very simple and fast — that’s its beauty. The steps on a validator node: get block data, check its hash, one or two hash operations, compare target/difficulty, and it’s clear — yes or no,” Petrov explained.
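Those steps map almost one-to-one onto code. A minimal sketch, reusing the toy mining parameters from the earlier example rather than the real Bitcoin header layout:

```python
import hashlib

def verify_pow(header: bytes, nonce: int, difficulty_bits: int = 16) -> bool:
    """Re-run Petrov's checklist: hash the candidate block, compare to target."""
    target = 1 << (256 - difficulty_bits)
    data = header + nonce.to_bytes(8, "little")
    digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
    return int.from_bytes(digest, "big") < target  # yes or no

# Finding the nonce may take millions of hashes; checking it costs exactly two.
```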
For these reasons, a full Bitcoin node can run even on a lightweight server or desktop, and sometimes on newer Raspberry Pi devices with 4–8 GB of RAM. The impact of memory shortages on PoW nodes is minimal: SSD prices are rising, but capacities up to 1 TB remain affordable, the expert added.
What’s Next?
Petrov believes the era of personal hardware isn’t over — there are simply different approaches and solutions for specific tasks:
“I love the saying: ‘The cloud is just someone else’s computer on the network.’”
The industry is rushing to find solutions to the chip crisis by developing new technologies:
Magnetoresistive RAM (MRAM) — non-volatile, roughly 1,000 times faster than SSDs and more reliable than traditional RAM. From 2026 it is expected to begin replacing memory in critical systems (automotive, aerospace);
CXL 3.1 (Compute Express Link) — allows servers to share their RAM over the network. A salvation for data centers but increases user dependence on the cloud.
The current crisis isn’t the first in history but the most structural. Previously, memory chips faced similar challenges:
1986 — the US imposed minimum prices (a price floor) on Japanese memory chips, and DRAM prices tripled within a year. American PC manufacturers (Commodore, Apple) nearly went bankrupt, and Intel exited the memory market to focus on processors;
2011 — Thailand floods submerged Western Digital factories producing 40% of the world’s HDDs. Prices soared 190% and didn’t return to normal for two years.
The exponential growth of AI makes it impossible to predict market behavior accurately. New production capacity due to come online by 2028 could ease the crisis, provided current expansion plans hold.
If AI agents become the backbone of the economy, demand for chips will outpace production. In such a scenario, owning a powerful PC will become as elitist as owning a collectible horse. Whatever the future holds, timely thermal paste replacement remains essential.