NVIDIA May Shift AI Servers to LPDDR, Signaling Further Memory Tightness in 2025
Industry reports indicate that NVIDIA is evaluating the adoption of LPDDR memory in its next-generation AI server architectures, a move that could reshape memory demand patterns across the entire semiconductor supply chain. While LPDDR has traditionally been used in mobile devices and edge computing, steady gains in bandwidth per watt have made it increasingly attractive for AI system designs that push thermal and energy limits.

A Strategic Shift Driven by Efficiency and Thermal Constraints
AI accelerators have entered a phase of exponential scaling, with models growing larger and power consumption rising sharply.
HBM remains the industry’s gold standard for ultra-high bandwidth, but its power density, thermal output, and manufacturing constraints pose challenges for certain system configurations.
Industry analysts suggest that NVIDIA’s evaluation of LPDDR aims to:
- Increase bandwidth per watt, improving overall system efficiency
- Reduce thermal load for specific AI server workloads
- Provide an additional memory option to mitigate HBM bottlenecks
- Diversify memory sources as demand for HBM intensifies globally
If implemented, this architectural change could impact NVIDIA’s broad ecosystem of partners, including DRAM vendors, data center operators, and OEM system builders.
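The bandwidth-per-watt argument above can be illustrated with a minimal sketch. The figures used here are round-number placeholders chosen for illustration only, not vendor specifications for any actual HBM or LPDDR part:

```python
# Illustrative bandwidth-per-watt comparison for two memory types.
# All figures are assumed, round-number placeholders, not real specs.

def bandwidth_per_watt(bandwidth_gbps: float, power_w: float) -> float:
    """Return GB/s of memory bandwidth delivered per watt consumed."""
    return bandwidth_gbps / power_w

# Assumed example: an HBM stack delivering 1000 GB/s at 30 W
hbm_efficiency = bandwidth_per_watt(1000, 30)

# Assumed example: an LPDDR package delivering 60 GB/s at 1.5 W
lpddr_efficiency = bandwidth_per_watt(60, 1.5)

print(f"HBM:   {hbm_efficiency:.1f} GB/s per W")
print(f"LPDDR: {lpddr_efficiency:.1f} GB/s per W")
```

The point of the sketch is structural, not numerical: HBM wins decisively on absolute bandwidth per stack, while LPDDR can be competitive on the efficiency ratio, which is what matters for power- and thermally-constrained server configurations.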
Supply Chain Implications: LPDDR Could Face Immediate Strain
The global memory market is already under stress.
HBM is projected to remain in shortage through 2027, while DDR5 faces tight supply due to rapid AI server deployment.
LPDDR production lines are shared with high-volume mobile and consumer SoCs, meaning that a sudden spike in demand from AI servers could quickly overwhelm available capacity.
This raises the possibility of a multi-segment DRAM squeeze in 2025, simultaneously affecting HBM, DDR5, and LPDDR.
Industry Impact and Vendor Responses
Memory manufacturers may need to adjust CapEx plans and wafer-allocation strategies.
SK hynix and Samsung produce LPDDR at advanced 1a/1b-class nodes, capacity that is already heavily allocated to flagship smartphones.
A shift by NVIDIA could:
- Accelerate LPDDR migration to newer nodes
- Push vendors to reallocate wafers away from consumer products
- Trigger ASP (average selling price) increases across multiple DRAM categories
- Elevate the strategic importance of power-efficient memory in AI system roadmaps
Market Outlook: A Potential Ripple Across Servers and AI Infrastructure
AI infrastructure spending by Microsoft, Google, Meta, and other cloud giants has reached historic levels, with combined Q3 CapEx of $78 billion (+89% YoY).
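As a quick sanity check on the growth figure above, the prior-year base implied by the reported $78 billion and +89% YoY can be backed out directly:

```python
# Back out the implied prior-year figure from the reported YoY growth.
# Both inputs are taken from the article: $78B combined Q3 CapEx, +89% YoY.
current_capex_b = 78.0   # combined Q3 CapEx, billions USD
yoy_growth = 0.89        # +89% year over year

prior_year_b = current_capex_b / (1 + yoy_growth)
print(f"Implied prior-year Q3 CapEx: ${prior_year_b:.1f}B")  # ~$41.3B
```

In other words, the hyperscalers' combined quarterly AI infrastructure spend has nearly doubled from a base of roughly $41 billion a year earlier.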
Any NVIDIA memory architecture transition would quickly cascade across server OEMs such as Supermicro, Wiwynn, and QCT, leading to rapid adjustments in memory BOM planning for AI servers launching in 2025–2026.
Key Points to Watch
- LPDDR demand from AI servers may significantly tighten memory supply in 2025.
- Competition for DRAM capacity across HBM, DDR5, and LPDDR is expected to intensify.
- Memory vendors may shift product mix and increase wafer investment for low-power DRAM.
- Potential upward pricing pressure on server DRAM, HBM, and high-speed LPDDR.
- AI server architecture choices by NVIDIA could influence global DRAM market structure for years.