Who could have predicted that the first thunderclap of the 2026 AI computing landscape would come jointly from Lenovo and NVIDIA?
On January 6th, Pacific Time, at the TechWorld conference in Las Vegas, Lenovo Chairman and CEO Yuanqing Yang and NVIDIA Founder and CEO Jensen Huang took the stage together to officially announce the co-launch of the “Lenovo AI Cloud Super Factory.”
This is no ordinary technical collaboration. Huang hailed it as “the beginning of the AI industrial revolution”—a heavyweight strategic move featuring gigawatt-scale computing power, deployment in hours, and over 30% cost savings. By joining forces, these two giants are directly reshaping the new paradigm for AI implementation, making the democratization of computing power in the era of trillion-parameter models a tangible reality.
🌟 Giants Unite: 30-Year Partnership Culminates in a Computing Power “Game-Changer”
The partnership between Lenovo and NVIDIA actually spans three decades. This time, they are pooling their core strengths, creating an unbeatable synergy through complementary advantages:
✅ Lenovo: Brings globally leading server and data center technology, its Neptune liquid cooling battle-tested in supercomputing centers, and end-to-end capabilities spanning design, manufacturing, and deployment. A significant portion of the world’s supercomputers are built on its technology.
✅ NVIDIA: Controls the world’s core GPU and accelerated computing architecture. Its newly announced Blackwell and upcoming Vera Rubin architectures are the “computing heart” for large-scale AI training and inference.
Yuanqing Yang stated unequivocally at the event: business collaboration between Lenovo and NVIDIA will quadruple in scale over the next 3-4 years, with this AI Cloud Super Factory serving as the central vehicle to achieve that goal.
🔧 Technical Core: Dual Architecture + Liquid Cooling – How Potent is This Computing Foundation?
The competitive edge of this “Super Factory” lies entirely in its “hardcore specs”—every parameter is officially confirmed, no empty promises:
Dual Computing Architectures, Performance Maxed Out
The Super Factory leads with the NVIDIA Blackwell Ultra architecture paired with Lenovo’s GB300 NVL72 system, featuring a fully liquid-cooled rack design. A single rack integrates 72 Blackwell Ultra GPUs and 36 Grace CPUs, delivering high-density computing power for massive AI training and inference.
More impressively, it also supports NVIDIA’s next-generation Vera Rubin NVL72 flagship system, slated for delivery in the second half of 2026:
- A single system integrates 72 Rubin GPUs and 36 Vera CPUs.
- Rubin GPUs feature the 3rd-generation Transformer Engine, delivering inference performance of 50 PFLOPS—a 5x increase over the previous generation.
- Inference token cost plummets to 1/10th of the previous generation.
Compatibility across dual architectures allows enterprises to achieve smooth computing power upgrades without redundant investment, maximizing long-term ROI.
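Taken at face value, the quoted Vera Rubin NVL72 figures make the per-rack arithmetic easy to sketch. The aggregation below is a back-of-envelope illustration only: multiplying peak per-GPU numbers is an assumption, since sustained throughput depends on workload, interconnect, and software stack.

```python
# Back-of-envelope arithmetic from the Vera Rubin NVL72 figures quoted above.
# Aggregating peak per-GPU numbers by simple multiplication is an illustrative
# assumption; sustained performance depends on workload and interconnect.

RUBIN_GPUS_PER_SYSTEM = 72        # 72 Rubin GPUs per NVL72 system
INFERENCE_PFLOPS_PER_GPU = 50     # quoted per-GPU inference performance
GEN_SPEEDUP = 5                   # quoted 5x over the previous generation
TOKEN_COST_RATIO = 1 / 10         # quoted token cost vs. previous generation

peak_system_pflops = RUBIN_GPUS_PER_SYSTEM * INFERENCE_PFLOPS_PER_GPU
prev_gen_pflops_per_gpu = INFERENCE_PFLOPS_PER_GPU / GEN_SPEEDUP

print(f"Peak per-system inference: {peak_system_pflops} PFLOPS")          # 3600
print(f"Implied previous-gen per-GPU: {prev_gen_pflops_per_gpu} PFLOPS")  # 10.0
print(f"Token cost vs. previous gen: {TOKEN_COST_RATIO:.0%}")             # 10%
```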
Neptune Liquid Cooling Empowerment: Green and Efficient
What’s the biggest fear with high-density computing? Heat! Lenovo brings its “trump card”—Neptune liquid cooling technology.
Validated in supercomputing centers worldwide, this technology not only keeps the Super Factory’s tens of thousands of GPUs running stably but also significantly cuts energy consumption. Based on joint deployments between Lenovo and Suiyuan Huachuang, it can slash AI server energy consumption by over 30% and halve operational and maintenance costs.
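As a rough per-rack sketch of what those two percentages mean: only the 30% energy reduction and the halved O&M cost come from the article; the baseline rack power and annual O&M figure below are hypothetical assumptions for illustration.

```python
# Illustrative savings from the quoted Neptune liquid-cooling figures.
# Only the >30% energy reduction and halved O&M cost come from the article;
# the baseline rack power and annual O&M cost are hypothetical assumptions.

baseline_rack_kw = 120.0      # hypothetical air-cooled rack power draw (kW)
annual_om_cost = 200_000.0    # hypothetical annual O&M cost per rack (USD)

ENERGY_REDUCTION = 0.30       # article: energy consumption cut by over 30%
OM_REDUCTION = 0.50           # article: O&M costs halved

cooled_rack_kw = baseline_rack_kw * (1 - ENERGY_REDUCTION)
reduced_om_cost = annual_om_cost * (1 - OM_REDUCTION)

print(f"Liquid-cooled rack power: {cooled_rack_kw:.0f} kW")   # 84 kW
print(f"Annual O&M after halving: ${reduced_om_cost:,.0f}")   # $100,000
```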
Furthermore, the Super Factory integrates Lenovo’s Neptune liquid cooling infrastructure, NVIDIA’s accelerated computing platforms, and full lifecycle services, covering everything from design and deployment to operation. It eliminates the pain points of the traditional “hardware stacking” approach to AI infrastructure.
🚀 Core Advantages: Solving 3 Major Pain Points, Enterprise AI Deployment No Longer Daunting
For enterprises, the most practical aspect of this “Super Factory” is its ability to solve long-standing AI deployment challenges, with each advantage backed by data:
1. Deployment Efficiency: Compressed from “Months” to “Hours”
Traditional AI infrastructure deployment often takes weeks or months. The Super Factory adopts a “plug-and-play” computing power package model, slashing deployment cycles to the hour level.
Crucially, it can reduce AI deployment time for cloud service providers by over 50%, significantly improving “time to first token.” SMEs no longer need to build systems from scratch; they can focus on model development and business implementation.
2. Cost Control: Over 30% Savings on Initial Investment
Yuanqing Yang outlined the economics at the event: building a self-owned ten-thousand-GPU cluster requires infrastructure investment in the hundreds of millions of dollars.
By adopting the Super Factory model, enterprises can save over 30% on initial investment. Coupled with Lenovo’s Wanquan Heterogeneous Intelligent Computing 4.0 technology, which can boost computing power utilization by over 40%, operational costs are further reduced, achieving “no wasted computing power, controllable costs.”
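To make the economics concrete, here is a minimal sketch under stated assumptions: the article gives only “hundreds of millions of dollars” for a self-built ten-thousand-GPU cluster, so the $300M baseline below is a hypothetical placeholder; the 30% capex saving and 40% utilization gain are the quoted figures.

```python
# Hypothetical economics sketch. The $300M baseline is an assumption
# (the article says only "hundreds of millions of dollars"); the 30%
# saving and 40% utilization gain are the figures quoted above.

self_build_capex = 300_000_000        # assumed self-built cluster capex (USD)
CAPEX_SAVING = 0.30                   # article: >30% initial-investment saving
UTILIZATION_GAIN = 0.40               # article: >40% utilization boost

factory_capex = self_build_capex * (1 - CAPEX_SAVING)

# Relative cost per unit of *useful* compute: lower capex, more of it utilized.
relative_cost = (1 - CAPEX_SAVING) / (1 + UTILIZATION_GAIN)

print(f"Super Factory capex: ${factory_capex:,.0f}")               # $210,000,000
print(f"Cost per useful FLOP vs. self-build: {relative_cost:.0%}") # 50%
```

Under these (illustrative) numbers, the capex saving and the utilization gain compound: each useful unit of compute costs roughly half what it would in a self-built cluster.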
3. Scalability: Flexible Expansion from Hundreds to Hundreds of Thousands of GPUs
The Super Factory boasts exceptional elastic scaling capabilities. The system can scale linearly from hundreds to hundreds of thousands of GPUs. AI startups or enterprises experiencing rapid growth no longer need to worry about “insufficient computing power” or “resource waste.”
Jensen Huang also emphasized that the platform achieves “predictable performance, reproducible deployments, and manageable operations,” thoroughly eliminating the uncertainties of AI deployment.
🏭 Real-World Deployment: Multiple Use Cases Blooming – Not a “Concept” but a “Practical Solution”
Unlike many technologies confined to PowerPoint slides, the Lenovo x NVIDIA AI Cloud Super Factory has already been deployed in multiple scenarios. Cases and data are from Lenovo’s official disclosures and partner announcements.
Case 1: Large-Scale Intelligent Computing Center – Qingyang Domestic Suiyuan Cluster
In collaboration with Suiyuan Huachuang, Lenovo brought online a large-scale domestic Suiyuan cluster in Qingyang. Nearly 800 AI servers have been deployed, forming a heterogeneous computing power pool approaching 1,000 petaflops.
The jointly developed computing power management platform coordinates computing power at the ten-thousand-GPU scale and has already supported the efficient rollout of several large domestic models and industry applications.
Case 2: Smart Healthcare – Xunshang Medical “AI Factory”
Lenovo built a scenario-specific AI factory for Xunshang Medical, centered on “AI Large Models + RPA + Integration Platform.” It integrates data from multiple hospital systems like HIS and finance to construct a “360-degree holistic patient view.”
This thoroughly addresses issues like hospital information silos and cumbersome processes, significantly enhancing patient experience and management efficiency.
Case 3: Ecosystem Development – Partnering with Volcano Engine to Empower Developers
Lenovo also launched the “Tianxi AI Ecosystem Intelligent Agent Pioneer Plan” with Volcano Engine, providing developers with resources and mentorship, even pledging that “100% of the profits generated by intelligent agents in the next 12 months will belong to the developers.”
Simultaneously, Lenovo served as a lead drafter of the industry’s first Model Training and Inference Service Standard, filling an industry gap and promoting the standardization and scaling of AI production.
🔮 Future Outlook: The AI Industrialization Era, Where Computing Power Becomes as Ubiquitous as Electricity
Jensen Huang predicts that the next AI frontier lies in integrating various technologies into enterprise intelligence: the future holds not just cloud intelligence, but also enterprise and industrial intelligence closer to the data source.
This AI Cloud Super Factory is precisely a product aligned with this trend—it pushes AI infrastructure towards industrialization, making AI an infrastructure that can be used on a large scale and flexibly dispatched, much like electricity.
Yuanqing Yang also stated that Lenovo will continue to deepen its collaboration with NVIDIA. Leveraging the advantages of the Super Factory and its “1+3+N” AI server architecture, Lenovo aims to help more enterprises transition from “computing centers” to “AI factories.”
Industry analysis suggests this collaboration is not just about complementary strengths between two giants; it marks a shift in the AI computing industry from “decentralized construction” to “scalability and standardization.”
With the full-scale deployment of the Super Factory, the computing power gap between enterprises of different sizes will further narrow. AI technology will truly integrate into all industries, heralding a brand-new era of AI industrialization already on the horizon!
✨ Follow us for the latest updates on the Lenovo x NVIDIA partnership and to unlock more frontier insights into AI computing power.