
Enter the 8 MW CDU Era
The next evolution arrived just days later.
On Jan. 20, DCX announced its second-generation facility-scale unit, the FDU V2AT2, pushing capacity into territory previously out of reach for single-CDU platforms.
The system delivers up to 8.15 megawatts of heat transfer capacity with record flow rates designed to support 45°C warm-water cooling, aligning directly with NVIDIA’s roadmap for rack-scale AI systems, including Vera Rubin-class deployments.
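To put "record flow rates" in perspective, the basic sensible-heat relation Q = ṁ·cp·ΔT gives a rough sense of the water volumes involved. The sketch below is illustrative only: the 10 K loop temperature rise and the water properties are assumptions, not published FDU V2AT2 specifications.

```python
# Back-of-the-envelope flow-rate estimate for an 8.15 MW heat load.
# The 10 K loop temperature rise and water properties are illustrative
# assumptions, not published FDU V2AT2 specifications.

CAPACITY_W = 8.15e6     # heat transfer capacity, watts
CP_WATER = 4186.0       # specific heat of water, J/(kg*K)
DELTA_T_K = 10.0        # assumed coolant temperature rise across the IT load, K
RHO_WATER = 990.0       # approximate density of water near 45 degC, kg/m^3

# Q = m_dot * cp * delta_T  ->  m_dot = Q / (cp * delta_T)
mass_flow_kg_s = CAPACITY_W / (CP_WATER * DELTA_T_K)
volume_flow_m3_h = mass_flow_kg_s / RHO_WATER * 3600.0

print(f"Mass flow:   {mass_flow_kg_s:,.0f} kg/s")
print(f"Volume flow: {volume_flow_m3_h:,.0f} m^3/h "
      f"(~{volume_flow_m3_h * 4.403:,.0f} US gpm)")
```

At a 10 K rise, that works out to roughly 700 cubic meters of water per hour through a single unit; a tighter temperature rise would push the figure higher still.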
That temperature target is significant.
Warm-water cooling at this level allows many facilities to eliminate traditional chillers for heat rejection, depending on climate and deployment design. Instead of relying on compressor-driven refrigeration, operators can shift toward dry coolers or other simplified heat rejection strategies.
The result:
• Reduced mechanical complexity
• Lower energy consumption
• Improved efficiency at scale
• New opportunities for heat reuse
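The energy line item is straightforward to approximate. The sketch below compares the parasitic power of a compressor-based chiller plant against dry-cooler fans for one unit's worth of load; the chiller COP and fan-power fraction are generic industry assumptions, not figures from DCX or NVIDIA.

```python
# Rough comparison of parasitic cooling power: chiller plant vs. dry coolers.
# COP and fan-power figures are generic industry assumptions, not DCX data.

HEAT_LOAD_MW = 8.15              # one FDU V2AT2 worth of heat rejection
CHILLER_COP = 4.5                # assumed efficiency of a compressor-based chiller plant
DRY_COOLER_FAN_FRACTION = 0.02   # assumed fan power as a fraction of heat rejected

chiller_power_mw = HEAT_LOAD_MW / CHILLER_COP
dry_cooler_power_mw = HEAT_LOAD_MW * DRY_COOLER_FAN_FRACTION

print(f"Chiller plant input power: ~{chiller_power_mw:.2f} MW")
print(f"Dry-cooler fan power:      ~{dry_cooler_power_mw:.2f} MW")
print(f"Indicative saving:         ~{chiller_power_mw - dry_cooler_power_mw:.2f} MW per unit")
```

Under those assumptions, each 8.15 MW unit running chiller-free avoids well over a megawatt of compressor power, which is the substance behind the efficiency bullet above.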
According to DCX CTO Maciek Szadkowski, the goal is to avoid obsolescence in a single hardware generation:
“As the datacenter industry transitions to AI factories, operators need cooling systems that won’t be obsolete in one platform cycle. The FDU V2AT2 replaces multiple legacy CDUs and enables 45°C supply water operation while simplifying cooling topology and significantly reducing both CAPEX and OPEX.”
The unit incorporates a high-capacity heat exchanger with a 2°C approach temperature, N+1 redundant pump configuration, integrated water quality control, and diagnostics systems designed for predictive maintenance.
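The 2°C approach figure is what makes the 45°C warm-water target workable in practice: it sets how warm the facility loop can run while the IT loop still receives water at its setpoint. A minimal sketch, with the outdoor dry-cooler approach as an assumed value:

```python
# What a 2 degC CDU approach implies for chiller-free operation.
# The dry-cooler approach below is an assumption; the 2 degC figure is DCX's stated spec.

IT_SUPPLY_C = 45.0            # warm-water supply temperature on the IT loop, degC
CDU_APPROACH_C = 2.0          # FDU V2AT2 heat-exchanger approach temperature, degC
DRY_COOLER_APPROACH_C = 7.0   # assumed approach of an outdoor dry cooler to ambient, degC

# The facility loop must deliver water at least this cool to the CDU:
facility_supply_c = IT_SUPPLY_C - CDU_APPROACH_C

# Dry coolers alone can hold that setpoint while ambient dry-bulb stays below roughly:
ambient_limit_c = facility_supply_c - DRY_COOLER_APPROACH_C

print(f"Required facility supply: <= {facility_supply_c:.0f} degC")
print(f"Chiller-free operation up to ~{ambient_limit_c:.0f} degC ambient dry-bulb")
```

In most temperate climates, ambient dry-bulb temperatures rarely climb past the mid-30s °C, which is why a tight approach at the CDU translates so directly into year-round operation without compressor-based cooling.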
In short, this is infrastructure built not for incremental density growth, but for hyperscale AI facilities where megawatts of cooling must scale as predictably as compute capacity.
Liquid Cooling Becomes System Architecture
The broader industry implication is clear: cooling is no longer an auxiliary mechanical function.
It is becoming system architecture.
DCX’s broader 2025 performance metrics underscore the speed of this transition. The company reported 600% revenue growth, expanded its workforce fourfold, and shipped or secured contracts covering more than 500 MW of liquid cooling capacity.
Its deployments now support multiple hyperscale projects across Europe and North America, including facilities in the 300 MW class.
These numbers reflect a broader reality: liquid cooling is moving from niche adoption into mainstream infrastructure strategy.
As Szadkowski put it:
“At this scale, liquid cooling is no longer just about removing heat, it’s about system architecture now.”
The AI Factory Demands Facility-Level Cooling
NVIDIA’s platform evolution, culminating in the company’s Rubin-class rack systems, reframes the rack as a coherent compute unit rather than a collection of servers.
That shift pushes infrastructure decisions upstream.
Power distribution, cooling loops, and facility topology must now support rack-scale machines operating as unified systems.
Facility-scale CDUs and warm-water cooling strategies directly align with this direction, reducing mechanical complexity while enabling faster deployment cycles.
For operators racing to bring AI capacity online, the combination of simplified plant design, scalable architecture, and reduced reliance on chillers could materially accelerate build schedules.
Cooling Moves to the Center of Infrastructure Strategy
The data center industry is still early in the AI infrastructure cycle. But one lesson is already apparent.
Announcements and capital commitments matter less than execution, and execution increasingly depends on infrastructure readiness.
Power availability remains the gating factor for many markets. Cooling infrastructure is quickly becoming the next constraint.
Vendors able to simplify and scale liquid cooling architectures are positioning themselves at the core of next-generation deployments.
DCX’s facility-scale approach suggests the future of AI data center cooling may look less like incremental upgrades to legacy designs and more like a clean-sheet rethink of how heat is managed at megawatt scale.
In other words, liquid cooling is no longer just supporting AI infrastructure.
It is becoming part of the foundation that makes it possible.





















