
With the conclusion of the 2025 OCP Global Summit, William G. Wong, Senior Content Director at DCF’s sister publications Electronic Design and Microwaves & RF, published a comprehensive roundup of standout technologies unveiled at the event.
For Data Center Frontier readers, we’ve revisited those innovations through the lens of data center impact, focusing on how they reshape infrastructure design and operational strategy.
This year’s OCP Summit marked a decisive shift toward denser GPU racks, standardized direct-to-chip liquid cooling, 800-V DC power distribution, high-speed in-rack fabrics, and “crypto-agile” platform security. Collectively, these advances aim to accelerate time-to-capacity, reduce power-distribution losses at megawatt rack scales, simplify retrofits in legacy halls, and fortify data center platforms against post-quantum threats.
Rack Design and Cooling: From Ad-Hoc to Production-Grade Liquid Cooling
NVIDIA’s Vera Rubin compute tray, newly offered to OCP for standardization, packages Rubin-generation GPUs with an integrated liquid-cooling manifold and PCB midplane. Compared with the GB300 tray, Vera Rubin is a production-ready module delivering four times the memory and three times the memory bandwidth, for a 7.5× performance factor at rack scale and 150 TB of memory at 1.7 PB/s per rack.
The system implements 45 °C liquid cooling, a 5,000-amp liquid-cooled busbar, and on-tray energy storage with power-resilience features such as flexible 100-amp whips and automatic-transfer power-supply units. NVIDIA also previewed a Kyber rack generation targeted for 2027, pivoting from 415/480 VAC to 800 V DC to support up to 576 Rubin Ultra GPUs, potentially eliminating the 200-kg copper busbars typical today. These refinements are aimed at both copper reduction and aisle-level manageability.
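To see why the voltage pivot matters, a bit of back-of-the-envelope arithmetic helps; the figures below are illustrative assumptions, not NVIDIA specifications. Delivered current falls in proportion to distribution voltage, and resistive loss with the square of current.

```python
import math

# Back-of-the-envelope comparison: current needed to deliver 1 MW to a rack
# at different distribution voltages. All values are illustrative assumptions,
# not NVIDIA specifications.
RACK_POWER_W = 1_000_000          # hypothetical megawatt-class rack
POWER_FACTOR = 0.98               # assumed for the three-phase AC case

i_54vdc  = RACK_POWER_W / 54                                   # legacy ~54 V in-rack busbar
i_415vac = RACK_POWER_W / (math.sqrt(3) * 415 * POWER_FACTOR)  # facility-level 415 VAC feed
i_800vdc = RACK_POWER_W / 800                                  # proposed 800 V DC distribution

print(f"54 V DC busbar:  ~{i_54vdc:,.0f} A")    # roughly 18,500 A
print(f"415 VAC 3-phase: ~{i_415vac:,.0f} A")   # roughly 1,400 A
print(f"800 V DC:        ~{i_800vdc:,.0f} A")   # roughly 1,250 A

# Resistive loss scales with I^2 * R, so for a fixed loss budget the required
# conductor cross-section grows roughly with the square of current; that is
# the copper-mass argument behind 800 V DC rack distribution.
```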
Wiwynn’s announcements filled in the practicalities of deploying such densities. The company showcased rack- and system-level designs across NVIDIA GB300 NVL72 (72 Blackwell Ultra GPUs with 800 Gb/s ConnectX-8 SuperNICs) for large-scale inference and reasoning, and HGX B300 (eight GPUs / 10U, 2.1 TB HBM3e) for training—alongside an AMD Instinct MI350 platform claiming a 35× inference uplift over prior generations.
To address escalating GPU TDPs and signal-integrity constraints, Wiwynn introduced a Double-Wide Rack Architecture: doubling rack width while integrating HVDC distribution, advanced liquid cooling, and optimized signal layouts. This emerging pattern is already influencing campus designs targeting NVL72-class rack loads.
Cooling innovations included a double-sided cold plate rated to 4 kW per device (using microchannels and 3D electrochemical printing), a two-phase cold plate with eco-friendly dielectric refrigerants, and a 300-kW AALC Sidecar, co-developed with Shinwa Controls, that enables liquid-cooled rack capacity within air-cooled buildings without wholesale mechanical retrofits. The approach is particularly valuable for brownfield expansions where chiller-plant upgrades are constrained.
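For a sense of what a 300-kW loop implies mechanically, the sketch below estimates required coolant flow from the heat-balance relation Q = ṁ·cp·ΔT, using assumed coolant properties and a 10 K supply-to-return rise rather than Wiwynn or Shinwa Controls figures.

```python
# Rough sizing sketch: coolant flow required to carry a 300-kW heat load at an
# assumed temperature rise. Coolant properties and delta-T are illustrative
# assumptions, not figures from Wiwynn or Shinwa Controls.
HEAT_LOAD_W   = 300_000    # 300-kW sidecar-class loop
CP_J_PER_KG_K = 3_900      # assumed specific heat (light glycol-water mix)
DENSITY_KG_M3 = 1_030      # assumed coolant density
DELTA_T_K     = 10         # assumed supply-to-return temperature rise

mass_flow_kg_s = HEAT_LOAD_W / (CP_J_PER_KG_K * DELTA_T_K)    # Q = m_dot * cp * dT
vol_flow_lpm   = mass_flow_kg_s / DENSITY_KG_M3 * 1_000 * 60  # m^3/s -> L/min

print(f"Mass flow:   ~{mass_flow_kg_s:.1f} kg/s")
print(f"Volume flow: ~{vol_flow_lpm:.0f} L/min across the loop")
```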
Networking options spanned NVIDIA Spectrum-X (Spectrum-4 MAC) and Broadcom Tomahawk 6, underscoring the rapid evolution of AI fabric design.
ASRock Rack and the Waterless Path
ASRock Rack’s pairing of NVIDIA HGX B300 with ZutaCore’s waterless HyperCool highlighted a complementary route: dielectric, water-free direct-to-chip cooling designed to lower risk and simplify retrofits. The 4U16X-GNR2/ZC integrates HGX B300 (eight Blackwell Ultra GPUs) with HyperCool’s closed-loop dielectric fluid, reducing operating risk versus water-based loops while lowering energy use and OpEx.
ASRock also demonstrated an air-cooled 8U16X-GNR2 and a 4UXGM-GNR2 CX8 (equipped with RTX Pro 6000 Blackwell GPUs, ConnectX-8 SuperNICs, and BlueField-3 DPUs), plus an AMD EPYC 4005 OCP server for cloud-hosting applications, illustrating a heterogeneous portfolio strategy tuned to tenant requirements.
Much of the summit’s message was that the future data center will incorporate GPUs from multiple vendors, but that general layouts and hardware support will remain OCP-compliant. The industry is converging on standardized trays, manifolds, and sidecar coolers that shorten integration cycles, minimize failure points, and simplify retrofits.
For operators eyeing 132-kW-plus racks and megawatt-class rows, these standardized building blocks reduce vendor lock-in and construction variance while supporting densification roadmaps across both greenfield and brownfield deployments.
Power Distribution: 800-V DC Moves from Slideware to Solution Stack
NVIDIA’s pivot to 800-V DC power at the rack is now clearly aligned with supplier roadmaps. Power Integrations detailed new 1,250-V and 1,700-V PowiGaN high-voltage devices designed for 800-V DC data center architectures, claiming higher density and efficiency compared to stacked 650-V GaN or 1,200-V SiC implementations in equivalent topologies.
Its InnoMux 2-EP IC integrates a 1,700-V PowiGaN switch capable of handling ~1,000-V DC inputs. In a liquid-cooled, fanless auxiliary-supply reference design, the platform demonstrated >90.3% efficiency. Crucially, GaN’s lower switching losses at high frequency reduce magnetics and total component count, yielding smaller, more efficient rack-level power modules.
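That magnetics argument can be made concrete with a simple buck-stage inductor calculation; all parameters below are illustrative assumptions rather than Power Integrations design values.

```python
# Illustrative buck-stage inductor sizing (assumed parameters, not a Power
# Integrations reference design): for a fixed ripple-current target, the
# required inductance falls inversely with switching frequency, which is the
# main lever GaN offers for shrinking magnetics.
V_IN, V_OUT = 48.0, 12.0        # assumed input bus and output rail (volts)
I_OUT       = 20.0              # assumed load current (amps)
RIPPLE_FRAC = 0.3               # assumed peak-to-peak ripple as a fraction of I_OUT

def buck_inductance_h(f_sw_hz: float) -> float:
    """Inductance (henries) needed to hold the ripple target at f_sw."""
    duty    = V_OUT / V_IN
    delta_i = RIPPLE_FRAC * I_OUT
    return (V_IN - V_OUT) * duty / (f_sw_hz * delta_i)

for f_khz in (100, 500, 1000):  # silicon-era vs GaN-era switching frequencies
    L = buck_inductance_h(f_khz * 1e3)
    print(f"{f_khz:>5} kHz -> {L * 1e6:.1f} uH")
# Stored energy (0.5 * L * I^2), and with it core size, shrinks along with L.
```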
These developments were presented in parallel with NVIDIA’s 800-V DC reference architecture, which DCF previously covered, as part of a broader effort to accelerate megawatt-rack adoption.
Transitioning from facility-level AC to rack-level HVDC reduces copper mass, conversion stages, and transmission losses, improving PUE and freeing valuable IT-aisle space. As OCP continues to standardize 800-V interfaces, procurement risk drops and multi-vendor power shelves become viable options for operators preparing 800-V rollouts in 2026–2028 deployments.
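The conversion-stage point is straightforward to quantify, since end-to-end efficiency is the product of per-stage efficiencies; the sketch below uses assumed per-stage figures, not measured numbers from any vendor.

```python
from math import prod

# Assumed per-stage efficiencies, for illustration only; real conversion
# chains vary widely by topology, load point, and vendor.
legacy_ac_chain = {                       # facility AC down to the IT load
    "UPS (double conversion)":        0.96,
    "PDU / transformer":              0.985,
    "Rack PSU (AC -> 54 V DC)":       0.96,
    "In-rack DC-DC (54 V -> rails)":  0.97,
}
hvdc_chain = {                            # hypothetical 800 V DC distribution
    "Facility rectifier (AC -> 800 V DC)": 0.975,
    "Rack converter (800 V -> 54 V)":      0.975,
    "Point-of-load DC-DC":                 0.97,
}

for name, chain in (("Legacy AC chain", legacy_ac_chain),
                    ("800 V DC chain", hvdc_chain)):
    print(f"{name}: {prod(chain.values()):.1%} end-to-end")
# Every percentage point saved at megawatt rack scale is roughly 10 kW of
# heat that never has to be generated, distributed, or rejected.
```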
Platform Security: Post-Quantum, but Practical
Lattice Semiconductor introduced a new secure-control FPGA family designed to run both pre- and post-quantum cryptography in hardware using a crypto-agile architecture. The devices are optimized for board-controller and host-interface roles common in AI servers and NICs.
A featured device, the MachXO5-NX TDQ, supports the full Commercial National Security Algorithm (CNSA) 2.0 suite—including ML-DSA, ML-KEM, LMS, XMSS, and AES-256—while operating below 5 watts (typically under 3 W). It implements a hardware root of trust for instant-on secure boot and can even protect its own bitstream with PQC.
The original Electronic Design report reiterates the U.S. government’s CNSA 2.0 adoption roadmap: new software standards by 2025, transition by 2030, and full adoption by 2035. The article also flags “harvest-now, decrypt-later” risks that strengthen the case for crypto-agility in today’s deployments. Lattice says its new devices are already shipping to communications and data-center customers.
As accelerators and fabrics become more composable and remotely orchestrated, the secure control plane becomes the blast-radius limiter in any breach scenario. Low-power, crypto-agile FPGAs allow operators to layer PQC protections now, without waiting for every baseboard management controller or SoC to evolve. It’s a necessary shift, and one the data center industry will need to embrace.
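In practice, crypto-agility means the verification path is parameterized by algorithm rather than hard-wired to one scheme. The sketch below illustrates that pattern generically, with hypothetical names standing in for hardware primitives; it is not Lattice’s firmware, a Lattice API, or CNSA 2.0 itself.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Conceptual sketch of a crypto-agile verification path. Names and structure
# are hypothetical illustrations only.

@dataclass
class SignedImage:
    algorithm: str        # e.g. "ecdsa-p384", "ml-dsa-87", "lms"
    payload: bytes
    signature: bytes
    public_key: bytes

# Registry mapping algorithm identifiers to verify functions. In a hardware
# root of trust these would be silicon or firmware primitives; here, stubs.
VERIFIERS: Dict[str, Callable[[bytes, bytes, bytes], bool]] = {}

def register(alg_id: str):
    def wrap(fn):
        VERIFIERS[alg_id] = fn
        return fn
    return wrap

@register("ecdsa-p384")
def _verify_classical(payload: bytes, signature: bytes, public_key: bytes) -> bool:
    raise NotImplementedError("classical verify primitive goes here")

@register("ml-dsa-87")
def _verify_post_quantum(payload: bytes, signature: bytes, public_key: bytes) -> bool:
    raise NotImplementedError("post-quantum verify primitive goes here")

def verify_boot_image(image: SignedImage) -> bool:
    """Dispatch on the declared algorithm; unknown algorithms fail closed."""
    verify = VERIFIERS.get(image.algorithm)
    if verify is None:
        return False
    return verify(image.payload, image.signature, image.public_key)
```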
Building Your AI Factories
Much of the OCP 2025 vendor focus centered on building the next generation of AI factories and defining standards for how those factories will be designed, powered, and secured.
Standardization, interoperability, and forward-compatibility beyond current-generation systems were the watchwords of the event. The clearest takeaways for data center stakeholders include:
• Densification is standardizing.
OCP-aligned trays (e.g., NVIDIA Vera Rubin) and vendor-integrated racks (Wiwynn, ASRock) are reducing bespoke plumbing and cabling, enabling faster installations and more predictable reliability at 70–150 kW per rack and beyond. For brownfield sites, sidecar liquid loops such as Wiwynn’s 300-kW AALC provide a practical bridge to liquid cooling without re-plumbing entire facilities.
• 800-V DC is the next power frontier.
NVIDIA is clearly steering the market toward 800-volt DC distribution, with pilot deployments expected in 2026 and broader adoption by 2027–2028. Progress from Power Integrations’ high-voltage GaN devices and NVIDIA’s OCP reference designs will accelerate this shift. Operators should plan now for HVDC safety protocols, protection schemes, and staff training. The copper and efficiency savings at megawatt rack scales are too substantial to ignore.
• Security must be crypto-agile.
CNSA 2.0-aligned control FPGAs from Lattice provide an immediate path to layering post-quantum cryptography into server baseboards and NICs, mitigating harvest-now, decrypt-later risks as AI fabrics become more valuable and more targeted. Platform roadmaps should align with CNSA milestones for 2025, 2030, and 2035 to stay ahead of the regulatory curve.
• Procurement and ecosystem implications.
By driving tray, manifold, and HVDC elements through OCP, NVIDIA and its partners are creating de facto multi-vendor interfaces that strengthen second-source options for operators. Value is shifting toward integration quality (racks, manifolds, sidecars) and toward power electronics, where GaN/SiC design choices materially affect total cost of ownership.
With multiple qualified sources for these key components emerging, bottlenecks in constructing next-generation AI factories can be minimized, advancing the industry toward scalable, standardized, and secure AI infrastructure.
In Summary
Unlike prior OCP gatherings that spotlighted headline-grabbing hardware launches, OCP 2025 was less about flashy one-offs and more about convergence on deployable, interoperable building blocks for the emerging era of AI factories.
The focus has shifted to liquid-ready trays and sidecars, high-voltage DC power trains, PCIe 6.0 switching with integrated security, and post-quantum-capable control planes, all designed to accelerate time-to-operation.
For hyperscalers and power-first AI campuses, this evolving bundle of OCP-standard technologies shortens the path from land-bank to live capacity, reduces electrical and mechanical losses as rack power crosses the megawatt threshold, and hardens the platform for a high-value, high-threat era of AI workloads.