
The decision also points to a future in which AI workloads run on heterogeneous computing and networking infrastructure, said Lian Jye Su, chief analyst at Omdia.
“While it makes sense for enterprises to first rely on Nvidia’s full stack solution to roll out AI, they will generally integrate alternative solutions such as AMD and self-developed chips for cost efficiency, supply chain diversity, and chip availability,” Su said. “This means data center networking vendors will need to consider interoperability and open standards as ways to address the diversification of AI chip architecture.”
Hyperscalers and enterprise CIOs are increasingly focused on how to efficiently scale up or scale out AI servers as workloads expand. Nvidia’s GPUs still underpin most large-scale AI training, but companies are looking for ways to integrate them with other accelerators.
Neil Shah, VP for research at Counterpoint Research, said Nvidia’s decision earlier this year to open its NVLink interconnect to ecosystem players gives hyperscalers more flexibility to pair Nvidia GPUs with custom accelerators from vendors such as Broadcom or Marvell.
“While this reduces the dependence on Nvidia for a complete solution, it actually increases the total addressable market for Nvidia to be the most preferred solution to be tightly paired with the hyperscaler’s custom compute,” Shah said.
Most hyperscalers have moved toward custom compute architectures to diversify beyond x86-based Intel or AMD processors, Shah added. Many are exploring Arm or RISC-V designs that can be tailored to specific workloads for greater power efficiency and lower infrastructure costs.
Shifting AI infrastructure strategies
The collaboration also highlights how networking choices are becoming as strategic as chip design itself, signaling a shift in how AI workloads are powered and connected.