
Enterprises building out an Nvidia-connected AI environment will now be able to deploy non-Nvidia accelerators into it, Kimball explained. Semi-custom silicon, in other words, can integrate “much more directly” into Nvidia-based AI systems.
With this move, Nvidia is “further acknowledging the heterogeneity that will be the AI inference environment,” noted Kimball.
Yaz Palanichamy, senior advisory analyst at Info-Tech Research Group, agreed that welcoming Marvell into the NVLink ecosystem broadens Nvidia’s support for semi-custom and heterogeneous architectures while keeping customers on its platform. “Enterprise customers [will have] more flexibility when creating their AI systems, but will still create a larger presence in the greater AI ecosystem for Nvidia,” he said.
As Kimball further pointed out, even if Nvidia is the dominant chip in an enterprise’s infrastructure, there will be use cases and deployment scenarios in which third-party chips are required. The key, then, is to control the fabric and software that tie this heterogeneous environment together, which is exactly what Nvidia is aiming for.
There is a “battle of sorts” going on, he noted. While NVLink delivers a high-performance interconnect for Nvidia environments, the competing Ultra Accelerator Link (UALink) is a consortium-based spec that promises comparable capability and is backed by the likes of Astera Labs, AMD, Intel, Meta, Broadcom, and Marvell itself.
“Openness-ubiquitousness is the real key to winning,” said Kimball. “Nvidia is working to shift from a proprietary to a ubiquitous model.”