
“One of the challenges for AI — for any brand new technology — is putting the right combination of infrastructure together to make the technology work,” says Zeus Kerravala, founder and principal analyst at ZK Research. “If one of those components isn’t on par with the other two, you’re going to be wasting your money.”
Time is taking care of the first problem: as more and more enterprises move from pilot projects to production, they are getting a better idea of how much AI capacity they actually need.
And vendors are stepping up to handle the second problem with packaged AI offerings that integrate servers, storage, and networking into a single turnkey stack, ready to deploy on-prem or in a colocation facility.
All the major vendors, including Cisco, HPE, and Dell, are getting in on the action, and Nvidia is rapidly striking deals to get its AI-capable GPUs into as many of these deployments as possible.
For example, Cisco and Nvidia just expanded their partnership to bolster AI in the data center. Under the deal, Nvidia will couple Cisco Silicon One technology with Nvidia SuperNICs as part of Nvidia's Spectrum-X Ethernet networking platform, and Cisco will build systems that combine Nvidia Spectrum silicon with Cisco OS software.
That deal is only the latest in a long string of announcements from the two companies. Cisco unveiled its AI Pods in October, for example: servers purpose-built for large-scale AI training that pair Nvidia GPUs with the networking and storage those workloads require.