
“The problem is, if you’ve got a roadblock at the other end of the wire, then Ultra Ethernet isn’t efficient at all,” Metz explained. “When you start to piece together how the data moves through buffers, both in and out of a network, you start to realize that you are piling up problems if you don’t have an end-to-end solution.”
Storage.AI targets these post-network optimization points rather than competing with networking protocols. The initiative focuses on data-handling efficiency after packets reach their destinations, ensuring that advanced networking investments translate into measurable application performance improvements.
AI data typically resides on separate storage networks rather than the high-performance fabrics connecting GPU clusters. File and Object over RDMA specifications within Storage.AI would enable storage protocols to operate directly over Ultra Ethernet and similar fabrics, eliminating network traversal inefficiencies that force AI workloads across multiple network boundaries.
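The traversal inefficiency described above can be illustrated with a toy model. The hop names and copy counts below are illustrative assumptions for the sketch, not measurements or SNIA specifications: the point is simply that each extra network boundary a storage payload crosses implies at least one more buffer copy, which a storage protocol running natively on the GPU fabric would avoid.

```python
# Toy model of data-path hops for AI storage access.
# Hop names and counts are illustrative assumptions, not SNIA specifications.

def count_copies(path):
    """Each hop in the path implies at least one buffer copy."""
    return len(path)

# Today: data leaves a separate storage network, stages in host memory,
# then crosses the GPU fabric as a second, distinct transfer.
traditional_path = [
    "storage_network",   # e.g. file/object access over a separate segment
    "host_dram_buffer",  # staged in CPU memory
    "gpu_fabric",        # re-sent over Ultra Ethernet to the accelerator
]

# With File/Object over RDMA on the same fabric, the payload can move
# in a single transfer into accelerator-reachable memory.
rdma_direct_path = [
    "gpu_fabric_rdma",   # one RDMA transfer over Ultra Ethernet
]

print(count_copies(traditional_path))  # 3
print(count_copies(rdma_direct_path))  # 1
```

The model is deliberately crude, but it captures why Metz argues the fabric's potential goes unused when the data never lives on it.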
“Right now, the data is not on Ultra Ethernet, so we’re not using Ultra Ethernet at all to its maximum potential to be able to get the data inside of a processor,” Metz noted.
Why AI workloads break traditional storage models
AI applications challenge assumptions about data access patterns that network engineers take for granted.
According to Metz, machine learning pipelines consist of distinct phases: ingestion, preprocessing, training, checkpointing, archiving and inference. Each phase demands different data structures, block sizes and access methods, yet current architectures force AI data through multiple network detours.
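The phase-by-phase variation can be sketched as a lookup table. The access patterns and block sizes below are common rules of thumb chosen for illustration, not figures from the article or from SNIA:

```python
# Illustrative I/O profiles per ML pipeline phase.
# Patterns and block sizes are hypothetical rules of thumb, not SNIA data.

ML_PHASE_IO = {
    "ingestion":     {"pattern": "sequential_write", "block_kib": 1024},
    "preprocessing": {"pattern": "random_read",      "block_kib": 64},
    "training":      {"pattern": "random_read",      "block_kib": 128},
    "checkpointing": {"pattern": "sequential_write", "block_kib": 4096},
    "archiving":     {"pattern": "sequential_write", "block_kib": 8192},
    "inference":     {"pattern": "random_read",      "block_kib": 16},
}

def tune_for(phase):
    """Return the I/O settings a storage layer might apply for a phase."""
    profile = ML_PHASE_IO[phase]
    return f"{phase}: {profile['pattern']} at {profile['block_kib']} KiB"

for phase in ML_PHASE_IO:
    print(tune_for(phase))
```

A storage stack tuned for one of these profiles, say large sequential checkpoint writes, will serve the small random reads of inference poorly, which is one reason a single traditional storage model strains under AI workloads.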