
“What we are trying to do here is a completely open, standard-based solution, so that any compute vendor, any accelerator, or even hyperscalers, can bring their own compute and seamlessly plug into the rack,” Kar said.
How Upscale is building on an open standards foundation
Upscale AI’s technical approach builds on multiple open standards initiatives rather than on proprietary protocols of its own.
The company’s stack builds on the SONiC (Software for Open Networking in the Cloud) network operating system, Ultra Ethernet Consortium (UEC) specifications, Ultra Accelerator Link (UALink) standards, and the Switch Abstraction Interface (SAI).
The UEC specifications target AI networking challenges by adding congestion management, advanced telemetry and predictable-latency features that traditional Ethernet lacks. UALink defines a standardized interface for accelerator-to-accelerator interconnects, breaking dependence on proprietary options such as Nvidia’s NVLink, while SAI provides a vendor-neutral abstraction layer over the underlying switch silicon.
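To make the hardware-abstraction point concrete, the sketch below shows, in simplified form, how a network operating system such as SONiC can bring up a switch through SAI’s vendor-neutral C API. This is an illustrative example using names from the public SAI headers, not Upscale’s code; the profile callbacks and attribute list are stubbed out, and a real platform would supply many more settings.

```c
/*
 * Minimal sketch: initializing a switch through SAI.
 * Assumes a vendor-supplied SAI library and the public SAI headers.
 */
#include <stdio.h>
#include <stdbool.h>
#include <sai.h>

/* Stub profile callbacks -- a real NOS maps these to platform config values. */
static const char *profile_get_value(sai_switch_profile_id_t id, const char *key)
{
    (void)id; (void)key;
    return NULL;
}

static int profile_get_next_value(sai_switch_profile_id_t id,
                                  const char **key, const char **value)
{
    (void)id; (void)key; (void)value;
    return -1; /* no more key/value pairs */
}

static const sai_service_method_table_t services = {
    .profile_get_value = profile_get_value,
    .profile_get_next_value = profile_get_next_value,
};

int main(void)
{
    /* Bind to the vendor's SAI implementation. */
    if (sai_api_initialize(0, &services) != SAI_STATUS_SUCCESS) {
        fprintf(stderr, "SAI init failed\n");
        return 1;
    }

    /* Look up the switch API method table. */
    sai_switch_api_t *switch_api = NULL;
    if (sai_api_query(SAI_API_SWITCH, (void **)&switch_api) != SAI_STATUS_SUCCESS) {
        fprintf(stderr, "SAI switch API not available\n");
        return 1;
    }

    /* Create (initialize) the switch object. The call looks the same no
     * matter which vendor's ASIC sits underneath -- that is the abstraction. */
    sai_attribute_t attr = {
        .id = SAI_SWITCH_ATTR_INIT_SWITCH,
        .value.booldata = true,
    };
    sai_object_id_t switch_id = SAI_NULL_OBJECT_ID;
    if (switch_api->create_switch(&switch_id, 1, &attr) != SAI_STATUS_SUCCESS) {
        fprintf(stderr, "switch create failed\n");
        return 1;
    }

    printf("switch object id: 0x%llx\n", (unsigned long long)switch_id);
    return 0;
}
```

Because the same create_switch call works regardless of which vendor’s ASIC implements the SAI library underneath, a network OS built on SAI can, in principle, swap switch silicon without rewriting its control plane, which is the kind of portability the open-standards approach is meant to deliver.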
Kar explained that those standards will provide the foundation for what Upscale is building, but he’s not stopping there. “We are upgrading the stack for both SAI and SONiC for scale up,” Kar noted.
Full-stack integration strategy
While some networking vendors focus on a specific part of the stack, the goal for Upscale AI is to cover everything. “We are fully vertically integrated,” Kar said. “We do silicon, system, software, everything.”