
Aria’s technical approach differs from incumbent vendors in its focus on end-to-end path optimization rather than individual switch performance. Karam argues that traditional networking vendors think of themselves primarily as switch companies, with software efforts concentrated on switch operating systems rather than cluster-wide operational models.
“It’s no longer just about the switch itself. It’s really about the end-to-end path,” Karam explained. “When you look at these jobs being scheduled, it’s about the paths the traffic is going to take through the network, end to end, that really matter.”
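To see why end-to-end paths matter more than any single switch, consider a toy example (this is an illustration of the general idea, not Aria’s actual algorithm): a switch making a locally optimal forwarding choice can steer traffic onto a path that is badly congested further downstream, while scoring whole paths by their bottleneck link avoids it.

```python
# Hypothetical two-path topology between a pair of GPUs. Each path is a
# list of link utilizations (0.0-1.0) along the hops from source to
# destination; path names and numbers are made up for illustration.
paths = {
    "via_spine1": [0.30, 0.90],  # first hop looks great, second is congested
    "via_spine2": [0.40, 0.45],  # slightly worse first hop, far better overall
}

# Switch-local view: pick the path whose *first* link is least utilized.
greedy = min(paths, key=lambda p: paths[p][0])

# End-to-end view: a path is only as good as its most congested link,
# so score each path by its bottleneck utilization.
end_to_end = min(paths, key=lambda p: max(paths[p]))

print(greedy)      # via_spine1 -- locally optimal, but 90% utilized downstream
print(end_to_end)  # via_spine2 -- better bottleneck across the whole path
```

The two views disagree: the per-hop decision sends traffic through the 90%-utilized link, while the path-level score picks the route whose worst link sits at 45%.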
Telemetry at microsecond resolution
The company is targeting the backend Ethernet network that connects GPUs in AI clusters. It’s building with merchant silicon from Broadcom and using the open-source SONiC network operating system.
Aria’s core differentiation centers on extracting and acting on network telemetry that already exists in modern switching silicon but remains largely untapped outside of hyperscale environments. “In order to deliver on this performance, you need the data, you need telemetry, and this telemetry today exists,” Karam explained. “If you look at these ASICs from chips like Broadcom, they have tons of telemetry at the microsecond resolution.”
The challenge, according to Karam, lies in extracting, storing, processing and acting on that telemetry data at scale, a capability Aria aims to deliver as part of its platform.
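A simple simulation shows why microsecond resolution matters (the numbers and threshold here are assumptions for illustration, not figures from Aria or Broadcom): a congestion microburst lasting a few hundred microseconds vanishes into a once-per-second average, but is plainly visible in the raw per-microsecond stream.

```python
# Simulated per-microsecond queue-depth samples from a switch ASIC
# (units of buffer cells; all values are invented for this sketch).
samples = [10] * 1_000_000          # one second of quiet baseline
for t in range(500_000, 500_200):   # a 200-microsecond microburst
    samples[t] = 5_000

# Coarse view: the one-second average barely moves off the baseline.
one_second_avg = sum(samples) / len(samples)

# Fine-grained view: count microseconds above an assumed burst threshold.
THRESHOLD = 1_000
burst_usecs = sum(1 for depth in samples if depth > THRESHOLD)

print(round(one_second_avg, 1))  # 11.0 -- the average looks healthy
print(burst_usecs)               # 200  -- the raw stream exposes the burst
```

The averaged signal reads near-idle while 200 microseconds of the second were severely congested, which is exactly the kind of event that can stall a synchronized AI training job.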
Deterministic versus probabilistic network optimization
Aria is not only building networking gear for AI networks but also using AI to help improve networking.