
According to Broadcom, a single Jericho4 system can scale to 36,000 HyperPorts, each running at 3.2 Tbps, with deep buffering, line-rate MACsec encryption, and RoCE transport over distances greater than 100 kilometers.
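Those figures also explain the emphasis on deep buffering. A back-of-the-envelope sketch in Python, using an assumed ~5 µs/km propagation delay in fiber (a standard rule of thumb, not a Broadcom figure), shows how much buffer a single 3.2 Tbps port needs just to keep a 100 km link full:

```python
# Bandwidth-delay product for a long-haul AI interconnect link.
# Assumption (not a Broadcom figure): ~5 us/km one-way propagation
# delay in fiber, i.e. light at roughly 200,000 km/s in glass.

LINK_RATE_BPS = 3.2e12        # one HyperPort at 3.2 Tbps
DISTANCE_KM = 100             # inter-data-center span from the article
FIBER_DELAY_S_PER_KM = 5e-6   # assumed propagation delay

rtt_s = 2 * DISTANCE_KM * FIBER_DELAY_S_PER_KM   # round trip: ~1 ms
bdp_bytes = LINK_RATE_BPS * rtt_s / 8            # bits in flight, as bytes

print(f"RTT over {DISTANCE_KM} km: {rtt_s * 1e3:.1f} ms")
print(f"Buffer to keep the pipe full: {bdp_bytes / 1e6:.0f} MB per port")
```

At roughly 400 MB of in-flight data per port, on-chip SRAM is impractical, which is where HBM-backed buffering comes in.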
HBM powers distributed AI
Jericho4’s use of HBM improves on previous designs: it significantly increases total memory capacity and reduces the power consumed by the memory I/O interface, enabling faster data processing than traditional buffering methods, according to Lian Jie Su, chief analyst at Omdia.
While HBM integration may raise costs for data center interconnects, Su said the higher-speed data processing and transfer can remove bottlenecks and improve AI workload distribution, increasing utilization of data centers across multiple locations.
“Jericho4 is very different from Jericho3,” Su said. “Jericho4 is designed for long-haul interconnect, while Jericho3 focuses on interconnect within the same data center. As enterprises and cloud service providers roll out more AI data centers across different locations, they need stable interconnects to distribute AI workloads in a highly flexible and reliable manner.”
Others pointed out that Jericho4, built on Taiwan Semiconductor Manufacturing Company’s (TSMC) 3-nanometer process, increases transistor density to support more ports, integrated memory, and greater power efficiency, features that may be critical for handling large AI workloads.
“It enables unprecedented scalability, making it ideal for coordinating distributed AI processing across expansive GPU farms,” said Manish Rawat, semiconductor analyst at TechInsights. “Integrated HBM facilitates real-time, localized congestion management, removing the need for complex signaling across nodes during high-traffic AI operations. Enhanced on-chip encryption ensures secure inter-data center traffic without compromising performance.”
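One way to picture the localized congestion management Rawat describes is a deep-buffered port that absorbs bursts silently and only marks traffic (ECN-style) once occupancy crosses a watermark. The sketch below is purely illustrative; the class, thresholds, and buffer size are assumptions, not Broadcom’s design:

```python
# Hypothetical sketch of threshold-based local congestion handling on a
# deep-buffered port. Illustrative only; not Broadcom's implementation.

BUFFER_CAPACITY = 400_000_000   # bytes of HBM-backed buffer (assumed)
ECN_THRESHOLD = 0.7             # mark packets once 70% full (assumed)

class DeepBufferedPort:
    def __init__(self):
        self.occupancy = 0  # bytes currently queued

    def enqueue(self, pkt_bytes: int) -> str:
        """Absorb a packet locally; signal only under real pressure."""
        if self.occupancy + pkt_bytes > BUFFER_CAPACITY:
            return "drop"                 # buffer truly exhausted
        self.occupancy += pkt_bytes
        if self.occupancy > ECN_THRESHOLD * BUFFER_CAPACITY:
            return "enqueue+ECN-mark"     # tell senders to slow down
        return "enqueue"                  # burst absorbed silently

    def dequeue(self, pkt_bytes: int) -> None:
        self.occupancy = max(0, self.occupancy - pkt_bytes)

# A 100 MB burst is absorbed entirely within the port's own buffer:
port = DeepBufferedPort()
results = [port.enqueue(1_000_000) for _ in range(100)]
print(set(results))  # {'enqueue'}: no marks, no drops, no cross-node signaling
```

Because the burst stays within the port’s own buffer, no signaling to other nodes is needed until the watermark is actually reached, which is the behavior the quote attributes to the integrated HBM.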