Higher capacity throughout the network means less congestion. It's old-think, they say, to assume that if you give users and servers faster LAN connections, you'll admit more traffic and congest trunks. "Applications determine traffic," one CIO pointed out. "The network doesn't suck data into it at the interface. Applications push it." Faster connections mean less congestion, which means fewer complaints, and more alternate paths to take without traffic delay and loss, which also reduces complaints. In fact, anything that creates packet loss, outages, or even added latency generates complaints, and addressing complaints is a big source of opex.

The complexity comes in because network speed affects user and application quality of experience in multiple ways, beyond the obvious congestion impacts. When a data packet passes through a switch or router, it's exposed to two things that can delay it. Congestion is one; the other is "serialization delay." This complex-sounding term simply means that a device can't switch a packet until it has received all of it, so every packet is held up until its last bit arrives. The length of that delay is set by the speed of the connection the packet arrives on, so fast interfaces always offer lower latency, and the total delay a given packet experiences is the sum of the serialization delay of each interface it passes through. As a rough example, a 1,500-byte packet takes about 120 microseconds to arrive in full on a 100 Mbps link, but only about 1.2 microseconds on a 10 Gbps link.

Application designs, component costs and AI reshape views on network capacity

You might wonder why enterprises are starting to look at this capacity-solves-problems point now, versus years or decades earlier. They say there's both a demand-side and a supply-side answer. On the demand side, increased componentization of applications, including the division of component hosting between data center and cloud, has radically increased the complexity of application workflows. Monolithic applications have simple workflows: input, process, output. Componentized ones have to move messages among the components, and each