In partnership with Microsoft Azure and AMD
In Seattle, a meteorologist analyzes dynamic atmospheric models to predict the next major storm system. In Stuttgart, an automotive engineer examines crash-test simulations for vehicle safety certification. And in Singapore, a financial analyst simulates portfolio stress tests to hedge against global economic shocks.
Each of these professionals—and the consumers, commuters, and investors who depend on their insights—relies on a time-tested pillar of high-performance computing: the humble CPU.

With GPU-powered AI breakthroughs getting the lion’s share of press (and investment) in 2025, it is tempting to assume that CPUs are yesterday’s news. Recent predictions anticipate that GPU and accelerator installations will increase by 17% year over year through 2030. But, in reality, CPUs are still responsible for the vast majority of today’s most cutting-edge scientific, engineering, and research workloads. Evan Burness, who leads Microsoft Azure’s HPC and AI product teams, estimates that CPUs still support 80% to 90% of HPC simulation jobs today.

In 2025, these systems are not only far from obsolete; they are undergoing a technological renaissance. A new wave of CPU innovation, including high-bandwidth memory (HBM), is delivering major performance gains without requiring costly architectural resets.
To learn more, watch the new webcast “Powering HPC with next-generation CPUs.”
This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.