
Not Falling Short—Just Not Optimized
Altizer drew a clear distinction. Traditional data centers can run AI workloads, but they weren’t built for them.
“We’re not falling short much, we’re just not optimizing.”
The gap shows up most clearly in density. Legacy facilities were designed for roughly 300 to 400 watts per square foot. AI pushes that to 2,000 to 4,000 watts per square foot—changing not just rack design, but the logic of the entire facility.
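To get a rough feel for what that jump implies, the sketch below applies both density ranges to a hypothetical data hall. The watts-per-square-foot figures are the ones Altizer cites; the hall size and rack footprint are illustrative assumptions, not values from the conversation.

```python
# Rough comparison of how design density changes a data hall's capacity.
# The W/sq ft ranges are from the article; the 10,000 sq ft hall and the
# 25 sq ft per rack footprint are illustrative assumptions only.

HALL_SQFT = 10_000          # hypothetical white-space area
RACK_FOOTPRINT_SQFT = 25    # hypothetical footprint per rack, aisles included

def hall_capacity(watts_per_sqft: float) -> tuple[float, float]:
    """Return (total hall capacity in MW, average kW available per rack)."""
    total_mw = watts_per_sqft * HALL_SQFT / 1e6
    kw_per_rack = watts_per_sqft * RACK_FOOTPRINT_SQFT / 1e3
    return total_mw, kw_per_rack

for label, density in [("legacy", 350), ("AI", 3_000)]:
    mw, kw = hall_capacity(density)
    print(f"{label}: {density} W/sq ft -> {mw:.1f} MW hall, ~{kw:.0f} kW per rack")
```

Under those assumptions, the same floor plate goes from a few megawatts at single-digit kilowatts per rack to tens of megawatts at rack densities legacy halls were never wired or cooled for.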
For Altizer, AI-ready infrastructure starts with fundamentals: access to water for heat rejection, significantly higher power density, and in some cases specific redundancy topologies favored by chip makers. It also requires liquid cooling loops extended to the rack and, critically, flexibility in the white space.
That last point is the hardest to reconcile with traditional design.
“The GPUs change… your power requirements change… your liquid cooling requirements change. The data center needs to change with it.”
Buildings are static. AI is not.
Rethinking Modular: From Containers to Systems
“Modular” has been part of the data center vocabulary for years, but Altizer argues most of the industry is still thinking about it the wrong way.
The old model centered on ISO containers. The emerging model focuses on modularizing the white space itself.
“We’re not building buildings—we’re building assemblies of equipment.”
Compu Dynamics is pushing toward factory-built IT modules that can be delivered and assembled on-site. A standard 5 MW block consists of 10 modules, stacked into a two-story configuration and designed for transport by trailer across the U.S.
From there, scale becomes repeatable. Blocks can be placed adjacent or connected to create larger deployments, moving from 5 MW to 10 MW and beyond. The point is not just scalability; it’s repeatability and speed.
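The arithmetic behind that repeatability is simple, and the sketch below just makes it explicit: a block is a fixed bill of modules, and capacity scales by counting blocks. The 5 MW block and its 10 modules come from the description above; the helper itself is only an illustration.

```python
# Sketch of the repeatable-block arithmetic described above. The 5 MW block
# built from 10 factory-built modules comes from the article; the target
# sizes below are just examples.
import math

BLOCK_MW = 5
MODULES_PER_BLOCK = 10
MW_PER_MODULE = BLOCK_MW / MODULES_PER_BLOCK   # 0.5 MW per module

def blocks_needed(target_mw: float) -> int:
    """Whole 5 MW blocks required to reach a target deployment size."""
    return math.ceil(target_mw / BLOCK_MW)

for target in (5, 10, 20, 100):
    blocks = blocks_needed(target)
    print(f"{target} MW -> {blocks} block(s), {blocks * MODULES_PER_BLOCK} modules")
```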
Altizer ties this directly to a broader shift in how data centers are defined. Referencing UL 2755, he described a future where facilities are treated as equipment assemblies rather than buildings. The emphasis shifts away from office space and toward industrial function.
“I don’t think the data center of the future is going to look like a building at all.”
Instead, he sees a field of interconnected systems including generators, transformers, cooling infrastructure, and IT modules, all optimized for output.
Liquid Cooling: The Real Execution Risk
If modularity defines the future, cooling is defining the present, and it is creating the most risk.
Altizer pointed to wide variability in how liquid cooling systems are being installed today. Differences in pipe materials, fabrication, commissioning, and cleaning practices are creating inconsistency across deployments.
“There’s been so much variability… there’s bound to be some future issues.”
The concern is not immediate failure, but latent problems that emerge over time—especially if systems were not installed with pristine cleanliness.
At the same time, the industry is still building expertise. Many engineers and contractors are only now gaining experience with liquid cooling systems, even as chip designs continue to evolve.
That evolution is pushing infrastructure in new directions. Nvidia’s latest platforms, for example, are designed for full liquid cooling using warmer fluid, which favors fluid coolers over traditional chillers. Many existing facilities, however, are built around chiller-based systems.
The result is a wave of interim solutions that sacrifice efficiency. Altizer described setups where heat moves through multiple exchanges (fluid to fluid, fluid to air, air back to fluid), with each step adding complexity and energy loss.
These are not long-term answers. They are transitional.
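A minimal way to see why those extra exchanges hurt: each heat exchanger needs an approach temperature to move heat, so every added stage pushes the coolant at the chip farther from outdoor ambient for the same conditions. The stage counts mirror the chain described above; every temperature in the sketch is an illustrative assumption.

```python
# Why chained heat exchanges cost efficiency: every exchanger needs an
# approach (a temperature difference) to move heat, so each added stage
# raises the coolant temperature available at the chip for the same
# outdoor conditions. All temperatures here are illustrative assumptions,
# not figures from the article.

AMBIENT_C = 30.0   # hypothetical outdoor design temperature

def supply_temp_c(approaches_c: list[float]) -> float:
    """Coolant supply temperature after stacking the given approach deltas."""
    return AMBIENT_C + sum(approaches_c)

direct_loop = supply_temp_c([5.0])               # fluid cooler straight to the rack loop
interim_chain = supply_temp_c([5.0, 4.0, 6.0])   # fluid-to-air, fluid-to-fluid, air-to-fluid

print(f"direct loop supply:   {direct_loop:.0f} C")
print(f"interim chain supply: {interim_chain:.0f} C")
# The gap either runs the chips hotter or must be bought back with
# mechanical cooling, which is where the energy penalty shows up.
```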
Power: Complexity Inside, Constraint Outside
From Compu Dynamics’ vantage point, the biggest power challenge is not inside the building.
“The power problem is really outside the building.”
Utility availability, interconnection timelines, and self-generation strategies are the gating factors. Inside the data hall, the challenge is configuration. And that, too, is evolving.
Altizer described a landscape of competing approaches: different UPS strategies, battery placements, generator configurations, and even early discussions about shifting from AC to DC distribution.
One potential future path simplifies the stack dramatically, moving from traditional layered systems to a direct DC bus feeding the racks. The industry isn’t there yet, but the direction reflects a broader push toward simplification under extreme density.
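The appeal of that simplification is easy to see on paper: conversion-stage efficiencies multiply, so every stage removed is power recovered. The sketch below compares a conventional layered AC chain with a hypothetical direct DC bus; the stage lists and efficiency values are illustrative assumptions, not measurements from any vendor or from Compu Dynamics.

```python
# Why removing conversion stages matters at extreme density: stage
# efficiencies multiply. The stage lists and efficiency values below are
# illustrative assumptions, not measurements from any vendor or facility.
from math import prod

traditional_ac = {
    "transformer": 0.99,
    "double-conversion UPS": 0.96,
    "PDU transformer": 0.985,
    "rack PSU (AC to DC)": 0.95,
}
direct_dc_bus = {
    "rectifier to DC bus": 0.975,
    "rack DC-DC stage": 0.98,
}

for name, chain in [("traditional AC chain", traditional_ac),
                    ("direct DC bus", direct_dc_bus)]:
    eff = prod(chain.values())
    print(f"{name}: {eff:.1%} end-to-end, "
          f"~{(1 - eff) * 1000:.0f} kW lost per MW drawn")
```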
Designing for an Uncertain Demand Curve
When asked how to design infrastructure for an uncertain AI demand curve, Altizer answered candidly.
“If I could answer that question, I think I could make a gazillion dollars.”
Historically, colocation providers built highly adaptable facilities to hedge against demand shifts. AI is pushing toward the opposite: purpose-built environments designed for specific customers and chip sets.
That model works today because revenue expectations are high, with some operators expecting to recover infrastructure costs in just a few years. But Altizer offered a note of caution, recalling the overconfidence of the dot-com era.
He stopped short of predicting a downturn, but the implication was clear: assumptions about payback periods may not hold indefinitely.
From Data Centers to Industrial Plants
By the end of the conversation, Altizer’s view of the next two to three years came into focus.
Data centers will no longer be treated as buildings. They will be treated as industrial plants.
“They’re going to look different, act different, and be maintained differently.”
If GPUs continue to displace CPUs as the dominant compute platform, infrastructure will follow. Facilities will become more specialized, more modular, and more tightly aligned with workload requirements.
Altizer is explicit about where that leads.
“I’m actually looking forward to building industrial plants, token factories.”
That may be the clearest expression of the transition underway. AI is not just increasing demand for data centers. It is redefining what a data center is.
