
The 2025 outlook for data center cooling

Data centers could account for 44% of U.S. electric load growth through 2028 and consume up to 9% of the country’s power supply by 2030, raising concerns about their impact on U.S. power availability and costs. Up to 40% of data center electricity use goes to cooling, according to the National Renewable Energy Laboratory, so greater cooling efficiency is a key strategy for reducing energy consumption. Cooling is also integral to data center design, influencing how these facilities are developed, built and renovated.
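To illustrate why that 40% figure matters, here is a hedged back-of-envelope sketch (illustrative numbers, not from the article): when cooling draws a large share of total electricity, even a modest cooling-efficiency gain cuts a meaningful slice of a facility's total load.

```python
# Illustrative sketch only: how a cooling-efficiency gain translates
# into whole-facility energy savings when cooling is a large share
# of total consumption.

def total_savings_fraction(cooling_share: float, cooling_efficiency_gain: float) -> float:
    """Fraction of total facility energy saved when the cooling
    subsystem's own consumption drops by cooling_efficiency_gain."""
    return cooling_share * cooling_efficiency_gain

# A hypothetical 25% more efficient cooling plant, at the up-to-40%
# share cited by NREL:
savings = total_savings_fraction(0.40, 0.25)
print(f"{savings:.0%} of total facility energy")  # → 10% of total facility energy
```

The multiplication is trivial, but it is the arithmetic behind the article's premise: cooling's large share is what makes its efficiency a first-order lever on data center power demand.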

The second half of 2024 saw several notable announcements related to data center cooling systems, which protect high-performance processors and servers, enabling the advanced computations that artificial intelligence requires. In December, Microsoft and Schneider Electric separately released designs for high-efficiency liquid cooling systems to support increasingly powerful AI chips. Microsoft’s water-based design operates on a closed loop, eliminating waste from evaporation, while Schneider Electric’s data center reference design uses a non-water refrigerant. Earlier in 2024, Vertiv and Compass Datacenters showcased their “first-of-a-kind” liquid-air hybrid system, which they expected to deploy early this year.

Here’s what trends and developments data center cooling experts say they’re watching for 2025 and beyond.

Two-phase liquid cooling will break into the mainstream

Most data center professionals say they’re dissatisfied with their current cooling solutions, according to AFCOM’s 2024 State of the Data Center Industry report. Thirty-five percent of respondents said they regularly make adjustments due to inadequate cooling capacity, and 20% said they were actively seeking new, scalable systems.

Many data center cooling experts predict data center developers and operators will increasingly turn to two-phase, direct-to-chip cooling technology to improve cooling performance. These systems toggle the working fluid — typically a non-water refrigerant — between liquid and vapor states in a process that “plays a pivotal role in heat removal,” according to direct-to-chip liquid cooling system designer Accelsius.

2025 will be a “year of implementation” for two-phase systems as data center professionals get more comfortable with the technology, Accelsius CEO Josh Claman said in an interview. More sophisticated data centers with higher computing needs are more likely to seek out two-phase cooling, Claman said.



Traditional air cooling reaches its physical limit at server rack densities of about 70 kilowatts, the benchmark for state-of-the-art AI training facilities today, said Sarah Renaud, vice president of consulting services at ENCOR Advisors, a commercial real estate firm that works with data center clients.

Because future racks will be even denser, “two-phase is the future,” Renaud said. “It can handle higher power densities and heat fluxes, meaning it’s better-suited for handling AI workloads.”
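A rough sensible-heat calculation (a sketch using the standard relation Q = ṁ·cp·ΔT, with assumed values not taken from the article) shows why air cooling strains at densities like 70 kW per rack: the required airflow becomes enormous.

```python
# Hypothetical sketch: airflow needed to remove a rack's heat with air,
# using the sensible-heat relation Q = mass_flow * cp * delta_T.
# Air properties are assumed typical values, not figures from the article.

RHO_AIR = 1.2    # kg/m^3, air density near sea level
CP_AIR = 1005.0  # J/(kg*K), specific heat of air

def airflow_m3_per_s(rack_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow required to carry away rack_kw of heat
    at a given inlet-to-outlet air temperature rise."""
    mass_flow = rack_kw * 1000.0 / (CP_AIR * delta_t_k)  # kg/s
    return mass_flow / RHO_AIR                            # m^3/s

flow = airflow_m3_per_s(70.0, 15.0)  # roughly 3.9 m^3/s per rack
cfm = flow * 2118.88                 # roughly 8,200 CFM per rack
```

Pushing on the order of 8,000 CFM through a single rack is at the edge of what fans, plenums and hot-aisle containment can practically deliver, which is the physics behind the shift to liquid.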

Hybrid cooling will expand, but supply chain risks loom

Two-phase immersion cooling provides a lower 10-year total cost of ownership for data center operators than DTC or single-phase immersion cooling, according to a March 2024 study by Chemours, the Syska Hennessy Group and cooling system designer LiquidStack. But its high upfront costs, the long operational life of legacy cooling systems and variable cooling needs within individual data centers mean two-phase will continue to coexist alongside other technologies for some time, experts say.

“Almost no new [data center] builds will be exclusively air-cooled nor exclusively liquid [because] not all applications require intense liquid cooling — think of archived data that is rarely accessed versus generative AI,” Renaud said. “You can cool those [less demanding] racks more cost-effectively with air.”

Microsoft’s closed-loop, water-based cooling system “appears to align with an incremental strategy” that supports its near-term needs while “allowing [its] infrastructure to readily pivot to accommodate advanced cooling technologies like direct-to-chip two-phase when the time comes,” said Nick Schweissguth, director of product and commercial enablement at LiquidStack. 



But data center operators’ hybrid cooling plans could be complicated by supply chain issues, which anticipated Trump administration tariffs could worsen, Schweissguth said. Direct-to-chip coolant distribution units, which circulate fluid to the processors, are particularly at risk, he noted.

With CDU demand set to surge in 2025, “companies vying to capture the direct-to-chip market will ultimately prevail based on their ability to produce at scale and build bulletproof relationships with suppliers,” Schweissguth said. 

Building and system design will evolve to enable 24/7 uptime

Operators expect far more out of state-of-the-art AI data centers than they did from previous generations of these facilities, said Steven Carlini, vice president of innovation and data center at Schneider Electric.

Whereas earlier facilities might have variable workloads averaging 30% or 40% of total processing capacity, AI facilities typically run at 100% capacity for weeks or months when training models, necessitating more rugged and redundant design, Carlini said. 

“It takes the variability out of the equation, but you have to be very sure you design the cooling system to support that,” he said.

Carlini described a near future in which higher rack-power densities require heavier cooling infrastructure, which creates additional physical demands in data center design. The designs his team has worked on recently, for example, involve “huge” pipes with “big steel cages over the supercluster” or two-story floor plans, with the first level flush on a concrete slab to handle the added weight.

“All that water has to go somewhere,” he said.

A technician inspecting a data center server in an immersion cooling tank.

Experts predict developers and operators will increasingly turn to two-phase direct-to-chip cooling technology like that shown here — which toggles working fluid between liquid and vapor states — to improve cooling performance in a process that “plays a pivotal role in heat removal,” according to cooling system designer Accelsius.

Halbergman via Getty Images

“Slow but steady” retrofit activity ahead

Retrofitting an operating data center to accommodate more powerful processors is a big technical and logistical challenge that leads some to conclude that it’s easier to build new, Accelsius’ Claman said. 

But new buildings are significantly more resource-intensive, complicating corporate sustainability goals, he noted. And existing data centers often have more robust power supplies. “That’s why they are where they are, and it’s not easy for them to move,” he said.

The majority of an operating data center’s asset value lies within its power supply and infrastructure, such as electrical, plumbing and other technical systems, according to JLL’s 2025 Global Data Center Outlook. These assets are particularly valuable given the challenges of securing power for new developments. Thus, retrofits such as transitioning existing data centers to liquid cooling will “be a viable solution and an opportunity to increase asset value,” JLL’s outlook says. 

Meta is transitioning its existing data centers to liquid cooling because it says it “has to,” Renaud noted, while colocation giant Equinix said in December 2023 that it would expand liquid cooling to 100 of its data center facilities.

Claman predicted a “slow but steady” pace of retrofits and “a more balanced conversation” around their benefits. Schneider Electric is betting on this trend as well, recently partnering with Nvidia on the release of three retrofit reference designs for data center operators looking to boost performance without redesigning their facilities from scratch. 

The rapid increase in computing power means data centers on the bleeding edge today may rapidly fall behind, further complicating the already formidable challenge of designing facilities with both air and liquid cooling infrastructure, Carlini said. 

“Ten years ago, you’d try to [design data centers with] more capacity than you need and grow into it, but now you don’t know what [power] density you need to build at,” he said.

Facilities in Northern climates might get an edge

Air provides 20% to 30% of the cooling load, even in newer data centers, according to Carlini. That’s driving efficiency-minded developers to site more facilities in “the attic,” the informal industry term for cooler Northern regions, Renaud and Claman say.

“The market talks a lot about a ‘free cooling zone’” in the Northern United States, Northern Europe and Canada, Claman said. 

In cooler weather, energy use for air cooling systems could drop by as much as 95%, according to Renaud. “We are seeing a trend of hybrid colocation strategies in which data that does not require frequent access can be stored in more remote and colder locations,” leaving higher-access-frequency facilities to operate in warmer, more established data center hubs like northern Virginia, she said.
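Taking Renaud's up-to-95% figure at face value, a simple weighted-average sketch (hypothetical monthly numbers, assumed for illustration) shows how much siting in a cold climate could shave off annual air-cooling energy, since the reduction applies only during the months cool enough for free cooling.

```python
# Hedged sketch: annual air-cooling energy with seasonal free cooling.
# Assumes the up-to-95% reduction applies only during cool months;
# the monthly baseline and month counts are hypothetical.

def annual_cooling_energy(base_kwh_per_month: float,
                          cool_months: int,
                          free_cooling_reduction: float = 0.95) -> float:
    """Annual air-cooling energy when free cooling cuts consumption
    by free_cooling_reduction during the cool months."""
    warm = (12 - cool_months) * base_kwh_per_month
    cool = cool_months * base_kwh_per_month * (1 - free_cooling_reduction)
    return warm + cool

# A northern site with 8 cool months vs. a warm site with none:
north = annual_cooling_energy(100_000, cool_months=8)  # 440,000 kWh
south = annual_cooling_energy(100_000, cool_months=0)  # 1,200,000 kWh
```

Under these assumptions the northern site uses under 40% of the warm site's air-cooling energy, which is the kind of gap driving interest in "the attic."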

Cold-climate sites also are less likely to need water-hungry evaporative cooling systems, which are common in warmer, drier climates and have raised concerns around data centers’ environmental impacts, Claman said. He predicted a move toward closed-loop cooling systems that can take advantage of seasonal free cooling.

“There is a lot of scrutiny around emptying aquifers to cool data centers,” he said.

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

Nvidia aims to bring AI to wireless

Key features of ARC-Compact include: Energy Efficiency: Utilizing the L4 GPU (72-watt power footprint) and an energy-efficient ARM CPU, ARC-Compact aims for a total system power comparable to custom baseband unit (BBU) solutions currently in use. 5G vRAN support: It fully supports 5G TDD, FDD, massive MIMO, and all O-RAN

Read More »

Netgear’s enterprise ambitions grow with SASE acquisition

Addressing the SME security gap The acquisition directly addresses a portfolio gap that Netgear (Nasdaq:NTGR) has identified through customer feedback.  According to Badjate, customers have been saying that they like the Netgear products, but they also really need more security capabilities. Netgear’s target market focuses on organizations with fewer than

Read More »

IBM’s cloud crisis deepens: 54 services disrupted in latest outage

Rawat said IBM’s incident response appears slow and ineffective, hinting at procedural or resource limitations. The situation also raises concerns about IBM Cloud’s adherence to zero trust principles, its automation in threat response, and the overall enforcement of security controls. “The recent IBM Cloud outages are part of a broader

Read More »

CNOOC Announces Seventh Upstream Startup in Chinese Waters This Year

CNOOC Ltd. has begun production at the Weizhou 5-3 oilfield in the South China Sea, its seventh announced startup offshore China in 2025. Weizhou 5-3 is expected to reach a peak output of about 10,000 barrels a day next year, the state-backed oil and gas explorer and producer said in an online statement Monday. The field produces medium crude. Weizhou 5-3 is in the South China Sea’s Beibu Gulf, or Gulf of Tonkin, in waters around 35 meters (114.83 feet) deep. The development includes a wellhead platform, as well as uses existing facilities. CNOOC Ltd., majority-owned by China National Offshore Oil Corp., plans to commission seven production wells and two water injection wells. CNOOC Ltd. owns 51 percent of the project. Smart Oil Investment Ltd. holds 49 percent. Previously in 2025 CNOOC Ltd. announced three startups in the Bohai Sea and three in the South China Sea. The Bohai Sea projects are the Caofeidian 6-4 oilfield adjustment, phase 2 of the Luda 5-2 North field and the Bozhong 26-6 field. The South China Sea projects are Wenchang 19-1 oilfield phase 2, the Dongfang 29-1 field and the Panyu 11-12/10-1/10-2 Oilfield Adjustment Joint Development Project. The Caofeidian 6-4 adjustment project is expected to achieve 11,000 barrels of oil equivalent a day (boed) in peak production 2026. The oil is light crude. Luda 5-2 North phase 2 could reach about 6,700 boed in peak production next year. Phase 1 went online 2022 as the first Chinese oilfield to produce from superheavy oil reservoirs through thermal recovery, according to CNOOC Ltd. It said of Luda 5-2 North phase 2, “CNOOC Limited made major technological breakthroughs in this project and significantly enhanced the development efficiency of offshore super heavy oil”. “Through optimized Jet Pump Injection-Production Technology, the project realized efficient and economic development of heavy

Read More »

SAF Firm Completes Combination; Up for Nasdaq Listing

Sustainable aviation fuel (SAF) firm XCF Global Capital, Inc. said it has completed its business combination with special purpose acquisition company Focus Impact BH3 Acquisition in line with its plan for a public listing. The combined company will operate under the name XCF Global, Inc. and its class A common stock is expected to begin trading on the Nasdaq Capital Market under the ticker symbol “SAFX” on June 9, the company said in a news release. XCF Global’s New Rise Reno facility, located in the Reno-Tahoe Industrial Complex in Storey County, Nevada, began commercial production in February of so-called “neat” SAF, which is totally free of all fossil fuels and not blended with conventional jet fuel, with a nameplate production capacity of 38 million gallons of neat SAF per year, according to the release. The first customer deliveries of neat SAF were completed in March, the company said. The company stated it is advancing a pipeline of production sites in Nevada, North Carolina, and Florida to expand SAF capacity and support long-term growth. “The completion of this transaction marks a transformational step for XCF Global and the decarbonization of the aviation industry,” XCF Global CEO Mihir Dange said. “With commercial production underway, first deliveries completed, and a proven business model in place, we are entering the public markets with momentum and a clear path to growth. XCF Global is positioned as a market leader at the intersection of aviation and decarbonization – standing at the forefront of a high-growth opportunity in synthetic aviation fuel. We offer the public capital markets access to one of the fastest-growing sectors in the global energy transition, and we are proud to be leading the shift toward a lower-carbon future for aviation”. “We are thrilled to have completed the business combination with XCF Global and

Read More »

ADNOC Expands STEM Education Program ‘to Empower UAE Students in AI’

ADNOC announced, in a release posted on its site recently, that it has expanded its Science, Technology, Engineering and Mathematics (STEM) education program “to empower UAE students in artificial intelligence (AI) and advanced technology through an initiative called ‘STEM for Life: Future of AI Schools Challenge’”. The release highlighted that the Challenge was launched in January 2025 and recently held its finals at the Abu Dhabi Energy Center. The Challenge received 14,500 applicants from 351 schools across the country, according to the release, which pointed out that 896 teachers helped students to “design, build, and pitch AI solutions that addressed one of three themes: creating real-world impact, demonstrating blue sky thinking, or winning the hearts and minds of local communities”. A total of 1,500 submissions were received, with 80 students in 27 teams selected to attend the final, the release noted. Winning teams pitched their projects to a jury which included members from the Ministry of Industry and Advanced Technology, the Ministry of Education, Abu Dhabi Early Childhood Authority, ADNOC, Khalifa University, ADNOC Technology Academy, Dubai Institute of Design and Innovation, Microsoft, and Neubio, the release stated. Following an assessment by the jury, nine teams each were awarded the gold, silver, and bronze positions respectively, the release said, adding that submissions “featured impressive AI-powered solutions”. The final was attended by Sultan Ahmed Al Jaber, Minister of Industry and Advanced Technology and ADNOC Managing Director and Group CEO, Sarah bint Yousif Al Amiri, Minister of Education, Abdulla Humaid Al Jarwan, Chairman of the Abu Dhabi Department of Energy, Hajer Ahmed Mohamed Al Thehli, Secretary-General of the Education, Human Development and Community Council, Khalaf Abdulla Rahma Al Hammadi, Director-General of the Abu Dhabi Pension Fund, and senior ADNOC executives, the release pointed out. 
The release also noted that, during the final, ADNOC

Read More »

ScottishPower Allots About $300MM for UK Power Grid Modernization

ScottishPower Energy Networks (SPEN), Iberdrola’s distribution company in the United Kingdom, will invest more than EUR 262 million ($298.8 million) in the modernization of the United Kingdom’s electricity grid. SPEN said in a media release that six partners will continue working on the maintenance and upgrade of more than 20,000 kilometers (12,400 miles) of overhead lines across the network over the next four years. SPEN partners include Scottland-based Aureos, Gaeltec, and PLPC, which will support the six license districts in central and southern Scotland (Ayrshire and Clyde South, Central and Fife, Dumfries and Galloway, Edinburgh and Borders, Glasgow and Clyde North, Lanarkshire). The company said it is also partnering with Emerald Power, IES, and Network Plus, which will support the license districts in Mid-Cheshire, Merseyside, Dee Valley and Mid Wales, Wirral and North Wales. “Ensuring we have the partners, resources, and technical skills in place to deliver on our bold and ambitious plans for our network is vital for the modern and resilient grid needed to support the doubling of demand”, Nicola Connelly, SPEN CEO, said. “These contracts not only support significant investment in our overhead line network, they allow us to build on the solid foundations created with our supply chain partners and give certainty and confidence to further invest in their skills and people.  It’s a win-win on both sides and we look forward to working together to make a long and lasting difference for all our communities – from Anstruther to Anglesey”. The contracts will support over 500 jobs – including more than 50 new linesmen roles – nationwide, with companies based in and around ScottishPower’s Scotland and Manweb license areas. “This is an extremely significant milestone for Emerald Power and provides the opportunity to further invest in our business – recruiting, training, and upskilling the resources needed to deliver

Read More »

Fennex to Deploy AI-Powered Safety System across EnQuest’s UK Operations

Fennex Ltd. has bagged a multi-year deal from EnQuest plc to deploy the flagship AI-powered Behaviour-Based Safety System (BBSS) across EnQuest’s UK operations.   EnQuest oversees a varied portfolio of offshore assets in the North Sea, which includes Thistle, Heather, Magnus, and the Kraken FPSO, along with the Sullom Voe Terminal located onshore Shetland, recognized as one of the largest oil terminals in Europe, Fennex noted in a media release. Fennex added that the BBSS is already live across all of EnQuest’s UK offshore assets and the Sullom Voe Terminal. This rapid deployment was achieved in just eight weeks. “BBSS is now deployed across all EnQuest’s UK-operated offshore assets, and for the first time at a major onshore terminal”, Adrian Brown, Managing Director at Fennex, said. “EnQuest was eager to roll out the platform quickly, and thanks to strong collaboration, we were able to go live both offshore and onshore in record time”. “We identified BBSS as an opportunity to make a step change in operational safety through making it easier and more user-friendly for personnel to participate and allowing us to make more effective use of the resulting leading data. It provides full visibility of engagement in our safety reporting and real-time data, giving us immediate insight into reported issues and the ability to act swiftly for the best outcomes in our operations”, EnQuest’s Director of HSE and Wells, Ian McKimmie, added. As the collaboration progresses, Fennex and EnQuest are working together to reveal even more value – leveraging advanced analytics, behavioral insights, and AI-driven predictive safety tools to foster a culture of proactive, intelligence-led safety, Fennex said. To contact the author, email [email protected] WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect the views and opinions of Rigzone. All comments are subject to

Read More »

Canada’s Oil Sands Emissions Intensity Falls for Sixth Year

Canada’s oil sands industry reduced its emissions per barrel for the sixth straight year in 2023, even as one growing portion of the sector moved in the opposite direction, according to new Alberta government data released Thursday. The emissions intensity of all oil sands sites fell to the equivalent of 0.399 metric tons of carbon dioxide per cubic meter of bitumen produced, down from 0.404 in 2022, the data show. The gain reflects improvements at oil sands mines, where bitumen is dug from the ground. However, in situ oil sands, which use wells similar to traditional oil producers, saw emissions per barrel rise.  Even with the efficiency improvement, total emissions rose to the equivalent 80.1 million metric tons of carbon dioxide, up from 78.8 million in 2022, the data show. That’s the highest in data back to 2011. The oil sands’ declining energy intensity — to the lowest in data stretching back to 2011 — is welcome news for an industry that has struggled with a reputation for being climate unfriendly, prompting some investors to shun it altogether. However, the rising emissions intensity at well sites presents a challenge for the sector, as the method’s lower costs make it increasingly popular among producers. While the average intensity of oil sands producers is higher than the average for the global oil industry overall, drillers ‘emissions profiles vary widely around the world, said Kevin Birn, chief analyst for Canadian oil markets for S&P Global. The oil sands “fits well within the range of carbon intensity of oil and gas we see in the world,” Birn said in an interview. All of the oil sands mines reduced their emissions intensity with Canadian Natural Resources Ltd.’s Horizon making the biggest gain for the year.  In situ production facilities, which include the more than 250,000 barrel-a-day Suncor

Read More »

LiquidStack launches cooling system for high density, high-powered data centers

The CDU is serviceable from the front of the unit, with no rear or end access required, allowing the system to be placed against the wall. The skid-mounted system can come with rail and overhead piping pre-installed or shipped as separate cabinets for on-site assembly. The single-phase system has high-efficiency dual pumps designed to protect critical components from leaks and a centralized design with separate pump and control modules reduce both the number of components and complexity. “AI will keep pushing thermal output to new extremes, and data centers need cooling systems that can be easily deployed, managed, and scaled to match heat rejection demands as they rise,” said Joe Capes, CEO of LiquidStack in a statement. “With up to 10MW of cooling capacity at N, N+1, or N+2, the GigaModular is a platform like no other—we designed it to be the only CDU our customers will ever need. It future-proofs design selections for direct-to-chip liquid cooling without traditional limits or boundaries.”

Read More »

Enterprises face data center power design challenges

” Now, with AI, GPUs need data to do a lot of compute and send that back to another GPU. That connection needs to be close together, and that is what’s pushing the density, the chips are more powerful and so on, but the necessity of everything being close together is what’s driving this big revolution,” he said. That revolution in new architecture is new data center designs. Cordovil said that instead of putting the power shelves within the rack, system administrators are putting a sidecar next to those racks and loading the sidecar with the power system, which serves two to four racks. This allows for more compute per rack and lower latency since the data doesn’t have to travel as far. The problem is that 1 mW racks are uncharted territory and no one knows how to manage the power, which is considerable now. ”There’s no user manual that says, hey, just follow this and everything’s going to be all right. You really need to push the boundaries of understanding how to work. You need to start designing something somehow, so that is a challenge to data center designers,” he said. And this brings up another issue: many corporate data centers have power plugs that are like the ones that you have at home, more or less, so they didn’t need to have an advanced electrician certification. “We’re not playing with that power anymore. You need to be very aware of how to connect something. Some of the technicians are going to need to be certified electricians, which is a skills gap in the market that we see in most markets out there,” said Cordovil. A CompTIA A+ certification will teach you the basics of power, but not the advanced skills needed for these increasingly dense racks. Cordovil

Read More »

HPE Nonstop servers target data center, high-throughput applications

HPE has bumped up the size and speed of its fault-tolerant Nonstop Compute servers. There are two new servers – the 8TB, Intel Xeon-based Nonstop Compute NS9 X5 and Nonstop Compute NS5 X5 – aimed at enterprise customers looking to upgrade their transaction processing network infrastructure or support larger application workloads. Like other HPE Nonstop systems, the two new boxes include compute, software, storage, networking and database resources as well as full-system clustering and HPE’s specialized Nonstop operating system. The flagship NS9 X5 features support for dual-fabric HDR200 InfiniBand interconnect, which effectively doubles the interconnect bandwidth between it and other servers compared to the current NS8 X4, according to an HPE blog detailing the new servers. It supports up to 270 networking ports per NS9 X system, can be clustered with up to 16 other NS9 X5s, and can support 25 GbE network connectivity for modern data center integration and high-throughput applications, according to HPE.

Read More »

AI boom exposes infrastructure gaps: APAC’s data center demand to outstrip supply by 42%

“Investor confidence in data centres is expected to strengthen over the remainder of the decade,” the report said. “Strong demand and solid underlying fundamentals fuelled by AI and cloud services growth will provide a robust foundation for investors to build scale.” Enterprise strategies must evolve With supply constrained and prices rising, CBRE recommended that enterprises rethink data center procurement models. Waiting for optimal sites or price points is no longer viable in many markets. Instead, enterprises should pursue early partnerships with operators that have robust development pipelines and focus on securing power-ready land. Build-to-suit models are becoming more relevant, especially for larger capacity requirements. Smaller enterprise facilities — those under 5MW — may face sustainability challenges in the long term. The report suggested that these could become “less relevant” as companies increasingly turn to specialized colocation and hyperscale providers. Still, traditional workloads will continue to represent up to 50% of total demand through 2030, preserving value in existing facilities for non-AI use cases, the report added. The region’s projected 15 to 25 GW gap is more than a temporary shortage — it signals a structural shift, CBRE said. Enterprises that act early to secure infrastructure, invest in emerging markets, and align with power availability will be best positioned to meet digital transformation goals. “Those that wait may find themselves locked out of the digital infrastructure they need to compete,” the report added.

Read More »

Cisco bolsters DNS security package

The software can block domains associated with phishing, malware, botnets, and other high-risk categories such as cryptomining or new domains that haven’t been reported previously. It can also create custom block and allow lists and offers the ability to pinpoint compromised systems using real-time security activity reports, Brunetto wrote. According to Cisco, many organizations leave DNS resolution to their ISP. “But the growth of direct enterprise internet connections and remote work make DNS optimization for threat defense, privacy, compliance, and performance ever more important,” Cisco stated. “Along with core security hygiene, like a patching program, strong DNS-layer security is the leading cost-effective way to improve security posture. It blocks threats before they even reach your firewall, dramatically reducing the alert pressure your security team manages.” “Unlike other Secure Service Edge (SSE) solutions that have added basic DNS security in a ‘checkbox’ attempt to meet market demand, Cisco Secure Access – DNS Defense embeds strong security into its global network of 50+ DNS data centers,” Brunetto wrote. “Among all SSE solutions, only Cisco’s features a recursive DNS architecture that ensures low-latency, fast DNS resolution, and seamless failover.”

Read More »

HPE Aruba unveils raft of new switches for data center, campus modernization

And in large-scale enterprise environments embracing collapsed-core designs, the switch acts as a high-performance aggregation layer. It consolidates services, simplifies network architecture, and enforces security policies natively, reducing complexity and operational cost, Gray said. In addition, the switch offers the agility and security required at colocation facilities and edge sites. Its integrated Layer 4 stateful security and automation-ready platform enable rapid deployment while maintaining robust control and visibility over distributed infrastructure, Gray said. The CX 10040 significantly expands the capacity it can provide and the roles it can serve for enterprise customers, according to one industry analyst. “From the enterprise side, this expands on the feature set and capabilities of the original 10000, giving customers the ability to run additional services directly in the network,” said Alan Weckel, co-founder and analyst with The 650 Group. “It helps drive a lower TCO and provide a more secure network.”  Aimed as a VMware alternative Gray noted that HPE Aruba is combining its recently announced Morpheus VM Essentials plug-in package, which offers a hypervisor-based package aimed at hybrid cloud virtualization environments, with the CX 10040 to deliver a meaningful alternative to Broadcom’s VMware package. “If customers want to get out of the business of having to buy VM cloud or Cloud Foundation stuff and all of that, they can replace the distributed firewall, microsegmentation and lots of the capabilities found in the old VMware NSX [networking software] and the CX 10k, and Morpheus can easily replace that functionality [such as VM orchestration, automation and policy management],” Gray said. The 650 Group’s Weckel weighed in on the idea of the CX 10040 as a VMware alternative:

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).

In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads.

Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping.

The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved.

“Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail.

Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
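The LLM-as-judge pattern the excerpt mentions boils down to asking several cheaper models the same grading question and taking the majority vote. A minimal sketch of that voting logic, where `call_model` is a hypothetical stub standing in for a real LLM API call (the model names and canned answers are invented for illustration):

```python
from collections import Counter

def call_model(model: str, prompt: str) -> str:
    # Hypothetical stub: a real implementation would call an LLM API here.
    canned = {"judge-a": "B", "judge-b": "B", "judge-c": "A"}
    return canned[model]

def judge_by_vote(prompt: str, judges: list[str]) -> str:
    """Ask several judge models the same question and return the majority verdict."""
    votes = Counter(call_model(j, prompt) for j in judges)
    return votes.most_common(1)[0][0]

verdict = judge_by_vote("Which answer is better, A or B?", ["judge-a", "judge-b", "judge-c"])
print(verdict)  # B (two of the three judges voted B)
```

Using an odd number of judges avoids ties, and as model prices fall, adding a third or fifth cheap judge costs little relative to the reliability gained.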

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models through these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization.

OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »