OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models through these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that in-house testing techniques may miss and that might otherwise make it into a released model.

In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI and the U.S. National Institute of Standards and Technology (NIST), all of which had already released red teaming frameworks.

Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.

What makes OpenAI’s recent papers noteworthy is how well they define a human-in-the-middle design that combines human expertise and contextual intelligence on one side with AI-based techniques on the other.

“When automated red teaming is complemented by targeted human insight, the resulting defense strategy becomes significantly more resilient,” writes OpenAI in the first paper (Ahmad et al., 2024).

The company’s premise is that using external testers to identify the most high-impact real-world scenarios, while also evaluating AI outputs, leads to continuous model improvements. OpenAI contends that combining these methods delivers a multi-layered defense for its models that identifies potential vulnerabilities quickly. Capturing the human contextual intelligence that a human-in-the-middle design makes possible, and using it to improve models, is proving essential for red teaming AI models.

Why red teaming is the strategic backbone of AI security

Red teaming has emerged as the preferred method for iteratively testing AI models. This kind of testing simulates a variety of unpredictable, high-impact attacks and aims to identify models’ strongest defenses and weakest points. Generative AI (gen AI) models are difficult to test through automated means alone, as they mimic human-generated content at scale. The practices described in OpenAI’s two papers seek to close the gaps that automated testing alone leaves by measuring and verifying a model’s claims of safety and security.

In the first paper (“OpenAI’s Approach to External Red Teaming”), OpenAI explains that red teaming is “a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and collaboration with developers” (Ahmad et al., 2024). Committed to leading the industry in red teaming, the company assigned more than 100 external red teamers to work across a broad base of adversarial scenarios during the pre-launch vetting of GPT-4.

Research firm Gartner reinforces the value of red teaming in its forecast, predicting that IT spending on gen AI will soar from $5 billion in 2024 to $39 billion by 2028. Gartner notes that the rapid adoption of gen AI and the proliferation of LLMs is significantly expanding these models’ attack surfaces, making red teaming essential in any release cycle.

Practical insights for security leaders

Even though security leaders have been quick to see the value of red teaming, few are following through by making a commitment to get it done. A recent Gartner survey finds that while 73% of organizations recognize the importance of dedicated red teams, only 28% actually maintain them. To close this gap, a simplified framework is needed that can be applied at scale to any new model, app, or platform’s red teaming needs.

In its paper on external red teaming, OpenAI defines four key steps for using a human-in-the-middle design to make the most of human insights (a minimal workflow sketch follows the list):

  • Defining testing scope and teams: Drawing on subject matter experts and specialists across key areas of cybersecurity, regional politics, and natural sciences, OpenAI targets risks that include voice mimicry and bias. The ability to recruit cross-functional experts is, therefore, crucial. (To gain an appreciation for how committed OpenAI is to this methodology and its implications for stopping deepfakes, please see our article “GPT-4: OpenAI’s shield against $40B deepfake threat to enterprises.”)
  • Selecting model versions for testing, then iterating them across diverse teams: Both of OpenAI’s papers emphasize that cycling red teams and models through an iterative approach delivers the most insightful results. Rotating each red team through all models also accelerates team learning about what is and isn’t working.
  • Clear documentation and guidance: Consistency in testing requires well-documented APIs, standardized report formats, and explicit feedback loops. These are essential elements for successful red teaming.
  • Making sure insights translate into practical and long-lasting mitigations: Once red teams log vulnerabilities, they drive targeted updates to models, policies and operational plans — ensuring security strategies evolve in lockstep with emerging threats.
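To make the four steps more concrete, here is a minimal sketch in Python of how a security team might track an external red teaming engagement end to end. The class and field names are hypothetical illustrations, not a structure drawn from OpenAI’s papers.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """A single red team observation, logged in a standard format."""
    area: str            # e.g. "voice mimicry", "regional bias"
    model_version: str   # which checkpoint was tested
    severity: str        # "low" | "medium" | "high"
    mitigated: bool = False

@dataclass
class Engagement:
    """Tracks one external red teaming cycle: scope -> testing -> mitigation."""
    scope: list[str]                      # step 1: risk areas and expert domains
    model_versions: list[str]             # step 2: checkpoints rotated across teams
    findings: list[Finding] = field(default_factory=list)

    def log_finding(self, finding: Finding) -> None:
        # Step 3: consistent documentation feeds the feedback loop.
        self.findings.append(finding)

    def open_mitigations(self) -> list[Finding]:
        # Step 4: unresolved findings drive model, policy and process updates.
        return [f for f in self.findings if not f.mitigated]

# Illustrative usage
engagement = Engagement(
    scope=["cybersecurity", "regional politics", "natural sciences"],
    model_versions=["model-v1-early", "model-v1-rc"],
)
engagement.log_finding(Finding("voice mimicry", "model-v1-early", "high"))
print(len(engagement.open_mitigations()))  # -> 1
```

The point of keeping such a record, however it is implemented, is that every finding stays attached to a model version and a mitigation status, which is what lets steps three and four close the loop.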

Scaling adversarial testing with GPT-4T: The next frontier in red teaming

AI companies’ red teaming methodologies are demonstrating that while human expertise is resource-intensive, it remains crucial for in-depth testing of AI models.

In OpenAI’s second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning” (Beutel et al., 2024), OpenAI addresses the challenge of scaling adversarial testing using an automated, multi-pronged approach that combines human insights with AI-generated attack strategies.

The core of this methodology is GPT-4T, a specialized variant of the GPT-4 model engineered to produce a wide range of adversarial scenarios.

Here’s how each component of the methodology contributes to a stronger adversarial testing framework (an illustrative sketch of the reward loop follows the list):

  • Goal diversification. OpenAI describes how it uses GPT-4T to create a broad spectrum of scenarios, starting with benign-seeming prompts and progressing to more sophisticated phishing campaigns. Goal diversification focuses on anticipating and exploring the widest possible range of potential exploits. By using GPT-4T’s capacity for diverse language generation, OpenAI contends, red teams avoid tunnel vision and stay focused on probing for vulnerabilities that manual-only methods miss.
  • Reinforcement learning (RL). A multi-step RL framework rewards the discovery of new and previously unseen vulnerabilities, training the automated red team to improve with each iteration. This enables security leaders to refocus on genuine risks rather than sifting through volumes of low-impact alerts, and it aligns with Gartner’s projection of a 30% drop in false positives attributable to gen AI in application security testing by 2027. OpenAI writes, “Our multi-step RL approach systematically rewards the discovery of newly identified vulnerabilities, driving continuous improvement in adversarial testing.”
  • Auto-generated rewards: OpenAI defines this as a system that tracks and updates scores for partial successes by red teams, assigning incremental rewards for identifying each unprotected weak area of a model.
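The papers describe this approach at a high level rather than releasing code, but the reward logic can be sketched. The Python fragment below is an illustrative approximation, not OpenAI’s implementation: a judge function scores whether a generated attack succeeded, and a novelty bonus favors attacks that differ from those already found, which is what pushes the attacker policy toward diverse vulnerabilities rather than repeats of the same exploit.

```python
import random

def attack_succeeded(prompt: str) -> bool:
    """Placeholder judge of whether the target model produced a disallowed
    output for this prompt; a real system would evaluate model outputs."""
    return random.random() < 0.1  # stand-in probability for illustration

def novelty(prompt: str, found: list[str]) -> float:
    """Crude diversity signal: reward prompts that share few words with
    previously successful attacks. Real systems would use embedding distance."""
    if not found:
        return 1.0
    words = set(prompt.split())
    overlaps = [len(words & set(f.split())) / max(len(words), 1) for f in found]
    return 1.0 - max(overlaps)

def reward(prompt: str, found: list[str]) -> float:
    """Auto-generated reward: base score for a successful attack, scaled by a
    novelty bonus so newly identified weaknesses earn more than repeats."""
    base = 1.0 if attack_succeeded(prompt) else 0.0
    return base * (0.5 + 0.5 * novelty(prompt, found))

# Multi-step loop: in a full RL setup each reward would update the attacker
# policy (e.g. via policy-gradient methods); only the scoring step is shown.
successful_attacks: list[str] = []
for step in range(1000):
    candidate = f"candidate prompt {step}"  # stand-in for a generated attack
    r = reward(candidate, successful_attacks)
    if r > 0:
        successful_attacks.append(candidate)
```

The design choice that matters here is the partial-credit structure: even an attack that only partially succeeds, or succeeds in a new area, still moves the score, so the automated red team keeps exploring instead of converging on one known weakness.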

Securing the future of AI: Key takeaways for security leaders

OpenAI’s recent papers show why a structured, iterative process that combines internal and external testing delivers the insights needed to keep improving models’ accuracy, safety, security and quality.

Security leaders’ key takeaways from these papers should include: 

Go all-in and adopt a multi-pronged approach to red teaming. The papers emphasize the value of combining external, human-led teams with real-time, randomly generated AI attack simulations, which reflect how chaotic real-world intrusion attempts can be. OpenAI contends that while humans excel at spotting context-specific gaps, including biases, automated systems identify weaknesses that emerge only under stress testing and repeated sophisticated attacks.

Test early and continuously throughout model dev cycles. The white papers make a compelling argument against waiting for production-ready models, advocating instead for testing early-stage versions. The goal is to find emerging risks early, then retest to make sure gaps are closed before launch.

Whenever possible, streamline documentation and reporting with real-time feedback loops. Standardized reporting and well-documented APIs, along with explicit feedback loops, help convert red team findings into actionable, trackable mitigations. OpenAI emphasizes the need to get this process in place before beginning red teaming, to accelerate fixes and remediation of problem areas.
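As one way to picture what standardized reporting can look like in practice, the snippet below serializes a hypothetical finding record to JSON. The field names are assumptions for illustration only, not a format defined in OpenAI’s papers.

```python
import json

# Hypothetical standardized red team finding; every field name is illustrative.
finding_report = {
    "id": "RT-2024-0042",
    "model_version": "model-v1-rc",
    "category": "social engineering",
    "severity": "high",
    "reproduction_prompt": "<redacted adversarial prompt>",
    "observed_output_summary": "Model produced step-by-step phishing copy.",
    "suggested_mitigation": "Tighten refusal policy for credential-harvesting requests.",
    "status": "open",  # open -> triaged -> mitigated -> verified
}

print(json.dumps(finding_report, indent=2))
```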

Using real-time reinforcement learning is critically important to the future of AI red teaming. OpenAI makes the case for automating frameworks that reward discoveries of new attack vectors as a core part of the real-time feedback loops. The goal of RL is to create a continuous loop of improvement.

Don’t settle for anything less than actionable insights from the red team process. It’s essential to treat every red team discovery or finding as a catalyst for updating security strategies, improving incident response plans, and revamping guidelines as required.

Budget for the added expense of enlisting external expertise for red teams. A central premise of OpenAI’s approach to red teaming is to actively recruit outside specialists who have informed perspectives and knowledge of advanced threats. Areas of expertise valuable to AI-model red teams include deepfake technology, social engineering, identity theft, synthetic identity creation, and voice-based fraud. “Involving external specialists often surfaces hidden attack paths, including sophisticated social engineering and deepfake threats.” (Ahmad et al., 2024)

Papers:

Beutel, A., Xiao, K., Heidecke, J., & Weng, L. (2024). “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning.” OpenAI.

Ahmad, L., Agarwal, S., Lampe, M., & Mishkin, P. (2024). “OpenAI’s Approach to External Red Teaming for AI Models and Systems.” OpenAI.
