Project Stalled: Grid Bottlenecks Threaten the Fifth Industrial Revolution

The defining feature of our current data center cycle isn’t a shortage of customers or capital; it’s a shortage of power that can actually be delivered on time. In the space of three years, large‑load interconnection queues have gone from a planning tool to the main reason otherwise viable AI campuses are missing their deployment windows.

Multi‑year delays for large loads are quickly becoming the norm, not the exception, in major markets, turning what should be a sprint to deploy AI into a long and uncertain wait.

At the grid level, the same pattern is visible: across U.S. markets, the interconnection queue itself has become a primary source of delay. Regional operators from PJM to ERCOT and NYISO report steep increases in both the number and size of large‑load requests, with data centers and other energy‑intensive digital infrastructure accounting for a growing share of new demand (https://insidelines.pjm.com/pjm-board-outlines-plans-to-integrate-large-loads-reliably/, https://www.nyiso.com/-/energy-intensive-projects-in-nyiso-s-interconnection-queue/, https://www.latitudemedia.com/news/ercots-large-load-queue-has-nearly-quadrupled-in-a-single-year/). In practice, that means more projects are being told that meaningful capacity will not be available on the timeline their customers expect, forcing them into redesigns, phased power ramps, or alternative power strategies.

Time, in other words, has become the scarcest resource in the data center economy. The same 60 MW AI facility that looks attractive at a 17.1% IRR when delivered on schedule can see its returns fall to 12.6% with a three‑month delay and to 8.8% with a six‑month delay—nearly halving its investment case (https://www.thefastmode.com/expert-opinion/47210-what-we-learned-in-2025-about-data-center-builds-why-delays-will-persist-in-2026-without-greater-visibility). That is why, in this industrial revolution, the metric that matters most is speed‑to‑power: how quickly real, reliable megawatts can be made available at the fence line, not how many gigawatts exist on slides or in press releases. That metric will do more to determine who wins than any short‑term race to buy chips or secure logos. The operators who escape that bottleneck will be the ones who stop treating the grid as their only path to power and start treating energy as a first‑order design choice, not a background assumption.
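To make the sensitivity concrete, here is a minimal cash‑flow sketch of how a slipped energization date erodes IRR. Every input (capex, monthly net cash flow, build time, horizon) is a placeholder assumption for illustration, not the inputs behind the cited figures; the point is the direction and speed of the erosion, not the exact percentages.

```python
# Illustrative sketch only: a toy monthly cash-flow model showing how a slip
# in the energization date erodes IRR. Capex, net cash flow, build time and
# horizon are assumptions, not the inputs behind the cited 17.1%/12.6%/8.8%.

def npv(rate, cashflows):
    """Net present value of monthly cash flows at a monthly discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.9, hi=1.0):
    """Monthly IRR via bisection; assumes NPV changes sign exactly once on [lo, hi]."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def facility_irr(delay_months, capex=-600e6, monthly_net=10e6,
                 build_months=12, horizon_months=120):
    """Capex at month 0; net revenue starts only after construction plus any delay."""
    start = build_months + delay_months
    flows = [capex] + [0.0] * (start - 1) + [monthly_net] * (horizon_months - start)
    return (1 + irr(flows)) ** 12 - 1   # annualize the monthly rate

for delay in (0, 3, 6):
    print(f"{delay}-month delay -> annual IRR ~ {facility_irr(delay):.1%}")
```

Swapping in a project's actual capital plan and revenue ramp makes the same point with real numbers: months of delay translate directly into points of IRR.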

The Fifth Industrial Revolution Meets a 20th‑Century Grid

Every industrial revolution has been powered as much by infrastructure as by ideas. Steam needed coal and railroads; mass production needed cheap electricity and high‑voltage transmission; the digital revolution needed a dense web of fiber and reliable baseload power. Each wave demanded entirely new energy systems and massive build‑outs of physical infrastructure. AI, often described as a fifth industrial revolution, is no different—except this time the breakthrough is landing on a grid that was never designed for this kind of load, at this kind of speed.

For most of the last two decades, data center development proceeded as if power were an infinite, fungible input: if you could find land, fiber, and tax incentives, the electrons would somehow follow. That assumption is now colliding with the reality of multi‑year interconnection queues, constrained transmission corridors, and long‑lead equipment that cannot be willed into existence by demand alone. Yet even as AI roadmaps accelerate, interconnection reform, permitting modernization, and domestic equipment build‑out have moved at a far slower pace, lagging well behind demand curves.

The result is a profound mismatch. The industry is behaving as if it is in the middle of an infrastructure boom (AI investment, state‑level incentives, and hyperscale build‑out plans) while the processes that govern what can actually be built (queue studies, zoning and permitting, transformer manufacturing) are still calibrated for the previous era. In the gaps between those timelines, data center projects are stalling. We are paying for this mismatch in multi‑year delays and stranded projects. DCD recently reported that a one‑month delay on a 60 MW facility could cost $14.2 million, underscoring how quickly these structural frictions translate into stranded capital and eroded returns (https://www.datacenterdynamics.com/en/whitepapers/preventing-multimillion-dollar-data-center-losses-through-reporting/).
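A rough back‑of‑envelope calculation shows how a figure of that magnitude can accumulate in a single month. The capital intensity, cost of capital, and lease rate below are assumptions chosen only to illustrate the mechanics, not DCD's methodology.

```python
# Back-of-envelope sketch (assumed inputs, not DCD's model): the monthly cost of
# a stalled 60 MW build as carrying cost on committed capital plus revenue that
# cannot be billed until energization.
capacity_mw = 60
capex = capacity_mw * 10e6          # assume ~$10M of capex per MW already committed
annual_cost_of_capital = 0.08       # assumed blended financing rate
monthly_carry = capex * annual_cost_of_capital / 12

lease_rate_per_kw_month = 150.0     # assumed contracted $/kW-month for AI capacity
monthly_deferred_revenue = capacity_mw * 1000 * lease_rate_per_kw_month

print(f"carrying cost: ${monthly_carry/1e6:.1f}M, "
      f"deferred revenue: ${monthly_deferred_revenue/1e6:.1f}M per month of delay")
```

Under those assumptions the two components alone land in the low teens of millions per month, the same order of magnitude as the reported figure.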

Where Projects are Stuck: Queues, Permits, and Steel

For a large AI campus today, the hard part is no longer how many racks you can fit into a shell; it is everything that has to happen upstream of the switchgear. Once you cross into nine‑figure loads, new data centers stop being “just another rate-payer” and start triggering transmission‑level impacts—new substations, reconductored lines, and sometimes entirely new high‑voltage corridors. Each of those requirements pulls the project into the same machinery that governs any grid‑scale asset: multi‑stage studies, cost‑allocation fights, and a queue that is already congested with generation.

In PJM, interconnection wait times for large loads such as data centers have stretched beyond eight years in some cases, leaving tens of gigawatts of planned capacity unable to access grid power. In New York, the grid operator has reported 48 large‑load interconnection requests totaling around 12 GW—most from data center and similar digital infrastructure projects—underscoring how dozens of builds are stuck in line rather than under construction (NYISO reports and queue data). Texas is facing the same pattern at a different scale: ERCOT’s large‑load interconnection queue has swelled to about 226 GW, nearly quadruple the prior year’s level, with roughly 77% of that tied to large data centers targeting grid connections by 2030 (https://www.latitudemedia.com/news/ercots-large-load-queue-has-nearly-quadrupled-in-a-single-year/).  Queue position, and the uncertainty around when it will translate into a notice to proceed, has become a central scheduling risk rather than a back‑office detail.

Queue status, however, is only one part of where projects bog down. Zoning, land‑use approvals, and environmental permits have become another major source of friction, as communities and regulators confront a new class of industrial‑scale digital infrastructure. Local debates over water use, visual impact, noise, emergency‑response obligations, and who should pay for upstream grid upgrades are now common features of data center hearings from Northern Virginia to the Pacific Northwest (https://www.thefastmode.com/expert-opinion/47210-what-we-learned-in-2025-about-data-center-builds-why-delays-will-persist-in-2026-without-greater-visibility).

Even when a project clears those hurdles, it still has to contend with a constrained global supply chain for the components that physically connect it to the grid. Large power transformers, generator step‑up units, and high‑voltage breakers now sit at the center of a well‑documented bottleneck, with typical lead times for big transformers in the 80‑ to 120‑week range and some transmission‑class units stretching toward three or four years in tight markets (https://www.utilitydive.com/news/electric-transformer-shortage-nrel-niac/738947/, https://www.woodmac.com/news/opinion/supply-shortages-and-an-inflexible-market-give-rise-to-high-power-transformer-lead-times/). That means a campus can have land, customers, capital, and permits lined up—and still sit idle because the “iron heart” of its interconnection is somewhere in a manufacturing backlog, halfway around the world.

Demand Shock: AI Data Centers as a New Kind of Load

What makes this wave different is the nature of the load. Traditional enterprise and cloud data centers grew in relatively predictable increments that grid planners could smooth over time. AI data centers arrive in chunks that look more like aluminum smelters or steel mills: 100 MW, 300 MW, 1 GW per campus, often on a compressed timeline. For system planners, that turns data centers from industrial rate payers into grid‑scale assets that can reshape local load duration curves and reserve margins. Planning has had to shift accordingly.
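To see why a single flat block of load matters so much to planners, consider a minimal sketch: add a constant data‑center load to a toy hourly system profile and watch the peak and reserve margin move. Every number below is an illustrative assumption, not data for any real region.

```python
# Minimal, illustrative sketch: a flat multi-hundred-MW campus load raises every
# hour of the load duration curve and eats directly into reserve margin.
# The regional profile and installed capacity below are made-up numbers.
import numpy as np

hours = np.arange(8760)
base_load_mw = (8000
                + 1500 * np.sin(2 * np.pi * hours / 24)     # daily swing
                + 800 * np.sin(2 * np.pi * hours / 8760))    # seasonal swing
campus_mw = 300.0                # AI campus running essentially flat, 24/7
installed_capacity_mw = 11500.0  # assumed dispatchable supply

for label, load in (("before campus", base_load_mw),
                    ("after campus", base_load_mw + campus_mw)):
    peak = load.max()
    reserve_margin = (installed_capacity_mw - peak) / peak
    print(f"{label}: peak {peak:,.0f} MW, reserve margin {reserve_margin:.1%}")

# Load duration curve: hourly loads sorted from highest to lowest. A flat block
# shifts the whole curve up by campus_mw, not just the handful of peak hours.
ldc_after = np.sort(base_load_mw + campus_mw)[::-1]
```

Because the block never turns off, it behaves nothing like the diversified, weather‑driven load growth that most planning models were built around.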

BloombergNEF projects that U.S. data center power demand could reach 106 GW by 2035—more than double today’s operating capacity—with over a quarter of recently announced U.S. data center projects larger than 500 MW (https://www.utilitydive.com/news/us-data-center-power-demand-could-reach-106-gw-by-2035-bloombergnef/806972/, https://www.publicpower.org/periodical/article/data-center-power-demand-us-hits-106-gw-2035-bloombergnef). That growth is not confined to Northern Virginia or a handful of legacy hubs; maps of announced and under‑construction capacity show gigawatts of load spreading into exurban and rural regions, often served by transmission infrastructure that was never designed for clusters of multi‑hundred‑megawatt digital plants.

Grid operators are scrambling to keep up. PJM’s 2026 outlook projects that summer peak demand could rise to approximately 222 GW by 2036, about 66 GW above current levels, with data centers a major driver of that growth (https://insidelines.pjm.com/pjms-updated-20-year-forecast-continues-to-see-significant-long-term-load-growth/). At the federal level, the Federal Energy Regulatory Commission (FERC) has opened a dedicated rulemaking—RM26‑4—on interconnection of large loads, seeking to establish a consistent process and standards for customers with 20 MW or more of demand (https://www.ferc.gov/rm26-4). The fact that data centers now require bespoke regulatory categories is a signal in itself: the demand shock from AI has outgrown the assumptions baked into the old rulebook. 

Supply Shock: Generation That Can’t Get to the Fence Line

If AI data centers are a demand shock, offshore wind and other large‑scale generation projects are the supply shock—but they are running into many of the same structural bottlenecks, just in reverse. Offshore wind along the U.S. East Coast depends on a fragile chain of federal permits, specialized vessels and ports, and interconnections into congested coastal grids. When any one of those pieces falters, gigawatts can sit idle on paper.

In late 2025, the Department of the Interior issued a stop‑work order on several major offshore wind projects—including Vineyard Wind, Revolution Wind, Coastal Virginia Offshore Wind, Sunrise Wind, and Empire Wind—citing national security and permitting concerns (https://www.doi.gov/pressreleases/trump-administration-protects-us-national-security-pausing-offshore-wind-leases). Those decisions effectively paused multiple gigawatts of contracted clean energy and put billions of dollars of investment in limbo while developers, states, and the federal government argued in court and in the press (https://www.nytimes.com/2026/01/10/climate/billions-at-stake-in-the-ocean-as-trump-throttles-offshore-wind-farms.html). Early rulings in 2026 allowed some projects—such as Revolution Wind and Empire Wind—to resume installation, but the episode underscored a new reality: infrastructure schedules are now intertwined with litigation timelines as much as engineering ones (https://www.spencerfane.com/insight/revolution-wind-may-proceed-with-its-offshore-wind-energy-project-the-trump-administration-loses-another-court-battle/, https://www.equinor.com/news/20260115-empire-wind-granted-preliminary-injunction).

Offshore wind is not alone. New gas plants, battery storage hubs, and even brownfield repowers are facing their own interconnection and siting bottlenecks. One recent analysis found nearly 2,600 GW of generation and storage capacity (almost double the size of the existing U.S. grid) waiting in interconnection queues nationwide, with solar and battery projects making up the bulk of that backlog. In PJM, a one‑time Reliability Resource Initiative was needed to fast‑track 50 shovel‑ready generators to cover load growth and retirements, and most of the projects selected for that fast lane were gas plants, not renewables, simply because they could clear permitting and interconnection hurdles more quickly in the current system (https://www.cfr.org/articles/us-interconnection-challenge-why-renewables-are-stuck-line).

Transmission is its own constraint. Multiple studies show U.S. transmission build‑out is lagging far behind what is needed to balance regional supply and demand, even before AI’s additional load is fully accounted for ( https://cleanenergygrid.org/new-report-reveals-u-s-transmission-buildout-lagging-far-behind-national-needs/). FERC’s recent Orders 1920 and 1977 push transmission providers toward 20‑year planning horizons and attempt to streamline federal backstop siting, but they cannot erase the reality that every new long‑distance line still has to navigate state‑by‑state approvals, local opposition, and the same transformer bottlenecks that data centers and power plants are competing over ( https://www.whitecase.com/insight-alert/transmission-planning-reforms-finalized-ferc-order-no-1920). In practice, that means many of the clean energy and gas projects being promised to serve AI loads are located in places where they cannot reach the data center clusters without years of intermediate transmission upgrades.

For data center operators, this cuts twice, because the supply they are counting on is stalling for the same reasons their own projects are. Offshore wind, new gas plants, storage projects, and transmission expansions are all vying for the same scarce transformer manufacturing slots, construction crews, and court dockets. Both the demand side (AI load) and the supply side (new generation) are hitting the same constraints in build throughput—permitting, interconnection, and long‑lead equipment—just from opposite directions. When both the loads and the resources that could serve them are constrained by build throughput, planning assumptions that looked balanced on paper can break down quickly in practice.

Policy In Catch‑Up Mode

Policymakers are not ignoring these signals; they are just moving on a different timeline. FERC’s Order 2023, finalized in 2023 and being implemented across regions, aims to unclog generator interconnection queues by imposing cluster studies, stricter project readiness requirements, and firm deadlines on transmission providers (https://www.ferc.gov/explainer-interconnection-final-rule). In theory, that should help move renewable, storage, and gas projects through the system more predictably, eventually easing the supply‑side bottleneck.

On the demand side, RM26‑4 is FERC’s attempt to do something similar for large loads—data centers, hydrogen hubs, and other industrial customers with tens or hundreds of megawatts of demand (https://www.ferc.gov/rm26-4). The docket has attracted dozens of comments from utilities, grid operators, industrial customers, and consumer advocates, debating standards for when a load becomes “large,” how to treat behind‑the‑meter resources, and who should pay for the network upgrades triggered by these connections (https://www.monitoringanalytics.com/filings/2025/IMM_Reply_Comments_re_ANOPR_Docket_No_RM26-4_20251205.pdf). The answers will shape not just how fast projects move through studies, but also whether the economics pencil out for campuses that must shoulder large interconnection bills.

Regional grid operators are making their own adjustments. PJM’s board has outlined new processes to integrate large loads more reliably, including scenario‑based planning for data center clusters and a focus on co‑located generation and load as a mainstream option rather than an exception (https://insidelines.pjm.com/pjm-board-outlines-plans-to-integrate-large-loads-reliably/, https://www.duanemorris.com/alerts/ferc_mandates_new_transmission_services_accomodate_data_centers_1225.html). FERC has also ordered PJM to revise its tariff to better accommodate arrangements where significant generation and load share an interconnection point—precisely the kind of configuration emerging at AI campuses with on‑site power (https://www.perkinscoie.com/en/insights/ferc-orders-pjm-to-revise-tariff-to-accommodate-co-located-arrangements.html). 

The problem is that regulatory reform timelines are measured in years, while AI‑driven site selection often happens in months. By the time rulemaking is finalized, a new wave of projects may already be stuck in the old rules, prolonging the startup phase for the very customers those reforms are supposed to help.

Beyond the Grid‑Only Mindset: Bring Your Own Power

In this industrial cycle, an uncomfortable truth is emerging: operators who arrive expecting the grid alone to solve their power problem will wait the longest. The ones that move fastest will be those that show up with a credible power plan—generation, fuel, and interconnection—baked into the project from day one. That logic went from industry subtext to national talking point when President Donald Trump used his 2026 State of the Union address to lay out what he called a “ratepayer protection pledge,” telling major tech companies that they would be expected to build or finance their own power plants for AI data centers so that households are not stuck paying for grid upgrades. In his words, “they’re going to produce their own electricity”—a political framing that formalizes a trend many developers were already moving toward. 

For some operators, that shift means pairing data centers with existing generation rather than waiting for greenfield plants to be approved. One of the clearest examples is Amazon and Talen Energy: through a long‑term agreement, Talen will supply up to 1,920 MW of carbon‑free power from the Susquehanna nuclear station in Pennsylvania to support AWS data centers, while the two companies explore uprates and potential small modular reactors at the site (https://www.utilitydive.com/news/talen-amazon-aws-susquehanna-nuclear-data-centert/750440/, https://energydigital.com/articles/amazons-nuclear-energy-deal). That structure effectively treats the nuclear plant and the cloud campus as a single integrated system, giving Amazon a dedicated supply with a clear development path and giving Talen a long‑term revenue stream and a platform for future expansion. It is not quite “off‑grid,” since transmission is still involved, but it is much closer to a bring‑your‑own‑power posture than a traditional retail service agreement.

Others are pursuing firmed hybrid models designed around the data center as an anchor load, combining natural gas, storage, and, in some cases, renewables. PJM’s recent rules explicitly create a faster pathway for combined data center and power‑generation projects, with Reuters reporting that the new framework tends to favor on‑site or adjacent gas plants because they can clear permitting and interconnection more quickly than many greenfield renewables in today’s environment (https://www.reuters.com/business/energy/us-grid-rules-faster-data-centers-favor-on-site-gas-plants–reeii-2026-01-27/). Under that approach, large loads can either lean on their own generation through an expedited connection or enter into “connect‑and‑manage” arrangements that require them to curtail during grid stress, effectively trading some flexibility for earlier access to power.

The most ambitious version of this trend is true bring‑your‑own‑power (BYOP): campuses where the data center and a dedicated power plant are planned, permitted, and financed as one project, with the grid seen as a secondary outlet rather than the primary lifeline. Several AI‑focused developers are now actively marketing off‑grid or near‑grid concepts that pair gas‑fired generation directly with high‑density compute clusters, explicitly arguing that building an independent, “shadow” power system is faster and more controllable than navigating standard interconnection queues (https://finance-commerce.com/2026/02/off-grid-ai-data-centers-natural-gas-power/). In these models, the utility interconnection is still valuable—for backup, for selling surplus, and for long‑term optionality—but it sits alongside on‑site power rather than dictating whether the project can proceed.

This is not an argument against the grid or against clean energy. It is a recognition that, in an AI‑first world, time to power is time to market, and the queue alone cannot deliver the capacity the industry needs on the timelines customers expect. For the next wave of AI data centers, the power plant and the campus will increasingly be designed as one story, not two.

Learning from COVID: What We Should Have Built Already

COVID was, in many ways, a dress rehearsal for today’s constraints. The pandemic revealed just how brittle global supply chains were, especially for heavy equipment like transformers, turbines, and switchgear, and it showed how quickly demand patterns could shift, forcing utilities and grid operators to contend with new load shapes and uncertainties. That fragility should have triggered a concerted push to regionalize manufacturing, streamline permitting, and modernize grid planning assumptions.

COVID should have shattered our illusions about infrastructure preparedness. It was a genuine readiness test, and it revealed how much faster digitization needed to move. Instead, much of the response focused on restoring the old normal. Factory shutdowns, shipping delays, and a wave of early warnings about transformer scarcity and workforce limits exposed how fragile the physical backbone of the grid really was. By 2024, lead times for large generation step‑up transformers had roughly doubled from pre‑pandemic norms of 30–60 weeks to around 120–130 weeks on average, and analysts warned that domestic manufacturing capacity was still nowhere near keeping pace with rising demand from data centers, electrification, and clean energy projects (https://www.nrucfc.coop/content/solutions/en/stories/energy-tech/transformers-are-facing-major-cost–supply-chain-pressures.html, https://www.woodmac.com/press-releases/power-transformers-and-distribution-transformers-will-face-supply-deficits-of-30-and-10-in-2025/). Imports now provide an estimated 80 percent of U.S. power transformers, leaving critical grid projects exposed to geopolitical shocks and trade disputes on top of basic manufacturing bottlenecks.

Let’s face it: COVID was 6 years ago. That’s 60% of the time it typically takes to plan, permit, and build a major new high‑voltage transmission line in the United States, where full project timelines routinely stretch to eight–twelve years from conception to completion (https://energy.sustainability-directory.com/learn/what-is-the-typical-timeline-for-planning-permitting-and-constructing-a-major-, https://www.publicadvocates.cpuc.ca.gov/-/media/cal-advocates-website/files/press-room/reports-and-analyses/230612-caladvocates-transmission-timelines.pdf). In that window, the country added only a trickle of new high‑voltage lines: Americans for a Clean Energy Grid estimates that from 2020 to 2023, the U.S. built an average of about 350 miles of new high‑voltage transmission per year—roughly one‑fifth of the 1,700 miles per year added in the early 2010s, and well below the ~5,000 miles per year DOE now says will be needed going forward (https://cleanenergygrid.org/fewer-new-miles-2024/, https://cleanenergygrid.org/new-report-reveals-u-s-transmission-buildout-lagging-far-behind-national-needs/).

We sensed the power shakeup. Have we done enough as a nation to address it? No. In my work supporting equipment providers in this space, I can also tell you that many did not do enough to dial up production. I saw many power distribution equipment companies shuttering some of their production plants even while their lead times grew into multi‑year waiting lists. Others, like Kohler (now Rehlko), took the opportunity to expand production or, like JST, to launch their equipment into the market as a viable new competitor for power equipment. But these moves remain the exception, not the rule, relative to the scale of AI‑driven demand now arriving.

For the data center and AI ecosystem, the lesson should have been straightforward: if you plan to double or triple load in key regions over a decade, you must start building the supporting infrastructure early, diversify supply chains, and streamline the processes that stand between project conception and energization. But the policy and procurement apparatus did not move at the same speed as AI adoption. Instead, the industry raced ahead with hyperscale build‑out plans, while many of the underlying reforms—domestic transformer manufacturing, permitting modernization, interconnection overhaul—remained incremental.

That is why today’s delayed projects feel particularly frustrating. Unlike earlier industrial revolutions, this one arrived with a recent, vivid, and economically catastrophic reminder of what happens when infrastructure lags technology. The warnings were there; the choice not to build ahead of the next hurdle was society’s.

Moving From “Stalled” to “Under Construction”

The next phase of this story will be written by those willing to treat power as the foremost question, not merely one of the first questions. For data center developers, that starts with flipping the usual site‑selection script: instead of finding land and then asking how much power might be available someday, the first filter becomes where firm capacity—or a credible path to it—can be made real within the deployment window. That can mean prioritizing sites near existing generation, pursuing long‑term offtake from plants that already hold interconnection rights, or designing campuses with explicit multi‑path strategies that combine grid supply, on‑site generation, and potential neighbor‑to‑neighbor energy arrangements. In this frame, the site isn’t just “good dirt”; it is a power strategy with a physical address.
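As a sketch of what that flipped script can look like in practice, here is a minimal power‑first site screen. The field names, thresholds, and candidate sites are hypothetical, intended only to show the ordering of the decision: time to firm power is evaluated before any other site attribute.

```python
# Hypothetical "power-first" site screen: a site only advances if a credible
# power path (grid, offtake, or on-site generation) fits the deployment window.
# Field names, thresholds, and candidate sites are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    months_to_firm_power: int     # best estimate across grid, offtake, and on-site paths
    firm_mw_available: float      # capacity already contracted or rights-held
    onsite_generation_path: bool  # permittable, financeable bring-your-own-power option

def passes_power_first_screen(site: Site, need_mw: float, window_months: int) -> bool:
    """Power is the first filter: capacity and timeline before land, fiber, or incentives."""
    has_capacity_path = site.firm_mw_available >= need_mw or site.onsite_generation_path
    return has_capacity_path and site.months_to_firm_power <= window_months

candidates = [
    Site("Queue-only exurban parcel", months_to_firm_power=60,
         firm_mw_available=0, onsite_generation_path=False),
    Site("Brownfield adjacent to existing plant", months_to_firm_power=18,
         firm_mw_available=150, onsite_generation_path=True),
]
viable = [s for s in candidates
          if passes_power_first_screen(s, need_mw=100, window_months=24)]
print([s.name for s in viable])  # only the site with an in-window power path survives
```

Everything else about a parcel still matters, but only after it clears that first gate.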

Utilities and grid operators, in turn, have an opportunity to move from passive gatekeepers to proactive partners. Publishing hosting‑capacity‑style tools for large loads, clarifying how bring‑your‑own‑power projects will be evaluated, and setting transparent, tiered processes for 20‑MW, 100‑MW, and 500‑MW‑plus customers all help align expectations and reduce surprises. (https://insidelines.pjm.com/pjm-board-outlines-plans-to-integrate-large-loads-reliably/) Some regions are beginning to show what this can look like: large‑load frameworks that distinguish between customers arriving with their own firm generation and those relying entirely on network upgrades, and public reporting that makes the composition and status of big‑load requests visible enough for developers to plan around. Those steps won’t eliminate interconnection queues, but they can make the time to credible answer much shorter, even if the answer is that a different site or power configuration will be needed.

For policymakers, the task is to connect the dots between transmission and interconnection reforms and the reality of AI‑driven load. Order 2023, at its core, is about unclogging generator interconnection queues—cluster studies, readiness screens, and firm timelines so that new supply can move from applications to energization more predictably. RM26‑4 is the matching piece on the demand side, aiming to create clear, consistent standards for large loads seeking to connect at 20 MW and above: what studies they trigger, how quickly they are processed, and how costs are allocated. Layered on top of state‑level siting reforms, those federal rules can either remain abstract or be used explicitly to treat AI data centers and co‑located generation as strategic infrastructure, with aligned timelines for environmental review, interconnection studies, and cost‑allocation decisions.

The Fifth Revolution’s Coming Power Test

In a world where one grid region is evaluating more than 200 GW of large‑load requests and U.S. data center demand could reach 106 GW by 2035, the differentiator will not be who has the largest AI budget, but who can convert megawatts from a theoretical promise into energized, reliable capacity fastest. That is what speed‑to‑power really means in practice: a shared playbook where developers design for power first, utilities reward projects that solve as well as consume, and policymakers make sure the rules recognize AI data centers and co‑located generation as part of the backbone of the next industrial era, rather than as an afterthought at the edge of the grid.

Every previous industrial revolution has eventually been judged by whether its physical infrastructure managed to keep pace with its transformative potential. The fifth, which we are operating within and actively building, will be no different. AI can only be as revolutionary as the substations, transformers, and generation that feed its clusters allow it to be. This problem will not be fixed overnight.

Right now, too many projects are stalled in queues, courtrooms, and equipment backlogs. Whether the “stalled” project label becomes the defining word of this era versus just a painful, brief chapter will depend on how quickly our industry, utilities, and policymakers are willing to treat energy not as an afterthought, but as the central design constraint of the AI age.
