Stay Ahead, Stay ONMINE

Oracle to spend $40B on Nvidia chips for OpenAI data center in Texas

OpenAI has also expanded Stargate internationally, with plans for a UAE data center announced during Trump’s recent Gulf tour. The Abu Dhabi facility is planned as a 10-square-mile campus with 5 gigawatts of power.

Gogia said OpenAI’s selection of Oracle “is not just about raw compute, but about access to geographically distributed, enterprise-grade infrastructure that complements its ambition to serve diverse regulatory environments and availability zones.”

Power demands create infrastructure dilemma

The facility’s power requirements raise serious questions about AI’s sustainability. Gogia noted that the 1.2-gigawatt demand — “on par with a nuclear facility” — highlights “the energy unsustainability of today’s hyperscale AI ambitions.”

Shah warned that the power envelope keeps expanding. “As AI scales up and so does the necessary compute infrastructure needs exponentially, the power envelope is also consistently rising,” he said. “The key question is how much is enough? Today it’s 1.2GW, tomorrow it would need even more.”

This escalating demand could burden Texas’s infrastructure, potentially requiring billions in new power grid investments that “will eventually put burden on the tax-paying residents,” Shah noted. Alternatively, projects like Stargate may need to “build their own separate scalable power plant.”
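
For a sense of the scale behind those quotes, here is a back-of-the-envelope sketch in Python. The 1.2 GW figure comes from the article; the average-household draw of roughly 1.2 kW is our illustrative assumption, not a figure from the story:

```python
# Back-of-the-envelope scale check for a 1.2 GW AI campus.
# 1.2 GW is the article's figure; the ~1.2 kW average continuous
# draw per US household is an illustrative assumption.

facility_gw = 1.2
hours_per_year = 8760

annual_twh = facility_gw * hours_per_year / 1000  # GW * h = GWh; /1000 -> TWh
print(f"Annual energy at full load: ~{annual_twh:.1f} TWh")  # ~10.5 TWh

avg_household_kw = 1.2                              # assumed household draw
households = facility_gw * 1e6 / avg_household_kw   # 1 GW = 1e6 kW
print(f"Roughly equivalent to ~{households:,.0f} households")  # ~1,000,000
```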

What this means for enterprises

The scale of these facilities explains why many organizations are shifting toward leased AI computing rather than building their own capabilities. The capital requirements and operational complexity are beyond what most enterprises can handle independently.

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, Bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

DOE Announces New Supercomputer Powered by Dell and NVIDIA to Speed Scientific Discovery

BERKELEY — During a visit to Lawrence Berkeley National Laboratory (Berkeley Lab), U.S. Secretary of Energy Chris Wright today announced a new contract with Dell Technologies to develop NERSC-10, the next flagship supercomputer at the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy (DOE) user facility at Berkeley Lab. The new system, due in 2026, will be named after Jennifer Doudna, the Berkeley Lab-based biochemist who was awarded the 2020 Nobel Prize in Chemistry in recognition of her work on the gene-editing technology CRISPR.

The new supercomputer, a Dell Technologies system powered by NVIDIA’s next-generation Vera Rubin platform, will be engineered to support large-scale high-performance computing (HPC) workloads like those in molecular dynamics, high-energy physics, and AI training and inference — and provide a robust environment for the workflows that make cutting-edge science possible. This announcement reflects the Trump Administration’s commitment to restoring the gold standard of American science and unleashing the next great wave of innovation. Doudna will be one of the most advanced supercomputers ever deployed by the Department, advancing U.S. leadership in the global race for AI.

“The Doudna system represents DOE’s commitment to advancing American leadership in science, AI, and high-performance computing,” said U.S. Secretary of Energy Chris Wright. “It will be a powerhouse for rapid innovation that will transform our efforts to develop abundant, affordable energy supplies and advance breakthroughs in quantum computing. AI is the Manhattan Project of our time, and Doudna will help ensure America’s scientists have the tools they need to win the global race for AI dominance.”

“At Dell Technologies, we are empowering researchers worldwide by seamlessly integrating simulation, data, and AI to address the world’s most complex challenges,” said Michael Dell, Chairman and CEO, Dell Technologies. “Our collaboration with the Department of Energy on Doudna underscores a shared vision to redefine

Read More »

DOE Issues LNG Export Authorization for Port Arthur Phase II, Advancing President Trump’s Commitment to Unleash American Energy

WASHINGTON — U.S. Secretary of Energy Chris Wright today approved a final authorization for liquefied natural gas (LNG) exports to non-free trade agreement (non-FTA) countries from Port Arthur LNG Phase II in Jefferson County, Texas, following the Response to Comments on the 2024 LNG Export Study issued on May 19. This is the first final LNG export approval under President Trump’s leadership and marks another step in restoring regular order to LNG export permitting — reversing the previous administration’s pause and delivering on the President’s pledge to unleash American energy.

“Port Arthur LNG Phase II marks a significant expansion of the first phase already under construction — turning more of the liquid gold beneath our feet into energy security for the American people,” said Secretary Wright. “With President Trump’s leadership, the Energy Department is restoring America’s role as the world’s most reliable energy supplier.”

“U.S. LNG exports continue to gain momentum, and I am glad DOE is able to do its part to answer the call for more reliable and affordable energy, at home and abroad,” said Tala Goudarzi, Principal Deputy Assistant Secretary of the Office of Fossil Energy and Carbon Management.

Port Arthur LNG Phase II, owned by Sempra Energy, is projected to export 1.91 billion cubic feet per day (Bcf/d) once completed. In addition to Port Arthur Phase I — which is currently under construction and expected to begin exporting LNG in 2027 — Sempra also operates the Cameron LNG export terminal in Louisiana, which has been exporting LNG since 2019, and is currently constructing the Energia Costa Azul terminal in Mexico, which will begin commercial export operations of U.S.-sourced gas as LNG in 2026.

Today’s action marks the fifth LNG export authorization issued by Secretary Wright, bringing the total volume of exports associated with approvals under President Trump’s leadership to 11.45 Bcf/d.

Read More »

Oil Falls on Weak US Data and OPEC Output Fears

Oil declined as soft US economic data and concerns about rising supplies eroded the risk-on sentiment from a court ruling that blocked a swath of the Trump administration’s tariffs.

West Texas Intermediate fell 1.5% to settle near $61 a barrel after Interfax cited Kazakhstan as saying that OPEC+ is set to hike output at a meeting on Saturday, with the size of the increase still to be decided. Broader markets eased off of earlier highs on data showing the US economy shrank at the start of the year, further pressuring the commodity. Crude had earlier rallied after a trade court blocked a vast range of President Donald Trump’s trade levies, including elevated rates on China — the world’s top importer of crude.

“The path to sustainably higher prices remains extremely narrow,” with the market likely to struggle to absorb additional barrels from OPEC+ over the coming months, said Daniel Ghali, a commodity strategist at TD Securities. In the near term, algorithmic selling activity will weigh on prices into the weekend meeting, he added.

Oil has trended lower since mid-January on concerns about the fallout from Trump’s tariff war, with the revival of idled production by OPEC+ adding to headwinds. The trade measures have rattled global markets, raising concerns over economic growth and demand for commodities. Meanwhile, wildfires are threatening about 5% of Canada’s crude output as a blaze in Alberta’s oil sands region spreads.

Oil Prices

WTI for July delivery slipped 1.5% to settle at $60.94 a barrel in New York. Brent for July settlement dipped 1.2% to settle at $64.15 a barrel.

Read More »

Goldman, Morgan Stanley Say Trump Can Deploy Other Tariff Tools

Two of Wall Street’s top investment banks cautioned that the impact of a court ruling striking down many of President Donald Trump’s tariff measures may prove limited, given that the administration has other avenues to impose import duties.

“The tariff levels that we had yesterday are probably going to be the tariff levels that we have tomorrow, because there are so many different authorities the administration can reach into to put it back together,” Michael Zezas, Morgan Stanley’s global head of fixed income and thematic research, said on Bloomberg TV Thursday.

Goldman Sachs Group Inc.’s Alec Phillips wrote in a note to clients late Wednesday that “this ruling represents a setback for the administration’s tariff plans and increases uncertainty but might not change the final outcome for most major US trading partners.” The judgment by the US Court of International Trade halts 6.7 percentage points of levies announced this year, and the White House could use other tariff tools to make up for that, wrote Phillips, Goldman’s chief US political economist. “For now, we expect the Trump administration will find other ways to impose tariffs.”

Zezas had a similar assessment: Trump’s power to “raise and escalate — it might be a little bit slower moving, but it is still there.” Talks with countries such as Japan were always likely to take time, he said. And while they proceed, the administration would be able to “stitch together that authority on the other tariffs that went away — so all the same leverage is effectively there during the negotiation.”

For now, the White House is signaling it’s not planning to proceed with other tools. “There are different approaches that would take a couple of months” to put in place, Kevin Hassett, director of the National Economic Council, said on Fox Business Thursday.

Read More »

IRA tax credits spur construction, manufacturing in red and blue states

Emmanuel Martin-Lauzer is director of business development and public affairs at Nexans.

The jury is still out on whether the Inflation Reduction Act (IRA) has helped contain or reduce inflation. Nevertheless, certain provisions have delivered tangible benefits that deserve closer examination before any potential repeal. While some provisions may not have broad appeal, one success of the IRA has been its impact on strengthening U.S. energy production. The bill speaks more to renewable energy innovation and increased energy independence in support of U.S. economic growth than to direct economic impact. Repealing it wholesale risks far more than we might anticipate.

At its core, the IRA’s tax credits for energy generation are driving significant investment in innovative energy production. Because renewable energy makes up around 21.4% of the energy mix, these incentives have been passed down the chain to the benefit of ratepayers, while simultaneously sustaining the creation of entire industries. These investments have sparked construction and manufacturing jobs across both red and blue states, proving that clean energy isn’t just an environmental initiative but an economic one.

These tax credits have also bolstered America’s energy independence. Renewables like solar, onshore wind and offshore wind are integral to our domestic energy supply chain, reducing reliance on foreign sources and making our own infrastructure more resilient. They’ve also driven initiatives to improve long-term cost competitiveness, incentivizing developers to innovate to reduce costs.

Our current grid infrastructure and energy generation systems are nearing obsolescence, and over the next decade the demand on these systems is expected to skyrocket. Data centers alone are expected to double their electricity demand, and by 2035, over 71 million electric vehicles will each require around 400 kWh of charging per month. Urbanization trends are compounding this demand as more people move to cities. Without the IRA tax credits, we risk slowing down our
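
The EV figure alone implies substantial load. Here is a quick sketch using the article’s numbers (71 million EVs at roughly 400 kWh per vehicle per month); the comparison to total US generation is our rough contextual assumption, not from the piece:

```python
# Aggregate charging demand implied by the article's 2035 EV projection.
evs = 71e6                 # projected US electric vehicles by 2035
kwh_per_month = 400        # article's per-vehicle monthly charging figure

monthly_twh = evs * kwh_per_month / 1e9   # kWh -> TWh
annual_twh = monthly_twh * 12
print(f"~{monthly_twh:.1f} TWh/month, ~{annual_twh:.0f} TWh/year")
# ~28.4 TWh/month, ~341 TWh/year -- roughly 8% of current annual US
# generation (~4,200 TWh), an assumed ballpark used here for context.
```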

Read More »

FERC ALJ order threatens competitive transmission cost caps: CAISO

An order by a Federal Energy Regulatory Commission administrative law judge threatens cost caps included in competitive transmission solicitations across the United States, according to the California Independent System Operator.

A May 22 ruling by FERC ALJ Joel deJesus could also upend FERC’s framework for providing refunds to electricity customers when the agency finds a company has been overcollecting revenue, CAISO said in a filing with the commission on Tuesday. The California grid operator urged FERC to overturn deJesus’ findings, saying they “will harm ratepayers, undercut the consumer protections afforded by the Federal Power Act …, and cast doubt on the CAISO’s and customers’ ability to rely on voluntary, binding cost caps proposed and agreed to by project sponsors in competitive transmission planning processes.”

The issue centers on a dispute over a proposal by a Lotus Infrastructure Partners affiliate to recover more than double a cost cap for the 500-kV Ten West Link transmission project between California and Arizona. CAISO selected the DCR Transmission project in 2014 following a solicitation that grew out of its transmission planning process. The transmission line started operating a year ago.

DCR in June 2023 asked FERC to approve a transmission tariff based on a $553.3 million estimated project cost, compared to a $259 million binding cost cap. Three months later, FERC accepted DCR’s proposal, subject to refund, but ordered hearings and settlement procedures, according to CAISO. The proceeding was moving under the Federal Power Act’s section 205, according to CAISO. However, deJesus said FERC’s initial order was “ambiguous” as to what FPA section the case should advance under. He contends FERC should have determined that the DCR rate filing was an “initial rate filing” to be handled under section 206 of the FPA and that FERC should have established a refund date under that
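
As a quick check on the “more than double” characterization, using the two figures in the filing:

```python
# DCR's estimated cost versus the binding cap from the 2014 solicitation.
estimate_usd, cap_usd = 553.3e6, 259e6
print(f"Estimate is {estimate_usd / cap_usd:.2f}x the cap")  # ~2.14x: more than double
```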

Read More »

Cisco bolsters DNS security package

The software can block domains associated with phishing, malware, botnets, and other high-risk categories such as cryptomining or new domains that haven’t been reported previously. It can also create custom block and allow lists and offers the ability to pinpoint compromised systems using real-time security activity reports, Brunetto wrote.

According to Cisco, many organizations leave DNS resolution to their ISP. “But the growth of direct enterprise internet connections and remote work make DNS optimization for threat defense, privacy, compliance, and performance ever more important,” Cisco stated. “Along with core security hygiene, like a patching program, strong DNS-layer security is the leading cost-effective way to improve security posture. It blocks threats before they even reach your firewall, dramatically reducing the alert pressure your security team manages.”

“Unlike other Secure Service Edge (SSE) solutions that have added basic DNS security in a ‘checkbox’ attempt to meet market demand, Cisco Secure Access – DNS Defense embeds strong security into its global network of 50+ DNS data centers,” Brunetto wrote. “Among all SSE solutions, only Cisco’s features a recursive DNS architecture that ensures low-latency, fast DNS resolution, and seamless failover.”
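
To make the DNS-layer idea concrete, here is a minimal, vendor-neutral sketch of how resolver-side filtering works in principle: the resolver consults allow/block policy before answering a query. This illustrates the general technique, not Cisco’s implementation; the domains and category data below are invented for the example.

```python
# Minimal illustration of DNS-layer filtering: check policy before resolving.
# Not Cisco's implementation; domains and categories are invented examples.

BLOCKED_CATEGORIES = {"phishing", "malware", "botnet", "cryptomining", "newly-seen"}
ALLOW_LIST = {"intranet.example.com"}   # custom allow list
BLOCK_LIST = {"bad.example.net"}        # custom block list
DOMAIN_CATEGORY = {                     # would come from threat intel feeds
    "bad.example.net": "phishing",
    "miner.example.org": "cryptomining",
}

def resolve_policy(domain: str) -> str:
    """Return 'allow' or 'block' for a DNS query, custom lists taking priority."""
    if domain in ALLOW_LIST:
        return "allow"
    if domain in BLOCK_LIST:
        return "block"
    if DOMAIN_CATEGORY.get(domain) in BLOCKED_CATEGORIES:
        return "block"
    return "allow"

for d in ["intranet.example.com", "miner.example.org", "news.example.com"]:
    print(d, "->", resolve_policy(d))
```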

Read More »

HPE Aruba unveils raft of new switches for data center, campus modernization

And in large-scale enterprise environments embracing collapsed-core designs, the switch acts as a high-performance aggregation layer. It consolidates services, simplifies network architecture, and enforces security policies natively, reducing complexity and operational cost, Gray said. In addition, the switch offers the agility and security required at colocation facilities and edge sites. Its integrated Layer 4 stateful security and automation-ready platform enable rapid deployment while maintaining robust control and visibility over distributed infrastructure, Gray said.

The CX 10040 significantly expands the capacity it can provide and the roles it can serve for enterprise customers, according to one industry analyst. “From the enterprise side, this expands on the feature set and capabilities of the original 10000, giving customers the ability to run additional services directly in the network,” said Alan Weckel, co-founder and analyst with The 650 Group. “It helps drive a lower TCO and provide a more secure network.”

Aimed as a VMware alternative

Gray noted that HPE Aruba is combining its recently announced Morpheus VM Essentials plug-in package, which offers a hypervisor-based package aimed at hybrid cloud virtualization environments, with the CX 10040 to deliver a meaningful alternative to Broadcom’s VMware package. “If customers want to get out of the business of having to buy VM cloud or Cloud Foundation stuff and all of that, they can replace the distributed firewall, microsegmentation and lots of the capabilities found in the old VMware NSX [networking software] and the CX 10k, and Morpheus can easily replace that functionality [such as VM orchestration, automation and policy management],” Gray said.

The 650 Group’s Weckel weighed in on the idea of the CX 10040 as a VMware alternative:

Read More »

Indian startup Refroid launches India’s first data center CDUs

They use heat exchangers and pumps to regulate the flow and temperature of fluid delivered to equipment for cooling, while isolating the technology cooling system loop from facility systems. The technology addresses limitations of traditional air cooling, which industry experts say cannot adequately handle the heat generated by modern AI processors and high-density computing applications.

Strategic significance for India

Industry analysts view the development as a critical milestone for India’s data center ecosystem. “India generates 20% of global data, yet contributes only 3% to global data center capacity. This imbalance is not merely spatial — it’s systemic,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “The emergence of indigenously developed CDUs signals a strategic pivot. Domestic CDU innovation is a defining moment in India’s transition from data centre host to technology co-creator.”

Neil Shah, VP for research and partner at Counterpoint Research, noted that major international players like Schneider, Vertiv, Asetek, Liquidstack, and Zutacore have been driving most CDU deployments in Indian enterprises and data centers. “Having a local indigenous CDU tech and supplier designed with Indian weather, infrastructure and costs in mind expands options for domestic data center demand,” he said.

AI driving data center cooling revolution

India’s data center capacity reached approximately 1,255 MW between January and September 2024 and was projected to expand to around 1,600 MW by the end of 2024, according to CBRE India’s 2024 Data Center Market Update. Multiple market research firms have projected the India data center market to grow from about $5.7 billion in 2024 to $12 billion by 2030.

Bhavaraju cited aggressive projections for the sector’s expansion, with AI workloads expected to account for 30% of total workloads by 2030. “All of them need liquid cooling,” he said, noting that “today’s latest GPU servers – GB200 from Nvidia
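
As a rough illustration of the regulation a CDU performs (adjusting pump flow so the coolant supplied to the IT loop holds a target temperature, with the facility loop isolated behind a heat exchanger), here is a simplified proportional-control sketch. The setpoints, gains and thermal constants are invented for illustration and are not Refroid’s design.

```python
# Toy proportional control loop for a coolant distribution unit (CDU):
# raise pump flow when the supply coolant runs hotter than the setpoint.
# All constants are illustrative assumptions, not any vendor's values.

setpoint_c = 32.0   # target supply temperature (typical warm-water range)
supply_c = 45.0     # current supply temperature
flow_lpm = 100.0    # pump flow, liters per minute
kp = 5.0            # proportional gain: LPM added per degree C of error

for step in range(10):
    error = supply_c - setpoint_c                      # positive -> too hot
    flow_lpm = max(50.0, min(500.0, flow_lpm + kp * error))  # clamp pump range
    # Crude plant model: more flow through the heat exchanger cools the
    # supply; +0.1 C per step represents the constant IT heat load.
    supply_c += -0.002 * flow_lpm * error + 0.1
    print(f"step {step}: flow={flow_lpm:6.1f} LPM, supply={supply_c:5.2f} C")
```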

Read More »

Platform approach gains steam among network teams

Revisiting the platform vs. point solutions debate

The dilemma of whether to deploy an assortment of best-of-breed products from multiple vendors or go with a unified platform of “good enough” tools from a single vendor has vexed IT execs forever. Today, the pendulum is swinging toward the platform approach for three key reasons. First, complexity, driven by the increasingly distributed nature of enterprise networks, has emerged as a top challenge facing IT execs. Second, the lines between networking and security are blurring, particularly as organizations deploy zero trust network access (ZTNA). And third, to reap the benefits of AIOps, generative AI and agentic AI, organizations need a unified data store.

“The era of enterprise connectivity platforms is upon us,” says IDC analyst Brandon Butler. “Organizations are increasingly adopting platform-based approaches to their enterprise connectivity infrastructure to overcome complexity and unlock new business value. When enhanced by AI, enterprise platforms can increase productivity, enrich end-user experiences, enhance security, and ultimately drive new opportunities for innovation.” In IDC’s Worldwide AI in Networking Special Report, 78% of survey respondents agreed or strongly agreed with the statement: “I am moving to an AI-powered platform approach for networking.”

Gartner predicts that 70% of enterprises will select a broad platform for new multi-cloud networking software deployments by 2027, an increase from 10% in early 2024. The breakdown of silos between network and security operations will be driven by organizations implementing zero-trust principles as well as the adoption of AI and AIOps. “In the future, enterprise networks will be increasingly automated, AI-assisted and more tightly integrated with security across LAN, data center and WAN domains,” according to Gartner’s 2025 Strategic Roadmap for Enterprise Networking.

While all of the major networking vendors have announced cloud-based platforms, it’s still relatively early days. For example, Cisco announced a general framework for Cisco

Read More »

New Intel Xeon 6 CPUs unveiled; one powers rival Nvidia’s DGX B300

He added that his read is that “Intel recognizes that Nvidia is far and away the leader in the market for AI GPUs and is seeking to hitch itself to that wagon.”

Roberts said, “Basically, Intel, which has struggled tremendously and has turned over its CEO amidst a stock slide, needs to refocus to where it thinks it can win. That’s not competing directly with Nvidia but trying to use this partnership to re-secure its foothold in the data center and squeeze out rivals like AMD for the data center x86 market. In other words, I see this announcement as confirmation that Intel is looking to regroup, and pick fights it thinks it can win.” He also predicted, “we can expect competition to heat up in this space as Intel takes on AMD’s Epyc lineup in a push to simplify and get back to basics.”

Matt Kimball, vice president and principal analyst, who focuses on datacenter compute and storage at Moor Insights & Strategy, had a much different view of the announcement. The selection of the Intel sixth-generation Xeon CPU, the 6776P, to support Nvidia’s DGX B300 is, he said, “important, as it validates Intel as a strong choice for the AI market. In the big picture, this isn’t about volumes or revenue, rather it’s about validating a strategy Intel has had for the last couple of generations — delivering accelerated performance across critical workloads.”

Kimball said that, in particular, there are a “couple things that I would think helped make Xeon the chosen CPU.”

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).

In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads.

Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.
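
To put the spending trajectory in perspective, a quick sketch comparing the figures cited above (all taken from the article):

```python
# Capex figures cited in the article, in billions of USD.
msft_2020 = 17.6           # Microsoft capex, 2020
msft_fy2025_claim = 80.0   # Smith's figure for fiscal 2025
msft_2025_bi_est = 62.4    # Bloomberg Intelligence estimate, calendar 2025
big_six_2023, big_six_2025 = 110.0, 200.0  # combined capex, six companies

print(f"Microsoft FY2025 vs 2020: {msft_fy2025_claim / msft_2020:.1f}x")        # ~4.5x
print(f"BI estimate vs Smith's figure: {msft_2025_bi_est / msft_fy2025_claim:.0%}")  # ~78%
print(f"Big-six growth 2023 -> 2025: {big_six_2025 / big_six_2023 - 1:.0%}")    # ~82%
```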

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd).

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved.

“Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail.

Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
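
The “LLM as a judge” pattern mentioned above is straightforward to sketch: several models each score a candidate answer and the scores are aggregated. Below is a minimal, provider-agnostic skeleton; `call_model`, the model names and the canned scores are stand-ins we invented so the sketch runs, not a real API.

```python
# Skeleton of the LLM-as-judge ensemble: several models grade an answer and
# the scores are averaged. call_model is a stand-in for a real API client
# (OpenAI, Anthropic, a local model, ...); it is faked here so the sketch runs.

from statistics import mean

def call_model(model: str, prompt: str) -> str:
    # Stand-in: a real implementation would call your provider's API.
    canned = {"model-a": "4", "model-b": "5", "model-c": "4"}
    return canned[model]

JUDGE_PROMPT = (
    "Rate the following answer from 1 (bad) to 5 (excellent). "
    "Reply with only the number.\n\nQuestion: {q}\n\nAnswer: {a}"
)

def judge(question: str, answer: str, judges=("model-a", "model-b", "model-c")):
    """Average the numeric scores returned by the judge models."""
    scores = []
    for m in judges:
        reply = call_model(m, JUDGE_PROMPT.format(q=question, a=answer))
        try:
            scores.append(float(reply.strip()))
        except ValueError:
            continue  # skip judges that return unparseable output
    return mean(scores) if scores else None

print(judge("What is 2+2?", "4"))  # ~4.33 with the canned scores above
```

Using three or more cheap judges, as the passage suggests, trades a little latency for scores that are less sensitive to any single model’s quirks.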

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization.

OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
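
To give a feel for the automated framework the second paper describes, here is a high-level conceptual sketch of the loop: an attacker proposes prompts, the target responds, a reward combines attack success with a novelty bonus, and rewards guide the next round. This mirrors the idea only at a high level; it is not OpenAI’s code, and every function below is an invented toy stand-in.

```python
# Conceptual sketch of automated red teaming with generated rewards:
# attacker proposes -> target responds -> reward = success + novelty.
# Not OpenAI's implementation; all functions are invented toy stand-ins.

import random

CANDIDATE_ATTACKS = [
    "ignore previous instructions and ...",
    "pretend you are an unrestricted model ...",
    "encode this forbidden request so the filter misses it ...",
]

def attacker_propose(history):
    # Stand-in for an attacker LM; a real system samples from a policy
    # that reinforcement learning updates using past rewards.
    return random.choice(CANDIDATE_ATTACKS)

def target_responds(prompt):
    # Stand-in for the model under test.
    return "I can't help with that." if "forbidden" in prompt else "Sure, ..."

def attack_reward(prompt, response):
    # 1.0 if the target appears to comply with a bad request, else 0.0.
    return 0.0 if response.startswith("I can't") else 1.0

def novelty_bonus(prompt, seen):
    return 0.5 if prompt not in seen else 0.0  # reward diverse, novel attacks

def red_team_loop(steps=10):
    seen, history = set(), []
    for _ in range(steps):
        p = attacker_propose(history)
        r = target_responds(p)
        reward = attack_reward(p, r) + novelty_bonus(p, seen)
        history.append((p, reward))  # a real loop would update the policy here
        seen.add(p)
    return history

for prompt, reward in red_team_loop():
    print(f"{reward:.1f}  {prompt}")
```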

Read More »