China demands ‘security evidence’ from Nvidia over H20 chip backdoor fears

The proposed legislation would require export-controlled advanced chips to be equipped with location verification mechanisms within six months of enactment, and mandate exporters to report to the Bureau of Industry and Security if products are diverted or tampered with.

The CAC statement cited “demands from US lawmakers to add tracking features to advanced chips” and noted that “US artificial intelligence experts have indicated that remote control technologies related to Nvidia’s chips have matured.”

Nvidia denials fall short, China says

The People’s Daily referenced Nvidia’s previous statement that “Cybersecurity is critically important to us. Nvidia does not have ‘backdoors’ in our chips that would give anyone a remote way to access or control them.” However, the state media outlet dismissed this response as insufficient, emphasizing that only “convincing security evidence” would restore trust.

The chipmaker faces pressure balancing US security requirements with Chinese market demands. US Commerce Secretary Howard Lutnick described the H20 as Nvidia’s “fourth best” processor when announcing export resumption: “We don’t sell them our best stuff, not our second best stuff, not even our third best.”

The H20 chip is part of Nvidia’s China-specific product line, engineered to meet US trade restrictions by reducing performance while maintaining sufficient processing power for Chinese customers. It’s based on Nvidia’s Hopper architecture but with trimmed specifications.

Enterprise IT faces chip procurement challenges

The confrontation highlights tensions over semiconductor supply chains critical to enterprise AI deployments. The People’s Daily noted that “cybersecurity not only impacts our daily lives but also acts as the lifeblood of businesses, and is directly linked to national security.”

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

The US will install these country-specific tariffs Aug. 7

The U.S. plans to lift its pause on country-specific tariffs while implementing a range of new rates for specific trading partners on Aug. 7, per an executive order President Donald Trump signed Thursday.  The order lists rates for over 60 trading partners, ranging from 10% to 41%. The list includes

Read More »

FERC Staff Issues Final SEIS for Rio Grande LNG

Staff at the Federal Energy Regulatory Commission (FERC) has issued a final supplemental environmental impact statement (SEIS) for NextDecade Corp.’s Rio Grande LNG and the associated pipeline project owned by Enbridge Inc. FERC’s ongoing review covers the first five of eight liquefaction trains planned for the project. Trains I to V each have a designed capacity of 5.4 million metric tons per annum (MMtpa). Meanwhile, the Rio Bravo Pipeline is designed to carry up to 4.5 billion cubic feet a day of natural gas from the Agua Dulce supply area to the liquefaction facility in Brownsville, Texas. The new SEIS is in response to last year’s remand of FERC’s reauthorization of the projects by the Court of Appeals for the District of Columbia Circuit, according to a FERC statement online. In August 2024, the court vacated FERC’s order issued in April 2023 because the commission had not issued a SEIS in reauthorizing the project. Last year’s court ruling was the second remand for the project. On March 18, 2025, NextDecade said the court revised its August 2024 ruling and remanded FERC’s order without vacatur. In the new SEIS, “FERC staff conclude that… communities in the areas near the Rio Grande LNG Terminal may experience significant cumulative visual impacts”, FERC said last week. “Specific to air quality impacts, we clarify that the project’s air quality impacts on communities with environmental justice concerns would also be disproportionate and adverse; however, RG LNG’s air quality analysis demonstrates that air quality impacts near the Rio Grande LNG Terminal would not be significant, with the exception of two discrete areas just north of the LNG terminal where the cumulative model shows an exceedance of the annual PM2.5 SIL, and thus we conclude air quality impacts in those areas would be significant. “Specific to the RB Pipeline, the revised air quality dispersion modeling that shows

Read More »

B.C. Government Provides Cedar LNG with Additional Funds

The government of British Columbia has signed a new $145 million (CAD 200 million) contribution agreement with the Haisla Nation to support the development of the Cedar LNG facility with renewable electricity. The Cedar LNG project is a floating liquefied natural gas (LNG) terminal that will be located near Kitimat within the territory of Haisla Nation, to be constructed in partnership with Pembina Pipeline Corporation. Cedar LNG is scheduled to be operational in late 2028. The agreement aims to support the Haisla Nation in building a 287-kilovolt transmission line, a new substation, new distribution lines and nearshore electrification, all aimed at providing the infrastructure to enable the project to run on renewable electricity, according to a statement from the provincial government. The $200 million provided will add to the $200 million in federal support for the facility announced earlier in the year, the statement said. The facility, which will be built on tribal waters on Canada’s West Coast, will be powered by renewable electricity from utility BC Hydro. It is planned to have a capacity of 3.3 million metric tons per annum. Coastal GasLink Pipeline LP has agreed to supply 400 million cubic feet per day of natural gas via its pipeline, with feed gas coming from the Western Canadian Sedimentary Basin, according to an earlier statement. “By supporting Haisla Nation to power Cedar LNG with clean B.C. electricity, we’re taking another step in building a stronger economy that’s less exposed to reckless decisions made in the White House,” British Columbia Premier David Eby said. “As the world’s first Indigenous majority-owned LNG facility, Cedar LNG will create more good jobs that support families and give young people a future in local communities and throughout the North, all while generating revenue for the things we all count on, like better health

Read More »

Weatherford Awarded Drilling Contract for Mexico Deepwater Oil Project

Weatherford International plc said it was awarded a “significant” contract to deliver managed pressure drilling (MPD) services for the Trion project, a deepwater oil production project in Mexico operated by Woodside Petróleo Operaciones de México. The multi-year contract includes MPD services for an initial eight wells with the potential to expand to 24 wells, the company said in a news release. Financial terms of the contract were not disclosed. As part of the project, Weatherford said it plans to deploy its Victus intelligent MPD system, which it describes as a solution “designed to enhance drilling safety, efficiency, and performance”. The solution features algorithm-driven pressure control, real-time downhole data for automated responses, and the industry’s first field-proven deepwater riser system for floating rigs, according to the release. The Trion project is located in about 8,200 feet (2,500 meters) of water in the Gulf of America, approximately 112 miles (180 kilometers) east of the coast of Tamaulipas and 18.6 miles (30 kilometers) south of the US-Mexico maritime border, the release said. Trion is a joint venture between Woodside Petróleo Operaciones de México, S. de R.L. de C.V., which serves as the operator with a 60 percent interest, and Petróleos Mexicanos (PEMEX) with 40 percent. The contract award “reinforces Weatherford’s market leadership in high-performance MPD and expands its presence in Mexico’s offshore energy sector,” the company said. Weatherford President and CEO Girish Saligram said, “We are proud to support Woodside Energy on this historic project. The Trion development represents a defining moment for Mexico’s energy sector, and Weatherford is honored to contribute with trusted MPD technologies that improve safety, efficiency, and well delivery. This award further strengthens our position as a trusted partner for complex offshore operations”.

Second-Quarter Results

In the second quarter, Weatherford reported revenue of $1.2 billion, an increase of 1 percent

Read More »

EU Pledge of up to $750B for USA Energy ‘Viewed as Unrealistic’

The European Union’s pledge of up to $750 billion in purchases of U.S. energy products over three years is “a figure widely viewed as unrealistic”, a gas and LNG market update from Masanori Odaka, Rystad Energy Vice President, Gas & LNG Research, outlined. Rystad’s market update highlighted that European gas prices opened higher last week following the trade deal between the U.S. and the European Union, “which established a 15 percent tariff on most EU imports – excluding the existing 50 percent tariffs on steel and aluminum, as well as several other sectors”. “As part of the agreement, the EU pledged up to $750 billion in purchases of U.S. energy products over three years – mainly LNG, oil and nuclear technology,” the update, which was sent to Rigzone by the Rystad team recently, pointed out. The update noted that the $750 billion target has drawn widespread skepticism. “In 2024, the EU imported 37.82 million tons of U.S. LNG, equivalent to a 43 percent share in the EU’s LNG imports,” Rystad’s update said. “So far in 2025, the U.S. share of EU LNG imports has risen to 55.5 percent, equivalent to 35.6 million tons,” it added. “If this exceptionally high share is sustained and the EU imports a total ranging from 100 million tons to 120 million tons this year, it would translate to approximately $28 to $34 billion in U.S. LNG sales, assuming an average price of $11 per million British thermal units (MMBtu),” it continued. Rystad’s update went on to state that, with global LNG supply expected to grow, prices are widely forecast to decline. “This means the EU would need to import significantly higher volumes just to maintain current spending levels – let alone hit $250 billion annually,” the update noted. Rystad’s update stated that other American energy commodities such
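As a rough sanity check on that estimate, the arithmetic can be reproduced from the figures quoted above. This is a minimal sketch: the energy-content conversion of roughly 46 MMBtu per metric ton of LNG is an assumption on our part, not a number from the Rystad update.

```python
# Back-of-envelope check of Rystad's "$28 to $34 billion" figure quoted above.
# The ~46 MMBtu-per-ton energy content is our assumption, not Rystad's number.
MMBTU_PER_TON = 46          # assumed LNG energy content; ~45-52 MMBtu/t in practice
PRICE_PER_MMBTU = 11.0      # USD per MMBtu, as stated in the update
US_SHARE = 0.555            # US share of EU LNG imports so far in 2025

for eu_total_mt in (100, 120):                 # EU import range, million tons
    us_volume_mt = eu_total_mt * US_SHARE      # implied US-sourced volume
    value_bn = us_volume_mt * 1e6 * MMBTU_PER_TON * PRICE_PER_MMBTU / 1e9
    print(f"EU imports {eu_total_mt} Mt -> US LNG sales ~${value_bn:.0f}B")
# Prints roughly $28B and $34B, matching the range in the update.
```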

Read More »

Philippines On Track for Annual Coal Decline Thanks to RE: IEEFA

A 5.2 percent decline in coal-fired power generation in the Philippines in the first half of 2025 has put the Southeast Asian country on course for its first annual decline in coal power production in decades. This would be due to the growth of renewable energy rather than an increase in liquefied natural gas (LNG) imports, the Institute for Energy Economics and Financial Analysis (IEEFA) said. “Recent media coverage has asserted that growing LNG imports are responsible for coal’s decline, reciting oil and gas industry logic that Asia’s energy transition hinges on replacing one fossil fuel (coal) with another (LNG)”, the United States-based IEEFA said. “However, these conclusions overlook basic trends in the Philippines’ energy market. Renewables have rapidly outpaced the growth of LNG, while gas-fired power generation remains below historical levels. “Outages at existing coal facilities provide a better explanation for declining coal generation than the growth of LNG, which is significantly more expensive than renewables and other energy resources”. Based on an analysis of government data, the IEEFA said the country did not add any new greenfield gas or LNG-fired generation capacity between 2017 and 2024. “The most recent increase in the country’s gas capacity was in 2022, when several existing facilities were uprated”, the IEEFA said, citing data from the nation’s Department of Energy (DOE). On the other hand the Philippines installed over one gigawatt (GW) of solar capacity in 2024 alone. “This growth outpaced all other asset classes last year and previous projections for solar deployment. Centralized government auctions, among other government policies, are driving project developments”, the IEEFA said. Last month the DOE said it had awarded geothermal, hydropower and pumped storage capacities totaling about 6.68 GW. It also launched an auction offering over 10 GW of solar and wind, targeted for commercial operations 2026-29, and a

Read More »

Designing transmission lines with construction safety in mind

When a design team hands off a drawing package for construction, the work isn’t measured only in megawatts delivered or miles of line energized. The stakes are human. Personal experience working in the field on a recent transmission line project supports this assertion. Collaboration with craft labor and general foremen during a multiyear engineer-procure-construct (EPC) project reinforced the value of understanding what is necessary for successful project delivery. Successful construction means more than creating a flow of electricity for consumers; it’s also a matter of seeing that everyone involved in the project goes home safe to their families each day. Designing from this perspective requires engineers to frame their decisions around a central question: “How will this get built?” This approach means moving beyond just designing for code compliance and final configurations. It demands attention to the nuances of outage planning and phased construction, including but not limited to access road limitations, construction package clarity and the real-time dynamics of jobsite decision-making.

Engineering for every phase, not just the end

A 55-mile project rebuilding existing double-circuit 230-kV lines into new tubular steel 345-kV lines involved seven major construction segments spread across three years. Within tight nonreturnable outage windows, temporary configurations were just as critical as the permanent installation. These temporary configurations were carefully planned; however, it took the knowledge of a foreman to recognize that changes would need to be made to one of these temporary configurations for safer wire pulling. The foreman raised concerns about pulling new conductor through a deadend structure that supported an energized temporary tie. Although required clearance would be met once installed, the setup introduced high safety risk from induced voltage for work crews. Because a design engineer was on-site, the team was able to develop a new shoo-fly configuration well ahead of wire-pulling efforts,

Read More »

Colo space crunch could cripple IT expansion projects

Batson paints a pretty dire picture. “Vacant and immediately available data center space is incredibly limited. Across North America there are very few blocks available larger than 5 MW. Any second-generation space that becomes available is re-leased within weeks. Nearly 6.5 GW is under construction, of which 72% is preleased. Tenants looking to lease any sizable amount of data center capacity must wait 24 months on average.” According to commercial real estate services firm CBRE, “Demand continues to outpace new supply across both core and emerging hubs.” Inventory across the four largest U.S. data center markets—Northern Virginia, Chicago, Atlanta and Phoenix—increased 43% year-over-year in Q1 2025. But that increase in inventory was overwhelmed by skyrocketing demand. Northern Virginia remained the tightest market, with a vacancy rate of 0.76%. Phoenix was at 1.7%, Chicago at 3.1% and Atlanta’s vacancy rate was 3.6%.

What’s driving the colo crunch?

Demand has outstripped supply due to multiple factors, according to Pat Lynch, executive managing director at CBRE Data Center Solutions. “AI is definitely part of the demand scenario that we see in the market, but we also see growing demand from enterprise clients for raw compute power that companies are using in all aspects of their business.” Batson agrees. “AI is driving demand, but it’s not the sole driver. We estimate AI workloads are about 20% of all data center workloads.” The big wild card contributing to the colo space shortage is that the hyperscalers are snapping up colo space as fast as it comes on the market, as they try to stay ahead of the surge in demand for AI processing from their big customers.

Read More »

DOE announces site selection for AI data centers

“The DOE is positioned to lead on advanced AI infrastructure due to its historical mandate and decades of expertise in extreme-scale computing for mission-critical science and national security challenges,” he said. “National labs are central hubs for advancing AI by providing researchers with unparalleled access to exascale supercomputers and a vast, interdisciplinary technical workforce.” “The Department of Energy is actually a very logical choice to lead on advanced AI data centers in my opinion,” said Wyatt Mayham, lead consultant at Northwest AI, which specializes in enterprise AI integration. “They already operate the country’s most powerful supercomputers. Frontier at Oak Ridge and Sierra at Lawrence Livermore are not experimental machines, they are active systems that the DOE built and continues to manage.” These labs have the physical and technical capacity to handle the demands of modern AI. Running large AI data centers takes enormous electrical capacity, sophisticated cooling systems, and the ability to manage high and variable power loads. DOE labs have been handling that kind of infrastructure for decades, says Mayham. “DOE has already built much of the surrounding ecosystem,” he says. “These national labs don’t just house big machines. They also maintain the software, data pipelines, and research partnerships that keep those machines useful. NSF and Commerce play important roles in the innovation system, but they don’t have the hands-on operational footprint the DOE has.” And Tanmay Patange, founder of AI R&D firm Fourslash, says the DOE’s longstanding expertise in high-performance computing and energy infrastructure directly overlap with the demands we have seen from AI development in places. “And the foundation the DOE has built is essentially the precursor to modern AI workloads that often require gigawatts of reliable energy,” he said. “I think it’s a strategic play, and I won’t be surprised to see the DOE pair their

Read More »

Data center survey: AI gains ground but trust concerns persist

Cost issues: 76%
Forecasting future data center capacity requirements: 71%
Improving energy performance for facilities equipment: 67%
Power availability: 63%
Supply chain disruptions: 65%
A lack of qualified staff: 67%

With respect to capacity planning, there’s been a notable increase in the number of operators who describe themselves as “very concerned” about forecasting future data center capacity requirements. Andy Lawrence, Uptime’s executive director of research, said two factors are contributing to this concern: ongoing strong growth for IT demand, and the often-unpredictable demand that AI workloads are creating. “There’s great uncertainty about … what the impact of AI is going to be, where it’s going to be located, how much of the power is going to be required, and even for things like space and cooling, how much of the infrastructure is going to be sucked up to support AI, whether it’s in a colocation, whether it’s in an enterprise or even in a hyperscale facility,” Lawrence said during a webinar sharing the survey results. The survey found that roughly one-third of data center owners and operators currently perform some AI training or inference, with significantly more planning to do so in the future. As the number of AI-based software deployments increases, information about the capabilities and limitations of AI in the workplace is becoming available. The awareness is also revealing AI’s suitability for certain tasks. According to the report, “the data center industry is entering a period of careful adoption, testing, and validation. Data centers are slow and careful in adopting new technologies, and AI will not be an exception.”

Read More »

Micron unveils PCIe Gen6 SSD to power AI data center workloads

Competitive positioning

With the launch of the PCIe Gen6 9650 SSD, Micron competes with enterprise SSD offerings from Samsung and SK Hynix, the dominant players in the SSD market. In December last year, SK Hynix announced the development of the PS1012 U.2 Gen5 PCIe SSD for massive high-capacity storage for AI data centers. The PM1743 is Samsung’s PCIe Gen5 offering in the market, with 14,000 MBps sequential read, designed for high-performance enterprise workloads. According to Faruqui, PCIe Gen6 data center SSDs are best suited for AI inference performance enhancement. However, we’re still months away from large-scale adoption as no current CPU platforms are available with PCIe 6.0 support. Only Nvidia’s Blackwell-based GPUs have native PCIe 6.0 x16 support, with interoperability tests in progress. He added that PCIe Gen6 SSDs will see very delayed adoption in the PC segment and imminent adoption in the second half of 2025 in AI, data centers, high-performance computing (HPC), and enterprise storage solutions. Micron has also introduced two additional SSDs alongside the 9650. The 6600 ION SSD delivers 122TB in an E3.S form factor and is targeted at hyperscale and enterprise data centers looking to consolidate server infrastructure and build large AI data lakes. A 245TB variant is on the roadmap. The 7600 PCIe Gen5 SSD, meanwhile, is aimed at mixed workloads that require lower latency.
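For context on why the jump from Gen5 to Gen6 matters, here is a back-of-envelope sketch of theoretical per-direction bandwidth for an x4 drive link. The figures are generic PCIe line rates, not numbers from Micron’s announcement, and shipping drives deliver somewhat less than the theoretical maximum.

```python
# Theoretical per-direction bandwidth of an x4 SSD link by PCIe generation.
# Roughly 1 GB/s of payload per 8 GT/s per lane; encoding overhead ignored.
GT_PER_LANE = {3: 8, 4: 16, 5: 32, 6: 64}   # giga-transfers per second per lane

for gen, gt in GT_PER_LANE.items():
    gbytes_x4 = gt / 8 * 4                  # four lanes, ~1 GB/s per 8 GT/s
    print(f"PCIe Gen{gen} x4: ~{gbytes_x4:.0f} GB/s per direction")
# Gen5 x4 tops out near 16 GB/s (the PM1743's 14,000 MBps sits just under it),
# while Gen6 x4 roughly doubles that to ~32 GB/s.
```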

Read More »

AI Deployments are Reshaping Intra-Data Center Fiber and Communications

Artificial Intelligence is fundamentally changing the way data centers are architected, with a particular focus on the demands placed on internal fiber and communications infrastructure. While much attention is paid to the fiber connections between data centers or to end-users, the real transformation is happening inside the data center itself, where AI workloads are driving unprecedented requirements for bandwidth, low latency, and scalable networking.

Network Segmentation and Specialization

Inside the modern AI data center, the once-uniform network is giving way to a carefully divided architecture that reflects the growing divergence between conventional cloud services and the voracious needs of AI. Where a single, all-purpose network once sufficed, operators now deploy two distinct fabrics, each engineered for its own unique mission. The front-end network remains the familiar backbone for external user interactions and traditional cloud applications. Here, Ethernet still reigns, with server-to-leaf links running at 25 to 50 gigabits per second and spine connections scaling to 100 Gbps. Traffic is primarily north-south, moving data between users and the servers that power web services, storage, and enterprise applications. This is the network most people still imagine when they think of a data center: robust, versatile, and built for the demands of the internet age. But behind this familiar façade, a new, far more specialized network has emerged, dedicated entirely to the demands of GPU-driven AI workloads. In this backend, the rules are rewritten. Port speeds soar to 400 or even 800 gigabits per second per GPU, and latency is measured in sub-microseconds. The traffic pattern shifts decisively east-west, as servers and GPUs communicate in parallel, exchanging vast datasets at blistering speeds to train and run sophisticated AI models. The design of this network is anything but conventional: fat-tree or hypercube topologies ensure that no single link becomes a bottleneck, allowing thousands of
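To make the scale of that divergence concrete, here is a minimal sketch comparing aggregate per-server bandwidth on the two fabrics described above. The eight-GPU server is an assumption for illustration, and the link speeds are simply the upper ends of the ranges quoted in the article.

```python
# Illustrative comparison of the two fabrics described above. The 8-GPU server
# is an assumption; the link speeds are the upper ends of the quoted ranges.
FRONTEND_GBPS_PER_SERVER = 50     # front-end server-to-leaf link, 25-50 Gbps
BACKEND_GBPS_PER_GPU = 800        # back-end port speed per GPU, 400-800 Gbps
GPUS_PER_SERVER = 8               # assumed GPU count per AI server

backend_gbps = BACKEND_GBPS_PER_GPU * GPUS_PER_SERVER
ratio = backend_gbps / FRONTEND_GBPS_PER_SERVER
print(f"Back-end fabric: {backend_gbps} Gbps per server, "
      f"about {ratio:.0f}x a front-end server link")   # ~128x
```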

Read More »

ABB and Applied Digital Build a Template for AI-Ready Data Centers

Toward the Future of AI Factories

The ABB–Applied Digital partnership signals a shift in the fundamentals of data center development, where electrification strategy, hyperscale design and readiness, and long-term financial structuring are no longer separate tracks but part of a unified build philosophy. As Applied Digital pushes toward REIT status, the Ellendale campus becomes not just a development milestone but a cornerstone asset: a long-term, revenue-generating, AI-optimized property underpinned by industrial-grade power architecture. The 250 MW CoreWeave lease, with the option to expand to 400 MW, establishes a robust revenue base and validates the site’s design as AI-first, not cloud-retrofitted. At the same time, ABB is positioning itself as a leader in AI data center power architecture, setting a new benchmark for scalable, high-density infrastructure. Its HiPerGuard Medium Voltage UPS, backed by deep global manufacturing and engineering capabilities, reimagines power delivery for the AI era, bypassing the limitations of legacy low-voltage systems. More than a component provider, ABB is now architecting full-stack electrification strategies at the campus level, aiming to make this medium-voltage model the global standard for AI factories. What’s unfolding in North Dakota is a preview of what’s coming elsewhere: AI-ready campuses that marry investment-grade real estate with next-generation power infrastructure, built for a future measured in megawatts per rack, not just racks per row. As AI continues to reshape what data centers are and how they’re built, Ellendale may prove to be one of the key locations where the new standard was set.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
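The LLM-as-judge idea mentioned above is straightforward to sketch. The snippet below is illustrative only: call_llm is a hypothetical stand-in for whatever model API an enterprise actually uses, and the prompt wording and 1-5 scale are our own choices rather than anything prescribed in the article.

```python
# Minimal sketch of the "LLM as a judge" pattern. `call_llm` is a hypothetical
# placeholder for a real model API; prompt wording and the 1-5 scale are ours.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your model provider's API")

def judge_response(task: str, candidate: str) -> int:
    """Ask a (typically cheaper) model to grade another model's answer."""
    prompt = (
        "You are grading an AI assistant's answer.\n"
        f"Task: {task}\n"
        f"Answer: {candidate}\n"
        "Reply with a single integer from 1 (poor) to 5 (excellent)."
    )
    return int(call_llm(prompt).strip())
```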

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »