Stay Ahead, Stay ONMINE

OPEC+ Quota Hikes Yet to Deliver Oil Surge: Morgan Stanley

The OPEC+ alliance may be boosting oil-production quotas at a significant pace in a push to restart idled capacity, but that shift has yet to translate into big gains in actual output, according to Morgan Stanley.

“Notwithstanding the around 1 million-barrel-a-day increase in production quotas between March and June, an actual increase in production is hard to detect,” analysts including Martijn Rats said in a June 9 note. “Notably, it does not appear that production in Saudi Arabia has ramped up significantly.”

The global oil market has been rocked in recent months by the move from eight core OPEC+ nations to relax supply restraints at a faster-than-expected pace, potentially adding supplies just as trade frictions menace demand. The surprise shift has been presented as a bid by the cartel to reclaim market share from rival drillers, as well as punish its own quota cheats.

Morgan Stanley based its conclusions on a slew of data points, including refinery throughput, cargo exports, pipeline flows, and indications of stockpiling, as well as estimates for production from six different providers.

Still, increases may yet be forthcoming. The Wall Street giant said that it still expected supply from the core members to rise by about 420,000 barrels a day between June and September as the cartel continues quota hikes, with about half of the increase coming from Saudi Arabia.

In addition, the bank maintained its outlook for a surplus, as crude supplies from outside the Organization of the Petroleum Exporting Countries and its allies climb by about 1.1 million barrels a day this year, outpacing global demand growth of about 800,000 barrels a day.

“Even without an OPEC production increase, those two assumptions alone already produce a softer outlook for the oil market, especially after the current period of seasonal summer-strength,” the analysts said.

Global benchmark Brent last traded at $66.45 a barrel, down 11 percent in 2025. Morgan Stanley forecasts prices at $57.50 a barrel in the second half.

