Amid rising energy prices, senators call for EPA to maintain Energy Star

Dive Brief:

  • The Environmental Protection Agency’s plan to eliminate the Energy Star program is a “misguided decision [that] would be counterproductive to our national housing, economic, and electricity goals,” Sen. Ruben Gallego, D-Ariz., wrote today in a letter to EPA Administrator Lee Zeldin that was provided in advance to Utility Dive.
  • EPA announced “organizational improvements” in May that would phase out the popular program, which has helped consumers identify energy-efficient home appliances for more than 30 years.
  • While Gallego’s letter homed in on Energy Star’s potential impacts on energy prices, other lawmakers have focused on the program’s legal standing. Energy Star is “protected under federal statute and thus illegal for the Administration to terminate unilaterally,” 22 senators wrote in a May 20 letter to Zeldin and Secretary of Energy Chris Wright.

Dive Insight:

Efficiency advocates argue that Energy Star is a popular, cheap and effective program — and also protected by law.

“At a time when American families are grappling with rising energy and housing costs and our nation faces mounting energy and climate challenges, eliminating a highly successful program that lowers utility bills and reduces emissions is indefensible,” Gallego wrote.

“EPA will review the letter and will respond through appropriate channels,” an agency spokesperson said in an email.

More than 2,500 builders, developers and manufactured housing plants are active in the Energy Star program, Gallego noted. In 2023, about 12% of all new U.S. homes were Energy Star compliant.

The program was created in 1992 under President George H.W. Bush and has helped consumers save more than $500 billion in energy costs and 5 trillion kWh since its inception, according to the Energy Star website.

“This program saves families and businesses more than $40 billion every year with a budget of less than $40 million. It’s an astonishingly good deal,” Steven Nadel, executive director of the American Council for an Energy-Efficient Economy, said in a statement.
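
Those figures are easy to sanity-check. Here is a minimal back-of-the-envelope sketch using only the numbers reported above:

```python
# Back-of-the-envelope checks on the Energy Star figures cited above.
annual_savings = 40e9       # "more than $40 billion every year" (ACEEE)
annual_budget = 40e6        # "a budget of less than $40 million"
print(f"Benefit-cost ratio: roughly {annual_savings / annual_budget:,.0f}:1")  # ~1,000:1

cumulative_savings = 500e9  # "more than $500 billion" saved since 1992
cumulative_kwh = 5e12       # "5 trillion kWh" saved since 1992
print(f"Implied value per kWh saved: ~${cumulative_savings / cumulative_kwh:.2f}")  # ~$0.10
```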

Congress directed the government to run the Energy Star program, Nadel also noted. “Until now, the EPA has implemented that statutory obligation.”

Lawmakers signing on to the May 20 letter defending Energy Star also noted that the program could not be terminated without Congress.

“The program [is] protected under federal statute and thus illegal for the Administration to terminate unilaterally,” the senators wrote. The group includes Sen. Peter Welch, D-Vt.; Sen. Bernie Sanders, I-Vt.; Sen. Amy Klobuchar, D-Minn.; and Sen. John Fetterman, D-Pa.

The Energy Policy Act of 2005 says that responsibilities for the Energy Star program “shall be divided between the Department of Energy and the Environmental Protection Agency in accordance with the terms of applicable agreements between those agencies.”

In April, more than 1,000 companies and organizations lobbied EPA to maintain the program, noting that “a typical household can save about $450 on energy costs each year” by choosing Energy Star products.

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

IBM’s cloud crisis deepens: 54 services disrupted in latest outage

Rawat said IBM’s incident response appears slow and ineffective, hinting at procedural or resource limitations. The situation also raises concerns about IBM Cloud’s adherence to zero trust principles, its automation in threat response, and the overall enforcement of security controls. “The recent IBM Cloud outages are part of a broader

Read More »

AMD acquires Brium to loosen Nvidia’s grip on AI software

According to Greyhound Research, nearly 67 percent of global CIOs identify software maturity, particularly in middleware and runtime optimization, as the primary barrier to adopting alternatives to Nvidia. Brium’s compiler-based approach to AI inference could ease this dependency. While Nvidia still leads among developers, AMD’s expanding open-source stack, now backed

Read More »

3 ways to streamline cloud adoption and cloud security

In today’s cloud-first world, speed and agility are the currency of innovation. Organizations are under pressure to deliver applications faster, more securely, and across increasingly distributed cloud environments. As a result, your developers are being asked to ship code at a rapid pace, often on a weekly or even daily

Read More »

GenAI controls and ZTNA architecture set SSE vendors apart

Critical SSE capabilities: Gartner defines SSE as “an offering that secures access to the web, cloud services, and private applications regardless of the location of the user, the device they are using, or where that application is hosted.” “[SSE] provides a range of security capabilities, including adaptive access based on

Read More »

Large load tariffs have a problem. Clean transition tariffs are the solution.

An opinion piece by Ben Hertz-Shargel, global head of grid edge at Wood Mackenzie, published June 5, 2025: These tariffs were designed to offer large loads access to renewable energy, but they could be expanded to baseload generation to remove at-risk generation from the utility’s books. The obligation to satisfy an unprecedented number of large load requests, while at the same time supporting corporate and state clean energy goals, has presented an enormous ratemaking challenge to utilities. They have responded by developing two types of tariffs to deal with large loads: clean transition tariffs, which allow large load customers to contract directly with renewable developers, and large load tariffs, designed to protect utility shareholders and other customers from the cost and stranded asset risk of infrastructure built for large loads. We recently completed an analysis of 20 large load tariffs at varying stages of maturity — from recent proposals to updates of longstanding large commercial and industrial tariffs — and have concluded that these tariffs cannot protect both shareholders and other customers. One of the primary mechanisms that large load tariffs use to ensure utility cost recovery is to set long minimum contract terms for customers, with penalties for early exit. But the minimum terms are nearly always 12 years or less, with customers able to exit as soon as five years in nearly all cases. These periods are far shorter than the more than 20-year recovery time for a new power plant. In the vast majority of cases, exit penalties amount to 3 to 5 years of minimum monthly

Read More »

Venezuela Partners With Smaller Oil Firms as Chevron Scales Back

Venezuela’s state-run oil company has signed at least nine new deals with foreign service providers, including two Chinese firms, in an effort to keep dollars flowing into the economy after US sanctions forced Chevron Corp. to end production, according to people familiar with the agreements. The contracts call for the companies to operate wells that already have been drilled and grant the exclusive right to sell the output, a departure from long-standing practice in the country where state-controlled Petroleos de Venezuela SA has always maintained exclusive trading rights, the people said, asking not to be identified discussing private contracts. At least one of the companies has decided not to go forward because it couldn’t get a US license to operate there, according to one of the people. The accords illuminate President Nicolas Maduro’s strategy for shoring up the economy and filling the void left by Chevron and other Western majors after President Donald Trump’s administration refused to extend licenses that allowed them to operate in the country despite sanctions. Chevron accounted for almost a quarter of Venezuela’s oil production, the country’s most important industry and biggest source of foreign currency.  Chevron’s license allowing it to produce and export crude to the US ended in early April and the company was allowed until May 27 to wrap up work. Permits for US service providers Halliburton Co., Schlumberger NV, Baker Hughes Co. and Weatherford International Plc expired in early May. “PDVSA has a plan to keep producing oil despite the US’s unilateral coercive measures,” Vice President and Oil Minister Delcy Rodriguez said May 29. PDVSA and Venezuela’s oil ministry didn’t reply to a request for comment. The new agreements call for each of the foreign companies to get control over at least one block of land in either Zulia state or the Orinoco Belt area, the two richest oil producing regions,

Read More »

USA Crude Oil Inventories Drop 4.3 Million Barrels Week on Week

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 4.3 million barrels from the week ending May 23 to the week ending May 30, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report. That report was released on June 4 and included data for the week ending May 30. It showed that crude oil stocks, not including the SPR, stood at 436.1 million barrels on May 30, 440.4 million barrels on May 23, and 455.9 million barrels on May 31, 2024. Crude oil in the SPR stood at 401.8 million barrels on May 30, 401.3 million barrels on May 23, and 370.2 million barrels on May 31, 2024, the report revealed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.637 billion barrels on May 30, the report highlighted. Total petroleum stocks were up 13.4 million barrels week on week and down 9.7 million barrels year on year, the report showed. “At 436.1 million barrels, U.S. crude oil inventories are about seven percent below the five year average for this time of year,” the EIA noted in its latest weekly petroleum status report. “Total motor gasoline inventories increased by 5.2 million barrels from last week and are about one percent below the five year average for this time of year. Both finished gasoline inventories and blending components inventories increased last week,” it added. “Distillate fuel inventories increased by 4.2 million barrels last week and are about 16 percent below the five year average for this time of year. Propane/propylene inventories increased by 6.8 million barrels from last week and are two percent above the five year average for this
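
The week-on-week and year-on-year changes are straightforward to reproduce from the stock levels the report lists. A minimal sketch using only the figures quoted above:

```python
# Reproduce the week-on-week and year-on-year changes from the EIA's
# reported commercial crude stock levels (millions of barrels, excl. SPR).
stocks = {
    "2024-05-31": 455.9,
    "2025-05-23": 440.4,
    "2025-05-30": 436.1,
}

wow = stocks["2025-05-30"] - stocks["2025-05-23"]
yoy = stocks["2025-05-30"] - stocks["2024-05-31"]
print(f"Week on week: {wow:+.1f} million barrels")  # -4.3, matching the report
print(f"Year on year: {yoy:+.1f} million barrels")  # -19.8
```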

Read More »

Diverse market regions highlight resource adequacy challenge at FERC conference

Grid operators around the country face rapidly changing and highly particular resource adequacy challenges that mean a one-size-fits-all approach to reliability is not feasible, they told the Federal Energy Regulatory Commission on Wednesday. They were speaking at a two-day, commissioner-led conference to discuss issues related to resource adequacy constructs. The first panel featured the heads of regional transmission organizations and independent system operators, as well as the North American Electric Reliability Corp., which oversees them. NERC has been conducting seasonal and longer-term resource adequacy assessments for decades, but for most of the reliability watchdog’s history “these were some of the dullest reports we ever created,” President and CEO Jim Robb told regulators. “They weren’t particularly interesting because everything looked pretty good.” But then in 2018, NERC’s long-term resource adequacy assessment showed a material expectation of unserved energy, Robb said. And in August of 2020, California experienced a significant load shed event. “Since then, our analyses have shown a growing risk of unserved energy across the continent,” Robb said. “There are a number of interrelated factors that account for that degradation and risk.” “Disorderly generator retirements” have led NERC’s list of resource adequacy concerns for several years, Robb said. And while several ISO and RTO areas appear to be resource adequate “through the narrow lens of capacity,” he noted that future energy shortfalls “are looming because the resource mix is not supported with the right levels of dispatchable generation with secure fuel to balance supply and demand fluctuations.” Since 2020, the California ISO has added about 25 GW of new generation capacity, including approximately 7 GW last year. The new capacity includes over 12 GW of battery storage, CAISO President and CEO Elliot Mainzer said. “We now have a diverse portfolio of solar, wind, natural gas, hydroelectric, geothermal, nuclear and energy storage resources,

Read More »

Does PJM have a data center problem?

The Federal Energy Regulatory Commission on Wednesday held the first day of a two-day technical conference on resource adequacy issues facing grid operators, with most of the discussion focused on the PJM Interconnection’s capacity market. The meeting was in part sparked by the fallout from PJM’s last capacity auction, which was held in July. Total capacity costs in the auction jumped to $14.7 billion from $2.2 billion in the previous auction. The next auction is set to start on July 9. Forecast data center load growth, which contributed to tight supply/demand conditions, resulted in a $9.4 billion increase in capacity market revenue in the auction, PJM’s market monitor said in a report released Tuesday. Monitoring Analytics, the market monitor, expects forecast data center demand growth will continue to have a “very significant” effect on capacity market conditions and prices in upcoming auctions. “Data centers could overwhelm the grids if they chose to,” Joseph Bowring, Monitoring Analytics president, said during the FERC conference. The solution is to require data center owners to procure new generation for their projects, according to Bowring. Bowring’s proposal “would put less pressure on the capacity market,” FERC Commissioner Lindsay See said as part of a question to a panel of state utility regulators on whether PJM’s resource adequacy issues are “a data center-driven problem that needs data center-focused solutions.” Largely driven by data center growth, PJM expects its summer and winter peak load will grow by 3.1% and 3.8% a year on average through 2035, up from 1.6% and 1.9% growth, respectively, in last year’s forecast. However, it is unclear exactly how much data center load will emerge in PJM, according to panelists. “There’s a great deal of uncertainty with this load,” said Emile Thompson, chairman of the Public Service Commission of the District of Columbia
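
The auction figures above imply how much of the jump the market monitor pins on data centers. A quick sketch of the arithmetic, using only the numbers reported here:

```python
# Arithmetic behind the PJM capacity auction figures reported above
# (all values in billions of dollars).
previous_auction = 2.2     # total capacity costs, prior auction
latest_auction = 14.7      # total capacity costs, July auction
data_center_effect = 9.4   # increase attributed to forecast data center load

increase = latest_auction - previous_auction
print(f"Total jump: ${increase:.1f}B")                            # $12.5B
print(f"Data center share: {data_center_effect / increase:.0%}")  # ~75%
```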

Read More »

Two US DOE-Funded Lithium Projects Set for Streamlined Permitting

The Kings Mountain and Liberty Owl lithium projects have made it to a list of critical mineral production projects added to the United States Federal Permitting Dashboard, giving them a streamlined process, authorities said. “Once completed, these [lithium] projects will help to develop more secure domestic supply chains, strengthening our national security and our economic security”, the Department of Energy (DOE) said in an online statement, noting China controls 70 percent of the market for a key component in energy storage projects and defense applications. TerraVolta’s Liberty Owl seeks to build a commercial-scale extraction and refining plant to produce battery-grade lithium from brine in the Texarkana region. The DOE said it is supporting the project with $225 million. Albemarle Corp.’s Kings Mountain will build a commercial-scale processing facility that can produce up to 350,000 metric tons a year of lithium oxide concentrate. It has been earmarked $150 million by the DOE. “These additions to the Federal Permitting Dashboard reflect the Trump Administration’s commitment to strengthen domestic supply chains for critical minerals and materials, reduce dependence on foreign sources and advance President Trump’s bold agenda for American energy dominance through a more secure, affordable and reliable U.S. energy system”, the agency said. “The Department looks forward to working with federal partners, project sponsors, and developers to ensure these projects move forward with increased transparency, clear project timelines, expedited reviews, and the support needed to strengthen domestic supply chains, drive economic growth and deliver on President Trump’s commitment to unleashing American energy and economic security”. The Federal Permitting Improvement Steering Council has added 25 projects to its Dashboard in response to President Donald Trump’s executive order on March 20 directing regulators to identify critical mineral projects that may receive immediate approvals. Inclusion on the Dashboard gives the projects the benefits of transparency

Read More »

LiquidStack launches cooling system for high density, high-powered data centers

The CDU is serviceable from the front of the unit, with no rear or end access required, allowing the system to be placed against the wall. The skid-mounted system can come with rail and overhead piping pre-installed or shipped as separate cabinets for on-site assembly. The single-phase system has high-efficiency dual pumps designed to protect critical components from leaks, and a centralized design with separate pump and control modules reduces both the number of components and complexity. “AI will keep pushing thermal output to new extremes, and data centers need cooling systems that can be easily deployed, managed, and scaled to match heat rejection demands as they rise,” said Joe Capes, CEO of LiquidStack in a statement. “With up to 10MW of cooling capacity at N, N+1, or N+2, the GigaModular is a platform like no other—we designed it to be the only CDU our customers will ever need. It future-proofs design selections for direct-to-chip liquid cooling without traditional limits or boundaries.”
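
The article doesn’t spell out the module sizes or counts behind the “up to 10MW at N, N+1, or N+2” claim, so the sketch below is only an illustration of what those redundancy levels mean for committable cooling capacity; the module rating and count are assumptions for the example, not LiquidStack specifications.

```python
# Illustrative only: what N, N+1 and N+2 redundancy mean for the cooling
# capacity a CDU deployment can commit to. The module rating and count are
# assumed for the example, not LiquidStack specs.
def committable_mw(module_mw: float, modules_installed: int, spares: int) -> float:
    """Capacity deliverable while tolerating `spares` simultaneous module failures."""
    return module_mw * (modules_installed - spares)

MODULE_MW, INSTALLED = 1.25, 8  # hypothetical: eight 1.25 MW pump modules
for spares, label in [(0, "N"), (1, "N+1"), (2, "N+2")]:
    print(f"{label}: {committable_mw(MODULE_MW, INSTALLED, spares):.2f} MW committable")
# N: 10.00 MW, N+1: 8.75 MW, N+2: 7.50 MW
```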

Read More »

Enterprises face data center power design challenges

“Now, with AI, GPUs need data to do a lot of compute and send that back to another GPU. That connection needs to be close together, and that is what’s pushing the density, the chips are more powerful and so on, but the necessity of everything being close together is what’s driving this big revolution,” he said. That architectural revolution is driving new data center designs. Cordovil said that instead of putting the power shelves within the rack, system administrators are putting a sidecar next to those racks and loading the sidecar with the power system, which serves two to four racks. This allows for more compute per rack and lower latency since the data doesn’t have to travel as far. The problem is that 1 MW racks are uncharted territory and no one knows how to manage the power, which is considerable now. “There’s no user manual that says, hey, just follow this and everything’s going to be all right. You really need to push the boundaries of understanding how to work. You need to start designing something somehow, so that is a challenge to data center designers,” he said. And this brings up another issue: many corporate data centers have power plugs that are like the ones that you have at home, more or less, so they didn’t need to have an advanced electrician certification. “We’re not playing with that power anymore. You need to be very aware of how to connect something. Some of the technicians are going to need to be certified electricians, which is a skills gap in the market that we see in most markets out there,” said Cordovil. A CompTIA A+ certification will teach you the basics of power, but not the advanced skills needed for these increasingly dense racks. Cordovil

Read More »

HPE NonStop servers target data center, high-throughput applications

HPE has bumped up the size and speed of its fault-tolerant NonStop Compute servers. There are two new servers – the 8TB, Intel Xeon-based NonStop Compute NS9 X5 and NonStop Compute NS5 X5 – aimed at enterprise customers looking to upgrade their transaction processing network infrastructure or support larger application workloads. Like other HPE NonStop systems, the two new boxes include compute, software, storage, networking and database resources as well as full-system clustering and HPE’s specialized NonStop operating system. The flagship NS9 X5 features support for dual-fabric HDR200 InfiniBand interconnect, which effectively doubles the interconnect bandwidth between it and other servers compared to the current NS8 X4, according to an HPE blog detailing the new servers. It supports up to 270 networking ports per NS9 X system, can be clustered with up to 16 other NS9 X5s, and can support 25 GbE network connectivity for modern data center integration and high-throughput applications, according to HPE.

Read More »

AI boom exposes infrastructure gaps: APAC’s data center demand to outstrip supply by 42%

“Investor confidence in data centres is expected to strengthen over the remainder of the decade,” the report said. “Strong demand and solid underlying fundamentals fuelled by AI and cloud services growth will provide a robust foundation for investors to build scale.” Enterprise strategies must evolve With supply constrained and prices rising, CBRE recommended that enterprises rethink data center procurement models. Waiting for optimal sites or price points is no longer viable in many markets. Instead, enterprises should pursue early partnerships with operators that have robust development pipelines and focus on securing power-ready land. Build-to-suit models are becoming more relevant, especially for larger capacity requirements. Smaller enterprise facilities — those under 5MW — may face sustainability challenges in the long term. The report suggested that these could become “less relevant” as companies increasingly turn to specialized colocation and hyperscale providers. Still, traditional workloads will continue to represent up to 50% of total demand through 2030, preserving value in existing facilities for non-AI use cases, the report added. The region’s projected 15 to 25 GW gap is more than a temporary shortage — it signals a structural shift, CBRE said. Enterprises that act early to secure infrastructure, invest in emerging markets, and align with power availability will be best positioned to meet digital transformation goals. “Those that wait may find themselves locked out of the digital infrastructure they need to compete,” the report added.

Read More »

Cisco bolsters DNS security package

The software can block domains associated with phishing, malware, botnets, and other high-risk categories such as cryptomining or new domains that haven’t been reported previously. It can also create custom block and allow lists and offers the ability to pinpoint compromised systems using real-time security activity reports, Brunetto wrote. According to Cisco, many organizations leave DNS resolution to their ISP. “But the growth of direct enterprise internet connections and remote work make DNS optimization for threat defense, privacy, compliance, and performance ever more important,” Cisco stated. “Along with core security hygiene, like a patching program, strong DNS-layer security is the leading cost-effective way to improve security posture. It blocks threats before they even reach your firewall, dramatically reducing the alert pressure your security team manages.” “Unlike other Secure Service Edge (SSE) solutions that have added basic DNS security in a ‘checkbox’ attempt to meet market demand, Cisco Secure Access – DNS Defense embeds strong security into its global network of 50+ DNS data centers,” Brunetto wrote. “Among all SSE solutions, only Cisco’s features a recursive DNS architecture that ensures low-latency, fast DNS resolution, and seamless failover.”
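
Cisco doesn’t document its resolver internals here, so the snippet below is only a generic illustration of the DNS-layer blocking idea the quote describes, stopping a domain on a blocklist before it is ever resolved; it is not Secure Access’s implementation, and the blocklist entries are made up.

```python
import socket

# Generic illustration of DNS-layer filtering (not Cisco's implementation):
# a domain on the blocklist is refused before any lookup or connection
# happens. Blocklist entries are made-up examples.
BLOCKLIST = {"malware-c2.example", "phishing-login.example"}

def safe_resolve(domain: str) -> list[str]:
    if domain.rstrip(".").lower() in BLOCKLIST:
        raise PermissionError(f"DNS-layer policy blocked {domain!r}")
    # Only allowed domains reach the normal resolver.
    return sorted({info[4][0] for info in socket.getaddrinfo(domain, None)})

print(safe_resolve("example.com"))      # resolves normally
# safe_resolve("malware-c2.example")    # would raise before any lookup
```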

Read More »

HPE Aruba unveils raft of new switches for data center, campus modernization

And in large-scale enterprise environments embracing collapsed-core designs, the switch acts as a high-performance aggregation layer. It consolidates services, simplifies network architecture, and enforces security policies natively, reducing complexity and operational cost, Gray said. In addition, the switch offers the agility and security required at colocation facilities and edge sites. Its integrated Layer 4 stateful security and automation-ready platform enable rapid deployment while maintaining robust control and visibility over distributed infrastructure, Gray said. The CX 10040 significantly expands the capacity it can provide and the roles it can serve for enterprise customers, according to one industry analyst. “From the enterprise side, this expands on the feature set and capabilities of the original 10000, giving customers the ability to run additional services directly in the network,” said Alan Weckel, co-founder and analyst with The 650 Group. “It helps drive a lower TCO and provide a more secure network.” Aimed as a VMware alternative: Gray noted that HPE Aruba is combining its recently announced Morpheus VM Essentials plug-in package, which offers a hypervisor-based package aimed at hybrid cloud virtualization environments, with the CX 10040 to deliver a meaningful alternative to Broadcom’s VMware package. “If customers want to get out of the business of having to buy VM cloud or Cloud Foundation stuff and all of that, they can replace the distributed firewall, microsegmentation and lots of the capabilities found in the old VMware NSX [networking software] and the CX 10k, and Morpheus can easily replace that functionality [such as VM orchestration, automation and policy management],” Gray said. The 650 Group’s Weckel weighed in on the idea of the CX 10040 as a VMware alternative:

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation. AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
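
The “LLM as a judge” pattern Witteveen alludes to is simple to sketch. In the snippet below the judges are stand-in stubs rather than real model API calls, so treat it as the shape of the ensemble-voting idea, not any vendor’s SDK:

```python
from collections import Counter
from typing import Callable

# Sketch of the LLM-as-judge pattern: several cheap models each score a
# candidate answer and a majority vote decides. The judge callables here
# are hard-coded stubs standing in for real model API calls.
Judge = Callable[[str, str], bool]  # (task, candidate) -> passes?

def judge_by_majority(task: str, candidate: str, judges: list[Judge]) -> bool:
    votes = Counter(judge(task, candidate) for judge in judges)
    return votes[True] > votes[False]

# Three hypothetical judges with different (toy) acceptance criteria.
judges: list[Judge] = [
    lambda t, c: len(c) > 0,               # stub: non-empty answer passes
    lambda t, c: "http" not in c,          # stub: flag hallucinated URLs
    lambda t, c: c.strip().endswith("."),  # stub: crude completeness check
]
print(judge_by_majority("summarize", "Agents improved in 2024.", judges))  # True
```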

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends. It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »