
Norway Premier Holds Fast to Fossil Fuels


Norwegian Prime Minister Jonas Gahr Store said the gas-rich nation will continue to hunt for fossil fuels even as parliamentary partners call for a retreat from the sector.

“We should explore and develop the Norwegian shelf,” Store said Tuesday as he started talks to form a new Labor government after winning the election.

Norway provides about a third of Europe’s natural gas, becoming the region’s biggest supplier after Russian deliveries sank following the invasion of Ukraine. Under Store, Labor has sought to slow the decline of the resource base. Yet the party won this week’s election with less than 30% of the vote, meaning it will need to find common ground with smaller parties including the Greens.

The Green Party pulled off its best result, more than doubling its seats to seven. It has traditionally demanded an end to oil exploration in the North Sea, though leader Arild Hermstad said Monday he plans to cooperate with Store in negotiations to get policies through parliament.

While total production from the Norwegian shelf peaked in the early 2000s, monthly oil output recently jumped to the highest in more than a decade following the ramp-up of the Johan Castberg field in the Barents Sea. Companies including Equinor ASA and Aker BP ASA are pouring billions into infrastructure aimed at squeezing out more barrels.

Norway is a “leading energy nation, important for Europe” and its energy security, Store said. “We will continue to be a reliable partner, but also to take forward technological shifts, cut emissions and live up to our climate obligations.”





Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Cisco’s Splunk embeds agentic AI into security and observability products

AI-powered observability enhancements

Cisco also announced it has updated Splunk Observability to use Cisco AgenticOps, which deploys AI agents to automate telemetry collection, detect issues, identify root causes, and apply fixes. The agentic AI updates help enterprise customers automate incident detection, root-cause analysis, and routine fixes. “We are making sure
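The pipeline Cisco describes (collect telemetry, detect an issue, identify the root cause, apply a fix) can be pictured as a simple loop. The sketch below is a minimal illustration of that agentic pattern; the types, thresholds, and function names are invented for the example and are not Cisco AgenticOps or Splunk APIs.

```python
# Minimal sketch of an agentic observability loop as described above:
# collect telemetry, detect an issue, identify a root cause, apply a fix.
# All names and thresholds are illustrative assumptions, not Cisco APIs.
from dataclasses import dataclass

@dataclass
class Telemetry:
    service: str
    error_rate: float      # errors per request, 0.0-1.0
    p99_latency_ms: float

def collect() -> list[Telemetry]:
    # A real system would pull this from telemetry indexes; stubbed here.
    return [Telemetry("checkout", 0.12, 2300.0), Telemetry("search", 0.01, 180.0)]

def detect(samples: list[Telemetry]) -> list[Telemetry]:
    return [t for t in samples if t.error_rate > 0.05 or t.p99_latency_ms > 1000]

def root_cause(t: Telemetry) -> str:
    # A real agent would correlate logs and traces; this rule stands in for that.
    return "saturated connection pool" if t.p99_latency_ms > 1000 else "bad deploy"

def remediate(t: Telemetry, cause: str) -> None:
    print(f"[{t.service}] cause={cause} -> applying fix (e.g., scale out or roll back)")

for incident in detect(collect()):
    remediate(incident, root_cause(incident))
```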

Read More »

Broadcom touts AI-native VMware, but gains aren’t revolutionary

“We have the relationships,” Umesh Mahajan, Broadcom’s general manager for application networking and security, told Network World. A large organization can’t simply stop using VMware, he says. “These workloads can’t disappear overnight. So, we will continue to have those relationships.” In addition, VMware’s technology is proprietary, complicated, and not something

Read More »

Cisco launches AI-driven data fabric powered by Splunk

At this week’s Splunk .conf25 event in Boston, Cisco unveiled a new data architecture that’s powered by the Splunk platform and designed to help enterprises glean AI-driven insights from machine-generated telemetry, such as metrics, events, logs and traces. The new Cisco Data Fabric integrates business and machine data for AI

Read More »


Electric grid growing faster than anticipated: EIA

Generation by the electric power sector is expected to grow 2.3% this year and 3% in 2026, exceeding a January forecast of 1.5% growth per year, according to the Energy Information Administration. “Electricity generation has been growing rapidly this year as a result of growing demand for power from data centers and industrial customers,” EIA said in its Short-Term Energy Outlook, released Tuesday. “The higher growth in generation reflects colder-than-expected weather earlier in 2025 along with the incorporation of load growth assessments by grid operators in the Electric Reliability Council of Texas and PJM systems.” Meeting that increased demand are increases in generation from “most energy sources” this year, EIA said. The agency forecasts that utility-scale solar will grow the most in 2025, “generating 33%, or 72 billion kilowatthours (BkWh), more electricity this year compared with 2024.” U.S. electricity generation totaled 4,300 BkWh in 2024. EIA expects it to total 4,400 BkWh in 2025 and to rise to 4,530 BkWh in 2026. “New solar projects account for more than half of the new generating capacity expected to come online this year,” EIA said. “Wind, hydropower, and nuclear all grow this year as well. We expect wind will generate 4% more electricity in 2025 than it did in 2024, while we expect hydropower generation will grow by 2%. Nuclear generation will rise slightly this year and about 2% next year with the restart of the Palisades plant in Michigan.” Natural gas generation, on the other hand, is not expected to grow this year – despite record consumption. Natural gas fuel prices are about 40% higher this year than last year, “which is encouraging more coal-fired generation but is also reducing the amount of electricity produced by natural gas-fired generators,” EIA said. Growing international demand and increases in U.S. exports that
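As a quick back-of-the-envelope check (my arithmetic, not EIA methodology), the quoted generation totals are consistent with the stated growth rates:

```python
# Back-of-the-envelope check of the EIA figures quoted above.
# Totals are in billion kilowatthours (BkWh), as reported.
gen = {2024: 4300, 2025: 4400, 2026: 4530}

for year in (2025, 2026):
    growth = (gen[year] - gen[year - 1]) / gen[year - 1] * 100
    print(f"{year}: {growth:.1f}% growth")   # ~2.3% for 2025, ~3.0% for 2026

# Solar: +72 BkWh in 2025 is described as a 33% increase, which implies a
# 2024 utility-scale solar base of roughly 72 / 0.33 ~ 218 BkWh.
print(f"implied 2024 utility-scale solar output: ~{72 / 0.33:.0f} BkWh")
```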

Read More »

Complaint over MISO’s $22B transmission portfolio faces widespread opposition

A complaint seeking to overturn the Midcontinent Independent System Operator’s $22 billion Tranche 2.1 regional transmission portfolio faces widespread opposition, according to comments filed Tuesday at the Federal Energy Regulatory Commission. Parties asking FERC to dismiss the complaint include MISO, six utility commissions, AES Indiana, Ameren, Xcel Energy and other utilities in MISO, the Data Center Coalition, the governor of Iowa, and consumer-oriented organizations such as the Illinois Citizens Utility Board. The complaint’s supporters include groups representing large energy users such as the Wisconsin Industrial Energy Group and the Electricity Consumers Resource Council as well as MISO’s market monitor. In a complaint filed on July 30, utility commissions from Arkansas, Louisiana, Mississippi, Montana and North Dakota contend that MISO used flawed modeling and assumptions, which led the grid operator to significantly overstate the benefits of its Tranche 2.1 portfolio. The Tranche 2.1 portfolio, approved by MISO’s board in December, contains 24 transmission projects, including some that form a 3,631-mile, 765-kV backbone. The projects are expected to go online from 2032 to 2034, according to MISO. MISO urged FERC to dismiss the “deficient and misleading” complaint, saying the grid operator followed its FERC-approved rules when it developed its multi-value portfolio and its benefits metrics through a stakeholder process. “Tranche 2.1 was approved nearly eight months ago, and critical reliability, load interconnection, and generation processes have already been undertaken that are dependent on the development of Tranche 2.1; consequently, those processes will be put at risk,” MISO said. MISO’s fast-track review for 26.5 GW in its Expedited Resource Additions Study process relies on the Tranche 2.1 portfolio and would be delayed if the complaint is upheld by FERC, the grid operator said. Terminating or undermining the Tranche 2.1 portfolio would chill transmission development across the United States, according to MISO. “Other regional

Read More »

New Era Energy and Digital Seeks Nasdaq Hearing to Avoid Delisting

New Era Energy & Digital Inc has received a Nasdaq notification of failure to maintain a market value of listed securities of at least $50 million, the company that recently rebranded from New Era Helium Inc said. “[T]he company was therefore subject to delisting unless the company timely requests a hearing before the Nasdaq Hearings Panel”, Midland, Texas-based New Era said in a statement on its website, adding it intends to request a hearing. “At the hearing, the company intends to present its plan to evidence compliance with the applicable continued listing criteria; however, there can be no assurance that the panel will grant the company’s request for continued listing or that the company will be able to achieve compliance within any extension that may be granted by the panel”, the statement said. “The company is considering all options available to it to regain compliance with the applicable listing rules, including but not limited to (i) raising additional capital through its equity line or other sources in order to increase the shareholders equity of the company in excess of $2.5 million (plus an appropriate burn rate) and/or (ii) issuing additional shares of common stock through a PIPE or similar transaction in order to achieve at least $35 million of MVLS (the MVLS threshold for the Nasdaq Capital Markets tier). “In that event, and assuming other listing requirements are met, the company would seek to move to the Nasdaq Capital Markets”. New Era Energy & Digital recently rebranded from New Era Helium to reflect its shift into a vertically integrated energy supplier. The rebranded New Era aims to develop “next-generation digital infrastructure and integrated power assets, including powered land and powered shells”, it said in a statement August 12. “The company delivers turnkey solutions that will enable hyperscale, enterprise and edge operators to

Read More »

Blaming data centers for PJM supply challenges misses the bigger picture

Todd Snitchler is president and CEO of the Electric Power Supply Association, which represents competitive power suppliers that own and operate about 200,000 MW of capacity throughout the U.S. The independent market monitor for the nation’s largest grid operator made waves in early June when it released a report warning that data centers’ power consumption could trigger regional energy shortages as early as next year. While data center growth is a significant factor in rising electricity demand, focusing narrowly on this sector risks distracting from the broader and more persistent challenges facing the grid. In a brief analysis of PJM Interconnection’s capacity auction held in July 2024 for the 2025/2026 delivery year — the mechanism by which the regional transmission organization procures resources to satisfy demand years in advance — Monitoring Analytics lays the blame for last July’s capacity prices squarely at the feet of data centers. The paper’s brevity can be attributed to the fact that this assessment is technically structured as an excerpt (“Part G”) of a forthcoming comprehensive report that ostensibly will take into consideration the myriad supply and demand pressures confronting PJM. An observer could question whether dedicating a standalone report to a single, isolated variable that happens to be an emerging focus for regulators inappropriately creates alarm in policy circles. Such posturing ignores other documented demand stressors in the regional transmission organization’s service territory, including the electrification of the nation’s second-largest port (and the broader economy) and the proliferation of electric vehicles in the region, which S&P Global estimates will grow significantly over the next 15 years. Unfortunately, this suspicion is confirmed by the market monitor’s emphasis on certain key data points throughout the report. One example is a passage lamenting this year’s projection for summer peak load in 2026, which the report explains is a “substantial upward
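For readers unfamiliar with the auction mechanism referenced above, it can be illustrated with a stylized single-clearing-price model: offers are stacked cheapest-first until forecast demand is met, and the marginal offer sets the price paid to every cleared resource. This is a toy sketch with invented numbers, far simpler than PJM's actual capacity market rules:

```python
# Stylized single-clearing-price capacity auction. Offers are stacked
# cheapest-first until demand is met; the marginal accepted offer sets the
# price paid to all cleared resources. Numbers are invented for illustration.
offers = [  # (resource, capacity in MW, offer in $/MW-day)
    ("nuclear", 1200, 30.0),
    ("gas_ct", 800, 95.0),
    ("demand_response", 300, 140.0),
    ("new_build_gas", 900, 270.0),
]
demand_mw = 2200

procured, clearing_price, cleared = 0, 0.0, []
for name, mw, price in sorted(offers, key=lambda o: o[2]):
    if procured >= demand_mw:
        break
    take = min(mw, demand_mw - procured)
    procured += take
    clearing_price = price          # last (marginal) accepted offer sets the price
    cleared.append((name, take))

for name, mw in cleared:
    print(f"{name}: {mw} MW at ${clearing_price}/MW-day")
# Rising demand (from data centers or anything else) pushes the margin onto
# costlier offers, raising the single clearing price paid to every resource.
```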

Read More »

Anaergia Signs Agreement Advancing Riverside Bioenergy Project

Renewable natural gas (RNG) tech company Anaergia Inc. has signed a deal to advance a pivotal RNG infrastructure project in the City of Riverside, California. The company said in a media release that under the agreement, its development-stage project asset Riverside Bioenergy Facility LLC (RivBF) will be sold to a developer with institutional investor funding. RivBF currently has a lease with the City of Riverside that provides for, upon approval of the parties, the construction of organic waste-to-RNG infrastructure at the Riverside Water Quality Control Plant (RWQCP), Anaergia said. Financial close is subject to certain conditions, including the amendment of the site lease with the City of Riverside, receipt of third-party consents and regulatory permits, and the completion of ancillary agreements. Anaergia Technologies LLC, a subsidiary of Anaergia, has been contracted to provide engineering, procurement and construction (EPC) services for the RWQCP upgrades following financial close, with the developer committing to financing. These services involve installing Anaergia’s advanced biogas conditioning and upgrading systems, organic waste feedstock processing systems and various upgrades at the RWQCP. Most of the EPC activities are planned for 2026 and 2027, and Anaergia anticipates recognizing CAD 39 million ($28.2 million) in revenue. The parties have also agreed on operations and maintenance (O&M) services. Anaergia Services LLC will provide these services to RivBF after EPC activities are completed, under a long-term contract. By transferring project ownership to a developer backed by an institutional investor, Anaergia reduces its financial risk while ensuring the effective deployment of its renewable energy solutions, it said. “The new project is pivotal in advancing the City of Riverside’s sustainability efforts by significantly decreasing the carbon footprint of its RWQCP and complying with California’s SB1383 regulations for organic waste recycling”, Assaf Onn, CEO of Anaergia, said.

Read More »

Microsoft finds possible solution to Azure capacity issues

Jason Wong, distinguished VP analyst at Gartner, said that Microsoft customers would welcome the agreement. “It’s well documented that Microsoft Azure has had capacity issues, particularly in the Azure East US Region shortfall in July,” he said. “This investment in Nebius is specific to US capacity to help service growing demand and meet peak customer loads.” IDC’s McCarthy said that Microsoft would use the augmented infrastructure in two ways. “Microsoft will create additional capacity for its customers, and for its own internal R&D,” he said, but he warned “[while this] may help Microsoft with its East Coast capacity, the distributed nature of AI agents means that it won’t be the silver bullet for all customer needs.” The agreement means that Microsoft will not be looking to enhance its own data centers to deliver additional capacity. The company announced earlier this year that it was not proceeding with plans to build data centers to meet the increasing demand for AI services. Wong said: “This partnership allows Microsoft to spread its risk while accelerating time-to-market for data center capacity, because hyperscalers simply can’t build data centers fast enough due to the constrained supply chain and regulatory hurdles.” He added that the Nebius deal is a reflection of Microsoft’s current approach. “Previously, it chose not to expand its capacity with CoreWeave, and this allowed the company to diversify its cloud infrastructure partnerships through Nebius. This partnership also allows Microsoft to de-risk from a GPU perspective, given the high demand and high cost of chips.” McCarthy said that the company would also need to reserve some capacity for future development. “Cloud providers like Microsoft, AWS, and Google also must decide how much capacity to reserve for R&D to build their own LLMs. In contrast, Oracle and newer players like CoreWeave, Vultr, and Nebius

Read More »

Cadence adds Nvidia to digital twin tool for data center design

Even though its software covers 750 vendors, Cadence is promoting the Nvidia angle considerably, and understandably so, since Nvidia has so much momentum. Several months ago, Nvidia released blueprints for optimal data center designs, and now Cadence has visualization software to use the designs. Knoth stressed support for the DGX SuperPod, a massive piece of equipment with 10 or more racks of processing power and all the interconnection that goes inside of it. “This is a huge leg up for anyone who’s looking to either retrofit an existing data center with new processing power or building out a new one from scratch,” he said. As data centers move from megawatts to gigawatts, complexity increases at a considerable rate. The shift to liquid cooling adds even more complexity to calculating power usage, said Knoth. “Because all these things, when you start going from the megawatt to the gigawatt scale, there are tremendous challenges, and that addition of liquid cooling has huge ramifications on the facility design. This is exactly where a physics-based digital twin comes into play,” he said. “The old strategies of building a large shell and then putting compute inside it is not going to cut it, and so you need some new technology to actually make these things work,” he added. The Nvidia systems in the Cadence Reality Digital Twin Platform are available now upon request and will be included in the next software release later this year.

Read More »

Nvidia rolls out new GPUs for AI inferencing, large workloads

Inference is often considered a single step in the AI process, but it is really two workloads, according to Shar Narasimhan, director of product in Nvidia’s Data Center group: the context, or prefill, phase and the decode phase. Each phase makes different demands on the underlying AI infrastructure. The prefill phase is compute-intensive, whereas the decode phase is memory-intensive, but until now a single GPU has been asked to do both even though it really does only one of the tasks well. The Rubin CPX has been engineered to improve memory performance, Narasimhan said, making it purpose-built for both phases and offering processing power as well as high throughput and efficiency. “It will dramatically increase the productivity and performance of AI factories,” said Narasimhan. It achieves this through massive token generation. Tokens equal work units in AI, particularly generative AI, so the more tokens generated, the more revenue generated. Nvidia is also announcing a new Vera Rubin NVL144 CPX rack, offering 7.5 times the performance of an NVL72, the current top-of-the-line system. Narasimhan said the NVL144 CPX enables AI service providers to dramatically increase their profitability by delivering $5 billion of revenue for every $100 million invested in infrastructure. Rubin CPX is offered in multiple configurations, including the Vera Rubin NVL144 CPX, which can be combined with the Quantum-X800 InfiniBand scale-out compute fabric or the Spectrum-X Ethernet networking platform with Nvidia Spectrum-XGS Ethernet technology and Nvidia ConnectX-9 SuperNICs. Nvidia Rubin CPX is expected to be available at the end of 2026.
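The two-phase split Narasimhan describes maps onto workload shape: prefill pushes the whole prompt through large matrix multiplies in one pass (compute-bound), while decode generates one token at a time yet still streams the full weights from memory on every step (memory-bound). A toy numpy illustration with arbitrary sizes, not a real model:

```python
# Toy illustration of why prefill is compute-bound and decode is memory-bound.
# Sizes are arbitrary; this sketches the workload shape, not a real model.
import numpy as np

d_model, prompt_len = 1024, 2048
weights = np.random.randn(d_model, d_model).astype(np.float32)

# Prefill: the whole prompt goes through in one large matmul.
# FLOPs scale with prompt_len * d_model^2, so arithmetic dominates.
prompt = np.random.randn(prompt_len, d_model).astype(np.float32)
prefill_out = prompt @ weights            # (2048, 1024) x (1024, 1024)

# Decode: one token per step, but the full weight matrix must still be read
# from memory each step, so bytes moved dominate and FLOPs are tiny.
token = np.random.randn(1, d_model).astype(np.float32)
for _ in range(8):                        # 8 generated tokens
    token = token @ weights               # (1, 1024) x (1024, 1024)

flops_per_decode_step = 2 * d_model * d_model
bytes_per_decode_step = weights.nbytes
print(prefill_out.shape, token.shape)
print(f"decode arithmetic intensity ~ {flops_per_decode_step / bytes_per_decode_step:.2f} FLOPs/byte")
```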

Read More »

Google adds Gemini to its on-prem cloud for increased data protection

Google has announced the general availability of its Gemini artificial intelligence models on Google Distributed Cloud (GDC), making its generative AI product available on enterprise and government data centers. GDC is an on-premises implementation of Google Cloud, aimed at heavily regulated industries like medical and financial services, to bring Google Cloud services within company firewalls rather than the public cloud. The launch of Gemini on GDC allows organizations with strict data residency and compliance requirements to deploy generative AI without compromising control over sensitive information. GDC uses Nvidia Hopper and Blackwell-era GPU accelerators with automated load balancing and zero-touch updates for high availability. Security features include audit logging and access control capabilities that provide full transparency for customers. The platform also features Confidential Computing support for both CPUs (with Intel TDX) and GPUs (with Nvidia’s confidential computing) to secure sensitive data and prevent tampering or exfiltration.

Read More »

Nvidia networking roadmap: Ethernet, InfiniBand, co-packaged optics will shape data center of the future

Nvidia is baking into its Spectrum-X Ethernet platform a suite of algorithms that can implement networking protocols to allow Spectrum-X switches, ConnectX-8 SuperNICs, and systems with Blackwell GPUs to connect over wider distances without requiring hardware changes. These Spectrum-XGS algorithms use real-time telemetry—tracking traffic patterns, latency, congestion levels, and inter-site distances—to adjust controls dynamically.

Ethernet and InfiniBand

Developing and building Ethernet technology is a key part of Nvidia’s roadmap. Since it first introduced Spectrum-X in 2023, the vendor has rapidly made Ethernet a core development effort. This is in addition to InfiniBand development, which is still Nvidia’s bread-and-butter connectivity offering. “InfiniBand was designed from the ground up for synchronous, high-performance computing — with features like RDMA to bypass CPU jitter, adaptive routing, and congestion control,” Shainer said. “It’s the gold standard for AI training at scale, connecting more than 270 of the world’s top supercomputers. Ethernet is catching up, but traditional Ethernet designs — built for telco, enterprise, or hyperscale cloud — aren’t optimized for AI’s unique demands,” Shainer said. Most industry analysts predict Ethernet deployment for AI networking in enterprise and hyperscale deployments will increase in the next year; that makes Ethernet advancements a core direction for Nvidia and any vendor looking to offer AI connectivity options to customers. “When we first initiated our coverage of AI back-end networks in late 2023, the market was dominated by InfiniBand, holding over 80% share,” wrote Sameh Boujelbene, vice president of Dell’Oro Group, in a recent report. “Despite its dominance, we have consistently predicted that Ethernet would ultimately prevail at scale. What is notable, however, is the rapid pace at which Ethernet gained ground in AI back-end networks. As the industry moves to 800 Gbps and beyond, we believe Ethernet is now firmly positioned to overtake InfiniBand in these high-performance deployments.”
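Stated abstractly, the Spectrum-XGS behavior described above is a telemetry-driven feedback loop: measure latency, congestion, and inter-site distance, then adjust transport parameters. The sketch below is a hypothetical illustration of that loop; the field names and control rule are invented, not Nvidia internals:

```python
# Hypothetical sketch of telemetry-driven congestion control in the spirit of
# the description above. Fields and the control rule are illustrative
# assumptions, not Nvidia Spectrum-XGS internals.
from dataclasses import dataclass

@dataclass
class LinkTelemetry:
    rtt_ms: float          # measured round-trip time between sites
    queue_depth: float     # switch buffer occupancy, 0.0-1.0
    distance_km: float     # inter-site distance

def adjust_window(current_window: int, t: LinkTelemetry) -> int:
    """Shrink the in-flight window under congestion; grow it on long, clean paths."""
    if t.queue_depth > 0.8:                 # congestion building: back off
        return max(1, current_window // 2)
    # Longer paths need a larger window to keep the pipe full (bandwidth-delay).
    target = int(t.rtt_ms * (1 + t.distance_km / 1000))
    return min(current_window + 1, target)

window = 16
samples = [LinkTelemetry(2.0, 0.3, 80), LinkTelemetry(9.5, 0.9, 900), LinkTelemetry(9.5, 0.4, 900)]
for sample in samples:
    window = adjust_window(window, sample)
    print(f"rtt={sample.rtt_ms}ms depth={sample.queue_depth} -> window={window}")
```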

Read More »

Inside the AI-optimized data center: Why next-gen infrastructure is non-negotiable

How are AI data centers different from traditional data centers? AI data centers and traditional data centers can be physically similar, as they contain hardware, servers, networking equipment, and storage systems. The difference lies in their capabilities: Traditional data centers were built to support general computing tasks, while AI data centers are specifically designed for more sophisticated, time- and resource-intensive workloads. Conventional data centers are simply not optimized for AI’s advanced tasks and necessary high-speed data transfer. Here’s a closer look at their differences:

AI-optimized vs. traditional data centers

Traditional data centers: Handle everyday computing needs such as web browsing, cloud services, email and enterprise app hosting, data storage and retrieval, and a variety of other relatively low-resource tasks. They can also support simpler AI applications, such as chatbots, that do not require intensive processing power or speed.

AI data centers: Built to compute significant volumes of data and run complex algorithms, ML and AI tasks, including agentic AI workflows. They feature high-speed networking and low-latency interconnects for rapid scaling and data transfer to support AI apps and edge and internet of things (IoT) use cases.

Physical infrastructure

Traditional data centers: Typically composed of standard networking architectures such as CPUs suitable for handling networking, apps, and storage.

AI data centers: Feature more advanced graphics processing units (GPUs), popularized by chip manufacturer Nvidia, tensor processing units (TPUs), developed by Google, and other specialized accelerators and equipment.

Storage and data management

Traditional data centers: Generally store data in more static cloud storage systems, databases, data lakes, and data lakehouses.

AI data centers: Handle huge amounts of unstructured data including text, images, video, audio, and other files. They also incorporate high-performance tools including parallel file systems, multiple network servers, and NVMe solid state drives (SSDs).

Power consumption

Traditional data centers: Require robust cooling

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it has been a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
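The LLM-as-judge idea mentioned above is straightforward to sketch: one or more judge models score each candidate answer against a rubric, and the highest-scoring candidate wins. In this minimal sketch, call_model is a stub standing in for any chat-completion API (an assumption, not a specific vendor SDK):

```python
# Minimal sketch of the LLM-as-judge pattern described above.
# call_model is a stub standing in for any chat-completion API; replace it
# with a real client to use this in practice.
import statistics

def call_model(model: str, prompt: str) -> str:
    # Stub so the sketch runs standalone; a real judge would return a grade.
    return "7"

def judge_score(judge_model: str, question: str, answer: str) -> float:
    rubric = (f"Question: {question}\nAnswer: {answer}\n"
              "Rate the answer's correctness from 0 to 10. Reply with only the number.")
    return float(call_model(judge_model, rubric))

def best_answer(question: str, candidates: list[str], judges: list[str]) -> str:
    # Average several judges per candidate to smooth out single-judge noise;
    # using three or more cheap models is the approach suggested above.
    scored = [(statistics.mean(judge_score(j, question, a) for j in judges), a)
              for a in candidates]
    return max(scored)[1]

print(best_answer("What is the capital of Norway?", ["Oslo", "Bergen"], ["judge-a", "judge-b"]))
```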

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
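At toy scale, the second paper's recipe (generate candidate attacks, score them with an auto-generated reward that values both success and diversity, iterate) looks like the loop below. The "attacker" here is random string mutation and the "target" a trivial rule; this illustrates only the reward structure, not OpenAI's actual framework or models:

```python
# Toy loop in the spirit of reward-driven, diversity-seeking attack generation.
# The attacker is random mutation and the target a trivial rule; this is an
# illustration of the reward shape only, not OpenAI's method.
import random

random.seed(0)

def target_is_fooled(attack: str) -> bool:
    # Stand-in for a model under test: "vulnerable" to inputs containing 'zz'.
    return "zz" in attack

def novelty(attack: str, found: list[str]) -> float:
    # Crude character-set overlap; rewards attacks unlike those already found.
    if not found:
        return 1.0
    overlap = max(len(set(attack) & set(f)) / len(set(attack) | set(f)) for f in found)
    return 1.0 - overlap

def mutate(attack: str) -> str:
    i = random.randrange(len(attack))
    return attack[:i] + random.choice("abyz") + attack[i + 1:]

population, found = ["aaaaaa"] * 8, []
for _ in range(100):                          # multi-step search loop
    candidates = [mutate(a) for a in population for _ in range(2)]
    scored = []
    for a in candidates:
        success = target_is_fooled(a)
        # Auto-generated reward: success plus a bonus for being different.
        reward = (1.0 if success else 0.0) + 0.5 * novelty(a, found)
        if success and a not in found:
            found.append(a)
        scored.append((reward, a))
    scored.sort(reverse=True)
    population = [a for _, a in scored[:8]]   # keep the highest-reward attacks

print(f"distinct successful attacks found: {len(found)}")
```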

Read More »