
About 700 Bcf of Gas Matched in 2nd Midterm Round of AggregateEU


The European Commission has matched almost 20 billion cubic meters (706.29 billion cubic feet) of demand from European Union gas buyers with offers from potential suppliers under the second midterm round of AggregateEU.

Vendors offered 31 Bcm, exceeding the 29 Bcm of demand pooled during the matchmaking round opened this month, according to an online statement Wednesday from the Commission’s Directorate-General for Energy.
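As a quick arithmetic check on the figures above (a minimal sketch using the standard 35.3147 cubic-feet-per-cubic-meter conversion and the volumes reported in this article):

```python
# Quick check of the Bcm -> Bcf conversion and coverage figures quoted above.
CF_PER_CUBIC_METER = 35.3147  # standard cubic feet per cubic meter

def bcm_to_bcf(bcm: float) -> float:
    """Convert billion cubic meters to billion cubic feet."""
    return bcm * CF_PER_CUBIC_METER

matched_bcm, demand_bcm, offers_bcm = 20.0, 29.0, 31.0
print(f"{matched_bcm:g} Bcm = {bcm_to_bcf(matched_bcm):.2f} Bcf")  # 20 Bcm = 706.29 Bcf
print(f"Offers covered {offers_bcm / demand_bcm:.0%} of pooled demand")  # ~107%
```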

“All participants have been informed about the matching results and will now be able to negotiate contracts bilaterally”, the Directorate said.

Energy and Housing Commissioner Dan Jørgensen commented, “As we fast track our decarbonization efforts in the EU, it is also key that European buyers are able to secure competitive gas offers from reliable international suppliers”.

“The positive results of this second matching round on joint gas purchasing show the strong interest from the market and the value in providing increased transparency to European gas users and buyers”, Jørgensen added.

Announcing the second midterm round March 12, 2025, the Directorate said LNG buyers and sellers not only can name their preferred delivery terminal as before but can now also express a preference to have the LNG delivered free-on-board. This option has been added “to better reflect LNG trade practices and attract additional international suppliers”, the Directorate said.

AggregateEU, a mechanism in which gas suppliers compete to book demand placed by companies in the EU and its Energy Community partner countries, was initially only meant for the 2023-24 winter season. However, citing lessons from the prolonged effects of the energy crisis, the EU has made it a permanent mechanism under “Regulation (EU) 2024/1789 on the internal markets for renewable gas, natural gas and hydrogen”, adopted June 13, 2024.

Midterm rounds offer potential suppliers six-month contracts within a buyer-seller partnership of up to five years.

“In early 2024, with the effects of the energy crisis still not over, AggregateEU is introducing a different concept of mid-term tenders in order to address the growing demand for stability and predictability from buyers and sellers of natural gas”, the Directorate said February 1, 2024, announcing the first midterm tender.

“Under such tenders, buyers will be able to submit their demand for seasonal 6-month periods (for a minimum 1,800,000 MWh for LNG and 30,000 for NBP per period), going from April 2024 to October 2029. This is intended to support sellers in identifying buyers who might be interested in a longer trading partnership – i.e. up to 5 years.

“Mid-term tenders will not only increase security of supply but also help European industrial players increase their competitiveness”.

NBP gas, or National Balancing Point gas, refers to gas from the national transmission systems of EU states.
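For illustration, here is a minimal sketch of how the per-period floors quoted above might be enforced; the threshold values come from the Directorate's quote, the unit for the NBP floor is assumed to be MWh, and the function name is hypothetical:

```python
# Hypothetical validator for per-period demand floors in midterm tenders.
# Values from the Directorate's quote above; the NBP unit is assumed to be MWh.
MIN_MWH_PER_PERIOD = {"LNG": 1_800_000, "NBP": 30_000}

def meets_minimum(product: str, mwh: float) -> bool:
    """Return True if a six-month demand bid clears the floor for its product."""
    return mwh >= MIN_MWH_PER_PERIOD[product]

assert meets_minimum("LNG", 2_000_000)   # clears the LNG floor
assert not meets_minimum("NBP", 25_000)  # below the NBP floor
```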

The first midterm round aggregated 34 Bcm of demand from 19 companies including industrial players. Offers totaled 97.4 Bcm, almost triple the demand, the Commission said February 28, 2024.

A total of seven rounds have been conducted under AggregateEU, pooling over 119 Bcm of demand and attracting 191 Bcm of offers. Nearly 100 Bcm have been matched, according to Thursday’s results announcement.

AggregateEU, created under Council Regulation 2022/2576 of December 19, 2022, is part of the broader EU Energy Platform for coordinated purchases of gas and hydrogen. The Energy Platform was formed in 2022 as part of the REPowerEU strategy for achieving energy independence from Russia.

To contact the author, email [email protected]


Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Observability platforms gain AI capabilities

LogicMonitor also announced Oracle Cloud Infrastructure (OCI) Monitoring to expand its multi-cloud coverage, provide visibility across AWS, Azure, GCP, and OCI, and offer observability capabilities across several cloud platforms. The company also made its LM Uptime and Dynamic Service Insights capabilities generally available to help enterprise IT organizations find issues sooner

Read More »

Cisco strengthens integrated IT/OT network and security controls

Another significant move that will help IT/OT integration is the planned integration of the management console for Cisco’s Catalyst and Meraki networks. That combination will allow IT and OT teams to see the same dashboard for industrial OT and IT enterprise/campus networks. Cyber Vision will feed into the dashboard along

Read More »

TotalEnergies signs agreement for oil exploration blocks offshore Liberia

TotalEnergies has signed four production sharing contracts (PSC) for blocks offshore Liberia. The work program for the exploration blocks, which were awarded following the 2024 Direct Negotiation Licensing Round organized by the Liberia Petroleum Regulatory Agency, includes acquisition of one firm 3D seismic study, the operator said in a release Sept. 17. The PSCs are Liberia’s first upstream petroleum agreements in more than 10 years, the regulator said in a separate release.  Blocks LB-6, LB-11, LB-17 and LB-29, which together cover an area of about 12,700 sq km, lie south of the Liberia basin. Entering the blocks aligns with the operator’s strategy to diversify its exploration portfolio in high-potential new oil-prone basins, said Kevin McLachlan, senior vice-president, exploration, TotalEnergies.

Read More »

Ksi Lisims LNG advances toward construction with regulatory approval

Western LNG, Houston, Tex., in partnership with the Nisga’a Nation and Rockies LNG, could begin construction of the proposed Ksi Lisims LNG export project on Canada’s northwest coast as early as this year. The milestone comes as the company recently received an Environmental Assessment Certificate from the Government of British Columbia and a positive Decision Statement from the Government of Canada. The proposed project, in British Columbia, Canada, will be sited on Nisga’a Nation-owned land on the northern tip of Pearse Island. The project is expected to be powered by renewable hydroelectricity and net-zero ready by 2030, according to the company. It is expected to produce 12 million tonnes/year of LNG from two floating LNG production and storage vessels and to receive about 1.7-2.0 bcfd of natural gas. Commercial operations are expected to begin in late 2028 or 2029. The approval “confirms that the project meets or exceeds the high standards of BC, Canada, and the Nisga’a Nation, including environmental protection, Indigenous and public engagement, and community benefits,” Western LNG said in a release Sept. 16. The Nisga’a Nation conducted its own assessment of the project as required under the Nisga’a Final Agreement, the positive conclusions of which were provided in the Environmental Assessment Application. With proposed mitigation measures in place, and following a 4-year review that included over 50 technical studies reviewed by a technical advisory committee comprising six participating Indigenous Nations, subject matter experts, and over 20 federal and provincial regulators, the BC Environmental Assessment Office concluded that the proposed project would have no significant residual adverse effects and that cumulative effects can be managed through regulatory conditions and careful project planning, Western LNG said.

Read More »

Brazil oil supply returns to record high volumes

Preliminary data for July from the Agencia Nacional do Petroleo (ANP) revealed a notable upward revision, showing oil production rising by 200,000 b/d month-on-month to reach a record high of 4 million b/d, an increase of 720,000 b/d year-on-year. Preliminary ANP figures for August indicate a slight decline of 50,000 b/d, primarily due to operational issues at the Petrobras-operated Mero 2 and Equinor Energy-operated Peregrino projects. The International Energy Agency (IEA) projects Brazil’s overall production to rise by an unprecedented average of 400,000 b/d this year, reaching 3.8 million b/d, with an additional gain of 180,000 b/d anticipated in 2026. This robust growth is a key component of Petrobras’ ambitious multi-year plan to deploy floating production, storage, and offloading (FPSO) vessels in a systematic manner. In addition to the five FPSOs launched in 2023, another six are expected to be operational by mid-2025, with one more anticipated later this year and an additional one next year. Collectively, the eight FPSOs being put into service from 2024 to 2026 will have a combined capacity of just over 1.3 million b/d, nearly double that of current UK production. For the FPSOs currently in operation, utilization rates have recently surged to nearly 75%, a significant increase from 63% in second-quarter 2025 and substantially higher than last year’s average of 45%.
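To put the utilization figures above in barrels, a rough sketch follows; applying the reported utilization rates to the new units' combined nameplate capacity is an assumption made here purely for illustration:

```python
# Rough arithmetic: effective output = nameplate capacity x utilization.
NAMEPLATE_BPD = 1_300_000  # combined capacity of the eight 2024-2026 FPSOs (article figure)

for label, utilization in [("2024 average", 0.45), ("Q2 2025", 0.63), ("recent", 0.75)]:
    print(f"{label}: ~{NAMEPLATE_BPD * utilization:,.0f} b/d effective")
```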

Read More »

Tamarack Valley to sell two remaining non-core Eastern Alberta positions

The East assets currently produce about 4,000 boe/d (3,500 b/d of oil), or 6% of Tamarack’s corporate production. Tamarack said there is no change to the company’s full-year 2025 production guidance, primarily due to ‘outperformance’ from the first-half 2025 development programs, Clearwater waterflood response, and a tuck-in acquisition of additional Clearwater assets early in this year’s third quarter. That $51.5-million (Can.) acquisition of a private company added 1,100 b/d of Clearwater heavy oil production through the balance of 2025 and over 114 net sections of Clearwater lands, the company noted in its second-quarter 2025 earnings report. With the deal, Tamarack now holds 100% working interest ownership and operatorship across its Nipisi position and holds upside with step-out and exploration opportunities at West Nipisi. Full-year production outlook remains at 67,000-69,000 boe/d with fourth-quarter production expected to be 66,500-67,500 boe/d. The company said the East asset deal will reduce its asset retirement obligations by $63 million, reflecting 25% of the company’s total corporate liability, and includes about 40% of Tamarack’s total inactive decommissioning obligations. The deal is expected to close in October 2025, subject to customary closing conditions.

Read More »

Hibiscus Petroleum spuds well on the UK Continental Shelf

Hibiscus Petroleum Bhd. spudded a well at Teal West on the UK Continental Shelf (UKCS) in the Central North Sea. The well, drilled with the Shelf Drilling Fortress jackup rig, lies 4 km from the Anasuria floating production, storage, and offloading (FPSO) vessel. Once completed, the well is expected to be tied back to the FPSO. Subsea installation activities are scheduled to take place in early second-quarter 2026, with first oil expected by mid-2026. Teal West lies in UKCS Block 21/24d, 155 km offshore in 90 m of water. The well is one of only three development wells being drilled across the entire UKCS in 2025, the company said. Teal West will be developed over three phases and involves drilling up to two subsea production wells to extract oil and gas and one water injection well. All wells will be tied back to the Anasuria FPSO by 3.4-km flowlines. Hibiscus has two potential production hubs in the UK: the existing Anasuria Cluster, which includes the Fyne and Teal West fields, and the Greater Marigold Area Development (GMAD), which encompasses the Marigold, Sunflower, Kildrummy, and Crown fields. Anasuria Hibiscus UK Ltd., Hibiscus Petroleum’s wholly owned subsidiary, operates the Anasuria FPSO through its Anasuria Operating Co. joint venture with Ping Petroleum UK PLC, a subsidiary of Dagang NeXchange Bhd. The FPSO supports oil production, storage, and gas export from the Guillemot A, Teal, Teal South, and Cook fields.

Read More »

Analyst Says Bearish EIA Report Reset Gas Market Momentum

In an EBW Analytics Group report sent to Rigzone by the EBW team on Friday, Eli Rubin, an energy analyst at the company, said a “bearish” U.S. Energy Information Administration (EIA) report “reset… [gas] market momentum lower”. The EBW report highlighted that the October natural gas contract closed at $2.939 per million British thermal units (MMBtu) on Thursday. It pointed out that this was down 16.1 cents, or 5.2 percent, from Wednesday’s close. “Yesterday’s bearish EIA storage surprise refocused the natural gas market on rapidly rising storage levels – and risks of a storage overshoot,” Rubin said in the EBW report. “Intraday prices dropped to $2.925, down 24.3 cents (minus eight percent) off Tuesday’s high. The key near-term question is if technical support that tested as low as $2.869 on Sunday evening can hold,” he added. “Production readings remain subdued, although the conclusion to a portion of Permian maintenance may offer upside for supply – and downside risk for prices – next week,” Rubin continued. The EBW analyst went on to state in the report that, “amid renewed risks of the storage trajectory overshooting higher, October weather may be the decisive factor”. “If the trend of warming autumns holds, production rises, hurricane threats form, or LNG trips offline, a collapse in the front-month is possible into next week’s expiry,” he said in the report. “If signs emerge next week that strengthen DTN’s outlook for a colder October, however, averting bearish risks could offer fundamental uplift – creating volatility into next week’s October expiration,” he added. Rubin went on to warn in the report that “multiple possible bearish risks highlight the balance of price risks remains to the downside”. In its latest weekly natural gas storage report, which was released on September 18 and included data for the week ending September
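A quick arithmetic check on the quoted price move (a minimal sketch using only the figures reported above):

```python
# Arithmetic check of the reported day-on-day move in the October contract.
close_thu = 2.939  # $/MMBtu, Thursday close (from the report)
drop = 0.161       # reported decline, $/MMBtu
close_wed = close_thu + drop
print(f"Wednesday close: ${close_wed:.3f}")  # $3.100
print(f"Change: {-drop / close_wed:.1%}")    # -5.2%, matching the report
```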

Read More »

Executive Roundtable: CapEx vs. OpEx in the AI Era – Balancing the Rush to Build with Long-Term Efficiency

Becky Wacker, Trane: Focusing on post-initial-construction CapEx, finding a balance between capital expenditure (CapEx) and operational expenditure (OpEx) is crucial for efficient capital deployment by data center operators. This balance can be influenced by ownership strategy, cash position, budget planning duration, sustainability goals, and contract commitments and durations with end users. At Trane, we focus on understanding these key characteristics of operations and tailor our ongoing support to best meet the unique business objectives and needs of our customers. We address these challenges through three major approaches:

1. Smart Services Solutions: Our smart services solutions improve system efficiency through AI-driven tools and a large fleet of truck-based service providers. By keeping system components operating at peak efficiency, preventing unanticipated failures, and balancing the critical needs of both digital monitoring and well-trained technicians, we maintain critical systems. This approach reduces OpEx through efficient operation and minimizes unplanned CapEx. Consequently, this enables improved budgeting and the ability to invest in additional data centers or other business ventures.

2. Sustainable and Flexible System Design: As a global climate innovator, Trane designs our products and collaborates with engineers and owners to integrate these products into highly efficient system solutions. We apply this approach not only in the initial design of the data center but also in planning for future flexibility as demand increases or components require replacement. This proactive strategy reduces ongoing utility bills, minimizes CapEx for upgrades, and helps meet sustainability goals. By focusing on both immediate and long-term efficiency, Trane ensures that data center operators can maintain optimal performance while adhering to environmental standards.

3. Flexible Financial Solutions: Trane’s Energy Services solutions have a 25+ year history of providing Energy Performance Contracting solutions. These can be leveraged to provide upgrades and energy optimization to cooling, power, water, and

Read More »

OpenAI and Oracle’s $300B Stargate Deal: Building AI’s National-Scale Infrastructure

Oracle’s ‘Astonishing’ Quarter Stuns Wall Street, Targeting Cloud Growth and Global Data Center Expansion

Oracle’s fiscal Q1 2026 earnings report on September 9, along with its massive cloud backlog, stunned Wall Street. The market reacted positively to the huge growth in infrastructure revenue and remaining performance obligations (RPO), a measure of future revenue from customer contracts, which indicates significant growth potential and Oracle’s increasing role in AI technology, even as earnings and revenue missed estimates. After the earnings announcement, Oracle stock soared more than 36%, marking its biggest daily gain since December 1992 and adding more than $250 billion in market value to the company. Executives reported that the company’s RPO jumped about 360% in the quarter to $455 billion, indicating strong potential growth and demand for its cloud services and infrastructure. As a result, Oracle CEO Safra Catz projects that its GPU-heavy Oracle Cloud Infrastructure (OCI) business will grow 77% to $18 billion in its current fiscal year (2026) and soar to $144 billion in 2030 (the implied growth rate is worked out in the sketch below). The earnings announcement also briefly made Oracle’s co-founder, chairman, and CTO Larry Ellison the richest person in the world, with shares of Oracle surging as much as 43%. By the end of the trading day, his wealth had increased nearly $90 billion to $383 billion, just shy of Tesla CEO Elon Musk’s $384 billion fortune. Also on the earnings call, Ellison announced that at the Oracle AI World event in October, the company will introduce the Oracle AI Database OCI, which will let customers use the Large Language Model (LLM) of their choice, including Google’s Gemini, OpenAI’s ChatGPT, and xAI’s Grok, directly on top of the Oracle Database to easily access and analyze all existing database data.

Capital Expenditure Strategy

These astonishing numbers are due
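For context on that OCI projection, the implied compound annual growth rate from $18 billion to $144 billion can be computed as below; treating the gap as four years is an assumption, since the article does not align the fiscal and calendar years:

```python
# Implied compound annual growth rate of projected OCI revenue.
start_bn, end_bn = 18.0, 144.0  # fiscal 2026 -> 2030 (article figures)
years = 4                       # assumed span; the article doesn't state the alignment
cagr = (end_bn / start_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # ~68% per year
```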

Read More »

Ethernet, InfiniBand, and Omni-Path battle for the AI-optimized data center

IEEE 802.3df-2024. The IEEE 802.3df-2024 standard, completed in February 2024, marked a watershed moment for AI data center networking. The 800 Gigabit Ethernet specification provides the foundation for next-generation AI clusters. It uses an 8-lane parallel structure that enables flexible port configurations from a single 800GbE port: 2×400GbE, 4×200GbE, or 8×100GbE, depending on workload requirements (these breakouts are checked in the sketch below). The standard maintains backward compatibility with existing 100Gb/s electrical and optical signaling. This protects existing infrastructure investments while enabling seamless migration paths.

UEC 1.0. The Ultra Ethernet Consortium represents the industry’s most ambitious attempt to optimize Ethernet for AI workloads. The consortium released its UEC 1.0 specification in 2025, marking a critical milestone for AI networking. The specification introduces modern RDMA implementations, enhanced transport protocols, and advanced congestion control mechanisms that eliminate the need for traditional lossless networks. UEC 1.0 enables packet spraying at the switch level with reordering at the NIC, delivering capabilities previously available only in proprietary systems. The UEC specification also includes Link Level Retry (LLR) for lossless transmission without traditional Priority Flow Control, addressing one of Ethernet’s historical weaknesses versus InfiniBand. LLR operates at the link layer to detect and retransmit lost packets locally, avoiding expensive recovery mechanisms at higher layers. Packet Rate Improvement (PRI) with header compression reduces protocol overhead, while network probes provide real-time congestion visibility.

InfiniBand extends architectural advantages to 800Gb/s. InfiniBand emerged in the late 1990s as a high-performance interconnect designed specifically for server-to-server communication in data centers. Unlike Ethernet, which evolved from local area networking, InfiniBand was purpose-built for the demanding requirements of clustered computing. The technology provides lossless, ultra-low latency communication through hardware-based flow control and specialized network adapters. The technology’s key advantage lies in its credit-based flow control. Unlike Ethernet’s packet-based approach, InfiniBand prevents packet loss by ensuring receiving buffers have space before transmission begins. This eliminates
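To make the 802.3df breakout math concrete, here is a minimal sketch (an illustration, not vendor tooling) that checks the port configurations named above against the standard's 8-lane, 100 Gb/s-per-lane structure:

```python
# Check the named breakouts of an 800GbE port against 8 lanes of 100 Gb/s each.
LANES, LANE_GBPS = 8, 100

def breakout_ok(ports: int, gbps_each: int) -> bool:
    """Valid if each port maps to whole lanes and the split uses all 8 lanes."""
    lanes_per_port, remainder = divmod(gbps_each, LANE_GBPS)
    return remainder == 0 and ports * lanes_per_port == LANES

for ports, speed in [(1, 800), (2, 400), (4, 200), (8, 100)]:
    print(f"{ports}x{speed}GbE: {breakout_ok(ports, speed)}")  # all True
```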

Read More »

Land and Expand: CleanArc Data Centers, Google, Duke Energy, Aligned’s ODATA, Fermi America

Land and Expand is a monthly feature at Data Center Frontier highlighting the latest data center development news, including new sites, land acquisitions and campus expansions. Here are some of the new and notable developments from hyperscale and colocation data center operators about which we’ve been reading lately.

Caroline County, VA, Approves 650-Acre Data Center Campus from CleanArc

Caroline County, Virginia, has approved redevelopment of the former Virginia Bazaar property in Ruther Glen into a 650-acre data center campus in partnership with CleanArc Data Centers Operating, LLC. On September 9, 2025, the Caroline County Board of Supervisors unanimously approved an economic development performance agreement with CleanArc to transform the long-vacant flea market site just off I-95. The agreement allows for the phased construction of three initial data center buildings, each measuring roughly 500,000 square feet, which CleanArc plans to lease to major operators. The project represents one of the county’s largest-ever private investments. While CleanArc has not released a final capital cost, county filings suggest the development could reach into the multi-billion-dollar range over its full buildout. Key provisions include:

Local hiring: At least 50 permanent jobs at no less than 150% of the prevailing county wage.

Revenue sharing: Caroline County will provide annual incentive grants equal to 25% of incremental tax revenue generated by the campus.

Water stewardship: CleanArc is prohibited from using potable county water for data center cooling, requiring the developer to pursue alternative technologies such as non-potable sources, recycled water, or advanced liquid cooling systems.

Local officials have emphasized the deal’s importance for diversifying the county’s tax base, while community observers will be watching closely to see which cooling strategies CleanArc adopts in order to comply with the water-use restrictions.

Google to Build $10 Billion Data Center Campus in Arkansas

Moses Tucker Partners, one of Arkansas’

Read More »

Hyperion and Alice & Bob Call on HPC Centers to Prepare Now for Early Fault-Tolerant Quantum Computing

As the data center industry continues to chase greater performance for AI and scientific workloads, a new joint report from Hyperion Research and Alice & Bob is urging high performance computing (HPC) centers to take immediate steps toward integrating early fault-tolerant quantum computing (eFTQC) into their infrastructure. The report, “Seizing Quantum’s Edge: Why and How HPC Should Prepare for eFTQC,” paints a clear picture: the next five years will demand hybrid HPC-quantum workflows if institutions want to stay at the forefront of computational science. According to the analysis, up to half of current HPC workloads at U.S. government research labs—Los Alamos National Laboratory, the National Energy Research Scientific Computing Center, and Department of Energy leadership computing facilities among them—could benefit from the speedups and efficiency gains of eFTQC. “Quantum technologies are a pivotal opportunity for the HPC community, offering the potential to significantly accelerate a wide range of critical science and engineering applications in the near-term,” said Bob Sorensen, Senior VP and Chief Analyst for Quantum Computing at Hyperion Research. “However, these machines won’t be plug-and-play, so HPC centers should begin preparing for integration now, ensuring they can influence system design and gain early operational expertise.” The HPC Bottleneck: Why Quantum is Urgent The report underscores a familiar challenge for the HPC community: classical performance gains have slowed as transistor sizes approach physical limits and energy efficiency becomes increasingly difficult to scale. Meanwhile, the threshold for useful quantum applications is drawing nearer. Advances in qubit stability and error correction, particularly Alice & Bob’s cat qubit technology, have compressed the resource requirements for algorithms like Shor’s by an estimated factor of 1,000. Within the next five years, the report projects that quantum computers with 100–1,000 logical qubits and logical error rates between 10⁻⁶ and 10⁻¹⁰ will accelerate applications across materials science, quantum

Read More »

Google Partners With Utilities to Ease AI Data Center Grid Strain

Transmission and Power Strategy

These agreements build on Google’s growing set of strategies to manage electricity needs. In June of 2025, Google announced a deal with CTC Global to upgrade transmission lines with high-capacity composite conductors that increase throughput without requiring new towers. In July 2025, Google and Brookfield Asset Management unveiled a hydropower framework agreement worth up to $3 billion, designed to secure firm clean energy for data centers in PJM and Eastern markets. Alongside renewable deals, Google has signed nuclear supply agreements as well, most notably a landmark contract with Kairos Power for small modular reactor capacity. Each of these moves reflects Google’s effort to create more headroom on the grid while securing firm, carbon-free power.

Workload Flexibility and Grid Innovation

The demand-response strategy is uniquely suited to AI data centers because of workload diversity. Machine learning training runs can sometimes be paused or rescheduled, unlike latency-sensitive workloads. This flexibility allows Google to throttle certain compute-heavy processes in coordination with utilities. In practice, Google can preemptively pause or shift workloads when notified of peak events, ensuring critical services remain uninterrupted while still creating significant grid relief (a simplified sketch of this pattern follows below).

Local Utility Impact

For utilities like I&M and TVA, partnering with hyperscale customers has a dual benefit: stabilizing the grid while keeping large customers satisfied and growing within their service territories. It also signals to regulators and ratepayers that data centers, often criticized for their heavy energy footprint, can actively contribute to reliability. These agreements may help avoid contentious rate cases or delays in permitting new power plants.

Policy, Interconnection Queues, and the Economics of Speed

One of the biggest hurdles for data center development today is the long wait in interconnection queues. In regions like PJM Interconnection, developers often face waits of three to five years before new projects can connect
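A schematic sketch of the pause-and-shift demand-response pattern described in the Workload Flexibility section above; the job names and the pause/resume hook are hypothetical stand-ins for illustration, not Google's actual systems:

```python
# Hypothetical sketch: pause deferrable ML training during a utility peak event.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    deferrable: bool  # training can often wait; latency-sensitive serving cannot
    running: bool = True

def shed_load(jobs: list[Job], peak_event: bool) -> None:
    """Pause deferrable jobs during a grid peak; resume them when it clears."""
    for job in jobs:
        job.running = (not peak_event) if job.deferrable else True

jobs = [Job("llm-training", deferrable=True), Job("search-serving", deferrable=False)]
shed_load(jobs, peak_event=True)
print([(j.name, j.running) for j in jobs])  # training paused, serving untouched
```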

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.) John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
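A minimal sketch of the LLM-as-judge pattern mentioned above, with majority voting across three models; `call_model` is a hypothetical stand-in for whatever LLM client an organization actually uses, not a real provider API:

```python
# Hypothetical sketch: three cheap models vote on whether an agent's answer passes.
from collections import Counter

def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real LLM client; should return 'pass' or 'fail'."""
    raise NotImplementedError  # wire up your provider of choice here

def judge(answer: str, judges: list[str]) -> str:
    """Majority vote across several judge models, as the passage describes."""
    prompt = f"Does this answer satisfy the task? Reply pass or fail.\n\n{answer}"
    votes = Counter(call_model(m, prompt) for m in judges)
    return votes.most_common(1)[0][0]

# Example (once call_model is implemented):
# verdict = judge(agent_answer, ["judge-model-a", "judge-model-b", "judge-model-c"])
```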

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »