
Russia Oil Tax Revenue Slides as Prices, Output Decline


Russia reaped less money from oil taxes last month as production fell and global crude prices declined.

Oil-tax proceeds dropped by more than 15 percent from a year earlier to 956.8 billion rubles ($11.4 billion), according to Bloomberg calculations based on Finance Ministry data published Thursday. 

Global crude prices have fallen amid slower demand growth in China, ample supplies from North and South America and the fallout from US President Donald Trump’s increasingly aggressive tariff policy.

Expectations for weaker global demand are “putting pressure on commodity prices,” Russia’s central bank said earlier this week. In February, the bank estimated an average 2025 Urals crude price for tax purposes of $65 a barrel and $60 in the next two years. Now it says the chance of lower prices has “increased somewhat.”

For the latest data, the Finance Ministry calculated taxes based on an average Urals price of $61.69 a barrel in February. That’s down almost 10 percent from a year earlier and compares with a drop of almost 16 percent in the global Brent benchmark.

The decline in tax proceeds also follows a reduction in oil output, with Russia saying it’s brought production into line with its OPEC+ quota. The country last year was among the group’s laggard members, and it still needs to make up for the months it overshot targets.

March’s total oil and gas revenue slumped 17 percent from a year earlier to 1.08 trillion rubles, the ministry’s data showed. Almost 89 percent of that came from crude and refined products.

State subsidies to Russia’s refiners have also dented the federal budget, with the government paying 100.3 billion rubles to producers of gasoline and diesel to supply the domestic market. 

Proceeds from the gas industry alone fell by almost a third from a year earlier to 124.5 billion rubles in March. That was driven by lower output as Russia’s piped exports to Europe halved following the end of a transit deal with Ukraine.
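The ministry figures quoted above hang together arithmetically. A quick back-of-the-envelope check (our arithmetic, using only the numbers in this article) reproduces the “almost 89 percent” crude share and the ruble-dollar conversion implied by the oil-tax figures:

```python
# Back-of-the-envelope checks using only the figures quoted in this article (our arithmetic).
total_march_bln_rub = 1_080.0   # March oil and gas revenue, billion rubles (1.08 trillion)
gas_only_bln_rub    = 124.5     # gas-industry proceeds, billion rubles
oil_tax_bln_rub     = 956.8     # oil-tax proceeds, billion rubles
oil_tax_bln_usd     = 11.4      # the same proceeds expressed in billion dollars

oil_share = (total_march_bln_rub - gas_only_bln_rub) / total_march_bln_rub
print(f"crude and refined products share: {oil_share:.1%}")   # ~88.5%, i.e. 'almost 89 percent'

implied_fx = oil_tax_bln_rub / oil_tax_bln_usd
print(f"implied rubles per dollar: {implied_fx:.0f}")         # ~84 rubles per dollar
```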




Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Cisco, Google Cloud offer enterprises new way to connect SD-WANs

Looking back, the rapid adoption of SaaS and cloud applications led to a WAN transformation and the emergence of SD-WAN via direct internet access, Sambi asserted. “Then, to enhance application performance, enterprises built colocation-based cloud on-ramps, which, while improving latency, introduced complexity and costs. This evolution led to a proliferation

Read More »

Fortinet embeds AI capabilities across Security Fabric platform

“By embedding FortiAI across the Fortinet Security Fabric platform, including new agentic AI capabilities, we’re empowering our customers to reduce the workload on their security and network analysts while improving the efficiency, speed, and accuracy of their security and networking operations,” said Michael Xie, founder, president, and chief technology officer

Read More »

Tailscale secures $160 million for its WireGuard-based VPN development

Building on WireGuard’s foundation At the heart of Tailscale’s technology is WireGuard, a modern VPN protocol that offers significant security and performance advantages over legacy solutions.  WireGuard is an open-source technology built in a way that minimizes the attack surface while providing greater performance than older VPN approaches. While WireGuard

Read More »
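The Tailscale excerpt above centers on WireGuard’s deliberately small cryptographic surface: a peer’s identity is nothing more than a Curve25519 key pair exchanged out of band. As an illustration only (our sketch, not Tailscale’s or WireGuard’s code; it assumes the third-party `cryptography` package is installed), the snippet below generates a key pair in the same base64-encoded format that `wg genkey` and `wg pubkey` produce:

```python
# Illustrative only: generate a WireGuard-style key pair (Curve25519, base64-encoded),
# the same key format produced by `wg genkey` / `wg pubkey`.
import base64
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives import serialization

private_key = X25519PrivateKey.generate()

private_b64 = base64.b64encode(
    private_key.private_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PrivateFormat.Raw,
        encryption_algorithm=serialization.NoEncryption(),
    )
).decode()

public_b64 = base64.b64encode(
    private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
).decode()

print("PrivateKey =", private_b64)
print("PublicKey  =", public_b64)
```

Because the rest of the protocol is equally fixed, there are few knobs to misconfigure, which is a large part of the reduced attack surface the excerpt describes.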

UALink releases inaugural GPU interconnect specification

UALink’s primary target for now is to provide an alternative to Nvidia’s high-bandwidth, low-latency direct interconnect technology for CPU-to-GPU and GPU-to-GPU connectivity, NVLink. NVLink is primarily used in InfiniBand-based networks. Given the spec’s Ethernet heritage, UALink is seen in most circles as working hand-in-hand with the Ultra Ethernet Consortium to help

Read More »

Spirit Energy forced to close Morecambe Bay platform over fire safety concerns

Spirit Energy has been forced to curtail natural gas production at a platform in Morecambe Bay in order to conduct a fire safety inspection, a spokesperson has confirmed. The company’s majority owner Centrica is seeking to redevelop the Morecambe Bay facility into a carbon capture and storage hub that would cater to industrial businesses seeking to reduce their carbon footprint. A spokesperson confirmed that the fire protection sprinkler network known as a deluge system on the CPC-1 platform is in the process of being tested, and that production has been “paused” to enable any necessary remedial measures. The Morecambe central processing complex in the Irish Sea will be shuttered for four weeks while the company conducts fire safety inspections at the behest of the health and safety regulator. The Morecambe Bay hub comprises three gas fields, North Morecambe, South Morecambe and Rhyl, which have produced natural gas for more than 30 years. The closure follows talks between the Health and Safety Executive (HSE) and Spirit Energy in advance of a routine inspection conducted on 26 March. The platform shutdown was initiated on 12 March. The operator subsequently agreed to a more wide-ranging inspection of deluge systems on the platform in the East Irish Sea, which is usually occupied by about 130 workers. The HSE has urged North Sea operators to improve their safety culture and issued new guidelines on maintaining deluge systems, which are designed to extinguish fires on offshore platforms. In December, it served Repsol with a prohibition notice after the Fulmar oil platform in the central North Sea failed a safety inspection due to concerns about its deluge system. Production of natural gas at the Morecambe Bay hub has been curtailed by about 7,500 barrels of oil equivalent per day, down from usual production of about 18,000

Read More »

UK ocean industry could hit £2tn, says maritime council chair

The UK lacks a “coherent approach” to monetising trade and industry in the country’s oceans, according to Ryan Mowat, director and chair at the Marine Science and Technology Council at the Society of Maritime Industries UK. The ocean and maritime industries could represent an economic value of approximately £2 trillion, Mowat said while speaking at the Ocean Business conference in Southampton on Wednesday. He called for a strategic approach across government, academia and industry to maximise opportunities for growth in the sector, urging the government to regularly survey and quantify the ocean landscape. While the hidden industry sector could enable an ocean economy worth £2tn, more ocean data, information and knowledge is needed to maximise that potential, according to Mowat. “We still see a lot of focus on traditional industries, such as shipping, offshore energy,” he said. “But we want to see a lot of focus as well on the more blue and sustainable economies or industries that are growing up in our ocean space.” That economy includes the development of energy and minerals, such as offshore wind as well as oil and gas industries operating in areas such as the North Sea. It also includes ocean conservation and environmental industries, climate mitigation efforts, maritime security and defence, transportation and trade, leisure and tourism, and living resources. Mowat said industry needs “government to be doing more to support”, with a national “joined up strategy” to map out the opportunities on offer from UK ocean enterprise. He called for a dialogue with ministers, in addition to funding for biannual surveys in order to estimate the size of the ocean economy and help people understand the value of its development as a potential social enterprise. He also argued that data collection and analysis will be

Read More »

The AI infrastructure race hits a political reality check

Joe Brettell is a partner at Prosody Group and Jeff Berkowitz is the founder and CEO of Delve DC, an AI-driven opposition research and market intelligence firm. As the race to deploy AI infrastructure intensifies, technology firms and power providers find themselves navigating an increasingly complex political and regulatory landscape. For years, states have aggressively courted data centers — offering tax incentives and infrastructure support in hopes of attracting high-tech jobs and future-focused investment. The rise of AI has only accelerated this trend, transforming data centers into critical infrastructure. But now, both the energy and tech sectors are discovering that their once-symbiotic relationship with state and local governments — and with each other — is growing more complicated. Much of this complexity becomes clear when examining the full picture — one surfaced through our long-time experience advising energy and infrastructure firms and in conversations with stakeholders in and around these industries, as well as an AI-driven deep dive into local, state and federal regulatory proposals. The reality check: Competing demands and mounting pressure The boom in AI-driven data center development has created unprecedented demand for land, electricity and water; a dynamic that has spawned growing opposition from communities, utility grid operators, wary regulators and environmental advocates. Power consumption is perhaps the most widely discussed concern — a dominant topic at the recently held CERA Week, and one that hits most residents directly. A single data center can use as much electricity as a mid-sized U.S. city. Utilities in states like Virginia, Illinois and Utah are struggling to meet soaring demand, raising concerns about grid reliability and equitable access to energy.  Meanwhile, pressure from investors to both keep pace and show progress on the massive investment in AI means tech companies need to move fast — often faster than regulators and

Read More »

Analysts Flag Key Price Determinant for Crude Oil

In a report sent to Rigzone by Standard Chartered Bank Commodities Research Head Paul Horsnell late Tuesday, analysts at the bank, including Horsnell, noted that, in their view, the key price determinant for crude oil over the past week was the U.S. tariff announcement. “The downwards vortex set in play by the announcement took Brent prices lower by more than $12 per barrel over just four trading days from the 2 April intra-day high ($74.95 per barrel) to the 7 April intra-day low of $62.51 per barrel,” the analysts said in the report. “Brent 30-trading day realized annualized volatility rose by 15.2 percentage points week on week to 35.7 percent at settlement on 7 April, with 10-trading day volatility rising 34 percentage points to 47.3 percent over the same period,” they added. “In all, we saw a sharp fall at the front of the curve accompanied by large trading ranges on the way down,” they continued. The Standard Chartered Bank analysts highlighted in the report that they have been asked by clients whether the scale of the fall was justified. They noted in the report that, in their view, “given market positioning and normal market dynamics, the fall was fully justified and could have gone significantly further.” “The U.S. tariff announcement was a severe shock to a market that was predominantly of the view that tariff rates would be limited, well-thought out, likely delayed and rapidly negotiated away, and would still lie within the ambit of the normal staid range of international trade diplomacy,” they added. “What was announced did not conform with the dominant oil market consensus view. Instead, the market immediately started to price in a significant reduction in expectations of global GDP, with U.S. recessionary risk in particular marked sharply higher,” they continued. The analysts stated in

Read More »
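The Standard Chartered excerpt above quotes 30- and 10-trading-day “realized annualized volatility” figures. The report does not spell out its methodology, but the standard convention is the standard deviation of daily log returns over the window, scaled by the square root of 252 trading days and expressed in percent. A minimal sketch of that convention (function and variable names are ours, not the bank’s):

```python
# Minimal sketch of N-trading-day realized volatility, annualized with sqrt(252)
# and expressed in percent. This is the common convention, not necessarily the
# exact methodology behind the figures quoted above.
import math

def realized_annualized_vol(closes, window):
    """closes: list of daily settlement prices, oldest first."""
    returns = [math.log(b / a) for a, b in zip(closes, closes[1:])]
    recent = returns[-window:]
    mean = sum(recent) / len(recent)
    variance = sum((r - mean) ** 2 for r in recent) / (len(recent) - 1)
    return math.sqrt(variance) * math.sqrt(252) * 100  # percent

# Example usage with hypothetical settlement prices:
# vol_30d = realized_annualized_vol(brent_settlements, window=30)
```

Because the 10-day window reacts faster than the 30-day window, a sudden sell-off like the one described shows up first and hardest in the shorter measure, which matches the larger jump quoted for 10-day volatility.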

MidOcean to Invest in Lake Charles LNG

Lake Charles LNG owner Energy Transfer LP has agreed non-binding terms for an investment by MidOcean Energy in the planned Louisiana project. Under the heads of agreement (HOA), MidOcean, part of EIG Global Energy Partners, will fund 30 percent of the construction cost. MidOcean will be entitled to receive 30 percent of the liquefied natural gas (LNG) production. That equates to about 5 million metric tons per annum (MMtpa), according to a joint statement Wednesday. “The HOA also provides that MidOcean Energy will have the option to arrange for gas supply for its share of LNG production and that MidOcean will commit to long-term gas transportation on Energy Transfer pipelines”, the companies said. “The obligations of Energy Transfer LNG and MidOcean Energy under the HOA will be subject to both parties’ determination to take a positive final investment decision as well as the satisfaction of other conditions precedent”. Tom Mason, president of Energy Transfer LNG, said, “MidOcean’s management team brings a wealth of LNG experience to the project. In addition, Energy Transfer and EIG already have an established relationship that will only be strengthened through this transaction”. “This agreement has the potential to transform MidOcean’s portfolio, providing a material volume of advantaged Atlantic Basin supply”, said MidOcean chief executive De la Rey Venter. “This complements our current assets, which are all located in the Asia-Pacific Basin. “Geographical diversity is a key enabler for value delivery from an LNG portfolio. MidOcean considers Lake Charles LNG to be one of the most advantaged US LNG projects under development”. Planned to have an export capacity of 16.45 MMtpa, Lake Charles LNG is fully permitted and would be built as a conversion from an existing brownfield regasification site with four LNG storage tanks, according to Energy Transfer. Meanwhile a decision is pending before the Department of Energy (DOE) on whether

Read More »
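A quick check (our arithmetic) shows how the “about 5 million metric tons per annum” figure in the excerpt above follows from MidOcean’s 30 percent share of the project’s planned 16.45 MMtpa export capacity:

```python
# Quick check of the 'about 5 MMtpa' offtake figure quoted above (our arithmetic).
planned_capacity_mmtpa = 16.45   # Lake Charles LNG planned export capacity
midocean_share = 0.30            # MidOcean's offtake share under the HOA

print(f"{midocean_share * planned_capacity_mmtpa:.1f} MMtpa")  # ~4.9 MMtpa, i.e. 'about 5 MMtpa'
```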

Malaysian Firm to Deliver Diesel, Jet Fuel to Swiss One

AGAPE ATP Corporation (ATPC) said it has entered into two sales and purchase agreements collectively worth $24 billion with Swiss One Oil & Gas AG for diesel and jet fuel. Under the terms of the agreements, its subsidiary ATPC Green Energy will supply EN590 10PPM diesel and Jet Fuel A1 to Swiss One over a 12-month period plus rolls and extensions, with an initial trial order comprising 200,000 metric tons of EN590 10PPM diesel and 2 million barrels of Jet Fuel A1, ATPC said in a news release. The trial shipment started in March, the Kuala Lumpur, Malaysia-based company said. Upon successful completion of the trial, the contract will transition into full-scale supply, with weekly deliveries of 500,000 metric tons of EN590 10PPM diesel and 2 million barrels of Jet Fuel A1 to meet growing global demand, according to the release. All deliveries will be executed through free-on-board procedures at major international ports. The agreement complies with global quality standards, with SGS or equivalent inspection authorities conducting independent quality assessments to ensure that the fuel meets ASTM/IP international standards, the company stated. The agreements build upon an earlier initial corporate purchase order signed in February, which laid the foundation for the procurement and supply of refined fuels, including Jet Fuel A1 and EN590 10PPM diesel, the company said. The initial order covered a trial shipment of 100,000 metric tons of EN590 10PPM diesel and one million barrels of Jet Fuel A1, and its successful completion led to long-term structured agreements between the parties. Dato’ Sri How Kok Choong, founder and global group CEO of ATPC, said, “Our initial [corporate purchase order] with Swiss One Oil & Gas AG was a crucial step in trust and operational efficiency in the oil and gas sector. The transition to a full-scale [sales and

Read More »

Google reaffirms $75B AI infra investment as cloud providers pursue divergent strategies

“We are witnessing a divergence in hyperscaler strategy,” noted Abhivyakti Sengar, practice director at Everest Group. “Google is doubling down on global, AI-first scale; Microsoft is signaling regional optimization and selective restraint. For enterprises, this changes the calculus.” Meanwhile, OpenAI is reportedly exploring building its own data center infrastructure to reduce reliance on cloud providers and increase its computing capabilities. Shifting enterprise priorities For CIOs and enterprise architects, these divergent infrastructure approaches present new considerations when planning AI deployments. Organizations must now evaluate not just immediate availability, but long-term infrastructure alignment with their AI roadmaps. “Enterprise cloud strategies for AI are no longer just about picking a hyperscaler — they’re increasingly about workload sovereignty, GPU availability, latency economics, and AI model hosting rights,” said Sanchit Gogia, CEO and chief analyst at Greyhound Research. According to Greyhound’s research, 61% of large enterprises now prioritize “AI-specific procurement criteria” when evaluating cloud providers — up from just 24% in 2023. These criteria include model interoperability, fine-tuning costs, and support for open-weight alternatives. The rise of multicloud strategies As hyperscalers pursue different approaches to AI infrastructure, enterprise IT leaders are increasingly adopting multicloud strategies as a risk mitigation measure.

Read More »

China’s rare earth export controls threaten enterprise IT hardware supply chains

“AI-first infrastructure rollouts — particularly those involving GPUs, edge accelerators, and high-efficiency cooling — are directly in the crosshairs,” Gogia noted. “So are quantum computing R&D efforts and high-reliability storage systems where thermal and magnetic materials matter.” China, responsible for 70% of global rare earth mining output and 87% of refined supply, poses a serious threat to enterprise IT hardware supply chains with these restrictions — especially for companies with AI-optimized server lines. AI chip production under threat The impact on semiconductor manufacturing comes at a critical time when enterprise demand for AI chips is soaring. Companies including Nvidia, AMD, Intel, and TSMC rely on rare earth elements during the manufacturing of advanced chips. “We see the greatest exposure in private data center expansion projects, AI inferencing at the edge, and next-gen device manufacturing, including specialized industrial IoT and robotics,” noted Gogia. Major cloud providers have been aggressively expanding their AI compute capacity, with substantial hardware refreshes planned for late 2025. These plans may now face delays or cost increases as chip manufacturers grapple with supply constraints. Pricing pressures to be felt in 3-6 months The immediate impact is expected to be limited as manufacturers work through existing inventory, but pricing pressure could emerge within 3-6 months, experts feel.

Read More »

DARPA backs multiple quantum paths in benchmarking initiative

Nord Quantique plans to use the money to expand its team, says Julien Camirand Lemyre, the company’s president, CTO and co-founder. That’s an opportunity to accelerate the development of the technology, he says. “By extension, what this will mean for enterprise users is that quantum solutions to real-world business problems will be available sooner, due to that acceleration,” he says. “And so enterprise customers need to also accelerate how they are thinking about adoption because the advantages quantum will provide will be tangible.” Lemyre predicts that useful quantum computers will be available for enterprises before the end of the decade. “In fact, there has been tremendous progress across the entire quantum sector in recent years,” he says. “This means industry needs to begin thinking seriously about how they will integrate quantum computing into their operations over the medium term.” “We’re seeing, with the deployment of programs like the QBI in the US and investments of billions of dollars from public and private investors globally, an increasing maturity of quantum technologies,” said Paul Terry, CEO at Photonic, which is betting on optically-linked silicon spin qubits. “Our architecture has been designed from day one to build modular, scalable, fault-tolerant quantum systems able to be deployed in data centers,” he said. He’s not the only one to mention fault-tolerance. DARPA stressed fault-tolerance in its announcement, and its selections point to the importance of error correction for the future of quantum computing. The biggest problem with today’s quantum computers is that the number of errors increases faster than the number of qubits, making them impossible to scale up. Quantum companies are working on a variety of approaches to reduce error rates to levels low enough that quantum computers can get big enough to actually do real work.

Read More »
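A toy calculation (ours, not a model of any company’s hardware mentioned above) makes the scaling problem concrete: if each physical operation fails independently with probability p, the chance that a circuit with n qubits and depth d runs cleanly is roughly (1 - p)^(n·d), which collapses as machines grow unless errors are corrected.

```python
# Toy model of why error rates dominate scaling: probability that a circuit of
# n qubits and depth d sees no error, assuming each of the n*d operations fails
# independently with probability p. Not a model of any specific hardware.
def p_no_error(n_qubits, depth, p_error):
    return (1 - p_error) ** (n_qubits * depth)

for n in (10, 100, 1000):
    print(n, "qubits:", p_no_error(n, depth=100, p_error=1e-3))
# 10 qubits   -> ~0.37 chance of a clean run
# 100 qubits  -> ~0.00005
# 1000 qubits -> effectively zero, hence the push for fault-tolerant error correction
```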

Zayo’s Fiber Bet: Scaling Long-Haul and Metro Networks for AI Data Centers

Zayo Group Holdings Inc. has emerged as one of the most aggressive fiber infrastructure players in North America, particularly in the context of AI-driven growth. With a $4 billion investment in AI-related long-haul fiber expansion, Zayo is positioning itself as a critical enabler of the AI and cloud computing boom. The company is aggressively expanding its long-haul fiber network, adding over 5,000 route miles to accommodate the anticipated 2-6X increase in AI-driven data center capacity by 2030. This initiative comes as AI workloads continue to push the limits of existing network infrastructure, particularly in long-haul connectivity. New Fiber Routes The new routes include critical connections between key AI data center hubs, such as Chicago-Columbus, Las Vegas-Reno, Atlanta-Ashburn, and Columbus-Indianapolis, among others. Additionally, Zayo is overbuilding seven existing routes to further enhance network performance, resiliency, and low-latency connectivity. This new development is a follow-on to 15 new long-haul routes representing over 5,300 route miles of new and expanded capacity deployed over the last five years. These route locations were selected based on expected data center growth, power availability, existing capacity constraints, and specific regional considerations. The AI Data Center Sector: A Significant Driver of Fiber Infrastructure The exponential growth of AI-driven data center demand means that the U.S. faces a potential bandwidth shortage. Zayo’s investments look to ensure that long-haul fiber capacity keeps pace with this growth, allowing AI data centers to efficiently transmit data between key markets. This is especially important as data center development locations are being driven more by land and power availability than by proximity to market. Emerging AI data center markets get the high-speed fiber they need, especially as they are moving away from expensive power regions (e.g., California, Virginia) to lower-cost locations (e.g., Ohio, Nevada, Midwest). Without the high-speed networking capabilities offered by

Read More »

Crusoe Adds 4.5 GW Natural Gas to Fuel AI, Expands Abilene Data Center to 1.2 GW

Crusoe and the Lancium Clean Campus: A New Model for Power-Optimized Compute Crusoe Energy’s 300-megawatt deployment at the Lancium Clean Campus in Abilene is a significant marker of how data center strategies are evolving to integrate more deeply with energy markets. By leveraging demand flexibility, stranded power, and renewable energy, Crusoe is following a path similar to some of the most forward-thinking projects in the data center industry. But it’s also pushing the model further—fusing AI and high-performance computing (HPC) with the next generation of power-responsive infrastructure. Here’s how Crusoe’s strategy compares to some of the industry’s most notable power-driven data center deployments: Google’s Oklahoma Data Center: Proximity to Renewable Growth A close parallel to Crusoe’s energy-centric site selection strategy is Google’s Mayes County data center in Oklahoma. Google sited its facility there to take advantage of abundant wind energy, aligning with the local power grid’s renewable capacity. Similarly, Crusoe is tapping into Texas’s deregulated energy market, optimizing for low-cost renewable power and the ability to flexibly scale compute operations in response to grid conditions. Google has also been an industry leader in time-matching workloads to renewable energy availability, something that Crusoe is enabling in real time through grid-responsive compute orchestration. Sabey Data Centers in Quincy: Low-Cost Power as a Foundation Another instructive comparison is Sabey Data Centers’ Quincy, Washington, campus, which was built around one of the most cost-effective power sources in the U.S.—abundant hydroelectric energy. Sabey’s long-term strategy has been to co-locate power-intensive compute infrastructure near predictable, low-cost energy sources. Crusoe’s project applies a similar logic but adapts it for a variable grid environment. Instead of relying on a fixed low-cost power source like hydro, Crusoe dynamically adjusts to real-time energy availability, a strategy that could become a model for future power-aware, AI-driven workloads. Compass and Aligned: Modular, Energy-Adaptive

Read More »

Executive Roundtable: Data Center Site Selection and Market Evolution in a Constrained Environment

For the third installment of our Executive Roundtable for the First Quarter of 2025, we asked our panel of seasoned industry experts about how the dynamics of data center site selection have never been more complex—or more critical to long-term success. In an industry where speed to market is paramount, operators must now navigate an increasingly constrained landscape in the age of AI, ultra cloud and hyperscale expansion, marked by fierce competition for land, tightening power availability, and evolving local regulations.  Traditional core markets such as Northern Virginia, Dallas, and Phoenix remain essential, but supply constraints and permitting challenges are prompting developers to rethink their approach. As hyperscalers and colocation providers push the boundaries of site selection strategy, secondary and edge markets are emerging as viable alternatives, driven by favorable energy economics, infrastructure investment, and shifting customer demand.  At the same time, power procurement is now reshaping the equation. With grid limitations and interconnection delays creating uncertainty in major hubs, operators are exploring new solutions, from direct utility partnerships to on-site generation with renewables, natural gas, and burgeoning modular nuclear concepts. The question now is not just where to build but how to ensure long-term operational resilience. As data center demand accelerates, operators face mounting challenges in securing suitable land, reliable power, and regulatory approvals in both established and emerging markets.  And so we asked our distinguished executive panel for the First Quarter of 2025, with grid capacity constraints, zoning complexities, and heightened competition shaping development decisions, how are companies refining their site selection strategies in Q1 2025 to balance speed to market, scalability, and sustainability? And, which North American regions are showing the greatest potential as the next wave of data center expansion takes shape?

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). (Pictured: John Deere’s autonomous 9RX Tractor, which farmers can oversee using an app.) While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »
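The excerpt above mentions the LLM-as-judge pattern in passing. As a hedged illustration of what that pattern looks like in practice (our sketch, not from the article or podcast; the model name and rubric are placeholders, and it assumes the `openai` Python package with an API key in the environment), one model grades another model’s draft against simple criteria:

```python
# Hypothetical sketch of the LLM-as-judge pattern: one model grades another
# model's output against a rubric. Model name and rubric are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge(task: str, answer: str) -> str:
    rubric = (
        "Score the ANSWER to the TASK from 1-5 for correctness and for "
        "completeness, then give a one-sentence justification."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder judge model
        messages=[
            {"role": "system", "content": rubric},
            {"role": "user", "content": f"TASK:\n{task}\n\nANSWER:\n{answer}"},
        ],
    )
    return response.choices[0].message.content

# As judging gets cheaper, several judge models can score the same answer
# and their scores can be aggregated, the multi-model idea the excerpt alludes to.
```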

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »