
What is the North Sea Transition Taskforce?


“There’s a lot at stake for the government,” said the head of the North Sea Transition Taskforce as he reflected on energy policy.

The taskforce was set up by the British Chambers of Commerce (BCC) to be a Bank of England style body for energy policy, following a recommendation from Aberdeen and Grampian Chamber of Commerce (AGCC).

The idea behind the initiative was to inform government policy to support domestic energy firms while the UK moves towards net zero targets.

North Sea Transition Taskforce chairman, Philip Rycroft, told Energy Voice: “The fact there is a taskforce demonstrates that there was a deep dissatisfaction in the industry about the current state of play, a frustration about where government policy was going, and this is an opportunity for government to hear, through a taskforce that has emerged from the industry and with other interests on it as well, about what a good transition looks like.”

He added that it is “in the government’s interest” to listen to the outcomes from the taskforce, which is set to produce its inaugural report next month.

This comes after the second meeting of the North Sea Transition Taskforce, which took place in Aberdeen city centre before its members were taken on a tour of various businesses and energy hubs across the city, including Aberdeen Harbour.

© Supplied by Kenny Elrick/DCTM
North Sea Transition Taskforce chairman, Philip Rycroft.

On why the government should take on board what the taskforce has to say, Rycroft added that the work being done by the group “covers all of their agendas”.

“It’s about growth. It’s about the optimal pathway in net zero. It’s about receipts for the public purse. It’s about good jobs. It’s about the business infrastructure that supports productivity growth, all the things that government is looking for,” he listed.

He suggested that if the Labour Government follows the taskforce’s recommendations, it will result in “win-win” situations.

Rycroft said that the body has “not been about damning government”, but that it aims to recommend against any policy that “risks impeding the transition”.

“Some of the things that government has done have had, I suspect, unintended consequences,” he explained.

“It’s in that capability to step back and to help government see this as a whole, see how all of the moving parts interact and how they can create a policy context which optimises the future of the North Sea, both for oil and gas and also the renewable energy that’s coming down the track as we move towards net zero.”

The recommendation for the taskforce’s formation came from an AGCC Energy Transition report.

With the group set to produce its first set of recommendations to government, the BCC was questioned on whether Labour would listen more closely to the taskforce.

Director general for BCC, Shevaun Haviland, responded: “They [the energy transition reports] are really important for this region. They’re really important for all the members of Aberdeen Chamber, but the Aberdeen Chamber came to us and together we said we want to do something bigger, we want to do something that covers the entire nation.”

Haviland added that “it’s really important that we’ve got all of the chambers working on this together”.

© Supplied by Kenny Elrick/DCTM
North Sea Transition Taskforce chairman, Philip Rycroft, and Director general for BCC, Shevaun Haviland.

In addition to collaboration between the chambers of commerce, the North Sea Transition Taskforce has brought together members from the GMB Union and various businesses across the UK supply chain.

Haviland said: “We have been very deliberate in ensuring that we’re covering a wide range of voices from industry, from the supply chain, from the environmental voice.

“And with the chambers involved, we have economic voice across all regions and nations. So, I think that gives it gravitas.”

On the recommendations that the taskforce will make, Rycroft said that “a lot of people are saying to us that you can’t deal with this piecemeal.”

He added: “It’s not about just adjusting this little bit here, a little bit there. There’s a lot of agencies; there’s a lot of bits of government that have some influence over all of this.”

The taskforce is going to argue for “a line of sight into the future” for energy firms in the UK.

“That gives you that certainty so you can reskill yourself, you can invest, you can grow your business. That’s what we’re about, and that’s where the focus of the report will lie,” the taskforce chair concluded.
