bp sets new course, plans 75% group capex allocation to upstream oil and gas

bp is changing gears, reducing its overall capital expenditure budget while increasing upstream oil and gas investment as it reallocates spending. The company will lift upstream spend to 75% of group capex (70% oil, 30% gas on average) while becoming more selective with energy transition spending.

The news is part of the operator’s plan to ‘reset’ the business and improve performance, the company said in a Feb. 26 release.

“Today we have fundamentally reset bp’s strategy. We are reducing and reallocating capital expenditure to our highest-returning businesses to drive growth,” said Murray Auchincloss, chief executive officer. 

“We will grow upstream investment and production to allow us to produce high margin energy for years to come. We will focus our downstream on markets where we have leading integrated positions. And we will be very selective in our investment in the transition, including through innovative capital-light platforms,” he continued. 

Helge Lund, bp’s chair, said the board has worked with bp executives over the last 12 months as the company developed the new direction, “ensuring it reflects the significant changes we have seen in energy markets and our purpose of delivering energy to the world today and tomorrow.”

The company will reduce its total capital expenditure to $13-15 billion per year through 2027, $1-3 billion lower than in 2024. Capital expenditure for 2025 is targeted at $15 billion.

Upstream, downstream

Of that, $10 billion per year will be allocated to the upstream oil and gas business with the aim of growing production to 2.3-2.5 MMboe/d in 2030. bp said it aims to strengthen its upstream portfolio through access to discovered resources and “reloading [the] exploration hopper.” Ten new major projects are expected to start up by end-2027, and a further 8-10 by end-2030. Changes are expected to generate an additional $2 billion in operating cash flow in 2027.

Downstream, the company will focus on its core integrated positions, with investment of about $3 billion by 2027 and an expected $2 billion in structural cost reductions across the downstream portfolio. The company expects an additional $3.5-4 billion in downstream operating cash flow by 2027.

‘Capital-light’ energy transition investment, potential sales

With the renewed focus on oil and gas, the company is reducing its energy transition investment to $1.5-2 billion per year, $5 billion lower than previous guidance. 

The company said it will be “disciplined” in such investments, including biogas, biofuels, and EV charging, and will pursue “capital-light” partnerships in renewables, with a focus on investment in hydrogen and carbon capture and storage (CCS).

To help strengthen the balance sheet, bp is targeting structural cost reductions of $4-5 billion by end-2027 and $20 billion in divestments by 2027, including potential proceeds from adding a partner to Lightsource bp and a strategic review of Castrol, its global lubricants business.

Earlier this month, BP Europa SE announced plans to seek potential buyers for Ruhr Oel GmbH – BP Gelsenkirchen and associated refinery assets, with sale agreements targeted for 2025.

The company is targeting a net debt reduction to $14-18 billion by end-2027.

