Stay Ahead, Stay ONMINE

USEDC Plans to Deploy up to $1B Primarily in Permian, Sees Job Growth

In a statement sent to Rigzone this week, U.S. Energy Development Corporation (USEDC) announced “plans to deploy up to $1 billion during 2025, primarily in the Permian Basin”.

USEDC said in the statement that its announcement follows a record-breaking year during which USEDC deployed nearly $800 million into operated and non-operated projects. Last year, USEDC evaluated over 220 opportunities and completed 29 transactions, the company highlighted in the statement, pointing out that these were both increases from 2023 totals.

USEDC also noted in the statement that the company improved cost efficiency, “seeing reductions in cost per lateral foot while maintaining strong productivity”. In 2023, USEDC closed 19 transactions and deployed nearly $600 million, the company said in the statement. It highlighted that most of these were in the Permian Basin, “with other projects in the Barnett, Haynesville, and Powder River basins”.

“Building on the momentum of 2024, USEDC is entering 2025 with a similar growth mindset, aiming to invest up to $1 billion in U.S.-operated and non-operated oil and gas projects, primarily in the Permian Basin,” USEDC said in the statement.

Rigzone asked the company if this $1 billion investment will result in any additional jobs or job retention. Responding to the question, a USEDC spokesperson said, “yes, we expect our anticipated capital deployment aims for 2025 to facilitate further opportunities to grow and strengthen our team”.

In the statement sent to Rigzone this week, USEDC said it expects the Permian Basin to remain the primary focus of its investment in 2025 due largely to the economics of drilling and operating wells in the basin. The company said in the statement that it has experienced consistent results and is confident in its ability to continue to acquire high-potential Permian Basin properties and efficiently manage the costs of operated and non-operated ventures.

“We have built a strong track record of sourcing and transacting on high-quality opportunities, and our ability to deploy capital efficiently continues to drive strong results,” said Jordan Jayson, Chairman and CEO of USEDC.

“Our approach remains the same – we will continue to evaluate opportunities that align with our disciplined investment strategy and deliver value to our partners. With a strong foundation and a targeted approach, we are well-positioned to build on our momentum entering 2025,” he added.

“Our long-term acquisition and production strategies continue to generate solid performance across a portfolio of more than 2,000 wells. Despite global price volatility and market uncertainty, the energy market remained relatively stable, and our reputation for completing deals resulted in a record flow of successful transactions and capital deployment in 2024,” Jayson went on to state.  

In a statement posted on its site in December last year, USEDC Executive Vice President Matthew Iak said, “despite the geopolitical uncertainty in the U.S. and the rest of the world in 2024, the energy markets have remained relatively stable, and deal flow has been strong”.

“It is almost paradoxical that during a tumultuous year, globally and domestically, the energy market’s remarkable achievement has been its truly unremarkable stability,” he added.

“For USEDC, we continued to see a steady, attractive deal flow, many at advantageous price levels for companies with a solid capital structure and robust infrastructure,” he went on to state.

“We continue to actively pursue and invest in deals within the Permian Basin, recognizing it as one of the best areas for predictable productivity and returns,” he continued.

In another statement posted on its site in June last year, USEDC said it was “strategically expanding its operations in the prolific Permian Basin”.

“Building on a series of recent successes, the company is poised for significant growth and development in one of the most productive oil and gas regions in the United States,” the company added in that statement.

USEDC describes itself as a privately held exploration and production firm that manages assets for itself and its partners. The company is headquartered in the Dallas-Fort Worth metro area and has invested in, operated, and/or drilled approximately 4,000 wells in 13 states and Canada and deployed more than $2 billion on behalf of itself and its partners, according to its website.

To contact the author, email [email protected]

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Extreme plots enterprise marketplace for AI agents, tools, apps

Extreme Networks this week previewed an AI marketplace where it plans to offer a curated catalog of AI tools, agents and applications. Called Extreme Exchange, it’s designed to give enterprise customers a way to discover, deploy, and create AI agents, microapps, and workflows in minutes rather than developing such components

Read More »

Top quantum breakthroughs of 2025

The Helios quantum computing platform is available to customers through Quantinuum’s cloud service and on-premises offering. HSBC is using IBM’s Heron quantum computer to improve its bond trading predictions by 34% compared to classical computing. Caltech physicists create 6,100-qubit array.

Read More »

How enterprises are rethinking online AI tools

A second path, which enterprises liked, had only about 35% buy-in but generated the most enthusiasm. It is to use an online AI tool that offers more than a simple answer to a question, something more like an “interactive AI agent” than a chatbot. Two that got all the attention are

Read More »

TotalEnergies Wins 15-Year Google Contract to Supply Renewable Power

TotalEnergies SE has signed a deal to supply Google with a total of 1.5 terawatt hours (TWh) of certified green electricity for 15 years to support the tech giant’s data center operations in Ohio. The power will come from the Montpelier solar project in Ohio, which is “nearing completion” and will be connected to the PJM grid system, a joint statement said. “The deal reflects Google’s strategy of enabling new, carbon-free energy to the grid systems where they operate”, the statement said. “It also aligns with TotalEnergies’ strategy to deliver tailored energy solutions for data centers, which accounted for almost three percent of the world’s energy demand in 2024”. “TotalEnergies is deploying a 10-GW portfolio in the United States, with onshore solar, wind and battery storage projects, one GW of which is located in the PJM market in the northeast of the country, and four GW on the ERCOT market in Texas”, the statement added. Stephane Michel, TotalEnergies’ president for gas, renewables and power, said, “This agreement illustrates TotalEnergies’ ability to meet the growing energy demands of major tech companies by leveraging its integrated portfolio of renewable and flexible assets. It also contributes to achieving our target of 12 percent profitability in the power sector”. This is the second data-center green power supply agreement announced by TotalEnergies this month. On November 4 it said it had bagged a 10-year contract to supply Data4 data centers in Spain with a total of 610 gigawatt hours (GWh) of renewable electricity starting in 2026. The power will come from Spanish wind and solar farms with a combined capacity of 30 MW. The plants “are about to start production”, a joint statement said. “As European leader in the data center industry, Data4 is now established in six countries, and announced its plan to invest nearly EUR 2 billion [$2.32 billion] by 2030 to

Read More »

Meren Bumps Up Production Guidance

Meren Energy Inc on Thursday raised its projected entitlement output for 2025 from 32,000-37,000 barrels of oil equivalent per day (boepd) to 34,500-37,500 boepd. The Vancouver, Canada-based company, which explores and develops oil and gas in Africa, also revised up its forecast for working-interest production from 28,000-33,000 boepd to 30,000-33,000 boepd. Meren, which currently derives its production offshore Nigeria, defines entitlement production as “calculated using the economic interest methodology and includes cost recovery oil, royalty oil and profit oil”. Working-interest production, according to Meren, is derived by multiplying project volumes by the company’s effective working interest in each license. In the third quarter, Meren, which this year rebranded from Africa Oil Corp, produced 35,600 boepd, down from 41,200 boepd in Q3 2024. Meren derives its production from Akpo and Egina, both operated by TotalEnergies SE, and Chevron Corp-operated Agbami. Production enhancement and exploration activities are progressing in the fields. “Following the break to the Akpo/Egina (PPL 2/3) drilling campaign in Q3 2025, efforts are underway to recommence the campaign”, Meren said. “As previously communicated, this break will allow for the interpretation of 4D seismic data to enhance the maturation of future infill well opportunities. Accordingly, the aim is to secure a deepwater drilling rig within the gap and start with the drilling of the Akpo Far East near-field prospect, followed by the drilling of further development wells on Akpo and Egina fields. “Akpo Far East is an infrastructure-led exploration opportunity that in case of commercial exploration success, presents an attractive short cycle, high-return investment opportunity that would utilize the existing Akpo facilities. Akpo Far East prospect has an unrisked, best estimate, gross field prospective resource volume of 143.6 MMboe. 
The targeted hydrocarbons are predicted to be light, high gas-oil-ratio oil equivalent to those found in the Akpo field. If successful,

Read More »
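Meren’s working-interest measure, as defined above, reduces to simple arithmetic: gross project volumes scaled by the company’s effective stake in each license. A minimal sketch in Python, using purely illustrative volumes and stakes (not Meren’s actual figures):

```python
# Working-interest production: gross project volumes multiplied by the
# company's effective working interest in each license.
# All numbers below are hypothetical, for illustration only.

licenses = {
    "Akpo":   {"gross_boepd": 120_000, "working_interest": 0.16},
    "Egina":  {"gross_boepd": 150_000, "working_interest": 0.15},
    "Agbami": {"gross_boepd": 80_000,  "working_interest": 0.05},
}

def working_interest_production(licenses: dict) -> float:
    """Sum each license's gross volume times the effective working interest."""
    return sum(v["gross_boepd"] * v["working_interest"] for v in licenses.values())

total = working_interest_production(licenses)
print(f"net working-interest production: {total:,.0f} boepd")
```

Entitlement production would add cost recovery, royalty and profit oil on top of this, which depends on the terms of each production-sharing contract rather than a single multiplier.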

Jade Secures Preliminary Funding Deal for Mongolian CBM-to-LNG Project

Zhengzhou Langrun Intelligent Equipment Co Ltd has signed a non-binding letter of intent to provide up to $46 million (AUD 70 million) in financing for a coal bed methane (CBM)-to-liquefied natural gas (LNG) project by Jade Gas Holdings Ltd in Mongolia. The agreement is for the Red Lake gas field, part of the Australian company’s flagship project with the Mongolian government’s Erdenes Methane LLC to develop the Tavantolgoi XXXIII unconventional oil basin (TTCBM Project). Red Lake has 246 billion cubic feet of 2C gross unrisked contingent resources, according to Jade. The Chinese CBM-focused gas equipment manufacturer would fund drilling and production for the next 18 wells in the field, Jade said in a stock filing. Jade has already drilled seven Red Lake wells, according to the company. The “non-dilutive financing” would also cover surface facilities for gathering, processing and liquefying gas produced from the field into LNG. The deal also includes “a low upfront capital outlay option, to be funded by future Jade revenue”, Jade said. The parties agreed to consider expanding the terms to accommodate all 175 gas production wells in Red Lake’s first-phase development. Phase 1 involves 20 production wells, including two that came online in June, according to Jade. “Langrun’s expertise in the gas industry in China and in particular in CBM offers a great fit for Jade as the company seeks options to fast-track development of the Red Lake gas field and to optimize gas production for faster access to customer markets and ultimately early revenue”, Jade said. “Subject to agreement of definitive documentation, and government and regulator cooperation and other approvals, the Red Lake gas field could potentially be developed to cover purification, pipeline and other transport, compression (for potential production of CNG), liquefaction (for production of LNG), refueling station construction, enabling gas sales for vehicle,

Read More »

Var Energi Confirms Oil Discovery Near Goliat

Var Energi ASA on Thursday confirmed oil in the Zagato North appraisal well, located 10 kilometers (6.21 miles) north of its operated Goliat field on Norway’s side of the Barents Sea. Zagato North, or well 7122/8-4 S, yielded estimated gross recoverable resources of up to three million barrels of oil equivalent (MMboe) in the Klappmyss and Realgrunnen formations, according to a press release by the Stavanger, Norway-based oil and gas explorer and producer. The discovery is part of Production License 229, operated by Var Energi with a 65 percent stake, with Equinor as partner holding 35 percent. It is the 13th well drilled in the production license, awarded under the Barents Sea Project in 1997, the Norwegian Offshore Directorate (NOD) said separately. The partners are considering tying the discovery to existing Goliat infrastructure. The discovery was proven in February. The well aimed to delineate the 7122/8-3 S (Zagato) discovery in Lower Jurassic-Upper Triassic and Middle Triassic reservoir rocks in the Realgrunnen Subgroup and the Kobbe Formation respectively. “Well 7122/8-4 S encountered an 11-meter [36.09 feet] oil column in the Tubaen Formation in the Realgrunnen Subgroup in reservoir rocks totaling 8.5 meters with good reservoir quality”, the NOD said. “The oil/water contact was encountered 1,523 meters below sea level. “Additional reservoir rocks were encountered in the Kobbe Formation totaling 48 meters with moderate reservoir quality, but the reservoirs were aquiferous. “An 80-meter oil column was also proven in the Klappmyss Formation in sandstone layers totaling one meter with poor reservoir quality. The oil/water contact was not encountered. “The well was not formation-tested, but extensive data acquisition and sampling were carried out. “Appraisal well 7122/8-4 S was drilled to respective measured/vertical depths of 2986/2793 meters below sea level, and was terminated in the Klappmyss Formation in the Lower Triassic”. Zagato North, which has a

Read More »

A new kind of self-service: empowering utilities to shape their own tech

When utilities are empowered to meet their own tech needs, the whole system benefits. Across the energy industry, the idea of self-service has long been aimed at end-customers, helping them check their usage, change a tariff, or resolve an issue without calling a support team. But in an era of rapid digital transformation, a new kind of self-service is emerging: one aimed at utilities themselves. Today, it’s becoming increasingly critical that utilities be able to serve their own tech needs, configuring their tech systems to build new products, refine processes, and connect systems without complicated, lengthy coding and other outside help. Independently configurable tech is about giving teams direct control of their tools so they can respond to challenges and innovate at speed. As grids decentralize and customer expectations rise, utilities can’t afford to get caught up in coding request tickets. Many utilities still struggle with cumbersome, disconnected and inflexible systems. Now, fortunately, a new generation of integrated, configurable tech is here, laying the groundwork for utilities to champion their changing industry.

Rigid tech is holding utilities back

Clunky tech remains one of the biggest barriers to utility innovation. Many utilities still rely on a patchwork of disconnected legacy systems – for billing, metering, customer care, and field operations, to name a few. These systems are often siloed, with data held in a range of different forms and formats. Even relatively small updates – building a new rate, or tweaking a debt collection process – often demand weeks of specialist work across several siloed platforms, followed by more effort to stitch those updates together without breaking everything. This isn’t just slow and costly; it wears people down. Talented teams lose faith when they can’t fix what’s broken or move ideas forward.

Empowering utilities to serve their own tech needs

Bringing

Read More »

Crude Settles Higher

Oil eked out gains, rebounding slightly from the previous trading day’s sharp slump as traders weighed the outlook for a record surplus against supply risks from US sanctions. West Texas Intermediate rose 0.3% to settle under $59 a barrel after losing almost 4.2% on Wednesday, its biggest drop since June. Expectations for a long-awaited surplus were bolstered when the International Energy Agency flagged a deteriorating outlook for a sixth consecutive month, saying in a report on Thursday that supply will exceed demand by just over four million barrels a day next year.  Hours later, a US government report showed crude inventories rose 6.4 million barrels last week, the biggest increase since July and markedly higher than expected.  Both announcements came a day after producer group OPEC — which has been restoring idled capacity this year — said that global supply had topped demand in the third quarter, flipping its earlier estimate for the period from a shortfall. The bearish outlook for next year has weighed on oil prices afresh in recent days, with a key indicator — WTI’s prompt spread — sinking into contango. That pricing pattern, with the nearest contracts trading at discounts to further-out ones, signals ample short-term supplies, though it also clawed back into bearish territory on Thursday. At the same time, the Trump administration has moved to raise the pressure on Russia to end the war in Ukraine, including sanctions on Rosneft PJSC and Lukoil PJSC. With days to go until sanctions fully kick in, The Carlyle Group Inc. is exploring its options to buy Lukoil’s foreign assets, Reuters reported.  And bearish momentum on the news of rising US crude inventories was in part undercut by indications that product inventories fell across the board while exports picked up, a sign of resilient consumption at home and

Read More »
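The “prompt spread” referenced above is just the price difference between the nearest futures contract and the next one out, and its sign names the market structure. A small sketch of that convention (prices are hypothetical, not actual WTI quotes):

```python
# Prompt spread: front-month futures price minus second-month price.
# Negative spread (front cheaper than next month) = contango, a signal of
# ample near-term supply; positive spread = backwardation.
# Prices below are hypothetical.

def prompt_spread(front_month: float, second_month: float) -> float:
    """Difference between the nearest contract and the next one out."""
    return front_month - second_month

def market_structure(front_month: float, second_month: float) -> str:
    """Label the curve shape implied by the prompt spread."""
    spread = prompt_spread(front_month, second_month)
    if spread < 0:
        return "contango"
    if spread > 0:
        return "backwardation"
    return "flat"

# Front month trading at a discount to the second month:
print(market_structure(58.70, 59.05))
```

With the front month at a discount, the function reports contango, matching the bearish near-term reading described in the article.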

AMD outlines ambitious plan for AI-driven data centers

“There are very beefy workloads that you must have that performance for to run the enterprise,” he said. “The Fortune 500 mainstream enterprise customers are now … adopting Epyc faster than anyone. We’ve seen a 3x adoption this year. And what that does is drives back to the on-prem enterprise adoption, so that the hybrid multi-cloud is end-to-end on Epyc.” One of the key focus areas for AMD’s Epyc strategy has been its ecosystem build-out. It has almost 180 platforms, from racks to blades to towers to edge devices, and 3,000 solutions in the market on top of those platforms. One of the areas where AMD pushes into the enterprise is what it calls industry or vertical workloads. “These are the workloads that drive the end business. So in semiconductors, that’s telco, it’s the network, and the goal there is to accelerate those workloads and either driving more throughput or drive faster time to market or faster time to results. And we almost double our competition in terms of faster time to results,” said McNamara. And it’s paying off. McNamara noted that over 60% of the Fortune 100 are using AMD, and that’s growing quarterly. “We track that very, very closely,” he said. The other question is whether AMD is acquiring new customers, ones adopting Epyc for the first time. “We’ve doubled that year on year.” AMD didn’t just brag, it laid out a road map for the next two years, and 2026 is going to be a very busy year. That will be the year that new CPUs, both client and server, built on the Zen 6 architecture begin to appear. On the server side, that means the Venice generation of Epyc server processors. Zen 6 processors will be built on a 2-nanometer design generated by (you guessed

Read More »

Building the Regional Edge: DartPoints CEO Scott Willis on High-Density AI Workloads in Non-Tier-One Markets

When DartPoints CEO Scott Willis took the stage on “the Distributed Edge” panel at the 2025 Data Center Frontier Trends Summit, his message resonated across a room full of developers, operators, and hyperscale strategists: the future of AI infrastructure will be built far beyond the nation’s tier-one metros. On the latest episode of the Data Center Frontier Show, Willis expands on that thesis, mapping out how DartPoints has positioned itself for a moment when digital infrastructure inevitably becomes more distributed, and why that moment has now arrived. DartPoints’ strategy centers on what Willis calls the “regional edge”—markets in the Midwest, Southeast, and South Central regions that sit outside traditional cloud hubs but are increasingly essential to the evolving AI economy. These are not tower-edge micro-nodes, nor hyperscale mega-campuses. Instead, they are regional data centers designed to serve enterprises with colocation, cloud, hybrid cloud, multi-tenant cloud, DRaaS, and backup workloads, while increasingly accommodating the AI-driven use cases shaping the next phase of digital infrastructure. As inference expands and latency-sensitive applications proliferate, Willis sees the industry’s momentum bending toward the very markets DartPoints has spent years cultivating. Interconnection as Foundation for Regional AI Growth A key part of the company’s differentiation is its interconnection strategy. Every DartPoints facility is built to operate as a deeply interconnected environment, drawing in all available carriers within a market and stitching sites together through a regional fiber fabric. Willis describes fiber as the “nervous system” of the modern data center, and for DartPoints that means creating an interconnection model robust enough to support a mix of enterprise cloud, multi-site disaster recovery, and emerging AI inference workloads. 
The company is already hosting latency-sensitive deployments in select facilities—particularly inference AI and specialized healthcare applications—and Willis expects such deployments to expand significantly as regional AI architectures become more widely

Read More »

Key takeaways from Cisco Partner Summit

Brian Ortbals, senior vice president at World Wide Technology, one of Cisco’s biggest and most important partners, stated: “Cisco engaged partners early in the process and took our feedback along the way. We believe now is the right time for these changes as it will enable us to capitalize on the changes in the market.” The reality is, the more successful its more-than-half-a-million partners are, the more successful Cisco will be.

Platform approach is coming together

When Jeetu Patel took the reins as chief product officer, one of his goals was to make the Cisco portfolio a “force multiplier.” Patel has stated repeatedly that, historically, Cisco acted more as a technology holding company with good products in networking, security, collaboration, data center and other areas. In this case, product breadth was not an advantage, as everything must be sold as “best of breed,” which is a tough ask of the salesforce and partner community. Since then, there have been many examples of the coming together of the portfolio to create products that leverage the breadth of the platform. The latest is the Unified Edge appliance, an all-in-one solution that brings together compute, networking, storage and security. Cisco has been aggressive with AI products in the data center, and Cisco Unified Edge complements that work with a device designed to bring AI to edge locations. This is ideally suited for retail, manufacturing, healthcare, factories and other industries where it’s more cost-effective and performant to run AI where the data lives.

Read More »

AI networking demand fueled Cisco’s upbeat Q1 financials

Customers are very focused on modernizing their network infrastructure in the enterprise in preparation for inferencing and AI workloads, Robbins said. “These things are always multi-year efforts,” and this is only the beginning, Robbins said. The AI opportunity “As we look at the AI opportunity, we see customer use cases growing across training, inferencing, and connectivity, with secure networking increasingly critical as workloads move from the data center to end users, devices, and agents at the edge,” Robbins said. “Agents are transforming network traffic from predictable bursts to persistent high-intensity loads, with agentic AI queries generating up to 25 times more network traffic than chatbots.” “Instead of pulling data to and from the data center, AI workloads require models and infrastructure to be closer to where data is created and decisions are made, particularly in industries such as retail, healthcare, and manufacturing.” Robbins pointed to last week’s introduction of Cisco Unified Edge, a converged platform that integrates networking, compute and storage to help enterprise customers more efficiently handle data from AI and other workloads at the edge. “Unified Edge enables real-time inferencing for agentic and physical AI workloads, so enterprises can confidently deploy and manage AI at scale,” Robbins said. On the hyperscaler front, “we see a lot of solid pipeline throughout the rest of the year. The use cases, we see it expanding,” Robbins said. “Obviously, we’ve been selling networking infrastructure under the training models. We’ve been selling scale-out. We launched the P200-based router that will begin to address some of the scale-across opportunities.” Cisco has also seen great success with its pluggable optics, Robbins said. “All of the hyperscalers now are officially customers of our pluggable optics, so we feel like that’s a great opportunity. They not only plug into our products, but they can be used with other companies’

Read More »

When the Cloud Leaves Earth: Google and NVIDIA Test Space Data Centers for the Orbital AI Era

On November 4, 2025, Google unveiled Project Suncatcher, a moonshot research initiative exploring the feasibility of AI data centers in space. The concept envisions constellations of solar-powered satellites in Low Earth Orbit (LEO), each equipped with Tensor Processing Units (TPUs) and interconnected via free-space optical laser links. Google’s stated objective is to launch prototype satellites by early 2027 to test the idea and evaluate scaling paths if the technology proves viable. Rather than a commitment to move production AI workloads off-planet, Suncatcher represents a time-bound research program designed to validate whether solar-powered, laser-linked LEO constellations can augment terrestrial AI factories, particularly for power-intensive, latency-tolerant tasks. The 2025–2027 window effectively serves as a go/no-go phase to assess key technical hurdles including thermal management, radiation resilience, launch economics, and optical-link reliability. If these milestones are met, Suncatcher could signal the emergence of a new cloud tier: one that scales AI with solar energy rather than substations.

Inside Google’s Suncatcher Vision

Google has released a detailed technical paper titled “Towards a Future Space-Based, Highly Scalable AI Infrastructure Design.” The accompanying Google Research blog describes Project Suncatcher as “a moonshot exploring a new frontier” – an early-stage effort to test whether AI compute clusters in orbit can become a viable complement to terrestrial data centers. The paper outlines several foundational design concepts:

Orbit and Power

Project Suncatcher targets Low Earth Orbit (LEO), where solar irradiance is significantly higher and can remain continuous in specific orbital paths. Google emphasizes that space-based solar generation will serve as the primary power source for the TPU-equipped satellites.

Compute and Interconnect

Each satellite would host Tensor Processing Unit (TPU) accelerators, forming a constellation connected through free-space optical inter-satellite links (ISLs). Together, these would function as a disaggregated orbital AI cluster, capable of executing large-scale batch and training workloads.

Downlink

Read More »

Cloud-based GPU savings are real – for the nimble

The pattern points to an evolving GPU ecosystem: while top-tier chips like Nvidia’s new GB200 Blackwell processors remain in extremely short supply, older models such as the A100 and H100 are becoming cheaper and more available. Yet customer behavior may not match practical needs. “Many are buying the newest GPUs because of FOMO—the fear of missing out,” he added. “ChatGPT itself was built on older architecture, and no one complained about its performance.” Gil emphasized that managing cloud GPU resources now requires agility, both operationally and geographically. Spot capacity fluctuates hourly or even by the minute, and availability varies across data center regions. Enterprises willing to move workloads dynamically between regions—often with the help of AI-driven automation—can achieve cost reductions of up to 80%. “If you can move your workloads where the GPUs are cheap and available, you pay five times less than a company that can’t move,” he said. “Human operators can’t respond that fast; automation is essential.” Conveniently, Cast sells an AI automation solution. But it is not the only one, and the argument is valid: if spot pricing is cheaper at another location, you want to take it to keep the cloud bill down. Gil concluded by urging engineers and CTOs to embrace flexibility and automation rather than lock themselves into fixed regions or infrastructure providers. “If you want to win this game, you have to let your systems self-adjust and find capacity where it exists. That’s how you make AI infrastructure sustainable.”

Read More »
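The region-arbitrage argument above reduces to picking the cheapest spot price across regions and comparing it with a fixed-region bill. A minimal sketch with made-up prices (no real cloud pricing API is involved; region names and rates are hypothetical):

```python
# Pick the cheapest region for a GPU spot workload.
# Prices are hypothetical, in $/GPU-hour, keyed by region name.

spot_prices = {
    "us-east":  2.10,
    "eu-west":  1.45,
    "ap-south": 0.42,
}

def cheapest_region(prices: dict) -> tuple:
    """Return (region, price) for the lowest spot price currently on offer."""
    region = min(prices, key=prices.get)
    return region, prices[region]

# Compare staying in one region versus chasing the cheapest one.
fixed_region_cost = spot_prices["us-east"]
region, price = cheapest_region(spot_prices)
print(f"move to {region}: pay {fixed_region_cost / price:.1f}x less than staying put")
```

In practice, prices and availability shift by the minute, so the selection would run continuously inside an automation loop rather than once, which is the point Gil makes about human operators being too slow.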

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote, between them, $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »