Stay Ahead, Stay ONMINE

North America Loses Rigs for 8 Straight Weeks

North America dropped four rigs week on week, according to Baker Hughes’ latest North America rotary rig count, which was released on April 25. Although the U.S. added a total of two rigs week on week, Canada’s overall rig count decreased by six during the same period, taking the total North America rig count down […]

North America dropped four rigs week on week, according to Baker Hughes’ latest North America rotary rig count, which was released on April 25.

Although the U.S. added a total of two rigs week on week, Canada’s overall rig count decreased by six during the same period, taking the total North America rig count down to 715, comprising 587 rigs from the U.S. and 128 from Canada, the count outlined.

Of the total U.S. rig count of 587, 571 rigs are categorized as land rigs, 13 are categorized as offshore rigs, and three are categorized as inland water rigs. The total U.S. rig count is made up of 483 oil rigs, 99 gas rigs, and five miscellaneous rigs, according to Baker Hughes’ count, which revealed that the U.S. total comprises 527 horizontal rigs, 45 directional rigs, and 15 vertical rigs.

Week on week, the U.S. land rig count increased by two, and its offshore rig count and inland water rig count remained unchanged, the count highlighted. The U.S. oil rig count increased by two week on week, its gas rig count rose by one, and its miscellaneous rig count dropped by one, the count showed. Baker Hughes’ count revealed that the U.S. horizontal rig count remained unchanged week on week, while its directional and vertical rig counts each rose by one during the period.

A major state variances subcategory included in the rig count showed that, week on week, Oklahoma gained two rigs, California added one rig, and Utah dropped one rig. A major basin variances subcategory included in Baker Hughes’ rig count showed that the Granite Wash basin added two rigs, the Haynesville and Arkoma Woodford basins each added one rig, and the Ardmore Woodford and Cana Woodford basins each dropped one rig, week on week.

Canada’s total rig count of 128 is made up of 81 oil rigs and 47 gas rigs, Baker Hughes pointed out. The country’s oil rig count dropped by six and its gas rig count remained unchanged, week on week, the count revealed.

The total North America rig count is down 16 compared to year ago levels, according to Baker Hughes’ count, which showed that the U.S. has cut 26 rigs and Canada has added 10 rigs, year on year. The U.S. has dropped 23 oil rigs and six gas rigs, and added three miscellaneous rigs, while Canada has dropped 15 gas rigs, and added 25 oil rigs, year on year, the count outlined.

In a research note sent to Rigzone on Friday by the JPM Commodities Research team, analysts at J.P. Morgan noted that “total U.S. oil and gas rigs increased by two to 587 this week, according to Baker Hughes”.

“Oil focused rigs increased by two to 483 rigs, after adding one rig last week. Natural gas-focused rigs increased by one to 99 rigs, after adding one rig last week,” the analysts added.

“The rig count in the five major tight oil basins – we use the EIA basin definition – remained unchanged at 452 rigs. The rig count in two major tight gas basins increased by one to 72 rigs,” they continued.

“This week, the rig count across the major tight oil basins remained flat, as the rig count in all regions remained unchanged. The rig count across major gas basins increased by one, with Haynesville adding a rig. This follows a flat rig count in Haynesville last week and represents an increase of three rigs over a four-week period,” the analysts went on to state.

In its previous rig count, which was released on April 17, Baker Hughes revealed that North America dropped two rigs week on week. The total U.S. rig count increased by two week on week and the total Canada rig count decreased by four during the same period, that count outlined.

Baker Hughes’ April 11 rig count revealed that North America cut 22 rigs week on week, its April 4 rig count showed that North America cut 12 rigs week on week, its March 28 count revealed that North America cut 18 rigs week on week, and its March 21 rig count also revealed that North America cut 18 rigs week on week. The company’s March 14 count showed that North America dropped 35 rigs week on week and its March 7 rig count revealed North America cut 15 rigs week on week.

In its February 28 rig count, Baker Hughes showed that North America added five rigs week on week. Its February 21 count revealed that North America added three rigs week on week, its February 14 rig count showed that North America dropped two rigs week on week, and its January 31 rig count showed that North America added 19 rigs week on week.

The company’s January 24 rig count revealed that North America added 12 rigs week on week, its January 17 count showed that North America added nine rigs week on week, and its January 10 rig count outlined that North America added 117 rigs week on week.

Baker Hughes’ January 3 rig count revealed that North America dropped one rig week on week and its December 27 rig count showed that North America dropped 71 rigs week on week.

Baker Hughes, which has issued rotary rig counts since 1944, describes the figures as an important business barometer for the drilling industry and its suppliers. The company notes that working rig location information is provided in part by Enverus.

To contact the author, email [email protected]

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

TechnipFMC Logs Higher Q1 Revenue

TechnipFMC PLC has posted a revenue of $2.23 billion for the first quarter, up 9.4 percent year-on-year (YoY), while net income fell 9.6 percent YoY to $142 million. Included in total company results was a foreign exchange loss of $12.1 million, or $8.1 million after-tax, TechnipFMC said. Inbound orders during

Read More »

IBM aims for autonomous security operations

“IBM’s proactive threat hunting augments traditional security solutions to uncover anomalous activity and IBM’s proactive threat hunters work with organizations to help identify their crown jewel assets and critical concerns. This input enables the threat hunting team to create fully tailored threat hunt reports and customized detections,” IDC stated. “AI/ML

Read More »

Grangemouth refinery closure to prompt ‘wrath of voters’, says union

The Grangemouth oil refinery has ceased operations with politicians set to feel the “wrath of voters” over job losses, a union boss said. The writing has been on the wall for Scotland’s last oil refinery for some time, and despite engagement from the Scottish and UK governments, 400 jobs are set to be cut from the site.  Sharon Graham, Unite the Union general secretary, argued that “the UK and Scottish governments have utterly failed to protect refinery jobs at Grangemouth and thousands face losing their jobs as oil refining in Scotland ends”. Last week, UK energy minister Michael Shanks said that the situation at Grangemouth was a “really good example of a transition done badly”. A letter sent to staff on Tuesday morning read: “For over 100 years the name Grangemouth has been synonymous with the refining industry, but the world has changed and the market in Scotland has been unable to support a refinery”. Grangemouth workers thrown on the ‘industrial scrapheap’ © Jane Barlow/PA WireMembers of the Unite union march and rally at the Scottish Parliament in protest at Petroineos plans to close Grangemouth oil refinery. Image: Jane Barlow/PA Wire Trade unions kept the pressure up on the UK and Scottish governments to support operations at the site as Petroineos deliberated over its future. Demonstrations have been held both in the Forth Valley town and at Holyrood, calling for public support for the hundreds set to lose their jobs. Last month, the government-backed Project Willow produced a report which claimed a series of clean energy projects at the Grangemouth refinery could create around 800 jobs over the next 15 years. The SNP and Labour administrations also launched a Grangemouth Just Transition Fund with £25m from the Scottish government and £200m from the UK government. However, Graham said not enough

Read More »

June Natural Gas Contract ‘Jumps Into Front Month Role’

In an EBW Analytics Group report sent to Rigzone by the EBW team today, Eli Rubin, an energy analyst at the company, highlighted that the June natural gas contract “jump[ed]… into [the] NYMEX front month role”. “The May contract rolled off the board at $3.170 yesterday as natural gas buyers awakened from a month-long slumber,” Rubin noted in the report, which highlighted that the June natural gas contract closed at $3.343 per million British thermal units (MMBtu) on Monday. That close was up 22.9 cents, the report pointed out.  “While the near-term fundamental outlook remains very soft and Henry Hub spot gas prices averaged $2.94 per MMBtu, the magnitude of gains yesterday reset technicals on a path that could reach $3.50 per MMBtu,” Rubin said in the report. “Early-cycle production readings declined this morning but LNG readings are ticking higher. Weather-driven demand remains weak,” Rubin added. The EBW analyst noted in the report that bearish shoulder season fundamentals are not unexpected, however, and added that they have been increasingly priced in during recent weeks. “Instead, near-term price action appears more likely a relief rally after a $1.60 per MMBtu (- 34 percent) collapse in the June contract over the past seven weeks,” Rubin said in the report. “If the rally extends higher, however, fundamental loosening could prove too much, too soon – and may lay the groundwork for a retest of support before the shoulder season is through,” Rubin went on to warn in the report. In a separate EBW report sent to Rigzone by the EBW team on Monday, Rubin highlighted that May contract final settlement was “dominat[ing]… near-term trading”. “The May natural gas contract gained on Friday during its options expiration – but has now tested technical support within a penny of $2.86 per MMBtu intraday in three straight

Read More »

Sockeye-2 Well Flow Test Proves High-Quality Reservoir: APA

APA Corporation, together with its partners Lagniappe Alaska LLC of Armstrong and Oil Search (Alaska) LLC of Santos Limited, has completed a flow test at the Sockeye-2 exploratory well with satisfactory results. APA said in a media release that the well, located on state lands of the eastern North Slope, performed in line with expectations during the 12-day production test, averaging 2,700 barrels of oil per day during the final flow period, without artificial lift. The Sockeye-2 well was drilled to around 10,500 feet, yielding a high-quality Paleocene-aged clastic reservoir with an average porosity of 20 percent, the company said. This vertical well was completed at approximately 9,200 feet of true vertical depth (TVD) in a single 25-foot interval, without stimulation. The flow test results show that the reservoir quality is significantly better than that of comparable topset discoveries to the west, APA said. While further appraisal drilling is needed to assess the final size of the discovery, the flow test highlights the remarkable productivity of this shallow-marine reservoir, it said. “We are excited about the performance from the Sockeye-2 well, which could greatly benefit the state of Alaska and the U.S.”, Bill Armstrong, CEO of Armstrong Oil & Gas, said. “This discovery significantly extends the prolific Brookian topset play first established with our Pikka discovery in 2013. We have identified analogous anomalies to investigate following on this success”. “The results from the Sockeye-2 flow test are consistent with our expectations, demonstrating a high-quality reservoir, confirming our geologic and geophysical models, and derisking additional prospectivity in the block. We will evaluate the data from the Sockeye-2 well to determine the next steps in our Alaska program”, John J. Christmann, APA CEO, said. To contact the author, email [email protected] WHAT DO YOU THINK? Generated by readers, the comments included herein do not reflect

Read More »

Underground cables cost more than four times the price of pylons, report finds

The price of constructing new underground transmission lines is, on average, around 4.5 times more expensive than overhead lines, according to a report. According to research from the Institute of Engineering and Technology (IET), it costs around £1,190 to transmit 1MW of power 1km through an overhead line compared to around £5,330 per MW per km for underground cables The report noted an example of a 15km long 5GW overhead line, which it estimated would cost around £40m. This compares with an equivalent underground cable costing around £330m and, in a new tunnel, £820m. In addition, it said that offshore high voltage direct current (HVDC) point-to-point cable is around 5 times more expensive; while an offshore HVDC network connecting multiple sites to the onshore grid is around 11 times more expensive. Chairman of the project board for the IET Transmission Technologies report Professor Keith Bell said: “As an essential part of the country’s aim to reach net zero, the UK is decarbonising its production of electricity and electrifying the use of energy for heating, transport and industry. “Access to a cleaner, more affordable, secure supply of energy requires the biggest programme of electricity transmission development since the 1960s.” Mixed approach Overhead transmission lines have been a controversial issue due to the visual impacts they have on the surrounding landscapes, and concerns they have a negative effect on nearby property prices. The IET’s report compared a range of electricity transmission technologies by costs, environmental impacts, carbon intensity, technology readiness, and delivery time. The group added that each technology should be judged on its merits in each specific context, taking into account environmental impact, engineering challenges and local impacts in addition to cost. For example, underground cables have lower visual impact than overhead lines, but they are viable only in certain terrains,

Read More »

Transmission at a crossroads: Policy must reflect today’s infrastructure needs

Devin McMackin is the director of federal affairs for ITC Holdings, a transmission company. In the early days of his second term, President Donald Trump committed to unleashing American energy “dominance” through deregulation and a commitment to speeding up development of energy infrastructure. This is a promising idea. The United States certainly has the drive and know-how to develop infrastructure at scale — just look at the development of the interstate highway system. The new administration has set the tone for a new focus on energy infrastructure; now it’s time for industry and policymakers to work together to bring this vision to fruition. Where to begin? While all types of infrastructure deserve support, one priority remains clear: the crucial need to invest in our country’s electric transmission grid. To that end, we need policy that promotes — not hinders — transmission investment. A secure, reliable grid is essential to support national security and drive economic competitiveness. However, much of the grid is aging and in need of replacement, and investment is needed now to ensure we can continue to deliver affordable, reliable power to everyday Americans. But replacement alone is not enough. The grid also must be significantly expanded to support reshoring of industry and the development of power-hungry AI data centers that will drive the jobs of tomorrow. To unleash these investments and meet increasing demand, regulatory streamlining is needed. Currently, it takes several years to plan, permit and build transmission projects, and existing policy is contributing to this delay. One such policy, an obscure federal regulation known as FERC Order 1000, requires that our grid operators conduct a long, bureaucratic process to determine which entities will develop needed transmission lines. In practice, this process can add as much as two additional years to development timelines for major projects, delaying

Read More »

Offshore TotalEnergies Workers Being Balloted for Strike Action

In a statement sent to Rigzone by the Unite team on Friday, the union announced that offshore workers employed by TotalEnergies are being balloted for strike action. Unite noted in the statement that around 50 Unite members based on the Elgin Franklin and North Alwyn platforms “are involved in an escalating dispute after the overwhelming rejection of an unacceptable pay offer”. “The dispute centers on the pay claim for 2025 which should take effect from 1 January. TotalEnergies originally offered a 1.5 per cent basic salary increase, which after being overwhelmingly rejected by the workers, was minimally increased to 1.75 per cent,” Unite said in the statement. “The latest offer which also amounts to a real terms pay cut was similarly rejected,” it added. Unite revealed in the statement that the ballot covering the Elgin Franklin and North Alwyn platforms opens today and closes on June 2. The union said its members undertake a number of roles on TotalEnergies platforms, “including skilled engineers, control room and senior operators, along with mechanical, operation, and production technicians”. In the statement, Unite General Secretary Sharon Graham said, “TotalEnergies has posted multi billion pound profits year after year, yet it is trying to impose a real terms pay cut”. “Unite will back our members all the way in the fight for better jobs, pay, and conditions,” Graham added. Unite Industrial Officer John Boland said in the statement, “Unite’s members employed by TotalEnergies across the Elgin Franklin and North Alwyn platforms are being forced to ballot on strike action to get a fair pay award from a multi billion company”. “TotalEnergies … should be under no illusions that if our members take strike action it will cause major disruption to the operations on both platforms,” he added. Rigzone has contacted TotalEnergies for comment on Unite’s

Read More »

Deep Data Center: Neoclouds as the ‘Picks and Shovels’ of the AI Gold Rush

In 1849, the discovery of gold in California ignited a frenzy, drawing prospectors from around the world in pursuit of quick fortune. While few struck it rich digging and sifting dirt, a different class of entrepreneurs quietly prospered: those who supplied the miners with the tools of the trade. From picks and shovels to tents and provisions, these providers became indispensable to the gold rush, profiting handsomely regardless of who found gold. Today, a new gold rush is underway, in pursuit of artificial intelligence. And just like the days of yore, the real fortunes may lie not in the gold itself, but in the infrastructure and equipment that enable its extraction. This is where neocloud players and chipmakers are positioned, representing themselves as the fundamental enablers of the AI revolution. Neoclouds: The Essential Tools and Implements of AI Innovation The AI boom has sparked a frenzy of innovation, investment, and competition. From generative AI applications like ChatGPT to autonomous systems and personalized recommendations, AI is rapidly transforming industries. Yet, behind every groundbreaking AI model lies an unsung hero: the infrastructure powering it. Enter neocloud providers—the specialized cloud platforms delivering the GPU horsepower that fuels AI’s meteoric rise. Let’s examine how neoclouds represent the “picks and shovels” of the AI gold rush, used for extracting the essential backbone of AI innovation. Neoclouds are emerging as indispensable players in the AI ecosystem, offering tailored solutions for compute-intensive workloads such as training large language models (LLMs) and performing high-speed inference. Unlike traditional hyperscalers (e.g., AWS, Azure, Google Cloud), which cater to a broad range of use cases, neoclouds focus exclusively on optimizing infrastructure for AI and machine learning applications. This specialization allows them to deliver superior performance at a lower cost, making them the go-to choice for startups, enterprises, and research institutions alike.

Read More »

Soluna Computing: Innovating Renewable Computing for Sustainable Data Centers

Dorothy 1A & 1B (Texas): These twin 25 MW facilities are powered by wind and serve Bitcoin hosting and mining workloads. Together, they consumed over 112,000 MWh of curtailed energy in 2024, demonstrating the impact of Soluna’s model. Dorothy 2 (Texas): Currently under construction and scheduled for energization in Q4 2025, this 48 MW site will increase Soluna’s hosting and mining capacity by 64%. Sophie (Kentucky): A 25 MW grid- and hydro-powered hosting center with a strong cost profile and consistent output. Project Grace (Texas): A 2 MW AI pilot project in development, part of Soluna’s transition into HPC and machine learning. Project Kati (Texas): With 166 MW split between Bitcoin and AI hosting, this project recently exited the Electric Reliability Council of Texas, Inc. planning phase and is expected to energize between 2025 and 2027. Project Rosa (Texas): A 187 MW flagship project co-located with wind assets, aimed at both Bitcoin and AI workloads. Land and power agreements were secured by the company in early 2025. These developments are part of the company’s broader effort to tackle both energy waste and infrastructure bottlenecks. Soluna’s behind-the-meter design enables flexibility to draw from the grid or directly from renewable sources, maximizing energy value while minimizing emissions. Competition is Fierce and a Narrower Focus Better Serves the Business In 2024, Soluna tested the waters of providing AI services via a  GPU-as-a-Service through a partnership with HPE, branded as Project Ada. The pilot aimed to rent out cloud GPUs for AI developers and LLM training. However, due to oversupply in the GPU market, delayed product rollouts (like NVIDIA’s H200), and poor demand economics, Soluna terminated the contract in March 2025. The cancellation of the contract with HPE frees up resources for Soluna to focus on what it believes the company does best: designing

Read More »

Quiet Genius at the Neutral Line: How Onics Filters Are Reshaping the Future of Data Center Power Efficiency

Why Harmonics Matter In a typical data center, nonlinear loads—like servers, UPS systems, and switch-mode power supplies—introduce harmonic distortion into the electrical system. These harmonics travel along the neutral and ground conductors, where they can increase current flow, cause overheating in transformers, and shorten the lifespan of critical power infrastructure. More subtly, they waste power through reactive losses that don’t show up on a basic utility bill, but do show up in heat, inefficiency, and increased infrastructure stress. Traditional mitigation approaches—like active harmonic filters or isolation transformers—are complex, expensive, and often require custom integration and ongoing maintenance. That’s where Onics’ solution stands out. It’s engineered as a shunt-style, low-pass filter: a passive device that sits in parallel with the circuit, quietly siphoning off problematic harmonics without interrupting operations.  The result? Lower apparent power demand, reduced electrical losses, and a quieter, more stable current environment—especially on the neutral line, where cumulative harmonic effects often peak. Behind the Numbers: Real-World Impact While the Onics filters offer a passive complement to traditional mitigation strategies, they aren’t intended to replace active harmonic filters or isolation transformers in systems that require them—they work best as a low-complexity enhancement to existing power quality designs. LoPilato says Onics has deployed its filters in mission-critical environments ranging from enterprise edge to large colos, and the data is consistent. In one example, a 6 MW data center saw a verified 9.2% reduction in energy consumption after deploying Onics filters at key electrical junctures. Another facility clocked in at 17.8% savings across its lighting and support loads, thanks in part to improved power factor and reduced transformer strain. The filters work by targeting high-frequency distortion—typically above the 3rd harmonic and up through the 35th. By passively attenuating this range, the system reduces reactive current on the neutral and helps stabilize

Read More »

New IEA Report Contrasts Energy Bottlenecks with Opportunities for AI and Data Center Growth

Artificial intelligence has, without question, crossed the threshold—from a speculative academic pursuit into the defining infrastructure of 21st-century commerce, governance, and innovation. What began in the realm of research labs and open-source models is now embedded in the capital stack of every major hyperscaler, semiconductor roadmap, and national industrial strategy. But as AI scales, so does its energy footprint. From Nvidia-powered GPU clusters to exascale training farms, the conversation across boardrooms and site selection teams has fundamentally shifted. It’s no longer just about compute density, thermal loads, or software frameworks. It’s about power—how to find it, finance it, future-proof it, and increasingly, how to generate it onsite. That refrain—“It’s all about power now”—has moved from a whisper to a full-throated consensus across the data center industry. The latest report from the International Energy Agency (IEA) gives this refrain global context and hard numbers, affirming what developers, utilities, and infrastructure operators have already sensed on the ground: the AI revolution will be throttled or propelled by the availability of scalable, sustainable, and dispatchable electricity. Why Energy Is the Real Bottleneck to Intelligence at Scale The major new IEA report puts it plainly: The transformative promise of AI will be throttled—or unleashed—by the world’s ability to deliver scalable, reliable, and sustainable electricity. The stakes are enormous. Countries that can supply the power AI craves will shape the future. Those that can’t may find themselves sidelined. Importantly, while AI poses clear challenges, the report emphasizes how it also offers solutions: from optimizing energy grids and reducing emissions in industrial sectors to enhancing energy security by supporting infrastructure defenses against cyberattacks. The report calls for immediate investments in both energy generation and grid capabilities, as well as stronger collaboration between the tech and energy sectors to avoid critical bottlenecks. The IEA advises that, for countries

Read More »

Colorado Eyes the AI Data Center Boom with Bold Incentive Push

Even as states work on legislation to limit data center development, it is clear that some locations are looking to get a bigger piece of the huge data center spending that the AI wave has created. It appears that politicians in Colorado took a look around and thought to themselves “Why is all that data center building going to Texas and Arizona? What’s wrong with the Rocky Mountain State?” Taking a page from the proven playbook that has gotten data centers built all over the country, Colorado is trying to jump on the financial incentives for data center development bandwagon. SB 24-085: A Statewide Strategy to Attract Data Center Investment Looking to significantly boost its appeal as a data center hub, Colorado is now considering Senate Bill 24-085, currently making its way through the state legislature. Sponsored by Senators Priola and Buckner and Representatives Parenti and Weinberg, this legislation promises substantial economic incentives in the form of state sales and use tax rebates for new data centers established within the state from fiscal year 2026 through 2033. Colorado hopes to position itself strategically to compete with neighboring states in attracting lucrative tech investments and high-skilled jobs. According to DataCenterMap.com, there are currently 53 data centers in the state, almost all located in the Denver area, but they are predominantly smaller facilities. In today’s era of massive AI-driven hyperscale expansion, Colorado is rarely mentioned in the same breath as major AI data center markets.  Some local communities have passed their own incentive packages, but SB 24-085 aims to offer a unified, statewide framework that can also help mitigate growing NIMBY (Not In My Backyard) sentiment around new developments. The Details: How SB 24-085 Works The bill, titled “Concerning a rebate of the state sales and use tax paid on new digital infrastructure

Read More »

Wonder Valley and the Great AI Pivot: Kevin O’Leary’s Bold Data Center Play

Data Center World 2025 drew record-breaking attendance, underscoring the AI-fueled urgency transforming infrastructure investment. But no session captivated the crowd quite like Kevin O’Leary’s electrifying keynote on Wonder Valley—his audacious plan to build the world’s largest AI compute data center campus. In a sweeping narrative that ranged from pandemic pivots to stranded gas and Branson-brand inspiration, O’Leary laid out a real estate and infrastructure strategy built for the AI era. A Pandemic-Era Pivot Becomes a Case Study in Digital Resilience O’Leary opened with a Shark Tank success story that doubled as a business parable. In 2019, a woman-led startup called Blueland raised $50 million to eliminate plastic cleaning bottles by shipping concentrated cleaning tablets in reusable kits. When COVID-19 shut down retail in 2020, her inventory was stuck in limbo—until she made an urgent call to O’Leary. What followed was a high-stakes, last-minute pivot: a union-approved commercial shoot in Brooklyn the night SAG-AFTRA shut down television production. The direct response ad campaign that resulted would not only liquidate the stranded inventory at full margin, but deliver something more valuable—data. By targeting locked-down consumers through local remnant TV ad slots and optimizing by conversion, Blueland saw unheard-of response rates as high as 17%. The campaign turned into a data goldmine: buyer locations, tablet usage patterns, household sizes, and contact details. Follow-up SMS campaigns would drive 30% reorders. “It built such a franchise in those 36 months,” O’Leary said, “with no retail. Now every retailer wants in.” The lesson? Build your infrastructure to control your data, and you build a business that scales even in chaos. This anecdote set the tone for the keynote: in a volatile world, infrastructure resilience and data control are the new core competencies. The Data Center Power Crisis: “There Is Not a Gig on the Grid” O’Leary

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).  In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple would between them devote $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences their own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% percent of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »