
Trump Renews Pressure on EU to Stop Buying Russian Oil


President Donald Trump renewed his call for European countries to “stop buying oil” from Russia, a demand he’s linked to further US pressure on President Vladimir Putin to halt the war in Ukraine.

“The Europeans are buying oil from Russia — not supposed to happen, right?” Trump said in a dinner speech at Mount Vernon, Virginia, near Washington on Saturday.

Trump has chided Europe repeatedly for its Russian energy purchases. On Thursday, after meeting with UK Prime Minister Keir Starmer, the US president said he’s willing to heighten economic pressure on Moscow “but not when the people that I’m fighting for are buying oil from Russia.”

While direct purchases of Russian oil by most European nations ended after Moscow’s 2022 full-scale invasion of Ukraine, small volumes continue to flow to Eastern Europe. European nations also import diesel from India and Turkey, where Russian oil is refined into the fuel.

The EU has already passed a ban that will prohibit importing petroleum products refined from Russian crude starting next year, and the bloc is discussing banning imports of Russian liquefied natural gas from 2027.

Almost all EU member states have stopped buying Russian seaborne and pipeline oil. Landlocked Hungary and Slovakia, which import Russian oil via the Druzhba pipeline, are the holdouts. The EU is considering trade measures to target those remaining supplies if Budapest and Bratislava don’t adopt exit plans, Bloomberg reported on Saturday. 

In all, the purchases account for only 3% of EU crude oil imports against about 27% before the war in Ukraine, European Commission figures show. 

Trump on Saturday suggested that Matt Whitaker, US ambassador to the North Atlantic Treaty Organization, increase pressure on Europe.

“They have to stop buying oil from Russia, Matt,” he said, addressing Whitaker in the audience. “Matt won’t let it happen much longer.”

With no end in sight for the war in Ukraine, Trump reiterated his frustration with the Russian leader, saying he’s “very disappointed in President Putin.”

Trump argued that the war would stop if oil prices were squeezed “a little bit more.”




Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


9 Linux certifications to boost your career

Price: $369
Exam format: 90 minutes, 90 questions (multiple-choice and performance-based)
Prerequisites: None (12 months of Linux experience recommended)
Focus: System management, security, troubleshooting, automation across distributions
Salary range: $79,000-$105,000
Best for: Career changers, IT generalists
Recertification: Certification expires 3 years after it is granted and requires 50 continuing education credits

Read More »

Nvidia reportedly hires Enfabrica CEO and licenses its chip technology

Another Enfabrica technology of interest to Nvidia, according to Forrester principal analyst Charlie Dai, is the Elastic Memory Fabric System (EMFASYS), which became generally available in July. EMFASYS provides AI servers flexible access to memory bandwidth and capacity through a standalone device that connects over standard network ports. The combination

Read More »

Observability platforms gain AI capabilities

LogicMonitor also announced Oracle Cloud Infrastructure (OCI) Monitoring to expand its multi-cloud coverage, providing visibility across AWS, Azure, GCP, and OCI. The company also made its LM Uptime and Dynamic Service Insights capabilities generally available to help enterprise IT organizations find issues sooner

Read More »


AI’s electricity demand is a challenge utilities can’t ignore, but subsidies aren’t the solution

Stefan Pastine is CEO of Thintronics, a semiconductor materials company. From my vantage point leading a semiconductor materials company, I see a growing tension between artificial intelligence and the power grid. The technology sector is expanding data centers at record speed, but the electricity to run them must come from somewhere. Unless utilities and policymakers plan carefully, the costs of that expansion will fall on ordinary customers. The problem is visible today in several states. In Virginia, the legislature’s watchdog agency, the Joint Legislative Audit & Review Commission, or JLARC, has documented nearly $1 billion a year in sales tax exemptions for data centers, along with mounting pressure on grid infrastructure. In Georgia, utility filings show that roughly 80% of projected load growth over the next decade will come from data centers. Regulators are already debating how much of the bill should be carried by households. The scale matters. A single 100 MW data center can consume as much electricity as 75,000 homes, and cooling systems can draw millions of gallons of water per day. These facilities are essential to the digital economy, but their local impacts on rates, water, and infrastructure cannot be ignored. Industry projections suggest that AI demand could push data centers from about 4.4% of U.S. electricity use today to as much as 12% by 2028. But, as Amory Lovins has argued in this magazine, forecasts are not destiny. Demand may grow more slowly than the most aggressive scenarios suggest. That uncertainty makes it even more important to avoid locking customers into paying for overbuilt capacity. Grid investment should be based on holistic planning, not on subsidies and tax incentives that prioritize speed over fairness. So what can be done? First, diversify supply strategically. Utilities need portfolios that balance renewables and natural gas with long-duration storage. Existing
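As a quick back-of-envelope check on the 75,000-home comparison (a sketch; the per-home consumption figure is an assumed U.S. average, not a number from the article):

```python
# Rough check: how many average U.S. homes match a 100 MW data center
# running around the clock?

DATA_CENTER_MW = 100              # facility draw cited in the article
HOURS_PER_YEAR = 8760
AVG_HOME_KWH_PER_YEAR = 11_700    # assumed; EIA's U.S. average is in this range

facility_kwh = DATA_CENTER_MW * 1_000 * HOURS_PER_YEAR   # 876,000,000 kWh/yr
homes = facility_kwh / AVG_HOME_KWH_PER_YEAR
print(f"{homes:,.0f} homes")      # ~75,000, consistent with the comparison
```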

Read More »

North America Adds Rigs For 3 Straight Weeks

North America added six rigs week on week, according to Baker Hughes’ latest North America rotary rig count, which was released on September 19. The U.S. and Canada each added three rigs week on week, taking the total North America rig count up to 731, comprising 542 rigs from the U.S. and 189 rigs from Canada, the count outlined. Of the total U.S. rig count of 542, 527 rigs are categorized as land rigs, 13 are categorized as offshore rigs, and two are categorized as inland water rigs. The total U.S. rig count is made up of 418 oil rigs, 118 gas rigs, and six miscellaneous rigs, according to Baker Hughes’ count, which revealed that the U.S. total comprises 473 horizontal rigs, 58 directional rigs, and 11 vertical rigs. Week on week, the U.S. offshore and inland water rig counts remained unchanged and the country’s land rig count increased by three, Baker Hughes highlighted. The U.S. oil rig count increased by two and its miscellaneous rig count rose by one, while its gas rig count remained unchanged week on week, the count showed. The U.S. directional and horizontal rig counts each increased by two, week on week, and the country’s vertical rig count dropped by one during the same period, the count revealed. A major state variances subcategory included in the rig count showed that, week on week, Colorado and Wyoming each added two rigs. Texas dropped two rigs and Louisiana dropped one rig week on week, the count revealed. A major basin variances subcategory included in Baker Hughes’ rig count showed that, week on week, the DJ-Niobrara basin added two rigs, the Granite Wash basin added one rig, and the Barnett basin dropped one rig. Canada’s total rig count of 189 is made up of 128 oil rigs, 60
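The three breakdowns quoted above all partition the same 542 U.S. rigs; a quick tally (a sketch using only the figures reported in the count) confirms the arithmetic is internally consistent:

```python
# Verify that each Baker Hughes categorization of the 542 U.S. rigs
# sums to the same total, and that U.S. plus Canada gives 731.

by_location   = {"land": 527, "offshore": 13, "inland water": 2}
by_target     = {"oil": 418, "gas": 118, "miscellaneous": 6}
by_trajectory = {"horizontal": 473, "directional": 58, "vertical": 11}

for name, counts in [("location", by_location),
                     ("target", by_target),
                     ("trajectory", by_trajectory)]:
    assert sum(counts.values()) == 542, f"{name} breakdown does not sum to 542"

print("North America total:", 542 + 189)   # 731, as reported
```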

Read More »

Mars signs solar PPAs with Enel to speed energy transition across supply chain

Dive Brief: Mars signed its first set of U.S.-focused clean energy contracts with Enel North America earlier this month in a bid to decarbonize its supply chain while meeting electricity needs. The power purchase agreements between Mars and Enel North America will help develop renewable energy projects that serve both the confectionery giant and its suppliers while “building energy resilience for the business,” according to a Sept. 11 release. The PPAs represent the largest transaction Enel North America has made with a global customer and the biggest contract Mars has signed to date, according to both companies. Mars’ first three contracts with Enel are estimated to generate 1.8 TWh of clean energy annually, helping the food and snack manufacturer avoid around 700 kilotonnes of carbon dioxide equivalent per year, according to the release. Dive Insight: The agreement will allow Mars and its supply chain to benefit from the entirety of the clean energy output produced by Enel’s three solar plants in Texas, which total 851 MW, the companies said. Enel said vegetation at all three sites will be managed through “sheep grazing,” a sustainable dual-use practice that combines energy production with agriculture by using sheep to manage vegetation under and around solar panels instead of mechanical mowers or herbicides. The energy company signed a contract with Texas Solar Sheep last year, which will deploy over 6,000 sheep to control vegetation at eight solar plants located in the state. The partnership with Enel builds on the confectionery giant’s “Renewable Acceleration” program, launched in April. The strategy is designed to fast-track the shift from fossil fuels to clean energy, not only for sites owned by Mars but across its entire supply chain, by meeting the entirety of its electricity needs through the renewables market. The McLean, Virginia-based multinational — behind brands like
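One way to sanity-check the 1.8 TWh figure against the 851 MW of capacity is the implied capacity factor (a sketch; the “typical for Texas solar” framing is our own, not from the release):

```python
# Implied capacity factor: annual generation / (nameplate x hours per year).

CAPACITY_MW = 851        # three Texas solar plants, per the companies
ANNUAL_TWH = 1.8         # estimated annual clean energy, per the release

annual_mwh = ANNUAL_TWH * 1_000_000
capacity_factor = annual_mwh / (CAPACITY_MW * 8760)
print(f"{capacity_factor:.1%}")   # ~24%, a plausible figure for Texas solar
```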

Read More »

Baker Hughes Extends Stimulation Vessels Deal with Petrobras

Energy tech major Baker Hughes Co. has secured a multi-year extension from Petroleo Brasileiro S.A. (Petrobras) for the deployment of the Blue Marlin and Blue Orca stimulation vessels. Baker Hughes said in a media release that these vessels support the optimization of offshore oil and gas production in Brazil’s pre- and post-salt fields. The contract, according to Baker Hughes, includes the provision of associated chemicals and services. Blue Marlin and Blue Orca will deliver advanced chemical treatments to stimulate wells, maximizing production in both brownfield and greenfield developments across multiple basins. In addition, these vessels will support well construction through gravel pack and frac pack operations, Baker Hughes said. “Stimulation vessels are critical for optimizing production and limiting costly downtime in offshore fields,” Amerino Gatti, executive vice president for oilfield services and equipment at Baker Hughes, said. “Blue Marlin and Blue Orca have long histories in Brazil, and the unmatched experience, expertise, and capabilities of the vessels and their crews have helped Petrobras make the country’s pre-salt fields among the most productive in the world. “This latest award further reinforces our Mature Assets Solutions strategy, enabling us to extend the life of the field, enhance recovery, and deliver greater value for our customers.” The vessels feature highly trained crews, onboard laboratories, high-pressure pumping systems, and resilient chemical storage. These capabilities enable them to deliver chemical treatments tailored to each well’s requirements and carry out multiple stimulation operations without returning to port for resupply, Baker Hughes said. Blue Marlin and Blue Orca have been operating in Brazil since 2008 and 2023, respectively. To contact the author, email [email protected]

Read More »

Baker Hughes Tapped for Flare Gas Recovery Project in Iraq

Baker Hughes Co. has signed an agreement with Iraq-based Halfaya Gas Co. Ltd. (HGC) to strengthen collaboration for an innovative flare gas recovery system at the Bin Umar gas processing plant in southeastern Iraq. Baker Hughes said in a media release that the project will significantly reduce upstream flaring and transform waste gas into valuable products. The two companies previously signed a memorandum of understanding to collaborate on the Bin Umar project and complete a pre-Front End Engineering and Design study. The project aims to recover up to 300 million standard cubic feet per day of flared gas. This is equivalent to roughly 32 billion kilowatt-hours annually, similar to the annual electricity use of about 2 million average households in Iraq, Baker Hughes said. The waste gas that would have been flared will instead be processed into treated dry gas, liquefied petroleum gas (LPG), and condensate for both domestic consumption and export, it said. Additionally, the companies have agreed to collaborate on developing upstream oilfields in Iraq, utilizing Baker Hughes’ Oilfield Services & Equipment expertise. This partnership includes exploring strategic opportunities for local maintenance and repair services and potential industrial manufacturing collaborations, according to Baker Hughes. The Bin Umar project is developed by HGC, owned by RASEP of RAS Group, under a 15-year Build-Own-Operate-Transfer contract with South Gas Company, a subsidiary of Iraq’s Ministry of Oil. “Our collaboration with Baker Hughes reaffirms our unwavering commitment to Iraq’s future by reducing emissions, enhancing energy security, and accelerating the development of a modern and sustainable energy infrastructure. Through strategic alliances with world-class partners, we are laying the foundations for long-term prosperity and resilience for our people”, said Hussein Saihood, CEO of RAS Group’s Raban Al Safina for Energy Projects. “Baker Hughes’ demonstrated technical capabilities, tailored solutions, and in-country service presence make them an ideal partner for
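The article’s gas-to-electricity equivalence can be reproduced with a simple unit conversion (a sketch; the ~1,000 Btu per standard cubic foot heating value is an assumption typical of pipeline-quality gas, not a figure from Baker Hughes):

```python
# Reproduce "300 MMscf/d ~ roughly 32 billion kWh annually".

MMSCF_PER_DAY = 300
BTU_PER_SCF = 1_000       # assumed gross heating value of the gas
BTU_PER_KWH = 3_412       # standard Btu-to-kWh conversion factor

scf_per_year = MMSCF_PER_DAY * 1_000_000 * 365
kwh_per_year = scf_per_year * BTU_PER_SCF / BTU_PER_KWH
print(f"{kwh_per_year / 1e9:.0f} billion kWh")   # ~32
```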

Read More »

Who wins/loses with the Intel-Nvidia union?

In announcing the deal, Jensen Huang emphasized its client-computing side, saying future Intel chips would have Nvidia GPUs baked into them instead of Intel’s own GPU technology. But there will be an impact on the server business as well. There are two things the analysts all agree on. First, AMD is the big loser in this deal. It had the advantage of a CPU-plus-GPU combination that neither Intel nor Nvidia had individually, as seen in supercomputers like Frontier and El Capitan, which use an all-AMD design of CPUs and GPUs working in tandem. Now the two companies are joined at the hip and will have a competitive offering in due time. The second area of agreement is that the future of Jaguar Shores, Intel’s AI accelerator based on its GPU technology, and of the Gaudi AI accelerator is uncertain. “Nvidia already has solutions here and it doesn’t make sense for Intel to work on a redundant product that needs to be marketed over an established one,” said Nguyen. A significant consequence of the deal is that Intel is adopting Nvidia’s proprietary NVLink high-speed interconnect protocols. “This means that Intel has essentially determined its ability to compete head-to-head with Nvidia in the current large scale AI marketplace, despite its best efforts, have mostly failed,” wrote Jack Gold of J. Gold Associates in a research note. Gold notes that Nvidia already uses a few Xeon data center chips to power its largest systems, and the x86 chips provide most of the control and pre-processing that its large-scale GPU racks require. By accelerating the performance of the Xeon, the GPU benefits as well. That leaves a question mark hanging over Nvidia’s Arm CPUs, which Gold expects to continue serving “niche areas.” “But with this announcement, it now

Read More »

Executive Roundtable: The Integration Imperative

Mukul Girotra, Ecolab: The AI infrastructure revolution is forcing a complete rethinking of how thermal, water, and power systems interact. It’s breaking down decades of siloed engineering approaches that are now proving inadequate given the increased rack demands. Traditionally, data centers were designed with separate teams managing power, cooling, and IT equipment. AI scale requires these systems to operate holistically, with real-time coordination between power management, thermal control, and workload orchestration. Here’s how Ecolab is addressing integration:

- We extend our digitally enabled approach from site to chip, spanning cooling water, direct-to-chip systems, and adiabatic units, driving cleanliness, performance, and optimized water and energy use across all layers of cooling infrastructure.
- Through collaborations like the one with Digital Realty, our AI-driven water conservation solution is expected to drive up to 15% water savings, significantly reducing demand on local water systems.
- Leveraging the ECOLAB3D™ platform, we provide proactive analytics and real-time data to optimize water and power use at the asset, site, and enterprise levels, creating real operational efficiency and turning cooling management into a strategic advantage.
- We provide thermal, hydro, and chemistry expertise that considers power constraints, IT equipment requirements, and day-to-day facility operational realities. This approach prevents the sub-optimization that can occur when these systems are designed in isolation.
- Crucially, we view cooling through the lens of the water-energy nexus: choices at the rack or chiller level affect both Power Usage Effectiveness (PUE) and Water Usage Effectiveness (WUE) of a data center, so our recommendations balance energy, water, and lifecycle considerations to deliver reliable performance and operational efficiency.

The companies that will succeed in AI infrastructure deployment are those that abandon legacy siloed approaches and embrace integrated thermal management as a core competitive capability.
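For readers less familiar with the two metrics Girotra invokes, here is a minimal sketch of how PUE and WUE are commonly computed (illustrative values; not Ecolab or Digital Realty data):

```python
# Power Usage Effectiveness and Water Usage Effectiveness, as commonly defined.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Total site energy per unit of IT energy; 1.0 is the ideal floor."""
    return total_facility_kwh / it_equipment_kwh

def wue(site_water_liters: float, it_equipment_kwh: float) -> float:
    """Liters of water consumed per kWh of IT energy."""
    return site_water_liters / it_equipment_kwh

print(pue(1_300_000, 1_000_000))   # 1.3
print(wue(1_800_000, 1_000_000))   # 1.8 L/kWh
```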

Read More »

Executive Roundtable: CapEx vs. OpEx in the AI Era – Balancing the Rush to Build with Long-Term Efficiency

Becky Wacker, Trane: Focusing on post-initial-construction CapEx expenditures, finding a balance between capital expenditure (CapEx) and operational expenditure (OpEx) is crucial for efficient capital deployment by data center operators. This balance can be influenced by ownership strategy, cash position, budget planning duration, sustainability goals, and contract commitments and durations with end users. At Trane, we focus on understanding these key characteristics of operations and tailor our ongoing support to best meet the unique business objectives and needs of our customers. We address these challenges through three major approaches:

1. Smart Services Solutions: Our smart services solutions improve system efficiency through AI-driven tools and a large fleet of truck-based service providers. By keeping system components operating at peak efficiency, preventing unanticipated failures, and balancing the critical needs of both digital monitoring and well-trained technicians, we maintain critical systems. This approach reduces OpEx through efficient operation and minimizes unplanned CapEx expenditures. Consequently, this enables improved budgeting and the ability to invest in additional data centers or other business ventures.

2. Sustainable and Flexible System Design: As a global climate innovator, Trane designs our products and collaborates with engineers and owners to integrate these products into highly efficient system solutions. We apply this approach not only in the initial design of the data center but also in planning for future flexibility as demand increases or components require replacement. This proactive strategy reduces ongoing utility bills, minimizes CapEx for upgrades, and helps meet sustainability goals. By focusing on both immediate and long-term efficiency, Trane ensures that data center operators can maintain optimal performance while adhering to environmental standards.

3. Flexible Financial Solutions: Trane’s Energy Services solutions have a 25+ year history of providing Energy Performance Contracting solutions. These can be leveraged to provide upgrades and energy optimization to cooling, power, water, and

Read More »

OpenAI and Oracle’s $300B Stargate Deal: Building AI’s National-Scale Infrastructure

Oracle’s ‘Astonishing’ Quarter Stuns Wall Street, Targeting Cloud Growth and Global Data Center Expansion

Oracle’s FY Q1 2026 earnings report on September 9 — along with its massive cloud backlog — stunned Wall Street. The market reacted positively to the huge growth in infrastructure revenue and remaining performance obligations (RPO), a measure of future revenue from customer contracts, which indicates significant growth potential and Oracle’s increasing role in AI technology — even as earnings and revenue missed estimates. After the announcement, Oracle stock soared more than 36%, marking its biggest daily gain since December 1992 and adding more than $250 billion in market value to the company. Executives reported that the company’s RPO jumped about 360% in the quarter to $455 billion, signaling the depth of demand for its cloud services and infrastructure. As a result, Oracle CEO Safra Catz projects that its GPU-heavy Oracle Cloud Infrastructure (OCI) business will grow 77% to $18 billion in its current fiscal year (2026) and soar to $144 billion in 2030. The earnings announcement also briefly made Oracle’s co-founder, chairman, and CTO Larry Ellison the richest person in the world, with shares of Oracle surging as much as 43% intraday. By the end of the trading day, his wealth had increased by nearly $90 billion to $383 billion, just shy of Tesla CEO Elon Musk’s $384 billion fortune. Also on the earnings call, Ellison announced that at the Oracle AI World event in October, the company will introduce the Oracle AI Database on OCI, allowing customers to use the large language model (LLM) of their choice — including Google’s Gemini, OpenAI’s ChatGPT, xAI’s Grok, etc. — directly on top of the Oracle Database to easily access and analyze all existing database data.

Capital Expenditure Strategy

These astonishing numbers are due
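Catz’s projection implies an aggressive compounding rate; a small sketch makes the arithmetic explicit (this treats FY2026 to 2030 as four compounding years, an assumption about how the fiscal years line up):

```python
# Implied compound annual growth rate for OCI revenue: $18B -> $144B.

start_billion, end_billion, years = 18.0, 144.0, 4
cagr = (end_billion / start_billion) ** (1 / years) - 1
print(f"{cagr:.0%}")   # ~68% per year, sustained over four years
```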

Read More »

Ethernet, InfiniBand, and Omni-Path battle for the AI-optimized data center

IEEE 802.3df-2024. The IEEE 802.3df-2024 standard, completed in February 2024, marked a watershed moment for AI data center networking. The 800 Gigabit Ethernet specification provides the foundation for next-generation AI clusters. It uses an 8-lane parallel structure that enables flexible port configurations from a single 800GbE port: 2×400GbE, 4×200GbE, or 8×100GbE, depending on workload requirements. The standard maintains backward compatibility with existing 100Gb/s electrical and optical signaling, protecting existing infrastructure investments while enabling seamless migration paths.

UEC 1.0. The Ultra Ethernet Consortium represents the industry’s most ambitious attempt to optimize Ethernet for AI workloads. The consortium released its UEC 1.0 specification in 2025, marking a critical milestone for AI networking. The specification introduces modern RDMA implementations, enhanced transport protocols, and advanced congestion control mechanisms that eliminate the need for traditional lossless networks. UEC 1.0 enables packet spraying at the switch level with reordering at the NIC, delivering capabilities previously available only in proprietary systems. The UEC specification also includes Link Level Retry (LLR) for lossless transmission without traditional Priority Flow Control, addressing one of Ethernet’s historical weaknesses versus InfiniBand. LLR operates at the link layer to detect and retransmit lost packets locally, avoiding expensive recovery mechanisms at higher layers. Packet Rate Improvement (PRI) with header compression reduces protocol overhead, while network probes provide real-time congestion visibility.

InfiniBand extends architectural advantages to 800Gb/s. InfiniBand emerged in the late 1990s as a high-performance interconnect designed specifically for server-to-server communication in data centers. Unlike Ethernet, which evolved from local area networking, InfiniBand was purpose-built for the demanding requirements of clustered computing. The technology provides lossless, ultra-low-latency communication through hardware-based flow control and specialized network adapters. Its key advantage lies in credit-based flow control: unlike Ethernet’s packet-based approach, InfiniBand prevents packet loss by ensuring receiving buffers have space before transmission begins. This eliminates
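To make the contrast concrete, here is a toy illustration of the credit-based flow control idea described above: the sender may transmit only while it holds credits advertised by the receiver, so the receive buffer can never overflow (purely a conceptual sketch, not any vendor’s implementation):

```python
# Toy model of InfiniBand-style credit-based flow control.

from collections import deque

class Receiver:
    def __init__(self, buffer_slots: int):
        self.credits = buffer_slots     # free buffer slots advertised to sender
        self.buffer = deque()

    def accept(self, packet) -> None:
        # A well-behaved sender only transmits while it holds credits,
        # so the buffer can never overflow.
        assert self.credits > 0, "sender violated flow control"
        self.credits -= 1
        self.buffer.append(packet)

    def drain_one(self) -> None:
        self.buffer.popleft()
        self.credits += 1               # credit is returned to the sender

rx = Receiver(buffer_slots=2)
for pkt in ("A", "B"):
    rx.accept(pkt)                      # sender spends one credit per packet
# A third send must wait until the receiver drains and returns a credit:
rx.drain_one()
rx.accept("C")                          # proceeds without any packet loss
print(list(rx.buffer))                  # ['B', 'C']
```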

Read More »

Land and Expand: CleanArc Data Centers, Google, Duke Energy, Aligned’s ODATA, Fermi America

Land and Expand is a monthly feature at Data Center Frontier highlighting the latest data center development news, including new sites, land acquisitions and campus expansions. Here are some of the new and notable developments from hyperscale and colocation data center operators about which we’ve been reading lately.

Caroline County, VA, Approves 650-Acre Data Center Campus from CleanArc

Caroline County, Virginia, has approved redevelopment of the former Virginia Bazaar property in Ruther Glen into a 650-acre data center campus in partnership with CleanArc Data Centers Operating, LLC. On September 9, 2025, the Caroline County Board of Supervisors unanimously approved an economic development performance agreement with CleanArc to transform the long-vacant flea market site just off I-95. The agreement allows for the phased construction of three initial data center buildings, each measuring roughly 500,000 square feet, which CleanArc plans to lease to major operators. The project represents one of the county’s largest-ever private investments. While CleanArc has not released a final capital cost, county filings suggest the development could reach into the multi-billion-dollar range over its full buildout. Key provisions include:

- Local hiring: At least 50 permanent jobs at no less than 150% of the prevailing county wage.
- Revenue sharing: Caroline County will provide annual incentive grants equal to 25% of incremental tax revenue generated by the campus.
- Water stewardship: CleanArc is prohibited from using potable county water for data center cooling, requiring the developer to pursue alternative technologies such as non-potable sources, recycled water, or advanced liquid cooling systems.

Local officials have emphasized the deal’s importance for diversifying the county’s tax base, while community observers will be watching closely to see which cooling strategies CleanArc adopts in order to comply with the water-use restrictions.

Google to Build $10 Billion Data Center Campus in Arkansas

Moses Tucker Partners, one of Arkansas’

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas, and it’s back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. That’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.) John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. That makes it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize in their AI strategy this year. 1. Agents: the next generation of automation. AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge; as models get cheaper (something we’ll cover below), companies can use three or more models to
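The “LLM as a judge” pattern mentioned above is straightforward to sketch: several models grade an answer and a majority vote decides. `call_model` below is a hypothetical stand-in for whatever chat-completion API you use, not a real library function:

```python
# Minimal LLM-as-judge sketch with majority voting across several models.

from collections import Counter

def call_model(model: str, prompt: str) -> str:
    # Hypothetical stand-in: replace with a real chat-completion API call.
    return "PASS"

def judge(answer: str, question: str, judges: list[str]) -> str:
    prompt = (f"Question: {question}\nAnswer: {answer}\n"
              "Reply with exactly PASS or FAIL.")
    votes = Counter(call_model(m, prompt).strip().upper() for m in judges)
    return votes.most_common(1)[0][0]   # majority verdict wins

print(judge("Paris", "What is the capital of France?",
            ["judge-model-a", "judge-model-b", "judge-model-c"]))  # PASS
```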

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models through these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends. It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »