
Samsung to Build New Qatar Carbon Capture Project


QatarEnergy has awarded Samsung C&T Corp the engineering, procurement and construction contract for a carbon capture and storage (CCS) project that will serve existing natural gas liquefaction facilities in Ras Laffan Industrial City.

“The new project will capture and sequester up to 4.1 million tons of CO2 per annum, making it one of the world’s largest of its kind and placing Qatar at the forefront of global large-scale carbon capture deployment, reinforcing its leadership role in providing responsible and sustainable energy”, state-owned integrated energy company QatarEnergy said in a press release.

It said it had “launched” its first CCS project, with a capacity of 2.2 million metric tons per annum (MTPA), in 2019.

“Two other ongoing CCS projects will serve the North Field East and North Field South expansion projects, capturing and storing 2.1 MTPA and 1.2 MTPA of CO2 respectively”, QatarEnergy added.

QatarEnergy president and chief executive Saad Sherida Al-Kaabi, who is also Qatar’s energy minister, said, “All our LNG expansion projects will deploy CCS technologies, with an aim to capture over 11 MTPA of CO2 by 2035.”
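As a quick back-of-the-envelope check (a sketch only, using the per-project figures quoted in this article), the announced capture capacities can be totaled against that 2035 aim:

```python
# Announced CCS capture capacities, in million tons of CO2 per annum (MTPA),
# as quoted in this article.
announced_mtpa = {
    "first CCS project (2019)": 2.2,
    "North Field East": 2.1,
    "North Field South": 1.2,
    "new Ras Laffan project": 4.1,
}

total = sum(announced_mtpa.values())
print(f"announced so far: {total:.1f} MTPA")                   # 9.6 MTPA
print(f"gap to the >11 MTPA 2035 aim: {11 - total:.1f} MTPA")  # 1.4 MTPA
```

In other words, the projects disclosed to date account for roughly 9.6 MTPA, leaving about 1.4 MTPA to come from capture projects not yet detailed.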

QatarEnergy aims to double its liquefied natural gas (LNG) production capacity to 160 MMtpa through the North Field expansion projects in Qatar and Golden Pass LNG in Texas.

The United States project will begin production by year-end, Al-Kaabi told the World Gas Conference in Beijing earlier this year.

The first liquefaction train from the North Field East expansion project will start production by mid-2026. “As for North Field West, it is in the engineering phase and will be going into the construction phase somewhere in 2027”, Al-Kaabi said then.

“QatarEnergy will be the largest single LNG exporter as a company while Qatar, as a country, will be the second-largest exporter of LNG after the United States for a very long time”, Al-Kaabi added.

In a separate project, QatarEnergy also contracted Samsung C&T Corp to build the 2,000-megawatt (MW) Dukhan solar power plant, which would more than double the Gulf state’s solar generation capacity.

QatarEnergy expects the two-phase project to bring 1,000 MW online by 2028. The second phase is expected to be completed by mid-2029, according to a September 16 statement by QatarEnergy.

“When completed, the Dukhan solar power plant along with Al-Kharsaah, Mesaieed, Ras Laffan solar power plants will help reduce carbon dioxide emissions by about 4.7 million tons annually, while contributing up to 30 percent of Qatar’s total peak electricity demand”, said Al-Kaabi.

The plant will rise about 80 kilometers (50 miles) west of Doha, QatarEnergy said.

To contact the author, email [email protected]






Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Agentic AI: What now, what next?

Agentic AI burst onto the scene with its promises of streamlining operations and accelerating productivity. But what’s real and what’s hype when it comes to deploying agentic AI? This Special Report examines the state of agentic AI, the challenges organizations are facing in deploying it, and the lessons learned from success

Read More »

AMD to build two more supercomputers at Oak Ridge National Laboratory

Lux is engineered to train, refine, and deploy AI foundation models that accelerate scientific and engineering progress. Its advanced architecture supports data-intensive and model-centric workloads, thereby enhancing AI-driven research capabilities. Discovery differs from Lux in that it uses Instinct MI430X GPUs instead of the 300 series. The MI400 Series is

Read More »

Imperial Sets Quarterly Production Record

Imperial Oil Ltd has reported 462,000 gross barrels of oil equivalent per day (boepd) in average production in the third quarter, the company’s highest quarterly output in over 30 years, with Kearl recording its highest-ever quarterly gross production at 316,000 bpd. However, net profit for the July-September period fell CAD 698 million ($496.98 million) year-on-year to CAD 539 million, or CAD 1.07 per diluted share. The decrease was “primarily driven by a non-cash impairment of the Calgary Imperial Campus [CAD 406 million before taxation] and the previously announced restructuring charge [CAD 330 million pre-tax]”, the Canadian oil sands producer, majority-owned by Exxon Mobil Corp, said in its quarterly report. On September 29 Imperial announced a restructuring plan that it expects will reduce its workforce by about 20 percent by 2027, cut annual expenses by CAD 150 million by 2028 and “consolidate activities to its operating sites”. Kearl accounted for 224,000 bpd of Imperial’s net production in the third quarter. Cold Lake, 100 percent owned by Imperial, produced 150,000 bpd, compared to 147,000 bpd in Q3 2024. “The company’s share of Syncrude production averaged 78,000 gross barrels per day”, down from 81,000 bpd in Q3 2024, Imperial said. Refinery throughput averaged 425,000 bpd, increasing from 389,000 bpd in Q3 2024 and representing capacity utilization of 98 percent “including progressing planned turnaround work at Sarnia”, it said. Oil product sales averaged 464,000 bpd, down from 487,000 bpd in Q3 2024 “primarily due to lower volumes in the supply and wholesale channels”, it said. Petrochemical sales totaled 173,000 metric tons, up from 76,000 metric tons in Q3 2024. Net natural gas production was 28 million cubic feet a day (MMcfd), down from 30 MMcfd in Q3 2024. In the prior three-month period, Imperial completed what chair, president and chief executive John Whelan said is

Read More »

Saipem Believes It Is Approaching ‘Turning Point’ in Offshore Drilling

In a conference call focused on Saipem’s nine-month results, Saipem CEO and General Manager Alessandro Puliti said the company believes it is “approaching the turning point in the offshore drilling market, particularly in … deepwater activities”. “We expect to see a significant ramp-up in demand from the second half of 2026 onwards,” Puliti said in the call, a transcript of which was posted on Saipem’s website recently. Focusing on the company’s offshore drilling activity and recent awards in the call, Puliti highlighted that “the DVD will start operating for Eni in Indonesia toward the end of 2025”. “This is the beginning of a new chapter for the unit, which has operated in West Africa for about two years. We see strong potential for long-term drilling campaigning for DVD in Indonesia,” he added. Puliti also noted that “the Scarabeo 9 semi-sub remains focused in the Mediterranean Sea and has recently moved from Egypt to Libya, where she has started operating for Eni”. “The Santorini drill ship will continue to operate in West Africa for Eni in Ghana and in the Ivory Coast before moving to the Mediterranean Sea to work for Energean,” he added. “Lastly, the Scarabeo semisub received a 12-month extension from Aker BP in Norway and will now continue to operate in the country until the end of 2027,” he continued. “In shallow water, we are also engaged in constructive discussion with Eni in Mexico on the Perro Negro 10 unit,” he said. Looking at the company’s “key financials” of the third quarter, Puliti noted that, in Q3, Saipem “delivered revenues of EUR 3.8 billion [$4.3 billion], with a growth of 1.6 percent year on year and 2.1 percent sequentially”. “EBITDA stood at EUR 437 million [$503 million], growing 28.5 percent year on year and 5.8 percent sequentially,”

Read More »

OPEC+ 8 Decide to Implement Output Adjustment

A statement posted on OPEC’s website on Sunday revealed that Saudi Arabia, Russia, Iraq, the United Arab Emirates (UAE), Kuwait, Kazakhstan, Algeria, and Oman “decided to implement a production adjustment of 137,000 barrels per day” in a virtual meeting held that day. “The eight OPEC+ countries, which previously announced additional voluntary adjustments in April and November 2023 … met virtually on 2 November 2025, to review global market conditions and outlook,” the statement noted. “In view of a steady global economic outlook and current healthy market fundamentals, as reflected in the low oil inventories, the eight participating countries decided to implement a production adjustment of 137,000 barrels per day from the 1.65 million barrels per day additional voluntary adjustments announced in April 2023,” it added. The statement said this adjustment will be implemented in December 2025. It also announced that, “beyond December, due to seasonality, the eight countries … decided to pause the production increments in January, February, and March 2026”. According to a table accompanying the statement, Saudi Arabia and Russia’s December adjustment amounts to 41,000 barrels per day, each. Iraq’s comes to 18,000 barrels per day, the UAE’s is 12,000 barrels per day, Kuwait’s is 10,000 barrels per day, Kazakhstan’s is 7,000 barrels per day, Algeria’s is 4,000 barrels per day, and Oman’s is 4,000 barrels per day, the table outlined. The table highlighted that December 2025, January 2026, February 2026, and March 2026  “required production” is 10.103 million barrels per day for Saudi Arabia, 9.574 million barrels per day for Russia, 4.273 million barrels per day for Iraq, 3.411 million barrels per day for the UAE, 2.580 million barrels per day for Kuwait, 1.569 million barrels per day for Kazakhstan, 971,000 barrels per day for Algeria, and 811,000 barrels per day for Oman. “The eight participating countries

Read More »
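The per-country figures in the OPEC+ statement above are self-consistent; a short check (a sketch, with the numbers transcribed from the quoted table) confirms they sum to the headline adjustment:

```python
# December 2025 production adjustments, barrels per day, per the quoted table.
adjustments_bpd = {
    "Saudi Arabia": 41_000,
    "Russia": 41_000,
    "Iraq": 18_000,
    "UAE": 12_000,
    "Kuwait": 10_000,
    "Kazakhstan": 7_000,
    "Algeria": 4_000,
    "Oman": 4_000,
}

total = sum(adjustments_bpd.values())
print(f"total adjustment: {total:,} bpd")  # 137,000 bpd
```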

The people’s power plant: How Puerto Rico turned home batteries into a reliable grid asset

In 2017, Puerto Rico was plunged into darkness. Hurricane Maria devastated the island, collapsing the power grid and triggering the longest blackout in American history. Millions were left without electricity. This event exposed the deep vulnerabilities of an electrical system strained by decades of underinvestment, aging infrastructure and increased exposure to extreme weather. Rather than rebuild what was lost, Puerto Ricans turned the devastation into an opportunity for growth and innovation. In 2019, Puerto Rico enacted Act 17, restructuring the island’s energy system and setting an ambitious target of 100% renewable energy by 2050. This shift spurred a rapid surge in home solar panel and battery installations, as the population sought greater energy resilience and independence. Act 17 also paved the way for grid modernization by separating the island’s power generation assets from its transmission and distribution system (T&D), a move that allowed LUMA Energy, the island’s grid operator, to assume control of the T&D system in 2021, under a 15-year public-private partnership. This brought the private expertise and resources needed to upgrade the island’s grid. As more homes became “mini power plants”, generating electricity from rooftop solar and storing it in batteries, the island’s grid faced a new challenge: modernizing to manage a decentralized, variable system. The energy landscape was rapidly shifting away from the traditional model – dominated by a few large power plants – toward thousands of smaller, distributed generation sources.

From crisis to opportunity: The challenges of grid modernization

The challenges faced by Puerto Rico are not unique. Across the U.S., much of the power grid was built for a different era, one defined by a few large fossil-fuel power plants, predictable energy demand and a more stable climate. Today, the grid is under severe pressure from rising electricity consumption, volatile weather and the rapid integration of

Read More »

Beyond savings: How behavioral energy programs are powering peak demand flexibility

For more than a decade, behavioral energy efficiency programs—like Home Energy Reports (HERs)—have helped millions of households use energy more wisely. These simple, data-driven communications offer personalized insights, neighbor comparisons and practical tips that nudge everyday behavior in a more efficient direction. But as the energy grid faces new pressures, these programs are evolving—and their impact is growing. At Franklin Energy, we’re leaning into that evolution. Because in today’s energy landscape, it’s not just about how much energy we use. It’s about when.

Meeting the moment: Behavioral flexibility in action

Grid operators and utilities are under increasing strain as demand spikes during key hours—especially on hot summer afternoons or cold winter evenings. At the same time, more renewables are coming online, adding variability to the system. Behavioral programs can help bridge this gap, not just by reducing consumption, but by shifting it. We’re now using NGAGE Discover, our advanced analytics platform, to assess how HERs and similar tools influence when people use electricity, not just how much. Through rigorous modeling, we’ve uncovered real, measurable changes in peak demand, especially when reports are designed with timing in mind.

What the data shows

In regions across the country, we analyzed high-frequency interval data from residential behavioral programs. Our analysis combined econometrics with machine learning to capture subtle shifts in usage patterns. The takeaway? Customers respond. In many cases, they’re using less energy during high-stress hours—without any hardware or incentives. Just better information, delivered at the right time. Although this is helpful information, it’s important to note that behavior is not uniform. Results vary by region, household makeup and even messaging style. But with the right design, behavioral programs offer a cost-effective path to demand flexibility. And they’re ready to scale.
Centering equity from the start

To fully unlock this potential, we must design

Read More »

Future-proofing utility communications: The case for multi-carrier SIMs

Every day, the utility sector moves toward a more connected, data-driven environment. Smart meters stand as a foundational element of this transition — but only if they remain reliable in the long term. However, the reliability of these devices is only as strong as their connection. Early smart meters often relied on single-carrier SIM cards, which left them vulnerable to cellular outages and coverage gaps that could disrupt the flow of essential data. For critical infrastructure, that’s a risk utilities cannot afford. To solve this, leading original equipment manufacturers (OEMs) are now integrating multi-carrier eSIMs. This modern approach ensures resilient, long-lasting connectivity, allowing a smart meter to switch between networks automatically to maintain a connection. It’s the key to delivering the dependable performance utilities require from devices expected to last 15 to 20 years in the field.

Why smart meter OEMs have hesitated to go multi-carrier

Despite the advantages of multi-carrier connectivity, many OEMs have approached multi-IMSI SIM technologies with caution. Several concerns have slowed adoption:

- Battery drain during network switching. Earlier SIM designs consumed significant power when scanning and switching networks, shortening the device’s battery life — an unacceptable trade-off for meters expected to last 15–20 years.
- Cost premiums. Multi-carrier SIMs once came with higher costs compared to single-carrier versions, making them harder to justify at scale.
- Loss of control. Some OEMs with preferred carrier relationships worried that switching to a multi-carrier SIM could impact those arrangements.
- Regulatory complexity. Permanent roaming restrictions and compliance requirements complicated global rollouts.

However, modern eSIM technology has effectively resolved these past challenges and paved the way for wider adoption.
The advantages of a modern multi-carrier approach

For smart gas and water meters, it isn’t just the ability to connect to multiple networks that matters — it’s the ability to

Read More »
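The switching behavior described in the piece above can be illustrated with a minimal sketch. This is not a real eSIM or multi-IMSI API; the carrier names and the reachability probe are hypothetical, and real devices would add backoff timers and power budgeting:

```python
# Hypothetical sketch of multi-carrier failover for a metering device.
# A real multi-IMSI eSIM handles this in the SIM applet/modem, not in
# application code; this only models the decision logic.
def select_carrier(preferred_order, is_reachable):
    """Return the first reachable carrier in priority order, else None."""
    for carrier in preferred_order:
        if is_reachable(carrier):
            return carrier
    return None  # no network available; a real meter would retry with backoff

# Example: the primary network is down, so the meter fails over.
order = ["carrier_a", "carrier_b", "carrier_c"]  # hypothetical carriers
coverage = {"carrier_a": False, "carrier_b": True, "carrier_c": True}
active = select_carrier(order, lambda c: coverage[c])
print(active)  # carrier_b
```

The point of the design is that the fallback decision needs no backhaul round-trip: the device itself detects the outage and re-registers on the next available network.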

Supermicro Unveils Data Center Building Blocks to Accelerate AI Factory Deployment

Supermicro has introduced a new business line, Data Center Building Block Solutions (DCBBS), expanding its modular approach to data center development. The offering packages servers, storage, liquid-cooling infrastructure, networking, power shelves and battery backup units (BBUs), DCIM and automation software, and on-site services into pre-validated, factory-tested bundles designed to accelerate time-to-online (TTO) and improve long-term serviceability. This move represents a significant step beyond traditional rack integration; a shift toward a one-stop, data-center-scale platform aimed squarely at the hyperscale and AI factory market. By providing a single point of accountability across IT, power, and thermal domains, Supermicro’s model enables faster deployments and reduces integration risk—the modern equivalent of a “single throat to choke” for data center operators racing to bring GB200/NVL72-class racks online.

What’s New in DCBBS

DCBBS extends Supermicro’s modular design philosophy to an integrated catalog of facility-adjacent building blocks, not just IT nodes. By including critical supporting infrastructure—cooling, power, networking, and lifecycle software—the platform helps operators bring new capacity online more quickly and predictably. According to Supermicro, DCBBS encompasses:

- Multi-vendor AI system support: Compatibility with NVIDIA, AMD, and Intel architectures, featuring Supermicro-designed cold plates that dissipate up to 98% of component-level heat.
- In-rack liquid-cooling designs: Coolant distribution manifolds (CDMs) and CDUs rated up to 250 kW, supporting 45 °C liquids, alongside rear-door heat exchangers, 800 GbE switches (51.2 Tb/s), 33 kW power shelves, and 48 V battery backup units.
- Liquid-to-Air (L2A) sidecars: Each row can reject up to 200 kW of heat without modifying existing building hydronics—an especially practical design for air-to-liquid retrofits.
- Automation and management software: SuperCloud Composer for rack-scale and liquid-cooling lifecycle management; SuperCloud Automation Center for firmware, OS, Kubernetes, and AI pipeline enablement; Developer Experience Console for self-service workflows and orchestration.
- End-to-end services: Design, validation, and on-site deployment options—including four-hour response service levels—for both greenfield builds

Read More »

Investments Anchor Vertiv’s Growth Strategy as AI-Driven Data Center Orders Surge 60% YoY

New Acquisitions and Partner Awards

Vertiv’s third-quarter financial performance was underscored by a series of strategic acquisitions and ecosystem recognitions that expand the company’s technological capabilities and market reach amid AI-driven demand.

Acquisition of Waylay NV: AI and Hyperautomation for Infrastructure Intelligence

On August 26, Vertiv announced its acquisition of Waylay NV, a Belgium-based developer of generative AI and hyperautomation software. The move bolsters Vertiv’s portfolio with AI-driven monitoring, predictive services, and performance optimization for digital infrastructure. Waylay’s automation platform integrates real-time analytics, orchestration, and workflow automation across diverse connected assets and cloud services—enabling predictive maintenance, uptime optimization, and energy management across power and cooling systems. “With the addition of Waylay’s technology and software-focused team, Vertiv will accelerate its vision of intelligent infrastructure—data-driven, proactive, and optimized for the world’s most demanding environments,” said CEO Giordano Albertazzi.

Completion of Great Lakes Acquisition: Expanding White Space Integration

Just days earlier, as alluded to above, Vertiv finalized its $200 million acquisition of Great Lakes Data Racks & Cabinets, a U.S.-based manufacturer of enclosures and integrated rack systems. The addition expands Vertiv’s capabilities in high-density, factory-integrated white space solutions; bridging power, cooling, and IT enclosures for hyperscale and edge data centers alike. Great Lakes’ U.S. and European manufacturing footprint complements Vertiv’s global reach, supporting faster deployment cycles and expanded configuration flexibility.

Albertazzi noted that the acquisition “enhances our ability to deliver comprehensive infrastructure solutions, furthering Vertiv’s capabilities to customize at scale and configure at speed for AI and high-density computing environments.”

2024 Partner Awards: Recognizing the Ecosystem Behind Growth

Vertiv also spotlighted its partner ecosystem in August with its 2024 North America Partner Awards. The company recognized 11 partners for 2024 performance, growth, and AI execution across segments: Partner of the Year – SHI for launching a customer-facing high-density AI & Cyber Labs featuring

Read More »

QuEra’s Quantum Leap: From Neutral-Atom Breakthroughs to Hybrid HPC Integration

The race to make quantum computing practical – and commercially consequential – took a major step forward this fall, as Boston-based QuEra Computing announced new research milestones, expanded strategic funding, and an accelerating roadmap for hybrid quantum-classical supercomputing. QuEra’s Chief Commercial Officer Yuval Boger joined the Data Center Frontier Show to discuss how neutral-atom quantum systems are moving from research labs into high-performance computing centers and cloud environments worldwide.

NVIDIA Joins Google in Backing QuEra’s $230 Million Round

In early September, QuEra disclosed that NVentures, NVIDIA’s venture arm, has joined Google and others in expanding its $230 million Series B round. The investment deepens what has already been one of the most active collaborations between quantum and accelerated-computing companies. “We already work with NVIDIA, pairing our scalable neutral-atom architecture with its accelerated-computing stack to speed the arrival of useful, fault-tolerant quantum machines,” said QuEra CEO Andy Ory. “The decision to invest in us underscores our shared belief that hybrid quantum-classical systems will unlock meaningful value for customers sooner than many expect.” The partnership spans hardware, software, and go-to-market initiatives. QuEra’s neutral-atom machines are being integrated into NVIDIA’s CUDA-Q software platform for hybrid workloads, while the two companies collaborate at the NVIDIA Accelerated Quantum Center (NVAQC) in Boston, linking QuEra hardware with NVIDIA’s GB200 NVL72 GPU clusters for simulation and quantum-error-decoder research. Meanwhile, at Japan’s AIST ABCI-Q supercomputing center, QuEra’s Gemini-class quantum computer now operates beside more than 2,000 H100 GPUs, serving as a national testbed for hybrid workflows.
A jointly developed transformer-based decoder running on NVIDIA’s GPUs has already outperformed classical maximum-likelihood error-correction models, marking a concrete step toward practical fault-tolerant quantum computing. For NVIDIA, the move signals conviction that quantum processing units (QPUs) will one day complement GPUs inside large-scale data centers. For QuEra, it widens access to the

Read More »

How CoreWeave and Poolside Are Teaming Up in West Texas to Build the Next Generation of AI Data Centers

In the evolving landscape of artificial-intelligence infrastructure, a singular truth is emerging: access to cutting-edge silicon and massive GPU clusters is no longer enough by itself. For companies chasing the frontier of multi-trillion-parameter model training and agentic AI deployment, the bottleneck increasingly lies not just in compute, but in the seamless integration of compute + power + data center scale. The latest chapter in this story is the collaboration between CoreWeave and Poolside, culminating in the launch of Project Horizon, a 2-gigawatt AI-campus build in West Texas.

Setting the Stage: Who’s Involved, and Why It Matters

CoreWeave (NASDAQ: CRWV) has positioned itself as “The Essential Cloud for AI™” — a company founded in 2017, publicly listed in March 2025, and aggressively building out its footprint of ultra-high-performance infrastructure. One of its strategic moves: in July 2025 CoreWeave struck a definitive agreement to acquire Core Scientific (NASDAQ: CORZ) in an all-stock transaction. Through that deal, CoreWeave gains control of approximately 1.3 GW of gross power across Core Scientific’s nationwide data center footprint, plus more than 1 GW of expansion potential. That acquisition underlines a broader trend: AI-specialist clouds are no longer renting space and power; they’re working to own or tightly control it. Poolside, founded in 2023, is a foundation-model company with an ambitious mission: building artificial general intelligence (AGI) and deploying enterprise-scale agents. According to Poolside’s blog: “When people ask what it takes to build frontier AI … the focus is usually on the model … but that’s only half the story. The other half is infrastructure.
If you don’t control your infrastructure, you don’t control your destiny—and you don’t have a shot at the frontier.”  Simply put: if you’re chasing multi-trillion-parameter models, you need both the compute horsepower and the power infrastructure; and ideally, tight vertical integration. Together, the

Read More »

Vantage Data Centers Pours $15B Into Wisconsin AI Campus as It Builds Global Giga-Scale Footprint

Expanding in Ohio: Financing Growth Through Green Capital

In June 2025, Vantage secured $5 billion in green loan capacity, including $2.25 billion to fully fund its New Albany, Ohio (OH1) campus and expand its existing borrowing base. The 192 MW development will comprise three 64 MW buildings, with first delivery expected in December 2025 and phased completion through 2028. The OH1 campus is designed to come online as Vantage’s larger megasites ramp up, providing early capacity and regional proximity to major cloud and AI customers in the Columbus–New Albany corridor. The site also offers logistical and workforce advantages within one of the fastest-growing data center regions in the U.S.

Beyond the U.S. – Vantage Expands Its Global Footprint

Moving North: Reinforcing Canada’s Renewable Advantage

In February 2025, Vantage announced a C$500 million investment to complete QC24, the fourth and final building at its Québec City campus, adding 32 MW of capacity by 2027. The project strengthens Vantage’s Montreal–Québec platform and reinforces its renewable-heavy power profile, leveraging abundant hydropower to serve sustainability-driven customers.

APAC Expansion: Strategic Scale in Southeast Asia

In September 2025, Vantage unveiled a $1.6 billion APAC expansion, led by existing investors GIC (Singapore’s sovereign wealth fund) and ADIA (Abu Dhabi Investment Authority). The investment includes the acquisition of Yondr’s Johor, Malaysia campus at Sedenak Tech Park. Currently delivering 72.5 MW, the Johor campus is planned to scale to 300 MW at full build-out, positioning it within one of Southeast Asia’s most active AI and cloud growth corridors. Analysts note that the location’s connectivity to Singapore’s hyperscale market and favorable development economics give Vantage a strong competitive foothold across the region.
Italy: Expanding European Presence Under National Priority Status

Vantage is also adding a second Italian campus alongside its existing Milan site, totaling 32 MW across two facilities. Phase

Read More »

Nvidia GTC show news you need to know round-up

In the case of Flex, it will use digital twins to unify inventory, labor, and freight operations, streamlining logistics across Flex’s worldwide network. Flex’s new 400,000 sq. ft. facility in Dallas is purpose-built for data center infrastructure, aiming to significantly shorten lead times for U.S. customers. The Flex/Nvidia partnership aims to address the country’s labor shortages and drive innovation in manufacturing, pharmaceuticals, and technology. The companies believe the partnership sets the stage for a new era of giga-scale AI factories.

Nvidia and Oracle to Build DOE’s Largest AI Supercomputer

Oracle continues its aggressive push into supercomputing with a deal to build the largest AI supercomputer for scientific discovery — using Nvidia GPUs, obviously — at a Department of Energy facility. The system, dubbed Solstice, will feature an incredible 100,000 Nvidia Blackwell GPUs. A second system, dubbed Equinox, will include 10,000 Blackwell GPUs and is expected to be available in the first half of 2026. Both systems will be interconnected by Nvidia networking and deliver a combined 2,200 exaflops of AI performance. The Solstice and Equinox supercomputers will be located at Argonne National Laboratory, home to the Aurora supercomputer, built using all Intel parts. They will enable scientists and researchers to develop and train new frontier models and AI reasoning models for open science using the Nvidia Megatron-Core library and scale them using the Nvidia TensorRT inference software stack.

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize in their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
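The “LLM as a judge” pattern mentioned above can be sketched in a few lines: one model scores candidate answers, and the highest-scoring candidate wins. This is a minimal illustration under stated assumptions, not any vendor’s API; `call_llm` is a hypothetical stand-in for a chat-completion call, stubbed here with a trivial keyword heuristic so the snippet runs offline.

```python
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a model API.
    # Here the "judge" returns a high score only when the answer
    # portion of the prompt mentions the expected keyword.
    return "5" if "Paris" in prompt else "1"

def judge(question: str, answer: str) -> int:
    # Ask the judge model to grade an answer on a 1-5 scale.
    prompt = (
        "Rate the following answer from 1 (poor) to 5 (excellent).\n"
        f"Question: {question}\nAnswer: {answer}\nScore:"
    )
    return int(call_llm(prompt).strip())

def best_answer(question: str, candidates: list[str]) -> str:
    # Pick the judge's highest-scoring candidate, e.g. drafts
    # produced by three or more cheaper generator models.
    return max(candidates, key=lambda a: judge(question, a))

print(best_answer(
    "What is the capital of France?",
    ["I am not sure.", "The capital of France is Paris."],
))
```

In practice the judge is typically a stronger (or differently prompted) model than the generators, and the numeric score is parsed defensively rather than with a bare `int()`.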

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends. It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »