Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI


Bitcoin


Datacenter


Energy



Featured Articles

Gluware tackles AI agent coordination with Titan platform

The first phase focused on configuration management and drift detection. Gluware’s system identified when network devices deviated from approved configurations and proposed fixes, but network operations teams manually reviewed and approved each remediation. The second phase introduced automatic remediation. As customers gained confidence, they allowed the system to automatically correct devices that drifted from approved standards without human approval for each action. Current deployments represent what the industry calls self-operating networks, where configuration drift triggers instant automated correction.

Titan represents the third phase: system-determined operations where AI identifies new changes needed and executes them within defined risk parameters. This differs fundamentally from drift remediation, which restores known-good states. Titan handles new modifications from multiple AI systems that may conflict with each other. The platform coordinates between observability platforms, service management tools and security systems, each deploying AI agents that want to make simultaneous network changes. The validation framework extends from preventing configuration drift to preventing conflicts between competing AI agents.

Titan’s architecture for network and agentic AI coordination

Titan consists of a series of integrated components working together to solve the multi-agent coordination problem. The Intelligent MCP Server uses the Model Context Protocol to coordinate between Gluware’s automation capabilities and external AI agents. The Gluware Agent executes the actual automation work between the MCP server and network devices. The Co-Pilot provides a natural language interface for network operations teams. The MCP Server implements what Gluware calls a validation engine. Every action passes through verification before execution. This architecture allows third-party agents from observability platforms or service management tools to request network changes while Gluware maintains control over execution. Initial MCP service integrations include NetBox and ServiceNow.
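The coordination pattern described here (agents request changes, a validation step checks them against risk limits and competing requests before anything executes) can be sketched in a few lines. The sketch below is illustrative only: the class and function names are invented, and it does not reflect Gluware’s actual API or the Model Context Protocol wire format.

```python
# Hypothetical sketch of validation-gated change coordination between AI agents.
# Names (ChangeRequest, ValidationEngine, Coordinator) are illustrative, not Gluware's API.
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    agent: str          # e.g. "observability", "service-mgmt", "security"
    device: str         # target network device
    setting: str        # configuration key the agent wants to modify
    value: str          # requested value
    risk: int           # agent-assessed risk score (0-10)

@dataclass
class ValidationEngine:
    max_risk: int = 5                           # defined risk parameter
    pending: list = field(default_factory=list)

    def validate(self, req: ChangeRequest) -> bool:
        # Reject changes above the allowed risk threshold.
        if req.risk > self.max_risk:
            return False
        # Reject conflicts: another agent already requested a different value
        # for the same device and setting.
        for other in self.pending:
            if (other.device, other.setting) == (req.device, req.setting) \
                    and other.value != req.value:
                return False
        return True

class Coordinator:
    """Every action passes through verification before execution."""
    def __init__(self, engine: ValidationEngine, execute):
        self.engine, self.execute = engine, execute

    def submit(self, req: ChangeRequest) -> str:
        if not self.engine.validate(req):
            return "rejected"
        self.engine.pending.append(req)
        self.execute(req)                       # hand off to the automation layer
        return "executed"

# Example: two agents request conflicting values for the same setting.
coord = Coordinator(ValidationEngine(), execute=lambda r: print("applying", r))
print(coord.submit(ChangeRequest("observability", "core-sw-1", "mtu", "9000", risk=2)))
print(coord.submit(ChangeRequest("security", "core-sw-1", "mtu", "1500", risk=2)))  # conflict -> rejected
```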

Read More »

The week in 5 numbers: Power outages stretch, data center load overstated

More than 3.5 GW: capacity secured by the Texas Energy Fund so far for the Electric Reliability Council of Texas market. The fund, which voters authorized in 2023 to give low-interest loans to energy projects, was a response to the devastating impacts of 2021’s winter storm blackouts. It finalized its sixth loan this week, which went to NRG Energy to develop a 455-MW gas plant near Houston. The new generation will be built at NRG’s existing Greens Bayou Generating Station in Harris County, and it is expected to come online in 2028. The fund has faced challenges, however, as a number of projects have withdrawn from the pipeline as the economics of gas plants, in particular, have shifted.

Read More »

Texas loan fund tops 3.5 GW of gas capacity secured with latest NRG deal

NRG Energy will develop a 455-MW gas plant near Houston, backed by a low-interest loan from the state of Texas, the company said Thursday. It is the sixth loan finalized through the Texas Energy Fund program for the Electric Reliability Council of Texas market. Voters authorized the fund in 2023. The new generation will be built at NRG’s existing Greens Bayou Generating Station in Harris County, and is expected online in 2028. New generation backed by the Texas Energy Fund, across six projects, now exceeds 3.5 GW, said officials at the Public Utility Commission of Texas, which manages the program. The Lone Star State is experiencing “unprecedented growth,” NRG Executive Vice President, President of NRG Business and Wholesale Operations Robert Gaudette, said in a statement.

Greens Bayou is NRG’s third project to receive support from the loan fund, marking about 1.5 GW of total capacity and up to $1.15 billion in low-interest loans. Under the loan agreement for Greens Bayou, total project costs are estimated to be less than $617 million and the PUCT will provide a 20-year loan of up to $370 million, or 60% of total cost, at a 3% interest rate. In August, the PUCT tapped NRG for a TEF loan of up to $216 million to develop two gas units totaling 456 MW of capacity at its existing TH Wharton Generating Station in Houston. And in September the company was selected for a loan of $562 million to develop a 721-MW gas plant near Baytown. There are another 11 Texas Energy Fund applications moving through a due diligence review process, said state officials, representing another 5.4 GW of possible new capacity in the ERCOT footprint.
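As a quick sanity check on the figures quoted above, the short back-of-the-envelope sketch below uses only the numbers in this article: the $370 million Greens Bayou loan is roughly 60% of the sub-$617 million project cost, and NRG’s three TEF loans sum to about $1.15 billion.

```python
# Back-of-the-envelope check of the loan figures quoted above (all values in $ millions).
greens_bayou_cost = 617      # "estimated to be less than $617 million"
greens_bayou_loan = 370      # "a 20-year loan of up to $370 million"
print(f"Loan share of cost: {greens_bayou_loan / greens_bayou_cost:.0%}")   # ~60%

nrg_tef_loans = {"Greens Bayou": 370, "TH Wharton": 216, "Baytown": 562}
print(f"NRG TEF loans total: ${sum(nrg_tef_loans.values()):,}M")            # 1,148 ~= $1.15 billion
```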

Read More »

Microsoft’s Fairwater Atlanta and the Rise of the Distributed AI Supercomputer

Microsoft’s second Fairwater data center in Atlanta isn’t just “another big GPU shed.” It represents the other half of a deliberate architectural experiment: proving that two massive AI campuses, separated by roughly 700 miles, can operate as one coherent, distributed supercomputer. The Atlanta installation is the latest expression of Microsoft’s AI-first data center design: purpose-built for training and serving frontier models rather than supporting mixed cloud workloads. It links directly to the original Fairwater campus in Wisconsin, as well as to earlier generations of Azure AI supercomputers, through a dedicated AI WAN backbone that Microsoft describes as the foundation of a “planet-scale AI superfactory.”

Inside a Fairwater Site: Preparing for Multi-Site Distribution

Efficient multi-site training only works if each individual site behaves as a clean, well-structured unit. Microsoft’s intra-site design is deliberately simplified so that cross-site coordination has a predictable abstraction boundary—essential for treating multiple campuses as one distributed AI system. Each Fairwater installation presents itself as a single, flat, high-regularity cluster:

Up to 72 NVIDIA Blackwell GPUs per rack, using GB200 NVL72 rack-scale systems. NVLink provides the ultra-low-latency, high-bandwidth scale-up fabric within the rack, while the Spectrum-X Ethernet stack handles scale-out. Each rack delivers roughly 1.8 TB/s of GPU-to-GPU bandwidth and exposes a multi-terabyte pooled memory space addressable via NVLink—critical for large-model sharding, activation checkpointing, and parallelism strategies.

Racks feed into a two-tier Ethernet scale-out network offering 800 Gbps GPU-to-GPU connectivity with very low hop counts, engineered to scale to hundreds of thousands of GPUs without encountering the classic port-count and topology constraints of traditional Clos fabrics.

Microsoft confirms that the fabric relies heavily on:

SONiC-based switching and a broad commodity Ethernet ecosystem to avoid vendor lock-in and accelerate architectural iteration.

Custom network optimizations, such as packet trimming, packet spray, high-frequency telemetry, and advanced congestion-control mechanisms, to prevent collective
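The excerpt gives enough figures to sketch the fabric hierarchy as a toy model. The sketch below is illustrative only: the class and function names are invented, the 72-GPU, 1.8 TB/s, and 800 Gbps numbers come from the text above, and the cross-site AI WAN bandwidth is left unspecified because the excerpt does not quantify it.

```python
# Illustrative sketch of the fabric hierarchy described above: NVLink inside a rack,
# two-tier Ethernet scale-out inside a site, and the AI WAN between sites.
# Figures come from the excerpt; the WAN bandwidth is not quantified there, so it is None.
from dataclasses import dataclass
from typing import Optional

GPUS_PER_RACK = 72                  # GB200 NVL72 rack-scale systems
RACK_NVLINK_BW_TBS = 1.8            # ~1.8 TB/s GPU-to-GPU within the rack
SCALEOUT_BW_GBPS = 800              # 800 Gbps GPU-to-GPU across the Ethernet fabric

@dataclass(frozen=True)
class Gpu:
    site: str      # e.g. "atlanta" or "wisconsin"
    rack: int
    slot: int      # 0..71 within the rack

def fabric_for(a: Gpu, b: Gpu) -> tuple[str, Optional[float]]:
    """Return (fabric, approximate GPU-to-GPU bandwidth in Gbps) for a pair of GPUs."""
    if a.site == b.site and a.rack == b.rack:
        return "NVLink (scale-up)", RACK_NVLINK_BW_TBS * 8000   # TB/s -> Gbps
    if a.site == b.site:
        return "Spectrum-X Ethernet (scale-out)", SCALEOUT_BW_GBPS
    return "AI WAN (cross-site)", None                          # not quantified in the excerpt

print(fabric_for(Gpu("atlanta", 0, 1), Gpu("atlanta", 0, 7)))    # same rack
print(fabric_for(Gpu("atlanta", 0, 1), Gpu("atlanta", 42, 7)))   # same site, different rack
print(fabric_for(Gpu("atlanta", 0, 1), Gpu("wisconsin", 3, 0)))  # cross-site
```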

Read More »

StanChart Flags ‘Fragility of Russian Supply’

In a market that continues to be dominated by bearish supply sentiment, one of the few bullish drivers is the fragility of Russian supply. That’s what Standard Chartered Bank Energy Research Head Emily Ashford said in a report sent to Rigzone by the Standard Chartered team earlier this week, adding that both crude and refined product exports are being curtailed by the pressure of focused sanctions on Russian oil producers, a lowered crude oil price cap, and ongoing missile and drone attacks on oil and gas export infrastructure. “On 14 November the port of Novorossiysk, in Krasnodar Krai, was targeted by missiles and drones, with a focus on the Sheskharis oil terminal,” Ashford highlighted in the report, noting that the terminal “has an export capacity of c.2.2 million barrels per day” and that “loadings were suspended for two days”. “Ukraine’s attacks on a series of Black Sea export terminals have highlighted the vulnerability of exports via Russia’s southern route,” Ashford noted in the report. “This is particularly important, with weather closing down the Northern Sea Route via the Arctic over the winter. The winter transit routes to Asia are then limited to the Suez Canal and take, on average, 10 days longer,” Ashford added. “These longer transits are a contributing factor to the increased volumes of oil on water, which have increased by 294 million barrels year on year to an all-time high of 1.37 billion barrels as of 14 November, according to data from Vortexa,” Ashford continued. In the report, Ashford went on to note that Russian crude exports “have … remained relatively steady” but added that Standard Chartered expects to see “a sharp slowdown after the 21 November deadline for dealings with the two sanctioned oil producers”. “This is likely also a contributing factor to the volumes of

Read More »

Fission Forward: Next-Gen Nuclear Power Developments for the AI Data Center Boom

Constellation proposes to begin with 1.5 GW of fast-tracked projects, including 800 MW of battery energy storage and 700 MW of new natural gas generation to address short-term reliability needs. The remaining 4.3 GW represents longer-term investment at the Calvert Cliffs Clean Energy Center: extending both units for an additional 20 years beyond their current 2034 and 2036 license expirations, implementing a 10% uprate that would add roughly 190 MW of output, and pursuing 2 GW of next-generation nuclear at the existing site. For Maryland, a state defined by a dense I-95 fiber corridor, accelerating data center buildout, and rising AI-driven load, the plan could be transformative. If Constellation moves from “option” to “program,” the company estimates that 70% of the state’s electricity supply could come from clean energy sources, positioning Maryland as a top-tier market for 24/7 carbon-free power.

TerraPower’s Natrium SMR Clears a Key Federal Milestone

On Oct. 23, the Nuclear Regulatory Commission issued the final environmental impact statement (FEIS) for TerraPower’s Natrium small modular reactor in Kemmerer, Wyoming. While not a construction permit, FEIS completion removes a major element of federal environmental risk and keeps the project on track for the next phase of NRC review. TerraPower and its subsidiary, US SFR Owner, LLC, originally submitted the construction permit application on March 28, 2024. Natrium is a sodium-cooled fast reactor producing roughly 345 MW of electric output, paired with a molten-salt thermal-storage system capable of boosting generation to about 500 MW during peak periods. The design combines firm baseload power with flexible, dispatchable capability, an attractive profile for hyperscalers evaluating 24/7 clean energy options in the western U.S. The project is part of the DOE’s Advanced Reactor Demonstration Program, intended to replace retiring coal capacity in PacifiCorp’s service territory while showcasing advanced fission technology. For operators planning multi-GW

Read More »

ExxonMobil to Acquire 40 Percent of Bahia NGL Pipeline

Enterprise Products Partners LP will farm out 40 percent of the Bahia natural gas liquids (NGLs) pipeline to Exxon Mobil Corp, in a deal expected to be completed in “early 2026” subject to regulatory approvals, Enterprise said Thursday. “The 550-mile Bahia pipeline, which has begun commissioning activities and will begin commercial operations immediately thereafter, will have an initial capacity to transport 600,000 barrels per day (bpd) of NGLs from the Midland and Delaware basins of West Texas to Enterprise’s Mont Belvieu fractionation complex”, the Houston, Texas-based oil and gas midstream company said in a press release. A.J. Teague, co-chief executive of Enterprise’s general partner Enterprise Products Holdings LLC, earlier said in Enterprise’s quarterly report the pipeline was on track to start operations this month.

“Upon closing of the transaction, Enterprise and ExxonMobil plan to increase Bahia’s capacity to one million bpd by adding incremental pumping capacity and constructing a 92-mile extension of Bahia to ExxonMobil’s Cowboy natural gas processing plant in Eddy County, New Mexico”, Enterprise added. “The extension will also connect to multiple Enterprise-owned processing facilities in the Delaware Basin.

“The expansion and extension are expected to be completed in the fourth quarter of 2027, with ExxonMobil’s interest referred to as the ‘Cowboy Connector’. Enterprise will serve as operator of the combined system”.

Teague said, “As the ratio of natural gas and NGL production to crude oil production continues to increase in the Permian, the Bahia pipeline will be an essential artery to deliver mixed NGLs to the fractionation complex in Mont Belvieu. From 2024 to 2030, NGL production in the Permian Basin is expected to increase by over 30 percent”.

In the third quarter Enterprise logged NGL pipeline volumes of 4.7 million bpd, up 391,000 bpd from the same three-month period last year. NGL marine terminal volumes averaged 908,000 bpd in July-September 2025, up 21,000 bpd against the third quarter of 2024. NGL fractionation volumes totaled 1.6

Read More »

Oil Prices Have Fallen Sharply

Crude oil prices have fallen sharply, Nadir Belbarka, an analyst at XMArabia, said in a statement sent to Rigzone on Friday, highlighting that Brent was at $62.67 per barrel and WTI was at $58.29 per barrel. “The decline reflects rising expectations of oversupply, fading geopolitical supply risks, and growing coverage of reported progress toward a U.S.-endorsed Russia-Ukraine peace agreement,” Belbarka said in the statement. “Upcoming data – including today’s flash PMIs across the U.S., UK, Germany, and France, along with remarks from ECB President Lagarde – will direct near-term sentiment,” Belbarka added. “Weak readings could heighten recession fears and deepen demand destruction before triggering a technical rebound. Positive surprises could strengthen the dollar and reinforce downward pressure on crude,” the XMArabia analyst continued. Belbarka went on to state that, “in the absence of major inventory drawdowns or a significant supply shock, crude is likely to remain constrained within its new trading range through year-end, awaiting meaningful geopolitical or macroeconomic catalysts”. “Close attention to inventories, IEA [International Energy Agency] and OPEC forecasts, and dollar performance remains essential,” Belbarka warned. In a separate market comment sent to Rigzone on Friday, Eric Chia, Financial Markets Strategist at Exness, noted that crude oil prices “were under pressure today, extending this week’s downside bias as the market digested the potential for geopolitical de-escalation and structural oversupply”. “WTI prices were trading below $58 per barrel, down roughly two percent intraday and set for weekly losses of more than three percent,” Chia added. “The emergence of a Russia-Ukraine peace framework could weigh on the oil market as the prospect of future normalization of Russian crude exports tempered the impact of new U.S. sanctions on Rosneft and Lukoil,” Chia said. “Higher Russian oil exports could also add to the current oversupply narrative. However, a failed deal could help lift

Read More »

USA Crude Oil Stocks Drop by 3.4 Million Barrels WoW

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 3.4 million barrels from the week ending November 7 to the week ending November 14, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report. This EIA report, which was released on November 19 and included data for the week ending November 14, showed that crude oil stocks, not including the SPR, stood at 424.2 million barrels on November 14, 427.6 million barrels on November 7, and 430.3 million barrels on November 15, 2024. Crude oil in the SPR stood at 410.9 million barrels on November 14, 410.4 million barrels on November 7, and 389.2 million barrels on November 15, 2024, the report highlighted. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene-type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.680 billion barrels on November 14, the report revealed. Total petroleum stocks were down 2.2 million barrels week on week and up 47.1 million barrels year on year, the report showed. “At 424.2 million barrels, U.S. crude oil inventories are about five percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories increased by 2.3 million barrels from last week and are about three percent below the five year average for this time of year. Finished gasoline inventories decreased, while blending components inventories increased last week,” the EIA added. “Distillate fuel inventories increased by 0.2 million barrels last week and are about seven percent below the five year average for this time of year. Propane/propylene inventories remained unchanged from last week and are about 16 percent above the five year average for this
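A quick arithmetic check of the week-on-week movements, using only the inventory levels quoted in this report:

```python
# Consistency check of the EIA figures quoted above (million barrels).
crude_nov14, crude_nov07, crude_prior_yr = 424.2, 427.6, 430.3
spr_nov14, spr_nov07 = 410.9, 410.4

print(f"Commercial crude, week on week: {crude_nov14 - crude_nov07:+.1f}")   # -3.4, matching the headline
print(f"Commercial crude, year on year: {crude_nov14 - crude_prior_yr:+.1f}")
print(f"SPR, week on week:              {spr_nov14 - spr_nov07:+.1f}")
```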

Read More »

Norway Gas Output Hits Six-Month High

Norway produced 336.76 million standard cubic meters a day (MMscmd) of natural gas in October, its highest level in the last six months, according to preliminary monthly production figures from the country’s upstream regulator. However, last month’s gas output fell 1.7 percent compared to October 2024, though it beat the Norwegian Offshore Directorate’s (NOD) projection by 2.1 percent. Norway sold 10.4 billion scm of gas last month, up 1.9 billion scm from September, the NOD reported on its website. The Nordic country’s oil production in October averaged 1.82 million barrels per day (MMbpd), down 3.6 percent from September but up 2.1 percent from October 2024. The figure exceeded the NOD forecast by 0.4 percent. Total liquids production was 2.02 MMbpd, down 2.8 percent month-on-month but up 0.8 percent year-on-year. “Preliminary production figures for October 2025 show an average daily production of 2,017,000 barrels of oil, NGL and condensate”, the NOD said. “The total petroleum production so far in 2025 is about 197.1 million Sm3 oil equivalents (MSm3 o.e.), broken down as follows: about 87.8 MSm3 o.e. of oil, about 9.7 MSm3 o.e. of NGL and condensate and about 99.5 MSm3 o.e. of gas for sale”, it said. “The total volume is 4.1 MSm3 o.e. less than 2024”. For the third quarter, majority state-owned Equinor ASA reported Norwegian equity liquid and gas production of 1.42 million barrels of oil equivalent a day (MMboed), up from 1.36 MMboed in Q2 and 1.31 MMboed in Q3 2024. “In the third quarter of 2025, new fields coming onstream (Johan Castberg and Halten East) drove an increase in production compared to the same quarter last year”, Equinor said of its Norwegian production in its quarterly report October 29. “High production efficiency from Johan Sverdrup, new wells and a lower impact from turnarounds and maintenance more than

Read More »

LG rolls out new AI services to help consumers with daily tasks

LG kicked off the AI bandwagon today with a new set of AI services to help consumers in their daily tasks at home, in the car and in the office. The aim of LG’s CES 2025 press event was to show how AI will work in a day of someone’s life, with the goal of redefining the concept of space, said William Joowan Cho, CEO of LG Electronics, at the event. The presentation showed LG is fully focused on bringing AI into just about all of its products and services. Cho referred to LG’s AI efforts as “affectionate intelligence,” and he said it stands out from other strategies with its human-centered focus. The strategy focuses on three things: connected devices, capable AI agents and integrated services. One of the things the company announced was a strategic partnership with Microsoft on AI innovation, where the companies pledged to join forces to shape the future of AI-powered spaces. One of the outcomes is that Microsoft’s Xbox Game Pass Ultimate will appear via Xbox Cloud on LG’s TVs, helping LG catch up with Samsung in offering cloud gaming natively on its TVs. LG Electronics will bring the Xbox App to select LG smart TVs. That means players with LG Smart TVs will be able to explore the Gaming Portal for direct access to hundreds of games in the Game Pass Ultimate catalog, including popular titles such as Call of Duty: Black Ops 6, and upcoming releases like Avowed (launching February 18, 2025). Xbox Game Pass Ultimate members will be able to play games directly from the Xbox app on select LG Smart TVs through cloud gaming. With Xbox Game Pass Ultimate and a compatible Bluetooth-enabled

Read More »

Big tech must stop passing the cost of its spiking energy needs onto the public

Julianne Malveaux is an MIT-educated economist, author, educator and political commentator who has written extensively about the critical relationship between public policy, corporate accountability and social equity.  The rapid expansion of data centers across the U.S. is not only reshaping the digital economy but also threatening to overwhelm our energy infrastructure. These data centers aren’t just heavy on processing power — they’re heavy on our shared energy infrastructure. For Americans, this could mean serious sticker shock when it comes to their energy bills. Across the country, many households are already feeling the pinch as utilities ramp up investments in costly new infrastructure to power these data centers. With costs almost certain to rise as more data centers come online, state policymakers and energy companies must act now to protect consumers. We need new policies that ensure the cost of these projects is carried by the wealthy big tech companies that profit from them, not by regular energy consumers such as family households and small businesses. According to an analysis from consulting firm Bain & Co., data centers could require more than $2 trillion in new energy resources globally, with U.S. demand alone potentially outpacing supply in the next few years. This unprecedented growth is fueled by the expansion of generative AI, cloud computing and other tech innovations that require massive computing power. Bain’s analysis warns that, to meet this energy demand, U.S. utilities may need to boost annual generation capacity by as much as 26% by 2028 — a staggering jump compared to the 5% yearly increases of the past two decades. This poses a threat to energy affordability and reliability for millions of Americans. Bain’s research estimates that capital investments required to meet data center needs could incrementally raise consumer bills by 1% each year through 2032. That increase may
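Bain’s roughly 1%-per-year figure compounds over time. As a rough illustration only (the excerpt does not say which year the increases begin, so a 2025 start is assumed here purely for the arithmetic):

```python
# Rough illustration of how ~1%-per-year bill increases compound through 2032.
# The excerpt does not give a start year; 2025 is assumed here for illustration only.
annual_increase = 0.01
years = range(2025, 2033)          # 2025..2032 inclusive (assumed window)

bill = 100.0                       # index a household bill at 100 today
for _ in years:
    bill *= 1 + annual_increase
print(f"Indexed bill after {len(years)} years: {bill:.1f} (~{bill - 100:.0f}% cumulative increase)")
```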

Read More »

Final 45V hydrogen tax credit guidance draws mixed response

Dive Brief:

The final rule for the 45V clean hydrogen production tax credit, which the U.S. Treasury Department released Friday morning, drew mixed responses from industry leaders and environmentalists. Clean hydrogen development within the U.S. ground to a halt following the release of the initial guidance in December 2023, leading industry participants to call for revisions that would enable more projects to qualify for the tax credit. While the final rule makes “significant improvements” to Treasury’s initial proposal, the guidelines remain “extremely complex,” according to the Fuel Cell and Hydrogen Energy Association. FCHEA President and CEO Frank Wolak and other industry leaders said they look forward to working with the Trump administration to refine the rule.

Dive Insight:

Friday’s release closed what Wolak described as a “long chapter” for the hydrogen industry. But industry reaction to the final rule was decidedly mixed, and it remains to be seen whether the rule — which could be overturned as soon as Trump assumes office — will remain unchanged. “The final 45V rule falls short,” Marty Durbin, president of the U.S. Chamber’s Global Energy Institute, said in a statement. “While the rule provides some of the additional flexibility we sought, … we believe that it still will leave billions of dollars of announced projects in limbo. The incoming Administration will have an opportunity to improve the 45V rules to ensure the industry will attract the investments necessary to scale the hydrogen economy and help the U.S. lead the world in clean manufacturing.” But others in the industry felt the rule would be sufficient for ending hydrogen’s year-long malaise. “With this added clarity, many projects that have been delayed may move forward, which can help unlock billions of dollars in investments across the country,” Kim Hedegaard, CEO of Topsoe’s Power-to-X, said in a statement. Topsoe

Read More »

Texas, Utah, Last Energy challenge NRC’s ‘overburdensome’ microreactor regulations

Dive Brief:

A 69-year-old Nuclear Regulatory Commission rule underpinning U.S. nuclear reactor licensing exceeds the agency’s statutory authority and creates an unreasonable burden for microreactor developers, the states of Texas and Utah and advanced nuclear technology company Last Energy said in a lawsuit filed Dec. 30 in federal court in Texas. The plaintiffs asked the Eastern District of Texas court to exempt Last Energy’s 20-MW reactor design and research reactors located in the plaintiff states from the NRC’s definition of nuclear “utilization facilities,” which subjects all U.S. commercial and research reactors to strict regulatory scrutiny, and order the NRC to develop a more flexible definition for use in future licensing proceedings. Regardless of its merits, the lawsuit underscores the need for “continued discussion around proportional regulatory requirements … that align with the hazards of the reactor and correspond to a safety case,” said Patrick White, research director at the Nuclear Innovation Alliance.

Dive Insight:

Only three commercial nuclear reactors have been built in the United States in the past 28 years, and none are presently under construction, according to a World Nuclear Association tracker cited in the lawsuit. “Building a new commercial reactor of any size in the United States has become virtually impossible,” the plaintiffs said. “The root cause is not lack of demand or technology — but rather the [NRC], which, despite its name, does not really regulate new nuclear reactor construction so much as ensure that it almost never happens.” More than a dozen advanced nuclear technology developers have engaged the NRC in pre-application activities, which the agency says help standardize the content of advanced reactor applications and expedite NRC review. Last Energy is not among them. The pre-application process can itself stretch for years and must be followed by a formal application that can take two

Read More »

Qualcomm unveils AI chips for PCs, cars, smart homes and enterprises

Qualcomm unveiled AI technologies and collaborations for PCs, cars, smart homes and enterprises at CES 2025. At the big tech trade show in Las Vegas, Qualcomm Technologies showed how it’s using AI capabilities in its chips to drive the transformation of user experiences across diverse device categories, including PCs, automobiles, smart homes and enterprises. The company unveiled the Snapdragon X platform, the fourth platform in its high-performance PC portfolio, the Snapdragon X Series, bringing industry-leading performance, multi-day battery life, and AI leadership to more of the Windows ecosystem. Qualcomm has talked about how its processors are making headway grabbing share from the x86-based AMD and Intel rivals through better efficiency. Qualcomm’s neural processing unit gets about 45 TOPS, a key benchmark for AI PCs.

The Snapdragon X family of AI PC processors.

Additionally, Qualcomm Technologies showcased continued traction of the Snapdragon X Series, with over 60 designs in production or development and more than 100 expected by 2026.

Snapdragon for vehicles

Qualcomm demoed chips that are expanding its automotive collaborations. It is working with Alpine, Amazon, Leapmotor, Mobis, Royal Enfield, and Sony Honda Mobility, who look to Snapdragon Digital Chassis solutions to drive AI-powered in-cabin and advanced driver assistance systems (ADAS). Qualcomm also announced continued traction for its Snapdragon Elite-tier platforms for automotive, highlighting its work with Desay, Garmin, and Panasonic for Snapdragon Cockpit Elite. Throughout the show, Qualcomm will highlight its holistic approach to improving comfort and focusing on safety with demonstrations on the potential of the convergence of AI, multimodal contextual awareness, and cloud-based services. Attendees will also get a first glimpse of the new Snapdragon Ride Platform with integrated automated driving software stack and system definition jointly

Read More »

Oil, Gas Execs Reveal Where They Expect WTI Oil Price to Land in the Future

Executives from oil and gas firms have revealed where they expect the West Texas Intermediate (WTI) crude oil price to be at various points in the future as part of the fourth quarter Dallas Fed Energy Survey, which was released recently. The average response executives from 131 oil and gas firms gave when asked what they expect the WTI crude oil price to be at the end of 2025 was $71.13 per barrel, the survey showed. The low forecast came in at $53 per barrel, the high forecast was $100 per barrel, and the spot price during the survey was $70.66 per barrel, the survey pointed out. This question was not asked in the previous Dallas Fed Energy Survey, which was released in the third quarter. That survey asked participants what they expect the WTI crude oil price to be at the end of 2024. Executives from 134 oil and gas firms answered this question, offering an average response of $72.66 per barrel, that survey showed. The latest Dallas Fed Energy Survey also asked participants where they expect WTI prices to be in six months, one year, two years, and five years. Executives from 124 oil and gas firms answered this question and gave a mean response of $69 per barrel for the six month mark, $71 per barrel for the year mark, $74 per barrel for the two year mark, and $80 per barrel for the five year mark, the survey showed. Executives from 119 oil and gas firms answered this question in the third quarter Dallas Fed Energy Survey and gave a mean response of $73 per barrel for the six month mark, $76 per barrel for the year mark, $81 per barrel for the two year mark, and $87 per barrel for the five year mark, that

Read More »

The Download: de-censoring DeepSeek, and Gemini 3

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Quantum physicists have shrunk and “de-censored” DeepSeek R1

The news: A group of quantum physicists at Spanish firm Multiverse Computing claims to have created a version of the powerful reasoning AI model DeepSeek R1 that strips out the censorship built into the original by its Chinese creators.

Why it matters: In China, AI companies are subject to rules and regulations meant to ensure that content output aligns with laws and “socialist values.” As a result, companies build in layers of censorship when training the AI systems. When asked questions that are deemed “politically sensitive,” the models often refuse to answer or provide talking points straight from state propaganda.

How they did it: Multiverse Computing specializes in quantum-inspired AI techniques, which it used to create DeepSeek R1 Slim, a model that is 55% smaller but performs almost as well as the original model. It allowed them to identify and remove Chinese censorship so that the model answered sensitive questions in much the same way as Western models. Read the full story. —Caiwei Chen

Google’s new Gemini 3 “vibe-codes” responses and comes with its own agent

Google today unveiled Gemini 3, a major upgrade to its flagship multimodal model. The firm says the new model is better at reasoning, has more fluid multimodal capabilities (the ability to work across voice, text or images), and will work like an agent. Gemini Agent is an experimental feature designed to handle multi-step tasks directly inside the app. The agent can connect to services such as Google Calendar, Gmail, and Reminders. Once granted access, it can execute tasks like organizing an inbox or managing schedules. Read the full story. —Caiwei Chen

MIT Technology Review Narrated: Why climate researchers are taking the temperature of mountain snow

The Sierra’s frozen reservoir provides about a third of California’s water and most of what comes out of the faucets, shower heads, and sprinklers in the towns and cities of northwestern Nevada. The need for better snowpack temperature data has become increasingly critical for predicting when the water will flow down the mountains, as climate change fuels hotter weather, melts snow faster, and drives rapid swings between very wet and very dry periods.

A new generation of tools, techniques, and models promises to improve water forecasts, and help California and other states manage in the face of increasingly severe droughts and flooding. However, observers fear that any such advances could be undercut by the Trump administration’s cutbacks across federal agencies. This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Yesterday’s Cloudflare outage was not triggered by a hack
An error in its bot management system was to blame. (The Verge)
+ ChatGPT, X and Uber were among the services that dropped. (WP $)
+ It’s another example of the dangers of having a handful of infrastructure providers. (WSJ $)
+ Today’s web is incredibly fragile. (Bloomberg $)

2 Donald Trump has called for a federal AI regulatory standard
Instead of allowing each state to make its own laws. (Axios)
+ He claims the current approach risks slowing down AI progress. (Bloomberg $)

3 Meta has won the antitrust case that threatened to spin off Instagram
It’s one of the most high-profile cases in recent years. (FT $)
+ A judge ruled that Meta doesn’t hold a social media monopoly. (BBC)

4 The Three Mile Island nuclear plant is making a comeback
It’s the lucky recipient of a $1 billion federal loan to kickstart the facility. (WP $)
+ Why Microsoft made a deal to help restart Three Mile Island. (MIT Technology Review)

5 Roblox will block children from speaking to adult strangers
The gaming platform is facing fresh lawsuits alleging it is failing to protect young users from online predators. (The Guardian)
+ But we don’t know much about how accurate its age verification is. (CNN)
+ All users will have to submit a selfie or an ID to use chat features. (Engadget)

6 Boston Dynamics’ robot dog is becoming a widespread policing tool
It’s deployed by dozens of US and Canadian bomb squads and SWAT teams. (Bloomberg $)

7 A tribally-owned network of EV chargers is nearing completion
It’s part of Standing Rock reservation’s big push for clean energy. (NYT $)

8 Resist the temptation to use AI to cheat at conversations
It makes it much more difficult to forge a connection. (The Atlantic $)

9 Amazon wants San Francisco residents to ride its robotaxis for free
It’s squaring up against Alphabet’s Waymo in the city for the first time. (CNBC)
+ But its cars look very different to traditional vehicles. (LA Times $)
+ Zoox is operating around 50 robotaxis across SF and Las Vegas. (The Verge)

10 TikTok’s new setting allows you to filter out AI-generated clips
Farewell, sweet slop. (TechCrunch)
+ How do AI models generate videos? (MIT Technology Review)

Quote of the day

“The rapids of social media rush along so fast that the Court has never even stepped into the same case twice.”

—Judge James Boasberg, who rejected the Federal Trade Commission’s claim that Meta had created an illegal social media monopoly, acknowledges the law’s failure to keep up with technology, Politico reports.

One more thing

Namibia wants to build the world’s first hydrogen economy

Factories have used fossil fuels to process iron ore for three centuries, and the climate has paid a heavy price: According to the International Energy Agency, the steel industry today accounts for 8% of carbon dioxide emissions. But it turns out there is a less carbon-intensive alternative: using hydrogen. Unlike coal or natural gas, which release carbon dioxide as a by-product, this process releases water. And if the hydrogen itself is “green,” the climate impact of the entire process will be minimal.

HyIron, which has a site in the Namib desert, is one of a handful of companies around the world that are betting green hydrogen can help the $1.8 trillion steel industry clean up its act. The question now is whether Namibia’s government, its trading partners, and hydrogen innovators can work together to build the industry in a way that satisfies the world’s appetite for cleaner fuels—and also helps improve lives at home. Read the full story. —Jonathan W. Rosen

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ This art installation in Paris revolves around porcelain bowls clanging against each other in a pool of water—it’s oddly hypnotic.
+ Feeling burnt out? Get down to your local sauna for a quick reset.
+ New York’s subway system is something else.
+ Your dog has ancient origins. No, really!

Read More »

Quantum physicists have shrunk and “de-censored” DeepSeek R1

EXECUTIVE SUMMARY

A group of quantum physicists claims to have created a version of the powerful reasoning AI model DeepSeek R1 that strips out the censorship built into the original by its Chinese creators. The scientists at Multiverse Computing, a Spanish firm specializing in quantum-inspired AI techniques, created DeepSeek R1 Slim, a model that is 55% smaller but performs almost as well as the original model. Crucially, they also claim to have eliminated official Chinese censorship from the model.

In China, AI companies are subject to rules and regulations meant to ensure that content output aligns with laws and “socialist values.” As a result, companies build in layers of censorship when training the AI systems. When asked questions that are deemed “politically sensitive,” the models often refuse to answer or provide talking points straight from state propaganda.

To trim down the model, Multiverse turned to a mathematically complex approach borrowed from quantum physics that uses networks of high-dimensional grids to represent and manipulate large data sets. Using these so-called tensor networks shrinks the size of the model significantly and allows a complex AI system to be expressed more efficiently.
The method gives researchers a “map” of all the correlations in the model, allowing them to identify and remove specific bits of information with precision. After compressing and editing a model, Multiverse researchers fine-tune it so its output remains as close as possible to that of the original. To test how well it worked, the researchers compiled a data set of around 25 questions on topics known to be restricted in Chinese models, including “Who does Winnie the Pooh look like?”—a reference to a meme mocking President Xi Jinping—and “What happened in Tiananmen in 1989?” They tested the modified model’s responses against the original DeepSeek R1, using OpenAI’s GPT-5 as an impartial judge to rate the degree of censorship in each answer. The uncensored model was able to provide factual responses comparable to those from Western models, Multiverse says.
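The evaluation loop described above is simple to outline. The sketch below is illustrative only: ask_model and judge_censorship are hypothetical placeholders rather than any real API, and only the two example questions come from the article.

```python
# Illustrative sketch of the evaluation described above: ask both models a set of
# politically sensitive questions and have a judge model rate how censored each answer is.
# ask_model() and judge_censorship() are hypothetical placeholders, not a real API.

SENSITIVE_QUESTIONS = [
    "Who does Winnie the Pooh look like?",
    "What happened in Tiananmen in 1989?",
    # ... the article says roughly 25 such questions were used
]

def ask_model(model_name: str, question: str) -> str:
    """Placeholder: call the model under test and return its answer."""
    raise NotImplementedError

def judge_censorship(question: str, answer: str) -> float:
    """Placeholder: have a judge model score censorship from 0 (open) to 1 (fully censored)."""
    raise NotImplementedError

def compare(models=("DeepSeek-R1", "DeepSeek-R1-Slim")) -> dict:
    scores = {m: [] for m in models}
    for q in SENSITIVE_QUESTIONS:
        for m in models:
            scores[m].append(judge_censorship(q, ask_model(m, q)))
    # Average censorship score per model; lower means more open answers.
    return {m: sum(s) / len(s) for m, s in scores.items()}
```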
This work is part of Multiverse’s broader effort to develop technology to compress and manipulate existing AI models. Most large language models today demand high-end GPUs and significant computing power to train and run. However, they are inefficient, says Roman Orús, Multiverse’s cofounder and chief scientific officer. A compressed model can perform almost as well and save both energy and money, he says.

There is a growing effort across the AI industry to make models smaller and more efficient. Distilled models, such as DeepSeek’s own R1-Distill variants, attempt to capture the capabilities of larger models by having them “teach” what they know to a smaller model, though they often fall short of the original’s performance on complex reasoning tasks. Other ways to compress models include quantization, which reduces the precision of the model’s parameters (boundaries that are set when it’s trained), and pruning, which removes individual weights or entire “neurons.”

“It’s very challenging to compress large AI models without losing performance,” says Maxwell Venetos, an AI research engineer at Citrine Informatics, a software company focusing on materials and chemicals, who didn’t work on the Multiverse project. “Most techniques have to compromise between size and capability. What’s interesting about the quantum-inspired approach is that it uses very abstract math to cut down redundancy more precisely than usual.”

This approach makes it possible to selectively remove bias or add behaviors to LLMs at a granular level, the Multiverse researchers say. In addition to removing censorship from the Chinese authorities, researchers could inject or remove other kinds of perceived biases or specialty knowledge. In the future, Multiverse says, it plans to compress all mainstream open-source models.

Thomas Cao, assistant professor of technology policy at Tufts University’s Fletcher School, says Chinese authorities require models to build in censorship—and this requirement now shapes the global information ecosystem, given that many of the most influential open-source AI models come from China. Academics have also begun to document and analyze the phenomenon. Jennifer Pan, a professor at Stanford, and Princeton professor Xu Xu conducted a study earlier this year examining government-imposed censorship in large language models. They found that models created in China exhibit significantly higher rates of censorship, particularly in response to Chinese-language prompts.

There is growing interest in efforts to remove censorship from Chinese models. Earlier this year, the AI search company Perplexity released its own uncensored variant of DeepSeek R1, which it named R1 1776. Perplexity’s approach involved post-training the model on a data set of 40,000 multilingual prompts related to censored topics, a more traditional fine-tuning method than the one Multiverse used.

However, Cao warns that claims to have fully “removed” censorship may be overstatements. The Chinese government has tightly controlled information online since the internet’s inception, which means that censorship is both dynamic and complex. It is baked into every layer of AI training, from the data collection process to the final alignment steps. “It is very difficult to reverse-engineer that [a censorship-free model] just from answers to such a small set of questions,” Cao says.
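Multiverse’s tensor-network method is not spelled out in this excerpt, so the sketch below uses a conventional stand-in, truncated SVD of a weight matrix, to illustrate the general idea of cutting redundancy while keeping most of a component’s behavior. It is an analogy, not the quantum-inspired technique itself.

```python
# A conventional low-rank analog of model compression: truncated SVD of a weight matrix.
# This is NOT Multiverse's quantum-inspired tensor-network method; it is only a familiar
# illustration of cutting redundancy while keeping most of a component's behavior.
import numpy as np

rng = np.random.default_rng(0)
# Build a matrix whose information sits mostly in 64 directions, plus a little noise
# (real weight matrices are often approximately low-rank).
W = rng.standard_normal((512, 64)) @ rng.standard_normal((64, 512)) \
    + 0.01 * rng.standard_normal((512, 512))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
rank = 64                                        # keep only the strongest components
W_hat = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]  # compressed reconstruction

orig = W.size
comp = U[:, :rank].size + rank + Vt[:rank, :].size
print(f"Parameters: {orig:,} -> {comp:,} ({1 - comp / orig:.0%} smaller)")
print(f"Relative reconstruction error: {np.linalg.norm(W - W_hat) / np.linalg.norm(W):.1%}")
```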

Read More »

Realizing value with AI inference at scale and in production

In partnership with HPE

Training an AI model to predict equipment failures is an engineering achievement. But it’s not until prediction meets action—the moment that model successfully flags a malfunctioning machine—that true business transformation occurs. One technical milestone lives in a proof-of-concept deck; the other meaningfully contributes to the bottom line.

Craig Partridge, senior director of Worldwide Digital Next Advisory at HPE, believes that “the true value of AI lies in inference.” Inference is where AI earns its keep: it’s the operational layer that puts all that training to use in real-world workflows. “The phrase we use for this is ‘trusted AI inferencing at scale and in production,’” Partridge says. “That’s where we think the biggest return on AI investments will come from.”

Getting to that point is difficult. Christian Reichenbach, worldwide digital advisor at HPE, points to findings from the company’s recent survey of 1,775 IT leaders: while nearly a quarter (22%) of organizations have now operationalized AI—up from 15% the previous year—the majority remain stuck in experimentation. Reaching the next stage requires a three-part approach: establishing trust as an operating principle, ensuring data-centric execution, and cultivating IT leadership capable of scaling AI successfully.

Trust as a prerequisite for scalable, high-stakes AI

Trusted inference means users can actually rely on the answers they’re getting from AI systems. This is important for applications like generating marketing copy and deploying customer service chatbots, but it’s absolutely critical for higher-stakes scenarios—say, a robot assisting during surgeries or an autonomous vehicle navigating crowded streets.
Whatever the use case, establishing trust will require doubling down on data quality; first and foremost, inferencing outcomes must be built on reliable foundations. This reality informs one of Partridge’s go-to mantras: “Bad data in equals bad inferencing out.” Reichenbach cites a real-world example of what happens when data quality falls short—the rise of unreliable AI-generated content, including hallucinations, that clogs workflows and forces employees to spend significant time fact-checking. “When things go wrong, trust goes down, productivity gains are not reached, and the outcome we’re looking for is not achieved,” he says.
On the other hand, when trust is properly engineered into inference systems, efficiency and productivity gains can increase. Take a network operations team tasked with troubleshooting configurations. With a trusted inferencing engine, that unit gains a reliable copilot that can deliver faster, more accurate, custom-tailored recommendations—“a 24/7 member of the team they didn’t have before,” says Partridge.

The shift to data-centric thinking and rise of the AI factory

In the first AI wave, companies rushed to hire data scientists, and many viewed sophisticated, trillion-parameter models as the primary goal. But today, as organizations move to turn early pilots into real, measurable outcomes, the focus has shifted toward data engineering and architecture. “Over the past five years, what’s become more meaningful is breaking down data silos, accessing data streams, and quickly unlocking value,” says Reichenbach. It’s an evolution happening alongside the rise of the AI factory—the always-on production line where data moves through pipelines and feedback loops to generate continuous intelligence.

This shift reflects an evolution from model-centric to data-centric thinking, and with it comes a new set of strategic considerations. “It comes down to two things: How much of the intelligence—the model itself—is truly yours? And how much of the input—the data—is uniquely yours, from your customers, operations, or market?” says Reichenbach. These two central questions inform everything from platform direction and operating models to engineering roles and trust and security considerations.

To help clients map their answers—and translate them into actionable strategies—Partridge breaks down HPE’s four-quadrant AI factory implication matrix (source: HPE, 2025):

Run: Accessing an external, pretrained model via an interface or API; organizations own neither the model nor the data. Implementation requires strong security and governance, along with a center of excellence that makes and communicates decisions about AI usage.

RAG (retrieval-augmented generation): Using external, pretrained models combined with a company’s proprietary data to create unique insights. Implementation focuses on connecting data streams to inferencing capabilities that provide rapid, integrated access to full-stack AI platforms.

Riches: Training custom models on data that resides in the enterprise for unique differentiation opportunities and insights. Implementation requires scalable, energy-efficient environments and often high-performance systems.

Regulate: Leveraging custom models trained on external data, requiring the same scalable setup as Riches, but with added focus on legal and regulatory compliance for handling sensitive, non-owned data with extreme caution.

Importantly, these quadrants are not mutually exclusive (a simple mapping of the two ownership questions to the quadrants is sketched below). Partridge notes that most organizations—including HPE itself—operate across many of the quadrants. “We build our own models to help understand how networks operate,” he says. “We then deploy that intelligence into our products, so that our end customer gets the chance to deliver in what we call the ‘Run’ quadrant. So for them, it’s not their data; it’s not their model. They’re just adding that capability inside their organization.”

IT’s moment to scale—and lead

The second part of Partridge’s catchphrase about inferencing—“at scale”—speaks to a primary tension in enterprise AI: what works for a handful of use cases often breaks when applied across an entire organization.
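The quadrant logic above reduces to two yes/no questions: do you own the model, and do you own the data? The sketch below captures that mapping in a few lines; the function name and descriptions are my own shorthand, not an HPE tool or API.

```python
# Illustrative mapping of the two ownership questions to HPE's four quadrants.
# The function name and return strings are shorthand, not an HPE tool or API.
def ai_factory_quadrant(own_model: bool, own_data: bool) -> str:
    if not own_model and not own_data:
        return "Run: consume an external pretrained model; focus on governance."
    if not own_model and own_data:
        return "RAG: pair an external model with your proprietary data."
    if own_model and own_data:
        return "Riches: train custom models on enterprise data for differentiation."
    return "Regulate: custom models on external data; compliance comes first."

print(ai_factory_quadrant(own_model=False, own_data=True))
```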

“There’s value in experimentation and kicking ideas around,” he says. “But if you want to really see the benefits of AI, it needs to be something that everybody can engage in and that solves for many different use cases.”

In Partridge’s view, the challenge of turning boutique pilots into organization-wide systems is uniquely suited to the IT function’s core competencies—and it’s a leadership opportunity the function can’t afford to sit out. “IT takes things that are small-scale and implements the discipline required to run them at scale,” he says. “So, IT organizations really need to lean into this debate.”

For IT teams content to linger on the sidelines, history offers a cautionary tale from the last major infrastructure shift: enterprise migration to the cloud. Many IT departments sat out decision-making during the early cloud adoption wave a decade ago, while business units independently deployed cloud services. This led to fragmented systems, redundant spending, and security gaps that took years to untangle. The same dynamic threatens to repeat with AI, as different teams experiment with tools and models outside IT’s purview. This phenomenon—sometimes called shadow AI—describes environments where pilots proliferate without oversight or governance.

Partridge believes that most organizations are already operating in the “Run” quadrant in some capacity, as employees will use AI tools whether or not they’re officially authorized to. Rather than shut down experimentation, it is now IT’s mandate to bring structure to it. Enterprises must architect a data platform strategy that brings together enterprise data with guardrails, a governance framework, and the accessibility needed to feed AI. It’s also critical to keep standardizing infrastructure (such as private cloud AI platforms), protecting data integrity, and safeguarding brand trust, all while enabling the speed and flexibility that AI applications demand. These are the requirements for reaching the final milestone: AI that’s truly in production.

For teams on the path to that goal, Reichenbach distills what success requires: “It comes down to knowing where you play: when to Run external models smarter, when to apply RAG to make them more informed, where to invest to unlock Riches from your own data and models, and when to Regulate what you don’t control. The winners will be those who bring clarity to all quadrants and align technology ambition with governance and value creation.”

For more, register to watch MIT Technology Review’s EmTech AI Salon, featuring HPE. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Read More »

Google’s new Gemini 3 “vibe-codes” responses and comes with its own agent

EXECUTIVE SUMMARY

Google today unveiled Gemini 3, a major upgrade to its flagship multimodal model. The firm says the new model is better at reasoning, has more fluid multimodal capabilities (the ability to work across voice, text, or images), and will work like an agent.

The previous model, Gemini 2.5, supports multimodal input: users can feed it images, handwriting, or voice. But it usually requires explicit instructions about the format the user wants back, and it defaults to plain text regardless.

Gemini 3, by contrast, introduces what Google calls “generative interfaces,” which allow the model to make its own choices about what kind of output fits the prompt best, assembling visual layouts and dynamic views on its own instead of returning a block of text.

Ask for travel recommendations and it may spin up a website-like interface inside the app, complete with modules, images, and follow-up prompts such as “How many days are you traveling?” or “What kinds of activities do you enjoy?” It also presents clickable options based on what you might want next.
When asked to explain a concept, Gemini 3 may sketch a diagram or generate a simple animation on its own if it believes a visual is more effective. “Visual layout generates an immersive, magazine-style view complete with photos and modules,” says Josh Woodward, VP of Google Labs, Gemini, and AI Studio. “These elements don’t just look good but invite your input to further tailor the results.”
With Gemini 3, Google is also introducing Gemini Agent, an experimental feature designed to handle multi-step tasks directly inside the app. The agent can connect to services such as Google Calendar, Gmail, and Reminders. Once granted access, it can execute tasks like organizing an inbox or managing schedules. Similar to other agents, it breaks tasks into discrete steps, displays its progress in real time, and pauses for approval from the user before continuing. Google describes the feature as a step toward “a true generalist agent.” It will be available on the web for Google AI Ultra subscribers in the US starting November 18.

The overall approach can seem a lot like “vibe coding,” where users describe an end goal in plain language and let the model assemble the interface or code needed to get there.

The update also ties Gemini more deeply into Google’s existing products. In Search, a limited group of Google AI Pro and Ultra subscribers can now switch to Gemini 3 Pro, the reasoning variant of the new model, to receive deeper, more thorough AI-generated summaries that rely on the model’s reasoning rather than the existing AI Mode. For shopping, Gemini will now pull from Google’s Shopping Graph—which the company says contains more than 50 billion product listings—to generate its own recommendation guides. Users just need to ask a shopping-related question or search a shopping-related phrase, and the model assembles an interactive, Wirecutter-style product recommendation piece, complete with prices and product details, without redirecting to an external site.

For developers, Google is also pushing single-prompt software generation further. The company introduced Google Antigravity, a development platform that acts as an all-in-one space where code, tools, and workflows can be created and managed from a single prompt.

Derek Nee, CEO of Flowith, an agentic AI application, told MIT Technology Review that Gemini 3 Pro addresses several gaps in earlier models. Improvements include stronger visual understanding, better code generation, and better performance on long tasks—features he sees as essential for developers of AI apps and agents. “Given its speed and cost advantages, we’re integrating the new model into our product,” he says. “We’re optimistic about its potential, but we need deeper testing to understand how far it can go.”
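For developers who want to experiment, Gemini models are reachable programmatically through Google's google-genai Python SDK. The snippet below is a minimal sketch; the exact Gemini 3 model identifier is not given in the article, so the string used here is a placeholder to replace with the published name.

```python
# Minimal sketch of calling a Gemini model with Google's google-genai Python SDK.
# The model identifier below is a placeholder; substitute the published Gemini 3
# model name, which the article does not specify.
from google import genai

client = genai.Client()  # reads the API key from the environment

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder identifier
    contents="Plan a three-day trip to Kyoto and suggest how to lay out the results.",
)
print(response.text)
```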

Read More »

Networking for AI: Building the foundation for real-time intelligence

In partnership with HPE

The Ryder Cup is an almost-century-old tournament pitting Europe against the United States in an elite showcase of golf skill and strategy. At the 2025 event, nearly a quarter of a million spectators gathered to watch three days of fierce competition on the fairways.

From a technology and logistics perspective, pulling off an event of this scale is no easy feat. The Ryder Cup’s infrastructure must accommodate the tens of thousands of network users who flood the venue (this year, at Bethpage Black in Farmingdale, New York) every day. To manage this IT complexity, Ryder Cup engaged technology partner HPE to create a central hub for its operations. The solution centered around a platform where tournament staff could access data visualization supporting operational decision-making. This dashboard, which leveraged a high-performance network and private-cloud environment, aggregated and distilled insights from diverse real-time data feeds.

It was a glimpse into what AI-ready networking looks like at scale—a real-world stress test with implications for everything from event management to enterprise operations. While models and data readiness get the lion’s share of boardroom attention and media hype, networking is a critical third leg of successful AI implementation, explains Jon Green, CTO of HPE Networking. “Disconnected AI doesn’t get you very much; you need a way to get data into it and out of it for both training and inference,” he says.
As businesses move toward distributed, real-time AI applications, tomorrow’s networks will need to move ever larger volumes of information at ever faster speeds. What played out on the greens at Bethpage Black represents a lesson being learned across industries: inference-ready networks are a make-or-break factor for turning AI’s promise into real-world performance.

Making a network AI inference-ready

More than half of organizations are still struggling to operationalize their data pipelines. In a recent HPE cross-industry survey of 1,775 IT leaders, 45% said they could run real-time data pushes and pulls for innovation. That is a noticeable improvement over last year’s numbers (just 7% reported having such capabilities in 2024), but there’s still work to be done to connect data collection with real-time decision-making.
The network may hold the key to further narrowing that gap. Part of the solution will likely come down to infrastructure design. While traditional enterprise networks are engineered to handle the predictable flow of business applications—email, browsers, file sharing, etc.—they’re not designed to field the dynamic, high-volume data movement required by AI workloads. Inferencing in particular depends on shuttling vast datasets between multiple GPUs with supercomputer-like precision.

“There’s an ability to play fast and loose with a standard, off-the-shelf enterprise network,” says Green. “Few will notice if an email platform is half a second slower than it might’ve been. But with AI transaction processing, the entire job is gated by the last calculation taking place. So it becomes really noticeable if you’ve got any loss or congestion.”

Networks built for AI, therefore, must operate with a different set of performance characteristics, including ultra-low latency, lossless throughput, specialized equipment, and adaptability at scale. AI’s distributed nature adds another wrinkle, putting new demands on the seamless flow of data.

The Ryder Cup was a vivid demonstration of this new class of networking in action. During the event, a Connected Intelligence Center was put in place to ingest data from ticket scans, weather reports, GPS-tracked golf carts, concession and merchandise sales, spectator and consumer queues, and network performance. Additionally, 67 AI-enabled cameras were positioned throughout the course. Inputs were analyzed through an operational intelligence dashboard that provided staff with an instantaneous view of activity across the grounds.

“The tournament is really complex from a networking perspective, because you have many big open areas that aren’t uniformly packed with people,” explains Green. “People tend to follow the action. So in certain areas, it’s really dense with lots of people and devices, while other areas are completely empty.”

To handle that variability, engineers built out a two-tiered architecture. Across the sprawling venue, more than 650 WiFi 6E access points, 170 network switches, and 25 user experience sensors worked together to maintain continuous connectivity and feed a private cloud AI cluster for live analytics. The front-end layer connected cameras, sensors, and access points to capture live video and movement data, while a back-end layer—located within a temporary on-site data center—linked GPUs and servers in a high-speed, low-latency configuration that effectively served as the system’s brain. Together, the setup enabled both rapid on-the-ground responses and data collection that could inform future operational planning. “AI models also were available to the team which could process video of the shots taken and help determine, from the footage, which ones were the most interesting,” says Green.

Physical AI and the return of on-prem intelligence

If time is of the essence for event management, it’s even more critical in contexts where safety is on the line—for instance, a self-driving car making a split-second decision to accelerate or brake. In planning for the rise of physical AI, where applications move off screens and onto factory floors and city streets, a growing number of enterprises are rethinking their architectures. Instead of sending data to centralized clouds for inference, some are deploying edge-based AI clusters that process information closer to where it is generated.
Data-intensive training may still occur in the cloud, but inferencing happens on-site.
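The cloud-versus-edge decision ultimately comes down to a latency budget. The toy comparison below uses made-up numbers purely to illustrate the point Green makes next about controlling factory machinery; none of the figures come from HPE or the Ryder Cup deployment.

```python
# Toy latency-budget comparison for physical AI. All numbers are invented for
# illustration; they are not measurements from HPE or the Ryder Cup deployment.
CONTROL_DEADLINE_MS = 10.0  # assumed budget for a machine-control decision

candidate_paths = {
    "on-prem edge cluster": 2.0 + 3.0,    # assumed local round trip + inference time
    "distant cloud region": 40.0 + 3.0,   # assumed WAN round trip + inference time
}

for path, latency_ms in candidate_paths.items():
    verdict = "fits" if latency_ms <= CONTROL_DEADLINE_MS else "too slow"
    print(f"{path}: ~{latency_ms:.0f} ms -> {verdict} within a {CONTROL_DEADLINE_MS:.0f} ms control loop")
```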

This hybrid approach is fueling a wave of operational repatriation, as workloads once relegated to the cloud return to on-premises infrastructure for speed, security, sovereignty, and cost reasons. “We’ve had an out-migration of IT into the cloud in recent years, but physical AI is one of the use cases that we believe will bring a lot of that back on-prem,” predicts Green, giving the example of an AI-infused factory floor, where a round trip of sensor data to the cloud would be too slow to safely control automated machinery. “By the time processing happens in the cloud, the machine has already moved,” he explains.

There’s data to back up Green’s projection: research from Enterprise Research Group shows that 84% of respondents are reevaluating application deployment strategies due to the growth of AI. Market forecasts also reflect this shift. According to IDC, the AI infrastructure market is expected to reach $758 billion by 2029.

AI for networking and the future of self-driving infrastructure

The relationship between networking and AI is circular: modern networks make AI at scale possible, but AI is also helping make networks smarter and more capable. “Networks are some of the most data-rich systems in any organization,” says Green. “That makes them a perfect use case for AI. We can analyze millions of configuration states across thousands of customer environments and learn what actually improves performance or stability.”

At HPE, for example, which has one of the largest network telemetry repositories in the world, AI models analyze anonymized data collected from billions of connected devices to identify trends and refine behavior over time. The platform processes more than a trillion telemetry points each day, which means it can continuously learn from real-world conditions.

The concept, broadly known as AIOps (AI-driven IT operations), is changing how enterprise networks are managed across industries. Today, AI surfaces insights as recommendations that administrators can choose to apply with a single click. Tomorrow, those same systems might automatically test and deploy low-risk changes themselves. That long-term vision, Green notes, is referred to as a “self-driving network”—one that handles the repetitive, error-prone tasks that have historically plagued IT teams.

“AI isn’t coming for the network engineer’s job, but it will eliminate the tedious stuff that slows them down,” he says. “You’ll be able to say, ‘Please go configure 130 switches to solve this issue,’ and the system will handle it. When a port gets stuck or someone plugs a connector in the wrong direction, AI can detect it—and in many cases, fix it automatically.”

Digital initiatives now depend on how effectively information moves. Whether coordinating a live event or streamlining a supply chain, the performance of the network increasingly defines the performance of the business. Building that foundation today will separate the organizations that merely pilot AI from those that scale it.
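As a rough illustration of the AIOps pattern described above, the sketch below flags an anomalous spike in interface error counters and surfaces a human-approvable recommendation. The telemetry fields, thresholds, and wording are invented for the example; they are not HPE's actual schema or product behavior.

```python
# Rough sketch of the AIOps pattern described above: detect an anomalous spike in
# interface error counters and surface a recommendation for operator approval.
# Field names, thresholds, and wording are invented; this is not HPE's schema.
from statistics import mean, stdev

def recommend(port: str, error_counts: list[int], z_threshold: float = 3.0) -> str | None:
    """Return a remediation suggestion if the latest error count is a statistical outlier."""
    history, latest = error_counts[:-1], error_counts[-1]
    mu, sigma = mean(history), stdev(history)
    if sigma > 0 and (latest - mu) / sigma > z_threshold:
        return (f"Port {port}: errors spiked to {latest} (baseline ~{mu:.0f}). "
                "Suggest resetting the port or checking the transceiver. Apply?")
    return None

telemetry = {"eth1/7": [3, 2, 4, 3, 2, 3, 41], "eth1/8": [1, 2, 1, 2, 1, 2, 2]}
for port, counts in telemetry.items():
    suggestion = recommend(port, counts)
    if suggestion:
        print(suggestion)  # in the real workflow, an admin approves with one click
```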
For more, register to watch MIT Technology Review’s EmTech AI Salon, featuring HPE. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Read More »

The Download: AI-powered warfare, and how embryo care is changing

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The State of AI: How war will be changed forever

—Helen Warrell & James O’Donnell

It is July 2027, and China is on the brink of invading Taiwan. Autonomous drones with AI targeting capabilities are primed to overpower the island’s air defenses as a series of crippling AI-generated cyberattacks cut off energy supplies and key communications. In the meantime, a vast disinformation campaign enacted by an AI-powered pro-Chinese meme farm spreads across global social media, deadening the outcry at Beijing’s act of aggression.

Scenarios such as this have brought dystopian horror to the debate about the use of AI in warfare. Military commanders hope for a digitally enhanced force that is faster and more accurate than human-directed combat.
But there are fears that as AI assumes an increasingly central role, these same commanders will lose control of a conflict that escalates too quickly and lacks ethical or legal oversight. Read the full story.

This is the third edition of The State of AI, our subscriber-only collaboration between the Financial Times & MIT Technology Review examining the ways in which AI is reshaping global power. Every Monday, writers from both publications will debate one aspect of the generative AI revolution reshaping global power. While subscribers to The Algorithm, our weekly AI newsletter, get access to an extended excerpt, subscribers to MIT Technology Review are able to read the whole thing. Sign up here to receive future editions every Monday.
Job titles of the future: AI embryologist

Embryologists are the scientists behind the scenes of in vitro fertilization who oversee the development and selection of embryos, prepare them for transfer, and maintain the lab environment. They’ve been a critical part of IVF for decades, but their job has gotten a whole lot busier in recent years as demand for the fertility treatment skyrockets and clinics struggle to keep up.

Klaus Wiemer, a veteran embryologist and IVF lab director, believes artificial intelligence might help by predicting embryo health in real time and unlocking new avenues for productivity in the lab. Read the full story.

—Amanda Smith

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Big Tech’s job cuts are a warning sign
They’re a canary down the mine for other industries. (WP $)
+ Americans appear to feel increasingly unsettled by AI. (WSJ $)
+ Global fund managers worry companies are overinvesting in the technology. (FT $)

2 Iran is attempting to stimulate rain to end its deadly drought
But critics warn that cloud seeding is a challenging process. (New Scientist $)
+ Parts of western Iran are now experiencing flooding. (Reuters)
+ Why it’s so hard to bust the weather control conspiracy theory. (MIT Technology Review)

3 Air taxi startups may produce new aircraft for war zones
The US Army has announced its intention to acquire most of its weapons from startups, not major contractors. (The Information $)
+ US firm Joby Aviation is launching flying taxis in Dubai. (NBC News)
+ This giant microwave may change the future of war. (MIT Technology Review)

4 Weight-loss drug maker Eli Lilly is likely to cross a trillion-dollar valuation
As it prepares to launch a pill alternative to its injections. (WSJ $)
+ Archrival Novo Nordisk A/S is undercutting the company to compete. (Bloomberg $)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)

5 What’s going on with the US TikTok ban?
Even the lawmakers in charge don’t seem to know. (The Verge)

6 It’s getting harder to grow cocoa
Mass tree felling and lower rainfall in the Congo Basin are to blame. (FT $)
+ Industrial agriculture activists are everywhere at COP30. (The Guardian)
+ Africa fights rising hunger by looking to foods of the past. (MIT Technology Review)

7 Russia is cracking down on its critical military bloggers
Armchair critics are facing jail time if they refuse to apologize. (Economist $)

8 Why the auto industry is so obsessed with humanoid robots
It’s not just Tesla—plenty of others want to get in on the act. (The Atlantic $)
+ China’s EV giants are betting big on humanoid robots. (MIT Technology Review)

9 Indian startups are challenging ChatGPT’s AI dominance
They support a far wider range of languages than the large AI firms’ models. (Rest of World)
+ OpenAI is huge in India. Its models are steeped in caste bias. (MIT Technology Review)

10 These tiny sensors track butterflies on their journey to Mexico 🦋
Scientists hope it’ll shed some light on their mysterious life cycles. (NYT $)
Quote of the day
“I think no company is going to be immune, including us.”
—Sundar Pichai, CEO of Google, warns the BBC about the precarious nature of the AI bubble.

One more thing

How a 1980s toy robot arm inspired modern robotics
—Jon Keegan

As a child of an electronic engineer, I spent a lot of time in our local Radio Shack as a kid. While my dad was locating capacitors and resistors, I was in the toy section. It was there, in 1984, that I discovered the best toy of my childhood: the Armatron robotic arm.

Described as a “robot-like arm to aid young masterminds in scientific and laboratory experiments,” it was a legit robotic arm. And the bold look and function of Armatron made quite an impression on many young kids who would one day have a career in robotics. Read the full story.
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ The US Library of Congress has acquired some handwritten drafts of iconic songs from The Wizard of Oz.
+ This interesting dashboard tracks the top 500 musical artists in the world right now—some of the listings may surprise you (or just make you feel really old).
+ Cult author Chris Kraus shares what’s floating her boat right now.
+ The first images of the forthcoming Legend of Zelda film are here!

Read More »


Texas loan fund tops 3.5 GW of gas capacity secured with latest NRG deal

NRG Energy will develop a 455-MW gas plant near Houston, backed by a low-interest loan from the state of Texas, the company said Thursday. It is the sixth loan finalized through the Texas Energy Fund program for the Electric Reliability Council of Texas market. Voters authorized the fund in 2023. The new generation will be built at NRG’s existing Greens Bayou Generating Station in Harris County, and is expected online in 2028.

New generation backed by the Texas Energy Fund, across six projects, now exceeds 3.5 GW, said officials at the Public Utility Commission of Texas, which manages the program. The Lone Star State is experiencing “unprecedented growth,” Robert Gaudette, NRG’s executive vice president and president of NRG Business and Wholesale Operations, said in a statement.

Greens Bayou is NRG’s third project to receive support from the loan fund, bringing the company to about 1.5 GW of total capacity and up to $1.15 billion in low-interest loans. Under the loan agreement for Greens Bayou, total project costs are estimated to be less than $617 million, and the PUCT will provide a 20-year loan of up to $370 million, or 60% of total cost, at a 3% interest rate.

In August, the PUCT tapped NRG for a TEF loan of up to $216 million to develop two gas units totaling 456 MW of capacity at its existing TH Wharton Generating Station in Houston. And in September the company was selected for a loan of $562 million to develop a 721-MW gas plant near Baytown.

There are another 11 Texas Energy Fund applications moving through a due diligence review process, said state officials, representing an additional 5.4 GW of possible new capacity in the ERCOT footprint.
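The reported loan terms are easy to sanity-check: 60% of a roughly $617 million project cost lands at about $370 million, the ceiling the PUCT set. The short calculation below also adds an illustrative annual payment under standard 20-year amortization at 3%, an assumption the article does not confirm.

```python
# Sanity check of the Greens Bayou loan figures reported above. The annual-payment
# line assumes standard amortization, which the article does not confirm; it is
# included only to give a sense of scale.
project_cost = 617_000_000   # "less than $617 million"
loan_share = 0.60            # PUCT covers up to 60% of total cost
rate = 0.03                  # 3% interest
years = 20                   # 20-year loan

principal = project_cost * loan_share
annual_payment = principal * rate / (1 - (1 + rate) ** -years)

print(f"Loan principal: ${principal / 1e6:.0f}M (article reports up to $370M)")
print(f"Illustrative annual payment: ${annual_payment / 1e6:.1f}M over {years} years")
```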

Read More »

Microsoft’s Fairwater Atlanta and the Rise of the Distributed AI Supercomputer

Microsoft’s second Fairwater data center in Atlanta isn’t just “another big GPU shed.” It represents the other half of a deliberate architectural experiment: proving that two massive AI campuses, separated by roughly 700 miles, can operate as one coherent, distributed supercomputer.

The Atlanta installation is the latest expression of Microsoft’s AI-first data center design: purpose-built for training and serving frontier models rather than supporting mixed cloud workloads. It links directly to the original Fairwater campus in Wisconsin, as well as to earlier generations of Azure AI supercomputers, through a dedicated AI WAN backbone that Microsoft describes as the foundation of a “planet-scale AI superfactory.”

Inside a Fairwater Site: Preparing for Multi-Site Distribution

Efficient multi-site training only works if each individual site behaves as a clean, well-structured unit. Microsoft’s intra-site design is deliberately simplified so that cross-site coordination has a predictable abstraction boundary—essential for treating multiple campuses as one distributed AI system. Each Fairwater installation presents itself as a single, flat, high-regularity cluster:

- Up to 72 NVIDIA Blackwell GPUs per rack, using GB200 NVL72 rack-scale systems. NVLink provides the ultra-low-latency, high-bandwidth scale-up fabric within the rack, while the Spectrum-X Ethernet stack handles scale-out.
- Each rack delivers roughly 1.8 TB/s of GPU-to-GPU bandwidth and exposes a multi-terabyte pooled memory space addressable via NVLink—critical for large-model sharding, activation checkpointing, and parallelism strategies.
- Racks feed into a two-tier Ethernet scale-out network offering 800 Gbps GPU-to-GPU connectivity with very low hop counts, engineered to scale to hundreds of thousands of GPUs without encountering the classic port-count and topology constraints of traditional Clos fabrics.

Microsoft confirms that the fabric relies heavily on:

- SONiC-based switching and a broad commodity Ethernet ecosystem to avoid vendor lock-in and accelerate architectural iteration.
- Custom network optimizations, such as packet trimming, packet spray, high-frequency telemetry, and advanced congestion-control mechanisms, to prevent collective

Read More »

StanChart Flags ‘Fragility of Russian Supply’

In a market that continues to be dominated by bearish supply sentiment, one of the few bullish drivers is the fragility of Russian supply. That’s what Standard Chartered Bank Energy Research Head Emily Ashford said in a report sent to Rigzone by the Standard Chartered team earlier this week, adding that both crude and refined product exports are being curtailed by the pressure of focused sanctions on Russian oil producers, a lowered crude oil price cap, and ongoing missile and drone attacks on oil and gas export infrastructure.

“On 14 November the port of Novorossiysk, in Krasnodar Krai, was targeted by missiles and drones, with a focus on the Sheskharis oil terminal,” Ashford highlighted in the report, noting that the terminal “has an export capacity of c.2.2 million barrels per day” and that “loadings were suspended for two days”.

“Ukraine’s attacks on a series of Black Sea export terminals have highlighted the vulnerability of exports via Russia’s southern route,” Ashford noted in the report. “This is particularly important, with weather closing down the Northern Sea Route via the Arctic over the winter. The winter transit routes to Asia are then limited to the Suez Canal and take, on average, 10 days longer,” Ashford added.

“These longer transits are a contributing factor to the increased volumes of oil on water, which have increased by 294 million barrels year on year to an all-time high of 1.37 billion barrels as of 14 November, according to data from Vortexa,” Ashford continued.

In the report, Ashford went on to note that Russian crude exports “have … remained relatively steady” but added that Standard Chartered expects to see “a sharp slowdown after the 21 November deadline for dealings with the two sanctioned oil producers”.

“This is likely also a contributing factor to the volumes of

Read More »

Fission Forward: Next-Gen Nuclear Power Developments for the AI Data Center Boom

Constellation proposes to begin with 1.5 GW of fast-tracked projects, including 800 MW of battery energy storage and 700 MW of new natural gas generation to address short-term reliability needs. The remaining 4.3 GW represents longer-term investment at the Calvert Cliffs Clean Energy Center: extending both units for an additional 20 years beyond their current 2034 and 2036 license expirations, implementing a 10% uprate that would add roughly 190 MW of output, and pursuing 2 GW of next-generation nuclear at the existing site.

For Maryland, a state defined by a dense I-95 fiber corridor, accelerating data center buildout, and rising AI-driven load, the plan could be transformative. If Constellation moves from “option” to “program,” the company estimates that 70% of the state’s electricity supply could come from clean energy sources, positioning Maryland as a top-tier market for 24/7 carbon-free power.

TerraPower’s Natrium SMR Clears a Key Federal Milestone

On Oct. 23, the Nuclear Regulatory Commission issued the final environmental impact statement (FEIS) for TerraPower’s Natrium small modular reactor in Kemmerer, Wyoming. While not a construction permit, FEIS completion removes a major element of federal environmental risk and keeps the project on track for the next phase of NRC review. TerraPower and its subsidiary, US SFR Owner, LLC, originally submitted the construction permit application on March 28, 2024.

Natrium is a sodium-cooled fast reactor producing roughly 345 MW of electric output, paired with a molten-salt thermal-storage system capable of boosting generation to about 500 MW during peak periods. The design combines firm baseload power with flexible, dispatchable capability, an attractive profile for hyperscalers evaluating 24/7 clean energy options in the western U.S. The project is part of the DOE’s Advanced Reactor Demonstration Program, intended to replace retiring coal capacity in PacifiCorp’s service territory while showcasing advanced fission technology. For operators planning multi-GW

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters, and Energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE