Your Gateway to Power, Energy, Datacenters, Bitcoin and AI
Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.
Discover What Matters Most to You

AI
Bitcoin
Datacenter
Energy
Featured Articles

Data Center Pushback Watch: Community Opposition and Regulatory Challenges in Data Center Expansion – Q4 2025
DTE argues that the system can absorb the additional load without raising customer rates. Yet that confidence rests on a series of assumptions: from how many substations, transformers, and transmission segments must be upgraded, to how winter demand spikes are modeled, to whether projected load materializes at full scale. Minimum revenue guarantees soften downside risk, but they don’t erase it. A forthcoming FERC rulemaking could further reshape how utilities evaluate, and recover costs from, AI-scale interconnection requests.
Bloomberg: Amazon’s Footprint is Wider — and More Distributed — than Public Perception Suggests
Reporting from Bloomberg and SourceMaterial, based on internal documents, indicates that Amazon’s cloud infrastructure stretches far beyond its well-known hyperscale campuses. AWS is operating more than 900 facilities across 50+ countries, a count that includes not only owned or leased mega-sites, but also a vast network of colocation deployments where Amazon rents power and space from third-party operators. Bloomberg highlights that this colocation layer, estimated at roughly one-fifth of AWS compute capacity as of last year, has been a key lever in Amazon’s ability to scale rapidly and globally. While AWS does not publicly break out those numbers, the reporting suggests Amazon’s architecture is far more distributed than generally assumed. Public narratives often center on the flagship campuses in Northern Virginia, Oregon, or Ohio; the documents instead portray a hub-and-spoke model shaped by hundreds of partner facilities. The timing is notable. As AI infrastructure demand accelerates for training clusters, inference fleets, and low-latency edge access, Amazon appears to be leaning into a topology designed for reach and speed, not just scale in a handful of metros.
Springdale Borough: Approval at the Planning Stage, but Residents Push Back
Outside Pittsburgh, Springdale Borough is facing a growing fight over a proposed 565,000-square-foot AI-focused data center planned for the former Cheswick Generating

Building the Thermal Backbone of AI: Tracking the Latest Data Center Liquid Cooling Deals and Deployments
Over the past three years, we’ve tracked how liquid cooling has moved from the margins of the white space to the critical path of AI data center design. And over the past quarter, the deals and product launches crossing Data Center Frontier’s radar all point in the same direction: liquid is becoming the organizing principle for how operators think about power, density, and risk. From OEMs and HVAC majors buying their way deeper into liquid-to-chip, to capital flowing into microfluidic cold plates and high-efficiency chillers, to immersion systems pushing out to the edge and even into battery storage, the thermal stack around AI is being rebuilt in real time. What’s striking in this latest wave of announcements is not just the technology, but the scale and specificity: 300MW two-phase campuses, 10MW liquid-to-chip AI halls at cable landing stations, stainless-steel chillers designed to eliminate in-row CDUs. Here’s a look at the most consequential recent moves shaping the next phase of data center liquid cooling.
Trane Technologies to Acquire Stellar Energy Digital: Buying a Liquid-to-Chip Platform
Trane Technologies is making a decisive move up the liquid cooling stack with its just-announced agreement to acquire the Stellar Energy Digital business, a Jacksonville-based specialist in turnkey liquid-to-chip cooling plants and coolant distribution units. Stellar Energy’s Digital business — roughly 700 employees and two Jacksonville assembly operations — designs and builds modular cooling plants, central utility plants and CDUs for liquid-cooled data centers and other complex enterprise environments. Trane is clearly buying more than incremental capacity; it’s acquiring a platform that’s already oriented around prefab, AI-era deployments. Karin De Bondt, Trane’s Chief Strategy Officer, framed the deal squarely around the shift DCF has been tracking all year: data center customers want repeatable, modular systems they can deploy at speed. “The data center ecosystem is growing

Inside Anthropic’s Multi-Cloud AI Factory: How AWS Trainium and Google TPUs Shape Its Next Phase
A Massive TPU Commitment—and a Strategic Signal From Google
This is not a casual “we spun up a few pods on GCP.” Anthropic is effectively reserving a substantial share of Google’s future TPU capacity and tying that scale directly into Google Cloud’s enterprise AI go-to-market. The compute commitment has been described as being worth tens of billions of dollars over the life of the agreement, signaling Google’s intention to anchor Anthropic as a marquee external TPU customer.
Google’s Investment Outlook: Still Early, But Potentially Transformational
Reports from Reuters (citing Business Insider) on November 6 indicate that Google is exploring whether to deepen its financial investment in Anthropic. The discussions are early and non-binding, but the structures under consideration (including convertible notes or a new priced round paired with additional TPU and cloud commitments) suggest a valuation that could exceed $350 billion. Google has not commented publicly. This would come on top of Alphabet’s existing position, reported at more than $3 billion invested and 14% ownership, and follows Anthropic’s September 2025 $13 billion Series F, which valued the company at $183 billion. None of these prospective terms are final, but the direction of travel is clear: Google is looking to bind TPU productization, cloud consumption, and strategic alignment more tightly together.
Anthropic’s Multi-Cloud Architecture Reaches Million-Accelerator Scale
Anthropic’s strategy depends on not being bound to a single hyperscaler or silicon roadmap. In November 2024, the company named AWS its primary cloud and training partner, a deal that brought Amazon’s total investment to $8 billion and committed Anthropic to Trainium for its largest models. The two companies recently activated Project Rainier, an AI supercomputer cluster in Indiana with roughly 500,000 Trainium2 chips, with Anthropic expected to scale to more than 1 million Trainium2 chips on AWS by the end of 2025. The Google Cloud

Data Center Jobs: Engineering, Construction, Commissioning, Sales, Field Service and Facility Tech Jobs Available in Major Data Center Hotspots
Each month Data Center Frontier, in partnership with Pkaza, posts some of the hottest data center career opportunities in the market. Here’s a look at some of the latest data center jobs posted on the Data Center Frontier jobs board, powered by Pkaza Critical Facilities Recruiting. Looking for Data Center Candidates? Check out Pkaza’s Active Candidate / Featured Candidate Hotlist.
Data Center Facility Technician (All Shifts Available) - Impact, TX
This position is also available in: Ashburn, VA; Abilene, TX; Needham, MA; and New York, NY. Navy Nuke / Military Vets leaving service accepted! This opportunity is working with a leading mission-critical data center provider. This firm provides data center solutions custom-fit to the requirements of their clients’ mission-critical operational facilities. They provide reliability of mission-critical facilities for many of the world’s largest organizations, supporting enterprise clients, colo providers and hyperscale companies. This opportunity provides a career-growth-minded role with exciting projects, leading-edge technology and innovation, as well as competitive salaries and benefits.
Electrical Commissioning Engineer - Montvale, NJ
This traveling position is also available in: New York, NY; White Plains, NY; Richmond, VA; Ashburn, VA; Charlotte, NC; Atlanta, GA; Hampton, GA; Fayetteville, GA; New Albany, OH; Cedar Rapids, IA; Phoenix, AZ; Salt Lake City, UT; Dallas, TX; or Chicago, IL. *** ALSO looking for LEAD EE and ME CxA Agents and CxA PMs. *** Our client is an engineering design and commissioning company with a national footprint that specializes in MEP critical facilities design. They provide design, commissioning, consulting and management expertise in the critical facilities space, with a focus on reliability, energy efficiency, sustainable design and LEED expertise for enterprise, colocation and hyperscale companies. This career-growth-minded opportunity offers exciting projects with leading-edge technology and innovation as well as competitive salaries and

Flex’s Integrated Data Center Bet: How a Manufacturing Giant Plans to Reshape AI-Scale Infrastructure
At this year’s OCP Global Summit, Flex made a declaration that resonated across the industry: the era of slow, bespoke data center construction is over. AI isn’t just stressing the grid or forcing new cooling techniques—it’s overwhelming the entire design-build process. To meet this moment, Flex introduced a globally manufactured, fully integrated data center platform aimed directly at multi-gigawatt AI campuses. The company claims it can cut deployment timelines by as much as 30 percent by shifting integration upstream into the factory and unifying power, cooling, compute, and lifecycle services into pre-engineered modules. This is not a repositioning on the margins. Flex is effectively asserting that the future hyperscale data center will be manufactured like a complex industrial system, not built like a construction project. On the latest episode of The Data Center Frontier Show, we spoke with Rob Campbell, President of Flex Communications, Enterprise & Cloud, and Chris Butler, President of Flex Power, about why Flex believes this new approach is not only viable but necessary in the age of AI. The discussion revealed a company leaning heavily on its global manufacturing footprint, its cross-industry experience, and its expanding cooling and power technology stack to redefine what deployment speed and integration can look like at scale.
AI Has Broken the Old Data Center Model
From the outset, Campbell and Butler made clear that Flex’s strategy is a response to a structural shift. AI workloads no longer allow power, cooling, and compute to evolve independently. Densities have jumped so quickly—and thermals have risen so sharply—that the white space, gray space, and power yard are now interdependent engineering challenges. Higher chip TDPs, liquid-cooled racks approaching one to two megawatts, and the need to assemble entire campuses in record time have revealed deep fragility in traditional workflows. As Butler put it, AI

The Future of Hyperscale: Neoverse Joins NVLink Fusion as SC25 Accelerates Rack-Scale AI Architectures
Neoverse’s Expanding Footprint and the Power-Efficiency Imperative
With Neoverse deployments now approaching roughly 50% of all compute shipped into top hyperscalers in 2025 (representing more than a billion Arm cores) and with nation-scale AI campuses such as the Stargate project already anchored on Arm compute, the addition of NVLink Fusion becomes a pivotal extension of the Neoverse roadmap. Partners can now connect custom Arm CPUs to their preferred NVIDIA accelerators across a coherent, high-bandwidth, rack-scale fabric. Arm characterized the shift as a generational inflection point in data-center architecture, noting that “power—not FLOPs—is the bottleneck,” and that future design priorities hinge on maximizing “intelligence per watt.” Ian Buck, vice president and general manager of accelerated computing at NVIDIA, underscored the practical impact: “Folks building their own Arm CPU, or using an Arm IP, can actually have access to NVLink Fusion—be able to connect that Arm CPU to an NVIDIA GPU or to the rest of the NVLink ecosystem—and that’s happening at the racks and scale-up infrastructure.” Despite the expanded design flexibility, this is not being positioned as an open interconnect ecosystem. NVIDIA continues to control the NVLink Fusion fabric, and all connections ultimately run through NVIDIA’s architecture. For data-center planners, the SC25 announcement translates into several concrete implications:
1. NVIDIA “Grace-style” Racks Without Buying Grace
With NVLink Fusion now baked into Neoverse, hyperscalers and sovereign operators can design their own Arm-based control-plane or pre-processing CPUs that attach coherently to NVIDIA GPU domains—such as NVL72 racks or HGX B200/B300 systems—without relying on Grace CPUs. A rack-level architecture might now resemble:
- Custom Neoverse SoC for ingest, orchestration, agent logic, and pre/post-processing
- NVLink Fusion fabric
- Blackwell GPU islands and/or NVLink-attached custom accelerators (Marvell, MediaTek, others)
This decouples CPU choice from NVIDIA’s GPU roadmap while retaining the full NVLink fabric. In practice, it also opens

Crude Finishes Higher on Short Covering
Oil gained, finishing the week positive as investors assessed the murky outlook for a cease-fire in Ukraine and as the commodity pushed past an important technical level. West Texas Intermediate rose 0.7% to settle above $60 a barrel, signaling that a risk premium persists as a peace deal between Russia and Ukraine remains elusive. Ukrainian negotiators continued talks with US officials in Florida for a second day, with Russia objecting to some of the points in a US-backed plan. The market is watching for progress on a settlement that could lower prices by potentially easing sanctions and boosting Russian oil flows just as an expected oversupply in the market starts to materialize. But an agreement appears distant: Ukraine took credit for an overnight attack on Russia’s Syzran refinery and the Temryuk seaport. Meanwhile, Washington reportedly lobbied European countries in an effort to block a plan to use Moscow’s frozen assets to back a massive loan for Ukraine. Adding to bullish momentum, WTI on Friday settled above its 50-day moving average, a key level of support for the commodity. Prices have also received a boost from algorithmic traders covering some of their bearish positions in recent sessions — and analysts say more buying could materialize in coming weeks. “This session should mark the first notable short covering program since algo selling activity exhausted itself, and the bar is low for subsequent CTA buying activity to hit the tapes over the coming week,” said Dan Ghali, a commodity strategist at TD Securities. Countering geopolitical risks, oversupply is putting downward pressure on prices globally. Saudi Aramco will reduce the price of its flagship Arab Light crude grade to the lowest level since 2021 for January, while Canadian oil has tumbled. And the number of crude oil rigs in the US rose by 6

ITT Agrees to Buy Lone Star’s SPX Flow in $4.8B Deal
ITT Inc. has agreed to acquire industrial equipment manufacturer SPX Flow Inc. from Lone Star Funds in a $4.775 billion cash and stock deal. The deal will consist of a combination of cash and $700 million in ITT common stock issued to Lone Star, according to a statement confirming an earlier report by Bloomberg News that the companies were nearing a deal. Charlotte, North Carolina-based SPX Flow makes products including valves and pumps under brands such as APV and Johnson Pump, as well as food processing equipment such as its Gerstenberg Schröder-branded butter maker. Lone Star Funds agreed in 2021 to take SPX Flow private for $3.8 billion including debt. The SPX Flow acquisition is the largest ever by Stamford, Connecticut-based ITT, according to data compiled by Bloomberg. ITT’s shares have gained 28% this year, giving it a market value of $14.3 billion. ITT’s history dates to 1920, with its genesis as International Telephone and Telegraph, a provider of telephone switching equipment and services, according to the company’s website. In 1995, that conglomerate was split into three divisions, including the company that became the current manufacturer of components and technology for a range of transportation, industrial and energy markets.

Energy Department Launches Breakthrough AI-Driven Biotechnology Platform at PNNL
Richland, Wash.—U.S. Secretary of Energy Chris Wright launched a new chapter to secure American leadership in autonomous biological discovery yesterday alongside scientists and private partners at Pacific Northwest National Laboratory (PNNL). As part of his visit to PNNL, Secretary Wright commissioned and signed the Anaerobic Microbial Phenotyping Platform (AMP2). PNNL scientists believe AMP2 will be the world’s largest autonomous-capable science system for anaerobic microbial experimentation. The platform supports the Trump Administration’s recently announced Genesis Mission, which calls on the Department of Energy (DOE) to transform American leadership in science and innovation with the development of artificial intelligence (AI). Built by Ginkgo Bioworks, AMP2 gives DOE scientists an unprecedented capability to explore the world of microbes—an invisible yet powerful workforce poised to boost biotech manufacturing as well as provide insights into basic life science questions. This first-of-its-kind capability will transform how the U.S. identifies, grows, and optimizes the use of microbes in days and weeks instead of years using automation and AI. “President Trump launched the Genesis Mission to ensure American leadership in science and innovation,” said Secretary Chris Wright. “This ongoing public-private partnership at PNNL will help do exactly that in the field of biotechnology. By launching AI-enabled, autonomous platforms like AMP2, our DOE National Laboratories are driving scientific breakthroughs faster than ever before and ensuring the United States leads the world in technologies that will better human lives and secure our future.” The AMP2 platform will serve as a prototype for DOE’s planned development of the larger Microbial Molecular Phenotyping Capability (M2PC). Together, the systems will establish the world’s largest autonomous microbial research infrastructure, and position the U.S. to lead in biotechnology, biomanufacturing, and next-generation materials innovation for decades to come. Secretary Wright visited PNNL as part of his ongoing tour of all 17 DOE National Laboratories. PNNL marks

Chevron, Gorgon Partners OK $2B to Drill for More Gas
Chevron Corp’s Australian unit and its joint venture partners have reached a final investment decision to further develop the massive Gorgon natural gas project in Western Australia, it said in a statement on Friday. Chevron Australia and its partners — including Exxon Mobil Corp. and Shell Plc — will spend A$3 billion ($2 billion) connecting two offshore natural gas fields to existing infrastructure and processing facilities on Barrow Island as part of the Gorgon Stage 3 development, it said in the statement. Six wells will also be drilled. Gorgon, on the remote Barrow Island in northwestern Australia, is the largest resource development in Australia’s history, and produces about 15.6 million tons of liquefied natural gas a year.

USA Crude Oil Stocks Rise Week on Week
U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), increased by 0.6 million barrels from the week ending November 21 to the week ending November 28, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report. That EIA report was released on December 3 and included data for the week ending November 28. It showed that crude oil stocks, not including the SPR, stood at 427.5 million barrels on November 28, 426.9 million barrels on November 21, and 423.4 million barrels on November 29, 2024. Crude oil in the SPR stood at 411.7 million barrels on November 28, 411.4 million barrels on November 21, and 391.8 million barrels on November 29, 2024, the report revealed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.687 billion barrels on November 28, the report showed. Total petroleum stocks were up 5.5 million barrels week on week and up 58.5 million barrels year on year, the report pointed out. “At 427.5 million barrels, U.S. crude oil inventories are about three percent below the five year average for this time of year,” the EIA noted in its latest weekly petroleum status report. “Total motor gasoline inventories increased by 4.5 million barrels from last week and are about two percent below the five year average for this time of year. Finished gasoline and blending components inventories increased last week,” it added. “Distillate fuel inventories increased by 2.1 million barrels last week and are about seven percent below the five year average for this time of year. Propane/propylene inventories decreased 0.7 million barrels from last week and are about 15 percent above the five year average for this
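The week-on-week and year-on-year moves the EIA cites follow directly from the stock levels quoted above; here is a minimal sketch of that arithmetic (figures are as quoted in the report, and the year labels are assumptions inferred from the report's December 3 release date):

```python
# Stock levels quoted in the EIA weekly petroleum status report, in million barrels.
# Year labels are assumed from context (report released December 3, prior-year point Nov 29, 2024).
commercial_crude = {"2025-11-28": 427.5, "2025-11-21": 426.9, "2024-11-29": 423.4}
spr = {"2025-11-28": 411.7, "2025-11-21": 411.4, "2024-11-29": 391.8}

def change(levels, later, earlier):
    """Change in stocks between two report dates, in million barrels."""
    return levels[later] - levels[earlier]

print(f"Commercial crude, week on week: {change(commercial_crude, '2025-11-28', '2025-11-21'):+.1f} million barrels")
print(f"Commercial crude, year on year: {change(commercial_crude, '2025-11-28', '2024-11-29'):+.1f} million barrels")
print(f"SPR, week on week: {change(spr, '2025-11-28', '2025-11-21'):+.1f} million barrels")
```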

Today’s $67 Per Barrel Is Only $44 in 2008 Dollars
Today’s $67 per barrel is only $44 per barrel in 2008-dollars. That’s what Skandinaviska Enskilda Banken AB (SEB) Chief Commodities Analyst Bjarne Schieldrop said in a SEB report sent to Rigzone by the SEB team on Wednesday. “The ‘fair price’ of oil today ($67 per barrel) is nominally not much different from the average prices over the three years to April 2008,” Schieldrop highlighted in the report. “Since then, we have had 52 percent U.S. inflation. And still the nominal fair price of oil is more or less the same. Today’s $67 per barrel is only $44 per barrel in 2008-dollars,” he added. “In real terms the world is getting cheaper and cheaper oil – to the joy of consumers and to the terror of oil producers who have to chase every possible avenue of productivity improvements to counter inflation and maintain margins,” Schieldrop continued, noting that, as they successfully do so, “the consequence is a nominal oil price not going up”. In the report, Schieldrop went on to outline that a “cost-floor of around $40 per barrel” multiplied by “a natural cost inflation-drift of 2.4 percent” comes to $0.96 per barrel. He added that, since 2008, the oil industry has been able to counter this drift with an equal amount of productivity. “The very stable five year oil price at around $67 per barrel over the past three years, and still the same today, is implying that the market is expecting the global oil industry will be able to counter an ongoing 2.4 percent inflation per year to 2030 with an equal amount of productivity,” Schieldrop said. “The world consumes 38 billion barrels per year. A productivity improvement of $0.96 per barrel equals $36 billion in productivity/year or $182 billion to 2030,” he added. Schieldrop outlined in the report that the
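For readers who want to reproduce Schieldrop's back-of-envelope math, here is a minimal sketch using the figures quoted above (all inputs come from the report as cited; the five-year horizon to 2030 is an assumption):

```python
# Back-of-envelope check of the SEB figures quoted above.
price_today = 67.0                      # $/bbl, nominal "fair price" today
inflation_since_2008 = 0.52             # 52% cumulative US inflation since 2008
price_in_2008_dollars = price_today / (1 + inflation_since_2008)
print(f"${price_today:.0f}/bbl today is about ${price_in_2008_dollars:.0f}/bbl in 2008 dollars")

cost_floor = 40.0                       # $/bbl assumed industry cost floor
cost_drift = 0.024                      # 2.4% annual cost inflation drift
drift_per_bbl = cost_floor * cost_drift
print(f"Annual cost drift: ${drift_per_bbl:.2f}/bbl")

global_demand = 38e9                    # barrels consumed per year
years_to_2030 = 5                       # assumed horizon
annual_productivity = global_demand * drift_per_bbl
print(f"Productivity needed: ${annual_productivity / 1e9:.0f}B per year, "
      f"${annual_productivity * years_to_2030 / 1e9:.0f}B to 2030")
```

Run as written, the sketch recovers the article's roughly $44 per barrel in 2008 dollars, $0.96 per barrel of annual drift, and about $36 billion per year (roughly $182 billion to 2030) of required productivity gains.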

Microsoft will invest $80B in AI data centers in fiscal 2025
And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

John Deere unveils more autonomous farm machines to address skilled labor shortage
Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has become a regular presence as a non-tech company showing off technology at the big tech trade show in Las Vegas, and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd.)
John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.
While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

2025 playbook for enterprise AI success, from agents to evals
2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.
1. Agents: the next generation of automation
AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era
OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.
Going all-in on red teaming pays practical, competitive dividends
It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Three Aberdeen oil company headquarters sell for £45m
Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell them as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024.
Trio of buildings snapped up
London-headquartered EEH Ventures was founded in 2013 and owns a number of residential properties, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. The North Sea headquarters of Middle-East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year it is planning to exit the North Sea by the end of 2029, blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030.
Aberdeen big deals
The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. Hammerson, who also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

2025 ransomware predictions, trends, and how to prepare
The Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks.
Top Ransomware Predictions for 2025:
● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices, which sound more and more realistic by adopting local accents and dialects to enhance credibility and success rates.
● The Trifecta of Social Engineering Attacks: Vishing, Ransomware and Data Exfiltration. Additionally, sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks, preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny.
● Targeted Industries Under Siege: Manufacturing, healthcare, education, and energy will remain primary targets, with no slowdown in attacks expected.
● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days.
● Ransomware Payouts Are on the Rise: In 2025, ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups specializing in designated attack tactics, and collaboration by these groups, which have entered a sophisticated profit-sharing model using Ransomware-as-a-Service.
To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies:
● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats.
● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops

Harnessing human-AI collaboration for an AI roadmap that moves beyond pilots
In partnership with Concentrix
The past year has marked a turning point in the corporate AI conversation. After a period of eager experimentation, organizations are now confronting a more complex reality: While investment in AI has never been higher, the path from pilot to production remains elusive. Three-quarters of enterprises remain stuck in experimentation mode, despite mounting pressure to convert early tests into operational gains. “Most organizations can suffer from what we like to call PTSD, or process, technology, skills and data challenges,” says Shirley Hung, partner at Everest Group. “They have rigid, fragmented workflows that don’t adapt well to change, technology systems that don’t speak to each other, talent that is really immersed in low-value tasks rather than creating high impact. And they are buried in endless streams of information, but no unified fabric to tie it all together.” The central challenge, then, lies in rethinking how people, processes, and technology work together. Across industries as different as customer experience and agricultural equipment, the same pattern is emerging: Traditional organizational structures—centralized decision-making, fragmented workflows, data spread across incompatible systems—are proving too rigid to support agentic AI. To unlock value, leaders must rethink how decisions are made, how work is executed, and what humans should uniquely contribute.
“It is very important that humans continue to verify the content. And that is where you’re going to see more energy being put into,” says Ryan Peterson, EVP and chief product officer at Concentrix. Much of the conversation centered on what can be described as the next major unlock: operationalizing human-AI collaboration. Rather than positioning AI as a standalone tool or a “virtual worker,” this approach reframes AI as a system-level capability that augments human judgment, accelerates execution, and reimagines work from end to end. That shift requires organizations to map the value they want to create; design workflows that blend human oversight with AI-driven automation; and build the data, governance, and security foundations that make these systems trustworthy.
“My advice would be to expect some delays because you need to make sure you secure the data,” says Heidi Hough, VP for North America aftermarket at Valmont. “As you think about commercializing or operationalizing any piece of using AI, if you start from ground zero and have governance at the forefront, I think that will help with outcomes.” Early adopters are already showing what this looks like in practice: starting with low-risk operational use cases, shaping data into tightly scoped enclaves, embedding governance into everyday decision-making, and empowering business leaders, not just technologists, to identify where AI can create measurable impact. The result is a new blueprint for AI maturity grounded in reengineering how modern enterprises operate. “Optimization is really about doing existing things better, but reimagination is about discovering entirely new things that are worth doing,” says Hung. Watch the webcast. This webcast is produced in partnership with Concentrix. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: political chatbot persuasion, and gene editing adverts
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
AI chatbots can sway voters better than political advertisements
The news: Chatting with a politically biased AI model is more effective than political ads at nudging both Democrats and Republicans to support presidential candidates of the opposing party, new research shows.
The catch: The chatbots swayed opinions by citing facts and evidence, but they were not always accurate—in fact, the researchers found, the most persuasive models said the most untrue things.
The findings are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections. Read the full story.
—Michelle Kim
The era of AI persuasion in elections is about to begin
—Tal Feldman is a JD candidate at Yale Law School who focuses on technology and national security. Aneesh Pappu is a PhD student and Knight-Hennessy scholar at Stanford University who focuses on agentic AI and technology policy.
The fear that elections could be overwhelmed by AI-generated realistic fake media has gone mainstream—and for good reason. But that’s only half the story. The deeper threat isn’t that AI can just imitate people—it’s that it can actively persuade people. And new research published this week shows just how powerful that persuasion can be. AI chatbots can shift voters’ views by a substantial margin, far more than traditional political advertising tends to do. In the coming years, we will see the rise of AI that can personalize arguments, test what works, and quietly reshape political views at scale. That shift—from imitation to active persuasion—should worry us deeply. Read the full story.
The ads that sell the sizzle of genetic trait discrimination
—Antonio Regalado, senior editor for biomedicine
One day this fall, I watched an electronic sign outside the Broadway-Lafayette subway station in Manhattan switch seamlessly between an ad for makeup and one promoting the website Pickyourbaby.com, which promises a way for potential parents to use genetic tests to influence their baby’s traits, including eye color, hair color, and IQ. Inside the station, every surface was wrapped with more of its ads—babies on turnstiles, on staircases, on banners overhead. “Think about it. Makeup and then genetic optimization,” exulted Kian Sadeghi, the 26-year-old founder of Nucleus Genomics, the startup running the ads. The day after the campaign launched, Sadeghi and I had briefly sparred online. He’d been on X showing off a phone app where parents can click through traits like eye color and hair color. I snapped back that all this sounded a lot like Uber Eats—another crappy, frictionless future invented by entrepreneurs, but this time you’d click for a baby. That night, I agreed to meet Sadeghi in the station under a banner that read, “IQ is 50% genetic.” Read on to see how Antonio’s conversation with Sadeghi went. This story first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 The metaverse’s future looks murkier than ever
OG believer Mark Zuckerberg is planning deep cuts to the division’s budget. (Bloomberg $)
+ However some of that money will be diverted toward smart glasses and wearables. (NYT $)
+ Meta just managed to poach one of Apple’s top design chiefs. (Bloomberg $)
2 Kids are effectively AI’s guinea pigs
And regulators are slowly starting to take note of the risks. (The Economist $)
+ You need to talk to your kid about AI. Here are 6 things you should say. (MIT Technology Review)
3 How a group of women changed UK law on non-consensual deepfakes
It’s a big victory, and they managed to secure it with stunning speed. (The Guardian)
+ But bans on deepfakes take us only so far—here’s what else we need. (MIT Technology Review)
+ An AI image generator startup just leaked a huge trove of nude images. (Wired $)
4 OpenAI is acquiring an AI model training startup
Its researchers have been impressed by the monitoring and de-bugging tools built by Neptune. (NBC)
+ It’s not just you: the speed of AI deal-making really is accelerating. (NYT $)
5 Russia has blocked Apple’s FaceTime video calling feature
It seems the Kremlin views any platform it doesn’t control as dangerous. (Reuters $)
+ How Russia killed its tech industry. (MIT Technology Review)
6 The trouble with AI browsers
This reviewer tested five of them and found them to be far more effort than they’re worth. (The Verge $)
+ AI means the end of internet search as we’ve known it. (MIT Technology Review)
7 An anti-AI activist has disappeared
Sam Kirchner went AWOL after failing to show up at a scheduled court hearing, and friends are worried. (The Atlantic $)
8 Taiwanese chip workers are creating a community in the Arizona desert
A TSMC project to build chip factories is rapidly transforming this corner of the US. (NYT $)
9 This hearing aid has become a status symbol
Rich people with hearing issues swear by a product made by startup Fortell. (Wired $)
+ Apple AirPods can be a gateway hearing aid. (MIT Technology Review)
10 A plane crashed after one of its 3D-printed parts melted 🛩️🫠
Just because you can do something, that doesn’t mean you should. (BBC)
Quote of the day
“Some people claim we can scale up current technology and get to general intelligence…I think that’s bullshit, if you’ll pardon my French.”
—AI researcher Yann LeCun explains why he’s leaving Meta to set up a world-model startup, Sifted reports.
One more thing
What to expect when you’re expecting an extra X or Y chromosome
Sex chromosome variations, in which people have a surplus or missing X or Y, occur in as many as one in 400 births. Yet the majority of people affected don’t even know they have them, because these conditions can fly under the radar.
As more expectant parents opt for noninvasive prenatal testing in hopes of ruling out serious conditions, many of them are surprised to discover instead that their fetus has a far less severe—but far less well-known—condition. And because so many sex chromosome variations have historically gone undiagnosed, many ob-gyns are not familiar with these conditions, leaving families to navigate the unexpected news on their own. Read the full story.

—Bonnie Rochman

We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ It’s never too early to start practicing your bûche de Noël skills for the holidays.
+ Brandi Carlile, you will always be famous.
+ What do bartenders get up to after finishing their Thanksgiving shift? It’s time to find out.
+ Pitchfork’s controversial list of the best albums of the year is here!

The ads that sell the sizzle of genetic trait discrimination
One day this fall, I watched an electronic sign outside the Broadway-Lafayette subway station in Manhattan switch seamlessly between an ad for makeup and one promoting the website Pickyourbaby.com, which promises a way for potential parents to use genetic tests to influence their baby’s traits, including eye color, hair color, and IQ. Inside the station, every surface was wrapped with more of its ads—babies on turnstiles, on staircases, on banners overhead. “Think about it. Makeup and then genetic optimization,” exulted Kian Sadeghi, the 26-year-old founder of Nucleus Genomics, the startup running the ads. To his mind, one should be as accessible as the other. Nucleus is a young, attention-seeking genetic software company that says it can analyze genetic tests on IVF embryos to score them for 2,000 traits and disease risks, letting parents pick some and reject others. This is possible because of how our DNA shapes us, sometimes powerfully. As one of the subway banners reminded the New York riders: “Height is 80% genetic.” The day after the campaign launched, Sadeghi and I had briefly sparred online. He’d been on X showing off a phone app where parents can click through traits like eye color and hair color. I snapped back that all this sounded a lot like Uber Eats—another crappy, frictionless future invented by entrepreneurs, but this time you’d click for a baby.
I agreed to meet Sadeghi that night in the station under a banner that read, “IQ is 50% genetic.” He appeared in a puffer jacket and told me the campaign would soon spread to 1,000 train cars. Not long ago, this was a secretive technology to whisper about at Silicon Valley dinner parties. But now? “Look at the stairs. The entire subway is genetic optimization. We’re bringing it mainstream,” he said. “I mean, like, we are normalizing it, right?” Normalizing what, exactly? The ability to choose embryos on the basis of predicted traits could lead to healthier people. But the traits mentioned in the subway—height and IQ—focus the public’s mind toward cosmetic choices and even naked discrimination. “I think people are going to read this and start realizing: Wow, it is now an option that I can pick. I can have a taller, smarter, healthier baby,” says Sadeghi.
Entrepreneur Kian Sadeghi stands under an advertising banner in the Broadway-Lafayette subway station in Manhattan, part of a campaign called “Have Your Best Baby.” COURTESY OF THE AUTHOR

Nucleus got its seed funding from Founders Fund, an investment firm known for its love of contrarian bets. And embryo scoring fits right in—it’s an unpopular concept, and professional groups say the genetic predictions aren’t reliable. So far, leading IVF clinics still refuse to offer these tests. Doctors worry, among other things, that they’ll create unrealistic parental expectations. What if little Johnny doesn’t do as well on the SAT as his embryo score predicted? The ad blitz is a way to end-run such gatekeepers: If a clinic won’t agree to order the test, would-be parents can take their business elsewhere. Another embryo testing company, Orchid, notes that high consumer demand emboldened Uber’s early incursions into regulated taxi markets. “Doctors are essentially being shoved in the direction of using it, not because they want to, but because they will lose patients if they don’t,” Orchid founder Noor Siddiqui said during an online event this past August.

Sadeghi prefers to compare his startup to Airbnb. He hopes it can link customers to clinics, becoming a digital “funnel” offering a “better experience” for everyone. He notes that Nucleus ads don’t mention DNA or any details of how the scoring technique works. That’s not the point. In advertising, you sell the sizzle, not the steak. And in Nucleus’s ad copy, what sizzles is height, smarts, and light-colored eyes. It makes you wonder if the ads should be permitted. Indeed, I learned from Sadeghi that the Metropolitan Transportation Authority had objected to parts of the campaign. The metro agency, for instance, did not let Nucleus run ads saying “Have a girl” and “Have a boy,” even though it’s very easy to identify the sex of an embryo using a genetic test. The reason was an MTA policy that forbids using government-owned infrastructure to promote “invidious discrimination” against protected classes, which include race, religion, and biological sex. Since 2023, New York City has also included height and weight in its anti-discrimination law, the idea being to “root out bias” related to body size in housing and in public spaces. So I’m not sure why the MTA let Nucleus declare that height is 80% genetic. (The MTA advertising department didn’t respond to questions.) Perhaps it’s because the statement is a factual claim, not an explicit call to action. But we all know what to do: Pick the tall one and leave shorty in the IVF freezer, never to be born.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The era of AI persuasion in elections is about to begin
In January 2024, the phone rang in homes all around New Hampshire. On the other end was Joe Biden’s voice, urging Democrats to “save your vote” by skipping the primary. It sounded authentic, but it wasn’t. The call was a fake, generated by artificial intelligence. Today, the technology behind that hoax looks quaint. Tools like OpenAI’s Sora now make it possible to create convincing synthetic videos with astonishing ease. AI can be used to fabricate messages from politicians and celebrities—even entire news clips—in minutes. The fear that elections could be overwhelmed by realistic fake media has gone mainstream—and for good reason. But that’s only half the story. The deeper threat isn’t that AI can just imitate people—it’s that it can actively persuade people. And new research published this week shows just how powerful that persuasion can be. In two large peer-reviewed studies, AI chatbots shifted voters’ views by a substantial margin, far more than traditional political advertising tends to do. In the coming years, we will see the rise of AI that can personalize arguments, test what works, and quietly reshape political views at scale. That shift—from imitation to active persuasion—should worry us deeply.
The challenge is that modern AI doesn’t just copy voices or faces; it holds conversations, reads emotions, and tailors its tone to persuade. And it can now command other AIs—directing image, video, and voice models to generate the most convincing content for each target. Putting these pieces together, it’s not hard to imagine how one could build a coordinated persuasion machine. One AI might write the message, another could create the visuals, another could distribute it across platforms and watch what works. No humans required. A decade ago, mounting an effective online influence campaign typically meant deploying armies of people running fake accounts and meme farms. Now that kind of work can be automated—cheaply and invisibly. The same technology that powers customer service bots and tutoring apps can be repurposed to nudge political opinions or amplify a government’s preferred narrative. And the persuasion doesn’t have to be confined to ads or robocalls. It can be woven into the tools people already use every day—social media feeds, language learning apps, dating platforms, or even voice assistants built and sold by parties trying to influence the American public. That kind of influence could come from malicious actors using the APIs of popular AI tools people already rely on, or from entirely new apps built with the persuasion baked in from the start.
And it’s affordable. For less than a million dollars, anyone can generate personalized, conversational messages for every registered voter in America. The math isn’t complicated. Assume 10 brief exchanges per person—around 2,700 tokens of text—and price them at current rates for ChatGPT’s API. Even with a population of 174 million registered voters, the total still comes in under $1 million. The 80,000 swing voters who decided the 2016 election could be targeted for less than $3,000. Although this is a challenge in elections across the world, the stakes for the United States are especially high, given the scale of its elections and the attention they attract from foreign actors. If the US doesn’t move fast, the next presidential election in 2028, or even the midterms in 2026, could be won by whoever automates persuasion first.

The 2028 threat
While there have been indications that the threat AI poses to elections is overblown, a growing body of research suggests the situation could be changing. Recent studies have shown that GPT-4 can exceed the persuasive capabilities of communications experts when generating statements on polarizing US political topics, and it is more persuasive than non-expert humans two-thirds of the time when debating real voters. Two major studies published yesterday extend those findings to real election contexts in the United States, Canada, Poland, and the United Kingdom, showing that brief chatbot conversations can move voters’ attitudes by up to 10 percentage points, with US participants’ opinions shifting nearly four times more than they did in response to tested 2016 and 2020 political ads. And when models were explicitly optimized for persuasion, the shift soared to 25 percentage points—an almost unfathomable difference.

Modern large language models, once confined to well-resourced companies, are becoming increasingly easy to use. Major AI providers like OpenAI, Anthropic, and Google wrap their frontier models in usage policies, automated safety filters, and account-level monitoring, and they do sometimes suspend users who violate those rules. But those restrictions apply only to traffic that goes through their platforms; they don’t extend to the rapidly growing ecosystem of open-source and open-weight models, which can be downloaded by anyone with an internet connection. Though they’re usually smaller and less capable than their commercial counterparts, research has shown that, with careful prompting and fine-tuning, these models can match the performance of leading commercial systems. All this means that actors, whether well-resourced organizations or grassroots collectives, have a clear path to deploying politically persuasive AI at scale.

Early demonstrations have already occurred elsewhere in the world. In India’s 2024 general election, tens of millions of dollars were reportedly spent on AI to segment voters, identify swing voters, deliver personalized messaging through robocalls and chatbots, and more. In Taiwan, officials and researchers have documented China-linked operations using generative AI to produce more subtle disinformation, ranging from deepfakes to language model outputs that are biased toward messaging approved by the Chinese Communist Party. It’s only a matter of time before this technology comes to US elections—if it hasn’t already. Foreign adversaries are well positioned to move first. China, Russia, Iran, and others already maintain networks of troll farms, bot accounts, and covert influence operators.
Paired with open-source language models that generate fluent and localized political content, those operations can be supercharged. In fact, there is no longer a need for human operators who understand the language or the context. With light tuning, a model can impersonate a neighborhood organizer, a union rep, or a disaffected parent without a person ever setting foot in the country. Political campaigns themselves will likely be close behind. Every major operation already segments voters, tests messages, and optimizes delivery. AI lowers the cost of doing all that. Instead of poll-testing a slogan, a campaign can generate hundreds of arguments, deliver them one on one, and watch in real time which ones shift opinions. The underlying fact is simple: Persuasion has become effective and cheap. Campaigns, PACs, foreign actors, advocacy groups, and opportunists are all playing on the same field—and there are very few rules.
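As a rough check on the cost arithmetic cited earlier in this piece, here is a minimal back-of-the-envelope sketch in Python. The per-token price is an assumption (a blended rate of roughly $2 per million tokens for a low-cost commercial model), not a quote from any specific provider; the token count per voter follows the article's estimate of about 10 brief exchanges.

    # Back-of-the-envelope cost of AI-personalized voter outreach (illustrative only).
    # Assumptions: ~2,700 tokens per voter (about 10 brief exchanges, per the article)
    # and an assumed blended price of $2.00 per million tokens; real API pricing varies.
    registered_voters = 174_000_000
    swing_voters_2016 = 80_000
    tokens_per_voter = 2_700
    price_per_million_tokens = 2.00

    def campaign_cost(people: int) -> float:
        """Total spend for one short, personalized conversation with each person."""
        total_tokens = people * tokens_per_voter
        return total_tokens / 1_000_000 * price_per_million_tokens

    print(f"All registered voters: ${campaign_cost(registered_voters):,.0f}")  # ~$939,600
    print(f"2016 swing voters:     ${campaign_cost(swing_voters_2016):,.0f}")  # ~$432

Under these assumptions the totals land at roughly $940,000 and $430, consistent with the under-$1-million and under-$3,000 figures above.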
The policy vacuum
Most policymakers have not caught up. Over the past several years, legislators in the US have focused on deepfakes but have ignored the wider persuasive threat. Foreign governments have begun to take the problem more seriously. The European Union’s 2024 AI Act classifies election-related persuasion as a “high-risk” use case. Any system designed to influence voting behavior is now subject to strict requirements. Administrative tools, like AI systems used to plan campaign events or optimize logistics, are exempt. However, tools that aim to shape political beliefs or voting decisions are not.

By contrast, the United States has so far refused to draw any meaningful lines. There are no binding rules about what constitutes a political influence operation, no external standards to guide enforcement, and no shared infrastructure for tracking AI-generated persuasion across platforms. The federal and state governments have gestured toward regulation—the Federal Election Commission is applying old fraud provisions, the Federal Communications Commission has proposed narrow disclosure rules for broadcast ads, and a handful of states have passed deepfake laws—but these efforts are piecemeal and leave most digital campaigning untouched. In practice, the responsibility for detecting and dismantling covert campaigns has been left almost entirely to private companies, each with its own rules, incentives, and blind spots. Google and Meta have adopted policies requiring disclosure when political ads are generated using AI. X has remained largely silent on this, while TikTok bans all paid political advertising. However, these rules, modest as they are, cover only the sliver of content that is bought and publicly displayed. They say almost nothing about the unpaid, private persuasion campaigns that may matter most. To their credit, some firms have begun publishing periodic threat reports identifying covert influence campaigns. Anthropic, OpenAI, Meta, and Google have all disclosed takedowns of inauthentic accounts. However, these efforts are voluntary and not subject to independent auditing. Most important, none of this prevents determined actors from bypassing platform restrictions altogether with open-source models and off-platform infrastructure.

What a real strategy would look like
The United States does not need to ban AI from political life. Some applications may even strengthen democracy. A well-designed candidate chatbot could help voters understand where the candidate stands on key issues, answer questions directly, or translate complex policy into plain language. Research has even shown that AI can reduce belief in conspiracy theories. Still, there are a few things the United States should do to protect against the threat of AI persuasion. First, it must guard against foreign-made political technology with built-in persuasion capabilities. Adversarial political technology could take the form of a foreign-produced video game where in-game characters echo political talking points, a social media platform whose recommendation algorithm tilts toward certain narratives, or a language learning app that slips subtle messages into daily lessons. Evaluations, such as the Center for AI Standards and Innovation’s recent analysis of DeepSeek, should focus on identifying and assessing AI products—particularly from countries like China, Russia, or Iran—before they are widely deployed.
This effort would require coordination among intelligence agencies, regulators, and platforms to spot and address risks. Second, the United States should lead in shaping the rules around AI-driven persuasion. That includes tightening access to computing power for large-scale foreign persuasion efforts, since many actors will either rent existing models or lease the GPU capacity to train their own. It also means establishing clear technical standards—through governments, standards bodies, and voluntary industry commitments—for how AI systems capable of generating political content should operate, especially during sensitive election periods. And domestically, the United States needs to determine what kinds of disclosures should apply to AI-generated political messaging while navigating First Amendment concerns.
Finally, foreign adversaries will try to evade these safeguards—using offshore servers, open-source models, or intermediaries in third countries. That is why the United States also needs a foreign policy response. Multilateral election integrity agreements should codify a basic norm: States that deploy AI systems to manipulate another country’s electorate risk coordinated sanctions and public exposure. Doing so will likely involve building shared monitoring infrastructure, aligning disclosure and provenance standards, and being prepared to conduct coordinated takedowns of cross-border persuasion campaigns—because many of these operations are already moving into opaque spaces where our current detection tools are weak. The US should also push to make election manipulation part of the broader agenda at forums like the G7 and OECD, ensuring that threats related to AI persuasion are treated not as isolated tech problems but as collective security challenges.
Indeed, the task of securing elections cannot fall to the United States alone. A functioning radar system for AI persuasion will require cooperation with partners and allies. Influence campaigns are rarely confined by borders, and open-source models and offshore servers will always exist. The goal is not to eliminate them but to raise the cost of misuse and shrink the window in which they can operate undetected across jurisdictions. The era of AI persuasion is just around the corner, and America’s adversaries are prepared. In the US, on the other hand, the laws are out of date, the guardrails too narrow, and the oversight largely voluntary. If the last decade was shaped by viral lies and doctored videos, the next will be shaped by a subtler force: messages that sound reasonable, familiar, and just persuasive enough to change hearts and minds. For China, Russia, Iran, and others, exploiting America’s open information ecosystem is a strategic opportunity. We need a strategy that treats AI persuasion not as a distant threat but as a present fact. That means soberly assessing the risks to democratic discourse, putting real standards in place, and building a technical and legal infrastructure around them. Because if we wait until we can see it happening, it will already be too late.

Tal Feldman is a JD candidate at Yale Law School who focuses on technology and national security. Before law school, he built AI models across the federal government and was a Schwarzman and Truman scholar. Aneesh Pappu is a PhD student and Knight-Hennessy scholar at Stanford University who focuses on agentic AI and technology policy. Before Stanford, he was a privacy and security researcher at Google DeepMind and a Marshall scholar.

AI chatbots can sway voters better than political advertisements
In 2024, a Democratic congressional candidate in Pennsylvania, Shamaine Daniels, used an AI chatbot named Ashley to call voters and carry on conversations with them. “Hello. My name is Ashley, and I’m an artificial intelligence volunteer for Shamaine Daniels’s run for Congress,” the calls began. Daniels didn’t ultimately win. But maybe those calls helped her cause: New research reveals that AI chatbots can shift voters’ opinions in a single conversation—and they’re surprisingly good at it. A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing party. The chatbots swayed opinions by citing facts and evidence, but they were not always accurate—in fact, the researchers found, the most persuasive models said the most untrue things. The findings, detailed in a pair of studies published in the journals Nature and Science, are the latest in an emerging body of research demonstrating the persuasive power of LLMs. They raise profound questions about how generative AI could reshape elections. “One conversation with an LLM has a pretty meaningful effect on salient election choices,” says Gordon Pennycook, a psychologist at Cornell University who worked on the Nature study. LLMs can persuade people more effectively than political advertisements because they generate much more information in real time and strategically deploy it in conversations, he says.
For the Nature paper, the researchers recruited more than 2,300 participants to engage in a conversation with a chatbot two months before the 2024 US presidential election. The chatbot, which was trained to advocate for either one of the top two candidates, was surprisingly persuasive, especially when discussing candidates’ policy platforms on issues such as the economy and health care. Donald Trump supporters who chatted with an AI model favoring Kamala Harris became slightly more inclined to support Harris, moving 3.9 points toward her on a 100-point scale. That was roughly four times the measured effect of political advertisements during the 2016 and 2020 elections. The AI model favoring Trump moved Harris supporters 2.3 points toward Trump. In similar experiments conducted during the lead-ups to the 2025 Canadian federal election and the 2025 Polish presidential election, the team found an even larger effect. The chatbots shifted opposition voters’ attitudes by about 10 points.
Long-standing theories of politically motivated reasoning hold that partisan voters are impervious to facts and evidence that contradict their beliefs. But the researchers found that the chatbots, which used a range of models including variants of GPT and DeepSeek, were more persuasive when they were instructed to use facts and evidence than when they were told not to do so. “People are updating on the basis of the facts and information that the model is providing to them,” says Thomas Costello, a psychologist at American University, who worked on the project. The catch is, some of the “evidence” and “facts” the chatbots presented were untrue. Across all three countries, chatbots advocating for right-leaning candidates made a larger number of inaccurate claims than those advocating for left-leaning candidates. The underlying models are trained on vast amounts of human-written text, which means they reproduce real-world phenomena—including “political communication that comes from the right, which tends to be less accurate,” according to studies of partisan social media posts, says Costello. In the other study published this week, in Science, an overlapping team of researchers investigated what makes these chatbots so persuasive. They deployed 19 LLMs to interact with nearly 77,000 participants from the UK on more than 700 political issues while varying factors like computational power, training techniques, and rhetorical strategies. The most effective way to make the models persuasive was to instruct them to pack their arguments with facts and evidence and then give them additional training by feeding them examples of persuasive conversations. In fact, the most persuasive model shifted participants who initially disagreed with a political statement 26.1 points toward agreeing. “These are really large treatment effects,” says Kobi Hackenburg, a research scientist at the UK AI Security Institute, who worked on the project. But optimizing persuasiveness came at the cost of truthfulness. When the models became more persuasive, they increasingly provided misleading or false information—and no one is sure why. “It could be that as the models learn to deploy more and more facts, they essentially reach to the bottom of the barrel of stuff they know, so the facts get worse-quality,” says Hackenburg. The chatbots’ persuasive power could have profound consequences for the future of democracy, the authors note. Political campaigns that use AI chatbots could shape public opinion in ways that compromise voters’ ability to make independent political judgments. Still, the exact contours of the impact remain to be seen. “We’re not sure what future campaigns might look like and how they might incorporate these kinds of technologies,” says Andy Guess, a political scientist at Princeton University. Competing for voters’ attention is expensive and difficult, and getting them to engage in long political conversations with chatbots might be challenging. “Is this going to be the way that people inform themselves about politics, or is this going to be more of a niche activity?” he asks. Even if chatbots do become a bigger part of elections, it’s not clear whether they’ll do more to amplify truth or fiction. Usually, misinformation has an informational advantage in a campaign, so the emergence of electioneering AIs “might mean we’re headed for a disaster,” says Alex Coppock, a political scientist at Northwestern University. 
“But it’s also possible that means that now, correct information will also be scalable.” And then the question is who will have the upper hand. “If everybody has their chatbots running around in the wild, does that mean that we’ll just persuade ourselves to a draw?” Coppock asks. But there are reasons to doubt a stalemate. Politicians’ access to the most persuasive models may not be evenly distributed. And voters across the political spectrum may have different levels of engagement with chatbots. “If supporters of one candidate or party are more tech savvy than the other,” the persuasive impacts might not balance out, says Guess. As people turn to AI to help them navigate their lives, they may also start asking chatbots for voting advice whether campaigns prompt the interaction or not. That may be a troubling world for democracy, unless there are strong guardrails to keep the systems in check. Auditing and documenting the accuracy of LLM outputs in conversations about politics may be a first step.
Engineering more resilient crops for a warming climate
Scientists are using AlphaFold in their research to strengthen an enzyme that’s vital to photosynthesis, paving the way for more heat-tolerant crops.

As global warming brings more droughts and heat waves, harvests of some staple crops are shrinking. But less visible is what is happening inside these plants, where high heat can break down the molecular machinery that keeps them alive. At the heart of that machinery lies a sun-powered process that supports virtually all life on Earth: photosynthesis. Plants use photosynthesis to produce the glucose that fuels their growth via an intricate choreography of enzymes inside plant cells. As global temperatures rise, that choreography can falter.

Berkley Walker, an associate professor at Michigan State University, spends his days thinking about how to keep that choreography in step. “Nature already holds the blueprints for lots of enzymes that can handle heat,” he says. “Our job is to learn from those examples and build that same resilience into the crops we depend on.”

Walker’s lab focuses on glycerate kinase (GLYK), a vital enzyme that helps plants recycle carbon during photosynthesis. One hypothesis is that, if it gets too hot, GLYK stops working, and photosynthesis fails. Walker’s team set out to understand why. Because the structure of GLYK has never been determined experimentally, they turned to AlphaFold to predict its 3D shape, not only in plants but also in a heat-loving alga that thrives in volcanic hot springs. By taking AlphaFold’s predicted shapes and plugging them into sophisticated molecular simulations, the researchers could watch as these enzymes flexed and twisted as the temperature rose. That’s when the problem came into focus: three flexible loops in the plant version of GLYK wobbled out of shape at high heat. Experiments alone could never deliver such insights, says Walker: “AlphaFold enabled access to experimentally unavailable enzyme structures and helped us identify key sections for modification.”

Armed with this knowledge, the researchers in Walker’s lab made a series of hybrid enzymes that replaced the unstable loops in the plant GLYK with more rigid ones borrowed from the algal GLYK. One of these performed spectacularly, remaining stable at temperatures up to 65 °C.
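To make the loop-swapping idea concrete at the sequence level, here is a minimal, purely illustrative sketch in Python. The sequences and residue ranges below are invented placeholders, not the actual GLYK coordinates identified in Walker's study, and the sketch assumes the plant and algal sequences have already been aligned to a common numbering.

    # Illustrative sketch of building a chimeric enzyme by grafting donor loops
    # into a host sequence. Sequences and loop positions here are hypothetical.

    def graft_loops(host_seq: str, donor_seq: str, loop_ranges: list[tuple[int, int]]) -> str:
        """Replace each (start, end) slice of the host with the aligned donor residues."""
        chimera = list(host_seq)
        for start, end in loop_ranges:
            chimera[start:end] = donor_seq[start:end]
        return "".join(chimera)

    # Toy 60-residue sequences standing in for plant GLYK (host) and the
    # thermotolerant algal GLYK (donor); the three ranges mimic the idea of
    # three flexible loops being swapped for more rigid ones.
    plant_glyk = "M" + "A" * 59
    algal_glyk = "M" + "G" * 59
    flexible_loops = [(10, 18), (25, 33), (44, 52)]

    hybrid = graft_loops(plant_glyk, algal_glyk, flexible_loops)
    print(hybrid)

In the actual workflow described above, the candidate loop boundaries would come from the AlphaFold models and molecular simulations, and each resulting chimera would then be expressed and tested for thermal stability in the lab.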

Data Center Pushback Watch: Community Opposition and Regulatory Challenges in Data Center Expansion – Q4 2025
DTE argues that the system can absorb the additional load without raising customer rates. Yet that confidence rests on a series of assumptions: from how many substations, transformers, and transmission segments must be upgraded, to how winter demand spikes are modeled, to whether projected load materializes at full scale. Minimum revenue guarantees soften downside risk, but they don’t erase it. A forthcoming FERC rulemaking could further reshape how utilities evaluate, and recover costs from, AI-scale interconnection requests.

Bloomberg: Amazon’s Footprint is Wider — and More Distributed — than Public Perception Suggests
Reporting from Bloomberg and SourceMaterial, based on internal documents, indicates that Amazon’s cloud infrastructure stretches far beyond its well-known hyperscale campuses. AWS is operating more than 900 facilities across 50+ countries, a count that includes not only owned or leased mega-sites, but also a vast network of colocation deployments where Amazon rents power and space from third-party operators. Bloomberg highlights that this colocation layer, estimated at roughly one-fifth of AWS compute capacity as of last year, has been a key lever in Amazon’s ability to scale rapidly and globally. While AWS does not publicly break out those numbers, the reporting suggests Amazon’s architecture is far more distributed than generally assumed. Public narratives often center on the flagship campuses in Northern Virginia, Oregon, or Ohio; the documents instead portray a hub-and-spoke model shaped by hundreds of partner facilities. The timing is notable. As AI infrastructure demand accelerates for training clusters, inference fleets, and low-latency edge access, Amazon appears to be leaning into a topology designed for reach and speed, not just scale in a handful of metros.

Springdale Borough: Approval at the Planning Stage, but Residents Push Back
Outside Pittsburgh, Springdale Borough is facing a growing fight over a proposed 565,000-square-foot AI-focused data center planned for the former Cheswick Generating

Building the Thermal Backbone of AI: Tracking the Latest Data Center Liquid Cooling Deals and Deployments
Over the past three years, we’ve tracked how liquid cooling has moved from the margins of the white space to the critical path of AI data center design. And over the past quarter, the deals and product launches crossing Data Center Frontier’s radar all point in the same direction: liquid is becoming the organizing principle for how operators think about power, density, and risk. From OEMs and HVAC majors buying their way deeper into liquid-to-chip, to capital flowing into microfluidic cold plates and high-efficiency chillers, to immersion systems pushing out to the edge and even into battery storage, the thermal stack around AI is being rebuilt in real time. What’s striking in this latest wave of announcements is not just the technology, but the scale and specificity: 300MW two-phase campuses, 10MW liquid-to-chip AI halls at cable landing stations, stainless-steel chillers designed to eliminate in-row CDUs. Here’s a look at the most consequential recent moves shaping the next phase of data center liquid cooling.

Trane Technologies to Acquire Stellar Energy Digital: Buying a Liquid-to-Chip Platform
Trane Technologies is making a decisive move up the liquid cooling stack with its just-announced agreement to acquire the Stellar Energy Digital business, a Jacksonville-based specialist in turnkey liquid-to-chip cooling plants and coolant distribution units. Stellar Energy’s Digital business — roughly 700 employees and two Jacksonville assembly operations — designs and builds modular cooling plants, central utility plants and CDUs for liquid-cooled data centers and other complex enterprise environments. Trane is clearly buying more than incremental capacity; it’s acquiring a platform that’s already oriented around prefab, AI-era deployments. Karin De Bondt, Trane’s Chief Strategy Officer, framed the deal squarely around the shift DCF has been tracking all year: data center customers want repeatable, modular systems they can deploy at speed. “The data center ecosystem is growing

Inside Anthropic’s Multi-Cloud AI Factory: How AWS Trainium and Google TPUs Shape Its Next Phase
A Massive TPU Commitment—and a Strategic Signal From Google
This is not a casual “we spun up a few pods on GCP.” Anthropic is effectively reserving a substantial share of Google’s future TPU capacity and tying that scale directly into Google Cloud’s enterprise AI go-to-market. The compute commitment has been described as being worth tens of billions of dollars over the life of the agreement, signaling Google’s intention to anchor Anthropic as a marquee external TPU customer.

Google’s Investment Outlook: Still Early, But Potentially Transformational
Reports from Reuters (citing Business Insider) on November 6 indicate that Google is exploring whether to deepen its financial investment in Anthropic. The discussions are early and non-binding, but the structures under consideration (including convertible notes or a new priced round paired with additional TPU and cloud commitments) suggest a valuation that could exceed $350 billion. Google has not commented publicly. This would come on top of Alphabet’s existing position, reported at more than $3 billion invested and 14% ownership, and follows Anthropic’s September 2025 $13 billion Series F, which valued the company at $183 billion. None of these prospective terms are final, but the direction of travel is clear: Google is looking to bind TPU productization, cloud consumption, and strategic alignment more tightly together.

Anthropic’s Multi-Cloud Architecture Reaches Million-Accelerator Scale
Anthropic’s strategy depends on not being bound to a single hyperscaler or silicon roadmap. In November 2024, the company named AWS its primary cloud and training partner, a deal that brought Amazon’s total investment to $8 billion and committed Anthropic to Trainium for its largest models. The two companies recently activated Project Rainier, an AI supercomputer cluster in Indiana with roughly 500,000 Trainium2 chips, with Anthropic expected to scale to more than 1 million Trainium2 chips on AWS by the end of 2025. The Google Cloud

Data Center Jobs: Engineering, Construction, Commissioning, Sales, Field Service and Facility Tech Jobs Available in Major Data Center Hotspots
Each month Data Center Frontier, in partnership with Pkaza, posts some of the hottest data center career opportunities in the market. Here’s a look at some of the latest data center jobs posted on the Data Center Frontier jobs board, powered by Pkaza Critical Facilities Recruiting. Looking for data center candidates? Check out Pkaza’s Active Candidate / Featured Candidate Hotlist.

Data Center Facility Technician (All Shifts Available)
Impact, TX
This position is also available in: Ashburn, VA; Abilene, TX; Needham, MA; and New York, NY. Navy Nukes / military vets leaving service accepted! This opportunity is with a leading mission-critical data center provider. The firm delivers data center solutions custom-fit to the requirements of its clients’ mission-critical operational facilities, ensuring reliability for many of the world’s largest organizations and supporting enterprise clients, colo providers, and hyperscale companies. This career-growth-minded role offers exciting projects with leading-edge technology and innovation, as well as competitive salaries and benefits.

Electrical Commissioning Engineer
Montvale, NJ
This traveling position is also available in: New York, NY; White Plains, NY; Richmond, VA; Ashburn, VA; Charlotte, NC; Atlanta, GA; Hampton, GA; Fayetteville, GA; New Albany, OH; Cedar Rapids, IA; Phoenix, AZ; Salt Lake City, UT; Dallas, TX; or Chicago, IL. *** ALSO looking for LEAD EE and ME CxA Agents and CxA PMs. *** Our client is an engineering design and commissioning company with a national footprint that specializes in MEP critical facilities design. They provide design, commissioning, consulting, and management expertise in the critical facilities space, with a focus on reliability, energy efficiency, sustainable design, and LEED expertise for enterprise, colocation, and hyperscale companies. This career-growth-minded opportunity offers exciting projects with leading-edge technology and innovation as well as competitive salaries and

Flex’s Integrated Data Center Bet: How a Manufacturing Giant Plans to Reshape AI-Scale Infrastructure
At this year’s OCP Global Summit, Flex made a declaration that resonated across the industry: the era of slow, bespoke data center construction is over. AI isn’t just stressing the grid or forcing new cooling techniques—it’s overwhelming the entire design-build process. To meet this moment, Flex introduced a globally manufactured, fully integrated data center platform aimed directly at multi-gigawatt AI campuses. The company claims it can cut deployment timelines by as much as 30 percent by shifting integration upstream into the factory and unifying power, cooling, compute, and lifecycle services into pre-engineered modules. This is not a repositioning on the margins. Flex is effectively asserting that the future hyperscale data center will be manufactured like a complex industrial system, not built like a construction project. On the latest episode of The Data Center Frontier Show, we spoke with Rob Campbell, President of Flex Communications, Enterprise & Cloud, and Chris Butler, President of Flex Power, about why Flex believes this new approach is not only viable but necessary in the age of AI. The discussion revealed a company leaning heavily on its global manufacturing footprint, its cross-industry experience, and its expanding cooling and power technology stack to redefine what deployment speed and integration can look like at scale.

AI Has Broken the Old Data Center Model
From the outset, Campbell and Butler made clear that Flex’s strategy is a response to a structural shift. AI workloads no longer allow power, cooling, and compute to evolve independently. Densities have jumped so quickly—and thermals have risen so sharply—that the white space, gray space, and power yard are now interdependent engineering challenges. Higher chip TDPs, liquid-cooled racks approaching one to two megawatts, and the need to assemble entire campuses in record time have revealed deep fragility in traditional workflows. As Butler put it, AI

The Future of Hyperscale: Neoverse Joins NVLink Fusion as SC25 Accelerates Rack-Scale AI Architectures
Neoverse’s Expanding Footprint and the Power-Efficiency Imperative
With Neoverse deployments now approaching roughly 50% of all compute shipped into top hyperscalers in 2025 (representing more than a billion Arm cores) and with nation-scale AI campuses such as the Stargate project already anchored on Arm compute, the addition of NVLink Fusion becomes a pivotal extension of the Neoverse roadmap. Partners can now connect custom Arm CPUs to their preferred NVIDIA accelerators across a coherent, high-bandwidth, rack-scale fabric. Arm characterized the shift as a generational inflection point in data-center architecture, noting that “power—not FLOPs—is the bottleneck,” and that future design priorities hinge on maximizing “intelligence per watt.” Ian Buck, vice president and general manager of accelerated computing at NVIDIA, underscored the practical impact: “Folks building their own Arm CPU, or using an Arm IP, can actually have access to NVLink Fusion—be able to connect that Arm CPU to an NVIDIA GPU or to the rest of the NVLink ecosystem—and that’s happening at the racks and scale-up infrastructure.” Despite the expanded design flexibility, this is not being positioned as an open interconnect ecosystem. NVIDIA continues to control the NVLink Fusion fabric, and all connections ultimately run through NVIDIA’s architecture. For data-center planners, the SC25 announcement translates into several concrete implications:

1. NVIDIA “Grace-style” Racks Without Buying Grace
With NVLink Fusion now baked into Neoverse, hyperscalers and sovereign operators can design their own Arm-based control-plane or pre-processing CPUs that attach coherently to NVIDIA GPU domains—such as NVL72 racks or HGX B200/B300 systems—without relying on Grace CPUs. A rack-level architecture might now resemble:
- Custom Neoverse SoC for ingest, orchestration, agent logic, and pre/post-processing
- NVLink Fusion fabric
- Blackwell GPU islands and/or NVLink-attached custom accelerators (Marvell, MediaTek, others)
This decouples CPU choice from NVIDIA’s GPU roadmap while retaining the full NVLink fabric. In practice, it also opens
Stay Ahead with the Paperboy Newsletter
Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on one week of news.