Your Gateway to Power, Energy, Datacenters, Bitcoin and AI
Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.
Discover What Matters Most to You

AI

Bitcoin

Datacenter

Energy
Featured Articles

Oil Gains on EU Sanctions Pressure
Oil rose as hawkish rhetoric by the European Union’s top diplomat raised expectations that sanctions on Russia will tighten. West Texas Intermediate gained 1.4% to settle under $61 a barrel on Tuesday, pushing higher after the EU’s Kaja Kallas said that Moscow’s aggression against the bloc should be considered terrorism.

The comments come against a backdrop of surging diesel-market tightness, in which Russia is a significant player, with the difference between the two nearest ICE gasoil contracts surging Tuesday. Futures for the European benchmark rose by 4.5%. Adding to bullish sentiment, the price of Russia’s flagship crude plunged to the lowest in over two years, with just days to go before US sanctions take effect against giant producers Rosneft PJSC and Lukoil PJSC over Moscow’s ongoing war in Ukraine.

Other impacts are emerging. Major Asian buyers paused at least some purchases. And traders making physical deliveries of diesel under ICE Gasoil contracts will soon be banned from supplying barrels made with Russian crude in third countries, ICE said.

Still, US benchmark futures are down this year as expectations for a glut weigh on the outlook, with the International Energy Agency forecasting a record surplus in 2026. The oversupply is being driven by the return of idled output from OPEC and its allies, as well as more supplies from outside the group.

But for now, the price of WTI is staying stubbornly above the $60 mark, according to Frank Monkam, head of macro trading at Buffalo Bayou Commodities. “It’s going to take a lot more than just a repeat of the same negative headlines to push the market below $60 a barrel unless we see a total unraveling of risk assets, particularly equities,” he said. “Absent of that, I think we’re still very vulnerable to an upside push here in

Dell targets enterprise AI with switches, servers, reinforced Nvidia partnership
The company also introduced a new Intel-powered PowerEdge server, the R770AP, an air-cooled platform equipped with Intel Xeon 6 P-core 6900-series processors featuring high core counts, large caches and support for CXL memory expansion. The PowerEdge R770AP offers enhanced parallel processing, reduced memory latency and abundant PCIe lanes, enabling accelerated trading algorithms, scalable memory configurations and improved network performance.

Dell first announced its AI strategy last year with Nvidia as a partner, and it is now deepening that partnership by integrating Dell’s ObjectScale and PowerScale unstructured data storage systems with the Nvidia NIXL library. NIXL is part of Nvidia Dynamo, a distributed inference-serving framework built to deploy and automate AI models at scale. The integration enables scalable KV cache storage, reuse and sharing, achieving a 1-second Time to First Token (TTFT) at a full context window of 131K tokens – 19X faster than standard vLLM – while reducing infrastructure costs and easing GPU memory capacity bottlenecks.

The Dell Automation Platform, now integrated into the Dell AI Factory with Nvidia, enables more intelligent and automated operations by deploying secure, validated and optimized solutions. This approach delivers consistent, repeatable results and helps organizations realize the benefits of AI-driven applications powered by Nvidia’s accelerated computing.
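The TTFT gain described above comes from storing and reusing the KV cache for a previously seen context instead of recomputing the full prefill each time. A minimal sketch of that idea (toy functions and a dict-based store of my own invention, not Dell’s or Nvidia’s actual API):

```python
# Toy stand-in for transformer prefill: cost grows with context length.
def prefill(tokens):
    return {"kv": list(tokens)}  # pretend this is the computed KV cache

kv_store = {}  # shared KV cache store (ObjectScale/PowerScale play this role)

def first_token_work(context, session_id):
    """Return how many tokens must be processed before the first output token."""
    if session_id in kv_store:
        cache = kv_store[session_id]
        new_tokens = context[len(cache["kv"]):]
        cache["kv"].extend(new_tokens)   # only the unseen suffix is processed
        return len(new_tokens)
    kv_store[session_id] = prefill(context)
    return len(context)                  # cold start: full prefill

ctx = list(range(1000))
cold = first_token_work(ctx, "s1")            # full prefill
warm = first_token_work(ctx + [1000], "s1")   # cached: only 1 new token
print(cold, warm)  # 1000 1
```

In the real integration the store is backed by external storage rather than an in-process dict, but the saving is the same: only the unseen part of the context is computed before the first token.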

Energy Department Closes Loan to Restart Nuclear Power Plant in Pennsylvania
WASHINGTON—U.S. Secretary of Energy Chris Wright today announced the Department of Energy’s (DOE) Loan Programs Office (LPO) closed a loan to lower energy costs and restart a Pennsylvania nuclear power plant. The $1 billion loan to Constellation Energy Generation, LLC (Constellation) will help finance the Crane Clean Energy Center, an 835 MW plant located on the Susquehanna River in Londonderry Township, Pennsylvania.

Today’s announcement, funded by the Energy Dominance Financing (EDF) Program created under the Working Families Tax Cut, highlights the Energy Department’s role in advancing President Trump’s Executive Order, Reinvigorating the Nuclear Industrial Base, by supporting the restart of nuclear power plants.

“Thanks to President Trump’s bold leadership and the Working Families Tax Cut, the United States is taking unprecedented steps to lower energy costs and bring about the next American nuclear renaissance,” said Energy Secretary Wright. “Constellation’s restart of a nuclear power plant in Pennsylvania will provide affordable, reliable, and secure energy to Americans across the Mid-Atlantic region. It will also help ensure America has the energy it needs to grow its domestic manufacturing base and win the AI race.”

This announcement marks the first project to receive a concurrent conditional commitment and financial close under the Trump Administration. The loan will partially finance the restart of a reactor which ceased operations in 2019 but was never fully decommissioned. Once restarted, pending U.S. Nuclear Regulatory Commission licensing approvals, the 835 MW reactor will provide reliable and affordable baseload power to the PJM Interconnection region, powering the equivalent of approximately 800,000 homes.

The Crane Restart project will help lower electricity costs, strengthen grid reliability, create over 600 American jobs, and advance the Administration’s mission to lead in global AI innovation and restore domestic manufacturing industries.
DOE remains committed to fulfilling this mission to maximize the speed and scale of
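As a rough sanity check on the figures above, the "approximately 800,000 homes" claim implies an average load of about 1 kW per home, in line with typical US household consumption (the capacity factor below is my assumption, not from the announcement):

```python
# Back out the per-home average load implied by the announcement.
capacity_kw = 835 * 1000        # 835 MW reactor
capacity_factor = 0.92          # assumed; typical for US nuclear plants
homes = 800_000

avg_output_kw = capacity_kw * capacity_factor
implied_load_per_home_kw = avg_output_kw / homes
print(round(implied_load_per_home_kw, 2))  # 0.96
```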

Kentik bolsters network observability platform with autonomous investigation
“Advisor knows how to actually do networking things and can be more like a teammate,” Freedman explained. “It will go, reason, make a plan, use the different products, go look across the domains of telemetry and awareness, and say, ‘here’s what I think is going on, and here’s what you should do about it.’”

In practice, an engineer can now ask, “What might be causing this customer to be down?” and the system will autonomously check traffic volumes, review recent firewall changes, examine the timing of events, and identify whether a specific rule change correlates with the traffic drop. It presents findings with the underlying data and suggests specific remediation steps.

Data engine extensions for contextual analysis

The autonomous investigation capability required Kentik to extend its data platform beyond flow records and device metrics. The Kentik Data Engine processes approximately one trillion telemetry points daily from NetFlow, sFlow, device APIs, cloud provider APIs, and synthetic monitoring. But correlation analysis requires additional context that wasn’t previously captured. “We needed configs, which we didn’t have,” Freedman said. “We needed graph and topology, which we had, but in places.”

The company added configuration tracking, topology modeling, and relationship mapping to the platform. This allows the system to answer questions like whether a firewall rule change affected specific customer IP addresses or whether an IGP metric adjustment could have influenced routing decisions. The context layer connects time-series data with network state information. The underlying database architecture uses a columnar store for historical data and a streaming database for real-time analysis. Both use the same query language, which allows the system to correlate events across time windows without moving data between systems.

Foundation models and workflow training

Kentik uses commercial large language models (LLMs) rather than training its own from scratch.
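The correlation step Freedman describes (did a firewall rule change line up with a customer’s traffic drop?) can be sketched as a time-window join between config-change events and traffic samples. The record shapes, window, and threshold below are illustrative assumptions, not Kentik’s actual schema or query language:

```python
from datetime import datetime, timedelta

# Hypothetical event records: one config change and per-minute traffic samples.
config_changes = [
    {"device": "fw1", "rule": "deny 203.0.113.0/24", "ts": datetime(2025, 1, 1, 12, 0)},
]
traffic = [  # (timestamp, Mbps) for the affected customer prefix
    (datetime(2025, 1, 1, 11, 58), 950),
    (datetime(2025, 1, 1, 11, 59), 940),
    (datetime(2025, 1, 1, 12, 1), 12),
    (datetime(2025, 1, 1, 12, 2), 8),
]

def correlate(changes, samples, window=timedelta(minutes=5), drop_ratio=0.5):
    """Flag config changes followed by a large traffic drop within `window`."""
    findings = []
    for ch in changes:
        before = [v for t, v in samples if ch["ts"] - window <= t < ch["ts"]]
        after = [v for t, v in samples if ch["ts"] <= t <= ch["ts"] + window]
        if before and after:
            avg_b, avg_a = sum(before) / len(before), sum(after) / len(after)
            if avg_a < avg_b * drop_ratio:   # traffic fell past the threshold
                findings.append((ch["rule"], avg_b, avg_a))
    return findings

print(correlate(config_changes, traffic))  # [('deny 203.0.113.0/24', 945.0, 10.0)]
```

A production system would run this join against topology and relationship data to decide which prefixes a rule could even affect; the sketch only shows the temporal correlation.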

IBM readies commercially valuable quantum computer technology
But even at release, the system lets enterprises run longer quantum programs than before, with a wider variety of potential applications, says Crowder.

Another breakthrough is its error correction. Last year, IBM demonstrated that it can do error correction on classical computers quickly and cheaply enough to be practical, on off-the-shelf components, specifically an AMD FPGA chip, and that capability is now commercially available. “And we did it a year early,” added Crowder.

IBM now also has the technology to couple multiple quantum chips together into larger systems. “Nighthawk is going to be the chip that is the basis for building these systems,” says Crowder. By the end of 2026, Nighthawk’s new modular architecture and advanced error correction will allow it to support 7,500 gates, rising to 10,000 gates in 2027 and 15,000 in 2028.

The average company won’t see any immediate benefit, says Gartner’s Horvath. “But if you were in charge of a capital markets desk, you might use this,” he says. “If you were in charge of logistics, you might use this to optimize your routes.” And if you’re in charge of cybersecurity, it might be time to upgrade to quantum-safe encryption, he said. “You need to be on PQC [post-quantum cryptography] by 2030,” he says. “We think that commercial actors will have the ability to start breaking digital signatures by 2032 if the current timelines hold up.”

Atlantic LNG Freight Rates at Highest in Nearly 2 Years
The cost of transporting liquefied natural gas across the Atlantic Ocean surged to the highest in almost two years, as expanding exports from North America boosted demand for tankers. The spot rate to hire an LNG vessel for delivery from the US to Europe jumped 19 percent to $98,250 per day on Monday, the highest since January 2024, according to Spark Commodities, which tracks shipping prices. Costs to hire a tanker in the Pacific Ocean also jumped 15 percent to the highest in over a year, the data show.

This is a stark turnaround for the market, which had languished at rock-bottom prices for most of the year amid a glut of available ships. Output from North America has increased steadily as new projects ramp up, requiring more vessels to deliver the fuel to customers in Europe and Asia. The 30-day moving average for LNG exports from North America has climbed nearly 40 percent year-to-date, according to ship-tracking data compiled by Bloomberg.

Higher freight rates threaten to widen the spread between Asian and European gas prices, as it will be more expensive to send US shipments to the Pacific. A company booked a vessel for December in the Atlantic for about $100,000 per day, traders said. Earlier, when freight rates were lower, companies sent some vessels to Asia, further exacerbating a shortage of ships in the Atlantic, they added.

Still, the surge in charter rates is likely to have peaked and has “limited potential to run much higher,” according to Han Wei, a BloombergNEF analyst. “On the LNG tanker supply side, we’ll continue to see strong newbuild deliveries, which should keep spot charter rates in check,” he said.
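For reference, the reported 19 percent jump implies the Atlantic spot rate stood near $82,600 per day before Monday’s move (a quick back-calculation; the rounding is mine):

```python
# Back out the pre-jump Atlantic spot charter rate from the reported figures.
new_rate = 98_250           # $/day after the 19% jump
jump = 0.19
prior_rate = new_rate / (1 + jump)
print(round(prior_rate))    # 82563
```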

Meeting America’s generation challenge: Why smarter permitting matters
Scott Corwin is president and CEO of the American Public Power Association.

The United States faces a complex power challenge. The rise of data centers, the return of American manufacturing and the push to electrify vehicles and homes are all driving new demands on the nation’s electrical grid. Public power utilities, which are community-owned and not-for-profit providers, are prepared to meet these needs, serving nearly 55 million Americans in more than 2,000 communities across 49 states and several territories.

However, these utilities are often slowed by an outdated, unpredictable federal permitting system. This is particularly challenging for new generation projects, as regulatory barriers and red tape can prolong timelines by years and raise costs by millions. In the end, these costs are borne by communities and leave families and businesses more vulnerable to supply disruptions or delayed improvements in reliability.

Permitting reform is not about diminishing environmental protections. Instead, it is about removing unnecessary and duplicative regulatory hurdles that slow the construction of new generation resources and other needed energy infrastructure. The American Public Power Association supports pragmatic legislative efforts that streamline the permitting and siting process, provide clearer federal guidance, and produce timelier decisions. Reform must maintain strong environmental oversight but deliver outcomes that allow vital energy projects to proceed without unnecessary delay.

When the permitting process is lengthy or unpredictable, the difficulty and expense of building new infrastructure grows. Customers are directly affected; they may see higher energy bills or even miss economic development opportunities because the process takes too long. Projects that meet environmental standards should move through federal review with clear milestones and prompt decisions.
Federal policy must also ensure reviews are coordinated, not conducted in succession, so agencies work together with established schedules. Regulatory guidance should be consistent, even as administrations change, to enable local

Insights: What’s next for Permian basin electrification?
This Insights episode of the Oil & Gas Journal ReEnterprised podcast examines the rapidly growing power demands in the Permian basin region and the implications for operators, utilities, and adjacent industries. OGJ Editor-in-Chief Chris Smith interviews Will Kernan, Power Solutions Strategy Manager for Caterpillar Oil & Gas, on why electricity demand has surged by multiple gigawatts since 2021 and why traditional reliance on the grid is no longer sufficient to ensure timely project development and stable operations.
Kernan outlines how accelerating electricity demand from both oil and gas operations and new industrial entrants—particularly data centers—has strained transmission capacity, driving greater interest in on-site natural-gas-fired generation and microgrid models. The episode closes with a look at major grid-expansion proposals under consideration in Texas, their long lead times, and how distributed generation, waste-gas utilization, and field-scale microgrids will shape a more flexible and resilient power ecosystem for the Permian in the years ahead.

Highlights
1:50 – Permian electricity demand surging. Up ~4 GW since 2021 to 7.5 GW total—driven by upstream electrification, compression, midstream growth, and residential/commercial load.
3:13 – Grid is no longer the “easy button.” Utility interconnection timelines of 3–5+ years can’t

Venture Global CEO: CP2 capacity could grow to 30 million tpy
The CP2 LNG plant Venture Global Inc. is building in Cameron Parish, La., will be able to supply 30 million tonnes/year (tpy) versus its currently permitted capacity of 28 million tpy, Mike Sabel, the company’s chief executive officer and executive co-chairman, said Nov. 10. Speaking after Virginia-based Venture Global reported its third-quarter results as well as the signing of a 1-million tpy supply agreement with Spain’s Naturgy, Sabel said teams have been applying learnings from the company’s Calcasieu Pass and Plaquemines plants. That includes learnings from the tens of thousands of data points those plants are generating every minute. “We have a dedicated team of data scientists and process engineers and AI programmers that have been incorporating that data into our current operations, but also into design changes as we’ve learned some very surprising interactions of different parts of the facilities […] that we expect will carry over into CP2,” Sabel said. “We’ll have to go back and get the export authorization moved from 28 up to 30 but we think CP2 will be doing even better than Plaquemines, which is doing the best that any project has ever done.” As of Oct. 31, eight of the 26 planned liquefaction trains at CP2—which is forecast to cost a total of $29 billion—had been completed. Sabel said more than 3,500 construction workers are active at the site, which spans 700 acres. The Venture Global team took final investment decision on the project this summer and during the third quarter won final authorization from the US Department of Energy to export LNG to non-free trade agreement nations. During the three months that ended Sept. 30, Venture Global exported 100 LNG cargoes, up from 89 in the spring and 31 in third-quarter 2024. That translated into net income of $429 million on more than $3.3 billion in

Microsoft will invest $80B in AI data centers in fiscal 2025
And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.
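For a sense of scale, the spending figures quoted above can be checked with quick arithmetic (a sketch: the dollar amounts come from the article, the ratios are computed here):

```python
# Quick arithmetic on the Microsoft capex figures quoted above (in $ billions).
capex_2020 = 17.6          # Microsoft's 2020 capital expenditure
bi_estimate_2025 = 62.4    # Bloomberg Intelligence estimate for calendar 2025
smith_claim_fy2025 = 80.0  # Smith's figure for the fiscal year ending June 30, 2025

# Both 2025 figures dwarf the 2020 baseline:
print(round(bi_estimate_2025 / capex_2020, 1))   # 3.5 -> roughly 3.5x 2020 spending
print(round(smith_claim_fy2025 / capex_2020, 1)) # 4.5 -> roughly 4.5x 2020 spending
```

Either figure implies Microsoft's annual capital outlay has grown several-fold in about five years.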

John Deere unveils more autonomous farm machines to address skilled labor shortage
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet the non-tech company has become a regular at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.) John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

2025 playbook for enterprise AI success, from agents to evals
2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize in their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
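The closing ideas in the excerpt above, using an LLM as a judge and combining several cheaper models, can be sketched roughly as follows. This is a hedged illustration, not any vendor's API: the model calls are stubbed with canned answers, and all names are hypothetical.

```python
# Sketch of multi-model consensus with an LLM-as-judge fallback.
# Real deployments would replace the stubs with actual LLM API calls.
from collections import Counter

def make_stub_model(name: str):
    """Hypothetical stand-in for an LLM client; returns canned answers."""
    canned = {"model-a": "Paris", "model-b": "Paris", "model-c": "Lyon"}
    return lambda prompt: canned[name]

def consensus_answer(prompt, models, judge=None):
    """Ask several cheap models the same question and keep the majority answer.
    If no answer wins an outright majority, defer to a (stubbed) judge model."""
    votes = Counter(model(prompt) for model in models)
    answer, count = votes.most_common(1)[0]
    if count > len(models) // 2:
        return answer
    return judge(prompt) if judge else answer

models = [make_stub_model(n) for n in ("model-a", "model-b", "model-c")]
result = consensus_answer("What is the capital of France?", models)
print(result)  # Paris (2 of 3 stub models agree)
```

The design point is the one the article makes: as per-call costs fall, it becomes affordable to run several models and let agreement (or a judge model) filter out individual-model mistakes.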

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era
OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the US National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Three Aberdeen oil company headquarters sell for £45m
Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell them as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place in Scotland during the last quarter of 2024.

Trio of buildings snapped up

London-headquartered EEH Ventures was founded in 2013 and owns a number of residential properties, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. The Aberdeen headquarters of Taqa. Image: Supplied by CBRE. The North Sea headquarters of Middle East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year that it is planning to exit the North Sea by the end of 2029, blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030.

Aberdeen big deals

The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. Aberdeen city centre. Image: Shutterstock. Hammerson, which also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

2025 ransomware predictions, trends, and how to prepare
The Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks. Top Ransomware Predictions for 2025:
● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices, which sound increasingly realistic and adopt local accents and dialects to enhance credibility and success rates.
● The Trifecta of Social Engineering Attacks: Vishing, ransomware, and data exfiltration. Sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks, preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny.
● Targeted Industries Under Siege: Manufacturing, healthcare, education, and energy will remain primary targets, with no slowdown in attacks expected.
● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days.
● Ransomware Payouts Are on the Rise: In 2025, ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups specializing in designated attack tactics and collaborating through sophisticated profit-sharing models built on Ransomware-as-a-Service.
To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies.
● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats. ● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops

Realizing value with AI inference at scale and in production
In partnership with HPE

Training an AI model to predict equipment failures is an engineering achievement. But it’s not until prediction meets action—the moment that model successfully flags a malfunctioning machine—that true business transformation occurs. One technical milestone lives in a proof-of-concept deck; the other meaningfully contributes to the bottom line. Craig Partridge, senior director worldwide of Digital Next Advisory at HPE, believes “the true value of AI lies in inference”. Inference is where AI earns its keep. It’s the operational layer that puts all that training to use in real-world workflows. “The phrase we use for this is ‘trusted AI inferencing at scale and in production,’” Partridge says. “That’s where we think the biggest return on AI investments will come from.” Getting to that point is difficult. Christian Reichenbach, worldwide digital advisor at HPE, points to findings from the company’s recent survey of 1,775 IT leaders: While nearly a quarter (22%) of organizations have now operationalized AI—up from 15% the previous year—the majority remain stuck in experimentation. Reaching the next stage requires a three-part approach: establishing trust as an operating principle, ensuring data-centric execution, and cultivating IT leadership capable of scaling AI successfully.

Trust as a prerequisite for scalable, high-stakes AI

Trusted inference means users can actually rely on the answers they’re getting from AI systems. This is important for applications like generating marketing copy and deploying customer service chatbots, but it’s absolutely critical for higher-stakes scenarios—say, a robot assisting during surgeries or an autonomous vehicle navigating crowded streets.
Whatever the use case, establishing trust will require doubling down on data quality; first and foremost, inferencing outcomes must be built on reliable foundations. This reality informs one of Partridge’s go-to mantras: “Bad data in equals bad inferencing out.” Reichenbach cites a real-world example of what happens when data quality falls short—the rise of unreliable AI-generated content, including hallucinations, that clogs workflows and forces employees to spend significant time fact-checking. “When things go wrong, trust goes down, productivity gains are not reached, and the outcome we’re looking for is not achieved,” he says.
On the other hand, when trust is properly engineered into inference systems, efficiency and productivity gains follow. Take a network operations team tasked with troubleshooting configurations. With a trusted inferencing engine, that unit gains a reliable copilot that can deliver faster, more accurate, custom-tailored recommendations—“a 24/7 member of the team they didn’t have before,” says Partridge.

The shift to data-centric thinking and rise of the AI factory

In the first AI wave, companies rushed to hire data scientists, and many viewed sophisticated, trillion-parameter models as the primary goal. But today, as organizations move to turn early pilots into real, measurable outcomes, the focus has shifted toward data engineering and architecture. “Over the past five years, what’s become more meaningful is breaking down data silos, accessing data streams, and quickly unlocking value,” says Reichenbach. It’s an evolution happening alongside the rise of the AI factory—the always-on production line where data moves through pipelines and feedback loops to generate continuous intelligence. This shift reflects an evolution from model-centric to data-centric thinking, and with it comes a new set of strategic considerations. “It comes down to two things: How much of the intelligence (the model itself) is truly yours? And how much of the input (the data) is uniquely yours, from your customers, operations, or market?” says Reichenbach. These two central questions inform everything from platform direction and operating models to engineering roles and trust and security considerations. To help clients map their answers—and translate them into actionable strategies—Partridge breaks down HPE’s four-quadrant AI factory implication matrix (source: HPE, 2025):

Run: Accessing an external, pretrained model via an interface or API; organizations own neither the model nor the data. Implementation requires strong security and governance, along with a center of excellence that makes and communicates decisions about AI usage.

RAG (retrieval-augmented generation): Using external, pretrained models combined with a company’s proprietary data to create unique insights. Implementation focuses on connecting data streams to inferencing capabilities that provide rapid, integrated access to full-stack AI platforms.

Riches: Training custom models on data that resides in the enterprise for unique differentiation opportunities and insights. Implementation requires scalable, energy-efficient environments, and often high-performance systems.

Regulate: Leveraging custom models trained on external data, requiring the same scalable setup as Riches but with added focus on legal and regulatory compliance for handling sensitive, non-owned data with extreme caution.

Importantly, these quadrants are not mutually exclusive. Partridge notes that most organizations—including HPE itself—operate across many of the quadrants. “We build our own models to help understand how networks operate,” he says. “We then deploy that intelligence into our products, so that our end customer gets the chance to deliver in what we call the ‘Run’ quadrant. So for them, it’s not their data; it’s not their model. They’re just adding that capability inside their organization.”

IT’s moment to scale—and lead

The second part of Partridge’s catchphrase about inferencing—“at scale”—speaks to a primary tension in enterprise AI: what works for a handful of use cases often breaks when applied across an entire organization.
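The two questions behind the four-quadrant matrix described earlier (is the model yours, and is the data yours?) can be sketched as a tiny decision helper. This is an illustrative sketch of the framing only, not an HPE tool, and the function name is made up:

```python
# Illustrative mapping of the four-quadrant framing onto two yes/no questions.
def ai_factory_quadrant(own_model: bool, own_data: bool) -> str:
    """Classify an AI workload by model and data ownership."""
    if own_model and own_data:
        return "Riches"    # custom models trained on your own enterprise data
    if own_model:
        return "Regulate"  # custom models trained on external, non-owned data
    if own_data:
        return "RAG"       # external models combined with proprietary data
    return "Run"           # external model accessed via API; neither is yours

print(ai_factory_quadrant(own_model=False, own_data=True))  # RAG
```

As the text notes, a single organization typically lands in several quadrants at once, one per workload rather than one per company.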
“There’s value in experimentation and kicking ideas around,” he says. “But if you want to really see the benefits of AI, it needs to be something that everybody can engage in and that solves for many different use cases.” In Partridge’s view, the challenge of turning boutique pilots into organization-wide systems is uniquely suited to the IT function’s core competencies—and it’s a leadership opportunity the function can’t afford to sit out. “IT takes things that are small-scale and implements the discipline required to run them at scale,” he says. “So, IT organizations really need to lean into this debate.” For IT teams content to linger on the sidelines, history offers a cautionary tale from the last major infrastructure shift: enterprise migration to the cloud. Many IT departments sat out decision-making during the early cloud adoption wave a decade ago, while business units independently deployed cloud services. This led to fragmented systems, redundant spending, and security gaps that took years to untangle. The same dynamic threatens to repeat with AI, as different teams experiment with tools and models outside IT’s purview. This phenomenon—sometimes called shadow AI—describes environments where pilots proliferate without oversight or governance. Partridge believes that most organizations are already operating in the “Run” quadrant in some capacity, as employees will use AI tools whether or not they’re officially authorized to. Rather than shut down experimentation, IT’s mandate now is to bring structure to it: architecting a data platform strategy that brings together enterprise data with guardrails, governance frameworks, and accessibility to feed AI, while continuing to standardize infrastructure (such as private cloud AI platforms), protect data integrity, and safeguard brand trust, all while enabling the speed and flexibility that AI applications demand.
These are the requirements for reaching the final milestone: AI that’s truly in production. For teams on the path to that goal, Reichenbach distills what success requires. “It comes down to knowing where you play: When to Run external models smarter, when to apply RAG to make them more informed, where to invest to unlock Riches from your own data and models, and when to Regulate what you don’t control,” says Reichenbach. “The winners will be those who bring clarity to all quadrants and align technology ambition with governance and value creation.” For more, register to watch MIT Technology Review’s EmTech AI Salon, featuring HPE. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Google’s new Gemini 3 “vibe-codes” responses and comes with its own agent
Google today unveiled Gemini 3, a major upgrade to its flagship multimodal model. The firm says the new model is better at reasoning, has more fluid multimodal capabilities (the ability to work across voice, text, or images), and will work like an agent. The previous model, Gemini 2.5, supports multimodal input: users can feed it images, handwriting, or voice, but it usually requires explicit instructions about the format the user wants back, and it defaults to plain text regardless. Gemini 3 introduces what Google calls “generative interfaces,” which allow the model to make its own choices about what kind of output best fits the prompt, assembling visual layouts and dynamic views on its own instead of returning a block of text. Ask for travel recommendations and it may spin up a website-like interface inside the app, complete with modules, images, and follow-up prompts such as “How many days are you traveling?” or “What kinds of activities do you enjoy?” It also presents clickable options based on what you might want next.
When asked to explain a concept, Gemini 3 may sketch a diagram or generate a simple animation on its own if it believes a visual is more effective. “Visual layout generates an immersive, magazine-style view complete with photos and modules,” says Josh Woodward, VP of Google Labs, Gemini, and AI Studio. “These elements don’t just look good but invite your input to further tailor the results.”
With Gemini 3, Google is also introducing Gemini Agent, an experimental feature designed to handle multi-step tasks directly inside the app. The agent can connect to services such as Google Calendar, Gmail, and Reminders. Once granted access, it can execute tasks like organizing an inbox or managing schedules. Similar to other agents, it breaks tasks into discrete steps, displays its progress in real time, and pauses for approval from the user before continuing. Google describes the feature as a step toward “a true generalist agent.” It will be available on the web for Google AI Ultra subscribers in the US starting November 18. The overall approach can seem a lot like “vibe coding,” where users describe an end goal in plain language and let the model assemble the interface or code needed to get there. The update also ties Gemini more deeply into Google’s existing products. In Search, a limited group of Google AI Pro and Ultra subscribers can now switch to Gemini 3 Pro, the reasoning variation of the new model, to receive deeper, more thorough AI-generated summaries that rely on the model’s reasoning rather than the existing AI Mode. For shopping, Gemini will now pull from Google’s Shopping Graph—which the company says contains more than 50 billion product listings—to generate its own recommendation guides. Users just need to ask a shopping-related question or search a shopping-related phrase, and the model assembles an interactive, Wirecutter-style product recommendation piece, complete with prices and product details, without redirecting to an external site. For developers, Google is also pushing single-prompt software generation further. The company introduced Google Antigravity, a development platform that acts as an all-in-one space where code, tools, and workflows can be created and managed from a single prompt. Derek Nee, CEO of Flowith, an agentic AI application, told MIT Technology Review that Gemini 3 Pro addresses several gaps in earlier models. 
Improvements include stronger visual understanding, better code generation, and better performance on long tasks—features he sees as essential for developers of AI apps and agents. “Given its speed and cost advantages, we’re integrating the new model into our product,” he says. “We’re optimistic about its potential, but we need deeper testing to understand how far it can go.”

Networking for AI: Building the foundation for real-time intelligence
In partnership with HPE

The Ryder Cup is an almost-century-old tournament pitting Europe against the United States in an elite showcase of golf skill and strategy. At the 2025 event, nearly a quarter of a million spectators gathered to watch three days of fierce competition on the fairways. From a technology and logistics perspective, pulling off an event of this scale is no easy feat. The Ryder Cup’s infrastructure must accommodate the tens of thousands of network users who flood the venue (this year, at Bethpage Black in Farmingdale, New York) every day. To manage this IT complexity, the Ryder Cup engaged technology partner HPE to create a central hub for its operations. The solution centered on a platform where tournament staff could access data visualization supporting operational decision-making. This dashboard, which leveraged a high-performance network and private-cloud environment, aggregated and distilled insights from diverse real-time data feeds. It was a glimpse into what AI-ready networking looks like at scale—a real-world stress test with implications for everything from event management to enterprise operations. While models and data readiness get the lion’s share of boardroom attention and media hype, networking is a critical third leg of successful AI implementation, explains Jon Green, CTO of HPE Networking. “Disconnected AI doesn’t get you very much; you need a way to get data into it and out of it for both training and inference,” he says.
As businesses move toward distributed, real-time AI applications, tomorrow’s networks will need to move ever larger volumes of information at ever higher speeds. What played out on the greens at Bethpage Black represents a lesson being learned across industries: inference-ready networks are a make-or-break factor for turning AI’s promise into real-world performance.

Making a network AI inference-ready

More than half of organizations are still struggling to operationalize their data pipelines. In a recent HPE cross-industry survey of 1,775 IT leaders, 45% said they could run real-time data pushes and pulls for innovation. That is a noticeable improvement over last year’s numbers (just 7% reported having such capabilities in 2024), but there is still work to be done to connect data collection with real-time decision-making.
The network may hold the key to further narrowing that gap. Part of the solution will likely come down to infrastructure design. While traditional enterprise networks are engineered to handle the predictable flow of business applications—email, browsers, file sharing, etc.—they’re not designed to field the dynamic, high-volume data movement required by AI workloads. Inferencing in particular depends on shuttling vast datasets between multiple GPUs with supercomputer-like precision. “There’s an ability to play fast and loose with a standard, off-the-shelf enterprise network,” says Green. “Few will notice if an email platform is half a second slower than it might’ve been. But with AI transaction processing, the entire job is gated by the last calculation taking place. So it becomes really noticeable if you’ve got any loss or congestion.” Networks built for AI, therefore, must operate with a different set of performance characteristics, including ultra-low latency, lossless throughput, specialized equipment, and adaptability at scale. One of these differences is AI’s distributed nature, which affects the seamless flow of data. The Ryder Cup was a vivid demonstration of this new class of networking in action. During the event, a Connected Intelligence Center was put in place to ingest data from ticket scans, weather reports, GPS-tracked golf carts, concession and merchandise sales, spectator and consumer queues, and network performance. Additionally, 67 AI-enabled cameras were positioned throughout the course. Inputs were analyzed through an operational intelligence dashboard and provided staff with an instantaneous view of activity across the grounds. “The tournament is really complex from a networking perspective, because you have many big open areas that aren’t uniformly packed with people,” explains Green. “People tend to follow the action. 
So in certain areas, it’s really dense with lots of people and devices, while other areas are completely empty.” To handle that variability, engineers built out a two-tiered architecture. Across the sprawling venue, more than 650 WiFi 6E access points, 170 network switches, and 25 user experience sensors worked together to maintain continuous connectivity and feed a private cloud AI cluster for live analytics. The front-end layer connected cameras, sensors, and access points to capture live video and movement data, while a back-end layer—located within a temporary on-site data center—linked GPUs and servers in a high-speed, low-latency configuration that effectively served as the system’s brain. Together, the setup enabled both rapid on-the-ground responses and data collection that could inform future operational planning. “AI models also were available to the team which could process video of the shots taken and help determine, from the footage, which ones were the most interesting,” says Green.

Physical AI and the return of on-prem intelligence

If time is of the essence for event management, it’s even more critical in contexts where safety is on the line—for instance, a self-driving car making a split-second decision to accelerate or brake. In planning for the rise of physical AI, where applications move off screens and onto factory floors and city streets, a growing number of enterprises are rethinking their architectures. Instead of sending data to centralized clouds for inference, some are deploying edge-based AI clusters that process information closer to where it is generated. Data-intensive training may still occur in the cloud, but inferencing happens on-site.
This hybrid approach is fueling a wave of operational repatriation, as workloads once relegated to the cloud return to on-premises infrastructure for reasons of speed, security, sovereignty, and cost. “We’ve had an out-migration of IT into the cloud in recent years, but physical AI is one of the use cases that we believe will bring a lot of that back on-prem,” predicts Green, giving the example of an AI-infused factory floor, where a round trip of sensor data to the cloud would be too slow to safely control automated machinery. “By the time processing happens in the cloud, the machine has already moved,” he explains.

There’s data to back up Green’s projection: research from Enterprise Research Group shows that 84% of respondents are reevaluating application deployment strategies due to the growth of AI. Market forecasts also reflect this shift: according to IDC, the AI infrastructure market is expected to reach $758 billion by 2029.

AI for networking and the future of self-driving infrastructure

The relationship between networking and AI is circular: modern networks make AI at scale possible, but AI is also helping make networks smarter and more capable. “Networks are some of the most data-rich systems in any organization,” says Green. “That makes them a perfect use case for AI. We can analyze millions of configuration states across thousands of customer environments and learn what actually improves performance or stability.”

At HPE, for example, which has one of the largest network telemetry repositories in the world, AI models analyze anonymized data collected from billions of connected devices to identify trends and refine behavior over time. The platform processes more than a trillion telemetry points each day, which means it can continuously learn from real-world conditions. The concept, broadly known as AIOps (AI-driven IT operations), is changing how enterprise networks are managed across industries.
Today, AI surfaces insights as recommendations that administrators can choose to apply with a single click. Tomorrow, those same systems might automatically test and deploy low-risk changes themselves. That long-term vision, Green notes, is referred to as a “self-driving network”—one that handles the repetitive, error-prone tasks that have historically plagued IT teams.

“AI isn’t coming for the network engineer’s job, but it will eliminate the tedious stuff that slows them down,” he says. “You’ll be able to say, ‘Please go configure 130 switches to solve this issue,’ and the system will handle it. When a port gets stuck or someone plugs a connector in the wrong direction, AI can detect it—and in many cases, fix it automatically.”

Digital initiatives now depend on how effectively information moves. Whether coordinating a live event or streamlining a supply chain, the performance of the network increasingly defines the performance of the business. Building that foundation today will separate those who pilot from those who scale AI.
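The recommend-then-approve loop described above can be sketched in a few lines. This is an illustration of the workflow, not HPE’s implementation; the port names, telemetry fields, and flap threshold are all invented:

```python
# Minimal sketch of the AIOps loop described above: telemetry in,
# a recommendation out, applied only on explicit approval. The port
# names and the flap threshold are invented for illustration.

from dataclasses import dataclass

@dataclass
class PortStats:
    name: str
    link_flaps_per_hour: int
    rx_errors: int

def recommend(stats: PortStats, flap_limit: int = 5):
    """Turn raw telemetry into a human-reviewable recommendation."""
    if stats.link_flaps_per_hour > flap_limit:
        return f"bounce {stats.name}: {stats.link_flaps_per_hour} flaps/hour"
    if stats.rx_errors > 0:
        return f"check transceiver seating on {stats.name}"
    return None

def apply_if_approved(recommendation: str, approved: bool) -> str:
    """Today a human clicks approve; a 'self-driving' network might
    auto-approve low-risk changes after testing them."""
    return f"APPLIED: {recommendation}" if approved else f"PENDING: {recommendation}"

ports = [PortStats("eth1/7", 12, 0), PortStats("eth1/9", 0, 0)]
for p in ports:
    rec = recommend(p)
    if rec:
        print(apply_if_approved(rec, approved=True))
```

The step from “one-click” to “self-driving” operation is then a policy question: which classes of recommendation may set `approved` automatically.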
For more, register to watch MIT Technology Review’s EmTech AI Salon, featuring HPE. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: AI-powered warfare, and how embryo care is changing
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The State of AI: How war will be changed forever
—Helen Warrell & James O’Donnell

It is July 2027, and China is on the brink of invading Taiwan. Autonomous drones with AI targeting capabilities are primed to overpower the island’s air defenses as a series of crippling AI-generated cyberattacks cut off energy supplies and key communications. In the meantime, a vast disinformation campaign enacted by an AI-powered pro-Chinese meme farm spreads across global social media, deadening the outcry at Beijing’s act of aggression.

Scenarios such as this have brought dystopian horror to the debate about the use of AI in warfare. Military commanders hope for a digitally enhanced force that is faster and more accurate than human-directed combat.
But there are fears that as AI assumes an increasingly central role, these same commanders will lose control of a conflict that escalates too quickly and lacks ethical or legal oversight. Read the full story.

This is the third edition of The State of AI, our subscriber-only collaboration between the Financial Times and MIT Technology Review examining the ways in which AI is reshaping global power. Every Monday, writers from both publications debate one aspect of the generative AI revolution. While subscribers to The Algorithm, our weekly AI newsletter, get access to an extended excerpt, subscribers to MIT Technology Review can read the whole thing. Sign up here to receive future editions every Monday.
Job titles of the future: AI embryologist

Embryologists are the scientists behind the scenes of in vitro fertilization who oversee the development and selection of embryos, prepare them for transfer, and maintain the lab environment. They’ve been a critical part of IVF for decades, but their job has gotten a whole lot busier in recent years as demand for the fertility treatment skyrockets and clinics struggle to keep up.

Klaus Wiemer, a veteran embryologist and IVF lab director, believes artificial intelligence might help by predicting embryo health in real time and unlocking new avenues for productivity in the lab. Read the full story. —Amanda Smith

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Big Tech’s job cuts are a warning sign
They’re a canary down the mine for other industries. (WP $)
+ Americans appear to feel increasingly unsettled by AI. (WSJ $)
+ Global fund managers worry companies are overinvesting in the technology. (FT $)
2 Iran is attempting to stimulate rain to end its deadly drought
But critics warn that cloud seeding is a challenging process. (New Scientist $)
+ Parts of western Iran are now experiencing flooding. (Reuters)
+ Why it’s so hard to bust the weather control conspiracy theory. (MIT Technology Review)

3 Air taxi startups may produce new aircraft for war zones
The US Army has announced its intentions to acquire most of its weapons from startups, not major contractors. (The Information $)
+ US firm Joby Aviation is launching flying taxis in Dubai. (NBC News)
+ This giant microwave may change the future of war. (MIT Technology Review)

4 Weight-loss drug maker Eli Lilly is likely to cross a trillion-dollar valuation
As it prepares to launch a pill alternative to its injections. (WSJ $)
+ Arch rival Novo Nordisk A/S is undercutting the company to compete. (Bloomberg $)
+ We’re learning more about what weight-loss drugs do to the body. (MIT Technology Review)

5 What’s going on with the US TikTok ban?
Even the lawmakers in charge don’t seem to know. (The Verge)

6 It’s getting harder to grow cocoa
Mass tree felling and lower rainfall in the Congo Basin are to blame. (FT $)
+ Industrial agriculture activists are everywhere at COP30. (The Guardian)
+ Africa fights rising hunger by looking to foods of the past. (MIT Technology Review)

7 Russia is cracking down on its critical military bloggers
Armchair critics are facing jail time if they refuse to apologize. (Economist $)

8 Why the auto industry is so obsessed with humanoid robots
It’s not just Tesla—plenty of others want to get in on the act. (The Atlantic $)
+ China’s EV giants are betting big on humanoid robots. (MIT Technology Review)

9 Indian startups are challenging ChatGPT’s AI dominance
They support a far wider range of languages than the large AI firms’ models. (Rest of World)
+ OpenAI is huge in India. Its models are steeped in caste bias. (MIT Technology Review)

10 These tiny sensors track butterflies on their journey to Mexico 🦋
Scientists hope it’ll shed some light on their mysterious life cycles. (NYT $)
Quote of the day
“I think no company is going to be immune, including us.”

—Sundar Pichai, CEO of Google, warns the BBC about the precarious nature of the AI bubble.

One more thing

How a 1980s toy robot arm inspired modern robotics
—Jon Keegan

As the child of an electronic engineer, I spent a lot of time in our local Radio Shack. While my dad was locating capacitors and resistors, I was in the toy section. It was there, in 1984, that I discovered the best toy of my childhood: the Armatron robotic arm.

Described as a “robot-like arm to aid young masterminds in scientific and laboratory experiments,” it was a legit robotic arm. And the bold look and function of Armatron made quite an impression on many young kids who would one day have a career in robotics. Read the full story.
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The US Library of Congress has acquired some handwritten drafts of iconic songs from The Wizard of Oz.
+ This interesting dashboard tracks the top 500 musical artists in the world right now—some of the listings may surprise you (or just make you feel really old).
+ Cult author Chris Kraus shares what’s floating her boat right now.
+ The first images of the forthcoming Legend of Zelda film are here!

OpenAI named Emerging Leader in Generative AI
To us, this recognition reinforces something we hear constantly from customers such as Amgen, Cisco, Morgan Stanley, T-Mobile, Target, and Thermo Fisher Scientific: AI is becoming a core layer of enterprise infrastructure. Companies aren’t just experimenting—they’re deploying AI systems that reshape how work gets done. We’ve invested heavily in privacy controls, data governance and residency, monitoring, and evaluations so enterprises can deploy AI safely.

The State of AI: How war will be changed forever
Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday, writers from both publications debate one aspect of the generative AI revolution reshaping global power. In this conversation, Helen Warrell, FT investigations reporter and former defense and security editor, and James O’Donnell, MIT Technology Review’s senior AI reporter, consider the ethical quandaries and financial incentives around AI’s use by the military.

Helen Warrell, FT investigations reporter

It is July 2027, and China is on the brink of invading Taiwan. Autonomous drones with AI targeting capabilities are primed to overpower the island’s air defenses as a series of crippling AI-generated cyberattacks cut off energy supplies and key communications. In the meantime, a vast disinformation campaign enacted by an AI-powered pro-Chinese meme farm spreads across global social media, deadening the outcry at Beijing’s act of aggression.
Scenarios such as this have brought dystopian horror to the debate about the use of AI in warfare. Military commanders hope for a digitally enhanced force that is faster and more accurate than human-directed combat. But there are fears that as AI assumes an increasingly central role, these same commanders will lose control of a conflict that escalates too quickly and lacks ethical or legal oversight. Henry Kissinger, the former US secretary of state, spent his final years warning about the coming catastrophe of AI-driven warfare. Grasping and mitigating these risks is the military priority—some would say the “Oppenheimer moment”—of our age.

One emerging consensus in the West is that decisions around the deployment of nuclear weapons should not be outsourced to AI. UN secretary-general António Guterres has gone further, calling for an outright ban on fully autonomous lethal weapons systems. It is essential that regulation keep pace with evolving technology. But in the sci-fi-fueled excitement, it is easy to lose track of what is actually possible. As researchers at Harvard’s Belfer Center point out, AI optimists often underestimate the challenges of fielding fully autonomous weapon systems. It is entirely possible that the capabilities of AI in combat are being overhyped.
Anthony King, director of the Strategy and Security Institute at the University of Exeter and a key proponent of this argument, suggests that rather than replacing humans, AI will be used to improve military insight. Even if the character of war is changing and remote technology is refining weapon systems, he insists, “the complete automation of war itself is simply an illusion.”

Of the three current military use cases of AI, none involves full autonomy. It is being developed for planning and logistics; for cyber warfare (sabotage, espionage, hacking, and information operations); and—most controversially—for weapons targeting, an application already in use on the battlefields of Ukraine and Gaza. Kyiv’s troops use AI software to direct drones able to evade Russian jammers as they close in on sensitive sites. The Israel Defense Forces have developed an AI-assisted decision support system known as Lavender, which has helped identify around 37,000 potential human targets within Gaza.

There is clearly a danger that the Lavender database replicates the biases of the data it is trained on. But military personnel carry biases too. One Israeli intelligence officer who used Lavender claimed to have more faith in the fairness of a “statistical mechanism” than in that of a grieving soldier.

Tech optimists designing AI weapons even deny that specific new controls are needed to govern their capabilities. Keith Dear, a former UK military officer who now runs the strategic forecasting company Cassi AI, says existing laws are more than sufficient: “You make sure there’s nothing in the training data that might cause the system to go rogue … when you are confident you deploy it—and you, the human commander, are responsible for anything they might do that goes wrong.”

It is an intriguing thought that some of the fear and shock about the use of AI in war may come from those who are unfamiliar with brutal but realistic military norms.
What do you think, James? Is some opposition to AI in warfare less about the use of autonomous systems and really an argument against war itself?

James O’Donnell replies:

Hi Helen,

One thing I’ve noticed is that there’s been a drastic shift in the attitudes of AI companies regarding military applications of their products. In the beginning of 2024, OpenAI unambiguously forbade the use of its tools for warfare, but by the end of the year, it had signed an agreement with Anduril to help it take down drones on the battlefield.
This step—not a fully autonomous weapon, to be sure, but very much a battlefield application of AI—marked a drastic change in how much tech companies could publicly link themselves with defense. What happened along the way?

For one thing, it’s the hype. We’re told AI will not just bring superintelligence and scientific discovery but also make warfare sharper, more accurate and calculated, and less prone to human fallibility. I spoke with US Marines, for example, who tested a type of AI while patrolling the South Pacific that was advertised to analyze foreign intelligence faster than a human could.

Secondly, money talks. OpenAI and others need to start recouping some of the unimaginable amounts of cash they’re spending on training and running these models. And few have deeper pockets than the Pentagon. Europe’s defense heads seem keen to splash the cash too. Meanwhile, the amount of venture capital funding for defense tech this year has already doubled the total for all of 2024, as VCs hope to cash in on militaries’ newfound willingness to buy from startups.

I do think the opposition to AI warfare falls into a few camps, one of which simply rejects the idea that more precise targeting (if it’s actually more precise at all) will mean fewer casualties rather than just more war. Consider the first era of drone warfare in Afghanistan. As drone strikes became cheaper to implement, can we really say they reduced carnage? Or did they merely enable more destruction per dollar?

But the second camp of criticism (and now I’m finally getting to your question) comes from people who are well versed in the realities of war but have very specific complaints about the technology’s fundamental limitations.
Missy Cummings, for example, is a former fighter pilot for the US Navy who is now a professor of engineering and computer science at George Mason University. She has been outspoken in her belief that large language models, specifically, are prone to make huge mistakes in military settings. The typical response to this complaint is that AI’s outputs are human-checked. But if an AI model relies on thousands of inputs for its conclusion, can that conclusion really be checked by one person? Tech companies are making extraordinarily big promises about what AI can do in these high-stakes applications, all while pressure to implement them is sky high. For me, this means it’s time for more skepticism, not less. Helen responds:
Hi James,

We should definitely continue to question the safety of AI warfare systems and the oversight to which they’re subjected—and hold political leaders to account in this area. But I am suggesting that we also apply some skepticism to what you rightly describe as the “extraordinarily big promises” made by some companies about what AI might be able to achieve on the battlefield.
There will be both opportunities and hazards in what the military is being offered by a relatively nascent (though booming) defense tech scene. The danger is that in the speed and secrecy of an arms race in AI weapons, these emerging capabilities may not receive the scrutiny and debate they desperately need.


Dell targets enterprise AI with switches, servers, reinforced Nvidia partnership
The company also introduced a new Intel-powered PowerEdge server, the R770AP: an air-cooled platform equipped with Intel Xeon 6 P-core 6900-series processors, featuring high core counts, large cache sizes, and support for CXL memory expansion. The PowerEdge R770AP offers enhanced parallel processing, reduced memory latency, and abundant PCIe lanes, enabling accelerated trading algorithms, scalable memory configurations, and improved network performance.

Dell first announced its AI strategy last year with Nvidia as a partner, and it is now furthering that partnership by integrating Dell’s ObjectScale and PowerScale unstructured data storage with the Nvidia NIXL library. NIXL is part of Nvidia Dynamo, a distributed inference-serving framework built to deploy and automate AI models at scale. The integration enables scalable KV cache storage, reuse, and sharing, achieving a 1-second time to first token (TTFT) at a full context window of 131K tokens—19x faster than standard vLLM—while reducing infrastructure costs and overcoming GPU memory capacity bottlenecks.

The Dell Automation Platform, now integrated into the Dell AI Factory with Nvidia, enables more intelligent and automated operations by deploying secure, validated, and optimized solutions. This approach delivers consistent, repeatable results and helps organizations realize the benefits of AI-driven applications powered by Nvidia’s accelerated computing.
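The TTFT gain from KV cache reuse is easier to see with a toy model. The sketch below is not the NIXL or Dynamo API; it only mimics the principle that the expensive prefill pass over a shared context is computed once, stored, and reused by later requests:

```python
import hashlib
import time

# Toy illustration of KV-cache reuse (not the actual NIXL/Dynamo API):
# the costly "prefill" over a long shared prefix is computed once,
# cached, and reused by later requests, shrinking time to first token.

kv_store = {}  # prefix hash -> precomputed stand-in for the KV cache

def prefill(prefix: str):
    """Stand-in for the expensive attention pass over the prompt prefix."""
    n_tokens = len(prefix.split())
    time.sleep(0.002 * n_tokens)  # cost grows with prefix length
    return {"tokens": n_tokens}

def first_token(prefix: str, question: str) -> float:
    """Return time-to-first-token, reusing a cached prefill when possible."""
    start = time.perf_counter()
    key = hashlib.sha256(prefix.encode()).hexdigest()
    if key not in kv_store:          # cache miss: pay the full prefill cost
        kv_store[key] = prefill(prefix)
    _ = (kv_store[key], question)    # the decode step would start here
    return time.perf_counter() - start

doc = "long shared context " * 20
cold = first_token(doc, "Summarize this.")
warm = first_token(doc, "List key points.")  # same prefix: prefill skipped
print(f"cold TTFT {cold:.3f}s, warm TTFT {warm:.3f}s")
```

In a real serving stack the cached object is the attention key/value tensors, shared across GPUs and storage tiers; the speedup Dell cites comes from serving that cache from fast storage instead of recomputing it on every request.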

Energy Department Closes Loan to Restart Nuclear Power Plant in Pennsylvania
WASHINGTON—U.S. Secretary of Energy Chris Wright today announced that the Department of Energy’s (DOE) Loan Programs Office (LPO) closed a loan to lower energy costs and restart a Pennsylvania nuclear power plant. The $1 billion loan to Constellation Energy Generation, LLC (Constellation) will help finance the Crane Clean Energy Center, an 835 MW plant located on the Susquehanna River in Londonderry Township, Pennsylvania.

The loan, funded by the Energy Dominance Financing (EDF) Program created under the Working Families Tax Cut, highlights the Energy Department’s role in advancing President Trump’s Executive Order, Reinvigorating the Nuclear Industrial Base, by supporting the restart of nuclear power plants.

“Thanks to President Trump’s bold leadership and the Working Families Tax Cut, the United States is taking unprecedented steps to lower energy costs and bring about the next American nuclear renaissance,” said Energy Secretary Wright. “Constellation’s restart of a nuclear power plant in Pennsylvania will provide affordable, reliable, and secure energy to Americans across the Mid-Atlantic region. It will also help ensure America has the energy it needs to grow its domestic manufacturing base and win the AI race.”

This announcement marks the first project to receive a concurrent conditional commitment and financial close under the Trump Administration. The loan will partially finance the restart of a reactor that ceased operations in 2019 but was never fully decommissioned. Once restarted, pending U.S. Nuclear Regulatory Commission licensing approvals, the 835 MW reactor will provide reliable and affordable baseload power to the PJM Interconnection region, powering the equivalent of approximately 800,000 homes. The Crane Restart project will help lower electricity costs, strengthen grid reliability, create over 600 American jobs, and advance the Administration’s mission to lead in global AI innovation and restore domestic manufacturing industries.
DOE remains committed to fulfilling this mission to maximize the speed and scale of

Kentik bolsters network observability platform with autonomous investigation
“Advisor knows how to actually do networking things and can be more like a teammate,” Freedman explained. “It will go, reason, make a plan, use the different products, go look across the domains of telemetry and awareness, and say, ‘here’s what I think is going on, and here’s what you should do about it.’”

In practice, an engineer can now ask, “What might be causing this customer to be down?” and the system will autonomously check traffic volumes, review recent firewall changes, examine the timing of events, and identify whether a specific rule change correlates with the traffic drop. It presents findings with the underlying data and suggests specific remediation steps.

Data engine extensions for contextual analysis

The autonomous investigation capability required Kentik to extend its data platform beyond flow records and device metrics. The Kentik Data Engine processes approximately one trillion telemetry points daily from NetFlow, sFlow, device APIs, cloud provider APIs, and synthetic monitoring. But correlation analysis requires additional context that wasn’t previously captured. “We needed configs, which we didn’t have,” Freedman said. “We needed graph and topology, which we had, but in places.”

The company added configuration tracking, topology modeling, and relationship mapping to the platform. This allows the system to answer questions like whether a firewall rule change affected specific customer IP addresses or whether an IGP metric adjustment could have influenced routing decisions. The context layer connects time series data with network state information. The underlying database architecture uses a columnar store for historical data and a streaming database for real-time analysis. Both use the same query language, which allows the system to correlate events across time windows without moving data between systems.

Foundation models and workflow training

Kentik uses commercial large language models (LLMs) rather than training its own from scratch.
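The change-to-traffic correlation described above can be approximated with a simple windowed comparison. This is a toy sketch, not Kentik’s engine; the traffic values, rule names, and thresholds are all invented for illustration:

```python
# Toy sketch of the correlation step described above (not Kentik's actual
# engine): given per-minute traffic samples and a log of config changes,
# flag any change whose timestamp sits right before a sharp traffic drop.

def mean(xs):
    return sum(xs) / len(xs)

def changes_correlated_with_drop(traffic, changes, window=3, drop_ratio=0.5):
    """traffic: list of (minute, bits_per_sec); changes: list of (minute, desc).
    Returns changes where mean post-change traffic fell below
    drop_ratio * mean pre-change traffic."""
    by_minute = dict(traffic)
    suspects = []
    for t, desc in changes:
        pre = [by_minute[m] for m in range(t - window, t) if m in by_minute]
        post = [by_minute[m] for m in range(t, t + window) if m in by_minute]
        if pre and post and mean(post) < drop_ratio * mean(pre):
            suspects.append((t, desc))
    return suspects

traffic = [(m, 900) for m in range(0, 10)] + [(m, 50) for m in range(10, 20)]
changes = [(4, "ACL update on edge router"), (10, "firewall rule 4471 added")]
print(changes_correlated_with_drop(traffic, changes))
# only the firewall change at minute 10 lines up with the drop
```

A production system would additionally consult the config and topology context described above to decide whether a flagged rule could plausibly affect the customer’s prefixes, rather than relying on timing alone.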

IBM readies commercially valuable quantum computer technology
But even at release, the system lets enterprises run longer quantum programs than before, with a wider variety of potential applications, says Crowder. Another breakthrough is its error correction. Last year, IBM demonstrated that it could perform error-correction decoding on classical, off-the-shelf components—specifically, an AMD FPGA chip—quickly and cheaply enough to be practical, and that capability is now commercially available. “And we did it a year early,” added Crowder.

IBM now also has the technology to couple multiple quantum chips together into larger systems. “Nighthawk is going to be the chip that is the basis for building these systems,” says Crowder. By the end of 2026, Nighthawk’s new modular architecture and advanced error correction will allow it to support 7,500 gates, reaching 10,000 gates in 2027 and 15,000 in 2028.

The average company won’t see any immediate benefit, says Gartner’s Horvath. “But if you were in charge of a capital markets desk, you might use this,” he says. “If you were in charge of logistics, you might use this to optimize your routes.” And if you’re in charge of cybersecurity, it might be time to upgrade to quantum-safe encryption, he said. “You need to be on PQC [post-quantum cryptography] by 2030,” he says. “We think that commercial actors will have the ability to start breaking digital signatures by 2032 if the current timelines hold up.”

Atlantic LNG Freight Rates at Highest in Nearly 2 Years
The cost of transporting liquefied natural gas across the Atlantic Ocean surged to the highest in almost two years, as expanding exports from North America boosted demand for tankers. The spot rate to hire an LNG vessel for delivery from the US to Europe jumped 19 percent to $98,250 per day on Monday, the highest since January 2024, according to Spark Commodities, which tracks shipping prices. Costs to hire a tanker in the Pacific Ocean also jumped 15 percent to the highest in over a year, the data show.

This is a stark turnaround for the market, which had languished at rock-bottom prices for most of the year amid a glut of available ships. Output from North America has increased steadily as new projects ramp up, requiring more vessels to deliver the fuel to customers in Europe and Asia. The 30-day moving average for LNG exports from North America has climbed nearly 40 percent year-to-date, according to ship-tracking data compiled by Bloomberg.

Higher freight rates threaten to widen the spread between Asian and European gas prices, as it will be more expensive to send US shipments to the Pacific. A company booked a vessel for December in the Atlantic for about $100,000 per day, traders said. Earlier in the year, when freight rates were lower, companies sent some vessels to Asia, further exacerbating the shortage of ships in the Atlantic, they added.

Still, the surge in charter rates is likely to have peaked and has “limited potential to run much higher,” according to Han Wei, a BloombergNEF analyst. “On the LNG tanker supply side, we’ll continue to see strong new build deliveries, which should keep spot charter rates in check,” he said.
Stay Ahead with the Paperboy Newsletter
Your weekly dose of insights into AI, Bitcoin mining, datacenter, and energy industry news. Spend 3-5 minutes and catch up on a week of news.