Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy

Lorem Ipsum is simply dummy text of the printing and typesetting industry.


Featured Articles

Cisco extends its Secure AI Factory with Nvidia

“Customers can now control and manage this environment and operate it like it was a traditional data center fabric,” Wollenweber said. “The ability to bring it under the same Nexus umbrella is actually a huge selling point for AI customers, because their IT infrastructure folks, their operational people that are running the network, already understand how to use these Nexus tools, and so they can now add AI workloads and kind of accelerated computing technologies like GPUs, but in that same Nexus umbrella,” Wollenweber said.

“As AI becomes operational and distributed, complexity becomes the enemy of scale. Fragmented architectures force customers to manage integration, policy enforcement, observability, and security across silos, increasing cost and slowing innovation,” said Wollenweber. “Architecting silicon, networking, compute, security, and AI software into a cohesive system gives organizations a unified operating model, stronger performance guarantees, and embedded trust.”

Those are the driving ideas around Cisco Secure AI Factory with Nvidia, Wollenweber said. Introduced a year ago, Secure AI Factory with Nvidia integrates Cisco’s Hypershield and AI Defense packages to help protect the development, deployment, and use of AI models and applications. Hypershield uses AI to dynamically refine security policies based on application identity and behavior, automating policy creation, optimization, and enforcement across workloads. AI Defense discovers the various models being used in a customer’s AI development and uses four features to help customers enforce AI protection: AI access, AI cloud visibility, AI model and application validation, and AI runtime protection.

Cisco integrates Hybrid Mesh Firewall technology

On the security side, Cisco said it will embed its Hybrid Mesh Firewall technology to allow for security policy enforcement on Nvidia BlueField data processing units (DPUs) that are embedded in Nvidia GPU servers connected to Cisco Nexus One fabrics.
Cisco Hybrid Mesh Firewall offers a distributed security fabric

Read More »

Middle East war fosters concerns about physical data center security

The most common issue that Guidepost discusses with its clients is insider threats, which can come from anyone who is rightfully permitted into your data center. Data centers have very strict rules regarding the movement of visitors, but employees have largely free run of the place. “Insider threat could be someone simply putting a USB stick in a server or having access to a data device that they’re not supposed to,” he said. “A threat actor who could potentially cause harm within the facility, whether that’s mechanical, electrical, plumbing spaces or the data halls themselves, is our number one preventative item that we’re trying to thwart.”

When it comes to external threats, Guidepost watches for vehicle-borne IEDs and vehicle ramming, even if it’s accidental. That’s why data centers have high, anti-climb perimeter fences, multi-layered gates, and vehicle barriers that help keep unwanted vehicles away from the facility.

“It’s a lot of what we call Crime Prevention Through Environmental Design,” said Bekisz. “It’s a theory that we utilize in our industry for ensuring that we are detecting and thwarting individuals before they are willing to commit some type of offensive action or some type of unwanted behavior.” That includes simple things like getting the lighting right, or reducing the visibility of the data center through shrubs, trees, and berms, and using that in concert with physical preventative devices.

Drones are a growing problem, even if they are not being used in kamikaze attacks. Bekisz said the only thing you can do is put in drone detection, so you have some type of device in the air in the area of your facility, and then call for support from local emergency services.

Read More »

Palantir partners with Nvidia to streamline AI data center deployment

This collaboration grants enterprises full control over their data, AI models, and applications while supporting the use of open-source AI models and related data acceleration tools. The Palantir AI OS reference architecture is particularly critical for customers with existing GPU infrastructure, latency-sensitive workflows, data sovereignty requirements, and high geographic distribution.

“From our first deployment with the United States government and in every deployment since, our software has had to meet the moment in the most complex and sensitive environments where customers must maintain control,” said Akshay Krishnaswamy, Palantir’s chief architect, in a statement. “Together with Nvidia — and building on many customers’ existing investments — we are proud to deliver a fully integrated AI operating system that is optimized for Nvidia accelerated compute infrastructure and enables customers to realize the promise of on-premises, edge, and sovereign cloud deployments,” he added.

Sovereign AI is an emerging market that represents a country’s efforts to develop and maintain control of its own AI, using its own data, and keeping the data within its borders.

Read More »

Where OpenAI’s technology could show up in Iran

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. It’s been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI’s agreement allows for; Sam Altman said the military can’t use his company’s technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI’s other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious. It’s unclear what OpenAI’s motivations are. It’s not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it’s just about money; OpenAI is spending lots on AI training and is on the hunt for more revenue (from sources including ads). Or perhaps Altman truly believes the ideological framing he often invokes: that liberal democracies (and their militaries) must have access to the most powerful AI to compete with China. The more consequential question is what happens next. OpenAI has decided it is comfortable operating right in the messy heart of combat, just as the US escalates its strikes against Iran (with AI playing a larger role in that than ever before). So where exactly could OpenAI’s tech show up in this fight? And which applications will its customers (and employees) tolerate?
Targets and strikes

Though its Pentagon agreement is in place, it’s unclear when OpenAI’s technology will be ready for classified environments, since it must be integrated with other tools the military uses (Elon Musk’s xAI, which recently struck its own deal with the Pentagon, is expected to go through the same process with its AI model Grok). But there’s pressure to do this quickly because of controversy around the technology in use to date: After Anthropic refused to allow its AI to be used for “any lawful use,” President Trump ordered the military to stop using it, and Anthropic was designated a supply chain risk by the Pentagon. (Anthropic is fighting the designation in court.) If the Iran conflict is still underway by the time OpenAI’s tech is in the system, what could it be used for? A recent conversation I had with a defense official suggests it might look something like this: A human analyst could put a list of potential targets into the AI model and ask it to analyze the information and prioritize which to strike first. The model could account for logistics information, like where particular planes or supplies are located. It could analyze lots of different inputs in the form of text, image, and video.

A human would then be responsible for manually checking these outputs, the official said. But that raises an obvious question: If a person is truly double-checking AI’s outputs, how is it speeding up targeting and strike decisions? For years the military has been using another AI system, called Maven, which can handle things like automatically analyzing drone footage to identify possible targets. It’s likely that OpenAI’s models, like Anthropic’s Claude, will offer a conversational interface on top of that, allowing users to ask for interpretations of intelligence and recommendations for which targets to strike first. It’s hard to overstate how new this is: AI has long done analysis for the military, drawing insights out of oceans of data. But using generative AI’s advice about which actions to take in the field is being tested in earnest for the first time in Iran.

Drone defense

At the end of 2024, OpenAI announced a partnership with Anduril, which makes both drones and counter-drone technologies for the military. The agreement said OpenAI would work with Anduril to do time-sensitive analysis of drones attacking US forces and help take them down. An OpenAI spokesperson told me at the time that this didn’t violate the company’s policies, which prohibited “systems designed to harm others,” because the technology was being used to target drones and not people. Anduril provides a suite of counter-drone technologies to military bases around the world (though the company declined to tell me whether its systems are deployed near Iran). Neither company has provided updates on how the project has developed since it was announced. However, Anduril has long trained its own AI models to analyze camera footage and sensor data to identify threats; what it focuses less on are conversational AI systems that allow soldiers to query those systems directly or receive guidance in natural language—an area where OpenAI’s models may fit. The stakes are high. Six US service members were killed in Kuwait on March 1 following an Iranian drone attack that was not intercepted by US air defenses. Anduril’s interface, called Lattice, is where soldiers can control everything from drone defenses to missiles and autonomous submarines. And the company is winning massive contracts—$20 billion from the US Army just last week—to connect its systems with legacy military equipment and layer AI on them. If OpenAI’s models prove useful to Anduril, Lattice is designed to incorporate them quickly across this broader warfare stack.

Back-office AI

In December, Defense Secretary Pete Hegseth started encouraging millions of people in more administrative roles in the military—contracts, logistics, purchasing—to use a new AI tool. Called GenAI.mil, it provided a way for personnel to securely access commercial AI models and use them for the same sorts of things as anyone in the business world. Google Gemini was one of the first to be available. In January, the Pentagon announced that xAI’s Grok was going to be added to the GenAI.mil platform as well, despite incidents in which the model had spread antisemitic content and created nonconsensual deepfakes. OpenAI followed in February, with the company announcing that its models would be used for drafting policy documents and contracts and assisting with administrative support of missions. Anyone using ChatGPT for unclassified tasks on this platform is unlikely to have much sway over sensitive decisions in Iran, but the prospect of OpenAI deploying on the platform is important in another way. It serves the all-in attitude toward AI that Hegseth has been pushing relentlessly across the Pentagon (even if many early users aren’t entirely sure what they’re supposed to use it for). The message is that AI is transforming every aspect of how the US fights, from targeting decisions down to paperwork. And OpenAI is increasingly winning a piece of it all.

Read More »

Who’s in the data-center space race?

But not everyone is that optimistic. According to Gartner, space-based data centers won’t be useful for decades, so companies should focus on expanding capacity down here on Earth. “I honestly think the idea with the current landscape of putting data centers in space is ridiculous,” OpenAI CEO Sam Altman told The Indian Express in February.

Current satellite computing can’t easily scale to data centers, agrees Holger Mueller, an analyst at Constellation Research. “Weight is still the restriction,” he says. “It’s the equivalent of you buying a tablet or small laptop to travel across Latin America versus putting in a data center in the Amazon. Different power requirements, investment, totally different setup.” Then there are issues like damaged solar panels from meteorite storms and satellite debris, he adds. “You would have to pay for operational redundancy, which is further investment.” “Data centers will be built where they are affordable,” he says. “I don’t see space happening soon. Remember the Microsoft submerged one? Crickets…” But he agrees that solar power is nice, though the sun is only visible from one side of the planet at any given time. And space is cold, he says.

Cooling down in outer space

In fact, space is very cold. Close to absolute zero cold. But vacuum is also a great insulator, and there’s no air to move the heat around. “You can’t convect heat away,” says Richard Bonner, CTO at Accelsius, a liquid cooling company. Bonner has worked on NASA research projects about the challenge of cooling in space and is very familiar with the problem. A small proportion of the heat might be turned back into useful electricity, but that’s not really a solution, he says, because computer chips don’t get quite that hot. Instead, heat is radiated. When an object warms up, it generates
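The constraint Bonner describes can be sketched with the Stefan-Boltzmann law: with no air to convect heat away, a radiator in vacuum can only reject power proportional to the fourth power of its surface temperature. The figures below (a 1 MW heat load, a 330 K radiator, emissivity 0.9) are illustrative assumptions, not numbers from the article.

```python
# Ballpark radiator sizing for rejecting waste heat in vacuum,
# via the Stefan-Boltzmann law. All inputs are illustrative.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 * K^4)

def radiator_area(heat_load_w, radiator_temp_k, sink_temp_k=3.0, emissivity=0.9):
    """Radiator area (m^2) needed to radiate heat_load_w into a cold sink."""
    flux = emissivity * SIGMA * (radiator_temp_k**4 - sink_temp_k**4)  # W/m^2
    return heat_load_w / flux

# A hypothetical 1 MW compute cluster radiating at 330 K (~57 C coolant):
area = radiator_area(1_000_000, 330.0)  # roughly 1,650 m^2
```

Even under these generous assumptions, rejecting a single megawatt takes on the order of 1,600 square meters of radiator, which is why heat rejection, not just launch weight, is a central objection to space-based data centers.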

Read More »

Quantum Elements cuts quantum error rates using AI-powered digital twin

“That’s pretty clever, actually,” Sutor says. “It’s a little microwave pulse. That fixes some of the errors.” The Quantum Elements paper specifically addressed quantum error correction in IBM’s 127-qubit superconducting processor, but these techniques might also generalize to other types of quantum computers, Sutor says. And any improvement in error correction will bring usable quantum computers that much closer.

So will the other aspect of this announcement: the fact that the new error-correction technique was developed using Quantum Elements’ AI-powered, digital-twin-style quantum computer simulator, Constellation. Most quantum computer simulators allow people developing quantum applications to test them in ideal environments, but real quantum computers have errors and noise. Quantum Elements’ simulator models that noise, allowing developers to test in near-real-world conditions.

There are also other simulation platforms, including IBM’s Qiskit Aer and Quantinuum’s H-Series Emulator. According to Medalsy, the simulators from IBM and Quantinuum use simplified models that don’t reproduce all the noise. “Quantum Elements’ digital twin is aimed at hardware-faithful simulation at experiment scale,” he says. “It is designed to preserve the full noise signature, both coherent and incoherent.”
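As a toy illustration of the gap between ideal and noise-aware simulation, the sketch below applies a generic single-qubit depolarizing channel, a standard textbook noise model, to a density matrix. This is purely illustrative; it is not Quantum Elements’ Constellation, nor IBM’s actual device noise model.

```python
# Toy noise-aware simulation: a depolarizing channel mixes the qubit's
# density matrix toward the maximally mixed state I/2 with probability p.
def depolarize(rho, p):
    """Apply depolarizing noise to a 2x2 density matrix (nested lists)."""
    identity_half = [[0.5, 0.0], [0.0, 0.5]]  # maximally mixed state I/2
    return [[(1 - p) * rho[i][j] + p * identity_half[i][j] for j in range(2)]
            for i in range(2)]

# Start in the ideal |0><0| state, then apply 10% depolarizing noise
# after each of three simulated gates:
rho = [[1.0, 0.0], [0.0, 0.0]]
for _ in range(3):
    rho = depolarize(rho, 0.1)
fidelity = rho[0][0]  # probability of still measuring |0>: 0.8645
```

An ideal simulator would report fidelity 1.0 after the same three gates; a noise-aware one shows it decaying toward the mixed state, which is the behavior developers need to test their error-correction schemes against.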

Read More »

TotalEnergies starts production from Lapa Southwest project offshore Brazil

TotalEnergies started oil production from the Lapa Southwest project in the Santos basin, about 300 km offshore Brazil. Development of Lapa Southwest consists of a subsea tieback of three wells to the existing Lapa floating production, storage, and offloading (FPSO) unit. The project will increase oil production from Lapa field by 25,000 b/d at plateau and bring the total output of the field to 60,000 b/d. Production startup follows that of Mero-4 in May 2025 and comes ahead of the startups of Atapu-2 and Sépia-2 expected in 2029. TotalEnergies is operator at Lapa Southwest with a 48% interest. Partners are Shell (27%) and Repsol Sinopec Brazil (25%).

Read More »

The Iran war: Regional geopolitics, oil, and natural gas

In this bonus episode of the Oil & Gas Journal ReEnterprised podcast, Head of Content Chris Smith is joined by Jim Krane, the Diana Tamari Sabbagh Fellow in Middle East Energy Studies and Center for Energy Studies Lead for Energy and Geopolitics in the Middle East at Rice University’s Baker Institute for Public Policy. The two discuss the regional political forces shaping the Iran war so far, exactly how vulnerable the Strait of Hormuz is, and—shifting inland—what’s in it for the Kurds.

Read More »

Infinity more likely to add frac crews than third rig in 2026

Infinity Natural Resources Inc., Morgantown, plans to add a second rig to its operations this spring as it builds on the December acquisition of some Ohio Utica assets from Antero Resources. But executives said Mar. 11 they’re more likely to add fracturing crews than a third rig this year should oil prices stay at higher levels. Infinity, led by president and chief executive officer Zack Arnold, 3 months ago paid roughly $1.2 billion for upstream and midstream assets in Ohio that peers at Antero were divesting as part of their purchase of HG Energy II assets in the Marcellus basin. In the second quarter, Infinity will begin operating a rig in the former Antero footprint and Arnold said he expects the company will maintain that count for the rest of the year. “We are cognizant of our portfolio and the returns that we have,” Arnold said on a conference call discussing Infinity’s fourth-quarter results and 2026 outlook. “We’re probably more likely to maybe consider additional frac crews […] than drilling rigs at this stage. But it’s difficult to say […] Three weeks ago, oil prices were a little bit different.” Arnold said the team has flexibility “to do the right things” should commodity prices support further investment and could tweak the oil-natural gas priorities in a 2026 plan that today calls for 72% of drilling activity to be in Ohio with the remainder in Pennsylvania. The goal is to turn in line about 30 wells (gross) this year—10 of which will be at the recently acquired assets—and have 2026 total net production grow to 345-375 MMcfed, of which liquids and oil will be 18,000-20,000 b/d. In the fourth quarter, that figure was 272 MMcfed.
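A quick back-of-envelope check of the growth implied by that guidance, using only the figures quoted above (variable names are mine):

```python
# Growth implied by Infinity's 2026 guidance versus the latest fourth quarter.
q4_net_production = 272           # most recent fourth quarter, MMcfed
guide_low, guide_high = 345, 375  # 2026 total net production guidance, MMcfed

growth_low = 100 * (guide_low / q4_net_production - 1)
growth_high = 100 * (guide_high / q4_net_production - 1)
print(f"Implied growth over Q4: {growth_low:.0f}% to {growth_high:.0f}%")  # 27% to 38%
```

In other words, the midpoint of the guidance range implies roughly one-third more net production than the most recent quarter.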

Read More »

Oil tops $100 on intensifying Iran war

Oil prices have climbed above $100/bbl for the first time since 2022 as the escalating US-Iran war threatens critical energy flows through the Middle East. Hopes for a near-term de-escalation faded on Friday after US President Trump stated that only an unconditional surrender would be acceptable, heightening concerns that the conflict could become prolonged. Tensions intensified further on Monday, Mar. 9, when Iran launched new attacks on Israel and several Gulf states just hours after declaring Mojtaba Khamenei as the country’s new supreme leader. Analysts warn that a sustained disruption of shipments through the Strait of Hormuz could trigger a severe tightening of global crude supplies and send prices significantly higher. Vikas Dwivedi, global energy strategist at Macquarie Group, said the market could move rapidly into a supply-shock environment if hostilities continue without a diplomatic resolution. “In our analysis, a few weeks of Hormuz closure will create a domino effect of events that could push crude to $150/bbl or higher,” Dwivedi said in a market note. Dwivedi added that without a ceasefire or negotiated agreement, the global oil market could begin to “break in days rather than weeks or months,” as supply disruptions cascade through the region’s production and export systems. Although the Strait of Hormuz has effectively become inaccessible for many tankers due to escalating security risks, Middle East loadings have so far remained relatively resilient. However, reports of production shut-ins have begun to emerge across parts of the region, including Iraq, Kuwait, and Qatar (LNG). If the disruption persists, broader waves of production curtailments could unfold over the coming week. Macquarie analysts believe the final cuts would occur in Saudi Arabia roughly 20 days from now.
Shipping risks, oil and beyond
Security risks around the Strait of Hormuz remain a major constraint on shipping. Even with elevated insurance

Read More »

A week unlike any other for crude prices

Oil, fundamental analysis
The price of WTI settled last Friday at $90.90/bbl, already $14/bbl higher than the Friday before the US and Israel began their attacks on Iran. With the conflict continuing last weekend, Iran continued to pummel its neighboring petro-states and threatened any ships attempting to pass through the Strait of Hormuz. So, when the oil markets re-opened Sunday night, a lot of pent-up anxiety turned into immediate buying of April NYMEX WTI futures, with the Open hitting $98/bbl and leading to a session High of $119.50/bbl. Trading moderated somewhat during Monday’s regular hours, ending in a closing price of $94.80/bbl, but only after a wild day that saw a Low of $81.20/bbl, a daily Hi/Lo range of more than $38. The week’s Low was $76.75/bbl on Tuesday, amid what were seen at the time as numerous positive signs that the Strait of Hormuz would reopen. That didn’t happen. Brent crude followed a similar pattern, hitting a High of $119.50/bbl on Sunday evening and a Low of $81.15/bbl on Tuesday. Both contracts settled higher week-on-week. The WTI/Brent spread fluctuated throughout the week but now sits at ($4.80). Neither the International Energy Agency (IEA)-announced reserve release nor a gain in US crude inventories could halt the ongoing rally. The Strait of Hormuz remains the key issue impacting global oil prices, with conflicting reports throughout the media coverage. President Trump said the US Navy would escort ships if needed, while Energy Secretary Wright stated that the US Navy was too involved in the actual conflict with Iran to perform such duties. Secretary of Defense Hegseth stated Friday that the Strait of Hormuz was “open” for ships wishing to pass unless Iran fires upon them, which the latter has explicitly threatened to do. The US Central

Read More »

Energy Department Approves Immediate Additional LNG Exports from Plaquemines LNG

WASHINGTON—U.S. Secretary of Energy Chris Wright today authorized an immediate 13% increase in exports at Venture Global’s Plaquemines liquefied natural gas (LNG) Terminal in Louisiana. Today’s signed export authorization allows additional exports of up to 0.45 billion cubic feet per day (Bcf/d) of U.S. natural gas as LNG to non-free trade agreement (FTA) countries from the Plaquemines LNG Terminal. With today’s order, Plaquemines LNG is now authorized to immediately export a total of 3.85 Bcf/d to both FTA and non-FTA countries, strengthening global natural gas supplies with reliable American LNG. “At a time when Iran and its terrorist proxies attempt to disrupt the global energy supply, the Trump Administration remains committed to strengthening American energy dominance,” said Secretary Wright. “Thanks to President Trump and American innovators, the U.S. is not only the largest producer and exporter of LNG but will more than double its LNG exports in the coming years. We will see meaningful additions to U.S. LNG export capacity at Plaquemines immediately and other facilities commencing operations in future weeks and months.” “Our mission to enable secure, reliable, and affordable energy has never been more important than now,” said Kyle Haustveit, Assistant Secretary of the Hydrocarbons and Geothermal Energy Office. “I am pleased that DOE can take this action to be able to make an immediate difference to help add to global supplies of LNG.” Plaquemines LNG commenced exports in December 2024 and has rapidly been able to increase its export levels to over 3 Bcf/d. This authorization will allow for an immediate increase in the volumes of LNG that Plaquemines LNG can export to non-FTA countries, which import the majority of U.S. LNG. Thanks to President Trump’s leadership and American innovation, the United States is the world’s largest natural gas producer and exporter. Since the President ended the

Read More »

West of Orkney developers helped support 24 charities last year

The developers of the 2GW West of Orkney wind farm paid out a total of £18,000 to 24 organisations from their small donations fund in 2024. The money went to projects across Caithness, Sutherland and Orkney, including a mental health initiative in Thurso and a scheme by Dunnet Community Forest to improve the quality of meadows through the use of traditional scythes. Established in 2022, the fund offers up to £1,000 per project towards programmes in the far north. In addition to the small donations fund, the West of Orkney developers intend to follow other wind farms by establishing a community benefit fund once the project is operational. West of Orkney wind farm project director Stuart McAuley said: “Our donations programme is just one small way in which we can support some of the many valuable initiatives in Caithness, Sutherland and Orkney. “In every case we have been immensely impressed by the passion and professionalism each organisation brings, whether their focus is on sport, the arts, social care, education or the environment, and we hope the funds we provide help them achieve their goals.” In addition to the local donations scheme, the wind farm developers have helped fund a £1 million research and development programme led by EMEC in Orkney and a £1.2m education initiative led by UHI. They also provided £50,000 to support the FutureSkills apprenticeship programme in Caithness, with funds going to employment and training costs to help tackle skill shortages in the North of Scotland. The West of Orkney wind farm is being developed by Corio Generation, TotalEnergies and Renewable Infrastructure Development Group (RIDG). The project is among the leaders of the ScotWind cohort, having been the first to submit its offshore consent documents in late 2023. In addition, the project’s onshore plans were approved by the

Read More »

Biden bans US offshore oil and gas drilling ahead of Trump’s return

US President Joe Biden has announced a ban on offshore oil and gas drilling across vast swathes of the country’s coastal waters. The decision comes just weeks before his successor Donald Trump, who has vowed to increase US fossil fuel production, takes office. The drilling ban will affect 625 million acres of federal waters across America’s eastern and western coasts, the eastern Gulf of Mexico and Alaska’s Northern Bering Sea. The decision does not affect the western Gulf of Mexico, where much of American offshore oil and gas production occurs and is set to continue. In a statement, President Biden said he is taking action to protect the regions “from oil and natural gas drilling and the harm it can cause”. “My decision reflects what coastal communities, businesses, and beachgoers have known for a long time: that drilling off these coasts could cause irreversible damage to places we hold dear and is unnecessary to meet our nation’s energy needs,” Biden said. “It is not worth the risks. “As the climate crisis continues to threaten communities across the country and we are transitioning to a clean energy economy, now is the time to protect these coasts for our children and grandchildren.”
Offshore drilling ban
The White House said Biden used his authority under the 1953 Outer Continental Shelf Lands Act, which allows presidents to withdraw areas from mineral leasing and drilling. However, the law does not give a president the right to unilaterally reverse a drilling ban without congressional approval. This means that Trump, who pledged to “unleash” US fossil fuel production during his re-election campaign, could find it difficult to overturn the ban after taking office.
[Image: Sunset shot of the Shell Olympus platform in the foreground and the Shell Mars platform in the background in the Gulf of Mexico]

Read More »

The Download: our 10 Breakthrough Technologies for 2025

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Introducing: MIT Technology Review’s 10 Breakthrough Technologies for 2025
Each year, we spend months researching and discussing which technologies will make the cut for our 10 Breakthrough Technologies list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more.
We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. It’s hard to think of another industry that has as much of a hype machine behind it as tech does, so the real secret of the TR10 is really what we choose to leave off the list.
Check out the full list of our 10 Breakthrough Technologies for 2025, which is front and center in our latest print issue. It’s all about the exciting innovations happening in the world right now, and includes some fascinating stories, such as:
+ How digital twins of human organs are set to transform medical treatment and shake up how we trial new drugs.
+ What will it take for us to fully trust robots? The answer is a complicated one.
+ Wind is an underutilized resource that has the potential to steer the notoriously dirty shipping industry toward a greener future. Read the full story.
+ After decades of frustration, machine-learning tools are helping ecologists to unlock a treasure trove of acoustic bird data—and to shed much-needed light on their migration habits. Read the full story.
+ How poop could help feed the planet—yes, really. Read the full story.
Roundtables: Unveiling the 10 Breakthrough Technologies of 2025
Last week, Amy Nordrum, our executive editor, joined our news editor Charlotte Jee to unveil our 10 Breakthrough Technologies of 2025 in an exclusive Roundtable discussion. Subscribers can watch their conversation back here. And, if you’re interested in previous discussions about topics ranging from mixed reality tech to gene editing to AI’s climate impact, check out some of the highlights from the past year’s events.
This international surveillance project aims to protect wheat from deadly diseases
For as long as there’s been domesticated wheat (about 8,000 years), there has been harvest-devastating rust. Breeding efforts in the mid-20th century led to rust-resistant wheat strains that boosted crop yields, and rust epidemics receded in much of the world.
But now, after decades, rusts are considered a reemerging disease in Europe, at least partly due to climate change. An international initiative hopes to turn the tide by scaling up a system to track wheat diseases and forecast potential outbreaks to governments and farmers in close to real time. And by doing so, they hope to protect a crop that supplies about one-fifth of the world’s calories. Read the full story.
—Shaoni Bhattacharya

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Meta has taken down its creepy AI profiles
Following a big backlash from unhappy users. (NBC News)
+ Many of the profiles were likely to have been live from as far back as 2023. (404 Media)
+ It also appears they were never very popular in the first place. (The Verge)
2 Uber and Lyft are racing to catch up with their robotaxi rivals
After abandoning their own self-driving projects years ago. (WSJ $)
+ China’s Pony.ai is gearing up to expand to Hong Kong. (Reuters)
3 Elon Musk is going after NASA
He’s largely veered away from criticising the space agency publicly—until now. (Wired $)
+ SpaceX’s Starship rocket has a legion of scientist fans. (The Guardian)
+ What’s next for NASA’s giant moon rocket? (MIT Technology Review)
4 How Sam Altman actually runs OpenAI
Featuring three-hour meetings and a whole lot of Slack messages. (Bloomberg $)
+ ChatGPT Pro is a pricey loss-maker, apparently. (MIT Technology Review)
5 The dangerous allure of TikTok
Migrants’ online portrayals of their experiences in America aren’t always reflective of their realities. (New Yorker $)
6 Demand for electricity is skyrocketing
And AI is only a part of it. (Economist $)
+ AI’s search for more energy is growing more urgent. (MIT Technology Review)
7 The messy ethics of writing religious sermons using AI
Skeptics aren’t convinced the technology should be used to channel spirituality. (NYT $)
8 How a wildlife app became an invaluable wildfire tracker
Watch Duty has become a safeguarding sensation across the US west. (The Guardian)
+ How AI can help spot wildfires. (MIT Technology Review)
9 Computer scientists just love oracles 🔮
Hypothetical devices are a surprisingly important part of computing. (Quanta Magazine)
10 Pet tech is booming 🐾
But not all gadgets are made equal. (FT $)
+ These scientists are working to extend the lifespan of pet dogs—and their owners. (MIT Technology Review)
Quote of the day
“The next kind of wave of this is like, well, what is AI doing for me right now other than telling me that I have AI?”
—Anshel Sag, principal analyst at Moor Insights and Strategy, tells Wired a lot of companies’ AI claims are overblown.
The big story
Broadband funding for Native communities could finally connect some of America’s most isolated places
September 2022
Rural and Native communities in the US have long had lower rates of cellular and broadband connectivity than urban areas, where four out of every five Americans live. Outside the cities and suburbs, which occupy barely 3% of US land, reliable internet service can still be hard to come by.
The covid-19 pandemic underscored the problem as Native communities locked down and moved school and other essential daily activities online. But it also kicked off an unprecedented surge of relief funding to solve it. Read the full story.
—Robert Chaney
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Rollerskating Spice Girls is exactly what your Monday morning needs.
+ It’s not just you, some people really do look like their dogs!
+ I’m not sure if this is actually the world’s healthiest meal, but it sure looks tasty.
+ Ah, the old “bitten by a rabid fox” chestnut.

Read More »

Equinor Secures $3 Billion Financing for US Offshore Wind Project

Equinor ASA has announced a final investment decision on Empire Wind 1 and financial close for $3 billion in debt financing for the under-construction project offshore Long Island, expected to power 500,000 New York homes. The Norwegian majority state-owned energy major said in a statement it intends to farm down ownership “to further enhance value and reduce exposure”. Equinor has taken full ownership of Empire Wind 1 and 2 since last year, in a swap transaction with 50 percent co-venturer BP PLC that allowed the former to exit the Beacon Wind lease, also a 50-50 venture between the two. Equinor has yet to complete a portion of the transaction under which it would also acquire BP’s 50 percent share in the South Brooklyn Marine Terminal lease, according to the latest transaction update on Equinor’s website. The lease involves a terminal conversion project that was intended to serve as an interconnection station for Beacon Wind and Empire Wind, as agreed on by the two companies and the state of New York in 2022.  “The expected total capital investments, including fees for the use of the South Brooklyn Marine Terminal, are approximately $5 billion including the effect of expected future tax credits (ITCs)”, said the statement on Equinor’s website announcing financial close. Equinor did not disclose its backers, only saying, “The final group of lenders includes some of the most experienced lenders in the sector along with many of Equinor’s relationship banks”. “Empire Wind 1 will be the first offshore wind project to connect into the New York City grid”, the statement added. “The redevelopment of the South Brooklyn Marine Terminal and construction of Empire Wind 1 will create more than 1,000 union jobs in the construction phase”, Equinor said. On February 22, 2024, the Bureau of Ocean Energy Management (BOEM) announced

Read More »

USA Crude Oil Stocks Drop Week on Week

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 1.2 million barrels from the week ending December 20 to the week ending December 27, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report, which was released on January 2. Crude oil stocks, excluding the SPR, stood at 415.6 million barrels on December 27, 416.8 million barrels on December 20, and 431.1 million barrels on December 29, 2023, the report revealed. Crude oil in the SPR came in at 393.6 million barrels on December 27, 393.3 million barrels on December 20, and 354.4 million barrels on December 29, 2023, the report showed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.623 billion barrels on December 27, the report revealed. This figure was up 9.6 million barrels week on week and up 17.8 million barrels year on year, the report outlined. “At 415.6 million barrels, U.S. crude oil inventories are about five percent below the five year average for this time of year,” the EIA said in its latest report. “Total motor gasoline inventories increased by 7.7 million barrels from last week and are slightly below the five year average for this time of year. Finished gasoline inventories decreased last week while blending components inventories increased last week,” it added. “Distillate fuel inventories increased by 6.4 million barrels last week and are about six percent below the five year average for this time of year. Propane/propylene inventories decreased by 0.6 million barrels from last week and are 10 percent above the five year average for this time of year,” it went on to state. In the report, the EIA noted

Read More »

More telecom firms were breached by Chinese hackers than previously reported

Broader implications for US infrastructure
The Salt Typhoon revelations follow a broader pattern of state-sponsored cyber operations targeting the US technology ecosystem. The telecom sector, serving as a backbone for industries including finance, energy, and transportation, remains particularly vulnerable to such attacks. While Chinese officials have dismissed the accusations as disinformation, the recurring breaches underscore the pressing need for international collaboration and policy enforcement to deter future attacks. The Salt Typhoon campaign has uncovered alarming gaps in the cybersecurity of US telecommunications firms, with breaches now extending to over a dozen networks. Federal agencies and private firms must act swiftly to mitigate risks as adversaries continue to evolve their attack strategies. Strengthening oversight, fostering industry-wide collaboration, and investing in advanced defense mechanisms are essential steps toward safeguarding national security and public trust.

Read More »

From games to biology and beyond: 10 years of AlphaGo’s impact

Catalyzing breakthroughs in science
By proving it could navigate the massive search space of a Go board, AlphaGo demonstrated the potential for AI to help us better understand the vast complexities of the physical world. We started by attempting to solve the protein folding problem, a 50-year grand challenge of predicting the 3D structure of proteins – information that is crucial for understanding diseases and developing new drugs.
In 2020, we finally cracked this longstanding scientific problem with our AlphaFold 2 system. From there, we folded the structures for all 200 million proteins known to science and made them freely available to scientists in an open-source database. Today, over 3 million researchers around the world use the AlphaFold database to accelerate their important work on everything from malaria vaccines to plastic-eating enzymes. And in 2024, it was the honor of a lifetime for John Jumper and me to be awarded the Nobel Prize in Chemistry for leading this project, on behalf of the entire AlphaFold team.
Since AlphaGo’s win, we’ve applied its groundbreaking approach to many other areas of science and mathematics, including:
Mathematical reasoning: The most direct descendant of AlphaGo’s architecture, AlphaProof learned to prove formal mathematical statements using a combination of language models and AlphaZero’s reinforcement learning and search algorithms. Alongside AlphaGeometry 2, it became the first system to achieve a medal standard (silver) at the International Mathematical Olympiad (IMO), proving AlphaGo’s methods could unlock advanced mathematical reasoning and laying the foundation for our most capable general models.
Gemini, our largest and most capable model, recently went even further. An advanced version of its Deep Think mode achieved gold-medal-level performance at the 2025 IMO using an approach inspired by AlphaGo.
Since then, Deep Think has been applied to even more complex, open-ended challenges across science and engineering.
Algorithm discovery: Just as AlphaGo searched for the best move in a game, our coding agent AlphaEvolve explores the space of computer code to discover more efficient algorithms. It had its own Move 37 moment when it found a novel way to multiply matrices, a fundamental mathematical operation powering nearly all modern neural networks. AlphaEvolve is now being tested on problems ranging from data center optimization to quantum computing.
Scientific collaboration: We are integrating the search and reasoning principles pioneered with AlphaGo into an AI co-scientist. By having agents ‘debate’ scientific ideas and hypotheses, this system acts as a collaborator capable of performing the rigorous thinking necessary to identify patterns in data and solve sophisticated problems. In validation studies at Imperial College London, it analyzed decades of literature and independently arrived at the same hypothesis about antimicrobial resistance that researchers had spent years developing and validating experimentally.
We’ve also used AI to better understand the genome, advance fusion energy research, improve weather prediction and more.
As impressive as our scientific models are, they are highly specialized. To achieve fundamental breakthroughs like creating limitless clean energy or solving diseases that we don’t understand today, we need general AI systems that can find underlying structure and connections between different subject areas, and help us to come up with new hypotheses like the best scientists do.
Future of intelligence
For an AI to be truly general, it needs to understand the physical world.
We built Gemini to be multimodal from the beginning so it could understand not just language, but also audio, video, images and code to build a model of the world.
To think and reason across these modalities, the latest Gemini models use some of the techniques we pioneered with AlphaGo and AlphaZero.
The next generation of AI systems will also need to be able to call upon specialized tools. For example, if a model needed to know the structure of a protein, it could use AlphaFold for that.
We think the combination of Gemini’s world models, AlphaGo’s search and planning techniques, and specialized AI tool use will prove to be critical for AGI.
True creativity is a key capability that such an AGI system would need to exhibit. Move 37 was a glimpse of AI’s potential to think outside the box, but true original invention will require something more. It would need to not only come up with a novel Go strategy, as AlphaGo impressively did, but actually invent a game as deep and elegant, and as worthy of study as Go.
Ten years after AlphaGo’s legendary victory, our ultimate goal is on the horizon. The creative spark first seen in Move 37 catalyzed breakthroughs that are now converging to pave the path towards AGI – and usher in a new golden age of scientific discovery.
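The search and planning techniques referenced above descend from the move-selection rule at the heart of AlphaGo-style Monte Carlo tree search. Below is a minimal sketch of that rule, often called PUCT (mean observed value plus a bonus that favors moves the policy prior likes but the search has visited little). The moves, visit counts and priors are invented for illustration; this is not DeepMind's code.

```python
import math

def puct_score(total_value, visits, parent_visits, prior, c_puct=1.5):
    """PUCT: exploitation term (mean value) plus prior-weighted exploration."""
    q = total_value / visits if visits else 0.0
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + visits)
    return q + u

# Invented statistics for three candidate moves at one search node.
children = {
    "move_a": {"value": 6.0, "visits": 10, "prior": 0.5},
    "move_b": {"value": 1.0, "visits": 1,  "prior": 0.3},
    "move_c": {"value": 0.0, "visits": 0,  "prior": 0.2},
}
parent_visits = sum(c["visits"] for c in children.values())
best = max(children, key=lambda m: puct_score(
    children[m]["value"], children[m]["visits"], parent_visits, children[m]["prior"]))
print(best)  # move_b: a strong observed value outweighs the exploration bonuses
```

Repeating this selection thousands of times, and backing up the values discovered at the leaves, is what let AlphaGo concentrate its search on promising lines, including unexpected ones like Move 37.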

Read More »

How Pokémon Go is helping robots deliver pizza on time

Pokémon Go was the world’s first augmented-reality megahit. Released in 2016 by the Google spinout Niantic, the AR twist on the juggernaut Pokémon franchise fast became a global phenomenon. From Chicago to Oslo to Enoshima, players hit the streets in the urgent hope of catching a Jigglypuff or a Squirtle or (with a huge amount of luck) an ultra-rare Galarian Zapdos hovering just out of reach, superimposed on the everyday world. In short, we’re talking about a huge number of people pointing their phones at a huge number of buildings. “Five hundred million people installed that app in 60 days,” says Brian McClendon, CTO at Niantic Spatial, an AI company that Niantic spun out in May last year. According to the video-game firm Scopely, which bought Pokémon Go from Niantic at the same time, the game still drew more than 100 million players in 2024, eight years after it launched.  Now Niantic Spatial is using that vast and unparalleled trove of crowdsourced data—images of urban landmarks tagged with super-accurate location markers taken from the phones of hundreds of millions of Pokémon Go players around the world—to build a kind of world model, a buzzy new technology that grounds the smarts of LLMs in real environments.  The company’s latest product is a model that it says can pinpoint your location on a map to within a few centimeters, based on a handful of snapshots of the buildings or other landmarks in view. The firm wants to use it to help robots navigate with greater precision in places where GPS is unreliable.
In the first big test of its technology, Niantic Spatial has just teamed up with Coco Robotics, a startup that deploys last-mile delivery robots in a number of cities across the US and Europe. “Everybody thought that AR was the future, that AR glasses were coming,” says McClendon. “And then robots became the audience.” From Pikachu to pizza delivery Coco Robotics deploys around 1,000 flight-case-size robots—built to carry up to eight extra-large pizzas or four grocery bags—in Los Angeles, Chicago, Jersey City, Miami, and Helsinki. According to CEO Zach Rash, the robots have made more than half a million deliveries to date, covering a few million miles in all weather conditions.
But to compete with human couriers, Coco’s robots, which trundle along sidewalks at around five miles per hour, must be as reliable as possible. “The best way we can do our job is by arriving exactly when we told you we were going to arrive,” says Rash. And that means not getting lost. The problem Coco faces is that it cannot rely on GPS, which can be weak in cities because radio signals bounce off buildings and interfere with each other. “We do deliveries in a lot of dense areas with high-rises and underpasses and freeways, and those are the areas where GPS just never really works,” says Rash.  “The urban canyon is the worst place in the world for GPS,” says McClendon. “If you look at that blue dot on your phone, you’ll often see it drift 50 meters, which puts you on a different block going a different direction on the wrong side of the street.” That’s where Niantic Spatial comes in.  For the last few years, Niantic Spatial has been taking the data collected from players of Pokémon Go and Ingress (Niantic’s previous phone-based AR game, launched in 2013) and building a visual positioning system, technology that tells you where you are based on what you can see. “It turns out that getting Pikachu to realistically run around and getting Coco’s robot to safely and accurately move through the world is actually the same problem,” says John Hanke, CEO of Niantic Spatial. “Visual positioning is not a very new technology,” says Konrad Wenzel at ESRI, a company that develops digital mapping and geospatial analysis software. “But it’s obvious that the more cameras we have out there, the better it becomes.”  Niantic Spatial has trained its model on 30 billion images captured in urban environments. In particular, the images are clustered around hot spots—places that served as important locations in Niantic’s games that players were encouraged to visit, such as Pokémon battle arenas. 
“We had a million-plus locations around the world where we can locate you precisely,” says McClendon. “We know where you’re standing within several centimeters of accuracy and, most importantly, where you’re looking.” The upshot is that for each of those million locations, Niantic Spatial has many thousands of images taken in more or less the same place but from different angles, at different times of day, and in different weather conditions. Each of those images comes with detailed metadata that pinpoints where in space the phone was at the time it captured the image, including which way the phone was facing, which way up it was, whether or not it was moving, how fast and in which direction, and more. The firm has used this data set to train a model that can predict exactly where a camera is from what it sees—even for locations other than those million hot spots, where good sources of image and location data are scarcer.
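Niantic Spatial’s actual model is proprietary, but the core idea of the paragraph above—match what a camera sees against a database of geo-tagged views and borrow the pose of the closest matches—can be sketched in a toy form. Everything here (descriptor sizes, database, poses) is illustrative, not Niantic’s system:

```python
import numpy as np

# Toy database: each entry pairs a global image descriptor with the pose
# (x, y, heading) of the camera that captured it. A real visual positioning
# system would use learned descriptors and geometric pose refinement.
rng = np.random.default_rng(0)
db_descriptors = rng.normal(size=(1000, 128))       # 1,000 mapped views
db_poses = rng.uniform(-100, 100, size=(1000, 3))   # x, y, heading per view

def localize(query_descriptor, k=5):
    """Estimate a pose by averaging the poses of the k most similar views."""
    # Cosine similarity between the query and every database descriptor
    q = query_descriptor / np.linalg.norm(query_descriptor)
    d = db_descriptors / np.linalg.norm(db_descriptors, axis=1, keepdims=True)
    sims = d @ q
    nearest = np.argsort(sims)[-k:]                 # indices of the top-k matches
    return db_poses[nearest].mean(axis=0)

# A query taken near database view 42 should localize close to that view's pose.
query = db_descriptors[42] + rng.normal(scale=0.01, size=128)
print(localize(query, k=1))
```

The sketch captures why more cameras help, as Wenzel notes: every added geo-tagged image densifies the database that queries are matched against.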

In addition to GPS, Coco’s robots, which are fitted with four cameras, will now use this model to try to figure out where they are and where they are headed. The robots’ cameras are hip-height and point in all directions at once, so their viewpoint is a little different from a Pokémon Go player’s, but adapting the data was straightforward, says Rash.

Rival companies use visual positioning systems too. For example, Starship Technologies, a robot delivery firm founded in Estonia in 2014, says its robots use their sensors to build a 3D map of their surroundings, plotting the edges of buildings and the position of streetlights. But Rash is betting that Niantic Spatial’s tech will give Coco an edge. He claims it will allow his robots to position themselves in the correct pickup spots outside restaurants, making sure they don’t get in anybody’s way, and stop just outside the customer’s door instead of a few steps away, which might have happened in the past.

A Cambrian explosion in robotics

When Niantic Spatial started work on its visual positioning system, the idea was to apply it to augmented reality, says Hanke. “If you are wearing AR glasses and you want the world to lock in to where you’re looking, then you need some method for doing that,” he says. “But now we’re seeing a Cambrian explosion in robotics.” Some of those robots may need to share spaces with humans—spaces such as construction sites and sidewalks. “If robots are ever going to assimilate into that environment in a way that’s not disruptive for human beings, they’re going to have to have a similar level of spatial understanding,” says Hanke. “We can help robots find exactly where they are when they’ve been jostled and bumped.” The Coco Robotics partnership is the start. What Niantic Spatial is putting in place, says Hanke, are the first pieces of what he calls a living map: a hyper-detailed virtual simulation of the world that changes as the world changes.
As robots from Coco and other firms move about the world, they will provide new sources of map data, feeding into more and more detailed digital replicas of the world.  But the way Hanke and McClendon see it, maps are not only becoming more detailed; they are being used more and more by machines. That shifts what maps are for. Maps have long been used to help people locate themselves in the world. As they moved from 2D to 3D to 4D (think of real-time simulations, such as digital twins), the basic principle hasn’t changed: Points on the map correspond to points in space or time. And yet maps for machines may need to become more like guidebooks, full of information that humans take for granted. Companies like Niantic Spatial and ESRI want to add descriptions that tell machines what they’re actually looking at, with every object tagged with a list of its properties. “This era is about building useful descriptions of the world for machines to comprehend,” says Hanke. “The data that we have is a great starting point in terms of building up an understanding of how the connective tissue of the world works.” There is a lot of buzz about world models right now—and Niantic Spatial knows it. LLMs may seem like know-it-alls, but they have very little common sense when it comes to interpreting and interacting with everyday environments. World models aim to fix that. Some firms, such as Google DeepMind and World Labs, are developing models that generate virtual fantasy worlds on the fly, which can then be used as training dojos for AI agents.  Niantic Spatial says it is coming at the problem from a different angle. Push map-making far enough and you’ll end up capturing everything, says McClendon: “I’m very focused on trying to re-create the real world. We’re not there yet, but we want to be there.”


The Download: AI’s role in the Iran war, and an escalating legal fight

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How AI is turning the Iran conflict into theater

Much of the spotlight on AI in the Iran conflict has focused on models like Claude helping the US military decide where to strike. But a wave of “vibe-coded” intelligence dashboards—and the ecosystem surrounding them—reflects a new role that AI is playing in wartime: mediating information, often for the worse. These sorts of intelligence tools have much promise. Yet there are real reasons to be suspicious of their data feeds. Read the full story.

—James O’Donnell
This story is from The Algorithm, our weekly newsletter on AI. Sign up to receive it in your inbox every Monday.

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Anthropic has sued the US government
The AI firm wants to stop the Pentagon from blacklisting it. (Reuters)
+ The White House is preparing a new executive order to weed out the company’s technology. (Axios)
+ Defense experts are alarmed. (CNBC)
+ Google and OpenAI staff have filed a legal brief backing Anthropic against Trump. (Wired $)
+ The company’s stance won many supporters. (MIT Technology Review)

2 GPS jamming has become a crucial battleground in the Middle East
The interference is endangering—and protecting—ships and planes. (BBC)
+ Signal jamming has made navigating the Strait of Hormuz even more difficult. (Bloomberg)
+ Quantum navigation offers a potential solution. (MIT Technology Review)

3 A tech journalist found his AI clone editing for Grammarly
It’s providing AI-generated feedback “inspired by” real writers without their consent. (Platformer)
+ Could ChatGPT do the jobs of journalists and copywriters? (MIT Technology Review)

4 Nvidia plans to launch an open-source platform for AI agents
It’s already pitching the “NemoClaw” product to enterprise software firms. (Wired $)
+ But don’t let the AI agents hype get ahead of reality. (MIT Technology Review)

5 A startup wants to launch a space mirror that reflects sunlight onto Earth
Reflect Orbital reckons it could power solar panels at night. Scientists are appalled. (NYT)

6 Yann LeCun’s AI startup has raised over $1bn in Europe’s largest seed round
Meta’s former chief AI scientist plans to build systems that “understand the world.” (Bloomberg)

7 Hinge’s CEO insists the app doesn’t rate users’ attractiveness
Jackie Jantos’ strategy has helped Hinge defy the decline in dating apps. (FT $)
+ AI companions are stealing hearts—and it’s getting weird. (New Yorker $)
+ It’s surprisingly easy to fall into a relationship with a chatbot. (MIT Technology Review)

8 “AI psychosis” could be afflicting your loved ones
If so, here’s how you can help them. (404 Media)
+ One solution: AI should be able to “hang up” on you. (MIT Technology Review)

9 Nintendo is suing Trump over illegal tariffs
The gaming giant has joined a lawsuit seeking over $200 billion in refunds. (Ars Technica)

10 Bio-tech is turning ancient poop into a map of lost civilizations
Molecular sensors are finding human traces where physical ruins have vanished. (Nature)

Quote of the day

“I don’t think any of us, whether it’s me or Dario [Amodei], Sam Altman, or Elon Musk, has any legitimacy to decide for society what is a good or bad use of AI.”

—Yann LeCun gives Wired his take on Anthropic’s spat with the Pentagon.

One More Thing

This giant microwave may change the future of war

YOSHI SODEOKA

Armed forces are hunting for a weapon that disables drones en masse—and they want it fast. One solution focuses on microwaves: high-powered electronic devices that push out kilowatts of power to zap the circuits of a drone as if it were the tinfoil you forgot to take off your leftovers when you heated them up. Defense tech startup Epirus may have the winning formula. The company has developed a cutting-edge, cost-efficient drone zapper that’s sparking the interest of the US military. And drones are just one of its targets. Read the full story.
—Sam Dean

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ Werner Herzog’s magnificent movie about Africa’s ghost elephants has arrived on Disney+ and Hulu.
+ A “city killer” asteroid won’t hit Earth after all. Phew.
+ The Met is publishing high-definition 3D scans of over 100 iconic works.
+ Marty and Doc from Back to the Future are still BFFs in real life.

Top image credit: MIT TECHNOLOGY REVIEW (ILLUSTRATION) | PHOTO OF MISSILE (US NAVY), AI-GENERATED IMAGE OF RUBBLE VIA X, SCREENSHOTS VIA WORLDMONITOR, GLOBALTHREATMAP

Send asteroids to [email protected]. You can follow me on LinkedIn. Thanks for reading!

—Thomas


Prioritizing energy intelligence for sustainable growth

In partnership with Everpure

Loudoun County, Virginia, once known for its pastoral scenery and proximity to Washington, DC, has earned a more modern reputation in recent years: The area has the highest concentration of data centers on the planet. Ten years ago, these facilities powered email and e-commerce. Today, thanks to the meteoric rise in demand for AI-infused everything, local utility Dominion Energy is working hard to keep pace with surging power demands. The pressure is so acute that Dulles International Airport is constructing the largest airport solar installation in the country, a highly visible bid to bolster the region’s power mix. Data center campuses like Loudoun’s are cropping up across the country to accommodate an insatiable appetite for AI. But this buildout comes at an enormous cost. In the US alone, data centers consumed roughly 4% of national electricity in 2024. Projections suggest that figure could stretch to 12% by 2028. To put this in perspective, a single 100-megawatt data center consumes roughly as much electricity as 80,000 American homes. Data centers being built today are gearing up for gigawatt scale, enough to power a mid-sized city. For enterprise leaders, energy costs associated with AI and data infrastructure are quickly becoming both a budget concern and a potential bottleneck on growth. Meeting this moment calls for a capability most organizations are only beginning to develop: energy intelligence. The emerging discipline refers to understanding where, when, and why energy is consumed, and using that insight to optimize operations and control costs.
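The homes comparison above checks out as a back-of-the-envelope calculation, assuming an average US household uses roughly 10,700 kWh per year (an EIA ballpark figure, taken here as an assumption) and a data center drawing its full rated load around the clock:

```python
# Rough check of the "100 MW data center ≈ 80,000 US homes" comparison.
# Assumptions: ~10,700 kWh/year per average US home; the facility runs
# at its full 100 MW rating 24/7 (real utilization varies).
datacenter_mw = 100
home_kwh_per_year = 10_700

datacenter_kwh_per_year = datacenter_mw * 1_000 * 24 * 365  # MW -> kWh/year
homes = datacenter_kwh_per_year / home_kwh_per_year
print(round(homes))  # prints 81869, i.e. roughly 80,000 homes
```

The same arithmetic scales linearly: a gigawatt-class campus, ten times larger, lands near 800,000 homes, consistent with the article’s "mid-sized city" framing.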
These efforts stand to address both immediate financial pressures and longer-term reputational risks, as communities like Loudoun County grow increasingly concerned about the energy demands associated with nearby data center development. In December 2025, MIT Technology Review Insights conducted a survey of 300 executives to understand how companies are thinking about energy intelligence today, as well as where they’re anticipating challenges in the future.
Here are five of our most notable findings:

1. Energy intelligence is becoming a universal business priority. One hundred percent of executives surveyed expect the ability to measure and strategically manage power consumption to become an important business metric in the next two years.

2. AI workloads are already driving measurable cost increases, and the surge is just beginning. Two-thirds of executives (68%) report their companies have faced energy cost increases of 10% or more in the past 12 months due to AI and data workloads. Nearly all respondents (97%) anticipate their organization’s AI-related energy consumption will increase over the next 12-18 months.

3. Mounting costs are the top energy-related threat to AI innovation. Half of executives (51%) rank rising costs as the single greatest energy-related risk to their digital and AI initiatives. Most companies currently tracking and attempting to optimize data center energy consumption are motivated by cost management.

4. Organizations are responding through infrastructure optimization and energy-efficient partnerships. To address mounting energy demands, three in four leaders (74%) are optimizing existing infrastructure, while 69% are partnering with energy-efficient cloud and storage providers. More than half are also implementing AI workload scheduling (61%) and investing in more efficient hardware (56%).

5. Closing the measurement gap is the next frontier. Most enterprises still lack the granular data needed for true energy intelligence. This gap is especially pronounced for companies relying on third-party cloud providers and managed services for their data compute and storage needs, where 71% say rising consumption-based costs originate, yet energy metrics are often opaque.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.
It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.


How AI is turning the Iran conflict into theater

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. “Anyone wanna host a get together in SF and pull this up on a 100 inch TV?”  The author of that post on X was referring to an online intelligence dashboard following the US-Israel strikes against Iran in real time. Built by two people from the venture capital firm Andreessen Horowitz, it combines open-source data like satellite imagery and ship tracking with a chat function, news feeds, and links to prediction markets, where people can bet on things like who Iran’s next “supreme leader” will be (the recent selection of Mojtaba Khamenei left some bettors with a payout).  I’ve reviewed over a dozen other dashboards like this in the last week. Many were apparently “vibe-coded” in a couple of days with the help of AI tools, including one that got the attention of a founder of the intelligence giant Palantir, the platform through which the US military is accessing AI models like Claude during the war. Some were built before the conflict in Iran, but nearly all of them are being advertised by their creators as a way to beat the slow and ineffective media by getting straight to the truth of what’s happening on the ground. “Just learned more in 30 seconds watching this map than reading or watching any major news network,” one commenter wrote on LinkedIn, responding to a visualization of Iran’s airspace being shut down before the strikes.
Much of the spotlight on AI and the Iran conflict has rightfully been on the role that models like Claude might be playing in helping the US military make decisions about where to strike. But these intelligence dashboards and the ecosystem surrounding them reflect a new role that AI is playing in wartime: mediating information, often for the worse. There’s a confluence of factors at play. AI coding tools mean people don’t need much technical skill to assemble open-source intelligence anymore, and chatbots can offer fast, if dubious, analysis of it. The rise in fake content leaves observers of the war wanting the sort of raw, accurate analysis normally accessible only to intelligence agencies. Demand for these dashboards is also driven by real-time prediction markets that promise financial rewards to anyone sufficiently informed. And the fact that the US military is using Anthropic’s Claude in the conflict (despite its designation as a supply chain risk) has signaled to observers that AI is the intelligence tool the pros use. Together, these trends are creating a new kind of AI-enabled wartime circus that can distort the flow of information as much as it clarifies it.
As a journalist, I believe these sorts of intelligence tools have a lot of promise. While many of us know that real-time data on shipping routes or power outages exist, it’s a powerful thing to actually see it all assembled in one place (though using it to watch a war unfold while you munch on popcorn and place bets turns the war into perverse entertainment). But there are real reasons to think that these sorts of raw data feeds are not as informative as they may feel.  Craig Silverman, a digital investigations expert who teaches investigative techniques, has been keeping a log of these dashboards (he’s up to 20). “The concern,” he says, “is there’s an illusion of being on top of things and being in control, where all you’re really doing is just pulling in a ton of signals and not necessarily understanding what you’re seeing, or being able to pull out true insights from it.”  One problem has to do with the quality of the information. Many dashboards feature “intel feeds” with AI-generated summaries of complex, ever-changing news events. These can introduce inaccuracies. By design, the data is not especially curated. Instead, the feeds just display everything at once, with a map of strike locations in Iran next to the prices of obscure cryptocurrencies.  Intelligence agencies, on the other hand, pair data feeds with people who can offer expertise and historical context. They also, of course, have access to proprietary information that doesn’t show up on the open web.  The implicit promise from the people building and selling this sort of information pipeline about the Iran conflict is that AI can be a great democratizing force. There’s a secret feed of information that only the elites have had access to, the thinking goes, but now AI can bring it to everyone to do with what they wish, whether that’s simply to be more informed or to make bets on nuclear strikes. 
But an abundance of information, which AI is undeniably good at assembling, does not come with the accuracy or context required for real understanding. Intelligence agencies do this in-house; good journalism does the same work for the rest of us. It is, by the way, hard to overstate the connection this all has with betting markets. The dashboard created by the pair at Andreessen Horowitz has a scrolling list of bets being made on the prediction platform Kalshi (which Andreessen Horowitz has invested in). Other dashboards link to Polymarket, offering bets on whether the US will strike Iraq or when Iran’s internet will return. AI has also long made it cheaper and easier to spread fake content, and that problem is on full display during the Iran conflict: last week the Financial Times found a slew of AI-generated satellite imagery spreading online.  “The emergence of manipulated or outright fake satellite imagery is really concerning,” Silverman says. The average person tends to see such imagery as very trustworthy. The spread of such fakes could erode confidence in one of the most important pieces of evidence used to show what’s actually happening in the war.  The result is an ocean of AI-enabled content—dashboards, betting markets, photos both real and fake—that makes this war harder, not easier, to comprehend.


The Download: murky AI surveillance laws, and the White House cracks down on defiant labs

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. Is the Pentagon allowed to surveil Americans with AI? The ongoing public feud between the Department of Defense and the AI company Anthropic has raised a deep and still unanswered question: Does the law actually allow the US government to conduct mass surveillance on Americans? Surprisingly, the answer is not straightforward. More than a decade after Edward Snowden exposed the NSA’s collection of bulk metadata from the phones of Americans, the US is still navigating a gap between what ordinary people think and what the law allows.  Today, the legal complexity has a new edge: AI is supercharging surveillance—and our laws haven’t caught up. Read the full story.
—Michelle Kim

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The White House has tightened its AI rules amid the Anthropic spat
New guidelines require companies to allow “any lawful” use of their models. (FT $)
+ London’s mayor has slammed Trump’s treatment of Anthropic and invited the firm to expand in the city. (BBC)

2 A satellite firm has stopped sharing imagery after exposing Iranian strikes
Planet Labs said it wants to stop “adversarial actors” from using the data. (Ars Technica)
+ AI is turbocharging the conflict in Iran. (WSJ $)
+ War is adding a brutal new element to the country’s internet issues. (Wired $)

3 The OpenAI-Anthropic feud is getting messy
The Pentagon contract controversy has intensified a deeply personal animosity between the founders. (NYT $)
+ Sam Altman and Dario Amodei’s rivalry could reshape the future of AI. (WSJ $)
+ OpenAI’s robotics lead has quit over concerns about surveillance and “lethal autonomy.” (TechCrunch)
+ The company’s DoD “compromise” has brought Anthropic’s fears to life. (MIT Technology Review)

4 Staff at Block are outraged over the company’s “AI layoffs”
They’re pushing back against Jack Dorsey’s bullishness on AI. (The Guardian)
+ They’ve also cast doubt on the payroll savings. (Gizmodo)
+ It’s not the first case of fears over AI taking everyone’s jobs. (MIT Technology Review)

5 Data center “man camps” are springing up in Texas
Aimed at luring workers to help build the centers, they will offer free steaks and golf simulators. (Bloomberg $)

6 The OpenClaw craze is sparking a rally in Chinese tech stocks
Shares surged after government agencies and tech leaders promoted the AI agent. (Bloomberg $)
+ Why is China falling so hard for it? (SCMP)

7 AI-generated videos are altering our relationship to nature
And could lead to “distorted expectations” of animal behavior. (NYT $)
+ AI slop could form a new kind of pop culture. (MIT Technology Review)

8 A rogue AI agent freed itself to mine crypto in secret
The model escaped its sandbox to start a side hustle in digital currency. (Axios)
+ AI agents are also starting to harass people. (MIT Technology Review)

9 In a first, a spacecraft has changed an asteroid’s orbit around the sun
The feat was a test of Earth’s future defenses. (Engadget)

10 How the Furby brought creepy-cute robotics into playtime
A new show traces the legacy of the surprisingly high-tech toy. (The Verge)

Quote of the day

“I wanted to approach the whole situation with love.”

—Block cofounder and CEO Jack Dorsey tells Wired why he wore a hat with the word ‘Love’ on it during a meeting where he laid off 40% of his workforce.

One more thing

Geoffrey Hinton tells us why he’s now scared of the tech he helped build

LINDA NYLIND / EYEVINE VIA REDUX

Geoffrey Hinton is a pioneer of deep learning who helped develop some of the most important techniques at the heart of modern artificial intelligence, but after a decade at Google, he’s stepped down to focus on concerns he now has about AI. Hinton wants to spend his time on what he describes as “more philosophical work.” And that will focus on the small but—to him—very real danger that AI will turn out to be a disaster. Read the full story.

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ De La Soul’s Tiny Desk concert is a masterclass in joy and grief, proving their “Daisy Age” philosophy is timeless.
+ These original Disney concepts of beloved characters are a portal into an alternate childhood.
+ This square phone traverses two decades of nostalgia by rotating into a Game Boy AND a BlackBerry.
+ A newly discovered Rembrandt shows the Old Masters still have new tricks to reveal.


Cisco extends its Secure AI Factory with Nvidia

“Customers can now control and manage this environment and operate it like it was a traditional data center fabric,” Wollenweber said. “The ability to bring it under the same Nexus umbrella is actually a huge selling point for AI customers, because their IT infrastructure folks, their operational people that are running the network, already understand how to use these Nexus tools, and so they can now add AI workloads and kind of accelerated computing technologies like GPUs, but in that same Nexus umbrella.”

“As AI becomes operational and distributed, complexity becomes the enemy of scale. Fragmented architectures force customers to manage integration, policy enforcement, observability, and security across silos, increasing cost and slowing innovation,” said Wollenweber. “Architecting silicon, networking, compute, security, and AI software into a cohesive system gives organizations a unified operating model, stronger performance guarantees, and embedded trust.”

Those are the driving ideas behind Cisco Secure AI Factory with Nvidia, Wollenweber said. Introduced a year ago, Secure AI Factory with Nvidia integrates Cisco’s Hypershield and AI Defense packages to help protect the development, deployment, and use of AI models and applications. Hypershield uses AI to dynamically refine security policies based on application identity and behavior. It automates policy creation, optimization, and enforcement across workloads. AI Defense discovers the various models being used in a customer’s AI development and uses four features to help customers enforce AI protection: AI access, AI cloud visibility, AI model and application validation, and AI runtime protection.

Cisco integrates Hybrid Mesh Firewall technology

On the security side, Cisco said it will embed its Hybrid Mesh Firewall technology to allow for security policy enforcement on Nvidia BlueField data processing units (DPUs) that are embedded in Nvidia GPU servers connected to Cisco Nexus One fabrics.
Cisco Hybrid Mesh Firewall offers a distributed security fabric


Middle East war fosters concerns about physical data center security

The most common issue that Guidepost talks about with its clients is insider threats, which can come from anyone who is rightfully permitted into a data center. Data centers have very strict rules regarding the movement of visitors, but employees pretty much have free run of the place. “Insider threat could be someone simply putting a USB stick in a server or having access to a data device that they’re not supposed to,” he said. “A threat actor who could potentially cause harm within the facility, whether that’s mechanical, electrical, plumbing spaces or the data halls themselves, is our number one preventative item that we’re trying to thwart.”

When it comes to external threats, Guidepost guards against vehicle-borne IEDs and vehicle ramming, even if it’s accidental. That’s why data centers have high, anti-climb perimeter fences, multi-layered gates, and vehicle barriers put in place to keep unwanted vehicles away from the facility. “It’s a lot of what we call Crime Prevention Through Environmental Design,” said Bekisz. “It’s a theory that we utilize in our industry for ensuring that we are detecting and thwarting individuals before they are willing to commit some type of offensive action or some type of unwanted behavior.” That includes simple things like getting the lighting right or reducing the visibility of the data center with shrubs, trees, and berms, used in concert with physical preventative devices.

Drones are a growing problem, even if they are not being used in kamikaze attacks. Bekisz said the only thing you can do is put in drone detection, so you have some type of device monitoring the air in the area of your facility, and then call for support from local emergency services.


Palantir partners with Nvidia to streamline AI data center deployment

This collaboration grants enterprises full control over their data, AI models, and applications while supporting the use of open-source AI models and related data acceleration tools. The Palantir AI OS reference architecture is particularly critical for customers with existing GPU infrastructure, latency-sensitive workflows, data sovereignty requirements, and high geographic distribution. “From our first deployment with the United States government and in every deployment since, our software has had to meet the moment in the most complex and sensitive environments where customers must maintain control,” said Akshay Krishnaswamy, Palantir’s chief architect, in a statement. “Together with Nvidia — and building on many customers’ existing investments — we are proud to deliver a fully integrated AI operating system that is optimized for Nvidia accelerated compute infrastructure and enables customers to realize the promise of on-premises, edge, and sovereign cloud deployments,” he added. Sovereign AI is an emerging market that represents a country’s efforts to develop and maintain control of its own AI, using its own data and keeping that data within its borders.


Where OpenAI’s technology could show up in Iran

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. It’s been just over two weeks since OpenAI reached a controversial agreement to allow the Pentagon to use its AI in classified environments. There are still pressing questions about what exactly OpenAI’s agreement allows for; Sam Altman said the military can’t use his company’s technology to build autonomous weapons, but the agreement really just demands that the military follow its own (quite permissive) guidelines about such weapons. OpenAI’s other main claim, that the agreement will prevent use of its technology for domestic surveillance, appears equally dubious. It’s unclear what OpenAI’s motivations are. It’s not the first tech giant to embrace military contracts it had once vowed never to enter into, but the speed of the pivot was notable. Perhaps it’s just about money; OpenAI is spending lots on AI training and is on the hunt for more revenue (from sources including ads). Or perhaps Altman truly believes the ideological framing he often invokes: that liberal democracies (and their militaries) must have access to the most powerful AI to compete with China. The more consequential question is what happens next. OpenAI has decided it is comfortable operating right in the messy heart of combat, just as the US escalates its strikes against Iran (with AI playing a larger role in that than ever before). So where exactly could OpenAI’s tech show up in this fight? And which applications will its customers (and employees) tolerate?
Targets and strikes

Though its Pentagon agreement is in place, it’s unclear when OpenAI’s technology will be ready for classified environments, since it must be integrated with other tools the military uses (Elon Musk’s xAI, which recently struck its own deal with the Pentagon, is expected to go through the same process with its AI model Grok). But there’s pressure to do this quickly because of controversy around the technology in use to date: After Anthropic refused to permit “any lawful use” of its AI, President Trump ordered the military to stop using it, and Anthropic was designated a supply chain risk by the Pentagon. (Anthropic is fighting the designation in court.)

If the Iran conflict is still underway by the time OpenAI’s tech is in the system, what could it be used for? A recent conversation I had with a defense official suggests it might look something like this: A human analyst could put a list of potential targets into the AI model and ask it to analyze the information and prioritize which to strike first. The model could account for logistics information, like where particular planes or supplies are located. It could analyze lots of different inputs in the form of text, image, and video.
A human would then be responsible for manually checking these outputs, the official said. But that raises an obvious question: if a person is truly double-checking AI’s outputs, how is it speeding up targeting and strike decisions? For years the military has been using another AI system, called Maven, which can handle things like automatically analyzing drone footage to identify possible targets. It’s likely that OpenAI’s models, like Anthropic’s Claude, will offer a conversational interface on top of that, allowing users to ask for interpretations of intelligence and recommendations for which targets to strike first.

It’s hard to overstate how new this is: AI has long done analysis for the military, drawing insights out of oceans of data. But using generative AI’s advice about which actions to take in the field is being tested in earnest for the first time in Iran.

Drone defense

At the end of 2024, OpenAI announced a partnership with Anduril, which makes both drones and counter-drone technologies for the military. The agreement said OpenAI would work with Anduril to do time-sensitive analysis of drones attacking US forces and help take them down. An OpenAI spokesperson told me at the time that this didn’t violate the company’s policies, which prohibited “systems designed to harm others,” because the technology was being used to target drones and not people.

Anduril provides a suite of counter-drone technologies to military bases around the world (though the company declined to tell me whether its systems are deployed near Iran). Neither company has provided updates on how the project has developed since it was announced. However, Anduril has long trained its own AI models to analyze camera footage and sensor data to identify threats; what it focuses less on are conversational AI systems that allow soldiers to query those systems directly or receive guidance in natural language, an area where OpenAI’s models may fit. The stakes are high.
Six US service members were killed in Kuwait on March 1 following an Iranian drone attack that was not intercepted by US air defenses. Anduril’s interface, called Lattice, is where soldiers can control everything from drone defenses to missiles and autonomous submarines. And the company is winning massive contracts ($20 billion from the US Army just last week) to connect its systems with legacy military equipment and layer AI on them. If OpenAI’s models prove useful to Anduril, Lattice is designed to incorporate them quickly across this broader warfare stack.

Back-office AI

In December, Defense Secretary Pete Hegseth started encouraging millions of people in more administrative roles in the military (contracts, logistics, purchasing) to use a new AI tool. Called GenAI.mil, it provided a way for personnel to securely access commercial AI models and use them for the same sorts of things as anyone in the business world.

Google Gemini was one of the first to be available. In January, the Pentagon announced that xAI’s Grok would be added to the GenAI.mil platform as well, despite incidents in which the model had spread antisemitic content and created nonconsensual deepfakes. OpenAI followed in February, announcing that its models would be used for drafting policy documents and contracts and assisting with administrative support of missions.

Anyone using ChatGPT for unclassified tasks on this platform is unlikely to have much sway over sensitive decisions in Iran, but the prospect of OpenAI deploying on the platform is important in another way. It fits the all-in attitude toward AI that Hegseth has been pushing relentlessly across the Pentagon (even if many early users aren’t entirely sure what they’re supposed to use it for). The message is that AI is transforming every aspect of how the US fights, from targeting decisions down to paperwork. And OpenAI is increasingly winning a piece of it all.

Read More »

Who’s in the data-center space race?

But not everyone is that optimistic. According to Gartner, space-based data centers won’t be useful for decades, so companies should focus on expanding capacity down here on Earth. “I honestly think the idea with the current landscape of putting data centers in space is ridiculous,” OpenAI CEO Sam Altman told The Indian Express in February.

Current satellite computing can’t easily scale to data centers, agrees Holger Mueller, an analyst at Constellation Research. “Weight is still the restriction,” he says. “It’s the equivalent of you buying a tablet or small laptop to travel across Latin America versus putting in a data center in the Amazon. Different power requirements, investment, totally different setup.” Then there are issues like solar panels damaged by meteorite storms and satellite debris, he adds. “You would have to pay for operational redundancy, which is further investment.” “Data centers will be built where they are affordable,” he says. “I don’t see space happening soon. Remember the Microsoft submerged one? Crickets…” But he agrees that solar power is nice, though the sun is only visible from one side of the planet at any given time. And space is cold, he says.

Cooling down in outer space

In fact, space is very cold. Close to absolute zero cold. But a vacuum is also a great insulator, and there’s no air to move the heat around. “You can’t convect heat away,” says Richard Bonner, CTO at Accelsius, a liquid cooling company. Bonner has worked on NASA research projects about the challenge of cooling in space and is very familiar with the problem. A small proportion of the heat might be turned back into useful electricity, but that’s not really a solution, he says, because computer chips don’t get quite that hot. Instead, heat is radiated. When an object warms up, it generates…
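The scale of the radiation problem Bonner describes is easy to sketch with the Stefan-Boltzmann law: in a vacuum, waste heat can leave only by radiation, so radiator area grows linearly with power. The numbers below (a 1 MW facility, a 300 K radiator, emissivity 0.9, one-sided radiation, no absorbed sunlight) are illustrative assumptions, not figures from the article:

```python
# Rough radiator sizing for a hypothetical space data center.
# With no convection, heat rejection follows the Stefan-Boltzmann
# law: P = emissivity * SIGMA * area * T**4.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Radiator area (m^2) needed to reject power_w watts at temp_k."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# A modest 1 MW facility radiating at 300 K (about room temperature):
area = radiator_area(1e6, 300.0)
print(f"{area:.0f} m^2 of radiator per megawatt")  # roughly 2,400 m^2
```

Because the T⁴ term dominates, running radiators hotter shrinks them dramatically: doubling the radiator temperature cuts the required area sixteenfold.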

Read More »

Quantum Elements cuts quantum error rates using AI-powered digital twin

“That’s pretty clever, actually,” Sutor says. “It’s a little microwave pulse. That fixes some of the errors.” The Quantum Elements paper specifically addressed quantum error correction in IBM’s 127-qubit superconducting processor. But these techniques might also generalize to other types of quantum computers, Sutor says. And any improvement in error correction will bring usable quantum computers that much closer. So will the other aspect of this announcement—the fact that the new error-correction technique was developed using Quantum Elements’ AI-powered, digital-twin-style quantum computer simulator, Constellation. Most quantum computer simulators allow people developing quantum applications to test them in ideal environments. But real quantum computers have errors and noise. Quantum Elements’ simulator models that noise, allowing developers to test in near-real-world conditions. There are also other simulation platforms, including IBM’s Qiskit Aer and Quantinuum’s H-Series Emulator. According to Medalsy, the simulators from IBM and Quantinuum use simplified models that don’t reproduce all the noise. “Quantum Elements’ digital twin is aimed at hardware-faithful simulation at experiment scale,” he says. “It is designed to preserve the full noise signature, both coherent and incoherent.”
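The gap between an ideal simulator and a noise-aware one can be illustrated with a toy depolarizing-noise model: each gate leaves the qubit unchanged with probability 1 − p and scrambles it toward the maximally mixed state with probability p. The 1% per-gate error rate and the single-qubit channel below are invented for illustration; they are not Quantum Elements’ or IBM’s actual noise figures:

```python
# Toy model of why noise-aware simulation matters: an ideal simulator
# keeps the qubit perfect, while a noisy one applies a depolarizing
# error after every gate, so fidelity decays with circuit depth.

def fidelity_after(n_gates: int, p: float) -> float:
    """Overlap with the ideal |0> state after n_gates noisy gates.

    Under a depolarizing channel the |0> population evolves as
    rho00 -> (1 - p) * rho00 + p / 2 per gate.
    """
    rho00 = 1.0  # start exactly in |0>
    for _ in range(n_gates):
        rho00 = (1 - p) * rho00 + p / 2
    return rho00

ideal = fidelity_after(100, 0.0)   # ideal simulator: fidelity stays 1.0
noisy = fidelity_after(100, 0.01)  # 1% depolarizing error per gate
print(f"ideal: {ideal:.3f}, noisy: {noisy:.3f}")  # noisy is about 0.683
```

A circuit that looks perfect on an ideal simulator can thus lose a third of its fidelity after only 100 gates at a 1% error rate, which is why testing against a hardware-faithful noise model gives a much more realistic picture.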

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters, and Energy industry news. Spend 3-5 minutes and catch up on one week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE