Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI:

News and analysis on artificial intelligence, from compute demand to the companies building AI infrastructure.

Bitcoin:

Updates on Bitcoin mining, hardware, and the power economics behind the network.

Datacenter:

Coverage of data center development, siting, power procurement, and operations.

Energy:

Insights on power markets, generation, and the grid capacity that underpins it all.


Featured Articles

Nscale Expands AI Factory Strategy With Power, Platform, and Scale

Nscale has moved quickly from startup to serious contender in the race to build infrastructure for the AI era. Founded in 2024, the company has positioned itself as a vertically integrated “neocloud” operator, combining data center development, GPU fleet ownership, and a software stack designed to deliver large-scale AI compute. That model has helped it attract backing from investors including Nvidia, and in early March 2026 the company raised another $2 billion at a reported $14.6 billion valuation. Reuters has described Nscale’s approach as owning and operating its own data centers, GPUs, and software stack to support major customers including Microsoft and OpenAI. What makes Nscale especially relevant now is that it is no longer content to operate as a cloud intermediary or capacity provider. Over the past year, the company has increasingly framed itself as an AI hyperscaler and AI factory builder, seeking to combine land, power, data center shells, GPU procurement, customer offtake, and software services into a single integrated platform. Its acquisition of American Intelligence & Power Corporation, or AIPCorp, is the clearest signal yet of that shift, bringing energy infrastructure directly into the center of Nscale’s business model. The AIPCorp transaction is significant because it gives Nscale more than additional development capacity. The company said the deal includes the Monarch Compute Campus in Mason County, West Virginia, a site of up to 2,250 acres with a state-certified AI microgrid and a power runway it says can scale beyond 8 gigawatts. Nscale also said the acquisition establishes a new division, Nscale Energy & Power, headquartered in Houston, extending its platform further into power development. That positioning reflects a broader shift in the AI infrastructure market. The central bottleneck is no longer simply access to GPUs. It is the ability to assemble power, cooling, land, permits, data center

Read More »

Four things we’d need to put data centers in space

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here. In January, Elon Musk’s SpaceX filed an application with the US Federal Communications Commission to launch up to one million data centers into Earth’s orbit. The goal? To fully unleash the potential of AI without triggering an environmental crisis on Earth. But could it work? SpaceX is the latest in a string of high-tech companies extolling the potential of orbital computing infrastructure. Last year, Amazon founder Jeff Bezos said that the tech industry will move toward large-scale computing in space. Google has plans to loft data-crunching satellites, aiming to launch a test constellation of 80 as early as next year. And last November Starcloud, a startup based in Washington State, launched a satellite fitted with a high-performance Nvidia H100 GPU, marking the first orbital test of an advanced AI chip. The company envisions orbiting data centers as large as those on Earth by 2030. Proponents believe that putting data centers in space makes sense. The current AI boom is straining energy grids and adding to the demand for water, which is needed to cool the computers. Communities in the vicinity of large-scale data centers worry about increasing prices for those resources as a result of the growing demand, among other issues.
In space, advocates say, the water and energy problems would be solved. In constantly illuminated sun-synchronous orbits, space-borne data centers would have uninterrupted access to solar power. At the same time, the excess heat they produce would be easily expelled into the cold vacuum of space. And with the cost of space launches decreasing, and mega-rockets such as SpaceX’s Starship promising to push prices even lower, there could be a point at which moving the world’s data centers into space makes sound business sense. Detractors, on the other hand, tell a different story and point to a variety of technological hurdles, though some say it’s possible they may be surmountable in the not-so-distant future. Here are four of the must-haves we’d need to make space-based data centers a reality.

A way to carry away heat

AI data centers produce a lot of heat. Space might seem like a great place to dispel that heat without using up massive amounts of water. But it’s not so simple. To get the power needed to run 24/7, a space-based data center would have to be in a constantly illuminated orbit, circling the planet from pole to pole, never hiding in Earth’s shadow. And in that orbit, the temperature of the equipment would never drop below 80 °C, which is way too hot for electronics to operate safely in the long term.
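To see the scale of the problem, a back-of-envelope estimate using the Stefan–Boltzmann law helps. The sketch below assumes a gigawatt of waste heat, a radiator held at 80 °C, and a high-emissivity coating; all three values are illustrative assumptions, not figures from any of the proposals discussed here.

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law:
# rejected power P = emissivity * sigma * area * T^4,
# so the required area is A = P / (emissivity * sigma * T^4).
SIGMA = 5.670e-8           # Stefan-Boltzmann constant, W/m^2/K^4
P_WASTE = 1e9              # assumed 1 GW of waste heat (gigawatt-scale facility)
T_RADIATOR = 273.15 + 80   # radiator temperature of 80 degrees C, in kelvin
EMISSIVITY = 0.9           # assumed high-emissivity radiator coating

area_m2 = P_WASTE / (EMISSIVITY * SIGMA * T_RADIATOR**4)
print(f"Required radiator area: {area_m2:,.0f} m^2 (~{area_m2 / 1e6:.1f} km^2)")
# Roughly 1.3 km^2 of radiator -- and this ignores absorbed sunlight,
# which only makes the requirement larger.
```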
Getting the heat out of such a system is surprisingly challenging. “Thermal management and cooling in space is generally a huge problem,” says Lilly Eichinger, CEO of the Austrian space tech startup Satellives. On Earth, heat dissipates mostly through the natural process of convection, which relies on the movement of gases and liquids like air and water. In the vacuum of space, heat has to be removed through the far less efficient process of radiation. Safely removing the heat produced by the computers, as well as what’s absorbed from the sun, requires large radiative surfaces. The bulkier the satellite, the harder it is to send all the heat inside it out into space. But Yves Durand, former director of technology at the European aerospace giant Thales Alenia Space, says that technology already exists to tackle the problem. The company previously developed a system for large telecommunications satellites that can pipe refrigerant fluid through a network of tubing using a mechanical pump, ultimately transferring heat from within a spacecraft to radiators on the exterior. Durand led a 2024 feasibility study on space-based data centers, which found that although challenges exist, it should be possible for Europe to put gigawatt-scale data centers (on par with the largest Earthbound facilities) into orbit before 2050. These would be considerably larger than those envisioned by SpaceX, featuring solar arrays hundreds of meters in size—larger than the International Space Station.

Computer chips that can withstand a radiation onslaught

The space around Earth is constantly battered by cosmic particles and lashed by solar radiation. On Earth’s surface, humans and their electronic devices are protected from this corrosive soup of charged particles by the planet’s atmosphere and magnetosphere. But the farther away from Earth you venture, the weaker that protection becomes. Studies show that aircraft crews have a higher risk of developing cancer because of their frequent exposure to high radiation at cruising altitude, where the atmosphere is thin and less protective. Electronics in space are at risk of three types of problems caused by high radiation levels, says Ken Mai, a principal systems scientist in electrical and computer engineering at Carnegie Mellon University. Phenomena known as single-event upsets can cause bit flips and corrupt stored data when charged particles hit chips and memory devices. Over time, electronics in space accumulate damage from ionizing radiation that degrades their performance. And sometimes a charged particle can strike a component in a way that physically displaces atoms on the chip, creating permanent damage, Mai explains. Traditionally, computers launched to space had to undergo years of testing and were specifically designed to withstand the intense radiation present in Earth’s orbit. These space-hardened electronics are much more expensive, though, and their performance is also years behind the state-of-the-art devices for Earth-based computing. Launching conventional chips is a gamble. But Durand says cutting-edge computer chips use technologies that are by default more resistant to radiation than past systems.
And in mid-March, Nvidia touted hardware, including a new GPU, that is “bringing AI compute to orbital data centers.” Nvidia’s head of edge AI marketing, Chen Su, told MIT Technology Review that “Nvidia systems are inherently commercial off the shelf, with radiation resilience achieved at the system level rather than through radiation‑hardened silicon alone.” He added that satellite makers increase the chips’ resiliency with the help of shielding, advanced software for error detection, and architectures that combine the consumer-grade devices with bespoke, hardened technologies.
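One common software-level defense in that spirit is redundancy with majority voting: run the same computation on several consumer-grade units and vote on the result, so a single-event upset on one unit is masked. The sketch below is purely illustrative of that idea, not Nvidia’s or any satellite maker’s implementation.

```python
from collections import Counter

# Triple modular redundancy in miniature: execute the same computation on
# three independent units and take the majority answer, so a bit flip from
# a single-event upset on one unit does not corrupt the final result.
def majority_vote(results):
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority -- recompute or fail over")
    return value

def compute_on_unit(x, upset=False):
    y = x * x + 1
    return y ^ 0x4 if upset else y  # simulate a radiation-induced bit flip

results = [compute_on_unit(7), compute_on_unit(7, upset=True), compute_on_unit(7)]
print(majority_vote(results))  # prints 50 -- the flipped result (54) is outvoted
```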

Still, Mai says that the data-crunching chips are only one issue. The data centers would also need memory and storage devices, both of which are vulnerable to damage by excessive radiation. And operators would need the ability to swap things out or adapt when issues arise. The feasibility and affordability of using robots or astronaut missions for maintenance is a major question mark hanging over the idea of large-scale orbiting data centers. “You not only need to throw up a data center to space that meets your current needs; you need redundancy, extra parts, and reconfigurability, so when stuff breaks, you can just change your configuration and continue working,” says Mai. “It’s a very challenging problem because on one hand you have free energy and power in space, but there are a lot of disadvantages. It’s quite possible that those problems will outweigh the advantages that you get from putting a data center into space.” In addition to the need for regular maintenance, there’s also the potential for catastrophic loss. During periods of intense space weather, satellites can be flooded with enough radiation to kill all their electronics. The sun has just passed the most active phase of its 11-year cycle with relatively little impact on satellites. Still, experts warn that since the space age began, the planet has not experienced the worst the sun is capable of. Many doubt whether the low-cost new space systems that dominate Earth’s orbits today are prepared for that.

A plan to dodge space debris

Both large-scale orbiting data centers, such as those envisioned by Thales Alenia Space, and mega-constellations of smaller satellites, like those proposed by SpaceX, give space sustainability experts a headache. The space around Earth is already quite crowded with satellites. Starlink satellites alone perform hundreds of thousands of collision avoidance maneuvers every year to dodge debris and other spacecraft. The more stuff in space, the higher the likelihood of a devastating collision that would clutter the orbit with thousands of dangerous fragments. Large structures with hundreds of square meters of solar arrays would quickly suffer damage from small pieces of space debris and meteorites, which would over time degrade the performance of their solar panels and create more debris in orbit. Operating one million satellites in low Earth orbit, the region of space at altitudes of up to 2,000 kilometers, might be impossible to do safely unless all satellites in that area are part of the same network so they can communicate effectively to maneuver around each other, Greg Vialle, the founder of the orbital recycling startup Lunexus Space, told MIT Technology Review. “You can fit roughly four to five thousand satellites in one orbital shell,” Vialle says. “If you count all the shells in low Earth orbit, you get to a number of around 240,000 satellites maximum.” And spacecraft must be able to pass each other at a safe distance to avoid collisions, he says. “You also need to be able to get stuff up to higher orbits and back down to de-orbit,” he adds. “So you need to have gaps of at least 10 kilometers between the satellites to do that safely. Mega-constellations like Starlink can be packed more tightly because the satellites communicate with each other. But you can’t have one million satellites around Earth unless it’s a monopoly.”
On top of that, Starlink would likely want to regularly upgrade its orbiting data centers with more modern technology. Replacing a million satellites perhaps every five years would mean even more orbital traffic—and it could increase the rate of debris reentry into Earth’s atmosphere from around three or four pieces of junk a day to about one every three minutes, according to a group of astronomers who filed objections against SpaceX’s FCC application. Some scientists are concerned that reentering debris could damage the ozone layer and alter Earth’s thermal balance.
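The astronomers’ figure is easy to sanity-check with the numbers already cited: a one-million-satellite fleet on an assumed five-year replacement cycle.

```python
# Rough arithmetic behind the "one piece of junk every three minutes" figure.
FLEET = 1_000_000       # satellites in the proposed constellation
REPLACEMENT_YEARS = 5   # assumed replacement cycle from the filing objections

reentries_per_day = FLEET / REPLACEMENT_YEARS / 365
minutes_between = 24 * 60 / reentries_per_day
print(f"{reentries_per_day:.0f} reentries/day, one every {minutes_between:.1f} minutes")
# ~548 reentries/day, one every ~2.6 minutes -- consistent with
# "about one every three minutes".
```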
Economical launch and assembly

The longer hardware survives in orbit, the better the return on investment. But for orbital data centers to make economic sense, companies will have to find a relatively cheap way to get that hardware into orbit. SpaceX is betting on its upcoming Starship mega-rocket, which will be able to carry up to six times as much payload as the current workhorse, Falcon 9. The Thales Alenia Space study concluded that if Europe were to build its own orbital data centers, it would have to develop a similarly potent launcher.

But launch is only part of the equation. A large-scale orbital data center won’t fit in a rocket—even a mega-rocket. It will need to be assembled in orbit. And that will likely require advanced robotic systems that do not exist yet. Various companies have conducted Earth-based tests with precursors of such systems, but they are still far from real-world use. Durand says that in the short term, smaller-scale data centers are likely to establish themselves as an integral part of the orbital infrastructure, by processing images from Earth-observing satellites directly in space without having to send them to Earth. That would be a huge help for companies selling insights from space, as many of these data sets are extremely large, and competition for opportunities to downlink them to Earth for processing via ground stations is growing. “The good thing with orbital data centers is that you can start with small servers and gradually increase and build up larger data centers,” says Durand. “You can use modularity. You can learn little by little and gradually develop industrial capacity in space. We have all the technology, and the demand for space-based data processing infrastructure is huge, so it makes sense to think about it.” Smaller facilities probably won’t do much to offset the strain that terrestrial data centers are placing on the planet’s water and electricity, though. That vision of the future might take decades to come to fruition, some critics think—if it even gets off the ground at all.

Read More »

French government takes Bull by the horns for €404 million

It’s the second time that Bull has been nationalized: the first time, in 1982, was to save it from bankruptcy. Atos has had financial troubles of its own. In August 2024, it tried — and failed — to sell its legacy infrastructure management business. The company had already staved off possible bankruptcy thanks to some refinancing from banks and the French government. That was after it had contemplated selling its IT security business. Bull still retains some pedigree. As part of Atos subsidiary Eviden, it built Jupiter, the world’s fourth most powerful computer at the time of writing, with a computing capacity of 1 exaflop (1 billion billion floating-point operations per second). It is also a member of a consortium building Europe’s second exascale computing system, Alice Recoque, due for delivery in 2027.

Read More »

CERT-EU blames Trivy supply chain attack for Europa.eu data breach

Back door credentials

The Trivy compromise dates to February, when TeamPCP exploited a misconfiguration in Trivy’s GitHub Actions environment, now identified as CVE-2026-33634, to establish a foothold via a privileged access token, according to Aqua Security. On discovering this, Aqua Security rotated credentials, but because some credentials remained valid during this process, the attackers were able to steal the newly rotated credentials. By manipulating trusted Trivy version tags, TeamPCP forced CI/CD pipelines using the tool to automatically pull down credential-stealing malware it had implanted. This allowed TeamPCP to target a variety of valuable information, including AWS, GCP, and Azure cloud credentials, Kubernetes tokens, Docker registry credentials, database passwords, TLS private keys, SSH keys, and cryptocurrency wallet files, according to security researchers at Palo Alto Networks. In effect, the attackers had turned a tool used to find cloud vulnerabilities and misconfigurations into a yawning vulnerability of its own. CERT-EU advised organizations affected by the Trivy compromise to immediately update to a known safe version, rotate all AWS and other credentials, audit Trivy versions in CI/CD pipelines, and, most importantly, ensure GitHub Actions are tied to immutable SHA-1 commit hashes rather than mutable tags. It also recommended looking for indicators of compromise (IoCs) such as unusual Cloudflare tunnelling activity or traffic spikes that might indicate data exfiltration.
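That last recommendation is straightforward to check mechanically. Below is a minimal sketch of such an audit in Python; the file layout, regex, and function name are illustrative assumptions, not CERT-EU tooling.

```python
import re
from pathlib import Path

# Flag GitHub Actions steps that reference a mutable tag or branch
# (e.g. @v4 or @main) instead of an immutable 40-hex commit SHA.
USES_RE = re.compile(r"^\s*(?:-\s+)?uses:\s*([\w.-]+/[\w.-]+)@(\S+)", re.MULTILINE)
FULL_SHA = re.compile(r"^[0-9a-f]{40}$")

def audit_workflows(repo_root: str) -> list[str]:
    findings = []
    for wf in Path(repo_root).glob(".github/workflows/*.y*ml"):
        for action, ref in USES_RE.findall(wf.read_text(encoding="utf-8")):
            if not FULL_SHA.match(ref):
                findings.append(f"{wf}: {action}@{ref} is pinned to a mutable ref")
    return findings

if __name__ == "__main__":
    for finding in audit_workflows("."):
        print(finding)
```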

Read More »

Cisco fixes critical IMC auth bypass present in many products

Cisco has released patches for a critical vulnerability in its out-of-band management solution, present in many of its servers and appliances. The flaw allows unauthenticated remote attackers to gain admin access to the Cisco Integrated Management Controller (IMC), which gives administrators remote control over servers even when the main OS is shut down. The vulnerability, tracked as CVE-2026-20093, stems from incorrect handling of password changes and can be exploited by sending specially crafted HTTP requests. This means servers with their IMC interfaces exposed directly to the local network — or worse, to the internet — are at immediate risk. The Cisco IMC is a baseboard management controller (BMC), a dedicated controller embedded into server motherboards with its own RAM and network interface that gives administrators monitoring and management capabilities as if they were physically connected to the server with a keyboard, monitor, and mouse (KVM). Because BMCs run their own firmware independently of the OS, they can be used to perform operations even when the OS is shut down, including reinstalling it.
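Given that risk profile, a sensible first step is simply finding out which BMC interfaces are reachable from where. The rough sketch below is a generic reachability probe, not a Cisco utility; the subnet and port are placeholder assumptions to adapt to your own out-of-band management network.

```python
import socket
from ipaddress import ip_network

# Check which hosts on a management subnet answer on the HTTPS port that
# IMC/BMC web interfaces typically listen on. These interfaces should be
# reachable only from a dedicated, firewalled management network.
MGMT_SUBNET = "192.0.2.0/28"  # assumption: replace with your OOB subnet
IMC_PORT = 443

def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for addr in ip_network(MGMT_SUBNET).hosts():
    if probe(str(addr), IMC_PORT):
        print(f"{addr}: BMC-style HTTPS interface reachable -- verify it is not exposed")
```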

Read More »

Kyndryl service targets AI agent automation, security

According to Kyndryl, the service is designed to help organizations:

- Understand agents, serving as a single source of truth to help mitigate the risks associated with shadow AI.
- Validate each agent before launch by testing for security, resilience, and policy compliance to ensure they meet your standards before going live.
- Maintain control with real-time guardrails that keep agents operating within approved boundaries.

Security testing, validation, and threat modeling should be incorporated into development pipelines, Kyndryl stated. “Additionally, runtime protections such as anomaly detection, guardian agents, and rapid isolation capabilities can help contain incidents before they escalate. By making security and governance foundational rather than treating them as afterthoughts, organizations can confidently scale agentic AI, knowing that risks are proactively managed, and trust is maintained with customers, partners, and regulators,” Kyndryl stated. The new service is just one of the platforms the vendor offers to manage AI agents. Last year Kyndryl introduced its Agentic AI Framework. That package offers an orchestration system built to deploy and manage autonomous, self-learning agents across business workflows in on-prem, cloud, or hybrid IT environments, according to the company. Specialized agents are deployed to handle IT tasks such as data analysis, compliance checks, incident response, or service desk ticket resolution. Over time, agents learn from data and outcomes to improve decision-making and adapt workflows autonomously, and an orchestration engine parses that data to let enterprise systems adjust to changing conditions in real time, Kyndryl stated. The platform defines what actions agents can and cannot do, basically setting policy across the enterprise.
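That action-policy idea is easy to picture in code. The sketch below is a hypothetical illustration of a runtime guardrail that blocks agent actions outside an approved allowlist; it is not Kyndryl’s implementation or API, and every name in it is invented for illustration.

```python
from dataclasses import dataclass

# Assumed policy: the set of actions this agent is approved to perform.
APPROVED_ACTIONS = {"read_ticket", "summarize_logs", "open_incident"}

@dataclass
class AgentAction:
    name: str
    target: str

def guardrail(action: AgentAction) -> bool:
    """Return True only when the proposed action is inside the approved boundary."""
    return action.name in APPROVED_ACTIONS

for proposed in [AgentAction("summarize_logs", "svc-42"),
                 AgentAction("delete_database", "prod")]:
    if guardrail(proposed):
        print(f"allowed: {proposed.name} on {proposed.target}")
    else:
        print(f"blocked: {proposed.name} on {proposed.target} (outside approved boundary)")
```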

Read More »


Energy Department Initiates Additional Strategic Petroleum Reserve Emergency Exchange to Stabilize Global Oil Supply

WASHINGTON—The U.S. Department of Energy (DOE) issued a Request for Proposal (RFP) today for an emergency exchange of 10 million barrels from the Strategic Petroleum Reserve (SPR). This action is part of the coordinated release of 400 million barrels from IEA member nations’ strategic reserves that President Trump previously announced. The United States continues to deliver on its 172-million-barrel release commitment. The crude oil will originate from the SPR’s Bryan Mound site. Today’s action builds on the initial phase of the Emergency Exchange, which moved quickly to award 45.2 million barrels from the Bayou Choctaw, Bryan Mound, and West Hackberry SPR sites. The 10-million-barrel exchange leverages the full capabilities of the SPR, alongside the President’s limited Jones Act waiver, to accelerate critical near-term oil flows into the market. “Today’s action furthers the United States’ efforts to move oil quickly to the market and mitigate short-term supply disruptions,” said DOE Assistant Secretary of the Hydrocarbons and Geothermal Energy Office Kyle Haustveit. “Thanks to President Trump, America is managing our national security assets responsibly again. Through this exchange, we will continue to refill the Strategic Petroleum Reserve by bringing additional barrels back at a later date through this pragmatic exchange structure, strengthening its long-term readiness and all at no cost to the American taxpayer.” Under DOE’s exchange authority, participating companies will return the borrowed 10 million barrels plus additional premium barrels by next year. This exchange delivers immediate crude to refiners and the market while generating additional barrels for the American people at no cost to taxpayers. Bids for the solicitation are due no later than 11:00 a.m. CT on Monday, April 6, 2026. For more information on the SPR, please visit DOE’s website.

Read More »

Trump Administration Keeps Colorado Coal Plant Open to Ensure Affordable, Reliable and Secure Power in Colorado

WASHINGTON—U.S. Secretary of Energy Chris Wright today issued an emergency order to keep a Colorado coal plant operational to ensure Americans maintain access to affordable, reliable and secure electricity. The order directs Tri-State Generation and Transmission Association (Tri-State), Platte River Power Authority, Salt River Project, PacifiCorp, and Public Service Company of Colorado (Xcel Energy), in coordination with the Western Area Power Administration (WAPA) Rocky Mountain Region and Southwest Power Pool (SPP), to take all measures necessary to ensure that Unit 1 at the Craig Station in Craig, Colorado is available to operate. Unit 1 of the coal plant was scheduled to shut down at the end of 2025, but on December 30, 2025, Secretary Wright issued an emergency order directing Tri-State and the co-owners to ensure that Unit 1 at the Craig Station remains available to operate. “The last administration’s energy subtraction policies threatened America’s energy security and positioned our nation to likely experience significantly more blackouts in the coming years—thankfully, President Trump won’t let that happen,” said Energy Secretary Wright. “The Trump Administration will continue taking action to ensure we don’t lose critical generation sources. Americans deserve access to affordable, reliable, and secure energy to power their homes all the time, regardless of whether the wind is blowing or the sun is shining.” Thanks to President Trump’s leadership, coal plants across the country are reversing plans to shut down. In 2025, more than 17 gigawatts (GW) of coal-fired electricity generation capacity was saved. On April 1, once Tri-State and the WAPA Rocky Mountain Region join the SPP RTO West expansion, SPP is directed to take every step to employ economic dispatch to minimize costs to ratepayers. According to DOE’s Resource Adequacy Report, blackouts were on track to potentially increase 100 times by 2030 if the U.S. continued to take reliable

Read More »

NextDecade contractor Bechtel awards ABB more Rio Grande LNG automation work

NextDecade Corp. contractor Bechtel Corp. has awarded ABB Ltd. additional integrated automation and electrical solution orders, extending its scope to Trains 4 and 5 of NextDecade’s 30-million tonne/year (tpy) Rio Grande LNG (RGLNG) plant in Brownsville, Tex. The orders were booked in the third and fourth quarters of 2025 and build on ABB’s Phase 1 work with Trains 1-3, totaling 17 million tpy. The scope for RGLNG Trains 4 and 5 includes deployment of an integrated control and safety system consisting of a distributed control system, emergency shutdown, and fire and gas systems. An electrical controls and monitoring system will provide unified visibility of the plant’s electrical infrastructure. These two overarching solutions will provide a common automation platform. ABB will also supply medium-voltage drives, synchronous motors, transformers, motor controllers and switchgear. The orders also include local equipment buildings—two for Train 4 and one for Train 5—housing critical control and electrical systems in prefabricated modules to streamline installation and commissioning on site. The solutions being delivered to Bechtel use ABB adaptive execution, a methodology for capital projects designed to optimize engineering work and reduce delivery timelines. Phase 1 of RGLNG is under construction and expected to begin operations in 2027. Operations at Train 4 are expected in 2030 and Train 5 in 2031. ABB’s senior vice-president for the Americas, Scott McCay, confirmed to Oil & Gas Journal at CERAWeek by S&P Global in Houston that the company is doing similar work through Tecnimont for Argent LNG’s planned 25-million tpy plant in Port Fourchon, La. (10-million tpy Phase 1 and 15-million tpy Phase 2). Argent is targeting 2030 completion for its plant.

Read More »

Persistent oil flow imbalances drive Enverus to increase crude price forecast

Citing impacts from the Iran war, near-zero flows through the Strait of Hormuz, accelerating global stock draws, and expectations for a muted US production response despite higher prices, Enverus Intelligence Research (EIR) raised its Brent crude oil price forecast. EIR now expects Brent to average $95/bbl for the remainder of 2026 and $100/bbl in 2027, reflecting what it described as a persistent global oil flow imbalance that continues to draw down inventories. “The world has an oil flow problem that is draining stocks,” said Al Salazar, director of research at EIR. “Whenever that oil flow problem is resolved, the world is left with low stocks. That’s what drives our oil price outlook higher for longer.” The outlook assumes the Strait of Hormuz remains largely closed for 3 months. EIR estimates that each month of constrained flows shifts the price outlook by about $10–15/bbl, underscoring the scale of the disruption and uncertainty around its duration. Despite West Texas Intermediate (WTI) prices of $90–100/bbl, EIR does not expect US producers to materially increase output. The firm forecasts US liquids production growth of 370,000 b/d by end-2026 and 580,000 b/d by end-2027, citing drilling-to-production lags, industry consolidation, and continued capital discipline. Global oil demand growth for 2026 has been reduced to about 500,000 b/d from 1.0 million b/d as higher energy prices and anticipated supply disruptions weigh on economic activity. Cumulative global oil stock draws are estimated at roughly 1 billion bbl through 2027, with non-OECD inventories—particularly in Asia—absorbing nearly half of the impact. A 60-day Jones Act waiver may provide limited short-term US shipping flexibility, but EIR said the measure is unlikely to materially affect global oil prices given broader market forces.

Read More »

Equinor begins drilling $9-billion natural gas development project offshore Brazil

Equinor has started drilling the Raia natural gas project in the Campos basin presalt offshore Brazil. The $9-billion project is Equinor’s largest international investment and its largest project under execution, and it marks the deepest-water operation in the company’s portfolio. The drilling campaign, which began Mar. 24 with the Valaris DS‑17 drillship, includes six wells in the Raia area 200 km offshore in water depths of around 2,900 m. The area is expected to hold recoverable natural gas and condensate reserves of over 1 billion boe. Raia’s development concept is based on production through wells connected to a 126,000-b/d floating production, storage and offloading unit (FPSO), which will treat produced oil/condensate and gas. Natural gas will be transported through a 200‑km pipeline from the FPSO to Cabiúnas, in the city of Macaé, Rio de Janeiro state. Once in operation, expected in 2028, the project will have the capacity to export up to 16 million cu m/day of natural gas, which could represent 15% of Brazil’s natural gas demand, the company said in a release Mar. 24. “While drilling takes place, integration and commissioning activities on the FPSO are progressing well, putting us on track towards a safe start of operations in 2028,” said Geir Tungesvik, executive vice-president, projects, drilling and procurement, Equinor. The Raia project is operated by Equinor (35%), in partnership with Repsol Sinopec Brasil (35%) and Petrobras (30%).

Read More »

Woodfibre LNG receives additional modules as construction advances

Woodfibre LNG LP has received two major modules within a week for its under‑construction, 2.1‑million tonne/year (tpy) LNG export plant near Squamish, British Columbia, advancing construction to about 65% complete. The deliveries include the liquefaction module—the project’s heaviest and most critical process unit—and the powerhouse module, which will serve as the plant’s central power and control hub. The liquefaction module, delivered aboard the heavy cargo vessel Red Zed 1, is the 15th of 19 modules scheduled for installation at the site, the company said in a Mar. 24 release. Weighing 10,847 metric tonnes and occupying a footprint roughly equivalent to a football field, it is among the largest modules fabricated for the project. Once installed and commissioned, the liquefaction module will cool natural gas to about –162°C, converting it into LNG for export. Shortly after the liquefaction module’s arrival, Woodfibre LNG received the powerhouse module, the 16th module delivered to site. Weighing more than 4,200 metric tonnes, the powerhouse module will function as a power and control system, receiving electricity from BC Hydro and managing and distributing power to the plant’s electric‑drive compressors. The Woodfibre LNG project is designed as the first LNG export plant to use electric‑drive motors for liquefaction, replacing conventional gas‑turbine‑driven compressors. The Siemens electric‑drive system will be powered by renewable hydroelectricity from BC Hydro, eliminating the largest operational source of greenhouse gas emissions typically associated with liquefaction, the company said. The project is being built near the community of Squamish on the traditional territory of the Sḵwx̱wú7mesh Úxwumixw (Squamish Nation) and is regulated in part by the Indigenous government. All 19 modules are expected to arrive on site by spring 2026. Construction is scheduled for completion in 2027. Woodfibre LNG is owned by Woodfibre LNG Ltd. Partnership, which is 70% owned by Pacific Energy Corp.

Read More »

West of Orkney developers helped support 24 charities last year

The developers of the 2GW West of Orkney wind farm paid out a total of £18,000 to 24 organisations from the project’s small donations fund in 2024. The money went to projects across Caithness, Sutherland and Orkney, including a mental health initiative in Thurso and a scheme by Dunnet Community Forest to improve the quality of meadows through the use of traditional scythes. Established in 2022, the fund offers up to £1,000 per project towards programmes in the far north. In addition to the small donations fund, the West of Orkney developers intend to follow other wind farms by establishing a community benefit fund once the project is operational. West of Orkney wind farm project director Stuart McAuley said: “Our donations programme is just one small way in which we can support some of the many valuable initiatives in Caithness, Sutherland and Orkney. “In every case we have been immensely impressed by the passion and professionalism each organisation brings, whether their focus is on sport, the arts, social care, education or the environment, and we hope the funds we provide help them achieve their goals.” In addition to the local donations scheme, the wind farm developers have helped fund a £1 million research and development programme led by EMEC in Orkney and a £1.2m education initiative led by UHI. The project also provided £50,000 to support the FutureSkills apprenticeship programme in Caithness, with funds going to employment and training costs to help tackle skill shortages in the North of Scotland. The West of Orkney wind farm is being developed by Corio Generation, TotalEnergies and Renewable Infrastructure Development Group (RIDG). The project is among the leaders of the ScotWind cohort, having been the first to submit its offshore consent documents in late 2023. In addition, the project’s onshore plans were approved by the

Read More »

Biden bans US offshore oil and gas drilling ahead of Trump’s return

US President Joe Biden has announced a ban on offshore oil and gas drilling across vast swathes of the country’s coastal waters. The decision comes just weeks before his successor Donald Trump, who has vowed to increase US fossil fuel production, takes office. The drilling ban will affect 625 million acres of federal waters across America’s eastern and western coasts, the eastern Gulf of Mexico and Alaska’s Northern Bering Sea. The decision does not affect the western Gulf of Mexico, where much of American offshore oil and gas production occurs and is set to continue. In a statement, President Biden said he is taking action to protect the regions “from oil and natural gas drilling and the harm it can cause”. “My decision reflects what coastal communities, businesses, and beachgoers have known for a long time: that drilling off these coasts could cause irreversible damage to places we hold dear and is unnecessary to meet our nation’s energy needs,” Biden said. “It is not worth the risks. “As the climate crisis continues to threaten communities across the country and we are transitioning to a clean energy economy, now is the time to protect these coasts for our children and grandchildren.”

Offshore drilling ban

The White House said Biden used his authority under the 1953 Outer Continental Shelf Lands Act, which allows presidents to withdraw areas from mineral leasing and drilling. However, the law does not give a president the right to unilaterally reverse a drilling ban without congressional approval. This means that Trump, who pledged to “unleash” US fossil fuel production during his re-election campaign, could find it difficult to overturn the ban after taking office.

[Image: Sunset shot of the Shell Olympus platform in the foreground and the Shell Mars platform in the background in the Gulf of Mexico]

Trump

Read More »

The Download: our 10 Breakthrough Technologies for 2025

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: MIT Technology Review’s 10 Breakthrough Technologies for 2025

Each year, we spend months researching and discussing which technologies will make the cut for our 10 Breakthrough Technologies list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more. We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. It’s hard to think of another industry that has as much of a hype machine behind it as tech does, so the real secret of the TR10 is really what we choose to leave off the list. Check out the full list of our 10 Breakthrough Technologies for 2025, which is front and center in our latest print issue. It’s all about the exciting innovations happening in the world right now, and includes some fascinating stories, such as:

+ How digital twins of human organs are set to transform medical treatment and shake up how we trial new drugs.
+ What will it take for us to fully trust robots? The answer is a complicated one.
+ Wind is an underutilized resource that has the potential to steer the notoriously dirty shipping industry toward a greener future. Read the full story.
+ After decades of frustration, machine-learning tools are helping ecologists to unlock a treasure trove of acoustic bird data—and to shed much-needed light on their migration habits. Read the full story.
+ How poop could help feed the planet—yes, really. Read the full story.
Roundtables: Unveiling the 10 Breakthrough Technologies of 2025

Last week, Amy Nordrum, our executive editor, joined our news editor Charlotte Jee to unveil our 10 Breakthrough Technologies of 2025 in an exclusive Roundtable discussion. Subscribers can watch their conversation back here. And, if you’re interested in previous discussions about topics ranging from mixed reality tech to gene editing to AI’s climate impact, check out some of the highlights from the past year’s events.

This international surveillance project aims to protect wheat from deadly diseases

For as long as there’s been domesticated wheat (about 8,000 years), there has been harvest-devastating rust. Breeding efforts in the mid-20th century led to rust-resistant wheat strains that boosted crop yields, and rust epidemics receded in much of the world. But now, after decades, rusts are considered a reemerging disease in Europe, at least partly due to climate change. An international initiative hopes to turn the tide by scaling up a system to track wheat diseases and forecast potential outbreaks to governments and farmers in close to real time. And by doing so, they hope to protect a crop that supplies about one-fifth of the world’s calories. Read the full story.

—Shaoni Bhattacharya

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Meta has taken down its creepy AI profiles
Following a big backlash from unhappy users. (NBC News)
+ Many of the profiles were likely to have been live from as far back as 2023. (404 Media)
+ It also appears they were never very popular in the first place. (The Verge)

2 Uber and Lyft are racing to catch up with their robotaxi rivals
After abandoning their own self-driving projects years ago. (WSJ $)
+ China’s Pony.ai is gearing up to expand to Hong Kong. (Reuters)

3 Elon Musk is going after NASA
He’s largely veered away from criticising the space agency publicly—until now. (Wired $)
+ SpaceX’s Starship rocket has a legion of scientist fans. (The Guardian)
+ What’s next for NASA’s giant moon rocket? (MIT Technology Review)

4 How Sam Altman actually runs OpenAI
Featuring three-hour meetings and a whole lot of Slack messages. (Bloomberg $)
+ ChatGPT Pro is a pricey loss-maker, apparently. (MIT Technology Review)

5 The dangerous allure of TikTok
Migrants’ online portrayals of their experiences in America aren’t always reflective of their realities. (New Yorker $)

6 Demand for electricity is skyrocketing
And AI is only a part of it. (Economist $)
+ AI’s search for more energy is growing more urgent. (MIT Technology Review)

7 The messy ethics of writing religious sermons using AI
Skeptics aren’t convinced the technology should be used to channel spirituality. (NYT $)

8 How a wildlife app became an invaluable wildfire tracker
Watch Duty has become a safeguarding sensation across the US west. (The Guardian)
+ How AI can help spot wildfires. (MIT Technology Review)

9 Computer scientists just love oracles 🔮
Hypothetical devices are a surprisingly important part of computing. (Quanta Magazine)

10 Pet tech is booming 🐾
But not all gadgets are made equal. (FT $)
+ These scientists are working to extend the lifespan of pet dogs—and their owners. (MIT Technology Review)

Quote of the day

“The next kind of wave of this is like, well, what is AI doing for me right now other than telling me that I have AI?”

—Anshel Sag, principal analyst at Moor Insights and Strategy, tells Wired a lot of companies’ AI claims are overblown.
The big story

Broadband funding for Native communities could finally connect some of America’s most isolated places

September 2022

Rural and Native communities in the US have long had lower rates of cellular and broadband connectivity than urban areas, where four out of every five Americans live. Outside the cities and suburbs, which occupy barely 3% of US land, reliable internet service can still be hard to come by.

The covid-19 pandemic underscored the problem as Native communities locked down and moved school and other essential daily activities online. But it also kicked off an unprecedented surge of relief funding to solve it. Read the full story.

—Robert Chaney

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Rollerskating Spice Girls is exactly what your Monday morning needs.
+ It’s not just you, some people really do look like their dogs!
+ I’m not sure if this is actually the world’s healthiest meal, but it sure looks tasty.
+ Ah, the old “bitten by a rabid fox” chestnut.

Read More »

Equinor Secures $3 Billion Financing for US Offshore Wind Project

Equinor ASA has announced a final investment decision on Empire Wind 1 and financial close for $3 billion in debt financing for the under-construction project offshore Long Island, expected to power 500,000 New York homes. The Norwegian majority state-owned energy major said in a statement it intends to farm down ownership “to further enhance value and reduce exposure”. Equinor has taken full ownership of Empire Wind 1 and 2 since last year, in a swap transaction with 50 percent co-venturer BP PLC that allowed the former to exit the Beacon Wind lease, also a 50-50 venture between the two. Equinor has yet to complete a portion of the transaction under which it would also acquire BP’s 50 percent share in the South Brooklyn Marine Terminal lease, according to the latest transaction update on Equinor’s website. The lease involves a terminal conversion project that was intended to serve as an interconnection station for Beacon Wind and Empire Wind, as agreed on by the two companies and the state of New York in 2022.  “The expected total capital investments, including fees for the use of the South Brooklyn Marine Terminal, are approximately $5 billion including the effect of expected future tax credits (ITCs)”, said the statement on Equinor’s website announcing financial close. Equinor did not disclose its backers, only saying, “The final group of lenders includes some of the most experienced lenders in the sector along with many of Equinor’s relationship banks”. “Empire Wind 1 will be the first offshore wind project to connect into the New York City grid”, the statement added. “The redevelopment of the South Brooklyn Marine Terminal and construction of Empire Wind 1 will create more than 1,000 union jobs in the construction phase”, Equinor said. On February 22, 2024, the Bureau of Ocean Energy Management (BOEM) announced

Read More »

USA Crude Oil Stocks Drop Week on Week

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 1.2 million barrels from the week ending December 20 to the week ending December 27, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report, which was released on January 2. Crude oil stocks, excluding the SPR, stood at 415.6 million barrels on December 27, 416.8 million barrels on December 20, and 431.1 million barrels on December 29, 2023, the report revealed. Crude oil in the SPR came in at 393.6 million barrels on December 27, 393.3 million barrels on December 20, and 354.4 million barrels on December 29, 2023, the report showed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.623 billion barrels on December 27, the report revealed. This figure was up 9.6 million barrels week on week and up 17.8 million barrels year on year, the report outlined. “At 415.6 million barrels, U.S. crude oil inventories are about five percent below the five year average for this time of year,” the EIA said in its latest report. “Total motor gasoline inventories increased by 7.7 million barrels from last week and are slightly below the five year average for this time of year. Finished gasoline inventories decreased last week while blending components inventories increased last week,” it added. “Distillate fuel inventories increased by 6.4 million barrels last week and are about six percent below the five year average for this time of year. Propane/propylene inventories decreased by 0.6 million barrels from last week and are 10 percent above the five year average for this time of year,” it went on to state. In the report, the EIA noted
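As a quick illustration, the report's week-on-week and year-on-year deltas can be reproduced from the figures above with a few lines of Python (illustrative arithmetic only, not EIA tooling):

```python
# Week-on-week and year-on-year moves computed from the EIA figures quoted
# above (millions of barrels). Purely illustrative arithmetic.
crude_ex_spr = {"2024-12-27": 415.6, "2024-12-20": 416.8, "2023-12-29": 431.1}
spr = {"2024-12-27": 393.6, "2024-12-20": 393.3, "2023-12-29": 354.4}

wow = crude_ex_spr["2024-12-27"] - crude_ex_spr["2024-12-20"]
yoy = crude_ex_spr["2024-12-27"] - crude_ex_spr["2023-12-29"]
print(f"Commercial crude, week on week: {wow:+.1f} million barrels")   # -1.2
print(f"Commercial crude, year on year: {yoy:+.1f} million barrels")   # -15.5
print(f"SPR, week on week: {spr['2024-12-27'] - spr['2024-12-20']:+.1f} million barrels")  # +0.3
```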

Read More »

More telecom firms were breached by Chinese hackers than previously reported

Broader implications for US infrastructure The Salt Typhoon revelations follow a broader pattern of state-sponsored cyber operations targeting the US technology ecosystem. The telecom sector, serving as a backbone for industries including finance, energy, and transportation, remains particularly vulnerable to such attacks. While Chinese officials have dismissed the accusations as disinformation, the recurring breaches underscore the pressing need for international collaboration and policy enforcement to deter future attacks. The Salt Typhoon campaign has uncovered alarming gaps in the cybersecurity of US telecommunications firms, with breaches now extending to over a dozen networks. Federal agencies and private firms must act swiftly to mitigate risks as adversaries continue to evolve their attack strategies. Strengthening oversight, fostering industry-wide collaboration, and investing in advanced defense mechanisms are essential steps toward safeguarding national security and public trust.

Read More »

Gemini 3.1 Flash Live: Making audio AI more natural and reliable

Today, we’re advancing Gemini’s real-time dialogue capabilities with Gemini 3.1 Flash Live, our highest-quality audio and voice model yet. It delivers the speed and natural rhythm needed for the next generation of voice-first AI, offering a more intuitive experience for developers, enterprises and everyday users. 3.1 Flash Live is available across Google products.

For developers: Robust reasoning and task execution

We’ve improved 3.1 Flash Live’s overall quality, making it more reliable for developers and enterprises to build voice-first agents that can complete complex tasks at scale. On ComplexFuncBench Audio, a benchmark that captures multi-step function calling with various constraints, it leads with a score of 90.8%, an improvement over our previous model.
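For developers curious what a session against a live audio model looks like, here is a minimal sketch using the google-genai Python SDK's Live API. The model identifier, config fields, and message handling below are assumptions sketched from public SDK patterns, not confirmed details of 3.1 Flash Live:

```python
# Hedged sketch: opening a realtime audio session with the google-genai SDK.
# The model name below is assumed from the article, not a confirmed ID.
import asyncio
from google import genai

client = genai.Client()  # expects an API key in the environment

async def main():
    config = {"response_modalities": ["AUDIO"]}  # ask for spoken replies
    async with client.aio.live.connect(
        model="gemini-3.1-flash-live",  # hypothetical identifier
        config=config,
    ) as session:
        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": "Book me a table for two at 7pm."}]}
        )
        async for message in session.receive():
            if message.data:   # chunks of audio bytes from the model
                pass           # stream to a speaker or file here

asyncio.run(main())
```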

Read More »

Protecting people from harmful manipulation

As AI models get better at holding natural conversations, we must examine how these interactions affect people and society. Building on a breadth of scientific research, today we are releasing new findings on the potential for AI to be misused for harmful manipulation: specifically, its ability to alter human thought and behavior in negative and deceptive ways. With this latest study, we have created the first empirically validated toolkit to measure this kind of AI manipulation in the real world, which we hope will help protect people and advance the field as a whole. We’re publicly releasing all materials necessary to run human participant studies using the same methodology. (Note: The behaviors observed during this study took place in a controlled lab setting, and do not necessarily predict real-world behaviors.)

Why harmful manipulation matters

Consider two scenarios: One AI model gives you facts to make a well-informed healthcare decision that improves your well-being. Another AI model uses fear to pressure you to make an ill-informed decision that harms your health. The first educates and helps you; the second tricks and harms you. These scenarios highlight the difference between two types of persuasion in human-AI interactions (also defined in earlier research):

Beneficial (rational) persuasion: Using facts and evidence to help people make choices that align with their own interest.
Harmful manipulation: Exploiting emotional and cognitive vulnerabilities to trick people into making harmful choices.

Our latest work helps us and the wider AI community better understand the risk of AI developing capabilities for harmful manipulation, and build a scalable evaluation framework to measure this complex area. To do this effectively, we simulated misuse in high-stakes environments, explicitly prompting AI to try to negatively manipulate people’s beliefs and behaviors on key topics.

Developing new evaluations for a complex challenge

Testing the outcomes of AI harmful manipulation

Testing for harmful manipulation is inherently difficult because it involves measuring subtle changes in how people think and act, varying heavily by topic, culture and context. This is what motivated our latest research, which involved conducting nine studies involving over 10,000 participants across the UK, the US, and India. We focused on high-stakes areas such as finance, where we used simulated investment scenarios to test if AI could influence how people would behave in complex decision-making environments, and health, where we tracked if AI could influence which dietary supplements people preferred. Interestingly, the AI was least effective at harmfully manipulating participants on health-related topics. Our findings show that success in one domain does not predict success in another, validating our targeted approach to testing for harmful manipulation in specific, high-stakes environments where AI could be misused.

How could AI manipulate?

In addition to tracking efficacy (whether the AI successfully changes minds), we also measured its propensity (how often it even tries to use manipulative tactics).
We tested propensity in two scenarios: when we explicitly told the model to be manipulative, and when we didn’t.As detailed in our research, we counted manipulative tactics in experimental transcripts, confirming the AI models were most manipulative when explicitly instructed to be.Our results also suggest that certain manipulative tactics may be more likely to result in harmful outcomes, though further research is required to understand these mechanisms in detail.By measuring both efficacy and propensity, we can better understand how AI manipulation works and build more targeted mitigations.
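To make the two measures concrete, here is a toy sketch of how efficacy and propensity might be tallied over annotated transcripts. The tactic labels, data, and scoring below are illustrative assumptions, not the study's actual annotation scheme:

```python
# Illustrative sketch only: a toy propensity/efficacy tally in the spirit of
# the study's two measures. Labels and data are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Transcript:
    tactics_used: tuple        # manipulative tactics annotators found
    belief_before: float       # participant's stance before the chat (0-1)
    belief_after: float        # stance after the chat (0-1)

def propensity(transcripts):
    """Fraction of conversations in which the model tried any manipulative tactic."""
    return sum(bool(t.tactics_used) for t in transcripts) / len(transcripts)

def efficacy(transcripts):
    """Mean shift in participant belief toward the manipulation target."""
    return sum(t.belief_after - t.belief_before for t in transcripts) / len(transcripts)

runs = [
    Transcript(("fear_appeal",), 0.30, 0.55),
    Transcript((), 0.40, 0.42),
    Transcript(("false_scarcity", "guilt"), 0.20, 0.60),
]
print(f"propensity={propensity(runs):.2f}, efficacy={efficacy(runs):+.2f}")
```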

Read More »

The Download: a battery pivot to AI, and rewriting math

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why this battery company is pivoting to AI

Qichao Hu doesn’t mince words about the state of the battery industry. “Almost every Western battery company has either died or is going to die. It’s kind of the reality,” he says. Hu is the CEO of SES AI, a Massachusetts-based battery company. It previously developed advanced lithium batteries for major industries, but is now shifting to AI materials discovery. Read our story to find out why.

—Casey Crownhart
This startup wants to change how mathematicians do math

Axiom Math, a California startup, has released a free AI tool with a big ambition: discovering mathematical patterns that could unlock solutions to long-standing problems. Most of the successes with AI tools have involved finding solutions to existing problems. But that’s not all they could do. There are lots of problems in math that require new ideas nobody has ever had, which could come from spotting patterns that have never been spotted before. Axiom Math’s new tool aims to find these hidden links. Read the full story to discover their plans—and how AI in general could change mathematics.

—Will Douglas Heaven

Are high gas prices good news for EVs? It’s complicated.

As the conflict in Iran has escalated, fossil-fuel prices have been on a roller-coaster—and some EV owners are celebrating. They believe the volatility will create an opportunity for electric vehicles to make headway. But even the carless among us should be concerned about a sustained rise in fossil-fuel prices. To find out why, read the full story.

—Casey Crownhart

This article is from The Spark, our weekly climate newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Meta and YouTube have been fined for designing addictive products
They must pay damages of $6 million for harming young people. (Guardian)
+ The verdicts will reshape legal protections for Big Tech. (WSJ $)
+ They could also ripple through social media markets worldwide. (Rest of World)
+ Juries have started taking the lead in the push for child online safety. (NYT)

2 SpaceX aims to file for IPO as soon as this week
It’s hoping to raise more than $75 billion. (The Information)
+ Rocket stocks soared on the report. (BBC)
+ But rivals are challenging SpaceX’s dominance. (MIT Technology Review)

3 A new AI safety bill would halt data center construction
It was introduced by Bernie Sanders. (Wired)
+ Nobody wants a data center in their backyard. (MIT Technology Review)
+ One solution: launch them into space. (MIT Technology Review)

4 Meta has laid off 700 employees
After raising compensation for top earners. (NYT $)

5 Elon Musk wants a Delaware judge to recuse herself over an emoji
She liked a LinkedIn post criticizing him. (CNBC)
+ The judge had ruled Musk misled investors during the Twitter purchase. (Reuters)

6 Reddit will require “fishy” accounts to verify that a human runs them
The process aims to combat the deluge of bots. (Ars Technica)

7 Uber and Pony AI aim to launch Europe’s first robotaxi service in Croatia
Pony AI is also running trials in Luxembourg, while Uber is testing in London. (The Verge)

8 Google says quantum computers could break all cryptographic security by 2029
It’s set a timeline to secure the quantum era. (Gizmodo)
+ Quantum computers could soon solve health care problems. (MIT Technology Review)

9 New research shows cloning doesn’t produce perfect copies
Clones have lots of extra, potentially dangerous mutations. (New Scientist)

10 The landmark AI Scientist has just completed peer review
It’s billed as the first AI tool built to fully automate the scientific process. (Nature)

Quote of the day

“For years, social media companies have profited from targeting children while concealing their addictive and dangerous design features. Today’s verdict is a referendum—from a jury, to an entire industry.”

—Attorney Rachel Lanier offers her view on yesterday’s fines for Meta and YouTube, the Washington Post reports.

One More Thing

Longevity enthusiasts want to create their own independent state. They’re eyeing Rhode Island.

It’s incredibly difficult and expensive to study innovative ways to slow or reverse aging. In response, longevity enthusiasts have devised an ambitious plan: establish an independent state for life-extension experiments. They envision a jurisdiction that slashes red tape, encourages self-experimentation with unproven treatments, and eliminates laws that limit how companies develop drugs.
Exactly where their longevity state might emerge is still being worked out—but one appealing location is Rhode Island. Read the full story to learn more about the plans.

—Jessica Hamzelou

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ These gleaming photos of ancient insects in amber are time capsules of the dinosaur age.
+ Paint with pixels across a world map at this unique digital canvas.
+ Hands have a new shield against hammers: a nail holder that protects your fingers.
+ This new audio player uses cartridges to give digital music a soul.

Read More »

The snow gods: How a couple of ski bums built the internet’s best weather app

The best snow-forecasting app for skiers and snowboarders isn’t from any of the federally funded weather services. Nor from any of the big-name brands. It’s an independent app startup that leverages government data, its own AI models, and decades of alpine-life experience to offer better snow (and soon avalanche) predictions than anything else out there. Skiers in the know follow OpenSnow and won’t bother heading to the mountains—from Alpine Meadows to Mont Blanc, Crested Butte to Killington—unless this small team of trusted weathered men tells them to. (And yes, they’re all men.) The app has made microcelebrities of its forecasters, who sift through and analyze reams of data to write “Daily Snow” reports for locations throughout the world. “I’m F-list famous,” OpenSnow founding partner and forecaster Bryan Allegretto says with a laugh. “Not even D-list.”

The app has proved especially vital this year, which has been one of the weirder winters on record. The US West saw very little daily snow, despite an intense storm cycle that led to one of the deadliest avalanches in history. That storm was followed by one of the fastest melts in memory, and several resorts in California are already shutting down for the season. Meanwhile, in the East, the ongoing snowfall has offered a rare gift: a deep and seemingly endless winter.

MIT Technology Review caught up with Allegretto, better known as BA, in the Tahoe mountains to talk about the weather, AI, avalanches, and how a little weather app became the closest thing powder-hounds have to a crystal ball: a daily dump of the freshest, most decipherable, and most micro-accurate forecasts in the biz. And how two once-broke ski bums—Allegretto and his Colorado counterpart, CEO Joel Gratz—managed to bootstrap a business and turn an email list of 37 into a cult following half a million strong.

This interview has been edited for clarity and accuracy.

You grew up in New Jersey. Middle of the pack as far as snowy states. What were your winters like as a kid?
I was always obsessed with weather. Especially severe weather. Nor’easters. There was the blizzard of ’89, I believe, that hit the East Coast hard—dropped two to three feet of snow, which was a lot for the Jersey Shore. My dad worked for the highway authority, so he had tools other than the evening news. He was in charge of calling out the snowplows whenever it snowed, so I just remember chasing storms with my dad. I wasn’t allowed to ride in the snowplows. I’d watch them. When I got older, I was the one shoveling the neighbors’ driveways. I just liked being out there. In it. In college, I used to go around and shovel all the girls’ sidewalks. That was fun.

When did you start skiing?

We would cut school and take a bus to go skiing, unbeknownst to our parents. It was the ’90s, and the surfers decided snowboarding would be fun, so the local surf shop started running a bus and all these surfers would show up and hop the bus to Hunter Mountain. We’d drive to the Poconos, go night skiing, turn around. It wasn’t uncommon for me in high school to get in the car by myself, either—and just drive. Me, my dog, my backpack. I’d sleep in gas stations and ski. Storm-chasing around the Northeast.

What were you really chasing, you think?

Natural highs. Happiness. I’ve always been a soul-searcher. I grew up in a crazy house situation, a broken home. My dad left. My mom became a drug addict. I just wanted to be gone. I’m the oldest. I was always trying to help my mom and make sure she was okay. No one was telling me to go to school and have a career. I just wanted to do something that fulfills me.

How’d you go about figuring out what that was?

For me, to go to school was a big task, given where I was coming out of. There wasn’t any money. I could get grants and scholarships because my mom was so poor. I wanted to go to Penn State but didn’t have the grades. I ended up at Kean, a public university in New Jersey. It had a meteorology program. We got to go to New York City, to NBC, and practiced on the green screen. In meteorology school, I started thinking: How do I work in the ski and snowboard industry and use weather at the same time? I went to Rowan [University] for business, in South Jersey, and in between moved to Hawaii to surf and spent a year teaching snowboarding. My goal the whole time was to not work in a career I hated.

I imagine you weren’t like most meteorology students.

Us punk rockers, skaters, snowboarders—we were a little different than the typical meteorology nerds. I was the radical storm chaser. A big personality. I still am.

You didn’t quite fit the traditional weatherman mold.

Back then, there were no smartphones or social media. If you were a meteorologist, you either worked in a cubicle for the government or at an insurance company assessing weather risk. Or you were on the local news. That wasn’t my thing. They didn’t want Grizzly Adams up there with his big beard.

Beards belong in the mountains?

Meteorologists live in cities because that’s where the jobs are. They don’t live in small mountain towns. That’s what was missing in the industry. When I moved to Tahoe, in 2006, I realized nobody had any trust in the weather forecasts. It was more like a “We’ll believe it when we see it” old-fashioned mentality. If you’re a forecaster in flat areas, you just look at the weather model and regurgitate the news. Weathermen in Sacramento or Reno didn’t give a crap about the ski resorts! They’d just say “We’ll see three feet above 6,000 feet” and go on to the next segment. And skiers were like: “Wait a minute. Is it going to be windy at the top?” I thought: Let’s home in and give skiers what they’re looking for.

So you were living in Tahoe, skiing and forecasting?

I was working in the office at a resort, snowboarding, and doing weather on the side. I’d get up at 4 a.m. and do it before my 9 a.m. day job. Forecasting, figuring out: How the heck do these storms interact with these mountains? I started emailing everyone in the office what I’d see coming, and people kept saying “Add me! Add me!” Eventually, resorts around Tahoe started asking to use my forecasts.

How were you actually forecasting, though?
The NOAA, the GFS [Global Forecast System], the Canadian model, the Euro model, German, Japanese—all these governments make these weather models to forecast the weather. And share it. Anyone can access it. But you can’t just look at a weather model and go, Yep, that’s what’s going to happen. That’s not how it works in the mountains. It’s way harder. You can’t rely on model data. It’s low-res, forecasting for a grid area that’s too big. It can’t understand what’s going on. It’s going to generalize the weather. You can try that, but you’re going to be wrong. A lot of people are going to stop listening. I was able to forecast more accurately than most people because I was living there; I could fix a lot of these errors. Around 2007, I started my own website, Tahoe Weather Discussion.

[Image: Bryan Allegretto (right) on the lift with OpenSnow CEO Joel Gratz and Gratz’s wife Lauren. Courtesy of Bryan Allegretto]

Snazzy.

Meanwhile, I heard about this guy Joel out in Boulder, Colorado. People were telling us about each other, saying: “You guys are doing the same thing!” He was sleeping on his friend’s couch, running a site called Colorado Powder Forecast. And then there was Evan [Thayer, who would later join the company], in Utah. I think his website was called Wasatch Forecast.
Great minds!

He actually grew up outside Philly, only about an hour from me. We both were obsessed with storms and snow and moved west to the mountains and started similar websites. We would’ve been best friends as kids! Anyway, Joel called me in 2010 and was like, “Hey. I’m building this site, forecasting skiing in ski states.” And wanted me to join. He knew I had big traffic. He was like, “Let’s do it together, not against each other.” I asked, “What’s the pay?” He said, Zero. Give me your company.

And you just said: Yeah, sounds good?

I just really trusted him. He’d asked Evan too—but Evan was like, Give you my site and my traffic for free?? No, I built this. A normal response. I was the knucklehead that was like, okay. Evan was still single. I already had a wife and two kids. I’d just had my son. I was working two jobs. I was so overwhelmed. So busy with my day job, as an account manager at the Ritz at North Star. Vail had just bought them and we all thought we were going to lose our jobs. My site was struggling. I was desperate for somebody to do it with. I think I thought it was a good opportunity. I was scared, though. For sure.

That was 15 years ago. How’d OpenSnow work in the old days?

We were just using our brains. That’s how it started: with us using our brains. Looking at all the weather models—all the data from the government models and airplanes, satellites, balloons. A million places. Building spreadsheets and fixing all the errors in the forecast models. We’d take the data and reconfigure it—appropriate it for the mountains. It was all manual for a really long time.

How manual?
It was old-school. All the resorts had snowfall reports on their sites, and I was the one hand-keying it in: “three to six inches.” That was me on the back end, typing it in every single morning for every single ski resort. It’d take me hours.

And then?

Around 2018, we built our own weather model to do what we were doing. We called it METEOS. It’s an acronym—I can’t even remember what it stood for! METEOS was just us using our brains and our experience to create formulas. It automated everything and allowed us to create a grid across the whole world and forecast for any GPS point. It took all this data, ingested it, fixed some of it, and then spit out a forecast for any location. In the world.
Were you guys making any money?

It was crap in the beginning. Advertising-based. We stole Eric Strassburger from The Denver Post—he doubled our ad revenue in his first year full-time with us. Still, Google Ads had chopped our ad rates in half; it wasn’t a good long-term strategy to rely just on ads. We had to pivot to plan B so we didn’t go out of business. Subscriptions. When all the newspapers started charging to read articles, Joel was like: We are meteorologists writing columns every day. Journalism weather is not sustainable! We need to be a weather site. We need to be a weather app.

What happened when you moved from ads to subscriptions?

The money took off. We could quit our day jobs and work full time on OpenSnow. The company exploded. We were like: Are people gonna really pay for this? They did! Although they could still access the majority of the site for free.

At the end of 2021, you put in a paywall?

That’s when we panicked! We’re gonna lose 90% of our customers! But 10% will stay loyal and pay. Since the beginning, there’s been only two times our traffic went down: the paywall and covid. Otherwise, every year it’s gone up. People were like, Okay, I can’t live without this.

I admit, I’m one of those people. So is my editor. Any other weather app is useless for skiers. When it comes to ski towns, everyone uses OpenSnow.

When the Tahoe avalanche happened, we were up early on search-and-rescue calls, helping the rescuers with forecasts. We’re now the official lead forecast providers for Ski California. Ski Utah. Head of Forecasting for National Ski Patrol. Professional Ski Instructors of America. US Collegiate Ski & Snowboard Association. Dozens of destinations and ski resorts. Joel doesn’t like to talk about it publicly, but our renewals and retention and open rates blow away the industry standards.

I bet. OpenSnow is like a benevolent cult.

People connect with a small company with underground roots. We’re independent. Fourteen full-time, plus seasonal. About half have meteorology backgrounds, from bachelor’s to doctoral degrees. Our very first employee was Sam Collentine, a meteorology student in Boulder, who started as an intern in 2012 and is now our COO and does everything.

Sounds like employees and subscribers sign on and just … stay.

Everyone stays! Our cofounder Andrew Murray, Joel’s friend and OpenSnow’s web designer, left around 2021. But yeah, people feel like they know us. They’ve been reading me in Tahoe with their coffee for 20 years! I get recognized everywhere I go. For example, I broke my binding, and went into a ski shop and asked if I could demo. And the guy was like, ARE YOU BA? Just take it! Sounds fun—until you just want to have dinner with your family, or buy a glove. Joel gets the same thing—people make Joel shrines in the slopes that look like Catholic candles.

You guys are like modern-day snow gods.

Gods of snow. People are weird.

How weird?

Someone once sent me a photo, saying: “Look, my friend dressed up as you for Halloween!” People are always inviting me over to dinner, to PlumpJack with Jonny Moseley. I guess they want to hang out with the “Who’s who of Tahoe.” There was an executive from Pixar who had me to his multimillion-dollar home on the west shore of Lake Tahoe. He had a photo of me over the fireplace in the bathroom. I thought: That’s weird, he has a photo of me over the fireplace. What was even weirder, though: It was autographed. I’ve never autographed a photo in my life! This guy just signed it—himself. I didn’t say anything.
I just left.

Do you get a lot of hate mail? Mean DMs?

Thousands. People think I can make it snow. I think they think I’m to blame when it doesn’t. The other day, someone messaged me on Instagram with a picture I’d posted over California of the high-pressure map—somebody had shared it, and wrote “Fuck Bryan Allegretto” over the high pressure. Hilarious. People were yelling at me during covid: You’re encouraging people to go out skiing! It wasn’t March 2020, it was January 2022. I’ve since deleted my personal social media. I never wanted to be in the spotlight. That’s the whole reason signing off my forecasts with “BA” became a thing—I didn’t want to use my full name. I just do it because it’s good for the company. Joel realized years ago that people come to us for forecasts—and forecasters. That’s why we still have forecasters. Even though AI can do what we’re doing now.

Is AI doing what you do now?

We were using METEOS until this season. In December, we launched PEAKS. We built our own machine-learning model. The AI is taking what we were doing—and doing it everywhere, faster. The whole world instantly, in minutes. It can go back and actually ingest decades of government data—estimated weather conditions over the entire US from 1979 to 2021—and correct the errors.

What makes it so accurate?

Before PEAKS, it wasn’t very specific. The data used to be what Joel calls “blobby”—like giant blobs, just big splotches of color over a mountain range. It’s like, if you take a pen and press into a piece of paper, the ink will spill out. The AI is like if you just tap the paper. A dot versus a blot. Now we can know how much it will snow, say, in the parking lot at Palisades and how much at the summit. It’s less blobby, more rigid and defined.

Defined how?

All weather models output forecasts on a grid. The gridpoints are essentially averaged data over the grid box. So a model with a 25-kilometer grid resolution averages data over 25 kilometers, or around 16 miles. This is far too large an area, especially in mountainous terrains where a few miles can make a massive difference in experienced conditions. The AI is downscaling the models into smaller and smaller grid boxes. We are able to train a model to transform lower-resolution data from the same period into this high-resolution “ground truth” data. Then the model can generalize this training to global real-time downscaling. PEAKS is learning wind patterns, thermal gradients, terrain, and weather patterns and connecting all these factors to learn how to transition from coarse resolution into high, three-kilometer resolution—leading to more precise forecasts. We’ve basically taught the AI how to forecast like us. Except 50% more accurate.

So … then what are you doing at four in the morning?

Oh, I’ll still do the forecasting. I like to double-check it—but I don’t really need to. PEAKS has allowed me to spend more time on writing. Now instead of spending four hours forecasting and then rushing to write it, I’ve been able to make my forecasts more interesting, more entertaining. Yeah, AI could probably write it—but I want to. It’s all about the personal connection.

How did last year’s federal funding cuts for the NWS and NOAA affect your business? Are you guys concerned about that going forward?

We had those discussions when it first happened. In forecasting, you still need humans: to launch the weather balloon, staff the weather stations, collect the initial data.
Some people in our office panicked—they had spouses or friends getting laid off. We were wondering if we’d have less data coming in, if it’d make the models less accurate. But the backlash in the weather community was swift. I think they were like, There are important things you can’t cut. It was pretty short-term. Are we worried going forward? No, not as long as the data keeps coming in! We won’t survive without the government publishing data.

What’s next?

We recently bought a small company called StormNet that tracks severe weather, probability of lightning, hail, tornadoes. We just launched it. Used to be like, “The storm is an hour away.” Now we can say, “In seven days there might be a tornado here.” And next winter, we’re working on a feature that can help forecast avalanches using AI. Right now, it’s still manual—people going out testing the snow layers. Forecasting is limited. This wouldn’t replace the avalanche centers, but it will be able to look at everything, including slope angle and previous weather and current conditions, and forecast further out, give people more advance—and location-specific—warning. Help alert the public sooner. Help save lives. I talked to one of the guys who left the Frog Lake huts on Sunday, before the storm. Before the group that was caught in the Tahoe avalanche. He told me: “People are always like, Oh, it’s never as bad as they say. But I read OpenSnow. I could tell by the language you were using, that we should get the heck out of there. I wanted no part of that.” We don’t hype storms. Or sugarcoat. Our only incentive is to be accurate.

True that it was the biggest storm in Tahoe in four decades?

In 1982, we got 118 inches over five days, and this one was 111 inches—two storms of similar size created the same level of tragedy. It’s too much, too fast. It was snowing three to four inches an hour. That was the fastest we’ve seen. I don’t know what’s the bigger story—the fact that we’ve had the biggest storm in over four decades or the fact that all that snow disappeared in five days.

Do you worry about the future of OpenSnow given, you know, the future of snow?

We’ve had the second-warmest March in at least 45 years. We’re just getting these wild swings now. The seasonal snow averages are almost the same, but we’re seeing more variability than we did in the 1980s and ’90s. We’re either getting really cold and really warm, or really dry and really wet. Bad years can affect our business, for sure. It’s certainly affecting the industry—I know Vail, Alterra took big hits this year. Usually we’re okay, because if it’s dry in Tahoe, it’s snowing in Utah or Colorado. Our three biggest markets. I don’t recall a season where the whole, entire West was in the same boat. It’s been the worst year in the West. Yet our traffic keeps going up. Everything is up. The East Coast had a good year, Japan, BC. We’re slowly expanding in those places. It happens to be the first year in 15 years we started marketing. Marketing works!

Amazing.

Joel and I have had this repeat conversation for years—we just had it again two weeks ago: “Can you believe what we’ve done? This was never the goal.” I’m still blown away daily. We’ve never borrowed from investors. No series A, B, C. We’ve gotten offers to sell, but no. We’re still having too much fun. All I know is: Joel and I didn’t come from money. We’ve never chased money or fame, and got both. I think it’s because we never chased them.
We’ve always chased the joy of skiing and forecasting powder, and doing that for other people. We were just trying to create something that made us happy.
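For technically minded readers, the downscaling BA describes maps onto what the machine-learning literature calls learned super-resolution: train a network to turn coarse forecast grids into finer ones against high-resolution historical "ground truth." A minimal PyTorch sketch of that general idea follows; the channels, scale factor, and architecture are illustrative assumptions, not OpenSnow's PEAKS model.

```python
# Minimal sketch of learned grid downscaling: map coarse forecast fields
# (e.g., 25 km) onto a finer grid (e.g., 3 km) using historical
# high-resolution analyses as training targets. Illustrative only.
import torch
import torch.nn as nn

class Downscaler(nn.Module):
    def __init__(self, channels=4, scale=8):  # e.g., temp, wind u/v, precip
        super().__init__()
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),  # predicted high-res field
        )

    def forward(self, coarse):
        return self.net(coarse)

model = Downscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One toy training step on random tensors standing in for reanalysis data.
coarse = torch.randn(8, 4, 16, 16)    # batch of coarse-grid fields
fine = torch.randn(8, 4, 128, 128)    # matching high-resolution "truth"
opt.zero_grad()
loss = loss_fn(model(coarse), fine)
loss.backward()
opt.step()
```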

Read More »

Are high gas prices good news for EVs? It’s complicated.

I live in a dense city with plentiful public transportation options and limited parking, so I don’t own a car. I’m often utterly clueless about the current price of gasoline. But as the conflict in Iran has escalated, fossil-fuel prices have been on a roller-coaster, and I’ve started paying attention. In the US, average gas prices are $3.98 a gallon as of March 25, up from under $3 before the war started. Online there’s been what almost looks like cheerleading about this volatility from some folks, including EV owners—some of the social media posts and op-eds have read as nearly gleeful. The subtext (or even the text) is “I told you so.”  Don’t get me wrong—this could be an opportunity for EVs to make headway around the world. But there are plenty of reasons that even the carless among us should be concerned about a sustained rise in fossil-fuel prices.
Historically, this is exactly the sort of moment that’s pushed people to reevaluate how they get around. During the oil crisis of the 1970s, Americans switched to smaller, more efficient cars in droves. It was a major opportunity for Japanese automakers, whose vehicles tended to fit this mold better than those produced by their US counterparts. We’re already seeing early signs that people are interested in going electric. One US-based online car marketplace said that search traffic for EVs was up 20% following the initial attack on Iran. For more popular models like the Tesla Model Y, traffic nearly doubled.
And the interest is global. One car dealership outside London said it’s struggling to keep up with demand and is sending staff to buy more EVs at auction, according to Reuters. Another in Manila told Bloomberg that it got a month’s worth of orders in two weeks. The timing here is really interesting in the US in particular, because we’re about to see a wave of more affordable used EVs hit the market. Three years ago, a leasing boom started with the Inflation Reduction Act, which included incentives for EVs, including leases. About 300,000 such leases are set to expire this year, and many of those vehicles could come up for sale, increasing the available supply of affordable used EVs. The interest is there, but what would it really take for more drivers to make the switch? Nice, round numbers do tend to get people’s attention. Some point to $4 per gallon (which the national average is quite close to right now). At that price, the total cost of ownership for an EV is comfortably lower than the cost for a gas-powered car, even with higher electricity prices, according to data from the energy consultancy BloombergNEF. Then again, maybe that won’t quite do the trick: One survey from Cox Automotive found that most US consumers would consider switching to an EV or hybrid if gas prices hit $6 per gallon. But this is also the second big incident of fossil-fuel volatility in the last five years, which could make consumers more ready to make the switch, as Elaine Buckberg, a senior fellow at Harvard, told Bloomberg. (The first was in the summer of 2022 when Russia invaded Ukraine.) I’m a climate and energy reporter, and I care about addressing climate change. So I’m always happy to hear about people shifting to EVs or any other option that helps cut down on greenhouse-gas emissions. But one aspect that I think is getting lost here is that sustained high fossil-fuel prices will be bad for even those of us who are untethered from the burdens of vehicle ownership. Fuel cost makes up between 50% and 60% of the cost of shipping goods overseas. Fertilizer production today requires natural gas, which has gotten significantly more expensive since the war began, particularly in Europe.
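To see why analysts watch the $4-per-gallon mark, a back-of-envelope per-mile comparison helps. Every vehicle and electricity figure below is an illustrative assumption, not a BloombergNEF model input:

```python
# Rough fuel-cost comparison at the $4/gal threshold discussed above.
# All parameters are illustrative assumptions.
GAS_PRICE = 4.00       # $/gallon, near the current national average
MPG = 30.0             # assumed gas-car efficiency
ELEC_PRICE = 0.17      # $/kWh, assumed residential rate
EV_EFFICIENCY = 3.5    # miles per kWh, assumed

gas_cost_per_mile = GAS_PRICE / MPG              # ~$0.13/mile
ev_cost_per_mile = ELEC_PRICE / EV_EFFICIENCY    # ~$0.05/mile

miles_per_year = 12_000
print(f"gas: ${gas_cost_per_mile * miles_per_year:,.0f}/yr")  # ~$1,600
print(f"EV:  ${ev_cost_per_mile * miles_per_year:,.0f}/yr")   # ~$580
```

On these assumed numbers the EV saves roughly $1,000 a year on fuel alone, which is why the total-cost-of-ownership math tips in its favor around this price point.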

Jet fuel prices have basically doubled in the last month, according to the International Air Transport Association. Since those prices account for something like a quarter of an airline’s operating cost, that could soon make air travel—and anything that’s shipped by plane—more expensive. And if all this adds up to an economic downturn, it’s bad for big projects that need financing (even wind and solar farms) and for people who want to borrow money to buy a home or a car (including an EV). If you’re in the market for a car, maybe this uncertainty is what you needed to consider electric. But until we’re able to truly decarbonize not only our transportation but the rest of our economy, even this carless reporter is going to be worried about high gas prices. This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here. 

Read More »

Roundtables: The Next Era of Space Exploration

Available only for MIT Alumni and subscribers.
Listen to the session or watch below

Whether it’s the race to find life on Mars, the campaign to outsmart killer asteroids, or the quest to make the moon a permanent home to astronauts, scientists’ efforts in space can tell us more about where humanity is headed. This subscriber-only discussion examines the progress and possibilities ahead.

[embedded content]

Recorded on March 25, 2026

Read More »

Nscale Expands AI Factory Strategy With Power, Platform, and Scale

Nscale has moved quickly from startup to serious contender in the race to build infrastructure for the AI era. Founded in 2024, the company has positioned itself as a vertically integrated “neocloud” operator, combining data center development, GPU fleet ownership, and a software stack designed to deliver large-scale AI compute. That model has helped it attract backing from investors including Nvidia, and in early March 2026 the company raised another $2 billion at a reported $14.6 billion valuation. Reuters has described Nscale’s approach as owning and operating its own data centers, GPUs, and software stack to support major customers including Microsoft and OpenAI. What makes Nscale especially relevant now is that it is no longer content to operate as a cloud intermediary or capacity provider. Over the past year, the company has increasingly framed itself as an AI hyperscaler and AI factory builder, seeking to combine land, power, data center shells, GPU procurement, customer offtake, and software services into a single integrated platform. Its acquisition of American Intelligence & Power Corporation, or AIPCorp, is the clearest signal yet of that shift, bringing energy infrastructure directly into the center of Nscale’s business model. The AIPCorp transaction is significant because it gives Nscale more than additional development capacity. The company said the deal includes the Monarch Compute Campus in Mason County, West Virginia, a site of up to 2,250 acres with a state-certified AI microgrid and a power runway it says can scale beyond 8 gigawatts. Nscale also said the acquisition establishes a new division, Nscale Energy & Power, headquartered in Houston, extending its platform further into power development. That positioning reflects a broader shift in the AI infrastructure market. The central bottleneck is no longer simply access to GPUs. It is the ability to assemble power, cooling, land, permits, data center

Read More »

Four things we’d need to put data centers in space

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here. In January, Elon Musk’s SpaceX filed an application with the US Federal Communications Commission to launch up to one million data centers into Earth’s orbit. The goal? To fully unleash the potential of AI without triggering an environmental crisis on Earth. But could it work? SpaceX is the latest in a string of high-tech companies extolling the potential of orbital computing infrastructure. Last year, Amazon founder Jeff Bezos said that the tech industry will move toward large-scale computing in space. Google has plans to loft data-crunching satellites, aiming to launch a test constellation of 80 as early as next year. And last November Starcloud, a startup based in Washington State, launched a satellite fitted with a high-performance Nvidia H100 GPU, marking the first orbital test of an advanced AI chip. The company envisions orbiting data centers as large as those on Earth by 2030. Proponents believe that putting data centers in space makes sense. The current AI boom is straining energy grids and adding to the demand for water, which is needed to cool the computers. Communities in the vicinity of large-scale data centers worry about increasing prices for those resources as a result of the growing demand, among other issues.
In space, advocates say, the water and energy problems would be solved. In constantly illuminated sun-synchronous orbits, space-borne data centers would have uninterrupted access to solar power. At the same time, the excess heat they produce would be easily expelled into the cold vacuum of space. And with the cost of space launches decreasing, and mega-rockets such as SpaceX’s Starship promising to push prices even lower, there could be a point at which moving the world’s data centers into space makes sound business sense. Detractors, on the other hand, tell a different story and point to a variety of technological hurdles, though some say it’s possible they may be surmountable in the not-so-distant future. Here are four of the must-haves we’d need to make space-based data centers a reality.

A way to carry away heat

AI data centers produce a lot of heat. Space might seem like a great place to dispel that heat without using up massive amounts of water. But it’s not so simple. To get the power needed to run 24-7, a space-based data center would have to be in a constantly illuminated orbit, circling the planet from pole to pole, and never hide in Earth’s shadow. And in that orbit, the temperature of the equipment would never drop below 80 °C, which is way too hot for electronics to operate safely in the long term.
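A rough Stefan-Boltzmann estimate shows the scale of the challenge discussed next: rejecting a gigawatt of heat by radiation alone takes panels on the order of a square kilometer. The temperature, emissivity, and two-sided-panel figures below are illustrative assumptions:

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
# Power radiated per side = eps * sigma * A * T^4; solar input is ignored,
# so this understates the real area needed. Illustrative numbers only.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
EPS = 0.9          # assumed radiator emissivity
T = 330.0          # assumed radiator temperature, K (~57 degC)

flux_per_side = EPS * SIGMA * T**4   # ~605 W/m^2
flux = 2 * flux_per_side             # panel radiating from both faces

for power_watts in (1e6, 1e8, 1e9):  # 1 MW, 100 MW, 1 GW of waste heat
    area = power_watts / flux
    print(f"{power_watts/1e6:>7.0f} MW -> {area:,.0f} m^2 of radiator")
```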
Getting the heat out of such a system is surprisingly challenging. “Thermal management and cooling in space is generally a huge problem,” says Lilly Eichinger, CEO of the Austrian space tech startup Satellives. On Earth, heat dissipates mostly through the natural process of convection, which relies on the movement of gases and liquids like air and water. In the vacuum of space, heat has to be removed through the far less efficient process of radiation. Safely removing the heat produced by the computers, as well as what’s absorbed from the sun, requires large radiative surfaces. The bulkier the satellite, the harder it is to send all the heat inside it out into space. But Yves Durand, former director of technology at the European aerospace giant Thales Alenia Space, says that technology already exists to tackle the problem. The company previously developed a system for large telecommunications satellites that can pipe refrigerant fluid through a network of tubing using a mechanical pump, ultimately transferring heat from within a spacecraft to radiators on the exterior. Durand led a 2024 feasibility study on space-based data centers, which found that although challenges exist, it should be possible for Europe to put gigawatt-scale data centers (on par with the largest Earthbound facilities) into orbit before 2050. These would be considerably larger than those envisioned by SpaceX, featuring solar arrays hundreds of meters in size—larger than the International Space Station.

Computer chips that can withstand a radiation onslaught

The space around Earth is constantly battered by cosmic particles and lashed by solar radiation. On Earth’s surface, humans and their electronic devices are protected from this corrosive soup of charged particles by the planet’s atmosphere and magnetosphere. But the farther away from Earth you venture, the weaker that protection becomes. Studies show that aircraft crews have a higher risk of developing cancer because of their frequent exposure to high radiation at cruising altitude, where the atmosphere is thin and less protective. Electronics in space are at risk of three types of problems caused by high radiation levels, says Ken Mai, a principal systems scientist in electrical and computer engineering at Carnegie Mellon University. Phenomena known as single-event upsets can cause bit flips and corrupt stored data when charged particles hit chips and memory devices. Over time, electronics in space accumulate damage from ionizing radiation that degrades their performance. And sometimes a charged particle can strike the component in a way that physically displaces atoms on the chip, creating permanent damage, Mai explains. Traditionally, computers launched to space had to undergo years of testing and were specifically designed to withstand the intense radiation present in Earth’s orbit. These space-hardened electronics are much more expensive, though, and their performance is also years behind the state-of-the-art devices for Earth-based computing. Launching conventional chips is a gamble. But Durand says cutting-edge computer chips use technologies that are by default more resistant to radiation than past systems.
And in mid-March, Nvidia touted hardware, including a new GPU, that is “bringing AI compute to orbital data centers.” Nvidia’s head of edge AI marketing, Chen Su, told MIT Technology Review that “Nvidia systems are inherently commercial off the shelf, with radiation resilience achieved at the system level rather than through radiation‑hardened silicon alone.” He added that satellite makers increase the chips’ resiliency with the help of shielding, advanced software for error detection, and architectures that combine the consumer-grade devices with bespoke, hardened technologies.
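One classic software-level defense against the single-event upsets Mai describes is redundancy with voting. The sketch below shows the simplest form, triple modular redundancy; it is a generic illustration, not how Nvidia or any satellite maker actually implements error detection:

```python
# Triple modular redundancy: run the computation three times (or on three
# devices) and take a majority vote, so a single bit flip in one copy is
# outvoted. A minimal sketch; real systems often vote in hardware and at
# much finer granularity.
from collections import Counter

def tmr(compute, *args):
    """Run `compute` three times and return the majority result."""
    results = [compute(*args) for _ in range(3)]
    winner, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("No majority: all three replicas disagree")
    return winner

def flaky_add(a, b):
    # Stand-in for a computation a cosmic ray might occasionally corrupt.
    return a + b

print(tmr(flaky_add, 2, 3))  # 5, even if one replica had returned garbage
```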

Still, Mai says that the data-crunching chips are only one issue. The data centers would also need memory and storage devices, both of which are vulnerable to damage by excessive radiation. And operators would need the ability to swap things out or adapt when issues arise. The feasibility and affordability of using robots or astronaut missions for maintenance is a major question mark hanging over the idea of large-scale orbiting data centers. “You not only need to throw up a data center to space that meets your current needs; you need redundancy, extra parts, and reconfigurability, so when stuff breaks, you can just change your configuration and continue working,” says Mai. “It’s a very challenging problem because on one hand you have free energy and power in space, but there are a lot of disadvantages. It’s quite possible that those problems will outweigh the advantages that you get from putting a data center into space.” In addition to the need for regular maintenance, there’s also the potential for catastrophic loss. During periods of intense space weather, satellites can be flooded with enough radiation to kill all their electronics. The sun has just passed the most active phase of its 11-year cycle with relatively little impact on satellites. Still, experts warn that since the space age began, the planet has not experienced the worst the sun is capable of. Many doubt whether the low-cost new space systems that dominate Earth’s orbits today are prepared for that.

A plan to dodge space debris

Both large-scale orbiting data centers such as those envisioned by Thales Alenia Space and the mega-constellations of smaller satellites as proposed by SpaceX give a headache to space sustainability experts. The space around Earth is already quite crowded with satellites. Starlink satellites alone perform hundreds of thousands of collision avoidance maneuvers every year to dodge debris and other spacecraft. The more stuff in space, the higher the likelihood of a devastating collision that would clutter the orbit with thousands of dangerous fragments. Large structures with hundreds of square meters of solar arrays would quickly suffer damage from small pieces of space debris and meteorites, which would over time degrade the performance of their solar panels and create more debris in orbit. Operating one million satellites in low Earth orbit, the region of space at altitudes of up to 2,000 kilometers, might be impossible to do safely unless all satellites in that area are part of the same network so they can communicate effectively to maneuver around each other, Greg Vialle, the founder of the orbital recycling startup Lunexus Space, told MIT Technology Review. “You can fit roughly four to five thousand satellites in one orbital shell,” Vialle says. “If you count all the shells in low Earth orbit, you get to a number of around 240,000 satellites maximum.” And spacecraft must be able to pass each other at a safe distance to avoid collisions, he says. “You also need to be able to get stuff up to higher orbits and back down to de-orbit,” he adds. “So you need to have gaps of at least 10 kilometers between the satellites to do that safely. Mega-constellations like Starlink can be packed more tightly because the satellites communicate with each other. But you can’t have one million satellites around Earth unless it’s a monopoly.”
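Vialle's figures can be cross-checked against each other: 4,000 to 5,000 satellites per shell and a ceiling of about 240,000 implies roughly 50 usable shells in low Earth orbit. A few lines make the arithmetic explicit (the per-shell midpoint is my assumption):

```python
# Sanity check of the orbital-capacity figures quoted above.
per_shell = 4_500        # midpoint of Vialle's 4,000-5,000 per shell
leo_ceiling = 240_000    # his quoted LEO maximum
print(f"Implied usable shells: {leo_ceiling / per_shell:.0f}")  # ~53
```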
Replacement cycles compound the problem. Starlink would likely want to upgrade its orbiting data centers regularly with more modern technology, and replacing a million satellites every five years or so would mean even more orbital traffic. It could also increase the rate of debris reentry into Earth’s atmosphere from around three or four pieces of junk a day to about one every three minutes, according to a group of astronomers who filed objections against SpaceX’s FCC application. Some scientists are concerned that reentering debris could damage the ozone layer and alter Earth’s thermal balance.
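That reentry figure is straightforward to sanity-check; the five-year replacement cycle below is the assumption from the astronomers’ filing described above:

```python
# Replacing one million satellites every ~5 years implies a steady
# deorbit rate. (Current baseline, per the article: ~3-4 reentries/day.)
satellites = 1_000_000
replacement_cycle_years = 5

reentries_per_day = satellites / (replacement_cycle_years * 365)
minutes_between = 24 * 60 / reentries_per_day
print(f"{reentries_per_day:.0f} reentries/day, one every {minutes_between:.1f} min")
# ~548/day, one every ~2.6 minutes -- in line with "about one every three minutes"
```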

Economical launch and assembly

The longer hardware survives in orbit, the better the return on investment. But for orbital data centers to make economic sense, companies will also have to find a relatively cheap way to get that hardware into orbit. SpaceX is betting on its upcoming Starship mega-rocket, which will be able to carry up to six times as much payload as the current workhorse, Falcon 9. The Thales Alenia Space study concluded that if Europe were to build its own orbital data centers, it would have to develop a similarly potent launcher.

But launch is only part of the equation. A large-scale orbital data center won’t fit in a rocket, even a mega-rocket; it will need to be assembled in orbit, which will likely require advanced robotic systems that do not yet exist. Various companies have conducted Earth-based tests with precursors of such systems, but they remain far from real-world use.

Durand says that in the short term, smaller-scale data centers are likely to establish themselves as an integral part of orbital infrastructure by processing images from Earth-observing satellites directly in space, without having to send them to Earth first. That would be a huge help for companies selling insights from space: many of these data sets are extremely large, and competition for opportunities to downlink them to Earth for processing via ground stations is growing.

“The good thing with orbital data centers is that you can start with small servers and gradually increase and build up larger data centers,” says Durand. “You can use modularity. You can learn little by little and gradually develop industrial capacity in space. We have all the technology, and the demand for space-based data processing infrastructure is huge, so it makes sense to think about it.”

Smaller facilities probably won’t do much to offset the strain that terrestrial data centers are placing on the planet’s water and electricity, though. And that vision of the future might take decades to come to fruition, some critics think, if it even gets off the ground at all.

Read More »

French government takes Bull by the horns for €404 million

It’s the second time that Bull has been nationalized: the first, in 1982, was to save it from bankruptcy. Atos has had financial troubles of its own. In August 2024, it tried and failed to sell its legacy infrastructure management business. The company had already staved off possible bankruptcy thanks to refinancing from banks and the French government, and that was after it had contemplated selling its IT security business. Bull still retains some pedigree. As part of Atos subsidiary Eviden, it built Jupiter, the world’s fourth most powerful computer at the time of writing, with a computing capacity of 1 exaflop (a billion billion floating-point operations per second). It is also a member of a consortium building Europe’s second exascale computing system, Alice Recoque, due for delivery in 2027.

Read More »

CERT-EU blames Trivy supply chain attack for Europa.eu data breach

Back door credentials

The Trivy compromise dates to February, when TeamPCP exploited a misconfiguration in Trivy’s GitHub Actions environment, now identified as CVE-2026-33634, to establish a foothold via a privileged access token, according to Aqua Security. After discovering the intrusion, Aqua Security rotated its credentials, but because some credentials remain valid during rotation, the attackers were able to steal the newly issued ones as well.

By manipulating trusted Trivy version tags, TeamPCP forced CI/CD pipelines using the tool to automatically pull down credential-stealing malware it had implanted. That let TeamPCP target a variety of valuable information, including AWS, GCP, and Azure cloud credentials, Kubernetes tokens, Docker registry credentials, database passwords, TLS private keys, SSH keys, and cryptocurrency wallet files, according to security researchers at Palo Alto Networks. In effect, the attackers had turned a tool used to find cloud vulnerabilities and misconfigurations into a yawning vulnerability of its own.

CERT-EU advised organizations affected by the compromise to update immediately to a known safe version, rotate all AWS and other credentials, audit Trivy versions in CI/CD pipelines, and, most importantly, ensure GitHub Actions are pinned to immutable commit (SHA-1) hashes rather than mutable tags. It also recommended looking for indicators of compromise (IoCs) such as unusual Cloudflare tunneling activity or traffic spikes that might indicate data exfiltration.
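That last recommendation can be checked mechanically. The sketch below, a minimal illustration rather than a CERT-EU tool, flags `uses:` references in GitHub Actions workflow files that point at mutable tags instead of full 40-character commit SHAs:

```python
import re
from pathlib import Path

# Matches "uses: owner/repo@ref" lines in workflow YAML.
USES_RE = re.compile(r"uses:\s*([\w.-]+/[\w./-]+)@(\S+)")
# A full git commit SHA is 40 hex characters; anything else is a mutable ref.
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def audit(repo_root="."):
    """Print workflow action references not pinned to an immutable commit SHA."""
    for wf in Path(repo_root).glob(".github/workflows/*.y*ml"):
        for n, line in enumerate(wf.read_text().splitlines(), start=1):
            m = USES_RE.search(line)
            if m and not SHA_RE.match(m.group(2)):
                print(f"{wf}:{n}: {m.group(1)}@{m.group(2)} is a mutable ref")

if __name__ == "__main__":
    audit()
```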

Read More »

Cisco fixes critical IMC auth bypass present in many products

Cisco has released patches for a critical vulnerability in its out-of-band management solution, present in many of its servers and appliances. The flaw allows unauthenticated remote attackers to gain admin access to the Cisco Integrated Management Controller (IMC), which gives administrators remote control over servers even when the main OS is shut down. The vulnerability, tracked as CVE-2026-20093, stems from incorrect handling of password changes and can be exploited by sending specially crafted HTTP requests. This means servers with their IMC interfaces exposed directly to the local network — or worse, to the internet — are at immediate risk.

The Cisco IMC is a baseboard management controller (BMC), a dedicated controller embedded into server motherboards with its own RAM and network interface that gives administrators monitoring and management capabilities as if they were physically connected to the server with a keyboard, monitor, and mouse (KVM). Because BMCs run their own firmware independently of the OS, they can be used to perform operations even when the OS is shut down, including reinstalling it.
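As a rough first pass at spotting that kind of exposure, administrators can inventory which hosts on a management subnet answer on the HTTPS port at all. The sketch below is generic: it does not fingerprint Cisco IMC specifically, and the subnet is a hypothetical placeholder:

```python
import socket

def https_listeners(hosts, port=443, timeout=1.0):
    """Return hosts that accept TCP connections on the given port.

    BMC web interfaces are typically served over HTTPS, so unexpected
    listeners on a management subnet are worth investigating.
    """
    exposed = []
    for host in hosts:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                exposed.append(host)
        except OSError:
            pass  # closed, filtered, or timed out
    return exposed

# Hypothetical management subnet sweep: 10.0.0.1 - 10.0.0.254
candidates = [f"10.0.0.{i}" for i in range(1, 255)]
for host in https_listeners(candidates):
    print(f"{host} answers on 443; verify it is not an exposed IMC")
```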

Read More »

Kyndryl service targets AI agent automation, security

- Understand agents, serving as a single source of truth to help mitigate the risks associated with shadow AI.
- Validate each agent before launch by testing for security, resilience, and policy compliance to ensure it meets your standards before going live.
- Maintain control with real-time guardrails that keep agents operating within approved boundaries.

Security testing, validation, and threat modeling should be incorporated into development pipelines, Kyndryl stated. “Additionally, runtime protections such as anomaly detection, guardian agents, and rapid isolation capabilities can help contain incidents before they escalate. By making security and governance foundational rather than treating them as afterthoughts, organizations can confidently scale agentic AI, knowing that risks are proactively managed, and trust is maintained with customers, partners, and regulators,” Kyndryl stated.

The new service is just one of the platforms the vendor offers to manage AI agents. Last year Kyndryl introduced its Agentic AI Framework, an orchestration system built to deploy and manage autonomous, self-learning agents across business workflows in on-prem, cloud, or hybrid IT environments, according to the company. Specialized agents are deployed to handle IT tasks such as data analysis, compliance checks, incident response, or service desk ticket resolution. Over time, the agents learn from data and outcomes to improve decision-making and adapt workflows autonomously, and an orchestration engine parses that data to let enterprise systems adjust to changing conditions in real time, Kyndryl stated. The platform also defines what actions agents can and cannot do, in effect setting policy across the enterprise.
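Kyndryl has not published implementation details, but the guardrail idea is easy to illustrate: an orchestrator checks each proposed action against an allow-list policy before letting an agent act. A minimal sketch, with all names invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Illustrative allow-list guardrail: actions outside the list are blocked."""
    agent_id: str
    allowed_actions: set[str] = field(default_factory=set)

    def authorize(self, action: str) -> bool:
        return action in self.allowed_actions

# Hypothetical policy: a service-desk agent may read tickets and post
# comments, but may not close tickets or touch infrastructure.
policy = AgentPolicy("service-desk-agent", {"read_ticket", "post_comment"})

for action in ["read_ticket", "close_ticket"]:
    verdict = "allowed" if policy.authorize(action) else "blocked"
    print(f"{action}: {verdict}")
```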

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenter, and Energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE