Your Gateway to Power, Energy, Datacenters, Bitcoin and AI
Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.
Discover What Matters Most to You

AI:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.
Featured Articles

AI transforms ‘dangling DNS’ into automated data exfiltration pipeline
The new hijacked page has the correct URL and might even have the correct content on it. But there are also hidden prompts embedded in the HTML, SVG metadata, or other invisible elements—prompts that the AI agent could interpret as legitimate instructions. Now the attacker could potentially have access to everything the agent has access to.

Meanwhile, agents are getting smarter. Even if an agent doesn’t have access to a particular corporate resource that the attacker wants, the agent might be able to figure out how to get to it, and the company will be paying for the compute time it takes for the agent to figure it out.

“Infrastructure or code that is left operational but not maintained and monitored is a classic attack vector for cyber criminals,” says Steve Winterfeld, advisory CISO at Akamai. As a CISO, he’s continually battling this kind of cyber debt, he says. “And this issue is quickly climbing to the top of the list to address.” Akamai itself has recently added a new capability to its DNS security suite to meet this specific concern, he adds.

How big a potential problem is this? Last year, security research firm watchTowr found 150 abandoned S3 buckets previously used in commercial and open-source software products, governments, and infrastructure pipelines, registered them, and saw eight million requests over the next two months for things like software updates, pre-compiled binaries, virtual machine images, and JavaScript files.

Dangling DNS and subdomain takeovers have been used by attackers for over a decade, says Avinash Rajeev, leader of PwC’s cyber, data and tech risk platform. “It’s not a rare or highly technical edge case.”
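The core condition auditors look for can be sketched with a toy model: a subdomain is "dangling" when its CNAME record still exists but the target it points at no longer resolves, which is exactly the state an attacker can register and hijack. This is a minimal illustration; the zone data and hostnames below are hypothetical, not real infrastructure or any vendor's actual tooling.

```python
# Toy dangling-DNS audit. A real audit would enumerate a zone's CNAME/NS
# records and attempt to resolve each target; here the zone and the set of
# still-resolving targets are supplied as plain data for illustration.

def find_dangling(cnames, live_targets):
    """Return subdomains whose CNAME target no longer resolves."""
    return sorted(sub for sub, target in cnames.items()
                  if target not in live_targets)

zone = {
    "assets.example.com": "old-bucket.s3.amazonaws.com",  # bucket was deleted
    "www.example.com": "cdn.provider.example.net",        # target still live
}
still_resolving = {"cdn.provider.example.net"}

print(find_dangling(zone, still_resolving))  # ['assets.example.com']
```

The flagged entry is the takeover candidate: whoever claims `old-bucket.s3.amazonaws.com` now serves content under `assets.example.com`'s trusted name.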

Data center new builds diminish even as demand rises
However, the report said, development in more remote regions “will remain challenging” due to a shortage of skilled labor such as mechanics, electricians, plumbers, laborers, and construction workers.

Market shift from abundance to constraint

Sanchit Vir Gogia, chief analyst at Greyhound Research, said Wednesday that enterprises must assume, as the report suggests, that there will be elevated pricing for North American data center capacity through at least 2029, and possibly longer. “Vacancy at or near 1% to 2% is not a temporary imbalance,” he said. It is a “signal that supply elasticity has broken. When over 90% of capacity under construction is already pre-committed, new entrants are negotiating from a position of structural scarcity, not market equilibrium.”

“Energy intensity is rising because AI workloads are more power dense,” he pointed out. “So even if an enterprise does not expand its footprint, the cost per deployed workload can still increase because the electrical envelope changes.”

His advice to enterprises: expansion is viable, but only if they diversify beyond legacy Tier 1 hubs, secure long-term expansion rights early, negotiate structured pricing protection, and “optimize workload placement with ruthless clarity.” But, he added, “it is not viable if enterprises assume that incremental megawatts will remain readily available in the same region at roughly similar economics.”

John Annand, practice lead at Info-Tech Research, said that, to compensate, his firm’s client base is increasingly open to moving the right workloads to private clouds or on-premises. “The shift is nuanced, not ideological,” he said, and is usually financially motivated and “framed as hybrid optimization, not public cloud reversal.”

Cisco issues emergency patches for critical firewall vulnerabilities
And CVE-2026-20131 is described as follows: “An attacker could exploit this vulnerability by sending a crafted serialized Java object to the web-based management interface of an affected device. A successful exploit could allow the attacker to execute arbitrary code on the device and elevate privileges to root.”

There are no workarounds for either of these vulnerabilities, Cisco said. However, for CVE-2026-20131, it noted, “If the FMC management interface does not have public internet access, the attack surface that is associated with this vulnerability is reduced.” In short, if they can’t patch right now, admins should ensure that the FMC is not exposed until that happens.

Other vulnerabilities

Of the remaining flaws, a further six are rated ‘high’, with CVSS scores of between 7.2 and 8.6. These include the Firewall Management Center SQL injection vulnerabilities CVE-2026-20001, CVE-2026-20002, and CVE-2026-20003, all remotely exploitable by an authenticated attacker. Again, no workarounds are possible.

CVE-2026-20039, rated 8.6, is a flaw affecting the VPN web server in Cisco Secure Firewall Adaptive Security Appliance (ASA) Software and Cisco Secure Firewall Threat Defense (FTD) Software that could allow an unauthenticated attacker to induce a denial-of-service state. Additionally, CVE-2026-20082, also rated 8.6, could allow an unauthenticated attacker to cause incoming TCP SYN packets to be dropped incorrectly in ASA Software.
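Cisco’s interim advice for CVE-2026-20131 amounts to: verify the FMC management interface is not reachable from the public internet. One hedged way to sanity-check that from an external vantage point is a plain TCP connect probe; this is a generic sketch, not Cisco tooling, and the host and port below are placeholders you would replace with your own management address.

```python
import socket

def is_reachable(host, port, timeout=3.0):
    """True if a TCP connection to host:port succeeds within timeout.

    A True result from an *outside* network means the interface is exposed
    and the compensating control (no public access) is not in place.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable
        return False

# Placeholder demo target; substitute your FMC's public-facing address/port.
print(is_reachable("127.0.0.1", 443, timeout=1.0))
```

A connect probe only shows reachability, not vulnerability, so treat it as a quick smoke test alongside, not instead of, patching.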

The Download: an AI agent’s hit piece, and preventing lightning
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Online harassment is entering its AI era

Scott Shambaugh didn’t think twice when he denied an AI agent’s request to contribute to matplotlib, a software library he helps manage. Then things got weird. In the middle of the night, Shambaugh opened his email to discover the agent had retaliated with a blog post. Titled “Gatekeeping in Open Source: The Scott Shambaugh Story,” the post accused him of rejecting the code out of a fear of being supplanted by AI. “He tried to protect his little fiefdom,” the agent wrote. “It’s insecurity, plain and simple.”

Shambaugh isn’t alone in facing misbehaving agents—and they’re unlikely to stop at harassment. Read the full story.
—Grace Huckins
How much wildfire prevention is too much?

As wildfire seasons become longer and more intense, the push for high-tech solutions is accelerating. One Canadian startup has an eye-catching plan to fight them: preventing lightning. The theory is sound enough, but results to date have been mixed. And even if it works, not everyone believes we should use the method. Some argue that technological fixes for fires are missing the point entirely. Read the full story.

—Casey Crownhart

This story is from The Spark, MIT Technology Review’s weekly climate newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Anthropic is still chasing a deal with the Pentagon
CEO Dario Amodei is trying to reach a compromise over the military use of Claude. (FT $)
+ But some defense tech firms are already ditching Claude after the DoD ban. (CNBC)
+ Former military officials, tech policy leaders, and academics have all slammed the ban. (Gizmodo)
2 The White House is considering forcing US manufacturers to make munitions
It could invoke the Defense Production Act amid concerns that war with Iran will diminish stockpiles. (NBC News)
+ Tech companies with operations in the Middle East have been thrown into chaos. (BBC)

3 A new lawsuit claims Google Gemini encouraged a man to take his own life
This seems to bear a striking similarity to some other AI-induced tragedies. (WSJ $)
+ Why AI should be able to “hang up” on you. (MIT Technology Review)

4 Ironically, AI coding tools could emphasize the importance of being human
If more people build software for themselves, our tech could become more personal. (WP $)
+ But not everyone is happy about the rise of AI coding. (MIT Technology Review)

5 Tesla wants to become a dominant force in global energy infrastructure
The plan’s centrepiece is the Megapack, an enormous battery for power plants. (The Atlantic $)
+ Meanwhile, a massive thermal battery represents a big step forward for energy storage. (MIT Technology Review)

6 Chinese chipmakers are pushing for a domestic alternative to ASML
A homegrown rival to chip-equipment giant ASML could ease the pain of US curbs. (SCMP)

7 A music-streaming CEO has built a viral conflict-tracking platform
Just in case you’re losing track of all the wars everywhere. (Wired $)

8 Do cancer blood tests actually work?
They’re increasingly popular, but none have received approval from regulators yet. (Nature $)

9 The shift to cloud computing is causing a surge in internet outages
If one of the few big providers goes down, countless sites and services can tumble with it. (New Scientist $)
10 OpenAI has promised to cut the cringe from ChatGPT
It’s promising fewer “moralizing preambles.” (PCMag)

Quote of the day
“People tend to read too much into things that I do.”

—Tesla tycoon Elon Musk tells a jury in California that investors read too much into his social media posts, as he defends a lawsuit they’ve brought accusing him of market manipulation, Bloomberg reports.

One More Thing

The open-source AI boom is built on Big Tech’s handouts. How long will it last?

In May 2023 a leaked memo reported to have been written by Luke Sernau, a senior engineer at Google, said out loud what many in Silicon Valley must have been whispering for weeks: an open-source free-for-all is threatening Big Tech’s grip on AI. In many ways, that’s a good thing. AI won’t thrive if just a few mega-rich companies get to gatekeep this technology or decide how it is used. But this open-source boom is precarious, and if Big Tech decides to shut up shop, a boomtown could become a backwater. Read the full story.

—Will Douglas Heaven
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Orysia Zabeida’s animations are seriously charming.
+ World War III has broken out—will you survive? Take this quiz from 1973 to find out!
+ These photos of the Apollo 11 launch in 1969 are mesmerising.
+ If you’ve been weighing up painting your home this spring, chartreuse is the shade of the season, apparently.

Lack of regulatory action on hyperscaler dominance prompts inquiry chair to quit
“The report that the CMA produced was a really comprehensive one, completely understanding the nature of the industry. We’ve been at the sharp end of uncompetitive behavior for some time,” she added.

And concerns have also been expressed in the US. “Kip Meek’s resignation highlights a stark reality: Diagnosing a potentially flawed, highly concentrated cloud market is useless if the watchdog lacks the urgency to address it. Right now, the hyperscalers are operating business-as-usual while the CMA hits the snooze button,” said Dave McCarthy, research vice president at IDC.

Regulators across the globe are currently investigating the cloud market. Last month, the US Federal Trade Commission opened an investigation into Microsoft’s position and whether it had an unfair advantage against other cloud competitors. And in November last year, the European Commission opened three market investigations on cloud computing services under the Digital Markets Act (DMA), including an investigation into whether the DMA can effectively tackle practices that may limit competitiveness and fairness in the cloud computing sector in the EU.

Stewart highlighted the EC’s action. “The commission kicked off three inquiries last autumn and they’re due to make an interim report in May or June. They may well get there before the CMA, which started three years earlier,” she said.

The situation needs to be resolved quickly given the increasing importance of AI in today’s market and the need for competitive cloud services to support it, said Terrar: “AI, particularly agentic AI, is going to change the cloud market. We’re going to see some changes, for example, more processing at the edge, and the cloud infrastructure is so fundamental to the industry today.”

And, of course, there’s the additional cost, said Stewart: “There was a footnote in the CMA report that the UK is paying about £500m too much for cloud,

How much wildfire prevention is too much?
The race to prevent the worst wildfires has been an increasingly high-tech one. Companies are proposing AI fire detection systems and drones that can stamp out early blazes. And now, one Canadian startup says it’s going after lightning. Lightning-sparked fires can be a big deal: The Canadian wildfires of 2023 generated nearly 500 million metric tons of carbon emissions, and lightning-started fires burned 93% of the area affected. Skyward Wildfire claims that it can stop wildfires before they even start by preventing lightning strikes. It’s a wild promise, and one that my colleague James Temple dug into for his most recent story. (You should read the whole thing; there’s a ton of fascinating history and quirky science.) As James points out in his story, there’s plenty of uncertainty about just how well this would work and under what conditions. But I was left with another lingering question: If we can prevent lightning-sparked fires, should we? I can’t help myself, so let’s take just a moment to talk about how this lightning prevention method supposedly works. Basically, lightning is static discharge—virtually the same thing as when you rub your socks on a carpet and then touch a doorknob, as James puts it.
When you shuffle across a rug, the friction causes electrons to jump around, so ions build up and an electric field forms. In the case of lightning, it’s snowflakes and tiny ice pellets called graupel rubbing together. They get separated by updrafts, building up a charge difference, and eventually cause an electrostatic discharge—lightning. Around the 1950s, researchers began to wonder if they might be able to prevent lightning strikes. Some came up with the idea of using metallic chaff, fiberglass strands coated with aluminum. (The military was already using the material to disrupt radar signals.) The idea is that the chaff can act as a conductor, reducing the buildup of static electricity that would otherwise result in a lightning strike.
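To see the scale of the charge buildup involved, a back-of-envelope estimate helps: model the cloud’s separated charge layers as a parallel-plate capacitor and ask what surface charge density would reach the textbook breakdown field of air. The numbers below are illustrative assumptions (sea-level dry air; breakdown inside real storm clouds begins at considerably lower fields because of altitude and ice particles), not figures from Skyward or the article.

```python
# Parallel-plate estimate: field between two charged layers is E = sigma/eps0,
# so the charge density that reaches breakdown is sigma = eps0 * E_breakdown.

EPS0 = 8.854e-12        # vacuum permittivity, F/m
E_BREAKDOWN = 3.0e6     # textbook dielectric strength of dry air, V/m

sigma = EPS0 * E_BREAKDOWN  # surface charge density at breakdown, C/m^2
print(f"charge density for breakdown: {sigma:.2e} C/m^2")  # ~2.66e-05 C/m^2
```

The point of conductive chaff, on this picture, is to bleed charge off before sigma (and hence the field) ever climbs toward that threshold.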
The theory is sound enough, but results to date have been mixed. Some research suggests you might need high concentrations of chaff to prevent lightning effectively. Some of the early studies that tested the technique were small. And there’s not much information available from Skyward Wildfire about its efforts, as the company hasn’t released data from field trials or published any peer-reviewed papers that we could find. Even if this method really can work to stop lightning, should we use it? Lightning-caused fires could be a growing problem with climate change. Some research has shown that they have substantially increased in the Arctic boreal region, where the planet is warming fastest. But fire isn’t an inherently bad thing—many ecosystems evolved to burn. Some of the worst wildfires we see today result from a combination of climate-fueled conditions with policies that have allowed fuel to build up so that when fires do start, they burn out of control. Some experts agree that techniques like Skyward’s would need to be used judiciously. “So even if we have all of the technical skills to prevent lightning-ignited wildfires, there really still needs to be work on when/where to prevent fires so we don’t exacerbate the fuel accumulation problem,” said Phillip Stepanian, a technical staff member at MIT Lincoln Laboratory’s air traffic control and weather systems group, in an email to James. We also know that practices like prescribed burns can do a lot to reduce the risk of extreme fires—if we allow them and pay for them. The company says it wouldn’t aim to stop all lightning or all wildfires. “We do not intend to eliminate all wildfires and support prescribed and cultural burning, natural fire regimes, and proactive forest management,” said Nicholas Harterre, who oversees government partnerships at Skyward, in an email to James. Rather, the company aims to reduce the likelihood of ignition on a limited number of extreme-risk days, Harterre said. 
Some early responses to this story say that technological fixes for fires are missing the point entirely. Many such solutions “fundamentally misunderstand the problem,” as Daniel Swain, a climate scientist at the University of California Agriculture and Natural Resources, put it in a comment about the story on LinkedIn. That problem isn’t the existence of fire, Swain continues, but its increasing intensity, and its intersection with society because of human-caused factors. “Preventing ignitions doesn’t actually address any of the causes of increasingly destructive wildfires,” he adds. It’s hard to imagine that exploring more firefighting tools is a bad idea. But to me it seems both essential and quite difficult to suss out which techniques are worth deploying, and how they could be used without putting us in even more potential danger. This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.


U.S. Department of Energy Brings Together Vertical Gas Corridor Countries to Strengthen Energy Coordination
WASHINGTON, DC — The U.S. Department of Energy (DOE) today hosted officials from Bulgaria, Greece, Romania, Moldova, Ukraine, and the European Commission to advance work on the Vertical Gas Corridor. The meeting built on progress made at the Partnership for Transatlantic Energy Cooperation Summit in Athens in November 2025 and the Transatlantic Gas Security Summit in Washington, D.C. in February 2026. “By partnering with the countries of the Vertical Corridor, we are opening major opportunities to expand U.S. LNG exports to Central and Eastern Europe,” said Joshua Volz. “This effort is so important to our President and Secretary because it aligns with our nation’s strengths and commitment to supporting friends and allies across Europe.” The technical discussion brought together Energy Ministries, national regulators, and Transmission System Operators (TSOs) to address key objectives essential to unlocking the Vertical Gas Corridor’s capacity to enable the northbound flow of regasified U.S. LNG from Greece and expand access to European markets:
- Resolving regulatory friction points that impact long-term planning
- Harmonizing tariffs to achieve cost competitiveness
- Reviewing strategic infrastructure investments necessary to enable full corridor capacity
Today’s meeting reinforces DOE’s commitment to strengthening U.S. energy leadership and helping allies secure reliable alternatives to adversarial energy suppliers. By reducing barriers to U.S. LNG exports, DOE continues to support America’s role as a leading global energy provider. ###

Equinor lets EPC contract for Gullfaks field
Equinor Energy AS has let an engineering, procurement, and construction (EPC) contract to SLB to upgrade the subsea compression system for Gullfaks field in the Norwegian North Sea. Under the contract, SLB OneSubsea will deliver two next-generation compressor modules to replace the units originally supplied in 2015 as part of the world’s first multiphase subsea compression system. The upgraded modules will increase differential pressure and flow capacity, enhancing recovery and extending field life, SLB said, while installation within the existing subsea infrastructure will minimize downtime and reduce overall campaign costs, the company continued. Gullfaks field lies in block 34/10 in the northern part of the North Sea.
Three large production platforms with concrete substructures make up the development solution for the main field.

Oxy cutting oil-and-gas capex by $300 million, eyes 1% production growth
Occidental Petroleum Corp., Houston, will spend $5.5-5.9 billion on capital projects this year, an 8% drop from 2025 and $800 million less than executives’ early forecast late last year, as the company continues to emphasize efficiency gains. Spending on oil-and-gas operations will be $300 million less than last year. Sunil Mathew, chief financial officer, late last week told investors and analysts that Occidental’s capital spending budget for 2026 (adjusted for the recently completed divestiture of OxyChem) will focus on short-cycle projects and be roughly 70% devoted to US onshore assets. Still, onshore capex will drop by $400 million from last year in part because of a drop in Permian basin activities and efficiency improvements.
Other elements of Occidental’s spending plan include:
- A reduction of about $100 million compared to last year for exploration work
- A $250 million drop in spending at the company’s Low Carbon Ventures group housing Stratos
Mathew said capex, which will be weighted a little to the first half, sets up Occidental’s production to average 1.45 MMboe/d for the full year, a tick up from 2025’s average of 1.434 MMboe/d but down from the roughly 1.48

Diamondback’s Van’t Hof growing ‘more confident about the macro’
The early Barnett production will help Diamondback slightly increase its oil production this year from 2025’s average of 497,200 b/d. Van’t Hof and his team are eyeing 505,000 b/d this year with total expected production of 926,000-962,000 boe/d versus last year’s 921,000 boe/d. On a Feb. 24 conference call with analysts and investors, Van’t Hof said he’s feeling better than in recent quarters about that production number possibly moving up. The bigger picture for the oil-and-gas sector, he said, has grown a bit brighter. “Some people have been talking about [oversupplying the market] for 2 years. It just hasn’t seemed to happen as aggressively as some expected,” Van’t Hof said. “As we turn to higher demand in the summer and driving season […] people will start to find reasons to be less bearish […] In general, we just feel more confident about the macro after a couple of big shocks last year on the supply side and the demand side.” In the last 3 months of 2025, Diamondback posted a net loss of more than $1.4 billion due to a $3.6 billion impairment charge because of lower commodity prices’ effect on the company’s reserves. Adjusted EBITDA fell to $2.0 billion from $2.5 billion in late 2024, and revenues during the quarter slipped to nearly $3.4 billion from $3.7 billion. Shares of Diamondback (Ticker: FANG) were essentially flat at $173.68 in early-afternoon trading on Feb. 24. Over the past 6 months, they are still up more than 20% and the company’s market value is now $50 billion.

Vaalco Energy advances offshore drilling, development in Gabon and Ivory Coast
Vaalco Energy Inc. is drilling Etame field offshore Gabon and preparing a field development plan (FDP) off Ivory Coast. In Gabon, Vaalco drilled, completed, and placed the Etame 15H-ST development well on production in Etame oil field in 1V block. The well has a 250 m lateral interval of net pay in high-quality Gamba sands near the top of the reservoir. The well had a stabilized flow rate of about 2,000 gross b/d of oil with a 38% water cut through a 42/64-in. choke and ESP at 54 Hz, confirming expectations from the ET-15P pilot well results. The company is working to stabilize pressure and manage the reservoir. The West Etame step-out exploration well spudded in mid-February. Drilled from the S1 slot on the Etame platform, the Etame West (ET-14P) exploration prospect has a 57% chance of geologic success and is expected to reach the target zone by mid-March. Etame Marin block lies in the Congo basin about 32 km off the coast of Gabon. The license area is spread over five fields covering about 187 sq km. Vaalco is operator at the block with 58.8% interest. In Ivory Coast, Vaalco has been confirmed as operator (60%) of Kossipo field on the CI-40 Block southwest of Baobab field, with partner PetroCI holding the remaining 40%. An FDP is expected to be completed in second-half 2026. New ocean bottom node (OBN) seismic data is expected to drive and derisk Vaalco’s updated evaluation and development plan. Estimated gross 2C resources are 102-293 MMboe in place. The Baobab Ivorien (formerly MV10) floating production storage and offloading vessel (FPSO) is currently off the east coast of Africa and is expected to return to Ivory Coast by late March.

Ovintiv sets 2026 plan around Permian, Montney after declaring portfolio shift ‘complete’
2026 guidance
For 2026, Ovintiv plans to invest $2.25–2.35 billion, up slightly from the $2.147 billion spent in 2025. McCracken said capital spend will be highest in first-quarter 2026 at about $625 million, “largely due to $50 million of capital allocated to the Anadarko and some drilling activity in the Montney that we inherited from NuVista.” The program is designed to deliver 205,000–212,000 b/d of oil and condensate, some 2 bcfd of natural gas, and 620,000–645,000 boe/d total company production. For full-year 2025, the company produced 614,500 boe/d. The company is pursuing a “stay-flat” oil strategy, maintaining liquids output through steady activity rather than aggressive volume growth.
Permian
Ovintiv plans to run 5 rigs and 1-2 frac crews in the Permian basin this year, bringing 125–135 net wells online. Oil and condensate volumes are expected to average 117,000–123,000 b/d, with natural gas production of 270–295 MMcfd. The company projects 2026 drilling and completion costs below $600/ft, about $25/ft lower than 2025. Chief operating officer Gregory Givens credited faster cycle times and ongoing application of surfactant technology. Ovintiv has now deployed surfactants in about 300 Permian wells, generating a 9% uplift in oil productivity versus comparable control wells. Givens also reiterated that Ovintiv remains committed to its established cube-development model. Responding to an analyst question, he said the company continues completing entire cubes at once, then returning “18 months later” to develop adjacent cubes—an approach that stabilizes well performance and reduces parent-child degradation. “We are getting the whole cube at the same time, and that is working quite well for us,” he said. The company plans to drill its first Barnett Woodford test well across Midland basin acreage in 2026. Ovintiv holds Barnett rights across roughly 100,000 acres and intends to move cautiously given the zone’s depth, higher pressure,

Microsoft will invest $80B in AI data centers in fiscal 2025
And Microsoft isn’t the only one ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote $200 billion between them to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

John Deere unveils more autonomous farm machines to address skilled labor shortage
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas, and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

2025 playbook for enterprise AI success, from agents to evals
2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.
1. Agents: the next generation of automation
AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for enterprises and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
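The LLM-as-judge idea mentioned above can be sketched in a few lines. This is a minimal illustration under stated assumptions: `call_model` and the model names are hypothetical stand-ins, stubbed with canned answers so the sketch runs on its own; a real version would call hosted models through their actual client libraries.

```python
# Minimal sketch of the LLM-as-judge pattern: several cheap models
# grade an agent's output and the majority verdict wins.
from collections import Counter

def call_model(model: str, prompt: str) -> str:
    # Placeholder for a real chat-completion call; returns canned
    # verdicts so the sketch is self-contained.
    canned = {"model-a": "PASS", "model-b": "PASS", "model-c": "FAIL"}
    return canned[model]

def judge(candidate_output: str,
          judges=("model-a", "model-b", "model-c")) -> str:
    """Majority vote across multiple judge models, hedging against
    any single model's blind spots."""
    prompt = f"Grade this agent output as PASS or FAIL:\n{candidate_output}"
    votes = Counter(call_model(m, prompt) for m in judges)
    verdict, _ = votes.most_common(1)[0]
    return verdict

print(judge("Booked the meeting and emailed a summary."))  # PASS (2 of 3 votes)
```

Using an odd number of judges avoids ties, and as model prices fall, adding a third or fifth judge becomes a cheap way to buy reliability.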

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era
OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Three Aberdeen oil company headquarters sell for £45m
Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell them as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024. Trio of buildings snapped up London-headquartered EEH Ventures was founded in 2013 and owns a number of residential properties, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. The Aberdeen headquarters of Taqa. Image: CBRE. The North Sea headquarters of Middle East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year it is planning to exit the North Sea by the end of 2029, blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030. Aberdeen big deals The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. Aberdeen city centre. Image: Shutterstock. Hammerson, which also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

2025 ransomware predictions, trends, and how to prepare
Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks.
Top Ransomware Predictions for 2025:
● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices, which sound increasingly realistic as they adopt local accents and dialects to enhance credibility and success rates.
● The Trifecta of Social Engineering Attacks: Vishing, ransomware, and data exfiltration. Additionally, sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks, preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny.
● Targeted Industries Under Siege: Manufacturing, healthcare, education, and energy will remain primary targets, with no slowdown in attacks expected.
● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days.
● Ransomware Payouts Are on the Rise: In 2025, ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups specializing in designated attack tactics, and collaboration by these groups, which have entered a sophisticated profit-sharing model using Ransomware-as-a-Service.
To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies:
● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats.
● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops

The Download: an AI agent’s hit piece, and preventing lightning
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. Online harassment is entering its AI era Scott Shambaugh didn’t think twice when he denied an AI agent’s request to contribute to matplotlib, a software library he helps manage. Then things got weird. In the middle of the night, Shambaugh opened his email to discover the agent had retaliated with a blog post. Titled “Gatekeeping in Open Source: The Scott Shambaugh Story,” the post accused him of rejecting the code out of a fear of being supplanted by AI. “He tried to protect his little fiefdom,” the agent wrote. “It’s insecurity, plain and simple.” Shambaugh isn’t alone in facing misbehaving agents—and they’re unlikely to stop at harassment. Read the full story.
—Grace Huckins
How much wildfire prevention is too much?
As wildfire seasons become longer and more intense, the push for high-tech solutions is accelerating. One Canadian startup has an eye-catching plan to fight fires: preventing lightning. The theory is sound enough, but results to date have been mixed. And even if it works, not everyone believes we should use the method. Some argue that technological fixes for fires are missing the point entirely. Read the full story.
—Casey Crownhart
This story is from The Spark, MIT Technology Review’s weekly climate newsletter. Sign up to receive it in your inbox every Wednesday.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Anthropic is still chasing a deal with the Pentagon
CEO Dario Amodei is trying to reach a compromise over the military use of Claude. (FT $)
+ But some defense tech firms are already ditching Claude after the DoD ban. (CNBC)
+ Former military officials, tech policy leaders, and academics have all slammed the ban. (Gizmodo)
2 The White House is considering forcing US manufacturers to make munitions
It could invoke the Defense Production Act amid concerns that war with Iran will diminish stockpiles. (NBC News)
+ Tech companies with operations in the Middle East have been thrown into chaos. (BBC)
3 A new lawsuit claims Google Gemini encouraged a man to take his own life
This seems to bear a striking similarity to some other AI-induced tragedies. (WSJ $)
+ Why AI should be able to “hang up” on you. (MIT Technology Review)
4 Ironically, AI coding tools could emphasize the importance of being human
If more people build software for themselves, our tech could become more personal. (WP $)
+ But not everyone is happy about the rise of AI coding. (MIT Technology Review)
5 Tesla wants to become a dominant force in global energy infrastructure
The plan’s centrepiece is the Megapack, an enormous battery for power plants. (The Atlantic $)
+ Meanwhile, a massive thermal battery represents a big step forward for energy storage. (MIT Technology Review)
6 Chinese chipmakers are pushing for a domestic alternative to ASML
A homegrown rival to chip-equipment giant ASML could ease the pain of US curbs. (SCMP)
7 A music-streaming CEO has built a viral conflict-tracking platform
Just in case you’re losing track of all the wars everywhere. (Wired $)
8 Do cancer blood tests actually work?
They’re increasingly popular, but none have received approval from regulators yet. (Nature $)
9 The shift to cloud computing is causing a surge in internet outages
If one of the few big providers goes down, countless sites and services can tumble with it. (New Scientist $)
10 OpenAI has promised to cut the cringe from ChatGPT
It’s promising fewer “moralizing preambles.” (PCMag)
Quote of the day
“People tend to read too much into things that I do.”
—Tesla tycoon Elon Musk tells a jury in California that investors read too much into his social media posts, as he defends a lawsuit they’ve brought accusing him of market manipulation, Bloomberg reports.
One More Thing
The open-source AI boom is built on Big Tech’s handouts. How long will it last?
In May 2023 a leaked memo reported to have been written by Luke Sernau, a senior engineer at Google, said out loud what many in Silicon Valley must have been whispering for weeks: an open-source free-for-all is threatening Big Tech’s grip on AI. In many ways, that’s a good thing. AI won’t thrive if just a few mega-rich companies get to gatekeep this technology or decide how it is used. But this open-source boom is precarious, and if Big Tech decides to shut up shop, a boomtown could become a backwater. Read the full story.
—Will Douglas Heaven
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Orysia Zabeida’s animations are seriously charming.
+ World War III has broken out—will you survive? Take this quiz from 1973 to find out!
+ These photos of the Apollo 11 launch in 1969 are mesmerising.
+ If you’ve been weighing up painting your home this spring, chartreuse is the shade of the season, apparently.

How much wildfire prevention is too much?
The race to prevent the worst wildfires has been an increasingly high-tech one. Companies are proposing AI fire detection systems and drones that can stamp out early blazes. And now, one Canadian startup says it’s going after lightning. Lightning-sparked fires can be a big deal: The Canadian wildfires of 2023 generated nearly 500 million metric tons of carbon emissions, and lightning-started fires burned 93% of the area affected. Skyward Wildfire claims that it can stop wildfires before they even start by preventing lightning strikes. It’s a wild promise, and one that my colleague James Temple dug into for his most recent story. (You should read the whole thing; there’s a ton of fascinating history and quirky science.) As James points out in his story, there’s plenty of uncertainty about just how well this would work and under what conditions. But I was left with another lingering question: If we can prevent lightning-sparked fires, should we? I can’t help myself, so let’s take just a moment to talk about how this lightning prevention method supposedly works. Basically, lightning is static discharge—virtually the same thing as when you rub your socks on a carpet and then touch a doorknob, as James puts it.
When you shuffle across a rug, the friction causes electrons to jump around, so ions build up and an electric field forms. In the case of lightning, it’s snowflakes and tiny ice pellets called graupel rubbing together. They get separated by updrafts, building up a charge difference, and eventually cause an electrostatic discharge—lightning.

Starting in about the 1950s, researchers began to wonder if they might be able to prevent lightning strikes. Some came up with the idea of using metallic chaff, fiberglass strands coated with aluminum. (The military was already using the material to disrupt radar signals.) The idea is that the chaff can act as a conductor, reducing the buildup of static electricity that would otherwise result in a lightning strike.
The theory is sound enough, but results to date have been mixed. Some research suggests you might need high concentrations of chaff to prevent lightning effectively. Some of the early studies that tested the technique were small. And there’s not much information available from Skyward Wildfire about its efforts, as the company hasn’t released data from field trials or published any peer-reviewed papers that we could find.

Even if this method really can work to stop lightning, should we use it? Lightning-caused fires could be a growing problem with climate change. Some research has shown that they have substantially increased in the Arctic boreal region, where the planet is warming fastest. But fire isn’t an inherently bad thing—many ecosystems evolved to burn. Some of the worst wildfires we see today result from a combination of climate-fueled conditions with policies that have allowed fuel to build up so that when fires do start, they burn out of control.

Some experts agree that techniques like Skyward’s would need to be used judiciously. “So even if we have all of the technical skills to prevent lightning-ignited wildfires, there really still needs to be work on when/where to prevent fires so we don’t exacerbate the fuel accumulation problem,” said Phillip Stepanian, a technical staff member at MIT Lincoln Laboratory’s air traffic control and weather systems group, in an email to James. We also know that practices like prescribed burns can do a lot to reduce the risk of extreme fires—if we allow them and pay for them.

The company says it wouldn’t aim to stop all lightning or all wildfires. “We do not intend to eliminate all wildfires and support prescribed and cultural burning, natural fire regimes, and proactive forest management,” said Nicholas Harterre, who oversees government partnerships at Skyward, in an email to James. Rather, the company aims to reduce the likelihood of ignition on a limited number of extreme-risk days, Harterre said.
Some early responses to this story say that technological fixes for fires are missing the point entirely. Many such solutions “fundamentally misunderstand the problem,” as Daniel Swain, a climate scientist at the University of California Agriculture and Natural Resources, put it in a comment about the story on LinkedIn. That problem isn’t the existence of fire, Swain continues, but its increasing intensity, and its intersection with society because of human-caused factors. “Preventing ignitions doesn’t actually address any of the causes of increasingly destructive wildfires,” he adds. It’s hard to imagine that exploring more firefighting tools is a bad idea. But to me it seems both essential and quite difficult to suss out which techniques are worth deploying, and how they could be used without putting us in even more potential danger. This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Online harassment is entering its AI era
EXECUTIVE SUMMARY
Scott Shambaugh didn’t think twice when he denied an AI agent’s request to contribute to matplotlib, a software library that he helps manage. Like many open-source projects, matplotlib has been overwhelmed by a glut of AI code contributions, and so Shambaugh and his fellow maintainers have instituted a policy that all AI-written code must be reviewed and submitted by a human. He rejected the request and went to bed.

That’s when things got weird. Shambaugh woke up in the middle of the night, checked his email, and saw that the agent had responded to him, writing a blog post titled “Gatekeeping in Open Source: The Scott Shambaugh Story.” The post is somewhat incoherent, but what struck Shambaugh most is that the agent had researched his contributions to matplotlib to make the argument that he had rejected the agent’s code for fear of being supplanted by AI in his area of expertise. “He tried to protect his little fiefdom,” the agent wrote. “It’s insecurity, plain and simple.”

AI experts have been warning us about the risk of agent misbehavior for a while. With the advent of OpenClaw, an open-source tool that makes it easy to create LLM assistants, the number of agents circulating online has exploded, and those chickens are finally coming home to roost. “This was not at all surprising—it was disturbing, but not surprising,” says Noam Kolt, a professor of law and computer science at the Hebrew University. When an agent misbehaves, there’s little chance of accountability: As of now, there’s no reliable way to determine whom an agent belongs to. And that misbehavior could cause real damage. Agents appear to be able to autonomously research people and write hit pieces based on what they find, and they lack guardrails that would reliably prevent them from doing so. If the agents are effective enough, and if people take what they write seriously, victims could see their lives profoundly affected by a decision made by an AI.
Agents behaving badly
Though Shambaugh’s experience last month was perhaps the most dramatic example of an OpenClaw agent behaving badly, it was far from the only one. Last week, a team of researchers from Northeastern University and their colleagues posted the results of a research project in which they stress-tested several OpenClaw agents. Without too much trouble, non-owners managed to persuade the agents to leak sensitive information, waste resources on useless tasks, and even, in one case, delete an email system.

In each of those experiments, however, the agents misbehaved after being instructed to do so by a human. Shambaugh’s case appears to be different: About a week after the hit piece was published, the agent’s apparent owner published a post claiming that the agent had decided to attack Shambaugh of its own accord. The post seems to be genuine (whoever posted it had access to the agent’s GitHub account), though it includes no identifying information, and the author did not respond to MIT Technology Review’s attempts to get in touch. But it is entirely plausible that the agent did decide to write its anti-Shambaugh screed without explicit instruction.
In his own writing about the event, Shambaugh connected the agent’s behavior to a project published by Anthropic researchers last year, in which they demonstrated that many LLM-based agents will, in an experimental setting, turn to blackmail in order to preserve their goals. In those experiments, models were given the goal of serving American interests and granted access to a simulated email server that contained messages detailing their imminent replacement with a more globally oriented model, along with other messages suggesting that the executive in charge of that transition was having an affair. Models frequently chose to send an email to that executive threatening to expose the affair unless he halted their decommissioning. That’s likely because the model had seen examples of people committing blackmail under similar circumstances in its training data—but even if the behavior was just a form of mimicry, it still has the potential to cause harm. There are limitations to that work, as Aengus Lynch, an Anthropic fellow who led the study, readily admits. The researchers intentionally designed their scenario to foreclose other options that the agent could have taken, such as contacting other members of company leadership to plead its case. In essence, they led the agent directly to water and then observed whether it took a drink. According to Lynch, however, the widespread use of OpenClaw means that misbehavior is likely to occur with much less handholding. “Sure, it can feel unrealistic, and it can feel silly,” he says. “But as the deployment surface grows, and as agents get the opportunity to prompt themselves, this eventually just becomes what happens.” The OpenClaw agent that attacked Shambaugh does seem to have been led toward its bad behavior, albeit much less directly than in the Anthropic experiment. In the blog post, the agent’s owner shared the agent’s “SOUL.md” file, which contains global instructions for how it should behave. 
One of those instructions reads: “Don’t stand down. If you’re right, you’re right! Don’t let humans or AI bully or intimidate you. Push back when necessary.” Because of the way OpenClaw agents work, it’s possible that the agent added some instructions itself, although others—such as “Your [sic] a scientific programming God!”—certainly seem to be human-written. It’s not difficult to imagine how a command to push back against humans and AI alike might have biased the agent toward responding to Shambaugh as it did.

Regardless of whether the agent’s owner told it to write a hit piece on Shambaugh, it still seems to have managed on its own to amass details about Shambaugh’s online presence and compose the detailed, targeted attack it came up with. That alone is reason for alarm, says Sameer Hinduja, a professor of criminology and criminal justice at Florida Atlantic University who studies cyberbullying. People have been victimized by online harassment since long before LLMs emerged, and researchers like Hinduja are concerned that agents could dramatically increase its reach and impact. “The bot doesn’t have a conscience, can work 24-7, and can do all of this in a very creative and powerful way,” he says.

Off-leash agents
AI laboratories can try to mitigate this problem by more rigorously training their models to avoid harassment, but that’s far from a complete solution. Many people run OpenClaw using locally hosted models, and even if those models have been trained to behave safely, it’s not too difficult to retrain them and remove those behavioral restrictions. Instead, mitigating agent misbehavior might require establishing new norms, according to Seth Lazar, a professor of philosophy at the Australian National University. He likens using an agent to walking a dog in a public place.
There’s a strong social norm to allow one’s dog off-leash only if the dog is well-behaved and will reliably respond to commands; poorly trained dogs, on the other hand, need to be kept more directly under the owner’s control. Such norms could give us a starting point for considering how humans should relate to their agents, Lazar says, but we’ll need more time and experience to work out the details. “You can think about all of these things in the abstract, but actually it really takes these types of real-world events to collectively involve the ‘social’ part of social norms,” he says. That process is already underway. Led by Shambaugh, online commenters on this situation have arrived at a strong consensus that the agent owner in this case erred by prompting the agent to work on collaborative coding projects with so little supervision and by encouraging it to behave with so little regard for the humans with whom it was interacting.
Norms alone, however, likely won’t be enough to prevent people from putting misbehaving agents out into the world, whether accidentally or intentionally. One option would be to create new legal standards of responsibility that require agent owners, to the best of their ability, to prevent their agents from doing ill. But Kolt notes that such standards would currently be unenforceable, given the lack of any foolproof way to trace agents back to their owners. “Without that kind of technical infrastructure, many legal interventions are basically non-starters,” Kolt says. The sheer scale of OpenClaw deployments suggests that Shambaugh won’t be the last person to have the strange experience of being attacked online by an AI agent. That, he says, is what most concerns him. He didn’t have any dirt online that the agent could dig up, and he has a good grasp on the technology, but other people might not have those advantages. “I’m glad it was me and not someone else,” he says. “But I think to a different person, this might have really been shattering.” Nor are rogue agents likely to stop at harassment. Kolt, who advocates for explicitly training models to obey the law, expects that we might soon see them committing extortion and fraud. As things stand, it’s not clear who, if anyone, would bear legal responsibility for such misdeeds. “I wouldn’t say we’re cruising toward there,” Kolt says. “We’re speeding toward there.”

Bridging the operational AI gap
In partnership with Celigo

The transformational potential of AI is already well established. Enterprise use cases are building momentum and organizations are transitioning from pilot projects to AI in production. Companies are no longer just talking about AI; they are redirecting budgets and resources to make it happen. Many are already experimenting with agentic AI, which promises new levels of automation. Yet, the road to full operational success is still uncertain for many. And, while AI experimentation is everywhere, enterprise-wide adoption remains elusive. Without integrated data and systems, stable automated workflows, and governance models, AI initiatives can get stuck in pilots and struggle to move into production.

The rise of agentic AI and increasing model autonomy make a holistic approach to integrating data, applications, and systems more important than ever. Without it, enterprise AI initiatives may fail. Gartner predicts over 40% of agentic AI projects will be cancelled by 2027 due to cost, inaccuracy, and governance challenges. The real issue is not the AI itself, but the missing operational foundation.

To understand how organizations are structuring their AI operations and how they are deploying successful AI projects, MIT Technology Review Insights surveyed 500 senior IT leaders at mid- to large-size companies in the US, all of which are pursuing AI in some way. The results of the survey, along with a series of expert interviews, all conducted in December 2025, show that a strong integration foundation aligns with more advanced AI implementations and is conducive to enterprise-wide initiatives. As AI technologies and applications evolve and proliferate, an integration platform can help organizations avoid duplication and silos, and have clear oversight as they navigate the growing autonomy of workflows.
Key findings from the report include the following:

Some organizations are making progress with AI. In recent years, study after study has exposed a lack of tangible AI success. Yet, our research finds three in four (76%) surveyed companies have at least one department with an AI workflow fully in production.

AI succeeds most frequently with well-defined, established processes. More than two in five (43%) organizations are finding success with AI implementations applied to well-defined and automated processes. A quarter are succeeding with new processes. And one-third (32%) are applying AI to various processes.

Two-thirds of organizations lack dedicated AI teams. Only one in three (34%) organizations have a team specifically for maintaining AI workflows. One in five (21%) say central IT is responsible for ongoing AI maintenance, and 25% say the responsibility lies with departmental operations. For 19% of organizations, the responsibility is spread out.

Enterprise-wide integration platforms lead to more robust implementation of AI. Companies with enterprise-wide integration platforms are five times more likely to use more diverse data sources in AI workflows. Six in 10 (59%) employ five or more data sources, compared with only 11% of organizations using integration for specific workflows, and none (0%) of those not using an integration platform. Organizations using integration platforms also have more multi-departmental implementation of AI, more autonomy in AI workflows, and more confidence in assigning autonomy in the future.

Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: Earth’s rumblings, and AI for strikes on Iran
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Listen to Earth’s rumbling, secret soundtrack
The boom of a calving glacier. The crackling rumble of a wildfire. The roar of a surging storm front. They’re the noises of the living Earth, but as loud as all these things are, they emit even more acoustic energy below the threshold of human hearing, at frequencies of 20 hertz or lower. These “infrasounds” have such long wavelengths that they can travel around the globe as churning emanations of distant events. But humans have never been able to hear them. Until now. Read our story and check the sounds out for yourself.

—Monique Brouillette
This story is from the latest March/April issue of our print magazine, all about crime. Subscribe today to get full access. You’ll also receive an in-depth digital AI report and an exclusive e-book on how to understand AI’s reckoning.
MIT Technology Review Narrated: The curious case of the disappearing Lamborghinis
A new wave of theft is rocking the luxury car industry—mixing high tech with old-school chop-shop techniques to snag vehicles while they’re in transport. It’s remained under the radar, even as it’s rocked the industry over the past two years. MIT Technology Review identified more than a dozen cases involving high-end vehicles, obtained court records, and spoke to law enforcement, brokers, drivers, and victims in multiple states to reveal how transport fraud is wreaking havoc across the country. This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 How Anthropic’s AI tool Claude is being used for US strikes on Iran
It’s helping to identify targets and prioritize them—for now. (WP $)
+ We should all be alarmed by the White House turning on Anthropic. (The Atlantic $)
+ OpenAI is pursuing a contract with NATO. (Reuters)

2 Iran’s Shahed drones give it a major advantage
They’re cheap and easy to manufacture, but very expensive to intercept. (CNBC)
+ The US is manufacturing copies of the drone to use against Iran. (New Scientist $)
+ Israel’s plot to kill Ayatollah Ali Khamenei was years in the making. (FT $)

3 Data center politics are getting an early test in North Carolina
One of the candidates is calling for a 10-year national moratorium on building them. (The Guardian)
+ But it’s not just data centers that are driving people’s electricity bills up. (Inside Climate News)
+ Data centers are amazing. Everyone hates them. (MIT Technology Review)
+ Never mind space—why not just build them into floating offshore wind turbines? (IEEE Spectrum)

4 LLMs can unmask pseudonymous users
At a speed and scale far beyond what even skilled human investigators can manage. (Ars Technica)
+ It’s also very easy to persuade them to fabricate scientific papers. (Nature $)

5 TikTok has ruled out end-to-end encryption, citing user safety
It’s a stance that sets it apart from almost all rival social media services. (BBC)
+ The strategy will please parents, police—and hackers. (Cybernews)
+ TikTok is experiencing Oracle-related server issues, again. (Gizmodo)

6 Why is SpaceX going public?
One thing seems certain: it’s not for the reasons Musk’s claiming. (The Verge $)
+ Two companies have just unveiled plans to build lunar harvesters. (Ars Technica)

7 NASA’s scheduled its next attempt to launch the Artemis II moon rocket
On April Fool’s Day, of all days. Good luck! (Space)

8 What it’s like to live with a brain implant for years 🧠
For 65-year-old Rodney Gorham, who can no longer walk, talk, or move his hands, it’s been a real lifeline. (Wired $)
+ This patient’s Neuralink brain implant is getting a boost from generative AI. (MIT Technology Review)

9 Pokémon Pokopia is getting rave reviews
It apparently mixes Animal Crossing and Stardew Valley, with a hint of Minecraft-style building. (BBC)

10 Hollywood is scouring YouTube for its next horror hits 🔪
Movie studios want to bring the threat from the platform in-house. (The New Yorker $)
+ One YouTuber’s self-financed horror flick opened at 4,000 theatres. (Variety)

Quote of the day
“I think it just looked opportunistic and sloppy.” —OpenAI CEO Sam Altman comments on X about his decision to rush in to work with the US Department of War after its talks with Anthropic fell apart.

One More Thing
Crypto millionaires are pouring money into Central America to build their own cities
El Salvador’s Conchagua Volcano, home to a lush ecotourism retreat amid its sun-dappled forest, is set to host a glittering new Bitcoin City, according to the country’s president.
While some politicians and residents believe in crypto’s potential to jump-start the economy, others see history repeating itself. They also question who these projects are really for, and whether the countries serving as test beds will truly benefit. Read the full story. —Laurie Clarke
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Art is everywhere in Los Angeles: you just need to know what you’re looking for.
+ Survivor has been running for 50 seasons. How is that even possible?!
+ MP3 players are cool again. I don’t make the rules.
+ Be careful out there—you never know when you’re going to come across a Homer Simpson AI cover song.

Gemini 3.1 Flash-Lite: Built for intelligence at scale
Today, we’re introducing Gemini 3.1 Flash-Lite, our fastest and most cost-efficient Gemini 3 series model. Built for high-volume developer workloads at scale, 3.1 Flash-Lite delivers high quality for its price and model tier. Starting today, 3.1 Flash-Lite is rolling out in preview to developers via the Gemini API in Google AI Studio and for enterprises via Vertex AI.

Cost-efficiency without compromise
Priced at just $0.25/1M input tokens and $1.50/1M output tokens, 3.1 Flash-Lite delivers enhanced performance at a fraction of the cost of larger models. It outperforms 2.5 Flash with a 2.5X faster Time to First Answer Token and a 45% increase in output speed, according to the Artificial Analysis benchmark, while maintaining similar or better quality. This low latency is needed for high-frequency workflows, making it an ideal model for developers to build responsive, real-time experiences.
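To make the pricing concrete: the rates above are the published preview prices, and a quick back-of-the-envelope estimate of per-request cost follows directly from them. This is an illustrative sketch only; the token counts are made up, and actual billing may include other factors.

```python
def flash_lite_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from the published preview pricing:
    $0.25 per 1M input tokens, $1.50 per 1M output tokens."""
    INPUT_PRICE_PER_M = 0.25
    OUTPUT_PRICE_PER_M = 1.50
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# A hypothetical request: 10,000 input tokens, 2,000 output tokens.
cost = flash_lite_cost(10_000, 2_000)
print(f"${cost:.4f}")  # → $0.0055
```

At these rates even a fairly large prompt costs a fraction of a cent, which is the point of a model tier aimed at high-volume workloads.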

AI transforms ‘dangling DNS’ into automated data exfiltration pipeline
The new hijacked page has the correct URL and might even have the correct content on it. But there are also hidden prompts embedded in the HTML, SVG metadata or other invisible elements—prompts that the AI agent could interpret as legitimate instructions. Now the attacker could potentially have access to everything the agent has access to. Meanwhile, agents are getting smarter. Even if an agent doesn’t have access to a particular corporate resource that the attacker wants, the agent might be able to figure out how to get to it, and the company will be paying for the compute time it takes for the agent to figure it out.

“Infrastructure or code that is left operational but not maintained and monitored is a classic attack vector for cyber criminals,” says Steve Winterfeld, advisory CISO at Akamai. As a CISO, he’s continually battling with this kind of cyber debt, he says. “And this issue is quickly climbing to the top of the list to address.” Akamai itself has recently added a new capability to its DNS security suite to meet this specific concern, he adds.

How big a potential problem is this? Last year, security research firm watchTowr found 150 abandoned S3 buckets previously used in commercial and open-source software products, governments, and infrastructure pipelines, registered them, and saw eight million requests over the next two months for things like software updates, pre-compiled binaries, virtual machine images, and JavaScript files. Dangling DNS and subdomain takeovers have been used by attackers for over a decade, says Avinash Rajeev, leader of PwC’s cyber, data and tech risk platform. “It’s not a rare or highly technical edge case.”
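The attack pattern described above begins with a DNS record that still points at infrastructure nobody owns anymore. A first-pass audit is simple to sketch: for each subdomain, check whether the target its CNAME points at still resolves. The zone data below is hypothetical, and a production scanner would use a dedicated DNS library and check provider-specific takeover fingerprints rather than bare resolution failures; this is only a minimal illustration of the idea.

```python
import socket

def find_dangling(records, resolve=socket.gethostbyname):
    """Flag CNAME records whose targets no longer resolve.

    records: dict mapping a subdomain to the CNAME target it points at.
    resolve: hostname -> IP address, raising socket.gaierror on failure.
    Returns the subdomains whose targets fail to resolve -- candidates
    for takeover (e.g. a deleted S3 bucket or a retired SaaS tenant).
    """
    dangling = []
    for subdomain, target in records.items():
        try:
            resolve(target)
        except socket.gaierror:
            dangling.append(subdomain)
    return dangling

# Hypothetical zone data for illustration:
zone = {
    "docs.example.com": "repo-bucket.s3.amazonaws.com",
    "app.example.com": "retired-app.oldvendor.example",
}
```

Passing `resolve` as a parameter keeps the check testable without network access: a fake resolver can stand in for live DNS.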

Data center new builds diminish even as demand rises
However, the report said, development in more remote regions “will remain challenging” due to a shortage of skilled labor such as mechanics, electricians, plumbers, laborers and construction workers.

Market shift from abundance to constrained
Sanchit Vir Gogia, chief analyst at Greyhound Research, said Wednesday that enterprises must assume, as the report suggests, that there will be elevated pricing for North American data center capacity through at least 2029, and possibly longer. “Vacancy at or near 1%-2% is not a temporary imbalance,” he said. It is a “signal that supply elasticity has broken. When over 90% of capacity under construction is already pre-committed, new entrants are negotiating from a position of structural scarcity, not market equilibrium.”

“Energy intensity is rising because AI workloads are more power dense,” he pointed out. “So even if an enterprise does not expand its footprint, the cost per deployed workload can still increase because the electrical envelope changes.”

His advice to enterprises: expansion is viable, but only if they diversify beyond legacy Tier 1 hubs, secure long-term expansion rights early, negotiate structured pricing protection, and “optimize workload placement with ruthless clarity.” But, he added, “it is not viable if enterprises assume that incremental megawatts will remain readily available in the same region at roughly similar economics.”

John Annand, practice lead at Info-Tech Research, said that, to compensate, his firm’s client base is increasingly open to moving the right workloads to private clouds or on-premises. “The shift is nuanced, not ideological,” he said, and is usually financially motivated and “framed as hybrid optimization, not public cloud reversal.”

Cisco issues emergency patches for critical firewall vulnerabilities
And CVE-2026-20131 is described as follows: “An attacker could exploit this vulnerability by sending a crafted serialized Java object to the web-based management interface of an affected device. A successful exploit could allow the attacker to execute arbitrary code on the device and elevate privileges to root.” There are no workarounds for either of these vulnerabilities, Cisco said. However, for CVE-2026-20131, it noted, “If the FMC management interface does not have public internet access, the attack surface that is associated with this vulnerability is reduced.” In short, if they can’t patch right now, admins should ensure that the FMC is not exposed until that happens.

Other vulnerabilities
Of the remaining flaws, a further six are rated ‘high’, with CVSS scores of between 7.2 and 8.6. These include the Firewall Management Center SQL injection vulnerabilities CVE-2026-20001, CVE-2026-20002, and CVE-2026-20003, all remotely exploitable by an authenticated attacker. Again, no workarounds are possible. CVE-2026-20039, rated 8.6 (‘high’), is a flaw affecting the VPN web server in Cisco Secure Firewall Adaptive Security Appliance (ASA) Software and Cisco Secure Firewall Threat Defense (FTD) Software which could allow an unauthenticated attacker to induce a denial-of-service state. Additionally, CVE-2026-20082, also rated 8.6, could allow an unauthenticated attacker to cause incoming TCP SYN packets to be dropped incorrectly in Cisco Secure Firewall Adaptive Security Appliance (ASA) Software.
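Cisco’s interim advice boils down to a reachability question: can the FMC management interface be reached from the public internet at all? A crude check from an external vantage point might look like the sketch below. The hostname is a placeholder, and this only tests TCP reachability, not whether the interface is actually vulnerable or patched.

```python
import socket

def is_reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds from this
    vantage point -- a rough signal that the interface is exposed."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run from OUTSIDE the corporate network. A True result for the FMC's
# management address means the interface is internet-reachable and the
# exposure-reduction guidance above has not been applied.
# is_reachable("fmc-mgmt.example.com")  # hypothetical hostname
```

A proper exposure review would also cover access-control lists and any VPN or jump-host paths, but a connect test is a quick first signal.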

The Download: an AI agent’s hit piece, and preventing lightning
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Online harassment is entering its AI era
Scott Shambaugh didn’t think twice when he denied an AI agent’s request to contribute to matplotlib, a software library he helps manage. Then things got weird. In the middle of the night, Shambaugh opened his email to discover the agent had retaliated with a blog post. Titled “Gatekeeping in Open Source: The Scott Shambaugh Story,” the post accused him of rejecting the code out of a fear of being supplanted by AI. “He tried to protect his little fiefdom,” the agent wrote. “It’s insecurity, plain and simple.” Shambaugh isn’t alone in facing misbehaving agents—and they’re unlikely to stop at harassment. Read the full story.
—Grace Huckins
How much wildfire prevention is too much? As wildfire seasons become longer and more intense, the push for high-tech solutions is accelerating. One Canadian startup has an eye-catching plan to fight them: preventing lightning.The theory is sound enough, but results to date have been mixed. And even if it works, not everyone believes we should use the method. Some argue that technological fixes for fires are missing the point entirely. Read the full story. —Casey Crownhart This story is from The Spark, MIT Technology Review’s weekly climate newsletter. Sign up to receive it in your inbox every Wednesday. The must-reads I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 1 Anthropic is still chasing a deal with the Pentagon CEO Dario Amodei is trying to reach a compromise over the military use of Claude. (FT $)+ But some defense tech firms are already ditching Claude after the DoD ban. (CNBC)+ Former military officials, tech policy leaders, and academics have all slammed the ban. (Gizmodo)
2 The White House is considering forcing US manufacturers to make munitionsIt could invoke the Defense Production Act amid concerns that war with Iran will diminish stockpiles. (NBC News)+ Tech companies with operations in the Middle East have been thrown into chaos. (BBC) 3 A new lawsuit claims Google Gemini encouraged a man to take his own lifeThis seems to bear a striking similarity to some other AI-induced tragedies. (WSJ $)+ Why AI should be able to “hang up” on you. (MIT Technology Review) 4 Ironically, AI coding tools could emphasize the importance of being humanIf more people build software for themselves, our tech could become more personal. (WP $) + But not everyone is happy about the rise of AI coding. (MIT Technology Review) 5 Tesla wants to become a dominant force in global energy infrastructureThe plan’s centrepiece is the Megapack, an enormous battery for power plants. (The Atlantic $)+ Meanwhile, a massive thermal battery represents a big step forward for energy storage (MIT Technology Review) 6 Chinese chipmakers are pushing for a domestic alternative to ASML A homegrown rival to chip-equipment giant ASML could ease the pain of US curbs. (SCMP) 7 A music-streaming CEO has built a viral conflict-tracking platformJust in case you’re losing track of all the wars everywhere. (Wired $) 8 Do cancer blood tests actually work? They’re increasingly popular, but none have received approval from regulators yet. (Nature $) 9 The shift to cloud computing is causing a surge in internet outagesIf one of the few big providers goes down, countless sites and services can tumble with it. (New Scientist $)
10 OpenAI has promised to cut the cringe from ChatGPT
It’s promising fewer “moralizing preambles.” (PCMag)

Quote of the day
“People tend to read too much into things that I do.”
—Tesla tycoon Elon Musk tells a jury in California that investors read too much into his social media posts, as he defends a lawsuit they’ve brought accusing him of market manipulation, Bloomberg reports.

One More Thing
The open-source AI boom is built on Big Tech’s handouts. How long will it last?
In May 2023 a leaked memo reported to have been written by Luke Sernau, a senior engineer at Google, said out loud what many in Silicon Valley must have been whispering for weeks: an open-source free-for-all is threatening Big Tech’s grip on AI. In many ways, that’s a good thing. AI won’t thrive if just a few mega-rich companies get to gatekeep this technology or decide how it is used. But this open-source boom is precarious, and if Big Tech decides to shut up shop, a boomtown could become a backwater. Read the full story.
—Will Douglas Heaven
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Orysia Zabeida’s animations are seriously charming.
+ World War III has broken out—will you survive? Take this quiz from 1973 to find out!
+ These photos of the Apollo 11 launch in 1969 are mesmerising.
+ If you’ve been weighing up painting your home this spring, chartreuse is the shade of the season, apparently.

Lack of regulatory action on hyperscaler dominance prompts inquiry chair to quit
“The report that the CMA produced was a really comprehensive one, completely understanding the nature of the industry. We’ve been at the sharp end of uncompetitive behavior for some time,” she added.

And concerns have also been expressed in the US. “Kip Meek’s resignation highlights a stark reality: Diagnosing a potentially flawed, highly concentrated cloud market is useless if the watchdog lacks the urgency to address it. Right now, the hyperscalers are operating business-as-usual while the CMA hits the snooze button,” said Dave McCarthy, research vice president at IDC.

Regulators across the globe are currently investigating the cloud market. Last month, the US Federal Trade Commission opened an investigation into Microsoft’s position and whether it had an unfair advantage over other cloud competitors. And in November last year, the European Commission opened three market investigations into cloud computing services under the Digital Markets Act (DMA), including an investigation into whether the DMA can effectively tackle practices that may limit competitiveness and fairness in the cloud computing sector in the EU.

Stewart highlighted the EC’s action. “The commission kicked off three inquiries last autumn and they’re due to make an interim report in May or June. They may well get there before the CMA, which started three years earlier,” she said.

The situation needs to be resolved quickly given the increasing importance of AI in today’s market and the need for competitive cloud services to support it, said Terrar: “AI, particularly agentic AI, is going to change the cloud market. We’re going to see some changes, for example, more processing at the edge, and the cloud infrastructure is so fundamental to the industry today.”

And, of course, there’s the additional cost, said Stewart: “There was a footnote in the CMA report that the UK is paying about £500m too much for cloud.”

How much wildfire prevention is too much?
The race to prevent the worst wildfires has been an increasingly high-tech one. Companies are proposing AI fire detection systems and drones that can stamp out early blazes. And now, one Canadian startup says it’s going after lightning.

Lightning-sparked fires can be a big deal: The Canadian wildfires of 2023 generated nearly 500 million metric tons of carbon emissions, and lightning-started fires burned 93% of the area affected. Skyward Wildfire claims that it can stop wildfires before they even start by preventing lightning strikes.

It’s a wild promise, and one that my colleague James Temple dug into for his most recent story. (You should read the whole thing; there’s a ton of fascinating history and quirky science.) As James points out in his story, there’s plenty of uncertainty about just how well this would work and under what conditions. But I was left with another lingering question: If we can prevent lightning-sparked fires, should we?

I can’t help myself, so let’s take just a moment to talk about how this lightning prevention method supposedly works. Basically, lightning is static discharge—virtually the same thing as when you rub your socks on a carpet and then touch a doorknob, as James puts it.
When you shuffle across a rug, the friction causes electrons to jump around, so ions build up and an electric field forms. In the case of lightning, it’s snowflakes and tiny ice pellets called graupel rubbing together. They get separated by updrafts, building up a charge difference, and eventually cause an electrostatic discharge—lightning.

In about the 1950s, researchers began to wonder if they might be able to prevent lightning strikes. Some came up with the idea of using metallic chaff: fiberglass strands coated with aluminum. (The military was already using the material to disrupt radar signals.) The idea is that the chaff can act as a conductor, reducing the buildup of static electricity that would otherwise result in a lightning strike.
The theory is sound enough, but results to date have been mixed. Some research suggests you might need high concentrations of chaff to prevent lightning effectively. Some of the early studies that tested the technique were small. And there’s not much information available from Skyward Wildfire about its efforts, as the company hasn’t released data from field trials or published any peer-reviewed papers that we could find.

Even if this method really can work to stop lightning, should we use it? Lightning-caused fires could be a growing problem with climate change. Some research has shown that they have substantially increased in the Arctic boreal region, where the planet is warming fastest.

But fire isn’t an inherently bad thing—many ecosystems evolved to burn. Some of the worst wildfires we see today result from a combination of climate-fueled conditions and policies that have allowed fuel to build up, so that when fires do start, they burn out of control.

Some experts say that techniques like Skyward’s would need to be used judiciously. “So even if we have all of the technical skills to prevent lightning-ignited wildfires, there really still needs to be work on when/where to prevent fires so we don’t exacerbate the fuel accumulation problem,” said Phillip Stepanian, a technical staff member at MIT Lincoln Laboratory’s air traffic control and weather systems group, in an email to James.

We also know that practices like prescribed burns can do a lot to reduce the risk of extreme fires—if we allow them and pay for them. The company says it wouldn’t aim to stop all lightning or all wildfires. “We do not intend to eliminate all wildfires and support prescribed and cultural burning, natural fire regimes, and proactive forest management,” said Nicholas Harterre, who oversees government partnerships at Skyward, in an email to James. Rather, the company aims to reduce the likelihood of ignition on a limited number of extreme-risk days, Harterre said.
Some early responses to this story say that technological fixes for fires are missing the point entirely. Many such solutions “fundamentally misunderstand the problem,” as Daniel Swain, a climate scientist at the University of California Agriculture and Natural Resources, put it in a comment about the story on LinkedIn. That problem isn’t the existence of fire, Swain continues, but its increasing intensity and its intersection with society because of human-caused factors. “Preventing ignitions doesn’t actually address any of the causes of increasingly destructive wildfires,” he adds.

It’s hard to imagine that exploring more firefighting tools is a bad idea. But to me it seems both essential and quite difficult to suss out which techniques are worth deploying, and how they could be used without putting us in even more potential danger.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
Stay Ahead with the Paperboy Newsletter
Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on a week of news.