Your Gateway to Power, Energy, Datacenters, Bitcoin and AI
Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.
Discover What Matters Most to You

AI:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.
Featured Articles

Desalination plants in the Middle East are increasingly vulnerable
EXECUTIVE SUMMARY

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

As the conflict in Iran has escalated, a crucial resource is under fire: the desalination technology that supplies water across much of the region. In early March, Iran’s foreign minister accused the US of attacking a desalination plant on Qeshm Island in the Strait of Hormuz and disrupting the water supply to nearly 30 villages. (The US denied responsibility.) In the weeks since, both Bahrain and Kuwait have reported damage to desalination plants and blamed Iran, though Iran also denied responsibility. In late March, President Donald Trump threatened the destruction of “possibly all desalinization plants” in Iran if the Strait of Hormuz was not reopened. Since then, he’s escalated his threats against Iran, warning of plans to attack other crucial civilian infrastructure like power plants and bridges.
Countries in the Middle East, particularly the Gulf states, rely on the technology to turn salt water into fresh water for farming, industry, and—crucially—drinking. The mounting attacks and threats to date highlight just how vital the industry is to the region—a situation made even more precarious by rising temperatures and extreme weather driven by climate change. Right now, 83% of the Middle East is under extremely high water stress, says Liz Saccoccia, a water security associate at the World Resources Institute. Future projections suggest that’s going to increase to about 100% by 2050, she adds: “This is a continuing trend, and it’s getting worse, not better.”
Here’s a look at desalination technology in the Middle East and what wartime threats to the critical infrastructure could mean for people in the region.

A vital resource

Desalination technology has helped provide water supplies in the Middle East since the early 20th century and became widespread in the 1960s and 1970s. There are two major categories of desalination plants. Thermal plants use heat to evaporate water, leaving salt and other impurities behind; the vapor can then be condensed into usable fresh water. The alternative is membrane-based technology like reverse osmosis, which pushes water through membranes with pores so small that salt can’t get through. Early desalination plants in the Middle East were the thermal type, burning fossil fuels to evaporate water and leave the salt behind. That technique is incredibly energy-intensive, and over time, membrane-based processes became the dominant choice. Many reverse osmosis plants still rely on fossil fuels, but they’re more efficient than thermal plants. Membrane technologies have made up essentially all new desalination capacity in recent years; the last major thermal plant built in the Gulf came online in 2018. Since then, membrane technologies have added more than 15 million cubic meters of daily capacity—enough to supply water to millions of people. Capacity has expanded quickly: between 2006 and 2024, countries across the Middle East collectively spent over $50 billion building and upgrading desalination facilities, and nearly that much operating them. Today, there are nearly 5,000 desalination plants operational across the Middle East, and growth is continuing. Between 2024 and 2028, daily capacity is expected to grow from about 29 million cubic meters to 41 million cubic meters.
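As a back-of-the-envelope check, the article’s projection—roughly 29 million cubic meters of daily capacity in 2024 rising to about 41 million in 2028—implies an annual growth rate of around 9%. A minimal sketch of that arithmetic:

```python
def implied_annual_growth(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start/end capacity pair."""
    return (end / start) ** (1 / years) - 1

# Figures from the article: ~29 million m^3/day in 2024 rising to
# ~41 million m^3/day in 2028, i.e. four years of growth.
growth = implied_annual_growth(29e6, 41e6, 4)  # roughly 9% per year
```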
Uneven vulnerabilities

Some countries rely on the technology more than others. Iran, for example, uses desalination for about 3% of its municipal fresh water. The country has access to groundwater and some surface water, including rivers, though these resources are being stretched thin by agriculture and extreme drought. Other nations in the region, particularly the Gulf countries (Bahrain, Qatar, Kuwait, the United Arab Emirates, Saudi Arabia, and Oman), have much more limited water resources and rely heavily on desalination. Across these six nations, all but the UAE get more than half their drinking water from desalination, and for Bahrain, Qatar, and Kuwait the figure is more than 90%. “The Gulf countries are much, much more vulnerable to attacks on their desalination plants than Iran is,” says David Michel, a senior associate in the global food and water security program at the Center for Strategic and International Studies. There are thousands of desalination facilities across the region, so the system wouldn’t collapse if a small number were taken offline, Michel says. However, in recent years there’s been a trend toward larger, more centralized plants. The average desalination plant is about 10 times larger than it was 15 years ago, according to data from the International Energy Agency. The largest desalination plants today can produce 1 million cubic meters of water daily, enough for hundreds of thousands of people. Taking one or more of these massive facilities offline could have a significant effect on the system, Michel says.

Escalating threats

Desalination facilities are quite linear, meaning there are multiple steps and pieces of equipment that work in sequence—and the failure of a component in that chain can take an entire facility down. Attacks on water inlets, transportation networks, and power supplies can also disrupt the system, Michel says.
During the Gulf War in 1991, Iraqi forces pumped oil into the Gulf, contaminating the water and shutting down desalination plants in Kuwait. The facilities are also generally located close to other targets in this conflict. Desalination is incredibly energy-intensive, so about three-quarters of facilities in the region are next to power plants. Trump has repeatedly threatened power plants in Iran. In response, Iran’s military has said that if civilian targets are hit, the country will respond with strikes that are “much more devastating and widespread.” Other governments and organizations, including the United Nations, the European Union, and the Red Cross, have broadly condemned such threats to civilian infrastructure as illegal.
But war isn’t the only danger facing these plants, even if it is the most immediate. Some studies have suggested that global warming could strengthen cyclones in the region, and these extreme weather events could force shutdowns or damage equipment. Water pollution could also cause shutdowns. Oil spills, whether accidental or intentional, as in the case of the Gulf War, can wreak havoc. And in 2009, a red algae bloom closed desalination plants in Oman and the United Arab Emirates for weeks. The algae fouled membranes and blocked the plants from being able to take water in from the Persian Gulf and the Gulf of Oman.
Desalination facilities could become more resilient to threats in the future, and they may need to as their importance continues to grow. There’s increasing interest in running desalination facilities at least partially on solar power, which could help reduce dependence on the oil that powers most facilities today. The Hassyan seawater desalination project in the UAE, currently under construction, would be the largest reverse osmosis plant in the world to operate solely on renewable energy. Another way to increase resilience is for countries to build up more strategic water storage to meet demand. Qatar recently issued new policies that aim to improve management and storage of desalinated water, for example. Countries could also work together to invest in shared infrastructure and policies that help strengthen the water supply throughout the region. Preparedness, resilience, and cooperation will be key for the Middle East broadly as critical infrastructure, including the water supply, is increasingly under threat. “The longer the conflict goes on, the more likely we’ll see significant water infrastructure damage,” says Ginger Matchett, an assistant director at the Atlantic Council. “What worries me is that after this war ends, some of the lessons will show how water can be weaponized more strategically than previously imagined.”

Enabling agent-first process redesign
In association with the Deloitte Microsoft Technology Practice

Unlike static, rules-based systems, AI agents can learn, adapt, and optimize processes dynamically. As they interact with data, systems, people, and other agents in real time, AI agents can execute entire workflows autonomously. But unlocking their potential requires redesigning processes around agents rather than bolting them onto fragmented legacy workflows using traditional optimization methods. Companies must become agent first. In an agent-first enterprise, AI systems operate processes while humans set goals, define policy constraints, and handle exceptions. “You need to shift the operating model to humans as governors and agents as operators,” says Scott Rodgers, global chief architect and U.S. CTO of the Deloitte Microsoft Technology Practice.
The agent-first imperative

With technology budgets for AI expected to increase more than 70% over the next two years, AI agents, powered by generative AI, are poised to fundamentally transform organizations and achieve results beyond traditional automation. These initiatives have the potential to produce significant performance gains while shifting humans toward higher-value work. AI is advancing so quickly that static approaches to task automation will likely produce only incremental gains. Because legacy processes aren’t built for autonomous systems, AI agents require machine-readable process definitions, explicit policy constraints, and structured data flows, according to Rodgers.
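To make the requirements above concrete, here is a minimal, purely hypothetical sketch of what a machine-readable process definition with explicit policy constraints might look like. The process name, step names, and policy keys are all illustrative, not taken from Deloitte or Microsoft tooling:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    executor: str                                # "agent" or "human"
    policy: dict = field(default_factory=dict)   # explicit policy constraints

@dataclass
class Process:
    name: str
    steps: list[Step]

# A hypothetical invoice workflow: agents operate the routine steps,
# a human handles the exception-review step.
invoice_process = Process(
    name="invoice_approval",
    steps=[
        Step("extract_fields", executor="agent"),
        Step("validate_totals", executor="agent",
             policy={"max_auto_approve_usd": 10_000}),
        Step("exception_review", executor="human"),
    ],
)

agent_steps = [s.name for s in invoice_process.steps if s.executor == "agent"]
```

Because the definition is structured data rather than prose, both agents and governance tooling can inspect which steps are automated and under what constraints.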
Further complicating matters, many organizations don’t understand the full economic drivers of their business, such as cost to serve and per-transaction costs. As a result, they have trouble prioritizing the agents that can create the most value and instead focus on flashy pilots. To achieve structural change, executives should think differently: companies must orchestrate outcomes faster than competitors. “The real risk isn’t that AI won’t work—it’s that competitors will redesign their operating models while you’re still piloting agents and copilots,” says Rodgers. “Nonlinear gains come when companies create agent-centric workflows with human governance and adaptive orchestration.” Routine and repetitive tasks are increasingly handled automatically, freeing employees to focus on higher-value, creative, and strategic work. This shift improves operational efficiency, fosters stronger collaboration, and speeds decision-making—helping organizations modernize the workplace without sacrificing enterprise security.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators, including the writing of surveys and collection of survey data. Any AI tools used were limited to secondary production processes that passed thorough human review.
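The "humans as governors, agents as operators" model described above can be sketched as a simple control loop: the agent handles anything within policy, and everything else lands in a human queue. All names and thresholds here are invented for illustration:

```python
# Hypothetical policy set by the human governor.
POLICY = {"max_auto_approve_usd": 10_000}

def operate(invoices: list[dict]) -> tuple[list[str], list[str]]:
    """Agent operates within policy; out-of-policy items are escalated."""
    auto_approved, human_queue = [], []
    for inv in invoices:
        if inv["amount_usd"] <= POLICY["max_auto_approve_usd"]:
            auto_approved.append(inv["id"])   # routine: agent handles it
        else:
            human_queue.append(inv["id"])     # exception: human decides
    return auto_approved, human_queue

approved, escalated = operate([
    {"id": "A1", "amount_usd": 950},
    {"id": "A2", "amount_usd": 48_000},
])
```

The point of the design is that the human never touches the routine path; changing the organization's risk appetite means editing the policy, not re-staffing the workflow.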

Nvidia’s SchedMD acquisition puts open-source AI scheduling under scrutiny
Is the concern valid?

Dr. Danish Faruqui, CEO of Fab Economics, a US-based AI hardware and datacenter advisory, said the risk was real. “The skepticism that Nvidia may prioritize its own hardware in future software updates, potentially delaying or under-optimizing support for rivals, is a feasible outcome,” he said. As the primary developer, Nvidia now controls Slurm’s official development roadmap and code review process, Faruqui said, “which could influence how quickly competing chips are integrated on new development or continuous improvement elements.” Owning the control plane alongside GPUs and networking infrastructure such as InfiniBand, he added, allows Nvidia to create a tightly vertically integrated stack that can lead to what he described as “shallow moats, where advanced features are only available or performant on Nvidia hardware.” One concrete test of that, industry observers say, will be how quickly Nvidia integrates support for AMD’s next-generation chips into Slurm’s codebase compared with how quickly it integrates its own forthcoming hardware and networking technologies.

Does the Bright Computing precedent hold?

Analysts point to Nvidia’s 2022 acquisition of Bright Computing as a reference point, saying the software became optimized for Nvidia chips in ways that disadvantaged users of competing hardware. Nvidia disputed that characterization, saying Bright Computing supports “nearly any CPU or GPU-accelerated cluster.” Rawat said the comparison was instructive but imperfect. “Nvidia’s acquisition of Bright Computing highlights its preference for vertical integration, embedding Bright tightly into DGX and AI Factory stacks rather than maintaining a neutral, multi-vendor orchestration role,” he said. “This reflects a broader strategic pattern — Nvidia seeks to control the full-stack AI infrastructure experience.”
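To see why control of a scheduler's codebase matters, consider a deliberately simplified and entirely hypothetical sketch of the "shallow moat" concern: a scheduler whose optional fast-path features are gated on one vendor's hardware. This does not depict Slurm's actual code or Nvidia's actual behavior; the feature names and support table are invented:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    gpu_vendor: str  # e.g. "nvidia", "amd"

# Hypothetical feature table a control-plane owner could, in principle,
# keep better-populated for its own hardware.
FEATURE_SUPPORT = {
    "topology_aware_placement": {"nvidia", "amd"},
    "fast_interconnect_path": {"nvidia"},  # vendor-only in this sketch
}

def schedulable(node: Node, required_features: list[str]) -> bool:
    """A job can land on a node only if every feature it requires is
    supported for that node's GPU vendor."""
    return all(node.gpu_vendor in FEATURE_SUPPORT.get(f, set())
               for f in required_features)

nodes = [Node("n1", "nvidia"), Node("n2", "amd")]
job_features = ["topology_aware_placement", "fast_interconnect_path"]
eligible = [n.name for n in nodes if schedulable(n, job_features)]
```

In this toy model, the AMD node is locked out not by hardware limits but by which rows of the feature table get filled in first, which is exactly the roadmap-control question the analysts raise.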

The Download: AI’s impact on jobs, and data centers in space
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The one piece of data that could actually shed light on your job and AI

Within Silicon Valley’s orbit, an AI-fueled jobs apocalypse is spoken about as a given. Now even economists who have downplayed the threat are coming around to the idea. Alex Imas, an economist at the University of Chicago, is one of them. He believes that any plan to address AI’s impact will depend on collecting one vital piece of data: price elasticity. Imas argues that “we need a Manhattan Project” for this. Read the full story to find out why.
—James O’Donnell

This article is from The Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.
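Price elasticity, the measure Imas wants collected, is just the percent change in quantity demanded divided by the percent change in price. A minimal sketch with invented numbers (the dollar figures are illustrative, not from the article):

```python
def arc_elasticity(q0: float, q1: float, p0: float, p1: float) -> float:
    """Arc (midpoint) price elasticity of demand: percent change in
    quantity demanded divided by percent change in price."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2)
    pct_p = (p1 - p0) / ((p0 + p1) / 2)
    return pct_q / pct_p

# Hypothetical numbers: the price of a unit of automatable work falls
# from $10 to $8 and demand for it rises from 100 to 130 units.
e = arc_elasticity(100, 130, 10, 8)  # |e| > 1 means demand is elastic
```

Whether demand for a given kind of work is elastic or inelastic as AI drives its price down is exactly what determines whether automation shrinks jobs or expands the overall market for that work.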
Four things we’d need to put data centers in space

In January, Elon Musk’s SpaceX applied to launch up to 1 million data centers into Earth’s orbit. The goal? To fully unleash the potential of AI—without triggering an environmental crisis on Earth. SpaceX is among a growing list of tech firms pursuing orbital computing infrastructure. But can their plans really work? Here are four must-haves for making space-based data centers a reality.

—Tereza Pultarova

This story is part of MIT Technology Review Explains, our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Trump has again proposed major cuts to US science and tech spending
He wants to slash nearly every science-focused agency. (Ars Technica)
+ If Trump gets his way, the US could face a costly brain drain. (NYT $)
+ Top research talent is already fleeing the country. (Guardian)
+ Basic science deserves our boldest investment. (MIT Technology Review)

2 Sam Altman lobbied against AI regulations he publicly welcomed
A bombshell report reveals many OpenAI insiders don’t trust him. (The New Yorker $)
+ Some have called him a sociopath. (Futurism)
+ OpenAI’s CFO fears it won’t be IPO-ready this year. (The Information $)
+ A war over AI regulation is brewing in the US. (MIT Technology Review)

3 NASA’s Artemis II has broken humanity’s all-time distance record
The astronauts have flown farther than any humans before them. (BBC)
+ Their mission includes MIT-developed technology. (Axios)

4 Chinese tech firms are selling intel “exposing” US forces
It comes from combining AI with open-source data. (WP $)
+ AI is turning the Iran conflict into theater. (MIT Technology Review)

5 War is pushing countries to ditch hyperscalers
Driven by Iran naming tech giants as military targets. (Rest of World)
+ No one wants a data center in their backyard. (MIT Technology Review)

6 OpenAI, Anthropic, and Google have united against China’s AI copying
They’re sharing information on “adversarial distillation.” (Bloomberg $)

7 Anduril and Impulse Space are working on Trump’s “Golden Dome”
They’re developing space-based missile tracking for the project. (Gizmodo)

8 OpenAI has urged California to probe Elon Musk’s “anti-competitive behavior”
It accuses Musk of trying to “take control of the future of AGI.” (Reuters $)
+ And claims he coordinated attacks with Mark Zuckerberg. (CNBC)
+ A former Tesla president has revealed how he survived working for Musk. (WP $)

9 DeepSeek’s new AI model will run on Huawei chips
It’s expected to launch in the next few weeks. (The Information $)

10 Memes have nuked our culture
Internet “brain rot” has escaped our phones to take over everything. (NYT $)
Quote of the day

“I must say, it was actually quite nice.”

—Astronaut Victor Glover tells President Donald Trump what it was like when Artemis II was out of communication with the rest of humanity, The New York Times reports.
One More Thing

Inside the controversial tree farms powering Apple’s carbon-neutral goal

In 2020, Apple set a goal to become net zero by the end of the decade. To hit that target, the company is offsetting its emissions by planting millions of eucalyptus trees in Brazil. Apple is betting that the strategy will lead to a greener future. But critics warn that the industrial tree farms will do more harm than good. Find out why the plans have sparked a backlash.

—Gregory Barber
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ Japan’s automated bike garage is a cyclist’s dream come true.
+ This deep dive into bird behavior reveals the secrets of their dining habits. (Big thanks to reader Terry Gordon for the find!)
+ The first photo from the Artemis astronauts vividly captures the glow of our atmosphere.
+ There’s a new contender for the world’s most gorgeous website: RobertDeNiro.com.

Cisco: AI simplifies wireless operations but also taxes legacy Wi-Fi networks
Additionally, 28% of organizations surveyed have already deployed AI workloads, and that percentage is estimated to climb to 79% by 2027. Some 29% will be in the pilot stage, and 22% are planning deployment over the next 12 months. “This rapid trajectory further highlights that AI deployment is indeed the defining use case for next-generation wireless infrastructure,” the Cisco study asserted. “While core use cases such as wireless for physical security are already widely deployed, the next phase of wireless growth is being driven by emerging applications that depend on high-performing, resilient networks. Organizations are increasingly piloting or planning wireless investments to support autonomous systems and robotics, smart facilities and energy management, space analytics, and immersive collaboration,” the study found.

The 6 GHz opportunity

One wireless advancement being used to handle AI and other capacity challenges is the 6 GHz band added by Wi-Fi 6E, with “utilization further improved with Wi-Fi 7,” Cisco stated. “Organizations are using it to solve capacity and congestion issues (46%), enable high-bandwidth applications (32%), and support AI workloads (31%). And the research shows that those adopting this added spectrum are seeing strong benefits.” “Organizations already deploying 6 GHz show almost double the rate of AI applications and workloads (45%) compared to non-adopters (26%),” the study reported. “It’s no surprise that Cisco telemetry highlights a 60% increase in 6 GHz clients going live in 2025. Wireless professionals are seeing the 6 GHz opportunity and are helping their organizations,” the study found.

Hyperscaler backlogs show growing demand for AI infrastructure
Amazon’s backlog is now $244 billion, up 40% year over year. “There’s a lot of demand for AWS right now, in the AI space and also in the core AWS space,” said Amazon CEO Andy Jassy. Google CEO Sundar Pichai said the company plans to spend around $180 billion on capex in 2026, and Google also reported a backlog of $240 billion in February. “The number of deals in 2025 over a billion dollars surpassed the previous three years combined,” said Pichai. The driver for this growth? AI spending. AI companies need more compute capacity to train more powerful models, and enterprises deploying AI are driving demand for inference. In survey after survey, the majority of companies say that they plan to increase their AI-related spending this year. According to a survey of nearly 2,400 executives that Boston Consulting Group released in January, companies will more than double their AI spending this year, from 0.8% to 1.7% of revenues. And more than 90% of CEOs say that they will continue to invest in AI at current or higher levels, even if the investments do not pay off in the next year.
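A quick sanity check on the Boston Consulting Group figures: moving from 0.8% to 1.7% of revenues is a bit more than a doubling of AI spend, assuming revenues stay flat.

```python
# BCG survey figures cited above: AI spending as a share of revenues.
before, after = 0.008, 0.017
multiple = after / before  # slightly more than double (2.125x)
```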

Desalination plants in the Middle East are increasingly vulnerable
EXECUTIVE SUMMARY MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here. As the conflict in Iran has escalated, a crucial resource is under fire: the desalination technology that supplies water across much of the region. In early March, Iran’s foreign minister accused the US of attacking a desalination plant on Qeshm Island in the Strait of Hormuz and disrupting the water supply to nearly 30 villages. (The US denied responsibility.) In the weeks since, both Bahrain and Kuwait have reported damage to desalination plants and blamed Iran, though Iran also denied responsibility. In late March, President Donald Trump threatened the destruction of “possibly all desalinization plants” in Iran if the Strait of Hormuz was not reopened. Since then, he’s escalated his threats against Iran, warning of plans to attack other crucial civilian infrastructure like power plants and bridges.
Countries in the Middle East, particularly the Gulf states, rely on the technology to turn salt water into fresh water for farming, industry, and—crucially—drinking. The mounting attacks and threats to date highlight just how vital the industry is to the region—a situation made even more precarious by rising temperatures and extreme weather driven by climate change. Right now, 83% of the Middle East is under extremely high water stress, says Liz Saccoccia, a water security associate at the World Resources Institute. Future projections suggest that’s going to increase to about 100% by 2050, she adds: “This is a continuing trend, and it’s getting worse, not better.”
Here’s a look at desalination technology in the Middle East and what wartime threats to the critical infrastructure could mean for people in the region. A vital resource Desalination technology has helped provide water supplies in the Middle East since the early 20th century and became widespread in the 1960s and 1970s. There are two major categories of desalination plants. Thermal plants use heat to evaporate water, leaving salt and other impurities behind. The vapor can then be condensed into usable fresh water. The alternative is membrane-based technology like reverse osmosis, which pushes water through membranes that have tiny pores—so small that salt can’t get through. Early desalination plants in the Middle East were the first type, burning fossil fuels to evaporate water, leaving the salt behind. This technique is incredibly energy-intensive, and over time, processes that rely on filters became the dominant choice. Membrane technologies have made up essentially all new desalination capacity in recent years; the last major thermal plant built in the Gulf came online in 2018. Many reverse osmosis plants still rely on fossil fuels, but they’re more efficient. Since then, membrane technologies have added more than 15 million cubic meters of daily capacity—enough to supply water to millions of people. Capacity has expanded quickly in recent years; between 2006 and 2024, countries across the Middle East collectively spent over $50 billion building and upgrading desalination facilities, and nearly that much operating them. Today, there are nearly 5,000 desalination plants operational across the Middle East. And looking ahead, growth is continuing. Between 2024 and 2028, daily capacity is expected to grow from about 29 million cubic meters to 41 million cubic meters.
Uneven vulnerabilities Some countries rely on the technology more than others. Iran, for example, uses desalination for about 3% of its municipal fresh water. The country has access to groundwater and some surface water, including rivers, though these resources are being stretched thin by agriculture and extreme drought. Other nations in the region, particularly the Gulf countries (Bahrain, Qatar, Kuwait, the United Arab Emirates, Saudi Arabia, and Oman), have much more limited water resources and rely heavily on desalination. Across these six nations, all but the UAE get more than half their drinking water from desalination, and for Bahrain, Qatar, and Kuwait the figure is more than 90%. “The Gulf countries are much, much more vulnerable to attacks on their desalination plants than Iran is,” says David Michel, a senior associate in the global food and water security program at the Center for Strategic and International Studies. There are thousands of desalination facilities across the region, so the system wouldn’t collapse if a small number were taken offline, Michel says. However, in recent years there’s been a trend toward larger, more centralized plants. The average desalination plant is about 10 times larger than it was 15 years ago, according to data from the International Energy Agency. The largest desalination plants today can produce 1 million cubic meters of water daily, enough for hundreds of thousands of people. Taking one or more of these massive facilities offline could have a significant effect on the system, Michel says. Escalating threats Desalination facilities are quite linear, meaning there are multiple steps and pieces of equipment that work in sequence—and the failure of a component in that chain can take an entire facility down. Attacks on water inlets, transportation networks, and power supplies can also disrupt the system, Michel says. 
During the Gulf War in 1991, Iraqi forces pumped oil into the gulf, contaminating the water and shutting down desalination plants in Kuwait. The facilities are also generally located close to other targets in this conflict. Desalination is incredibly energy intensive, so about three-quarters of facilities in the region are next to power plants. Trump has repeatedly threatened power plants in Iran. In response, Iran’s military has said that if civilian targets are hit, the country will respond with strikes that are “much more devastating and widespread.” Other governments and organizations, including the United Nations, the European Union, and the Red Cross, have broadly condemned threats to infrastructure as illegal.
But war isn’t the only danger facing these plants, even if it is the most immediate. Some studies have suggested that global warming could strengthen cyclones in the region, and these extreme weather events could force shutdowns or damage equipment. Water pollution could also cause shutdowns. Oil spills, whether accidental or intentional, as in the case of the Gulf War, can wreak havoc. And in 2009, a red algae bloom closed desalination plants in Oman and the United Arab Emirates for weeks. The algae fouled membranes and blocked the plants from being able to take water in from the Persian Gulf and the Gulf of Oman.
Desalination facilities could become more resilient to threats in the future, and they may need to as their importance continues to grow. There’s increasing interest in running desalination facilities at least partially on solar power, which could help reduce dependence on the oil that powers most facilities today. The Hassyan seawater desalination project in the UAE, currently under construction, would be the largest reverse osmosis plant in the world to operate solely with renewable energy. Another way to increase resilience is for countries to build up more strategic water storage to meet demand. Qatar recently issued new policies that aim to improve management and storage of desalinated water, for example. Countries could also work together to invest in shared infrastructure and policies that help strengthen the water supply through the region. Preparedness, resilience, and cooperation will be key for the Middle East broadly as critical infrastructure, including the water supply, is increasingly under threat. “The longer the conflict goes on, the more likely we’ll see significant water infrastructure damage,” says Ginger Matchett, an assistant director at the Atlantic Council. “What worries me is that after this war ends, some of the lessons will show how water can be weaponized more strategically than previously imagined.”

Enabling agent-first process redesign
In association withthe Deloitte Microsoft Technology Practice Unlike static, rules-based systems, AI agents can learn, adapt, and optimize processes dynamically. As they interact with data, systems, people, and other agents in real time, AI agents can execute entire workflows autonomously. But unlocking their potential requires redesigning processes around agents rather than bolting them onto fragmented legacy workflows using traditional optimization methods. Companies must become agent first. In an agent-first enterprise, AI systems operate processes while humans set goals, define policy constraints, and handle exceptions. “You need to shift the operating model to humans as governors and agents as operators,” says Scott Rodgers, global chief architect and U.S. CTO of the Deloitte Microsoft Technology Practice.
The agent-first imperative With technology budgets for AI expected to increase more than 70% over the next two years, AI agents, powered by generative AI, are poised to fundamentally transform organizations and achieve results beyond traditional automation. These initiatives have the potential to produce significant performance gains, while shifting humans toward higher value work. AI is advancing so quickly that static approaches to task automation will likely only produce incremental gains. Because legacy processes aren’t built for autonomous systems, AI agents require machine-readable process definitions, explicit policy constraints, and structured data flows, according to Rodgers.
Further complicating matters, many organizations don’t understand the full economic drivers of their business, such as cost to serve and per-transaction costs. As a result, they have trouble prioritizing agents that can create the most value and instead focus on flashy pilots. To achieve structural change, executives should think differently. Companies must instead orchestrate outcomes faster than competitors. “The real risk isn’t that AI won’t work—it’s that competitors will redesign their operating models while you’re still piloting agents and copilots,” says Rodgers. “Nonlinear gains come when companies create agent-centric workflows with human governance and adaptive orchestration.” Routine and repetitive tasks are increasingly handled automatically, freeing employees to focus on higher value, creative, and strategic work. This shift improves operational efficiency, fosters stronger collaboration, and generates faster decision-making—helping organizations modernize the workplace without sacrificing enterprise security. Download the article. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Nvidia’s SchedMD acquisition puts open-source AI scheduling under scrutiny
Is the concern valid? Dr. Danish Faruqui, CEO of Fab Economics, a US-based AI hardware and datacenter advisory, said the risk was real. “The skepticism that Nvidia may prioritize its own hardware in future software updates, potentially delaying or under-optimizing support for rivals, is a feasible outcome,” he said. As the primary developer, Nvidia now controls Slurm’s official development roadmap and code review process, Faruqui said, “which could influence how quickly competing chips are integrated on new development or continuous improvement elements.” Owning the control plane alongside GPUs and networking infrastructure such as InfiniBand, he added, allows Nvidia to create a tightly vertically integrated stack that can lead to what he described as “shallow moats, where advanced features are only available or performant on Nvidia hardware.” One concrete test of that, industry observers say, will be how quickly Nvidia integrates support for AMD’s next-generation chips into Slurm’s codebase compared with how quickly it integrates its own forthcoming hardware and networking technologies.
Does the Bright Computing precedent hold?
Analysts point to Nvidia’s 2022 acquisition of Bright Computing as a reference point, saying the software became optimized for Nvidia chips in ways that disadvantaged users of competing hardware. Nvidia disputed that characterization, saying Bright Computing supports “nearly any CPU or GPU-accelerated cluster.” Rawat said the comparison was instructive but imperfect. “Nvidia’s acquisition of Bright Computing highlights its preference for vertical integration, embedding Bright tightly into DGX and AI Factory stacks rather than maintaining a neutral, multi-vendor orchestration role,” he said. “This reflects a broader strategic pattern — Nvidia seeks to control the full-stack AI infrastructure experience.”
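For readers unfamiliar with it, Slurm is the open-source workload manager behind most large HPC and AI clusters: jobs are submitted as shell scripts whose "#SBATCH" comment directives request resources, and hardware-specific requests such as "--gres=gpu:4" are exactly where vendor support in the codebase matters. A generic sketch of such a script follows; the partition name and training command are placeholders, not details from the article:

```shell
#!/bin/bash
# Illustrative Slurm batch script (placeholders, not from any real cluster).
#SBATCH --job-name=train-model      # name shown in the job queue
#SBATCH --nodes=2                   # number of machines to allocate
#SBATCH --ntasks-per-node=4         # one task per GPU
#SBATCH --gres=gpu:4                # generic-resource request: 4 GPUs per node
#SBATCH --time=04:00:00             # wall-clock limit
#SBATCH --partition=gpu             # cluster-specific partition (placeholder)

srun python train.py                # srun fans tasks out across the allocation
```

How promptly directives like these gain first-class, well-optimized support for non-Nvidia accelerators is the kind of roadmap signal the analysts quoted above say they will watch.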

The Download: AI’s impact on jobs, and data centres in space
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
The one piece of data that could actually shed light on your job and AI
Within Silicon Valley’s orbit, an AI-fueled jobs apocalypse is spoken about as a given. Now even economists who have downplayed the threat are coming around to the idea. Alex Imas, an economist at the University of Chicago, is one of them. He believes that any plan to address AI’s impact will depend on collecting one vital piece of data: price elasticity. Imas argues that “we need a Manhattan Project” for this. Read the full story to find out why.
—James O’Donnell This article is from The Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.
Four things we’d need to put data centers in space
In January, Elon Musk’s SpaceX applied to launch up to 1 million data centers into Earth’s orbit. The goal? To fully unleash the potential of AI—without triggering an environmental crisis on Earth. SpaceX is among a growing list of tech firms pursuing orbital computing infrastructure. But can their plans really work? Here are four must-haves for making space-based data centers a reality. —Tereza Pultarova This story is part of MIT Technology Review Explains, our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Trump has again proposed major cuts to US science and tech spending He wants to slash nearly every science-focused agency. (Ars Technica) + If Trump gets his way, the US could face a costly brain drain. (NYT $) + Top research talent is already fleeing the country. (Guardian) + Basic science deserves our boldest investment. (MIT Technology Review)
2 Sam Altman lobbied against AI regulations he publicly welcomed A bombshell report reveals many OpenAI insiders don’t trust him. (The New Yorker $) + Some have called him a sociopath. (Futurism) + OpenAI’s CFO fears it won’t be IPO-ready this year. (The Information $) + A war over AI regulation is brewing in the US. (MIT Technology Review)
3 NASA’s Artemis II has broken humanity’s all-time distance record The astronauts have flown farther than any humans before them. (BBC) + Their mission includes MIT-developed technology. (Axios)
4 Chinese tech firms are selling intel “exposing” US forces It comes from combining AI with open-source data. (WP $) + AI is turning the Iran conflict into theater. (MIT Technology Review)
5 War is pushing countries to ditch hyperscalers Driven by Iran naming tech giants as military targets. (Rest of World) + No one wants a data center in their backyard. (MIT Technology Review)
6 OpenAI, Anthropic, and Google have united against China’s AI copying They’re sharing information on “adversarial distillation.” (Bloomberg $)
7 Anduril and Impulse Space are working on Trump’s “Golden Dome” They’re developing space-based missile tracking for the project. (Gizmodo)
8 OpenAI has urged California to probe Elon Musk’s “anti-competitive behavior” It accuses Musk of trying to “take control of the future of AGI.” (Reuters $) + And claims he coordinated attacks with Mark Zuckerberg. (CNBC) + A former Tesla president has revealed how he survived working for Musk. (WP $)
9 DeepSeek’s new AI model will run on Huawei chips It’s expected to launch in the next few weeks. (The Information $)
10 Memes have nuked our culture Internet “brain rot” has escaped our phones to take over everything. (NYT $)
Quote of the day
“I must say, it was actually quite nice.” —Astronaut Victor Glover tells President Donald Trump what it was like when Artemis II was out of communication with the rest of humanity, The New York Times reports.
One More Thing
Inside the controversial tree farms powering Apple’s carbon-neutral goal In 2020, Apple set a goal to become net zero by the end of the decade. To hit that target, the company is offsetting its emissions by planting millions of eucalyptus trees in Brazil. Apple is betting that the strategy will lead to a greener future. But critics warn that the industrial tree farms will do more harm than good. Find out why the plans have sparked a backlash. —Gregory Barber (Image: Pablo Albarenga)
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ Japan’s automated bike garage is a cyclist’s dream come true.
+ This deep dive into bird behavior reveals the secrets of their dining habits. (Big thanks to reader Terry Gordon for the find!)
+ The first photo from the Artemis astronauts vividly captures the glow of our atmosphere.
+ There’s a new contender for the world’s most gorgeous website: RobertDeNiro.com.

Cisco: AI simplifies wireless operations but also taxes legacy Wi-Fi networks
Additionally, 28% of organizations surveyed have already deployed AI workloads, and that percentage is estimated to climb to 79% by 2027. Some 29% will be in pilot stage, and 22% are planning deployment over the next 12 months. “This rapid trajectory further highlights that AI deployment is indeed the defining use case for next-generation wireless infrastructure,” the Cisco study asserted. “While core use cases such as wireless for physical security are already widely deployed, the next phase of wireless growth is being driven by emerging applications that depend on high-performing, resilient networks. Organizations are increasingly piloting or planning wireless investments to support autonomous systems and robotics, smart facilities and energy management, space analytics, and immersive collaboration,” the study found.
The 6 GHz opportunity
One wireless advancement being used to handle AI and other capacity challenges is the 6 GHz band introduced with Wi-Fi 6E, “and utilization further improved with Wi-Fi 7,” Cisco stated. “Organizations are using it to solve capacity and congestion issues (46%), enable high-bandwidth applications (32%), and support AI workloads (31%). And the research shows that those adopting this added spectrum are seeing strong benefits.” “Organizations already deploying 6 GHz show almost double the rate of AI applications and workloads (45%) compared to non-adopters (26%),” the study reported. “It’s no surprise that Cisco telemetry highlights a 60% increase in 6 GHz clients going live in 2025. Wireless professionals are seeing the 6 GHz opportunity and are helping their organizations,” the study found.

Hyperscaler backlogs show growing demand for AI infrastructure
Amazon’s backlog is now $244 billion, up 40% year over year. “There’s a lot of demand for AWS right now, in the AI space and also in the core AWS space,” Jassy said. CEO Sundar Pichai said that Google plans to spend around $180 billion on capex in 2026. And Google also reported a backlog of $240 billion in February. “The number of deals in 2025 over a billion dollars surpassed the previous three years combined,” said Pichai. The driver for this growth? AI spending. AI companies need more compute capacity to train more powerful models. And enterprises deploying AI are driving demand for inference. In survey after survey, the majority of companies say that they plan to increase their AI-related spending this year. According to a survey of nearly 2,400 executives that Boston Consulting Group released in January, companies will double their AI spending this year, from 0.8% to 1.7% of revenues. And more than 90% of CEOs say that they will continue to invest in AI at current or higher levels, even if the investments do not pay off in the next year.

Latin America returns to the energy security conversation at CERAWeek
With geopolitical risk central to conversations about energy, and with long-cycle supply once again in focus, Latin America’s mix of hydrocarbons and export potential drew renewed attention at CERAWeek by S&P Global in Houston.
Argentina: from resource story to export platform
Among the regional stories, Argentina stood out: Vaca Muerta was no longer discussed simply as a large unconventional resource but as a test of whether the country could turn resource quality into sustained export capacity. Country officials talked about scale: more operators, more services, more infrastructure, and a larger industrial base around the unconventional play. Daniel González, Vice Minister of Energy and Mining for Argentina, put it plainly: “The time has come to expand the Vaca Muerta ecosystem.” What is at stake now is not whether the basin works, but whether the country can build enough above-ground capacity and regulatory consistency to keep development moving. Horacio Marín, chairman and chief executive officer of YPF, offered an expansive version of that argument. He said Argentina’s energy exports could reach $50 billion/year by 2031, backed by roughly $130 billion in cumulative investment in oil, LNG, and transportation infrastructure. He said Argentine crude output could reach 1 million b/d by end-2026. He said Argentina wants to be seen less as a recurrent frontier story and more as a future supplier with scale. “The time to invest in Vaca Muerta is now,” Marín said. The LNG piece is starting to take shape. Eni, YPF, and XRG signed a joint development agreement in February to move Argentina LNG forward, with a first phase planned at 12 million tonnes/year. Southern Energy—backed by PAE, YPF, Pampa Energía, Harbour Energy, and Golar LNG—holds a long-term agreement with SEFE for 2 million tonnes/year over 8 years. The movement by global standards is early-stage and relatively modest, but it adds to Argentina’s export

Market Focus: LNG supply shocks expose limited market flexibility
In this Market Focus episode of the Oil & Gas Journal ReEnterprised podcast, Conglin Xu, managing editor, economics, takes a look into the LNG market shock caused by the effective closure of the Strait of Hormuz and the sudden loss of Qatari LNG supply as the Iran war continues. Xu speaks with Edward O’Toole, director of global gas analysis, RBAC Inc., to examine how these disruptions are intensifying global supply constraints at a time when European inventories were already under pressure following a colder-than-average winter and weaker storage levels. Drawing on RBAC’s G2M2 global gas market model, O’Toole outlines disruption scenarios analyzed in the firm’s recent report and explains how current events align with their findings.
With global LNG production already operating near maximum utilization, the market response is being driven by higher prices and reduced consumption. Europe faces sharper price pressure due to storage refill needs, while Asian markets are expected to see greater demand reductions as consumers switch fuels. O’Toole underscores the importance of scenario-based modeling and supply diversification as geopolitical risk exposes structural vulnerabilities in the LNG market—offering insights for stakeholders navigating an increasingly uncertain global

Libya’s NOC, Chevron sign MoU for technical study for offshore Block NC146
The National Oil Corp. of Libya (NOC) signed a memorandum of understanding (MoU) with Chevron Corp. to conduct a comprehensive technical study of offshore Block NC146.
The block is an unexplored area with “encouraging geological indicator that could lead to significant discoveries, helping to strengthen national reserves,” NOC quoted Chairman Masoud Suleman as saying, noting that the partnership is “a message of confidence in the Libyan investment environment and evidence of the return of major companies to work and explore promising opportunities in our country.” According to the NOC, Libya produces 1.4 million b/d of oil and aims to increase production to 2 million b/d in the coming 3-5 years and then to 3 million b/d, following years of instability that impacted the country’s output. Chevron is working to add to its diverse exploration and production portfolio in the Mediterranean and Africa and continues to assess potential future opportunities in the region. The operator earlier this year entered Libya after it was designated as a winning bidder for Contract Area 106 in the Sirte basin in the 2025 Libyan Bid Round. That followed the January 2026 signing of a

Finder Energy advances KTJ Project with development area approval
Finder Energy Holdings Ltd. received regulatory approval for a development area covering the Kuda Tasi and Jahal oil fields offshore Timor‑Leste, enabling progression toward field development. Autoridade Nacional do Petróleo (ANP) approved an 88‑sq km development area over the Kuda Tasi and Jahal oil fields (KTJ Project) within PSC 19‑11 offshore Timor‑Leste, representing the first stage of the regulatory approvals process for the project. The declaration of the development area is a precursor to the field development plan (FDP), which Finder is currently preparing for submission to ANP in second‑quarter 2026. Upon approval of the FDP, the development area would secure tenure for up to 25 years or until production ceases, allowing Finder to conduct development and production operations within the area, subject to applicable regulatory approvals and conditions. The company said its upside strategy centers on the potential for the Petrojarl I FPSO to serve as a central processing and export hub for future tiebacks of surrounding discoveries, contingent on successful appraisal and/or exploration activities within PSC 19‑11. Alternatively, longer tie‑back distances could be accommodated through a secondary standalone development in the southern portion of the PSC. Finder is continuing technical evaluation of appraisal and exploration opportunities to generate drilling targets. PSC 19‑11 lies within the Laminaria High oil province of Timor‑Leste. The KTJ Project contains an estimated 25 million bbl of gross 2C contingent resources, with identified upside of an additional 23 million bbl gross 2C contingent resources and 116 million bbl gross 2U prospective resources. Finder operates PSC 19‑11 with a 66% working interest.

Newly formed Polar LNG aims to develop nearshore LNG project on Alaska’s North Slope
Polar Train LNG LLC, a newly launched company aiming to build an LNG plant (Polar LNG) on Alaska’s North Slope, has appointed Joel Riddle as president and chief executive officer. “Alaska’s North Slope holds one of the most significant undeveloped natural gas resources in the world,” said Riddle, adding “Polar LNG is uniquely positioned to bring this resource online—delivering reliable energy for Alaska and a strategic supply for the United States… and provides trusted energy to our allies.” In a release Mar. 31, the company said it is advancing a nearshore project at Prudhoe Bay, citing “one of the shortest LNG shipping routes from North America to key Asian markets, approximately 3,600 miles to Japan compared to over 10,000 miles from the US Gulf Coast.” The company is aiming for first LNG from the 7-million tonnes/year plant—to be developed nearshore with modular infrastructure—in 2029-2030 at a cost of $8–9 billion. According to Polar LNG, natural gas would be sourced from existing infrastructure at Prudhoe Bay and transported via a short pipeline to a nearshore plant. There, a modular gravity-based structure would process and liquefy the gas. LNG would then be loaded onto specialized ice-class carriers for year-round export. The company is exploring potential repurposing of sanctioned equipment built for Russia’s Arctic LNG 2 project and is seeking permission from the US government to acquire parts impacted by the sanctions, according to reports. Before joining Polar LNG, Riddle served as managing director and chief executive officer of Tamboran Resources Ltd.

Asia bears brunt of energy shock as Middle East war disrupts liquid flows
Asia is facing a dual energy crisis marked by both soaring prices and physical supply disruptions as escalating war in the Middle East constrains flows through the Strait of Hormuz, according to a new report by Morningstar DBRS. The report highlights that roughly one-fifth of global crude oil and LNG supply has been affected by disruptions at the critical chokepoint, with Asia absorbing the majority of the impact due to its heavy dependence on imported hydrocarbons. About 83% of oil and LNG shipments passing through Hormuz are destined for Asian markets, amplifying the region’s exposure. Asia’s structural reliance on Middle Eastern energy imports has intensified the shock. Countries such as Japan and South Korea import nearly all of their energy needs, while China and India depend heavily on foreign supplies, much of it sourced from the Gulf. This dependence, combined with limited alternative shipping routes, has turned what initially appeared to be a price-driven shock into a broader supply and logistics crisis. Governments across the region have begun implementing emergency measures, including fuel rationing, price controls, and strategic reserve releases, to manage shortages and rising costs.
Policy responses vary
In North Asia, policymakers are leveraging stronger buffers. Japan has tapped strategic oil reserves and introduced subsidies to cushion consumers, while South Korea is relying on LNG stockpiles and fuel-switching capabilities. China has deployed administrative controls to stabilize domestic fuel prices and restrict refined product exports. By contrast, parts of South and Southeast Asia are more vulnerable. India has introduced tax relief and prioritized gas allocation, while countries such as the Philippines and Vietnam have declared energy emergencies and rolled out conservation measures. Several ASEAN (the Association of Southeast Asian Nations) economies have even implemented partial work-from-home policies to curb fuel consumption.
Broader economic spillovers intensify Beyond energy markets, the disruption

Microsoft will invest $80B in AI data centers in fiscal 2025
And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

John Deere unveils more autonomous farm machines to address skilled labor shortage
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet the non-tech company has become a regular at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

2025 playbook for enterprise AI success, from agents to evals
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
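The LLM-as-judge idea mentioned above can be sketched simply: several cheap models grade the same output, and a majority vote decides. The call_model function and model names below are hypothetical stand-ins with canned responses so the sketch runs; a real version would call a provider API:

```python
from collections import Counter

def call_model(model: str, prompt: str) -> str:
    # Hypothetical stand-in for a real provider API call; returns canned
    # verdicts so this sketch is self-contained and runnable.
    canned = {"model-a": "PASS", "model-b": "PASS", "model-c": "FAIL"}
    return canned[model]

def judge(prompt: str, models: list[str]) -> str:
    """Ask several models to grade an output and take the majority verdict."""
    votes = Counter(call_model(m, prompt) for m in models)
    return votes.most_common(1)[0][0]

verdict = judge("Grade this draft answer as PASS or FAIL: ...", 
                ["model-a", "model-b", "model-c"])
print(verdict)  # PASS
```

As model prices fall, polling three or more judges this way becomes cheap enough to run on every agent output, which is the economic point Witteveen is making.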

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Three Aberdeen oil company headquarters sell for £45m
Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024. Trio of buildings snapped up London-headquartered EEH Ventures was founded in 2013 and owns a number of residential properties, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. The Aberdeen headquarters of Taqa. Image: CBRE. The North Sea headquarters of Middle East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year it is planning to exit the North Sea by the end of 2029, blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030. Aberdeen big deals The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. Aberdeen city centre. Image: Shutterstock. Hammerson, who also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

2025 ransomware predictions, trends, and how to prepare
Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks.
Top Ransomware Predictions for 2025:
● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices, which sound more and more realistic by adopting local accents and dialects, to enhance credibility and success rates.
● The Trifecta of Social Engineering Attacks: Vishing, ransomware and data exfiltration. Sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks, preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny.
● Targeted Industries Under Siege: Manufacturing, healthcare, education, and energy will remain primary targets, with no slowdown in attacks expected.
● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days.
● Ransomware Payouts Are on the Rise: In 2025, ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups specializing in designated attack tactics and collaborating through a sophisticated profit-sharing model using ransomware-as-a-service.
To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies.
● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats. ● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops

Desalination plants in the Middle East are increasingly vulnerable
EXECUTIVE SUMMARY MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here. As the conflict in Iran has escalated, a crucial resource is under fire: the desalination technology that supplies water across much of the region. In early March, Iran’s foreign minister accused the US of attacking a desalination plant on Qeshm Island in the Strait of Hormuz and disrupting the water supply to nearly 30 villages. (The US denied responsibility.) In the weeks since, both Bahrain and Kuwait have reported damage to desalination plants and blamed Iran, though Iran also denied responsibility. In late March, President Donald Trump threatened the destruction of “possibly all desalinization plants” in Iran if the Strait of Hormuz was not reopened. Since then, he’s escalated his threats against Iran, warning of plans to attack other crucial civilian infrastructure like power plants and bridges.
Countries in the Middle East, particularly the Gulf states, rely on the technology to turn salt water into fresh water for farming, industry, and—crucially—drinking. The mounting attacks and threats to date highlight just how vital the industry is to the region—a situation made even more precarious by rising temperatures and extreme weather driven by climate change. Right now, 83% of the Middle East is under extremely high water stress, says Liz Saccoccia, a water security associate at the World Resources Institute. Future projections suggest that’s going to increase to about 100% by 2050, she adds: “This is a continuing trend, and it’s getting worse, not better.”
Here’s a look at desalination technology in the Middle East and what wartime threats to the critical infrastructure could mean for people in the region.

A vital resource
Desalination technology has helped provide water supplies in the Middle East since the early 20th century and became widespread in the 1960s and 1970s. There are two major categories of desalination plants. Thermal plants use heat to evaporate water, leaving salt and other impurities behind; the vapor can then be condensed into usable fresh water. The alternative is membrane-based technology like reverse osmosis, which pushes water through membranes with pores so tiny that salt can’t get through. Early desalination plants in the Middle East were the thermal type, burning fossil fuels to evaporate water. This technique is incredibly energy-intensive, and over time, processes that rely on filters became the dominant choice; many reverse osmosis plants still rely on fossil fuels, but they’re more efficient. Membrane technologies have made up essentially all new desalination capacity in recent years; the last major thermal plant built in the Gulf came online in 2018. Since then, membrane technologies have added more than 15 million cubic meters of daily capacity, enough to supply water to millions of people. Capacity has expanded quickly in recent years; between 2006 and 2024, countries across the Middle East collectively spent over $50 billion building and upgrading desalination facilities, and nearly that much operating them. Today, there are nearly 5,000 desalination plants operational across the Middle East. And looking ahead, growth is continuing. Between 2024 and 2028, daily capacity is expected to grow from about 29 million cubic meters to 41 million cubic meters.
Uneven vulnerabilities
Some countries rely on the technology more than others. Iran, for example, uses desalination for about 3% of its municipal fresh water. The country has access to groundwater and some surface water, including rivers, though these resources are being stretched thin by agriculture and extreme drought. Other nations in the region, particularly the Gulf countries (Bahrain, Qatar, Kuwait, the United Arab Emirates, Saudi Arabia, and Oman), have much more limited water resources and rely heavily on desalination. Across these six nations, all but the UAE get more than half their drinking water from desalination, and for Bahrain, Qatar, and Kuwait the figure is more than 90%. “The Gulf countries are much, much more vulnerable to attacks on their desalination plants than Iran is,” says David Michel, a senior associate in the global food and water security program at the Center for Strategic and International Studies. There are thousands of desalination facilities across the region, so the system wouldn’t collapse if a small number were taken offline, Michel says. However, in recent years there’s been a trend toward larger, more centralized plants. The average desalination plant is about 10 times larger than it was 15 years ago, according to data from the International Energy Agency. The largest desalination plants today can produce 1 million cubic meters of water daily, enough for hundreds of thousands of people. Taking one or more of these massive facilities offline could have a significant effect on the system, Michel says.

Escalating threats
Desalination facilities are quite linear, meaning there are multiple steps and pieces of equipment that work in sequence, and the failure of a component in that chain can take an entire facility down. Attacks on water inlets, transportation networks, and power supplies can also disrupt the system, Michel says.
During the Gulf War in 1991, Iraqi forces pumped oil into the Gulf, contaminating the water and shutting down desalination plants in Kuwait. The facilities are also generally located close to other targets in this conflict. Desalination is incredibly energy intensive, so about three-quarters of facilities in the region are next to power plants. Trump has repeatedly threatened power plants in Iran. In response, Iran’s military has said that if civilian targets are hit, the country will respond with strikes that are “much more devastating and widespread.” Other governments and organizations, including the United Nations, the European Union, and the Red Cross, have broadly condemned threats to infrastructure as illegal.
But war isn’t the only danger facing these plants, even if it is the most immediate. Some studies have suggested that global warming could strengthen cyclones in the region, and these extreme weather events could force shutdowns or damage equipment. Water pollution could also cause shutdowns. Oil spills, whether accidental or intentional, as in the case of the Gulf War, can wreak havoc. And in 2009, a red algae bloom closed desalination plants in Oman and the United Arab Emirates for weeks. The algae fouled membranes and blocked the plants from being able to take water in from the Persian Gulf and the Gulf of Oman.
Desalination facilities could become more resilient to threats in the future, and they may need to as their importance continues to grow. There’s increasing interest in running desalination facilities at least partially on solar power, which could help reduce dependence on the oil that powers most facilities today. The Hassyan seawater desalination project in the UAE, currently under construction, would be the largest reverse osmosis plant in the world to operate solely with renewable energy. Another way to increase resilience is for countries to build up more strategic water storage to meet demand. Qatar recently issued new policies that aim to improve management and storage of desalinated water, for example. Countries could also work together to invest in shared infrastructure and policies that help strengthen the water supply throughout the region. Preparedness, resilience, and cooperation will be key for the Middle East broadly as critical infrastructure, including the water supply, is increasingly under threat. “The longer the conflict goes on, the more likely we’ll see significant water infrastructure damage,” says Ginger Matchett, an assistant director at the Atlantic Council. “What worries me is that after this war ends, some of the lessons will show how water can be weaponized more strategically than previously imagined.”

Enabling agent-first process redesign
In association with the Deloitte Microsoft Technology Practice

Unlike static, rules-based systems, AI agents can learn, adapt, and optimize processes dynamically. As they interact with data, systems, people, and other agents in real time, AI agents can execute entire workflows autonomously. But unlocking their potential requires redesigning processes around agents rather than bolting them onto fragmented legacy workflows using traditional optimization methods. Companies must become agent first. In an agent-first enterprise, AI systems operate processes while humans set goals, define policy constraints, and handle exceptions. “You need to shift the operating model to humans as governors and agents as operators,” says Scott Rodgers, global chief architect and U.S. CTO of the Deloitte Microsoft Technology Practice.
The agent-first imperative
With technology budgets for AI expected to increase more than 70% over the next two years, AI agents, powered by generative AI, are poised to fundamentally transform organizations and achieve results beyond traditional automation. These initiatives have the potential to produce significant performance gains while shifting humans toward higher-value work. AI is advancing so quickly that static approaches to task automation will likely only produce incremental gains. Because legacy processes aren’t built for autonomous systems, AI agents require machine-readable process definitions, explicit policy constraints, and structured data flows, according to Rodgers.
Further complicating matters, many organizations don’t understand the full economic drivers of their business, such as cost to serve and per-transaction costs. As a result, they have trouble prioritizing agents that can create the most value and instead focus on flashy pilots. To achieve structural change, executives should think differently: rather than chasing pilots, companies must orchestrate outcomes faster than competitors. “The real risk isn’t that AI won’t work—it’s that competitors will redesign their operating models while you’re still piloting agents and copilots,” says Rodgers. “Nonlinear gains come when companies create agent-centric workflows with human governance and adaptive orchestration.” Routine and repetitive tasks are increasingly handled automatically, freeing employees to focus on higher-value, creative, and strategic work. This shift improves operational efficiency, fosters stronger collaboration, and generates faster decision-making, helping organizations modernize the workplace without sacrificing enterprise security. Download the article.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

The Download: AI’s impact on jobs, and data centers in space
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

The one piece of data that could actually shed light on your job and AI
Within Silicon Valley’s orbit, an AI-fueled jobs apocalypse is spoken about as a given. Now even economists who have downplayed the threat are coming around to the idea. Alex Imas, based at the University of Chicago, is one of them. He believes that any plan to address AI’s impact will depend on collecting one vital piece of data: price elasticity. Imas argues that “we need a Manhattan Project” for this. Read the full story to find out why.
—James O’Donnell This article is from The Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.
Four things we’d need to put data centers in space
In January, Elon Musk’s SpaceX applied to launch up to 1 million data centers into Earth’s orbit. The goal? To fully unleash the potential of AI, without triggering an environmental crisis on Earth. SpaceX is among a growing list of tech firms pursuing orbital computing infrastructure. But can their plans really work? Here are four must-haves for making space-based data centers a reality. —Tereza Pultarova
This story is part of MIT Technology Review Explains, our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Trump has again proposed major cuts to US science and tech spending
He wants to slash nearly every science-focused agency. (Ars Technica)
+ If Trump gets his way, the US could face a costly brain drain. (NYT $)
+ Top research talent is already fleeing the country. (Guardian)
+ Basic science deserves our boldest investment. (MIT Technology Review)
2 Sam Altman lobbied against AI regulations he publicly welcomed
A bombshell report reveals many OpenAI insiders don’t trust him. (The New Yorker $)
+ Some have called him a sociopath. (Futurism)
+ OpenAI’s CFO fears it won’t be IPO-ready this year. (The Information $)
+ A war over AI regulation is brewing in the US. (MIT Technology Review)
3 NASA’s Artemis II has broken humanity’s all-time distance record
The astronauts have flown farther than any humans before them. (BBC)
+ Their mission includes MIT-developed technology. (Axios)
4 Chinese tech firms are selling intel “exposing” US forces
It comes from combining AI with open-source data. (WP $)
+ AI is turning the Iran conflict into theater. (MIT Technology Review)
5 War is pushing countries to ditch hyperscalers
Driven by Iran naming tech giants as military targets. (Rest of World)
+ No one wants a data center in their backyard. (MIT Technology Review)
6 OpenAI, Anthropic, and Google have united against China’s AI copying
They’re sharing information on “adversarial distillation.” (Bloomberg $)
7 Anduril and Impulse Space are working on Trump’s “Golden Dome”
They’re developing space-based missile tracking for the project. (Gizmodo)
8 OpenAI has urged California to probe Elon Musk’s “anti-competitive behavior”
It accuses Musk of trying to “take control of the future of AGI.” (Reuters $)
+ And claims he coordinated attacks with Mark Zuckerberg. (CNBC)
+ A former Tesla president has revealed how he survived working for Musk. (WP $)
9 DeepSeek’s new AI model will run on Huawei chips
It’s expected to launch in the next few weeks. (The Information $)
10 Memes have nuked our culture
Internet “brain rot” has escaped our phones to take over everything. (NYT $)
Quote of the day
“I must say, it was actually quite nice.”
—Astronaut Victor Glover tells President Donald Trump what it was like when Artemis II was out of communication with the rest of humanity, The New York Times reports.
One More Thing
Inside the controversial tree farms powering Apple’s carbon-neutral goal
In 2020, Apple set a goal to become net zero by the end of the decade. To hit that target, the company is offsetting its emissions by planting millions of eucalyptus trees in Brazil. Apple is betting that the strategy will lead to a greener future. But critics warn that the industrial tree farms will do more harm than good. Find out why the plans have sparked a backlash. —Gregory Barber
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ Japan’s automated bike garage is a cyclist’s dream come true.
+ This deep dive into bird behavior reveals the secrets of their dining habits. (Big thanks to reader Terry Gordon for the find!)
+ The first photo from the Artemis astronauts vividly captures the glow of our atmosphere.
+ There’s a new contender for the world’s most gorgeous website: RobertDeNiro.com.

The one piece of data that could actually shed light on your job and AI
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. Within Silicon Valley’s orbit, an AI-fueled jobs apocalypse is spoken about as a given. The mood is so grim that a societal impacts researcher at Anthropic, responding Wednesday to a call for more optimistic visions of AI’s future, said there might be a recession in the near term and a “breakdown of the early-career ladder.” Her less-measured colleague Dario Amodei, the company’s CEO, has called AI “a general labor substitute for humans” that could do all jobs in less than five years. And those ideas are not just coming from Anthropic, of course. These conversations have unsurprisingly left many workers in a panic (and are probably contributing to support for efforts to entirely pause the construction of data centers, some of which gained steam last week). The panic isn’t being helped by lawmakers, none of whom have articulated a coherent plan for what comes next. Even economists who have cautioned that AI has not yet cut jobs and may not result in a cliff ahead are coming around to the idea that it could have a unique and unprecedented impact on how we work.
Alex Imas, based at the University of Chicago, is one of those economists. He shared two things with me when we spoke on Friday morning: a blunt assessment that our tools for predicting what this will look like are pretty abysmal, and a “call to arms” for economists to start collecting the one type of data that could make a plan to address AI in the workforce possible at all. On our abysmal tools: consider the fact that any job is made up of individual tasks. One part of a real estate agent’s job, for example, is to ask clients what sort of property they want to buy. The US government chronicled thousands of these tasks in a massive catalogue first launched in 1998 and updated regularly since then. This was the data that researchers at OpenAI used in December to judge how “exposed” a job is to AI (they found a real estate agent to be 28% exposed, for example). Then in February, Anthropic used this data in its analysis of millions of Claude conversations to see which tasks people are actually using its AI to complete and where the two lists overlapped.
But knowing the AI exposure of tasks leads to an illusory understanding of how much a given job is at risk, Imas says. “Exposure alone is a completely meaningless tool for predicting displacement,” he told me. Sure, it is illustrative in the gloomiest case—for a job in which literally every task could be done by AI with no human direction. If it costs less for an AI model to do all those tasks than what you’re paid—which is not a given, since reasoning models and agentic AI can rack up quite a bill—and it can do them well, the job likely disappears, Imas says. This is the oft-mentioned case of the elevator operator from decades ago; maybe today’s parallel is a customer service agent solely doing phone call triage. But for the vast majority of jobs, the case is not so simple. And the specifics matter, too: Some jobs are likely to have dark days ahead, but knowing how and when this will play out is hard to answer when only looking at exposure. Take writing code, for example. Someone who builds premium dating apps, let’s say, might use AI coding tools to create in one day what used to take three days. That means the worker is more productive. The worker’s employer, spending the same amount of money, can now get more output. So then will the employer want more employees or fewer? This is the question that Imas says should keep any policymaker up at night, because the answer will change depending on the industry. And we are operating in the dark. In this coder’s case, these efficiencies make it possible for dating apps to lower prices. (A skeptic might expect companies to simply pocket the gains, but in a competitive market, they risk being undercut if they do.) These lower prices will always drive some increase in demand for the apps. But how much? If millions more people want it, the company might grow and ultimately hire more engineers to meet this demand. 
But if demand barely ticks up—maybe the people who don’t use premium dating apps still won’t want them even at a lower price—fewer coders are needed, and layoffs will happen. Repeat this hypothetical across every job with tasks that AI can do, and you have the most pressing economic question of our time: the specifics of price elasticity, or how much demand for something changes when its price changes. And this is the second part of what Imas emphasized last week: We don’t currently have this data across the economy. But we could. We do have the numbers for grocery items like cereal and milk, Imas says, because the University of Chicago partners with supermarkets to get data from their price scanners. But we don’t have such figures for tutors or web developers or dietitians (all jobs found to have “exposure” to AI, by the way). Or at least not in a way that’s been widely compiled or made accessible to researchers; sometimes it’s scattered across private companies or consultancies. “We need, like, a Manhattan Project to collect this,” Imas says. And we don’t need it just for jobs that could obviously be affected by AI now: “Fields that are not exposed now will become exposed in the future, so you just want to track these statistics across the entire economy.” Getting all this information would take time and money, but Imas makes the case that it’s worth it; it would give economists the first realistic look at how our AI-enabled future could unfold and give policymakers a shot at making a plan for it.
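Imas’s coder hypothetical boils down to a single number. Here is a minimal sketch of that calculation; the prices and subscriber counts are entirely invented for illustration (the article gives no figures):

```python
# Hypothetical sketch of the price-elasticity question Imas raises.
# All prices and quantities below are invented for illustration.

def arc_elasticity(p0: float, p1: float, q0: float, q1: float) -> float:
    """Arc (midpoint) price elasticity of demand, as a positive number."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2)  # percentage change in quantity
    pct_p = (p1 - p0) / ((p0 + p1) / 2)  # percentage change in price
    return abs(pct_q / pct_p)

# AI coding tools cut costs, so a dating app's price drops from $30 to $20/month.
# Scenario A: subscribers jump from 100k to 180k.
# Scenario B: subscribers inch up from 100k to 110k.
elastic = arc_elasticity(30, 20, 100_000, 180_000)
inelastic = arc_elasticity(30, 20, 100_000, 110_000)

print(f"A: {elastic:.2f}, B: {inelastic:.2f}")  # → A: 1.43, B: 0.24
```

In scenario A the elasticity comes out above 1: the price cut grows demand more than proportionally, and the employer plausibly hires more engineers. In scenario B it is well below 1, the “layoffs” branch of the hypothetical. Compiling these numbers per industry is exactly the data collection Imas is calling for.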

AI is changing how small online sellers decide what to make
For years Mike McClary sold the Guardian LTE Flashlight, a heavy-duty black model, online through his small outdoor brand. The product, designed for brightness and durability, became one of his most popular items ever. Even after he stopped offering it around 2017, customers kept sending him emails asking where they could buy it. When McClary decided to revisit the Guardian flashlight in 2025, he didn’t begin the way he might have in the past, by combing through supplier listings and sending inquiries to factories. Instead, he opened Accio, an AI sourcing and researching tool on Alibaba.com. For small entrepreneurs in the US, deciding what to sell and where to make it has traditionally been a slow, labor-intensive process that can take months. Now that work is increasingly being done by AI tools like Accio, which help connect businesses with manufacturers in countries including China and India. Business owners and e-commerce experts told MIT Technology Review that these AI tools are making sourcing more accessible and significantly shortening the time it takes to go from product idea to launch. McClary, 51, who runs his business from his Illinois living room, has sold products ranging from leather conditioner to camping lights, including one rechargeable lantern that brought in half a million dollars. Like many small online merchants, he built his business by being extremely scrappy—spotting demand for a product, tweaking existing designs, finding a factory, doing modest marketing, and getting the goods in front of customers fast.
This time, though, he began by telling Accio about the flashlight’s original design, production cost, and profit margin. Then Accio suggested several changes, making it smaller and slightly less bright and switching its charging method to battery power. It also identified a manufacturer in Ningbo, China, that McClary said could cut the manufacturing cost from $17 to about $2.50 per unit. McClary took the process from there, contacting the supplier himself to discuss the revised design. Within a month, the new version of the Guardian flashlight was back up for sale on Amazon and on his brand’s website.
The new factory hunt
Although Alibaba is better known for owning Taobao, the biggest shopping site in China, its first business was Alibaba.com, the primary website that lists Chinese factories open for bulk orders. Placing an order with a manufacturer usually requires far more than clicking “Buy.” Sellers often spend days or weeks browsing listings, comparing suppliers’ reviews and manufacturing capacities, asking about minimum order quantities, requesting samples, and negotiating timelines and customization options. But Accio has gained significant momentum by changing how that sourcing gets done. Launched in 2024, Accio exceeded 10 million monthly active users in March 2026, according to the company. That means about one in five Alibaba users consults with AI about product sourcing. Accio’s interface looks a lot like ChatGPT or Claude: Users type a question into an empty box and choose between “fast” and “thinking” modes. But when asked about products, the tool returns more than text, offering charts, links, and visuals and asking follow-up questions to clarify the buyer’s needs. It then narrows the field to one or a handful of suppliers that appear capable of delivering. After that, the human work begins: Users still have to reach out to suppliers themselves and negotiate the details. Zhang Kuo, the president of Alibaba.com, told MIT Technology Review that the tool is built on multiple frontier models, including the company’s own Qwen series, a popular family of open-source large language models. The system is able to pull from the site’s millions of supplier profiles and is trained on 26 years of proprietary transaction data. For tasks like product research and sourcing analysis, the tool “blows it away” compared with general AI tools like ChatGPT, says Richard Kostick, CEO of the beauty brand 100% Pure. Many websites have tried using AI to assist shopping, but Alibaba has been one of the most aggressive.
In March, Eddie Wu, CEO of the site’s parent company Alibaba Group, told managers that integrating the company’s core services with Qwen’s AI capabilities is a top priority. During a Chinese New Year promotion of Qwen’s personal shopping AI agent, where the company gave away cash, customers placed 200 million orders, the firm says. Vincenzo Toscano, an e-commerce seller and consultant, recommended Accio to his clients before deciding to try it himself for a new sunglasses brand. He came in with a rough vision: a brand shaped by his Italian heritage, his personal style, and a boutique aesthetic. He says the AI helped turn that concept into something more concrete, suggesting materials, refining the look, and pointing to design ideas that felt current. But the tool has clear limits. McClary, who uses AI tools regularly, says Accio is strongest when it comes to product ideation, but less helpful on marketing questions such as advertising and social media outreach. To use it well, he says, buyers still need to challenge its recommendations, since some can be generic.
The rest of the business
As platforms become more AI-driven, manufacturers are adjusting too. Sally Yan, a representative at a makeup packaging company in Wuhan, China, says her firm has started writing more detailed product descriptions and adding information about its equipment and manufacturing experience on Alibaba.com because it suspects those details make its listings more likely to be surfaced by AI. Yan says manufacturers cannot tell whether an inquiry from a customer was generated or guided by AI, and that her firm is not using AI to negotiate pricing or product details. “AI agents are increasingly used by people to assist purchase decisions and even directly making transactions, and with clear data guardrails, they can become extremely useful,” says Jiaxin Pei, a research scientist at the Stanford Institute for Human-Centered AI, “but agents need to act transparently, securely, and in the customer’s best interest.” Pei says developers of these tools should disclose the data they collect and the incentives built into them to ensure that the marketplace remains fair. Zhang, of Alibaba.com, says Accio currently does not include advertising. Suppliers can pay for higher placement in Alibaba.com’s regular search results, but Zhang says Accio is “not integrated” with that system. “We haven’t had a clear answer in terms of how to monetize this tool,” he says. For now, users can pay for additional tokens to continue chatting with the agent after their free queries run out. Sellers say that while AI tools have made it easier to come up with ideas and get a business off the ground, they do not replace the core skills that make someone good at e-commerce.
McClary believes that even when sellers have access to the same market information, some are still better at making decisions, acting quickly, and actually delivering on orders. Those differences, he says, still go a long way. Toscano, the brand founder and e-commerce consultant, feels good about officially launching his new brand of sunglasses in just a few months: “We [small business owners] always have to bootstrap a lot of decisions. Deciding what to sell often comes down to an educated guess,” he says, “And we’re now in an era when making those decisions is easier than ever.”

Four things we’d need to put data centers in space
MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here. In January, Elon Musk’s SpaceX filed an application with the US Federal Communications Commission to launch up to one million data centers into Earth’s orbit. The goal? To fully unleash the potential of AI without triggering an environmental crisis on Earth. But could it work? SpaceX is the latest in a string of high-tech companies extolling the potential of orbital computing infrastructure. Last year, Amazon founder Jeff Bezos said that the tech industry will move toward large-scale computing in space. Google has plans to loft data-crunching satellites, aiming to launch a test constellation of 80 as early as next year. And last November Starcloud, a startup based in Washington State, launched a satellite fitted with a high-performance Nvidia H100 GPU, marking the first orbital test of an advanced AI chip. The company envisions orbiting data centers as large as those on Earth by 2030. Proponents believe that putting data centers in space makes sense. The current AI boom is straining energy grids and adding to the demand for water, which is needed to cool the computers. Communities in the vicinity of large-scale data centers worry about increasing prices for those resources as a result of the growing demand, among other issues.
In space, advocates say, the water and energy problems would be solved. In constantly illuminated sun-synchronous orbits, space-borne data centers would have uninterrupted access to solar power. At the same time, the excess heat they produce would be easily expelled into the cold vacuum of space. And with the cost of space launches decreasing, and mega-rockets such as SpaceX’s Starship promising to push prices even lower, there could be a point at which moving the world’s data centers into space makes sound business sense. Detractors, on the other hand, tell a different story and point to a variety of technological hurdles, though some say it’s possible they may be surmountable in the not-so-distant future. Here are four of the must-haves we’d need to make space-based data centers a reality.

A way to carry away heat
AI data centers produce a lot of heat. Space might seem like a great place to dispel that heat without using up massive amounts of water. But it’s not so simple. To get the power needed to run 24-7, a space-based data center would have to be in a constantly illuminated orbit, circling the planet from pole to pole, and never hide in Earth’s shadow. And in that orbit, the temperature of the equipment would never drop below 80 °C, which is way too hot for electronics to operate safely in the long term.
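A back-of-the-envelope Stefan-Boltzmann estimate shows the scale of the cooling problem. This is a simplified sketch, not a figure from the article: the emissivity value is assumed, solar absorption is ignored, and deep space is treated as a 0 K heat sink.

```python
# Rough radiator sizing via the Stefan-Boltzmann law: P = e * sigma * A * T^4.
# Assumptions (not from the article): emissivity 0.9, radiator held at 80 degC,
# deep-space sink approximated as 0 K, solar heating ignored.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w: float, temp_c: float = 80.0,
                     emissivity: float = 0.9) -> float:
    """One-sided radiating area needed to reject `power_w` watts at `temp_c`."""
    t_kelvin = temp_c + 273.15
    flux = emissivity * SIGMA * t_kelvin**4  # W rejected per m^2 of surface
    return power_w / flux

# A 1 MW facility (small by terrestrial data-center standards):
area = radiator_area_m2(1e6)
print(f"{area:.0f} m^2 of radiator at 80 degC")  # → 1260 m^2 of radiator at 80 degC
```

Even under these generous assumptions, a modest 1 MW load needs on the order of a thousand square meters of radiator, and a gigawatt-scale facility a thousand times more, which is why the heat-rejection hardware dominates these designs.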
Getting the heat out of such a system is surprisingly challenging. “Thermal management and cooling in space is generally a huge problem,” says Lilly Eichinger, CEO of the Austrian space tech startup Satellives. On Earth, heat dissipates mostly through the natural process of convection, which relies on the movement of gases and liquids like air and water. In the vacuum of space, heat has to be removed through the far less efficient process of radiation. Safely removing the heat produced by the computers, as well as what’s absorbed from the sun, requires large radiative surfaces. The bulkier the satellite, the harder it is to send all the heat inside it out into space. But Yves Durand, former director of technology at the European aerospace giant Thales Alenia Space, says that technology already exists to tackle the problem. The company previously developed a system for large telecommunications satellites that can pipe refrigerant fluid through a network of tubing using a mechanical pump, ultimately transferring heat from within a spacecraft to radiators on the exterior. Durand led a 2024 feasibility study on space-based data centers, which found that although challenges exist, it should be possible for Europe to put gigawatt-scale data centers (on par with the largest Earthbound facilities) into orbit before 2050. These would be considerably larger than those envisioned by SpaceX, featuring solar arrays hundreds of meters in size—larger than the International Space Station. Computer chips that can withstand a radiation onslaught The space around Earth is constantly battered by cosmic particles and lashed by solar radiation. On Earth’s surface, humans and their electronic devices are protected from this corrosive soup of charged particles by the planet’s atmosphere and magnetosphere. But the farther away from Earth you venture, the weaker that protection becomes. 
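A quick Stefan-Boltzmann estimate gives a sense of why radiator sizing dominates the design. The figures below are illustrative assumptions, not from the article: emissivity of 0.9, radiators at the ~80 °C equipment temperature mentioned above, and absorbed sunlight and view factors ignored. Even under those generous conditions, a gigawatt-class facility would need on the order of a square kilometer of radiator area.

```python
# Minimal radiator-sizing sketch using the Stefan-Boltzmann law.
# Assumptions (illustrative, not from the article): emissivity 0.9,
# radiators at 80 C, absorbed sunlight and view factors ignored.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_watts: float, temp_c: float, emissivity: float = 0.9) -> float:
    """Radiator area needed to reject `heat_watts` at temperature `temp_c`."""
    temp_k = temp_c + 273.15
    return heat_watts / (emissivity * SIGMA * temp_k**4)

area = radiator_area_m2(1e9, 80.0)  # a 1 GW, gigawatt-scale data center
print(f"{area / 1e6:.2f} km^2")     # roughly 1.26 km^2
```

Real designs would need substantially more area than this idealized floor, since sunlight absorbed by the spacecraft must be rejected too.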
Studies show that aircraft crews have a higher risk of developing cancer because of their frequent exposure to high radiation at cruising altitude, where the atmosphere is thin and less protective. Electronics in space are at risk of three types of problems caused by high radiation levels, says Ken Mai, a principal systems scientist in electrical and computer engineering at Carnegie Mellon University. Phenomena known as single-event upsets can cause bit flips and corrupt stored data when charged particles hit chips and memory devices. Over time, electronics in space accumulate damage from ionizing radiation that degrades their performance. And sometimes a charged particle can strike the component in a way that physically displaces atoms on the chip, creating permanent damage, Mai explains. Traditionally, computers launched to space had to undergo years of testing and were specifically designed to withstand the intense radiation present in Earth’s orbit. These space-hardened electronics are much more expensive, though, and their performance is also years behind the state-of-the-art devices for Earth-based computing. Launching conventional chips is a gamble. But Durand says cutting-edge computer chips use technologies that are by default more resistant to radiation than past systems. And in mid-March, Nvidia touted hardware, including a new GPU, that is “bringing AI compute to orbital data centers.” Nvidia’s head of edge AI marketing, Chen Su, told MIT Technology Review that “Nvidia systems are inherently commercial off the shelf, with radiation resilience achieved at the system level rather than through radiation‑hardened silicon alone.” He added that satellite makers increase the chips’ resiliency with the help of shielding, advanced software for error detection, and architectures that combine the consumer-grade devices with bespoke, hardened technologies.
Still, Mai says that the data-crunching chips are only one issue. The data centers would also need memory and storage devices, both of which are vulnerable to damage by excessive radiation. And operators would need the ability to swap things out or adapt when issues arise. The feasibility and affordability of using robots or astronaut missions for maintenance is a major question mark hanging over the idea of large-scale orbiting data centers. “You not only need to throw up a data center to space that meets your current needs; you need redundancy, extra parts, and reconfigurability, so when stuff breaks, you can just change your configuration and continue working,” says Mai. “It’s a very challenging problem because on one hand you have free energy and power in space, but there are a lot of disadvantages. It’s quite possible that those problems will outweigh the advantages that you get from putting a data center into space.” In addition to the need for regular maintenance, there’s also the potential for catastrophic loss. During periods of intense space weather, satellites can be flooded with enough radiation to kill all their electronics. The sun has just passed the most active phase of its 11-year cycle with relatively little impact on satellites. Still, experts warn that since the space age began, the planet has not experienced the worst the sun is capable of. Many doubt whether the low-cost new space systems that dominate Earth’s orbits today are prepared for that. A plan to dodge space debris Both large-scale orbiting data centers such as those envisioned by Thales Alenia Space and the mega-constellations of smaller satellites proposed by SpaceX give space sustainability experts a headache. The space around Earth is already quite crowded with satellites. Starlink satellites alone perform hundreds of thousands of collision avoidance maneuvers every year to dodge debris and other spacecraft.
The more stuff in space, the higher the likelihood of a devastating collision that would clutter the orbit with thousands of dangerous fragments. Large structures with hundreds of square meters of solar arrays would quickly suffer damage from small pieces of space debris and meteorites, which would over time degrade the performance of their solar panels and create more debris in orbit. Operating one million satellites in low Earth orbit, the region of space at the altitude of up to 2,000 kilometers, might be impossible to do safely unless all satellites in that area are part of the same network so they can communicate effectively to maneuver around each other, Greg Vialle, the founder of the orbital recycling startup Lunexus Space, told MIT Technology Review. “You can fit roughly four to five thousand satellites in one orbital shell,” Vialle says. “If you count all the shells in low Earth orbit, you get to a number of around 240,000 satellites maximum.” And spacecraft must be able to pass each other at a safe distance to avoid collisions, he says. “You also need to be able to get stuff up to higher orbits and back down to de-orbit,” he adds. “So you need to have gaps of at least 10 kilometers between the satellites to do that safely. Mega-constellations like Starlink can be packed more tightly because the satellites communicate with each other. But you can’t have one million satellites around Earth unless it’s a monopoly.”
On top of that, Starlink would likely want to regularly upgrade its orbiting data centers with more modern technology. Replacing a million satellites perhaps every five years would mean even more orbital traffic—and it could increase the rate of debris reentry into Earth’s atmosphere from around three or four pieces of junk a day to about one every three minutes, according to a group of astronomers who filed objections against SpaceX’s FCC application. Some scientists are concerned that reentering debris could damage the ozone layer and alter Earth’s thermal balance. Economical launch and assembly The longer hardware survives in orbit, the better the return on investment. But for orbital data centers to make economic sense, companies will have to find a relatively cheap way to get that hardware in orbit. SpaceX is betting on its upcoming Starship mega-rocket, which will be able to carry up to six times as much payload as the current workhorse, Falcon 9. The Thales Alenia Space study concluded that if Europe were to build its own orbital data centers, it would have to develop a similarly potent launcher.
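The astronomers’ reentry figure is easy to sanity-check with back-of-envelope arithmetic; the five-year replacement cycle is the assumption stated above, and everything else follows from it.

```python
# Sanity check on the quoted debris-reentry rate: replacing one million
# satellites on a five-year cycle, per the objections filed against
# SpaceX's FCC application.
SATELLITES = 1_000_000
REPLACEMENT_YEARS = 5
DAYS_PER_YEAR = 365.25

reentries_per_day = SATELLITES / (REPLACEMENT_YEARS * DAYS_PER_YEAR)
minutes_between = 24 * 60 / reentries_per_day

print(f"{reentries_per_day:.0f} reentries per day")   # ~548
print(f"one every {minutes_between:.1f} minutes")     # ~2.6, i.e. roughly every three minutes
```

That works out to well over a hundred times today’s rate of three or four pieces of junk a day.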
But launch is only part of the equation. A large-scale orbital data center won’t fit in a rocket—even a mega-rocket. It will need to be assembled in orbit. And that will likely require advanced robotic systems that do not exist yet. Various companies have conducted Earth-based tests with precursors of such systems, but they are still far from real-world use. Durand says that in the short term, smaller-scale data centers are likely to establish themselves as an integral part of the orbital infrastructure, by processing images from Earth-observing satellites directly in space without having to send them to Earth. That would be a huge help for companies selling insights from space, as many of these data sets are extremely large, and competition for opportunities to downlink them to Earth for processing via ground stations is growing. “The good thing with orbital data centers is that you can start with small servers and gradually increase and build up larger data centers,” says Durand. “You can use modularity. You can learn little by little and gradually develop industrial capacity in space. We have all the technology, and the demand for space-based data processing infrastructure is huge, so it makes sense to think about it.” Smaller facilities probably won’t do much to offset the strain that terrestrial data centers are placing on the planet’s water and electricity, though. That vision of the future might take decades to come to fruition, some critics think—if it even gets off the ground at all.

Desalination plants in the Middle East are increasingly vulnerable
EXECUTIVE SUMMARY: As the conflict in Iran has escalated, a crucial resource is under fire: the desalination technology that supplies water across much of the region. In early March, Iran’s foreign minister accused the US of attacking a desalination plant on Qeshm Island in the Strait of Hormuz and disrupting the water supply to nearly 30 villages. (The US denied responsibility.) In the weeks since, both Bahrain and Kuwait have reported damage to desalination plants and blamed Iran, though Iran also denied responsibility. In late March, President Donald Trump threatened the destruction of “possibly all desalinization plants” in Iran if the Strait of Hormuz was not reopened. Since then, he’s escalated his threats against Iran, warning of plans to attack other crucial civilian infrastructure like power plants and bridges.
Countries in the Middle East, particularly the Gulf states, rely on the technology to turn salt water into fresh water for farming, industry, and—crucially—drinking. The mounting attacks and threats to date highlight just how vital the industry is to the region—a situation made even more precarious by rising temperatures and extreme weather driven by climate change. Right now, 83% of the Middle East is under extremely high water stress, says Liz Saccoccia, a water security associate at the World Resources Institute. Future projections suggest that’s going to increase to about 100% by 2050, she adds: “This is a continuing trend, and it’s getting worse, not better.”
Here’s a look at desalination technology in the Middle East and what wartime threats to the critical infrastructure could mean for people in the region. A vital resource Desalination technology has helped provide water supplies in the Middle East since the early 20th century and became widespread in the 1960s and 1970s. There are two major categories of desalination plants. Thermal plants use heat to evaporate water, leaving salt and other impurities behind; the vapor can then be condensed into usable fresh water. The alternative is membrane-based technology like reverse osmosis, which pushes water through membranes with pores so tiny that salt can’t get through. Early desalination plants in the Middle East were thermal, burning fossil fuels to evaporate water. Because that technique is incredibly energy-intensive, processes that rely on filters became the dominant choice over time. Membrane technologies have made up essentially all new desalination capacity in recent years; the last major thermal plant built in the Gulf came online in 2018. Many reverse osmosis plants still rely on fossil fuels, but they’re more efficient. Since 2018, membrane technologies have added more than 15 million cubic meters of daily capacity, enough to supply water to millions of people. Capacity has expanded quickly: between 2006 and 2024, countries across the Middle East collectively spent over $50 billion building and upgrading desalination facilities, and nearly that much operating them. Today, there are nearly 5,000 desalination plants operational across the Middle East. And looking ahead, growth is continuing. Between 2024 and 2028, daily capacity is expected to grow from about 29 million cubic meters to 41 million cubic meters.
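To put those capacity figures in human terms, a rough conversion helps. The per-capita demand figure below (about 200 liters per person per day) is an illustrative assumption, not a number from the article, and it ignores water that goes to farming and industry.

```python
# Back-of-envelope: how many people could a given desalination capacity supply?
# Assumes ~200 liters (0.2 m^3) of municipal demand per person per day,
# an illustrative figure not taken from the article.
PER_CAPITA_M3_PER_DAY = 0.2

def people_supplied(daily_capacity_m3: float) -> float:
    """People served if all capacity went to municipal drinking water."""
    return daily_capacity_m3 / PER_CAPITA_M3_PER_DAY

# Capacity added by membrane plants since 2018, per the article
print(f"{people_supplied(15e6):,.0f}")  # ~75 million people
# Projected regional daily capacity in 2028
print(f"{people_supplied(41e6):,.0f}")  # ~205 million people
```

In practice the serviceable population is lower, since a large share of desalinated water goes to agriculture and industry rather than household taps.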
Uneven vulnerabilities Some countries rely on the technology more than others. Iran, for example, uses desalination for about 3% of its municipal fresh water. The country has access to groundwater and some surface water, including rivers, though these resources are being stretched thin by agriculture and extreme drought. Other nations in the region, particularly the Gulf countries (Bahrain, Qatar, Kuwait, the United Arab Emirates, Saudi Arabia, and Oman), have much more limited water resources and rely heavily on desalination. Across these six nations, all but the UAE get more than half their drinking water from desalination, and for Bahrain, Qatar, and Kuwait the figure is more than 90%. “The Gulf countries are much, much more vulnerable to attacks on their desalination plants than Iran is,” says David Michel, a senior associate in the global food and water security program at the Center for Strategic and International Studies. There are thousands of desalination facilities across the region, so the system wouldn’t collapse if a small number were taken offline, Michel says. However, in recent years there’s been a trend toward larger, more centralized plants. The average desalination plant is about 10 times larger than it was 15 years ago, according to data from the International Energy Agency. The largest desalination plants today can produce 1 million cubic meters of water daily, enough for hundreds of thousands of people. Taking one or more of these massive facilities offline could have a significant effect on the system, Michel says. Escalating threats Desalination facilities are quite linear, meaning there are multiple steps and pieces of equipment that work in sequence—and the failure of a component in that chain can take an entire facility down. Attacks on water inlets, transportation networks, and power supplies can also disrupt the system, Michel says. 
During the Gulf War in 1991, Iraqi forces pumped oil into the Gulf, contaminating the water and shutting down desalination plants in Kuwait. The facilities are also generally located close to other targets in this conflict. Desalination is incredibly energy-intensive, so about three-quarters of facilities in the region are next to power plants. Trump has repeatedly threatened power plants in Iran. In response, Iran’s military has said that if civilian targets are hit, the country will respond with strikes that are “much more devastating and widespread.” Other governments and organizations, including the United Nations, the European Union, and the Red Cross, have broadly condemned threats to civilian infrastructure as illegal.
But war isn’t the only danger facing these plants, even if it is the most immediate. Some studies have suggested that global warming could strengthen cyclones in the region, and these extreme weather events could force shutdowns or damage equipment. Water pollution could also cause shutdowns. Oil spills, whether accidental or intentional, as in the case of the Gulf War, can wreak havoc. And in 2009, a red algae bloom closed desalination plants in Oman and the United Arab Emirates for weeks. The algae fouled membranes and blocked the plants from being able to take water in from the Persian Gulf and the Gulf of Oman.
Desalination facilities could become more resilient to threats in the future, and they may need to as their importance continues to grow. There’s increasing interest in running desalination facilities at least partially on solar power, which could help reduce dependence on the oil that powers most facilities today. The Hassyan seawater desalination project in the UAE, currently under construction, would be the largest reverse osmosis plant in the world to operate solely with renewable energy. Another way to increase resilience is for countries to build up more strategic water storage to meet demand. Qatar recently issued new policies that aim to improve management and storage of desalinated water, for example. Countries could also work together to invest in shared infrastructure and policies that help strengthen the water supply through the region. Preparedness, resilience, and cooperation will be key for the Middle East broadly as critical infrastructure, including the water supply, is increasingly under threat. “The longer the conflict goes on, the more likely we’ll see significant water infrastructure damage,” says Ginger Matchett, an assistant director at the Atlantic Council. “What worries me is that after this war ends, some of the lessons will show how water can be weaponized more strategically than previously imagined.”

Enabling agent-first process redesign
In association with the Deloitte Microsoft Technology Practice Unlike static, rules-based systems, AI agents can learn, adapt, and optimize processes dynamically. As they interact with data, systems, people, and other agents in real time, AI agents can execute entire workflows autonomously. But unlocking their potential requires redesigning processes around agents rather than bolting them onto fragmented legacy workflows using traditional optimization methods. Companies must become agent first. In an agent-first enterprise, AI systems operate processes while humans set goals, define policy constraints, and handle exceptions. “You need to shift the operating model to humans as governors and agents as operators,” says Scott Rodgers, global chief architect and U.S. CTO of the Deloitte Microsoft Technology Practice.
The agent-first imperative With technology budgets for AI expected to increase more than 70% over the next two years, AI agents, powered by generative AI, are poised to fundamentally transform organizations and achieve results beyond traditional automation. These initiatives have the potential to produce significant performance gains, while shifting humans toward higher value work. AI is advancing so quickly that static approaches to task automation will likely only produce incremental gains. Because legacy processes aren’t built for autonomous systems, AI agents require machine-readable process definitions, explicit policy constraints, and structured data flows, according to Rodgers.
Further complicating matters, many organizations don’t understand the full economic drivers of their business, such as cost to serve and per-transaction costs. As a result, they have trouble prioritizing agents that can create the most value and instead focus on flashy pilots. To achieve structural change, executives should think differently. Companies must instead orchestrate outcomes faster than competitors. “The real risk isn’t that AI won’t work—it’s that competitors will redesign their operating models while you’re still piloting agents and copilots,” says Rodgers. “Nonlinear gains come when companies create agent-centric workflows with human governance and adaptive orchestration.” Routine and repetitive tasks are increasingly handled automatically, freeing employees to focus on higher value, creative, and strategic work. This shift improves operational efficiency, fosters stronger collaboration, and generates faster decision-making—helping organizations modernize the workplace without sacrificing enterprise security. Download the article. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Nvidia’s SchedMD acquisition puts open-source AI scheduling under scrutiny
Is the concern valid? Dr. Danish Faruqui, CEO of Fab Economics, a US-based AI hardware and datacenter advisory, said the risk was real. “The skepticism that Nvidia may prioritize its own hardware in future software updates, potentially delaying or under-optimizing support for rivals, is a feasible outcome,” he said. As the primary developer, Nvidia now controls Slurm’s official development roadmap and code review process, Faruqui said, “which could influence how quickly competing chips are integrated on new development or continuous improvement elements.” Owning the control plane alongside GPUs and networking infrastructure such as InfiniBand, he added, allows Nvidia to create a tightly vertically integrated stack that can lead to what he described as “shallow moats, where advanced features are only available or performant on Nvidia hardware.” One concrete test of that, industry observers say, will be how quickly Nvidia integrates support for AMD’s next-generation chips into Slurm’s codebase compared with how quickly it integrates its own forthcoming hardware and networking technologies, such as InfiniBand. Does the Bright Computing precedent hold? Analysts point to Nvidia’s 2022 acquisition of Bright Computing as a reference point, saying the software became optimized for Nvidia chips in ways that disadvantaged users of competing hardware. Nvidia disputed that characterization, saying Bright Computing supports “nearly any CPU or GPU-accelerated cluster.” Rawat said the comparison was instructive but imperfect. “Nvidia’s acquisition of Bright Computing highlights its preference for vertical integration, embedding Bright tightly into DGX and AI Factory stacks rather than maintaining a neutral, multi-vendor orchestration role,” he said. “This reflects a broader strategic pattern — Nvidia seeks to control the full-stack AI infrastructure experience.”
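For readers unfamiliar with Slurm, the scheduler at issue: jobs request accelerators through generic resource (GRES) declarations, and the GPU type strings are defined by each site’s configuration rather than hard-coded, which is exactly why neutrality of the scheduler’s development matters. A minimal, hypothetical batch script is sketched below; the partition, GPU type, and script names are invented for illustration.

```shell
#!/bin/bash
# Hypothetical Slurm batch script illustrating GRES-based GPU scheduling.
# Partition name, GPU type strings, and train.py are invented examples.
#SBATCH --job-name=train-model
#SBATCH --partition=gpu        # site-defined partition name
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gres=gpu:4           # 4 GPUs per node; sites can also name types,
                               # e.g. gpu:a100:4 or gpu:mi300:4
#SBATCH --time=08:00:00

srun python train.py
```

Which GPU types a cluster can advertise here, and how well the scheduler optimizes placement for non-Nvidia hardware, depends on the Slurm codebase that Nvidia now controls.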

The Download: AI’s impact on jobs, and data centres in space
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. The one piece of data that could actually shed light on your job and AI Within Silicon Valley’s orbit, an AI-fueled jobs apocalypse is spoken about as a given. Now even economists who have downplayed the threat are coming around to the idea. Alex Imas, based at the University of Chicago, is one of them. He believes that any plan to address AI’s impact will depend on collecting one vital piece of data: price elasticity. Imas argues that “we need a Manhattan Project” for this. Read the full story to find out why.
—James O’Donnell This article is from The Algorithm, our weekly newsletter giving you the inside track on all things AI. Sign up to receive it in your inbox every Monday.
Four things we’d need to put data centers in space In January, Elon Musk’s SpaceX applied to launch up to 1 million data centers into Earth’s orbit. The goal? To fully unleash the potential of AI—without triggering an environmental crisis on Earth. SpaceX is among a growing list of tech firms pursuing orbital computing infrastructure. But can their plans really work? Here are four must-haves for making space-based data centers a reality. —Tereza Pultarova This story is part of MIT Technology Review Explains, our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.
The must-reads I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Trump has again proposed major cuts to US science and tech spending He wants to slash nearly every science-focused agency. (Ars Technica)
+ If Trump gets his way, the US could face a costly brain drain. (NYT $)
+ Top research talent is already fleeing the country. (Guardian)
+ Basic science deserves our boldest investment. (MIT Technology Review)
2 Sam Altman lobbied against AI regulations he publicly welcomed A bombshell report reveals many OpenAI insiders don’t trust him. (The New Yorker $)
+ Some have called him a sociopath. (Futurism)
+ OpenAI’s CFO fears it won’t be IPO-ready this year. (The Information $)
+ A war over AI regulation is brewing in the US. (MIT Technology Review)
3 NASA’s Artemis II has broken humanity’s all-time distance record The astronauts have flown farther than any humans before them. (BBC)
+ Their mission includes MIT-developed technology. (Axios)
4 Chinese tech firms are selling intel “exposing” US forces It comes from combining AI with open-source data. (WP $)
+ AI is turning the Iran conflict into theater. (MIT Technology Review)
5 War is pushing countries to ditch hyperscalers Driven by Iran naming tech giants as military targets. (Rest of World)
+ No one wants a data center in their backyard. (MIT Technology Review)
6 OpenAI, Anthropic, and Google have united against China’s AI copying They’re sharing information on “adversarial distillation.” (Bloomberg $)
7 Anduril and Impulse Space are working on Trump’s “Golden Dome” They’re developing space-based missile tracking for the project. (Gizmodo)
8 OpenAI has urged California to probe Elon Musk’s “anti-competitive behavior.” It accuses Musk of trying to “take control of the future of AGI.” (Reuters $)
+ And claims he coordinated attacks with Mark Zuckerberg. (CNBC)
+ A former Tesla president has revealed how he survived working for Musk. (WP $)
9 DeepSeek’s new AI model will run on Huawei chips It’s expected to launch in the next few weeks. (The Information $)
10 Memes have nuked our culture Internet “brain rot” has escaped our phones to take over everything. (NYT $)
Quote of the day “I must say, it was actually quite nice.” —Astronaut Victor Glover tells President Donald Trump what it was like when Artemis II was out of communication with the rest of humanity, The New York Times reports.
One More Thing PABLO ALBARENGA Inside the controversial tree farms powering Apple’s carbon-neutral goal In 2020, Apple set a goal to become net zero by the end of the decade. To hit that target, the company is offsetting its emissions by planting millions of eucalyptus trees in Brazil. Apple is betting that the strategy will lead to a greener future. But critics warn that the industrial tree farms will do more harm than good. Find out why the plans have sparked a backlash. —Gregory Barber
We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)
+ Japan’s automated bike garage is a cyclist’s dream come true.
+ This deep dive into bird behavior reveals the secrets of their dining habits. (Big thanks to reader Terry Gordon for the find!)
+ The first photo from the Artemis astronauts vividly captures the glow of our atmosphere.
+ There’s a new contender for the world’s most gorgeous website: RobertDeNiro.com.

Cisco: AI simplifies wireless operations but also taxes legacy Wi-Fi networks
Additionally, 28% of organizations surveyed have already deployed AI workloads, and that percentage is estimated to climb to 79% by 2027. Some 29% will be in pilot stage, and 22% are planning deployment over the next 12 months. “This rapid trajectory further highlights that AI deployment is indeed the defining use case for next-generation wireless infrastructure,” the Cisco study asserted. “While core use cases such as wireless for physical security are already widely deployed, the next phase of wireless growth is being driven by emerging applications that depend on high-performing, resilient networks. Organizations are increasingly piloting or planning wireless investments to support autonomous systems and robotics, smart facilities and energy management, space analytics, and immersive collaboration,” the study found. The 6 GHz opportunity One wireless advancement that is being used to handle AI and other capacity challenges is the 6 GHz band added by Wi-Fi 6E, “and utilization further improved with Wi-Fi 7,” Cisco stated. “Organizations are using it to solve capacity and congestion issues (46%), enable high-bandwidth applications (32%), and support AI workloads (31%). And the research shows that those adopting this added spectrum are seeing strong benefits.” “Organizations already deploying 6 GHz show almost double the rate of AI applications and workloads (45%) compared to non-adopters (26%),” the study reported. “It’s no surprise that Cisco telemetry highlights a 60% increase in 6 GHz clients going live in 2025. Wireless professionals are seeing the 6 GHz opportunity and are helping their organizations,” the study found.

Hyperscaler backlogs show growing demand for AI infrastructure
Amazon’s backlog is now $244 billion, up 40% year over year. “There’s a lot of demand for AWS right now, in the AI space and also in the core AWS space,” Jassy said. CEO Sundar Pichai said that Google plans to spend around $180 billion on capex in 2026. And Google also reported a backlog of $240 billion in February. “The number of deals in 2025 over a billion dollars surpassed the previous three years combined,” said Pichai. The driver for this growth? AI spending. AI companies need more compute capacity to train more powerful models. And enterprises deploying AI are driving demand for inference. In survey after survey, the majority of companies say that they plan to increase their AI-related spending this year. According to a survey of nearly 2,400 executives that Boston Consulting Group released in January, companies will double their AI spending this year, from 0.8% to 1.7% of revenues. And more than 90% of CEOs say that they will continue to invest in AI at current or higher levels, even if the investments do not pay off in the next year.
Stay Ahead with the Paperboy Newsletter
Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on a week of news.