Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Bitcoin

Datacenter

Energy


Featured Articles

What new legal challenges mean for the future of US offshore wind

For offshore wind power in the US, the new year is bringing new legal battles. On December 22, the Trump administration announced it would pause the leases of five wind farms currently under construction off the US East Coast. Developers were ordered to stop work immediately. The cited reason? National security, specifically concerns that turbines can cause radar interference. But that’s a known issue, and developers have worked with the government to deal with it for years. Companies have been quick to file lawsuits, and the court battles could begin as soon as this week. Here’s what the latest kerfuffle might mean for the struggling offshore wind industry in the US.
This pause affects $25 billion in investment in five wind farms: Vineyard Wind 1 off Massachusetts, Revolution Wind off Rhode Island, Sunrise Wind and Empire Wind off New York, and Coastal Virginia Offshore Wind off Virginia. Together, those projects had been expected to create 10,000 jobs and power more than 2.5 million homes and businesses. In a statement announcing the move, the Department of the Interior said that “recently completed classified reports” revealed national security risks, and that the pause would give the government time to work through concerns with developers. The statement specifically says that turbines can create radar interference (more on the technical details here in a moment).
Three of the companies involved have already filed lawsuits, and they’re seeking preliminary injunctions that would allow construction to continue. Orsted and Equinor (the developers for Revolution Wind and Empire Wind, respectively) told the New York Times that their projects went through lengthy federal reviews, which did address concerns about national security.

This is just the latest salvo from the Trump administration against offshore wind. On Trump’s first day in office, he signed an executive order stopping all new lease approvals for offshore wind farms. (That order was struck down by a judge in December.) The administration previously ordered Revolution Wind to stop work last year, also citing national security concerns. A federal judge lifted the stop-work order weeks later, after the developer showed that the financial stakes were high and that government agencies had previously found no national security issues with the project.

There are real challenges that wind farms introduce for radar systems, which are used in everything from air traffic control to weather forecasting to national defense operations. A wind turbine’s spinning blades can create complex signatures on radar, resulting in so-called clutter. Government reports, including a 2024 report from the Department of Energy and a 2025 report from the Government Accountability Office (an independent government watchdog), have pointed out this issue. “To date, no mitigation technology has been able to fully restore the technical performance of impacted radars,” as the DOE report puts it. However, there are techniques that can help, including software that removes the signatures of wind turbines. (Think of this as similar to how noise-canceling headphones work, but more complicated, as one expert told TechCrunch.) But the most widespread and helpful tactic, according to the DOE report, is collaboration between developers and the government. By working together to site and design wind farms strategically, the groups can ensure that the projects don’t interfere with government or military operations. The 2025 GAO report found that government officials, researchers, and offshore wind companies were collaborating effectively, and any concerns could be raised and addressed in the permitting process.

This and other challenges threaten an industry that could be a major boon for the grid. On the East Coast, where these projects are located, and in New England specifically, winter can bring tight supplies of fossil fuels and spiking prices because of high demand. It just so happens that offshore winds blow strongest in the winter, so new projects, including the five wrapped up in this fight, could be a major help during the grid’s greatest time of need.
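The noise-canceling analogy maps loosely onto basic signal processing. As a toy sketch only, with every signal and frequency invented for the example (operational radar mitigation is far more sophisticated), a notch filter can suppress a periodic, turbine-like component while leaving the rest of a return intact:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 1000.0                                    # sample rate, Hz (hypothetical)
t = np.arange(0, 1.0, 1 / fs)

target = np.sin(2 * np.pi * 37.0 * t)          # stand-in for a real target return
clutter = 0.8 * np.sin(2 * np.pi * 120.0 * t)  # periodic, turbine-like signature
received = target + clutter + 0.1 * np.random.randn(t.size)

# Notch out the known blade-pass frequency; Q sets how narrow the notch is.
b, a = iirnotch(w0=120.0, Q=30.0, fs=fs)
filtered = filtfilt(b, a, received)

def amplitude_at(signal, hz):
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    return np.abs(np.fft.rfft(signal))[np.argmin(np.abs(freqs - hz))]

# The 120 Hz clutter line collapses while the 37 Hz target survives.
print(f"clutter before: {amplitude_at(received, 120):.0f}, after: {amplitude_at(filtered, 120):.0f}")
print(f"target before:  {amplitude_at(received, 37):.0f}, after: {amplitude_at(filtered, 37):.0f}")
```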

One 2025 study found that if 3.5 gigawatts’ worth of offshore wind had been operational during the 2024-2025 winter, it would have lowered energy prices by 11%. (That’s the combined capacity of Revolution Wind and Vineyard Wind, two of the paused projects, plus two future projects in the pipeline.) Ratepayers would have saved $400 million. Before Donald Trump was elected, the energy consultancy BloombergNEF projected that the US would build 39 gigawatts of offshore wind by 2035. Today, that expectation has dropped to just 6 gigawatts. These legal battles could push it lower still. What’s hardest to wrap my head around is that some of the projects being challenged are nearly finished. The developers of Revolution Wind have installed all the foundations and 58 of 65 turbines, and they say the project is over 87% complete. Empire Wind is over 60% done and is slated to deliver electricity to the grid next year. To hit the pause button so close to the finish line is chilling, not just for current projects but for future offshore wind efforts in the US. Even if these legal battles clear up and more developers can technically enter the queue, why would they want to? Billions of dollars are at stake, and if there’s one word to describe the current state of the offshore wind industry in the US, it’s “unpredictable.” This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Read More »

Holes in Veeam Backup suite allow remote code execution, creation of malicious backup config files

Four flaws are addressed in the patch:

CVE-2025-59470 (with a CVSS score of 9) allows a Backup or Tape Operator to perform remote code execution (RCE) as the Postgres user by sending a malicious interval or order parameter.

CVE-2025-59469 (with a severity score of 7.2) allows a Backup or Tape Operator to write files as root.

CVE-2025-55125 (with a severity score of 7.2) allows a Backup or Tape Operator to perform RCE as root by creating a malicious backup configuration file.

CVE-2025-59468 (with a severity score of 6.7) allows a Backup Administrator to perform RCE as the Postgres user by sending a malicious password parameter.

The patch to version 13.0.1.1071 will be an “easy installation” that won’t be disruptive, Vanover said. As of Tuesday afternoon, Veeam hadn’t received reports of exploitation, he added. “The good news is, if a Veeam server is broken, we can create a new server right away – presumably with this patch installed – import the backups and carry on. The core data is completely unimpacted by this,” Vanover said. “The worst type of thing would be the [backup] environment isn’t working right or the Postgres database is messed up on the Veeam server, so jobs might not behave in a way one might expect.” In these cases, admins using the Veeam ONE monitoring and management suite would get an alert if, for example, a job was unable to connect to the backup server or backup jobs were failing. The four vulnerabilities being patched are less severe than some because an attacker, internal or external, would need valid credentials for the three specific roles, noted Johannes Ullrich, dean of research at the SANS Institute. On the other hand, he added, backup systems like Veeam are targets for attackers, in particular those who deploy ransomware, who often attempt to erase backups. “Backup systems …
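The “malicious order parameter” flaw is a classic injection pattern: a caller-supplied sort column interpolated into SQL can carry arbitrary statements to the database (on PostgreSQL, constructs like COPY ... TO PROGRAM can even run shell commands when the database role is privileged enough). A generic, hypothetical sketch of the vulnerability class and the standard defense, not Veeam’s actual code:

```python
# Generic illustration only (hypothetical code, not Veeam's): interpolating a
# caller-supplied "order" value into SQL lets an attacker smuggle arbitrary
# statements to the database.
ALLOWED_ORDER_COLUMNS = {"name", "created_at", "size"}

def build_query_unsafe(order: str) -> str:
    # BAD: order = "name; COPY (SELECT '') TO PROGRAM 'id'" would execute as
    # the database user on a sufficiently privileged PostgreSQL role.
    return f"SELECT * FROM backups ORDER BY {order}"

def build_query_safe(order: str) -> str:
    # GOOD: identifiers cannot be bound as query parameters, so a strict
    # whitelist is the standard defense for sort columns.
    if order not in ALLOWED_ORDER_COLUMNS:
        raise ValueError(f"invalid order column: {order!r}")
    return f"SELECT * FROM backups ORDER BY {order}"

print(build_query_safe("created_at"))  # fine
print(build_query_unsafe("name; DROP TABLE backups"))  # the injection risk
```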

Read More »

SoftBank, DigitalBridge, and Stargate: The Next Phase of OpenAI’s Infrastructure Strategy

OpenAI framed Stargate as an AI infrastructure platform: a mechanism to secure long-duration, frontier-scale compute across both training and inference by coordinating capital, land, power, and supply chain with major partners. When OpenAI announced Stargate in January 2025, the headline commitment was explicit: an intention to invest up to $500 billion over four to five years to build new AI infrastructure in the U.S., with $100 billion targeted for near-term deployment. The strategic backdrop in 2025 was straightforward. OpenAI’s model roadmap—larger models, more agents, expanded multimodality, and rising enterprise workloads—was driving a compute curve increasingly difficult to satisfy through conventional cloud procurement alone. Stargate emerged as a form of “control plane” for:

Capacity ownership and priority access, rather than simply renting GPUs.

Power-first site selection, encompassing grid interconnects, generation, water access, and permitting.

A broader partner ecosystem beyond Microsoft, while still maintaining a working relationship with Microsoft for cloud capacity where appropriate.

2025 Progress: From Launch to Portfolio Buildout

January 2025: Stargate Launches as a National-Scale Initiative

OpenAI publicly launched Project Stargate on Jan. 21, 2025, positioning it as a national-scale AI infrastructure initiative. At this early stage, the work was less about construction and more about establishing governance, aligning partners, and shaping a public narrative in which compute was framed as “industrial policy meets real estate meets energy,” rather than simply an exercise in buying more GPUs.

July 2025: Oracle Partnership Anchors a 4.5-GW Capacity Step

On July 22, 2025, OpenAI announced that Stargate had advanced through a partnership with Oracle to develop 4.5 gigawatts of additional U.S. data center capacity. The scale of the commitment marked a clear transition from conceptual ambition to site- and megawatt-level planning. A figure of this magnitude reshaped the narrative. At 4.5 GW, Stargate forced alignment across transformers, transmission upgrades, switchgear, long-lead cooling …

Read More »

Lenovo unveils purpose-built AI inferencing servers

There is also the Lenovo ThinkSystem SR650i, which offers high-density GPU computing power for faster AI inference and is designed for easy installation in existing data centers, working alongside existing systems. Finally, there is the Lenovo ThinkEdge SE455i for smaller edge locations such as retail outlets, telecom sites, and industrial facilities. Its compact design allows for low-latency AI inference close to where data is generated, and it is rugged enough to operate in temperatures ranging from -5°C to 55°C. All of the servers include Lenovo’s Neptune air- and liquid-cooling technology and are available through the TruScale pay-as-you-go pricing model. In addition to the new hardware, Lenovo introduced new AI Advisory Services with AI Factory Integration, which give customers access to professionals who identify, deploy, and manage best-fit AI inferencing servers. It also launched Premier Support Plus, a service that provides professional assistance with data center management, freeing up IT resources for higher-priority projects.

Read More »

Deploying a hybrid approach to Web3 in the AI era

In partnership with AIOZ Network

When the concept of “Web 3.0” first emerged about a decade ago, the idea was clear: Create a more user-controlled internet that lets you do everything you can now, except without servers or intermediaries to manage the flow of information. Where Web2, which emerged in the early 2000s, relies on centralized systems to store data and supply compute, all owned—and monetized by—a handful of global conglomerates, Web3 turns that structure on its head. Instead, data and compute are decentralized through technologies like blockchain and peer-to-peer networks. What was once a futuristic concept is quickly becoming a more concrete reality, even at a time when Web2 still dominates. Six out of ten Fortune 500 companies are exploring blockchain-based solutions, most taking a hybrid approach that combines traditional Web2 business models and infrastructure with the decentralized technologies and principles of Web3. Popular use cases include cloud services, supply chain management, and, most notably, financial services. In fact, at one point, the daily volume of transactions processed on decentralized finance exchanges exceeded $10 billion.
Gaining a Web3 edge

Among the advantages of Web3 for the enterprise are greater ownership and control of sensitive data, says Erman Tjiputra, founder and CEO of the AIOZ Network, which is building infrastructure for Web3 powered by decentralized physical infrastructure networks (DePIN): blockchain-based systems that govern physical infrastructure assets. More cost-effective compute is another benefit, as is enhanced security and privacy as the cyberattack landscape grows more hostile, he adds. And it could even help protect companies from outages caused by a single point of failure, which can lead to downtime, data loss, and lost revenue.
But perhaps the most exciting opportunity, says Tjiputra, is the ability to build and scale AI reliably and affordably. By leveraging a people-powered internet infrastructure, companies can far more easily access—and contribute to—shared resources like bandwidth, storage, and processing power to run AI inference, train models, and store data, all while using familiar developer tooling and open, usage-based incentives. “We’re in a compute crunch where requirements are insatiable, and Web3 creates this ability to benefit while contributing,” explains Tjiputra. In 2025, AIOZ Network launched a distributed compute platform and marketplace where developers and enterprises can access and monetize AI assets, and run AI inference or training on AIOZ Network’s more than 300,000 contributing devices. The model allows companies to move away from opaque datasets and models and scale flexibly, without centralized lock-in.

Overcoming Web3 deployment challenges

Despite the promise, it is still early days for Web3, and core systemic challenges are leaving senior leadership and developers hesitant about its applicability at scale. One hurdle is a lack of interoperability. The current fragmentation of blockchain networks creates a segregated ecosystem that makes it challenging to transfer assets or data between platforms. This often complicates transactions and introduces new security risks through the reliance on mechanisms such as cross-chain bridges: tools that allow asset transfers between platforms but that have been shown to be vulnerable to targeted attacks. “We have countless blockchains running on different protocols and consensus models,” says Tjiputra. “These blockchains need to work with each other so applications can communicate regardless of which chain they are on. This makes interoperability fundamental.” Regulatory uncertainty is also a challenge. Outdated legal frameworks can sit at odds with decentralized infrastructures, especially when it comes to compliance with data protection and anti-money-laundering regulations. “Enterprises care about verifiability and compliance as much as innovation, so we need frameworks where on-chain transparency strengthens accountability instead of adding friction,” Tjiputra says.

And this is compounded by user experience (UX) challenges, says Tjiputra. “The biggest setback in Web3 today is UX,” he says. “For example, in Web2, if I forget my bank username or password, I can still contact the bank, log in and access my assets. The trade-off in Web3 is that, should that key be compromised or lost, we lose access to those assets. So, key recovery is a real problem.”

Building a bridge to Web3

Although such systemic challenges won’t be solved overnight, by leveraging DePIN networks, enterprises can bridge the gap between Web2 and Web3 without making a wholesale switch. This can minimize risk while harnessing much of the potential. AIOZ Network’s own ecosystem includes capacity for media streaming, AI compute, and distributed storage that can be plugged into an existing Web2 tech stack. “You don’t need to go full Web3,” says Tjiputra. “You can start by plugging distributed storage into your workflow, test it, measure it, and see the benefits firsthand.” The AIOZ Storage solution, for example, offers scalable distributed object storage by leveraging the global network of contributor devices on AIOZ DePIN. It is also compatible with existing storage systems and commonly used web application programming interfaces (APIs). “Say we have a programmer or developer who uses Amazon S3 Storage or REST APIs, then all they need to do is just repoint the endpoints,” explains Tjiputra. “That’s it. It’s the same tools, it’s really simple. Even with media, with a single one-stop shop, developers can do transcoding and streaming with a simple REST API.” Built on Cosmos, a network of hundreds of different blockchains that can communicate with each other, and a standardized framework enabled by the Ethereum Virtual Machine (EVM), AIOZ Network has also prioritized interoperability. “Applications shouldn’t care which chain they’re on. Developers should target APIs without worrying about consensus mechanisms. That’s why we built on Cosmos and EVM—interoperability first.” This hybrid model, which allows enterprises to use both Web2 and Web3 advantages in tandem, underpins what Tjiputra sees as the longer-term ambition for the much-hyped next iteration of the internet. “Our vision is a truly peer-to-peer foundation for a people-powered internet, one that minimizes single points of failure through multi-region, multi-operator design,” says Tjiputra. “By distributing compute and storage across contributors, we gain both cost efficiency and end-to-end security by default.
“Ideally, we want to evolve the internet toward a more people-powered model, but we’re not there yet. We’re still at the starting point and growing.” Indeed, Web3 isn’t quite snapping at the heels of the world’s Web2 giants, but its commercial advantages in an era of AI have become much harder to ignore. And with DePIN bridging the gap, enterprises and developers can step into that potential while keeping one foot on surer ground.
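Tjiputra’s “repoint the endpoints” point matches how S3-compatible object stores generally work: existing S3 SDKs accept an endpoint override. A minimal sketch using boto3, where the endpoint URL, bucket name, and credentials are hypothetical placeholders rather than real AIOZ values:

```python
# Minimal sketch of "repointing the endpoint": an S3-compatible service can be
# used with standard S3 tooling by overriding endpoint_url. The URL, bucket,
# and credentials below are placeholders, not real AIOZ values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example-depin.net",  # hypothetical endpoint
    aws_access_key_id="YOUR_KEY",
    aws_secret_access_key="YOUR_SECRET",
)

# The calls are the same ones you would make against Amazon S3 itself.
s3.upload_file("report.pdf", "my-bucket", "report.pdf")
print(s3.list_objects_v2(Bucket="my-bucket").get("KeyCount"))
```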
To learn more from AIOZ Network, you can read the AIOZ Network Vision Paper. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Read More »


Wood Says Mideast Contract Wins Exceeded $1B in 2025

John Wood Group PLC said Tuesday it has won more than $1 billion in contracts across the Middle East this year, exceeding last year’s company record. “Wood has seen a near 20 percent increase in awards compared to 2024, with wins across United Arab Emirates, Iraq, Kingdom of Saudi Arabia, Bahrain, Kuwait, Oman and Qatar”, the Aberdeen, Scotland-based engineering and consulting company said in an online statement. Ellis Renforth, president of operations for Europe, Middle East and Africa at Wood, said, “This year we’ve delivered critical solutions across the Middle East to improve asset reliability and cut emissions”. “In 2026, we’ll build on this success by expanding our operations and maintenance services in the region. Our focus is on proven approaches to asset management and modifications that improve efficiency and reduce downtime – practical steps that strengthen energy security and decarbonization”, Renforth added. Stuart Turl, Wood vice president for Middle East consulting, said, “Decarbonization and digitalization remain central to how we support clients in the Middle East. This year, we launched our specialist Middle East Energy Transition and Digital & AI Hubs to further support clients in accelerating emissions reduction while unlocking efficiencies through AI-driven solutions”. “This in-region advisory enables practical pathways to carbon reduction while supporting national visions for a sustainable energy future. Delivery has already spanned initiatives such as minerals procurement, hydrogen production facilities and carbon capture and storage infrastructure”, Turl said. On May 27 Wood said it had secured a contract from TA’ZIZ, a joint venture of Abu Dhabi National Oil Co. (ADNOC) PJSC, to provide project management consultancy for the development of the UAE’s first methanol production facility, to rise in Al Ruwais Industrial City. “Construction will be completed by 2028 and the plant will be one of the largest methanol plants in the world, producing 1.8 million tonnes per year. It will be powered using the latest clean energy technology”, Wood noted. On June 10 Wood said it …

Read More »

EU to Scrap Combustion Engine Ban

The European Union is set to propose softening emissions rules for new cars, scrapping an effective ban on combustion engines following months of pressure from the automotive industry. The proposal will allow carmakers to slow the rollout of electric vehicles in Europe and aligns the region more closely with the US, where President Donald Trump is tearing up efficiency standards for cars put in place by the previous administration. Globally, automakers are struggling to make the shift profitable, with Ford Motor Co. announcing it will take $19.5 billion in charges tied to a sweeping overhaul of its EV business. The European step back – to be unveiled Tuesday – follows a global pullback from green policies as the economic realities of major transformations set in. Mounting trade tensions with the US and China are pushing Europe to further prioritize shoring up its own industry. Although the bloc is legally bound to reach climate neutrality by 2050, governments and companies are intensifying calls for more flexibility, warning that rigid targets could jeopardize economic stability. Under the new proposal, the European Commission will lower the requirements that would have halted sales of new gasoline and diesel-fueled cars starting in 2035, instead allowing a number of plug-in hybrids and electric vehicles with fuel-powered range extenders, according to people with knowledge of the matter. Tailpipe emissions will have to be reduced by 90 percent by the middle of the next decade, compared with the current goal of a 100 percent reduction, said the people, who asked not to be identified because talks on the proposal are private. The commission will set a condition that carmakers must compensate for the additional pollution by using low-carbon or renewable fuels or locally produced green steel. The European Commission declined to comment. The proposal is set to be adopted by EU commissioners on …

Read More »

Oil Sinks as Oversupply Pressures Intensify

West Texas Intermediate oil fell below $55 a barrel for the first time since February 2021 on signs that supply is outpacing demand, while progress in Ukraine peace talks could lead to a deal that may allow more Russian oil to flow onto global markets. US crude futures pared some losses, settling down 2.7% to $55.27. Brent, the global benchmark, fell 2.7% to settle at $58.92. Signs of weakness are proliferating across the supply side of the oil market, with Middle Eastern crude prices entering a bearish pattern known as contango early on Tuesday. The same had already happened with some barrels sold on the US Gulf Coast, with near-dated prices cheaper than contracts for delivery further out. On the WTI futures curve, the front-month contract was trading as little as 9 cents higher than the following month. The demand side looks similarly fragile. Elevated premiums for fuels like gasoline and diesel relative to crude, which supported prices last month, have eased. Meanwhile, weak job growth in the US signaled a potential slowdown in demand, adding further downward price pressure. While markets have been in a period of oversupply, a steady stream of geopolitical risks, along with the fact that significant oil supply has gone to stockpiles at sea or in China, has kept markets tight, said Rory Johnston, oil market researcher and founder of Commodity Context. “The market has been trending this way,” Johnston said. “It’s been wanting to sell off, flip into contango for six months now, but it just keeps being delayed from doing so.” Trend-following commodity advisers remained 100% short in both Brent and WTI on Tuesday, according to data from Bridgeton Research Group. Widespread short positioning means that bullish news could push markets higher as automated traders cover positions, Johnston said. “My base case expectation is …
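For readers unfamiliar with the term, contango simply means near-dated contracts are priced below later-dated ones (backwardation is the reverse). A toy sketch with invented prices, purely to illustrate the definition used above:

```python
# Toy illustration of futures curve structure: when the front-month contract
# trades below the next month, the curve is in contango; the reverse is
# backwardation. Prices here are invented for the example.
def curve_structure(front: float, next_month: float) -> str:
    spread = front - next_month
    if spread < 0:
        return f"contango (front ${abs(spread):.2f} below next month)"
    if spread > 0:
        return f"backwardation (front ${spread:.2f} above next month)"
    return "flat"

print(curve_structure(55.27, 55.18))  # hypothetical front vs. next-month WTI
```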

Read More »

Energy Department Grants Woodside Louisiana LNG Project Additional Time to Commence Exports

WASHINGTON – U.S. Secretary of Energy Chris Wright today signed an amendment order granting an additional 44 months for Woodside Energy to commence exports of liquefied natural gas (LNG) to non-free trade agreement (non-FTA) countries from the Woodside Louisiana LNG Project under construction in Calcasieu Parish, LA. Once fully constructed, the project will be capable of exporting up to 3.88 billion cubic feet per day (Bcf/d) of natural gas as LNG. Woodside Louisiana took final investment decision on its first phase earlier this year and has off-take agreements with Germany’s Uniper as well as with U.S. pipeline operator Williams, which will market natural gas through the Woodside Louisiana LNG project. “It is exciting to take this action to provide the needed runway for this project to fully take off and realize its potential in providing reliable and secure energy to the world,” said Kyle Haustveit, Assistant Secretary of the Office of Hydrocarbons and Geothermal Energy. “Thanks to President Trump’s leadership, the Department of Energy is redefining what it means to unleash American energy to strengthen energy reliability and affordability for American families, businesses, and our allies.” The United States is the largest global producer and exporter of natural gas. There are currently eight large-scale LNG projects operating in the United States, and several additional projects are expanding or under construction. Under President Trump’s leadership, the Department has approved applications from projects authorized to export more than 17.7 Bcf/d of natural gas as LNG, an increase of approximately 25% from 2024 levels. So far in 2025, over 8 Bcf/d of U.S. LNG export capacity, including from Woodside Louisiana LNG, has reached a final investment decision and begun construction.

Read More »

Russia Oil Prices Hit Lowest Since War Began

Russian crude prices are at their lowest since the war in Ukraine began, as sanctions deepen the discounts the nation’s oil industry needs to offer and benchmark futures tumble. On average, Russian oil exporters are receiving just over $40 a barrel for cargoes shipped from the Baltic, Black Sea, and the eastern port of Kozmino, according to data from Argus Media. That’s down 28% over the last three months, with recent restrictions targeting oil giants Rosneft PJSC and Lukoil PJSC widening the markdowns. Mounting Western pressure on Russia’s oil trade has made it increasingly difficult to sell and deliver the barrels, with measures also targeting refiners at top buyers like India. In addition, global benchmark oil prices are sliding, trading below $60 a barrel on Tuesday for the first time since May. The revenues Russia receives for its oil — which, combined with gas, account for about a quarter of the nation’s state budget — are critical to funding its war. Lower income strains the finances of the nation’s oil companies and reduces the amount of tax they pay into the Kremlin’s coffers. The Trump administration has engaged in a diplomatic flurry geared toward ending the conflict over the last few weeks. President Vladimir Putin acknowledged that Russian economic growth was slowing down on a recent visit to India. Indian officials said they expect imports from Russia to be about 800,000 barrels a day this month, sharply lower than in November, though still a significant volume of supplies. A Chinese refiner recently bought a shipment of crude from Russia’s eastern ports at the steepest discount this year. The two Asian nations are the main buyers of Russian oil.

Read More »

EIA Again Raises WTI Price Forecast for Both 2025 and 2026

In its latest short-term energy outlook (STEO), released on December 9, the U.S. Energy Information Administration (EIA) again raised its West Texas Intermediate (WTI) price forecast for both 2025 and 2026. According to this STEO, the EIA now expects the WTI spot price to average $65.32 per barrel in 2025 and $51.42 per barrel in 2026. The December STEO marks the latest in a line of STEOs raising the average WTI spot price forecast for both years. In its previous November STEO, the EIA projected that the WTI spot price would average $65.15 per barrel in 2025 and $51.26 per barrel in 2026. The EIA’s October STEO projected that the WTI spot price would average $65.00 per barrel this year and $48.50 per barrel next year, and its September STEO forecast that the WTI spot price would average $64.16 per barrel in 2025 and $47.77 per barrel in 2026. Although the September STEO included an increase in the average WTI spot price forecast for 2025 compared to the previous August STEO, its average WTI spot price forecast for 2026 was unchanged. A quarterly breakdown included in the EIA’s December STEO projected that the WTI spot price will average $59.31 per barrel in the fourth quarter of 2025, $50.93 per barrel in the first quarter of 2026, $50.68 per barrel in the second quarter, and $52.00 per barrel across the third and fourth quarters of next year. The WTI spot price averaged $71.85 per barrel in the first quarter, $64.63 per barrel in the second quarter, and $65.78 per barrel in the third quarter, the December STEO showed. It highlighted that the WTI spot price averaged $76.60 per barrel overall in 2024. In a J.P. Morgan report sent to Rigzone by …
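As a quick sanity check on the December STEO figures quoted above, the four quarterly 2026 projections should average out near the $51.42 annual forecast; the small residual difference presumably reflects day-weighted rather than simple averaging:

```python
# Consistency check of the quarterly 2026 WTI forecasts quoted above against
# the stated $51.42/bbl annual average (Q3 and Q4 are both $52.00).
q_2026 = [50.93, 50.68, 52.00, 52.00]  # Q1-Q4 forecasts, $/bbl
annual = sum(q_2026) / len(q_2026)
print(f"simple average of quarters: ${annual:.2f}/bbl vs. stated $51.42/bbl")
# -> simple average of quarters: $51.40/bbl vs. stated $51.42/bbl
```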

Read More »

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.  But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.  Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.  More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google.

Not everyone is excited about the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene.

I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources.

On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages.

It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.” But this isn’t just about publishers (or my own self-interest).

People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see.

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.

Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?

In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good.

Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed.

And so in 1994 Jerry Yang and David Filo created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was. But the web continued to grow and sprawl and expand, every day bringing more information online. Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.

But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad.
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing.

Google CEO Sundar Pichai describes AI Overviews as “one of the most positive changes we’ve done to search in a long, long time.”

For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)
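That citation-counting idea is easy to sketch in code. Below is a toy power-iteration version of PageRank-style ranking over an invented four-page link graph; it illustrates the principle described above, not Google’s production algorithm.

```python
import numpy as np

# Toy link graph, invented purely for illustration: page -> pages it links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, damping = 4, 0.85

# Column-stochastic hop matrix: M[j, i] = probability of following a link from i to j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

rank = np.full(n, 1.0 / n)   # start with equal rank everywhere
for _ in range(100):         # power iteration; converges quickly on a graph this small
    rank = (1 - damping) / n + damping * M @ rank

print(rank.round(3))         # page 2, the most-linked-to page, comes out on top
```

The damping factor models a searcher who occasionally jumps to a random page; without it, rank can get trapped in cycles of pages that only link to each other.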
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search.  “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly.  It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be.  But once you’ve used AI Overviews a bit, you realize they are different.  Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web. 
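To make that contrast concrete, here is a schematic sketch of how an overview-style answer gets assembled: retrieve passages from a web index, then have a language model write fresh prose over them. Everything here is a placeholder for illustration (`search_index`, `llm`, the prompt wording); it shows the general retrieval-augmented-generation pattern, not Google’s actual pipeline.

```python
# Schematic retrieval-augmented generation: the answer is composed anew by a
# model on each request, rather than quoted from a fixed database entry.
def ai_overview(query: str, search_index, llm) -> str:
    passages = search_index.top_k(query, k=5)  # hypothetical retriever over a web index
    sources = "\n".join(f"[{i}] {p.text}" for i, p in enumerate(passages))
    prompt = (
        "Using only the numbered sources below, write a short answer "
        "and cite the sources you used.\n"
        f"{sources}\n\nQuestion: {query}"
    )
    # A different sample, day, or index state can all yield different prose;
    # there is no single stored answer to look up, audit, or correct.
    return llm.generate(prompt)
```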
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.”

The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.)

“[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.”

That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language.
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video.

“We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai.

There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous.

In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from.

Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out?

I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. “When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.”

In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too.

“Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak.
“And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.

Google: The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries. What it’s good at: Google’s AI Overviews are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity: Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries. What it’s good at: Perplexity is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.

ChatGPT: While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. What it’s good at: Thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.

When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web.

“You’re always dealing in percentages. What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.”

There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful.

“If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.”

But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way.
What reason will people have to click through to the original source, if all the information they seek is right there in the search result?

Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend.

“If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says.

Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. “The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.”

Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”

“I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.”

He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew?

A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it.

According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says. OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting.
Rather, it says, web search is mostly a means to get more current information than the data in its training models, which tend to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more.

“I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer at OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.”

Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience.

Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does.

Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.)

But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.”

When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation.
The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them. “And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.” Indeed!

The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when it comes to its own answers.

It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.”

We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge.

The search results we see from generative AI are best understood as a waypoint rather than a destination. What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities.

“A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.”

This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets. Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.

“It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.”

And the ways these things will be able to deliver answers are evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices.
“Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.”

“We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.” This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information.

In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses.

But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today.

These are the kinds of things that start to happen when you put the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not.

That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on.
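As a coda to the agentic future sketched above: the loop Weil and Pichai describe, in which the model rather than the user decides when to search, book, or order, reduces to a few lines of control flow. This is a bare-bones illustrative sketch; `llm.next_action` and the tool functions are hypothetical stand-ins, not any vendor’s real API.

```python
# Minimal agent loop: the model chooses which tool to call next, observes the
# result, and keeps going until it decides it is done.
def run_agent(goal: str, llm, tools: dict):
    history = [f"Goal: {goal}"]
    while True:
        action = llm.next_action("\n".join(history), available=list(tools))
        if action.name == "finish":
            return action.argument                    # final answer for the user
        result = tools[action.name](action.argument)  # e.g. web_search, book_flight
        history.append(f"{action.name}({action.argument!r}) -> {result!r}")
```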

Read More »

Subsea7 Scores Various Contracts Globally

Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea. The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said. Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye.”

North Sea Project

Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project offshore site is located in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces, with offshore works expected to begin in 2026, according to a separate news release.

Read More »

Driving into the future

Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more.

We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.)

But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen.

Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident when a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes.
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake.  What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story. 
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find a market fit, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa.  Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Read More »

Oil Holds at Highest Levels Since October

Crude oil futures slightly retreated but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”. “Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added. Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added. “With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state. Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither has responded to Rigzone’s request yet. In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

Read More »

What to expect from NaaS in 2025

Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up-to-date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says.

2. Frameworks help drive adoption

Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision. The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps.

3. The rise of campus/LAN NaaS

Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market. Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will

Read More »

UK battery storage industry ‘back on track’

UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December. The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to the sector as part of clean power aims by 2030. It also revealed that revenues exceeding £60,000 per MW for the electricity its facilities provided in the second half of 2024 meant it would meet or even exceed revenue targets. This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217m profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing and a plan to “re-instate dividend payments”. In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was fully built and in the final stages of the NESO compliance process, which was expected to complete in February 2025. Fund chair John Leggate welcomed “solid progress” in the company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised. “Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.” Guest, who in addition to managing the fund is also

Read More »

The Download: expanded carrier screening, and how Southeast Asia plans to get to space

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Expanded carrier screening: Is it worth it?

Carrier screening tests would-be parents for hidden genetic mutations that might affect their children. It initially involved testing for specific genes in at-risk populations. Expanded carrier screening takes things further, giving would-be parents an option to test for a wide array of diseases in prospective parents and egg and sperm donors.
The companies offering these screens “started out with 100 genes, and now some of them go up to 2,000,” Sara Levene, genetics counsellor at Guided Genetics, said at a meeting I attended this week. “It’s becoming a bit of an arms race amongst labs, to be honest.” But expanded carrier screening comes with downsides. And it isn’t for everyone. Read the full story.
—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Southeast Asia seeks its place in space

It’s a scorching October day in Bangkok and I’m wandering through the exhibits at the Thai Space Expo, held in one of the city’s busiest shopping malls, when I do a double take. Amid the flashy space suits and model rockets on display, there’s a plain-looking package of Thai basil chicken. I’m told the same kind of vacuum-sealed package has just been launched to the International Space Station. It’s an unexpected sight, one that reflects the growing excitement within the Southeast Asian space sector. And while there is some uncertainty about how exactly the region’s space sector may evolve, there is plenty of optimism, too. Read the full story.

—Jonathan O’Callaghan

This story is from the next print issue of MIT Technology Review magazine. If you haven’t already, subscribe now to receive future issues once they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Disney just signed a major deal with OpenAI
Meaning you’ll soon be able to create Sora clips starring 200 Marvel, Pixar and Star Wars characters. (Hollywood Reporter $)
+ Disney used to be openly skeptical of AI. What changed? (WSJ $)
+ It’s not feeling quite so friendly towards Google, however. (Ars Technica)
+ Expect a load of AI slop making its way to Disney Plus. (The Verge)

2 Donald Trump has blocked US states from enforcing their own AI rules
But technically, only Congress has the power to override state laws. (NYT $)
+ A new task force will seek out states with “inconsistent” AI rules. (Engadget)
+ The move is particularly bad news for California. (The Markup)

3 Reddit is challenging Australia’s social media ban for teens
It’s arguing that the ban infringes on their freedom of political communication. (Bloomberg $)
+ We’re learning more about the mysterious machinations of the teenage brain. (Vox)

4 ChatGPT’s “adult mode” is due to launch early next year
But OpenAI admits it needs to improve its age estimation tech first. (The Verge)
+ It’s pretty easy to get DeepSeek to talk dirty. (MIT Technology Review)

5 The death of Running Tide’s carbon removal dream
The company’s demise is a wake-up call to others dabbling in experimental tech. (Wired $)
+ We first wrote about Running Tide’s issues back in 2022. (MIT Technology Review)
+ What’s next for carbon removal? (MIT Technology Review)

6 That dirty-talking AI teddy bear wasn’t a one-off
It turns out that a wide range of LLM-powered toys aren’t suitable for children. (NBC News)
+ AI toys are all the rage in China—and now they’re appearing on shelves in the US too. (MIT Technology Review)

7 These are the cheapest places to create a fake online account
For a few cents, scammers can easily set up bots. (FT $)

8 How professors are attempting to AI-proof exams
ChatGPT won’t help you cut corners to ace an oral examination. (WP $)

9 Can a font be woke?
Marco Rubio seems to think so. (The Atlantic $)

10 Next year is all about maximalist circus decor 🎪
That’s according to Pinterest’s trend predictions for 2026. (The Guardian)
Quote of the day
“Trump is delivering exactly what his billionaire benefactors demanded—all at the expense of our kids, our communities, our workers, and our planet.”

—Senator Ed Markey criticizes Donald Trump’s decision to sign an order cracking down on US states’ ability to self-regulate AI, the Wall Street Journal reports.

One more thing

Taiwan’s “silicon shield” could be weakening

Taiwanese politics increasingly revolves around one crucial question: Will China invade? China’s ruling party has wanted to seize Taiwan for more than half a century. But in recent years, China’s leader, Xi Jinping, has placed greater emphasis on the idea of “taking back” the island (which the Chinese Communist Party, or CCP, has never controlled).

Many in Taiwan and elsewhere think one major deterrent has to do with the island’s critical role in semiconductor manufacturing. Taiwan produces the majority of the world’s semiconductors and more than 90% of the most advanced chips needed for AI applications.

But now some Taiwan specialists and some of the island’s citizens are worried that this “silicon shield,” if it ever existed, is cracking. Read the full story.

—Johanna M. Costigan
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Reasons to be cheerful: people are actually nicer than we think they are.
+ This year’s Krampus Run in Whitby—the Yorkshire town that inspired Bram Stoker’s Dracula—looks delightfully spooky.
+ How to find the magic in that most mundane of locations: the airport.
+ The happiest of birthdays to Dionne Warwick, who turns 85 today.

Read More »

Southeast Asia seeks its place in space

Thai Space Expo, October 16-18, 2025, Bangkok, Thailand

It’s a scorching October day in Bangkok and I’m wandering through the exhibits at the Thai Space Expo, held in one of the city’s busiest shopping malls, when I do a double take. Amid the flashy space suits and model rockets on display, there’s a plain-looking package of Thai basil chicken. I’m told the same kind of vacuum-sealed package has just been launched to the International Space Station. “This is real chicken that we sent to space,” says a spokesperson for the business behind the stunt, Charoen Pokphand Foods, the biggest food company in Thailand.

It’s an unexpected sight, one that reflects the growing excitement within the Southeast Asian space sector. At the expo, held among designer shops and street-food stalls, enthusiastic attendees have converged from emerging space nations such as Vietnam, Malaysia, Singapore, and of course Thailand to showcase Southeast Asia’s fledgling space industry. While there is some uncertainty about how exactly the region’s space sector may evolve, there is plenty of optimism, too. “Southeast Asia is perfectly positioned to take leadership as a space hub,” says Candace Johnson, a partner in Seraphim Space, a UK investment firm that operates in Singapore. “There are a lot of opportunities.”
A sample package of pad krapow was also on display.

For example, Thailand may build a spaceport to launch rockets in the next few years, the country’s Geo-Informatics and Space Technology Development Agency announced the day before the expo started. “We don’t have a spaceport in Southeast Asia,” says Atipat Wattanuntachai, acting head of the space economy advancement division at the agency. “We saw a gap.” Because Thailand is so close to the equator, those rockets would get an additional boost from Earth’s rotation. All kinds of companies here are exploring how they might tap into the global space economy. VegaCosmos, a startup based in Hanoi, Vietnam, is looking at ways to use satellite data for urban planning. The Electricity Generating Authority of Thailand is monitoring rainstorms from space to predict landslides. And the startup Spacemap, from Seoul, South Korea, is developing a new tool to better track satellites in orbit, which the US Space Force has invested in.
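How much of a boost? A back-of-the-envelope sketch, assuming the standard figures for Earth’s rotation; the launch-site latitudes below are approximate and chosen only for comparison.

```python
import math

def eastward_boost_m_s(latitude_deg: float) -> float:
    """Eastward surface speed contributed by Earth's rotation at a given latitude."""
    equatorial_radius_m = 6_378_137   # WGS-84 equatorial radius
    sidereal_day_s = 86_164           # time for one full rotation of Earth
    equatorial_speed = 2 * math.pi * equatorial_radius_m / sidereal_day_s  # ~465 m/s
    return equatorial_speed * math.cos(math.radians(latitude_deg))

# Approximate latitudes, for illustration only.
for site, lat in [("Equator", 0.0), ("Bangkok (~14 N)", 14.0), ("Cape Canaveral (~28.5 N)", 28.5)]:
    print(f"{site}: {eastward_boost_m_s(lat):.0f} m/s eastward, free of charge")
```

An eastward launch from Thailand’s latitude starts with roughly 450 m/s of the nearly 7,800 m/s needed for low Earth orbit, a few percent of the total saved before the engines light.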
It’s the space chicken that caught my eye, though, perhaps because it reflects the juxtaposition of tradition and modernity seen across Bangkok, a city of ancient temples nestled next to glittering skyscrapers. In June, astronauts on the space station were treated to this popular dish, known as pad krapow. It’s more commonly served up by street vendors, but this time it was delivered on a private mission operated by the US-based company Axiom Space. Charoen Pokphand is now using the stunt to say its chicken is good enough for NASA (sadly, I wasn’t able to taste it to weigh in). Other Southeast Asian industries could also lend expertise to future space missions. Johnson says the region could leverage its manufacturing prowess to develop better semiconductors for satellites, for example, or break into the in-space manufacturing market. I left the expo on a Thai longboat down the Chao Phraya River that weaves through Bangkok, with visions of astronauts tucking into some pad krapow in my head and imagining what might come next. Jonathan O’Callaghan is a freelance space journalist based in Bangkok who covers commercial spaceflight, astrophysics, and space exploration.

Read More »

Expanded carrier screening: Is it worth it?

This week I’ve been thinking about babies. Healthy ones. Perfect ones. As you may have read last week, my colleague Antonio Regalado came face to face with a marketing campaign in the New York subway asking people to “have your best baby.” The company behind that campaign, Nucleus Genomics, says it offers customers a way to select embryos for a range of traits, including height and IQ. It’s an extreme proposition, but it does seem to be growing in popularity—potentially even in the UK, where it’s illegal. The other end of the screening spectrum is transforming too. Carrier screening, which tests would-be parents for hidden genetic mutations that might affect their children, initially involved testing for specific genes in at-risk populations. Now, it’s open to almost everyone who can afford it. Companies will offer to test for hundreds of genes to help people make informed decisions when they try to become parents. But expanded carrier screening comes with downsides. And it isn’t for everyone.
That’s what I found earlier this week when I attended the Progress Educational Trust’s annual conference in London. First, a bit of background. Our cells carry 23 pairs of chromosomes, each with thousands of genes. The same gene—say, one that codes for eye color—can come in different forms, or alleles. If the allele is dominant, you only need one copy to express that trait. That’s the case for the allele responsible for brown eyes. 
If the allele is recessive, the trait doesn’t show up unless you have two copies. This is the case with the allele responsible for blue eyes, for example. Things get more serious when we consider genes that can affect a person’s risk of disease. Having a single recessive disease-causing gene typically won’t cause you any problems. But a genetic disease could show up in children who inherit the same recessive gene from both parents. There’s a 25% chance that two “carriers” will have an affected child. And those cases can come as a shock to the parents, who tend to have no symptoms and no family history of disease.

This can be especially problematic in communities with high rates of those alleles. Consider Tay-Sachs disease—a rare and fatal neurodegenerative disorder caused by a recessive genetic mutation. Around one in 25 members of the Ashkenazi Jewish population is a healthy carrier for Tay-Sachs. Screening would-be parents for those recessive genes can be helpful. Carrier screening efforts in the Jewish community, which have been running since the 1970s, have massively reduced cases of Tay-Sachs.

Expanded carrier screening takes things further. Instead of screening for certain high-risk alleles in at-risk populations, there’s an option to test for a wide array of diseases in prospective parents and egg and sperm donors. The companies offering these screens “started out with 100 genes, and now some of them go up to 2,000,” Sara Levene, genetics counsellor at Guided Genetics, said at the meeting. “It’s becoming a bit of an arms race amongst labs, to be honest.”

There are benefits to expanded carrier screening. In most cases, the results are reassuring. And if something is flagged, prospective parents have options; they can often opt for additional testing to get more information about a particular pregnancy, for example, or choose to use other donor eggs or sperm to get pregnant.

But there are also downsides. For a start, the tests can’t entirely rule out the risk of genetic disease. Earlier this week, the BBC reported news of a sperm donor who had unwittingly passed on to at least 197 children in Europe a genetic mutation that dramatically increased the risk of cancer. Some of those children have already died. It’s a tragic case. That donor had passed screening checks. The (dominant) mutation appears to have occurred in his testes, affecting around 20% of his sperm. It wouldn’t have shown up in a screen for recessive alleles, or even a blood test. Even recessive diseases can be influenced by many genes, some of which won’t be included in the screen. And the screens don’t account for other factors that could influence a person’s risk of disease, such as epigenetics, microbiome, or even lifestyle.
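To put numbers on the arithmetic above, here is the textbook calculation using the figures quoted in this piece: a roughly 1-in-25 carrier rate for Tay-Sachs among Ashkenazi Jews, and the 25% Mendelian risk when both partners are carriers. It is an illustration of the population math only, not a clinical risk model.

```python
# Risk of an affected child for a randomly matched, unscreened couple,
# using the Tay-Sachs figures quoted above.
carrier_freq = 1 / 25                      # chance one partner is a healthy carrier
both_carriers = carrier_freq ** 2          # chance both partners are carriers
risk_if_both = 1 / 4                       # Mendelian odds for two carriers
risk_per_pregnancy = both_carriers * risk_if_both

print(f"Both partners carriers: 1 in {round(1 / both_carriers)}")           # 1 in 625
print(f"Affected child, unscreened: 1 in {round(1 / risk_per_pregnancy)}")  # 1 in 2500
```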

“There’s always a 3% to 4% chance [of having] a child with a medical issue regardless of the screening performed,” said Jackson Kirkman-Brown, professor of reproductive biology at the University of Birmingham, at the meeting. The tests can also cause stress. As soon as a clinician even mentions expanded carrier screening, it adds to the mental load of the patient, said Kirkman-Brown: “We’re saying this is another piece of information you need to worry about.” People can also feel pressured to undergo expanded carrier screening even when they are ambivalent about it, said Heidi Mertes, a medical ethicist at Ghent University. “Once the technology is there, people feel like if they don’t take this opportunity up, then they are kind of doing something wrong or missing out,” she said. My takeaway from the presentations was that while expanded carrier screening can be useful, especially for people from populations with known genetic risks, it won’t be for everyone. I also worry that, as with the genetic tests offered by Nucleus, its availability gives the impression that it is possible to have a “perfect” baby—even if that only means “free from disease.” The truth is that there’s a lot about reproduction that we can’t control. The decision to undergo expanded carrier screening is a personal choice. But as Mertes noted at the meeting: “Just because you can doesn’t mean you should.” This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

Read More »

The Download: solar geoengineering’s future, and OpenAI is being sued

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Solar geoengineering startups are getting serious

Solar geoengineering aims to manipulate the climate by bouncing sunlight back into space. In theory, it could ease global warming. But as interest in the idea grows, so do concerns about potential consequences.

A startup called Stardust Solutions recently raised a $60 million funding round, the largest known to date for a geoengineering startup. My colleague James Temple has a new story out about the company, and how its emergence is making some researchers nervous.

So far, the field has been limited to debates, proposed academic research, and—sure—a few fringe actors to keep an eye on. Now things are getting more serious. So what does it mean for geoengineering, and for the climate? Read the full story.

—Casey Crownhart
This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here. If you’re interested in reading more about solar geoengineering, check out:
+ Why the for-profit race into solar geoengineering is bad for science and public trust. Read the full story.
+ Why we need more research—including outdoor experiments—to make better-informed decisions about such climate interventions.
+ The hard lessons of Harvard’s failed geoengineering experiment, which was officially terminated last year. Read the full story.
+ How this London nonprofit became one of the biggest backers of geoengineering research.
+ The technology could alter the entire planet. These groups want every nation to have a say.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI is being sued for wrongful death
By the estate of a woman killed by her son after he engaged in delusion-filled conversations with ChatGPT. (WSJ $)
+ The chatbot appeared to validate Stein-Erik Soelberg’s conspiratorial ideas. (WP $)
+ It’s the latest in a string of wrongful death legal actions filed against chatbot makers. (ABC News)

2 ICE is tracking pregnant immigrants through specifically-developed smartwatches
They’re unable to take the devices off, even during labor. (The Guardian)
+ Pregnant and postpartum women say they’ve been detained in solitary confinement. (Slate $)
+ Another effort to track ICE raids has been taken offline. (MIT Technology Review)

3 Meta’s new AI hires aren’t making friends with the rest of the company
Tensions are rife between the AGI team and other divisions. (NYT $)
+ Mark Zuckerberg is keen to make money off the company’s AI ambitions. (Bloomberg $)
+ Meanwhile, what’s life like for the remaining Scale AI team? (Insider $)

4 Google DeepMind is building its first materials science lab in the UK
It’ll focus on developing new materials to build superconductors and solar cells. (FT $)

5 The new space race is to build orbital data centers
And Blue Origin is winning, apparently. (WSJ $)
+ Plenty of companies are jostling for their slice of the pie. (The Verge)
+ Should we be moving data centers to space? (MIT Technology Review)

6 Inside the quest to find out what causes Parkinson’s
A growing body of work suggests it may not be purely genetic after all. (Wired $)

7 Are you in TikTok’s cat niche?
If so, you’re likely to be in these other niches too. (WP $)

8 Why do our brains get tired? 🧠💤
Researchers are trying to get to the bottom of it. (Nature $)

9 Microsoft’s boss has built his own cricket app 🏏
Satya Nadella can’t get enough of the sound of leather on willow. (Bloomberg $)

10 How much vibe coding is too much vibe coding?
One journalist’s journey into the heart of darkness. (Rest of World)
+ What is vibe coding, exactly? (MIT Technology Review)

Quote of the day

“I feel so much pain seeing his sad face…I hope for a New Year’s miracle.”

—A child in Russia sends a message to the Kremlin-aligned Safe Internet League explaining the impact of the country’s decision to block access to the wildly popular gaming platform Roblox on their brother, the Washington Post reports.

One more thing
Why it’s so hard to stop tech-facilitated abuse

After Gioia had her first child with her then husband, he installed baby monitors throughout their home—to “watch what we were doing,” she says, while he went to work. She’d turn them off; he’d get angry. By the time their third child turned seven, Gioia and her husband had divorced, but he still found ways to monitor her behavior. One Christmas, he gave their youngest a smartwatch. Gioia showed it to a tech-savvy friend, who found that the watch had a tracking feature turned on. It could be turned off only by the watch’s owner—her ex.
Gioia is far from alone. In fact, tech-facilitated abuse now occurs in most cases of intimate partner violence—and we’re doing shockingly little to prevent it. Read the full story.

—Jessica Klein

Read More »

Solar geoengineering startups are getting serious

Solar geoengineering aims to manipulate the climate by bouncing sunlight back into space. In theory, it could ease global warming. But as interest in the idea grows, so do concerns about potential consequences. A startup called Stardust Solutions recently raised a $60 million funding round, the largest known to date for a geoengineering startup. My colleague James Temple has a new story out about the company, and how its emergence is making some researchers nervous. So far, the field has been limited to debates, proposed academic research, and—sure—a few fringe actors to keep an eye on. Now things are getting more serious. What does it mean for geoengineering, and for the climate? Researchers have considered the possibility of addressing planetary warming this way for decades. We already know that volcanic eruptions, which spew sulfur dioxide into the atmosphere, can reduce temperatures. The thought is that we could mimic that natural process by spraying particles up there ourselves.
The prospect is a controversial one, to put it lightly. Many have concerns about unintended consequences and uneven benefits. Even public research led by top institutions has faced barriers—one famous Harvard research program was officially canceled last year after years of debate. One of the difficulties of geoengineering is that in theory a single entity, like a startup company, could make decisions that have a widespread effect on the planet. And in the last few years, we’ve seen more interest in geoengineering from the private sector. 
Three years ago, James broke the story that Make Sunsets, a California-based company, was already releasing particles into the atmosphere in an effort to tweak the climate. The company’s CEO Luke Iseman went to Baja California in Mexico, stuck some sulfur dioxide into a weather balloon, and sent it skyward. The amount of material was tiny, and it’s not clear that it even made it into the right part of the atmosphere to reflect any sunlight. But fears that this group or others could go rogue and do their own geoengineering led to widespread backlash. Mexico announced plans to restrict geoengineering experiments in the country a few weeks after that news broke. You can still buy cooling credits from Make Sunsets, and the company was just granted a patent for its system. But the startup is seen as something of a fringe actor.

Enter Stardust Solutions. The company has been working under the radar for a few years, but it has started talking about its work more publicly this year. In October, it announced a significant funding round, led by some top names in climate investing. “Stardust is serious, and now it’s raised serious money from serious people,” as James puts it in his new story.

That’s making some experts nervous. Even those who believe we should be researching geoengineering are concerned about what it means for private companies to do so. “Adding business interests, profit motives, and rich investors into this situation just creates more cause for concern, complicating the ability of responsible scientists and engineers to carry out the work needed to advance our understanding,” write David Keith and Daniele Visioni, two leading figures in geoengineering research, in a recent opinion piece for MIT Technology Review. Stardust insists that it won’t move forward with any geoengineering until and unless it’s commissioned to do so by governments and there are rules and bodies in place to govern use of the technology.

But there’s no telling how financial pressure might change that down the road. And we’re already seeing some of the challenges faced by a private company in this space: the need to keep trade secrets. Stardust is currently not sharing information about the particles it intends to release into the sky, though it says it plans to do so once it secures a patent, which could happen as soon as next year. The company argues that its proprietary particles will be safe, cheap to manufacture, and easier to track than the already abundant sulfur dioxide. But at this point, there’s no way for external experts to evaluate those claims. As Keith and Visioni put it: “Research won’t be useful unless it’s trusted, and trust depends on transparency.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Read More »

Strengthening our partnership with the UK government to support prosperity and security in the AI era

AI presents an opportunity to build a more prosperous and secure world. The UK has already laid a strong foundation to seize this moment and is uniquely positioned to translate AI innovation into public benefit. That’s why we are excited to deepen our collaboration with the UK government to accelerate this work and offer a blueprint for other countries. Together we will focus on using AI to speed up progress in science and education, modernize public services, and advance national security and resilience.

Accelerating access to frontier AI in key sectors: Science & Education

Our partnership will center on providing access to frontier AI in two areas foundational to the UK’s long-term success: scientific discovery and education. The UK has a rich history of applying new technologies to drive scientific progress, from Hooke’s microscope to Faraday’s electrical experiments. We aim to build on this heritage and empower the next generation of scientists with AI tools that can unlock breakthroughs, transform the economy, and solve some of the major challenges facing humanity. We will provide priority access to our “AI for Science” models to UK scientists, including:

AlphaEvolve – a Gemini-powered coding agent for designing advanced algorithms
AlphaGenome – an AI model to help scientists better understand our DNA
AI co-scientist – a multi-agent AI system that acts as a virtual scientific collaborator
WeatherNext – a family of state-of-the-art weather forecasting models

Like the microscope or telescope, these AI tools are designed to enhance scientific capacity, enabling researchers to tackle problems of unprecedented complexity and scale. For example, AlphaFold – our AI system for predicting protein structures – has already enabled almost 190,000 researchers in the UK alone to deepen their understanding of areas such as crop resilience, antimicrobial resistance and other critical biological challenges.

Establishing Google DeepMind’s first automated science laboratory in the UK

To help turbocharge scientific discovery, we will establish Google DeepMind’s first automated laboratory in the UK in 2026, specifically focused on materials science research. A multidisciplinary team of researchers will oversee research in the lab, which will be built from the ground up to be fully integrated with Gemini. By directing world-class robotics to synthesize and characterize hundreds of materials per day, the team intends to significantly shorten the timeline for identifying transformative new materials.

Discovering new materials is one of the most important pursuits in science, offering the potential to reduce costs and enable entirely new technologies. For example, superconductors that operate at ambient temperature and pressure could allow for low-cost medical imaging and reduce power loss in electrical grids. Other novel materials could help us tackle critical energy challenges by unlocking advanced batteries, next-generation solar cells and more efficient computer chips.
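To make the "automated lab" idea concrete, here is a minimal sketch of the propose-synthesize-characterize-update loop such a facility runs. The candidate list and the characterize() stub are purely hypothetical stand-ins; this illustrates the pattern of closed-loop screening, not DeepMind's actual system.

```python
import random

# Hypothetical closed-loop screening sketch: the lab proposes a candidate
# material, "synthesizes and characterizes" it (stubbed below as a noisy
# scoring function), and uses the result to choose the next candidate.
CANDIDATES = [f"sample-{i:03d}" for i in range(200)]  # hypothetical compositions

def characterize(sample: str) -> float:
    """Stub for robotic synthesis + measurement; returns a noisy figure of merit."""
    true_value = random.Random(sample).random()   # fixed "ground truth" per sample
    return true_value + random.gauss(0, 0.05)     # measurement noise

def run_campaign(budget: int = 25, explore_rate: float = 0.3) -> dict:
    results: dict[str, float] = {}
    untested = set(CANDIDATES)
    for _ in range(budget):
        if not results or random.random() < explore_rate:
            pick = random.choice(sorted(untested))  # explore a new region
        else:
            # exploit: test the untested candidate "nearest" the current best
            best = max(results, key=results.get)
            pick = min(untested, key=lambda s: abs(int(s[-3:]) - int(best[-3:])))
        results[pick] = characterize(pick)
        untested.discard(pick)
    return results

if __name__ == "__main__":
    results = run_campaign()
    best = max(results, key=results.get)
    print(f"best of {len(results)} runs: {best} -> {results[best]:.3f}")
```

The value of a robotic lab is that the characterize() step, which dominates wall-clock time in human-run labs, shrinks to minutes, so the loop can iterate hundreds of times per day.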

Read More »

What new legal challenges mean for the future of US offshore wind

For offshore wind power in the US, the new year is bringing new legal battles. On December 22, the Trump administration announced it would pause the leases of five wind farms currently under construction off the US East Coast. Developers were ordered to stop work immediately. The cited reason? National security, specifically concerns that turbines can cause radar interference. But that’s a known issue, and developers have worked with the government to deal with it for years. Companies have been quick to file lawsuits, and the court battles could begin as soon as this week. Here’s what the latest kerfuffle might mean for the struggling offshore wind industry in the US.
This pause affects $25 billion in investment in five wind farms: Vineyard Wind 1 off Massachusetts, Revolution Wind off Rhode Island, Sunrise Wind and Empire Wind off New York, and Coastal Virginia Offshore Wind off Virginia. Together, those projects had been expected to create 10,000 jobs and power more than 2.5 million homes and businesses. In a statement announcing the move, the Department of the Interior said that “recently completed classified reports” revealed national security risks, and that the pause would give the government time to work through concerns with developers. The statement specifically says that turbines can create radar interference (more on the technical details here in a moment).
Three of the companies involved have already filed lawsuits, and they’re seeking preliminary injunctions that would allow construction to continue. Orsted and Equinor (the developers for Revolution Wind and Empire Wind, respectively) told the New York Times that their projects went through lengthy federal reviews, which did address concerns about national security. This is just the latest salvo from the Trump administration against offshore wind. On Trump’s first day in office, he signed an executive order stopping all new lease approvals for offshore wind farms. (That order was struck down by a judge in December.) The administration previously ordered Revolution Wind to stop work last year, also citing national security concerns. A federal judge lifted the stop-work order weeks later, after the developer showed that the financial stakes were high, and that government agencies had previously found no national security issues with the project. There are real challenges that wind farms introduce for radar systems, which are used in everything from air traffic control to weather forecasting to national defense operations. A wind turbine’s spinning can create complex signatures on radar, resulting in so-called clutter. Previous government reports, including one 2024 report from the Department of Energy and a 2025 report from the Government Accountability Office (an independent government watchdog), have pointed out this issue in the past. “To date, no mitigation technology has been able to fully restore the technical performance of impacted radars,” as the DOE report puts it. However, there are techniques that can help, including software that acts to remove the signatures of wind turbines. (Think of this as similar to how noise-canceling headphones work, but more complicated, as one expert told TechCrunch.) But the most widespread and helpful tactic, according to the DOE report, is collaboration between developers and the government. By working together to site and design wind farms strategically, the groups can ensure that the projects don’t interfere with government or military operations. The 2025 GAO report found that government officials, researchers, and offshore wind companies were collaborating effectively, and any concerns could be raised and addressed in the permitting process. This and other challenges threaten an industry that could be a major boon for the grid. On the East Coast where these projects are located, and in New England specifically, winter can bring tight supplies of fossil fuels and spiking prices because of high demand. It just so happens that offshore winds blow strongest in the winter, so new projects, including the five wrapped up in this fight, could be a major help during the grid’s greatest time of need.
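The noise-canceling analogy above maps onto conventional signal processing: rotating blades impose a strong periodic component on the radar return, and a filter tuned to that frequency can suppress the clutter while leaving a target's Doppler tone intact. A toy sketch with synthetic data and invented frequencies follows; real mitigation systems handle time-varying, broadband clutter and are far more sophisticated.

```python
import numpy as np
from scipy import signal

# Toy clutter suppression: a synthetic radar return containing a target
# Doppler tone plus a periodic "blade flash" from a wind turbine.
# All frequencies and amplitudes are invented for the demo.
fs = 1_000                                        # sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)
target = 0.5 * np.sin(2 * np.pi * 133 * t)        # aircraft tone (hypothetical)
blade_flash = 2.0 * np.sin(2 * np.pi * 20 * t)    # turbine clutter (hypothetical)
rx = target + blade_flash + 0.1 * np.random.default_rng(0).normal(size=t.size)

# Notch filter centered on the blade-flash frequency.
b, a = signal.iirnotch(w0=20, Q=30, fs=fs)
cleaned = signal.filtfilt(b, a, rx)

def tone_power(x: np.ndarray, f: float) -> float:
    """Power of x at frequency f via the FFT bin nearest f."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return float(spec[np.argmin(np.abs(freqs - f))])

for f, label in [(20, "turbine clutter"), (133, "target tone")]:
    print(f"{label}: {tone_power(rx, f):.0f} -> {tone_power(cleaned, f):.0f}")
```

The printout shows the turbine line collapsing while the target tone survives, which is the core idea behind software mitigation; the hard part in practice is that real blade signatures smear across many frequencies and change with wind speed and blade angle.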

One 2025 study found that if 3.5 gigawatts’ worth of offshore wind had been operational during the 2024-2025 winter, it would have lowered energy prices by 11%. (That’s the combined capacity of Revolution Wind and Vineyard Wind, two of the paused projects, plus two future projects in the pipeline.) Ratepayers would have saved $400 million.

Before Donald Trump was elected, the energy consultancy BloombergNEF projected that the US would build 39 gigawatts of offshore wind by 2035. Today, that expectation has dropped to just 6 gigawatts. These legal battles could push it lower still.

What’s hardest to wrap my head around is that some of the projects being challenged are nearly finished. The developers of Revolution Wind have installed all the foundations and 58 of 65 turbines, and they say the project is over 87% complete. Empire Wind is over 60% done and is slated to deliver electricity to the grid next year. To hit the pause button so close to the finish line is chilling, not just for current projects but for future offshore wind efforts in the US. Even if these legal battles clear up and more developers can technically enter the queue, why would they want to? Billions of dollars are at stake, and if there’s one word to describe the current state of the offshore wind industry in the US, it’s “unpredictable.”

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

Read More »

Holes in Veeam Backup suite allow remote code execution, creation of malicious backup config files

CVE-2025-59470 (with a CVSS score of 9) allows a Backup or Tape Operator to perform remote code execution (RCE) as the Postgres user by sending a malicious interval or order parameter.
CVE-2025-59469 (with a severity score of 7.2) allows a Backup or Tape Operator to write files as root.
CVE-2025-55125 (with a severity score of 7.2) allows a Backup or Tape Operator to perform RCE as root by creating a malicious backup configuration file.
CVE-2025-59468 (with a severity score of 6.7) allows a Backup Administrator to perform RCE as the Postgres user by sending a malicious password parameter.

The patch to version 13.0.1.1071 will be an “easy installation” that won’t be disruptive, Vanover said. As of Tuesday afternoon, Veeam hadn’t received reports of exploitation, he added.

“The good news is, if a Veeam server is broken, we can create a new server right away – presumably with this patch installed – import the backups and carry on. The core data is completely unimpacted by this,” Vanover said. “The worst type of thing would be the [backup] environment isn’t working right or the Postgres database is messed up on the Veeam server, so jobs might not behave in a way one might expect.” In these cases, admins using the Veeam One monitoring management suite would get an alert if, for example, a job was unable to connect to the backup server or backup jobs were failing.

The four vulnerabilities being patched are less severe than some because an attacker, internal or external, would need valid credentials for the three specific roles, noted Johannes Ullrich, dean of research at the SANS Institute. On the other hand, he added, backup systems like Veeam are targets for attackers, in particular those who inject ransomware, who often attempt to erase backups. “Backup systems
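For admins tracking exposure, the four advisories above drop naturally into a small triage structure. A minimal sketch: the CVE details come from this article, while the prioritization rule is an arbitrary example rather than vendor guidance.

```python
# Minimal patch-triage sketch using the four Veeam CVEs described above.
# The severity threshold and sort order are arbitrary examples, not guidance.
VEEAM_CVES = [
    {"id": "CVE-2025-59470", "cvss": 9.0, "role": "Backup/Tape Operator",
     "impact": "RCE as Postgres user via interval/order parameter"},
    {"id": "CVE-2025-59469", "cvss": 7.2, "role": "Backup/Tape Operator",
     "impact": "file write as root"},
    {"id": "CVE-2025-55125", "cvss": 7.2, "role": "Backup/Tape Operator",
     "impact": "RCE as root via malicious backup configuration file"},
    {"id": "CVE-2025-59468", "cvss": 6.7, "role": "Backup Administrator",
     "impact": "RCE as Postgres user via password parameter"},
]
FIXED_VERSION = "13.0.1.1071"  # version the article says contains the fixes

for cve in sorted(VEEAM_CVES, key=lambda c: c["cvss"], reverse=True):
    urgency = "patch immediately" if cve["cvss"] >= 9 else "patch this cycle"
    print(f'{cve["id"]} ({cve["cvss"]}) [{cve["role"]}] '
          f'{cve["impact"]} -> {urgency}')
print(f"target version: {FIXED_VERSION}")
```

Note that all four require valid credentials for specific operator roles, which is why Ullrich rates them less severe than unauthenticated flaws; the triage output should be read with that in mind.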

Read More »

SoftBank, DigitalBridge, and Stargate: The Next Phase of OpenAI’s Infrastructure Strategy

OpenAI framed Stargate as an AI infrastructure platform: a mechanism to secure long-duration, frontier-scale compute across both training and inference by coordinating capital, land, power, and supply chain with major partners. When OpenAI announced Stargate in January 2025, the headline commitment was explicit: an intention to invest up to $500 billion over four to five years to build new AI infrastructure in the U.S., with $100 billion targeted for near-term deployment.

The strategic backdrop in 2025 was straightforward. OpenAI’s model roadmap—larger models, more agents, expanded multimodality, and rising enterprise workloads—was driving a compute curve increasingly difficult to satisfy through conventional cloud procurement alone. Stargate emerged as a form of “control plane” for:

Capacity ownership and priority access, rather than simply renting GPUs.
Power-first site selection, encompassing grid interconnects, generation, water access, and permitting.
A broader partner ecosystem beyond Microsoft, while still maintaining a working relationship with Microsoft for cloud capacity where appropriate.

2025 Progress: From Launch to Portfolio Buildout

January 2025: Stargate Launches as a National-Scale Initiative

OpenAI publicly launched Project Stargate on Jan. 21, 2025, positioning it as a national-scale AI infrastructure initiative. At this early stage, the work was less about construction and more about establishing governance, aligning partners, and shaping a public narrative in which compute was framed as “industrial policy meets real estate meets energy,” rather than simply an exercise in buying more GPUs.

July 2025: Oracle Partnership Anchors a 4.5-GW Capacity Step

On July 22, 2025, OpenAI announced that Stargate had advanced through a partnership with Oracle to develop 4.5 gigawatts of additional U.S. data center capacity. The scale of the commitment marked a clear transition from conceptual ambition to site- and megawatt-level planning. A figure of this magnitude reshaped the narrative. At 4.5 GW, Stargate forced alignment across transformers, transmission upgrades, switchgear, long-lead cooling

Read More »

Lenovo unveils purpose-built AI inferencing servers

There is also the Lenovo ThinkSystem SR650i, which offers high-density GPU computing power for faster AI inference and is intended for easy installation in existing data centers alongside existing systems. Finally, there is the Lenovo ThinkEdge SE455i for smaller edge locations such as retail outlets, telecom sites, and industrial facilities. Its compact design allows for low-latency AI inference close to where data is generated, and it is rugged enough to operate in temperatures ranging from -5°C to 55°C. All of the servers include Lenovo’s Neptune air- and liquid-cooling technology and are available through the TruScale pay-as-you-go pricing model.

In addition to the new hardware, Lenovo introduced new AI Advisory Services with AI Factory Integration, which give customers access to professionals who identify, deploy, and manage best-fit AI inferencing servers. It also launched Premier Support Plus, a service that provides professional assistance with data center management, freeing up IT resources for higher-value projects.

Read More »

Deploying a hybrid approach to Web3 in the AI era

In partnership with AIOZ Network

When the concept of “Web 3.0” first emerged about a decade ago, the idea was clear: Create a more user-controlled internet that lets you do everything you can now, except without servers or intermediaries to manage the flow of information. Where Web2, which emerged in the early 2000s, relies on centralized systems to store data and supply compute, all owned—and monetized—by a handful of global conglomerates, Web3 turns that structure on its head. Instead, data and compute are decentralized through technologies like blockchain and peer-to-peer networks.

What was once a futuristic concept is quickly becoming a more concrete reality, even at a time when Web2 still dominates. Six out of ten Fortune 500 companies are exploring blockchain-based solutions, most taking a hybrid approach that combines traditional Web2 business models and infrastructure with the decentralized technologies and principles of Web3. Popular use cases include cloud services, supply chain management, and, most notably, financial services. In fact, at one point, the daily volume of transactions processed on decentralized finance exchanges exceeded $10 billion.
Gaining a Web3 edge

Among the advantages of Web3 for the enterprise are greater ownership and control of sensitive data, says Erman Tjiputra, founder and CEO of the AIOZ Network, which is building infrastructure for Web3 powered by decentralized physical infrastructure networks (DePIN), blockchain-based systems that govern physical infrastructure assets. More cost-effective compute is another benefit, as is enhanced security and privacy as the cyberattack landscape grows more hostile, he adds. And it could even help protect companies from outages caused by a single point of failure, which can lead to downtime, data loss, and revenue deficits.
But perhaps the most exciting opportunity, says Tjiputra, is the ability to build and scale AI reliably and affordably. By leveraging a people-powered internet infrastructure, companies can far more easily access—and contribute to—shared resources like bandwidth, storage, and processing power to run AI inference, train models, and store data, all while using familiar developer tooling and open, usage-based incentives. “We’re in a compute crunch where requirements are insatiable, and Web3 creates this ability to benefit while contributing,” explains Tjiputra. In 2025, AIOZ Network launched a distributed compute platform and marketplace where developers and enterprises can access and monetize AI assets, and run AI inference or training on AIOZ Network’s more than 300,000 contributing devices. The model allows companies to move away from opaque datasets and models and scale flexibly, without centralized lock-in.

Overcoming Web3 deployment challenges

Despite the promise, it is still early days for Web3, and core systemic challenges are leaving senior leadership and developers hesitant about its applicability at scale. One hurdle is a lack of interoperability. The current fragmentation of blockchain networks creates a segregated ecosystem that makes it challenging to transfer assets or data between platforms. This often complicates transactions and introduces new security risks due to the reliance on mechanisms such as cross-chain bridges. These are tools that allow asset transfers between platforms but which have been shown to be vulnerable to targeted attacks. “We have countless blockchains running on different protocols and consensus models,” says Tjiputra. “These blockchains need to work with each other so applications can communicate regardless of which chain they are on. This makes interoperability fundamental.”

Regulatory uncertainty is also a challenge. Outdated legal frameworks can sit at odds with decentralized infrastructures, especially when it comes to compliance with data protection and anti-money-laundering regulations. “Enterprises care about verifiability and compliance as much as innovation, so we need frameworks where on-chain transparency strengthens accountability instead of adding friction,” Tjiputra says.

And this is compounded by user experience (UX) challenges, says Tjiputra. “The biggest setback in Web3 today is UX,” he says. “For example, in Web2, if I forget my bank username or password, I can still contact the bank, log in and access my assets. The trade-off in Web3 is that, should that key be compromised or lost, we lose access to those assets. So, key recovery is a real problem.”

Building a bridge to Web3

Although such systemic challenges won’t be solved overnight, by leveraging DePIN networks enterprises can bridge the gap between Web2 and Web3 without making a wholesale switch. This can minimize risk while harnessing much of the potential. AIOZ Network’s own ecosystem includes capacity for media streaming, AI compute, and distributed storage that can be plugged into an existing Web2 tech stack. “You don’t need to go full Web3,” says Tjiputra. “You can start by plugging distributed storage into your workflow, test it, measure it, and see the benefits firsthand.”

The AIOZ Storage solution, for example, offers scalable distributed object storage by leveraging the global network of contributor devices on AIOZ DePIN. It is also compatible with existing storage systems and commonly used web application programming interfaces (APIs). “Say we have a programmer or developer who uses Amazon S3 Storage or REST APIs, then all they need to do is just repoint the endpoints,” explains Tjiputra. “That’s it. It’s the same tools, it’s really simple. Even with media, with a single one-stop shop, developers can do transcoding and streaming with a simple REST API.” (A sketch of what that repointing looks like appears at the end of this article.)

Built on Cosmos, a network of hundreds of different blockchains that can communicate with each other, and a standardized framework enabled by the Ethereum Virtual Machine (EVM), AIOZ Network has also prioritized interoperability. “Applications shouldn’t care which chain they’re on. Developers should target APIs without worrying about consensus mechanisms. That’s why we built on Cosmos and EVM—interoperability first.”

This hybrid model, which allows enterprises to use both Web2 and Web3 advantages in tandem, underpins what Tjiputra sees as the longer-term ambition for the much-hyped next iteration of the internet. “Our vision is a truly peer-to-peer foundation for a people-powered internet, one that minimizes single points of failure through multi-region, multi-operator design,” says Tjiputra. “By distributing compute and storage across contributors, we gain both cost efficiency and end-to-end security by default.
“Ideally, we want to evolve the internet toward a more people-powered model, but we’re not there yet. We’re still at the starting point and growing.” Indeed, Web3 isn’t quite snapping at the heels of the world’s Web2 giants, but its commercial advantages in an era of AI have become much harder to ignore. And with DePIN bridging the gap, enterprises and developers can step into that potential while keeping one foot on surer ground.
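Here is what “repointing the endpoints” might look like in practice with standard S3 tooling (boto3 in Python). The gateway URL and credentials below are placeholders, not real AIOZ values, so treat this as an illustrative sketch of the pattern rather than AIOZ documentation.

```python
import boto3

# Hypothetical illustration of "repointing the endpoint": ordinary S3 tooling
# aimed at an S3-compatible gateway instead of AWS. The endpoint URL and
# credentials are placeholders, not real AIOZ values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-gateway.example.com",  # placeholder S3-compatible endpoint
    aws_access_key_id="EXAMPLE_KEY_ID",             # placeholder credential
    aws_secret_access_key="EXAMPLE_SECRET",         # placeholder credential
)

# The calls themselves are unchanged from everyday S3 usage.
s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt",
              Body=b"hello, people-powered storage")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```

The design point Tjiputra is making is that the application code never changes; only the endpoint configuration does, which is what makes the hybrid Web2/Web3 approach low-risk to trial.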
To learn more from AIOZ Network, you can read the AIOZ Network Vision Paper. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, datacenter, and energy industry news. Spend 3-5 minutes and catch up on one week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE