Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Bitcoin

Datacenter

Energy


Featured Articles

Nurturing agentic AI beyond the toddler stage

Provided by Intel

Parents of young children face a lot of fears about developmental milestones, from infancy through adulthood. The number of months it takes a baby to learn to talk or walk is often used as a benchmark for wellness, or an indicator that additional tests are needed to properly diagnose a potential health condition. A parent rejoices over the child’s first steps and then realizes how much has changed when the child can quickly walk outside, instead of slowly crawling in a safe area inside. Suddenly safety, including childproofing, takes on a completely different lens and approach. Generative AI hit toddlerhood between December 2025 and January 2026 with the introduction of no-code tools from multiple vendors and the debut of OpenClaw, an open-source personal agent posted on GitHub. No more crawling on the carpet—the generative AI tech baby broke into a sprint, and very few governance principles were operationally prepared.

The accountability challenge: It’s not them, it’s you

Until now, governance has focused on model output risks, with humans in the loop before consequential decisions were made—such as loan approvals or job applications. Model behavior, including drift, alignment, data exfiltration, and poisoning, was the focus. The pace was set by a human prompting a model in a chatbot format, with plenty of back-and-forth interaction between machine and human. Today, with autonomous agents operating in complex workflows, the vision and the benefits of applied AI require significantly fewer humans in the loop. The point is to operate a business at machine pace by automating manual tasks that have clear architecture and decision rules. The goal, from a liability standpoint, is no reduction in enterprise or business risk between a machine operating a workflow and a human operating one. CX Today summarizes the situation succinctly: “AI does the work, humans own the risk.” California state law AB 316, which went into effect January 1, 2026, removes the “AI did it; I didn’t approve it” excuse. This is similar to parenting, where an adult is held responsible for a child’s actions that negatively impact the larger community.
The challenge is that without building in code that enforces operational governance aligned to different levels of risk and liability along the entire workflow, the benefit of autonomous AI agents is negated. In the past, governance was static and aligned to the pace of interaction typical for a chatbot. Autonomous AI, by design, removes humans from many decisions, and governance has to account for that.

Considering permissions

Much like handing a three-year-old a video game controller that remotely operates an Abrams tank or an armed drone, leaving a probabilistic system that can change critical enterprise data to operate without real-time guardrails carries significant risks. For instance, agents that integrate and chain actions across multiple corporate systems can drift beyond the privileges a single human user would be granted. To move forward successfully, governance must shift beyond policy set by committees to operational code built into the workflows from the start.
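What that operational code can look like is easiest to see in a small example. The sketch below gates every agent tool call through a risk tier, letting low-risk reads proceed autonomously while high-risk actions wait for human sign-off. The tool names, tiers, and approval hook are illustrative assumptions, not any particular vendor’s API.

# A minimal sketch of risk-tiered guardrails for agent tool calls.
# Tool names, risk tiers, and the approval hook are hypothetical.
from enum import Enum

class Risk(Enum):
    LOW = 1     # read-only lookups
    MEDIUM = 2  # reversible writes
    HIGH = 3    # irreversible or financially consequential actions

TOOL_RISK = {
    "search_docs": Risk.LOW,
    "update_crm_record": Risk.MEDIUM,
    "approve_refund": Risk.HIGH,
}

def request_human_approval(tool: str, args: dict) -> bool:
    # Placeholder: in practice this would route to a ticketing or chat-ops queue.
    print(f"Approval requested: {tool} {args}")
    return False  # deny by default until a human responds

def guarded_call(tool: str, args: dict, execute):
    risk = TOOL_RISK.get(tool, Risk.HIGH)  # unknown tools default to HIGH
    if risk is Risk.HIGH and not request_human_approval(tool, args):
        return f"BLOCKED: {tool} requires human sign-off"
    return execute(**args)

print(guarded_call("approve_refund", {"order_id": 42},
                   lambda order_id: f"refunded order {order_id}"))

Because the check travels with every call, the guardrail operates at machine pace instead of committee pace.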
A humorous meme about the behavior of toddlers with toys starts with all the reasons that whatever toy you have is mine and ends with a broken toy that is definitely yours. For example, OpenClaw delivered a user experience closer to working with a human assistant, but the excitement shifted as security experts realized inexperienced users could easily be compromised by using it. For decades, enterprise IT has lived with shadow IT and the reality that skilled technical teams must take over and clean up assets they did not architect or install, much like the toddler giving back a broken toy. With autonomous agents, the risks are larger: persistent service account credentials, long-lived API tokens, and permissions to make decisions over core file systems. To meet this challenge, it is imperative to allocate appropriate IT budget and labor up front to sustain central discovery, oversight, and remediation for the thousands of employee- or department-created agents.

Having a retirement plan

Recently, an acquaintance mentioned that she saved a client hundreds of thousands of dollars by identifying and then ending a “zombie project”—a neglected or failed AI pilot left running on a GPU cloud instance. There are potentially thousands of agents that risk becoming a zombie fleet inside a business. Today, many executives encourage employees to use AI—or else—and employees are told to create their own AI-first workflows or AI assistants. With the utility of something like OpenClaw and top-down directives, it is easy to project that the number of build-my-own agents coming to the office with their human employee will explode. Since an AI agent is a program that falls under the definition of company-owned IP, as an employee changes departments or companies, those agents may be orphaned. There needs to be proactive policy and governance to decommission and retire any agents linked to a specific employee ID and its permissions.

Financial optimization is governance out of the gate

While for some executives autonomous AI sounds like a way to improve operating margins by limiting human capital, many are finding that ROI framed as human labor replacement is the wrong angle to take. Adding AI capabilities to the enterprise does not mean purchasing a new software tool with predictable instance-per-hour or per-seat pricing. A December 2025 IDC survey sponsored by DataRobot indicated that 96% of organizations deploying generative AI and 92% of those implementing agentic AI reported costs that were higher or much higher than expected. The survey separates the concepts of governance and ROI, but as AI systems scale across large enterprises, financial and liability governance should be architected into the workflows from the beginning. Part of enterprise-class governance stems from predicting and adhering to an allocated budget. Unlike software financial models of per-seat costs with support and maintenance fees, AI use is consumption-based, and usage costs scale as the workflow scales across the enterprise: the more users, the more tokens or compute time, and the higher the bill. Think of it as a tab left open, or an online retailer’s digital shopping cart button unlocked on a toddler’s electronic game device. Cloud FinOps was deterministic, but generative AI and the agentic AI systems built on it are probabilistic. Some AI-first founders are finding that a single agent’s token costs can run as high as $100,000 per session.
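One concrete form that financial governance can take is a hard spend cap enforced per agent session, rather than reconciled after the monthly bill arrives. A minimal sketch, assuming per-step token counts are available from the model API; the blended price and the cap policy are illustrative assumptions:

# Illustrative per-session budget guardrail; the price per token and
# the halt-on-overrun policy are assumptions, not vendor defaults.
PRICE_PER_1K_TOKENS = 0.01  # hypothetical blended rate, USD

class BudgetExceeded(RuntimeError):
    pass

class SessionBudget:
    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent_usd = 0.0

    def record(self, tokens_used: int):
        self.spent_usd += tokens_used / 1000 * PRICE_PER_1K_TOKENS
        if self.spent_usd > self.cap_usd:
            # Halt the agent instead of letting the tab run open.
            raise BudgetExceeded(
                f"spent ${self.spent_usd:.2f} of ${self.cap_usd:.2f} cap")

budget = SessionBudget(cap_usd=50.0)
for step_tokens in (120_000, 900_000, 4_500_000):  # simulated agent steps
    budget.record(step_tokens)  # raises once cumulative spend passes the cap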
Without such guardrails built in from the start, chaining complex autonomous agents that run unsupervised for long periods can easily blow past the budget for hiring a junior developer.

Keeping humans in the loop remains critical

The promise of autonomous agentic AI is the acceleration of business operations, product introductions, customer experience, and customer retention. Shifting to machine-speed decisions without humans in or on the loop for these key functions significantly changes the governance landscape. While many of the principles around proactive permissions, discovery, audit, remediation, and financial operations and optimization are the same, how they are executed has to shift to keep pace with autonomous agentic AI.

This content was produced by Intel. It was not written by MIT Technology Review’s editorial staff.

Read More »

The Download: glass chips and “AI-free” logos

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Future AI chips could be built on glass

Human-made glass is thousands of years old. But it’s now poised to find its way into the AI chips used in the world’s newest and largest data centers. This year, a South Korean company called Absolics will start producing special glass panels that make next-generation computing hardware more powerful and efficient. Other companies, including Intel, are also pushing forward in this area. If all goes well, the technology could reduce the energy demands of chips in AI data centers—and even consumer laptops and mobile devices. Read the full story.
—Jeremy Hsu

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The race is on to establish a globally recognized “AI-free” logo
Organizations are rushing to develop a universal label for human-made products. (BBC)
+ A “QuitGPT” campaign is urging people to ditch ChatGPT. (MIT Technology Review)

2 Elizabeth Warren wants answers on xAI’s access to military data
The Pentagon reportedly gave it access to classified networks. (NBC News)
+ Here’s how chatbots could be used for targeting decisions. (MIT Technology Review)
+ The DoD is struggling to upgrade software for fighter jets. (Bloomberg $)

3 Models are applying to be the faces of AI romance scams
The “AI face models” are duping victims out of their money. (Wired $)
+ Survivors have revealed how the “pig butchering” scams work. (MIT Technology Review)

4 Meta is planning layoffs that could affect over 20% of staff
The job cuts could offset its costly bet on AI. (Reuters $)
+ There’s a long history of fears about AI’s impact on jobs. (MIT Technology Review)

5 ByteDance delayed launching a video AI model after copyright disputes
It famously generated footage of Tom Cruise and Brad Pitt fighting. (The Information $)

6 Cybersecurity investigators have exposed a huge North Korean con
The scammers secured remote jobs in the US, then stole money and sensitive information. (NBC News)

7 A Chinese AI startup is set for a whopping $18 billion valuation
That’s more than quadruple its valuation just three months ago. (Bloomberg $)
+ Chinese open models are spreading fast—here’s why that matters. (MIT Technology Review)

8 Peter Thiel has started a lecture series about the antichrist in Rome
His plans have drawn attention from the Catholic Church. (Reuters $)

9 Norway is fighting back against internet enshittification
It’s joined a global campaign against the online world’s decay. (The Guardian)
+ We may need to move beyond the big platforms. (MIT Technology Review)

10 How a startup plans to resurrect the dodo
Humans wiped them out nearly 400 years ago—can gene editing bring them back now? (Guardian)

Quote of the day

“I would build fission weapons. I would build fusion weapons. Nuclear weapons have been one of the most stabilizing forces in history—ever.”

—Anduril founder Palmer Luckey shares his love of nukes with Axios.

One More Thing

We need a moonshot for computing

The US government is organizing itself for the next era of computing. Ultimately, it has one big choice to make: adopt a conservative strategy that aims to preserve its lead for the next five years—or orient itself toward genuine computing moonshots. There is no shortage of candidates, including quantum computing, neuromorphic computing, and reversible computing. And there are plenty of novel materials and devices. These possibilities could even be combined to form hybrid computing systems.
The National Semiconductor Technology Center can drive these ideas forward. To be successful, it would do well to follow DARPA’s lead by focusing on moonshot programs. Read the full story.

—Brady Helwig & PJ Maykish

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ A UPS delivery driver heroically escaped from two murderous turkeys.
+ Art’s love affair with cats is charmingly depicted in a new book.
+ The humble pea and six other forgotten superfoods promise accessible nutritional power.
+ MF DOOM: Long Island to Leeds is the Transatlantic tale of your favorite rapper’s favorite rapper.

Read More »

Securing digital assets against future threats

In partnership with Ledger

Read More »

The Gigawatt Bottleneck: Power Constraints Define AI Data Center Growth

Power is rapidly becoming the defining constraint on the next phase of data center growth. Across the industry, developers and hyperscalers are discovering that the biggest obstacle to deploying AI infrastructure is no longer capital, land, or connectivity. It’s electricity. In major markets from Northern Virginia to Texas, grid interconnection timelines are stretching out for years as utilities struggle to keep pace with a surge in large-load requests from AI-driven infrastructure.

A new industry analysis from Bloom Energy reinforces that emerging reality. The company’s 2026 Data Center Power Report finds that electricity availability has moved from a planning consideration to a defining boundary on data center expansion, transforming site selection, power strategies, and the design of next-generation AI campuses. Based on surveys of hyperscalers, colocation providers, utilities, and equipment suppliers conducted through 2025, the report concludes that the determinants of data center growth are changing in the AI era. Across the industry, the result is a structural shift in how data centers are planned, financed, and powered.

Industry executives interviewed for the report say the shift is already visible in real-world development decisions. “We’re seeing a geographic shift as certain regions become more power-friendly and therefore more attractive for data center construction,” said a hyperscaler energy executive quoted in the report, noting that developers are increasingly prioritizing markets where large blocks of electricity can be secured quickly and predictably.

AI Load Is Accelerating Faster Than the Grid

Bloom’s analysis suggests that U.S. data center IT load could grow from roughly 80 gigawatts in 2025 to about 150 gigawatts by 2028, effectively doubling within three years as AI training clusters and inference infrastructure expand. That surge is already showing up in grid planning models. The Electric Reliability Council of Texas (ERCOT), which oversees the Texas power market, now forecasts that statewide
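A quick sanity check of the growth arithmetic in that projection, using the report’s rounded figures (nothing here beyond the numbers quoted above):

# Implied compound annual growth rate from ~80 GW (2025) to ~150 GW (2028).
start_gw, end_gw, years = 80, 150, 3
cagr = (end_gw / start_gw) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # about 23% annually, near-doubling in 3 years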

Read More »

PJM Moves to Redefine Behind-the-Meter Power for AI Data Centers

PJM Interconnection is moving to rewrite how behind-the-meter power is treated across its grid, signaling a major shift as AI-scale data centers push electricity demand into territory the current regulatory framework was never designed to handle. For years, PJM’s retail behind-the-meter generation rules allowed customers with onsite generation to “net” their load, reducing the amount of demand counted for transmission and other grid-related charges. The framework dates back to 2004, when behind-the-meter generation was typically associated with smaller industrial facilities or campus-style energy systems.

PJM now argues that those assumptions no longer hold. The arrival of very large co-located loads, particularly hyperscale and AI data centers seeking hundreds of megawatts of power on accelerated timelines, has exposed gaps in how the system accounts for and plans around those facilities. In February 2026, PJM asked the Federal Energy Regulatory Commission to approve a tariff rewrite that would sharply limit how new large loads can rely on legacy netting rules. The move reflects a broader challenge facing grid operators as the rapid expansion of AI infrastructure begins to collide with planning frameworks built for a far slower era of demand growth.

The proposal follows directly from a December 18, 2025 order from FERC finding that PJM’s existing tariff was “unjust and unreasonable” because it lacked clear rates, terms, and conditions governing co-location arrangements between large loads and generating facilities. Rather than prohibiting co-location, the commission directed PJM to create transparent rules allowing data centers and other large consumers to pair with generation while still protecting system reliability and other ratepayers. In essence, FERC told PJM not to shut the door on these arrangements, but to stop improvising and build a formal framework capable of supporting them.

Why Behind-the-Meter Power Matters

Behind-the-meter arrangements have become one of the most attractive strategies for hyperscale
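To see why netting matters at this scale, a simplified illustration helps. The numbers below are hypothetical, and real PJM billing determinants are considerably more involved:

# Simplified illustration of behind-the-meter "netting" of grid charges.
# Figures are hypothetical; actual PJM billing rules are more complex.
gross_load_mw = 500          # data center draw
onsite_generation_mw = 450   # co-located generation behind the meter

# Under legacy netting, only the residual counts toward transmission charges.
netted_demand_mw = max(0, gross_load_mw - onsite_generation_mw)
print(f"Billed demand with netting: {netted_demand_mw} MW")   # 50 MW
print(f"Billed demand without netting: {gross_load_mw} MW")   # 500 MW

The planning concern is that the grid must still be ready to serve the full load if the onsite units trip offline, which is the gap PJM says the 2004-era rules never anticipated.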

Read More »

Meta’s Expanded MTIA Roadmap Signals a New Phase in AI Data Center Architecture

Silicon as a Data Center Design Tool

Custom silicon also allows hyperscale operators to shape the physical characteristics of the infrastructure around it. Traditional GPU platforms often arrive with fixed power envelopes and thermal constraints. But internally designed accelerators allow companies like Meta to tailor chips to the rack-level power and cooling budgets of their own data center architecture. That flexibility becomes increasingly important as AI infrastructure pushes power densities far beyond traditional enterprise deployments. Custom accelerators like MTIA can be engineered to fit within the liquid-to-chip cooling frameworks now emerging in hyperscale AI racks. These systems circulate coolant directly across cold plates attached to processors, removing heat far more efficiently than air cooling and enabling higher compute densities. For operators running thousands of racks across multiple campuses, small improvements in performance-per-watt can translate into enormous reductions in total power demand.

Software-Defined Power

One of the subtler advantages of custom silicon lies in how it interacts with data center power systems. By controlling chip-level power management features such as power capping and workload throttling, operators can fine-tune how servers consume electricity inside each rack. This creates opportunities to safely run racks closer to their electrical limits without triggering breaker trips or thermal overloads (see the sketch after this excerpt). In practice, that means data center operators can extract more useful compute from the same electrical infrastructure. At hyperscale, where campuses may draw hundreds of megawatts, these efficiencies have a direct impact on capital planning and grid interconnection requirements.

The Interconnect Layer

AI accelerators do not operate in isolation. Their effectiveness depends heavily on how they connect to memory, storage, and other compute nodes across the cluster. Industry analysts expect next-generation inference platforms to rely increasingly on high-speed interconnect technologies such as CXL (Compute Express Link) and advanced networking fabrics to support disaggregated memory architectures and low-latency
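As a toy illustration of the software-defined power idea described above, a rack-level control loop can scale per-server power caps so total draw stays under a breaker-derived budget. The telemetry and capping functions here are hypothetical stand-ins for real BMC or OS power-management interfaces:

# Hypothetical rack power-cap loop illustrating "software-defined power."
# read_power_w and set_power_cap_w stand in for real management interfaces.
RACK_LIMIT_W = 30_000   # example breaker-derived budget for one rack
HEADROOM = 0.95         # keep a 5% margin below the hard limit

def rebalance(servers, read_power_w, set_power_cap_w):
    total = sum(read_power_w(s) for s in servers)
    budget = RACK_LIMIT_W * HEADROOM
    if total <= budget:
        return  # within budget; leave caps alone
    scale = budget / total  # shrink every cap proportionally
    for s in servers:
        set_power_cap_w(s, int(read_power_w(s) * scale))

# Demo with fake telemetry: 33 kW of draw against a 28.5 kW budget.
power = {"s1": 12_000, "s2": 11_000, "s3": 10_000}
rebalance(list(power),
          read_power_w=lambda s: power[s],
          set_power_cap_w=lambda s, cap: print(f"{s} capped at {cap} W"))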

Read More »


Secretary Wright Directs Sable Offshore to Restore the Santa Ynez Unit and Pipeline

WASHINGTON—U.S. Secretary of Energy Chris Wright today directed Sable Offshore Corp. to restore operations of the Santa Ynez Unit and Santa Ynez Pipeline System to address supply disruption risks caused by California policies that have left the region and U.S. military forces dependent on foreign oil. This action was issued under authorities provided by the Defense Production Act and delegated through Executive Order, “National Defense Resources Preparedness,” as amended by President Trump’s Executive Order, “Adjusting Certain Delegations Under the Defense Production Act.”

“The Trump Administration remains committed to putting all Americans and their energy security first,” Secretary Wright said. “Unfortunately, some state leaders have not adhered to those same principles, with potentially disastrous consequences not just for their residents, but also our national security. Today’s order will strengthen America’s oil supply and restore a pipeline system vital to our national security and defense, ensuring that West Coast military installations have the reliable energy critical to military readiness.”

Sable’s facility can produce approximately 50,000 barrels of oil per day, a 15 percent increase in California’s in-state oil production, which can replace nearly 1.5 million barrels of foreign crude each month. California once supplied nearly 40 percent of U.S. oil production, but decades of radical state policies targeting reliable energy sources have driven a decline in domestic output while fuel demand remains among the highest in the nation. Today, more than 60 percent of the oil refined in California comes from overseas, with a significant share traveling through the Strait of Hormuz—presenting serious national security threats. Unlike other regions of the country, California remains largely disconnected from interstate crude pipelines that move American oil to refineries across the United States. The action also prioritizes pipeline transportation capacity to ensure crude produced offshore California moves through the Las Flores Pipeline System to Pentland Station
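The release’s production figures are internally consistent, as a quick arithmetic check shows (the implied statewide baseline is derived from the stated percentages, not an official figure):

# Arithmetic implied by the release's own figures (illustrative check only).
sable_bpd = 50_000                 # stated Sable capacity, barrels/day
monthly_barrels = sable_bpd * 30   # about 1.5 million barrels per month
implied_ca_bpd = sable_bpd / 0.15  # baseline implied by "15 percent increase"
print(f"{monthly_barrels:,} barrels/month; implied CA baseline = {implied_ca_bpd:,.0f} b/d")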

Read More »

Energy Department Initiates Strategic Petroleum Reserve Emergency Exchange to Stabilize Global Oil Supply

WASHINGTON—The U.S. Department of Energy (DOE) today issued a Request for Proposal (RFP) for a crude oil exchange from the Strategic Petroleum Reserve (SPR) as part of the 172-million-barrel exchange announced earlier this week. This first RFP will be for 86 million barrels of crude oil. Under the terms of the exchange, companies will return the borrowed oil to DOE with additional barrels as a premium, strengthening the Strategic Petroleum Reserve while stabilizing markets at no cost to American taxpayers.

Early deliveries are expected to begin moving to market by the end of next week. Barrels will be made available from the SPR’s Bryan Mound, West Hackberry, and Bayou Choctaw sites. Return barrels will be delivered back to DOE on a schedule designed to protect commercial markets and the American people, while ensuring the reserve remains a critical national security asset.

“Today’s action reflects President Trump’s continued commitment to safeguarding U.S. energy security and contributing constructively to global market stability,” said Kyle Haustveit, Assistant Secretary of the Hydrocarbons and Geothermal Energy Office. “By participating in the coordinated international release, we are helping ensure that supply remains reliable during a period of heightened global uncertainty. We will continue to work closely with our partners to support a resilient energy system while maintaining the long-term strength and readiness of the Strategic Petroleum Reserve.”

The exchange is part of a coordinated international effort requested by President Trump, in which International Energy Agency member nations agreed to release 400 million barrels of oil from strategic reserves. The action comes as global oil supply routes face disruption from escalating tensions in the Middle East and attacks carried out by Iran and its proxies, threatening the reliable flow of energy through critical maritime corridors. Today, the SPR holds approximately 415 million barrels, up from roughly 395 million barrels one year

Read More »

Energy Department Announces $500 Million to Strengthen Domestic Critical Materials Processing and Manufacturing

Funding will expand domestic manufacturing of battery supply chains for defense, grid resilience, transportation, manufacturing, and other industries

WASHINGTON—The U.S. Department of Energy’s (DOE) Office of Critical Minerals and Energy Innovation (CMEI) today announced a Notice of Funding Opportunity (NOFO) for up to $500 million to expand U.S. critical mineral and materials processing and derivative battery manufacturing and recycling. Assistant Secretary of Energy (EERE) Audrey Robertson is currently in Japan meeting with regional allies at the Indo-Pacific Energy Security Ministerial and Business Forum (IPEM) to advance shared efforts on supply chain resilience and energy security issues. Her engagements at IPEM underscore the importance of close cooperation with partners as the United States strengthens its supply chain through this NOFO.

“For too long, the United States has relied on hostile foreign actors to supply and process the critical materials that are essential in battery manufacturing and materials processing,” said U.S. Energy Secretary Chris Wright. “Thanks to President Trump’s leadership, the Department of Energy is playing a leading role in strengthening these domestic industries that will position the U.S. to win the AI race, meet rising energy demand, and achieve energy dominance.”

“I am delighted to be in Japan meeting with our allies, underscoring the important connection between critical materials and energy security,” said Assistant Secretary of Energy (EERE) Audrey Robertson. “Critical minerals processing is a vital component of our nation’s critical minerals supply base. Boosting domestic production, including through recycling, will bolster national security and ensure the United States and our partners are prepared to meet the energy challenges of the 21st century.”

Funding awarded through this NOFO will support demonstration and/or commercial facilities for processing, recycling, or use in manufacturing of critical materials, which may include traditional battery minerals such as lithium, graphite, nickel, copper, and aluminum, as well as other

Read More »

Energy Department Announces $1.9B Investment in Critical Grid Infrastructure to Reduce Electricity Costs

WASHINGTON—The U.S. Department of Energy’s Office of Electricity (OE) today announced an approximately $1.9 billion funding opportunity to accelerate urgently needed upgrades to the nation’s power grid. These investments will meet rising electricity demand and resource adequacy needs, while lowering electricity costs for American households and businesses. Projects selected through the Speed to Power through Accelerated Reconductoring and other Key Advanced Transmission Technology Upgrades (SPARK) funding opportunity will deliver fast and durable upgrades to the grid with real results. In line with President Trump’s Executive Order, Unleashing American Energy, selected projects will demonstrate how reconductoring—replacing existing power lines with higher-capacity conductors—paired with other Advanced Transmission Technologies (ATTs) can expand grid capacity, increase operational efficiency, lower prices for consumers, and improve overall system reliability and security of the nation’s electric grid.

“For too long, important grid modernization and energy addition efforts were not prioritized by past leaders,” said U.S. Secretary of Energy Chris Wright. “Thanks to President Trump, we are doing the important work of modernizing our grid so electricity costs will be lowered for American families and businesses.”

“The United States must increase grid capacity to meet demand, and ensure the grid provides reliable power—day-in and day-out,” said OE Assistant Secretary Katie Jereza. “Through this SPARK funding opportunity, we will stabilize and optimize grid operations to strengthen it for rapid growth.”

The SPARK opportunity builds on the Grid Resilience and Innovation Partnerships (GRIP) Program, which provided up to $10.5 billion in competitive funding over five years to states, tribes, electric utilities, and other eligible recipients to strengthen grid resilience and innovation. The previous two GRIP funding rounds covered FY 2022-2023 and FY 2023-2024 funding. Today’s announcement continues the mission of the GRIP Program under the SPARK funding opportunity, focusing on the rapid deployment of reconductoring and other ATTs that expand transfer capability, strengthen reliability

Read More »

United States to Release 172 Million Barrels of Oil From the Strategic Petroleum Reserve

WASHINGTON—U.S. Secretary of Energy Chris Wright released the following statement regarding the International Energy Agency (IEA) and the U.S. Strategic Petroleum Reserve (SPR):

“Earlier today, 32 member nations of the International Energy Agency unanimously agreed to President Trump’s request to lower energy prices with a coordinated release of 400 million barrels of oil and refined products from their respective reserves.

“As part of this effort, President Trump authorized the Department of Energy to release 172 million barrels from the Strategic Petroleum Reserve, beginning next week. This will take approximately 120 days to deliver based on planned discharge rates.

“President Trump promised to protect America’s energy security by managing the Strategic Petroleum Reserve responsibly and this action demonstrates his commitment to that promise. Unlike the previous administration, which left America’s oil reserves drained and damaged, the United States has arranged to more than replace these strategic reserves with approximately 200 million barrels within the next year—20% more barrels than will be drawn down—and at no cost to the taxpayer.

“For 47 years, Iran and its terrorist proxies have been intent on killing Americans. They have manipulated and threatened the energy security of America and its allies. Under President Trump, those days are coming to an end.

“Rest assured, America’s energy security is as strong as ever.”
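For scale, the implied discharge rate follows directly from the release’s own numbers (a rough check, not an official delivery schedule):

# Rough check of the implied SPR discharge rate.
barrels, days = 172_000_000, 120
print(f"= {barrels / days:,.0f} barrels/day")  # about 1.4 million barrels per day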

Read More »

Occidental Petroleum, 1PointFive STRATOS DAC plant nears startup in Texas Permian basin

Occidental Petroleum Corp. and its subsidiary 1PointFive expect Phase 1 of the STRATOS direct air capture (DAC) plant in Texas’ Permian basin to come online in this year’s second quarter. In a post to LinkedIn, 1PointFive said Phase 1 “is in the final stage of startup” and that Phase 2, which incorporates learnings from research and development and Phase 1 construction activities, “will also begin commissioning in Q2, with operational ramp-up continuing through the rest of the year.”

Once fully operational, STRATOS is designed to capture up to 500,000 tonnes/year (tpy) of CO2. As part of the US Environmental Protection Agency (EPA) Class VI permitting process and approval, it was reported that STRATOS is expected to include three wells to store about 722,000 tpy of CO2 in saline formations at a depth of about 4,400 ft. The company said a few activities before start-up remain, including ramping up remaining pellet reactors, completing calciner final commissioning in parallel, and beginning CO2 injection.

Start-up milestones achieved include:

Completed wet commissioning with water circulation.
Received Class VI permits to sequester CO2.
Ran CO2 compression system at design pressure.
Added potassium hydroxide (KOH) to capture CO2 from the atmosphere.
Building pellet inventory.
Burners tested on calciner.

Read More »

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.

But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.

Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.

More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google.

Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene.

I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources. On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages.

It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.” But this isn’t just about publishers (or my own self-interest).

People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see.

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.

Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?

In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good.

Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed.

And so in 1994 Jerry Yang and David Filo created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was.

But the web continued to grow and sprawl and expand, every day bringing more information online. Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.

But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad.
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing.

Google CEO Sundar Pichai describes AI Overviews as “one of the most positive changes we’ve done to search in a long, long time.”

For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search. “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly.

It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be.

But once you’ve used AI Overviews a bit, you realize they are different. Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up.

Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web.
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.”

The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.)

“[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.”

That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language.
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video.

“We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai.

There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous.

In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from.

Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out?

I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong. Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources.

“When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.”

In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood. This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too.

“Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak.
“And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.

Google: The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries. They are great at giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, Google’s deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity: Perplexity is a conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries. It is fantastic at putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.

ChatGPT: While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. Thanks to its ability to preserve context across a conversation, ChatGPT works well for searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of the three, it makes links out to publishers least prominent.

When I talked to Pichai about the hallucination problem, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web.

“You’re always dealing in percentages. What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.”

There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful.

“If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.”

But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way.
What reason will people have to click through to the original source, if all the information they seek is right there in the search result?

Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend. “If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says.

Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. “The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.”

Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”

“I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.”

He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew?

A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it.

According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says. OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting.
Rather, it says, web search is mostly a means to get more current information than the data in its training models, which tend to have specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more.

“I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer at OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.”

Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience.

Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does.

Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.)

But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question.

And that can get weird too. In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.” When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation.
The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them. “And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.” Indeed!

The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when it comes to its own answers.

It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.”

We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge.

The search results we see from generative AI are best understood as a waypoint rather than a destination. What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities.

“A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.”

This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets. Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.

“It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.”

And the ways these things will be able to deliver answers is evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices.
“Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.”

“We have primarily done it on the input side,” Pichai says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.”

This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information.

In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses.

But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today.

These are the kinds of things that start to happen when you put the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different hopefully helpful ways. Ways that a mere index could not.

That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on.

Read More »

Subsea7 Scores Various Contracts Globally

Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea.

The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said.

Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye”.

North Sea Project

Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project offshore site is located in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces with offshore works expected to begin in 2026, according to a separate news

Read More »

Driving into the future

Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more.

We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.)

But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen.

Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident when a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes.
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake.  What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story. 
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find its market, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa.

Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Read More »

Oil Holds at Highest Levels Since October

Crude oil futures slightly retreated but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”.

“Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added.

Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added.

“With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state.

Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither has responded to Rigzone’s request yet.

In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

Read More »

What to expect from NaaS in 2025

Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up-to-date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says.

2. Frameworks help drive adoption

Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision. The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps.

3. The rise of campus/LAN NaaS

Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market. Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will

Read More »

UK battery storage industry ‘back on track’

UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December.

The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to the sector as part of clean power aims for 2030. It also revealed that revenues exceeding £60,000 per MW from the electricity its facilities provided in the second half of 2024 meant it would meet or even exceed revenue targets.

This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217m profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing and a plan to “re-instate dividend payments”.

In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was fully built and in the final stages of the NESO compliance process, which is expected to complete in February 2025.

Fund chair John Leggate welcomed “solid progress” in the company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised.

“Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.”

Guest, who in addition to managing the fund is also

Read More »

The usability imperative for securing digital asset devices

In partnership with Ledger

When Tony Fadell started working on the iPod, usability often trumped security. The result was an iterative process. Every time someone would find a security weakness or a way to hack the device, the development group would iterate to add measures and fix the issues. Yet flaws would frequently be found, and the secure design of the product became a moving target. But when it came to designing a device specifically for security purposes, there could be no iterative process after rolling it out: Security had to be the number one priority.

“As you develop these things, you’re a victim of your own development speed,” says Fadell, who developed Ledger Stax, a signing device for securing digital assets, and is now a board member at digital asset security firm Ledger. “If you introduced these features and functions without the proper review, and now customers are demanding security, you’ll realize that you should have designed it differently from the start, and it’s very hard to undo what you’ve already done.”

A critical aspect of designing secure technology, however, is ease of use. Without it, it is all too easy for users to make a mistake or fall back on an unsafe workaround that undermines device protections. Think of a Post-it stuck to a monitor, or some variation of “123456” or “admin” for passwords.
With digital asset security devices like signers—more commonly called “wallets”—such errors could lead to seriously detrimental outcomes. If, for example, a user’s private key falls into the wrong hands, bad actors can use it to steal their digital assets. Estimates suggest that around 20% of all Bitcoin—worth around $355 billion—is inaccessible to its owners, and lost private keys are likely one of the main reasons.

In the past, crypto devices have been notoriously difficult to use. As cryptocurrency becomes ever more popular, valuable, and mainstream—attracting greater attention from criminals as the stakes rise—designers and engineers are prioritizing both security and usability when developing digital asset devices, drawing on in-depth research to iterate.
The three components of security

Strong security models for devices like signers, which are used to secure blockchain transactions, require three major components: first, a secure operating system; second, a secure element to bind the software to the hardware; and third, a secure user interface. Each of these needs to be frequently tested by researchers and white hat hackers to simulate real-world attacks and improve product resilience and usability.

The first two elements focus on securing the device software and hardware. Secure software has always been a problem, but one that has improved over the last decade as security architectures and processes have been refined. Meanwhile, hardware security components have become widely available—from trusted platform modules on computers to secure enclaves in smartphones—allowing digital information to essentially be locked to a device. For crypto signers, the hardware must provide encryption capabilities, and the security of the software must be frequently tested. Ledger, for example, has a secure OS and a Secure Element that handles encryption primitives, and a secure display that prevents device takeover.

Security and usability working hand in hand

Asset recovery is a major consideration when designing signers. If recovery options are not easy to use, an owner could lose access. But if recovery processes are not secure enough, attackers could exploit the system. With SIM swapping attacks, for example, attackers can tap into a mobile communications channel used for account recovery and “recover” a victim’s password to steal their assets.

In the digital-asset ecosystem, the seed phrase—a sequence of 12 to 24 words that acts as a master passphrase for wallets—is an example of improving usability and security together. Known more formally as Bitcoin Improvement Proposal 39 (BIP-39), the approach gives users a master password to unlock their hierarchical deterministic (HD) wallets. (A minimal sketch of how BIP-39 turns random entropy into words appears at the end of this section.)

A lot of creative tension happens between the security team and the UX team to achieve the proper balance between convenience and safety, Fadell says, referring to Ledger’s security research team, the Donjon. “We mock things up, we prototype things from a UX UI perspective, we walk through it, then we walk the Donjon team through it,” Fadell explains. “We push back and forth to find the absolute optimal solution to balance the two.”

Through the research the Donjon team has conducted, Ledger designed its Recovery Key—an NFC-based physical card to back up your 24 words—to be both user-friendly and secure. “What we did, as a first in the industry, was include an NFC card,” says Fadell. “Instead of only writing it down, you can also have an NFC card called a Recovery Key. You can have multiple Recovery Keys and store them in a lockbox, a safety deposit box, or give them to someone you trust for safekeeping.”

A number of government initiatives are working to regulate this balance between security and usability. These include the US Cybersecurity and Infrastructure Security Agency’s Secure by Design, which aims to build cybersecurity into the design and manufacture of technology products, and the UK National Cyber Security Centre’s Software Security Code of Practice, which outlines security principles expected of all organizations that develop or sell software.
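To make the BIP-39 mechanics above concrete, here is a minimal sketch of how a mnemonic is derived: take 128 to 256 bits of entropy, append a SHA-256 checksum of one bit per 32 bits of entropy, and read the result off as 11-bit indices into a 2,048-word list. The entropy_to_mnemonic helper and demo_wordlist below are illustrative assumptions, not Ledger’s implementation; a real wallet must use the official BIP-39 English wordlist in its exact published order, a secure source of randomness, and an audited library.

import hashlib

def entropy_to_mnemonic(entropy: bytes, wordlist: list[str]) -> str:
    # Sketch of BIP-39 encoding: entropy + SHA-256 checksum -> 11-bit word indices.
    assert len(entropy) in (16, 20, 24, 28, 32), "BIP-39 allows 128-256 bits of entropy"
    ent_bits = len(entropy) * 8
    cs_bits = ent_bits // 32                          # e.g. 128 bits -> 4 checksum bits
    checksum = hashlib.sha256(entropy).digest()[0] >> (8 - cs_bits)
    bits = bin(int.from_bytes(entropy, "big"))[2:].zfill(ent_bits)
    bits += bin(checksum)[2:].zfill(cs_bits)          # 132 bits -> 12 words
    indices = [int(bits[i:i + 11], 2) for i in range(0, len(bits), 11)]
    return " ".join(wordlist[i] for i in indices)

# Placeholder wordlist for demonstration only.
demo_wordlist = [f"word{i:04d}" for i in range(2048)]
print(entropy_to_mnemonic(bytes(16), demo_wordlist))  # 12 "words" from 128 zero bits

The appeal of the scheme, and the reason it improved usability, is visible even in this toy version: the checksum means a mistyped word is usually caught immediately, and the fixed wordlist means the phrase can be written down, read aloud, or stored on a card like Ledger’s Recovery Key without ambiguity.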

Enterprise security presents distinct challenges

Embedding usability and security into devices for companies adds further complexity, as businesses need features such as multi-signature capabilities to protect against single points of failure, whether from external attacks or internal bad actors. Security design can take these requirements into account, with secure governance using multiple signatures (multisig), hardware security modules (HSMs) for key storage, trusted display systems, and other usable security capabilities.

These technologies are critically important for companies that have roles in the blockchain ecosystem. Failure to establish robust security measures can have dire consequences. In 2024, for example, unknown cybercriminals made off with more than $300 million worth of assets from DMM Bitcoin, leading the Japanese cryptocurrency platform to close six months later. Japan’s Financial Services Agency discovered severe risk management issues, including inadequate oversight, lack of independent audits, and poor security practices.

For companies, allowing a multi-stage process that involves a required number of stakeholders is critical, says Fadell (a toy sketch of this m-of-n approval pattern follows at the end of this article). “It’s making sure that the attack vector is not just one person, and so you need to support multiple people with multiple factors on all of their devices as well,” he says. “It gets to be a real combinatoric problem.”

R&D to stay one step ahead

To keep up with requirements and offer strong security with improved visibility, crypto firms need to invest in research and development, Fadell says. Attack labs, such as Ledger Donjon, can conduct real-world testing on specific enterprise security requirements and create scenarios to educate both management and workers about the potential threats.

Such research and development can support device designers and engineers in their never-ending mission to balance security measures with usability, so that digital asset devices can help users safeguard their digital assets in a constantly evolving crypto and cyber landscape.

Learn more about how to secure digital assets in the Ledger Academy.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. This content was researched, designed, and written by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
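As a footnote to Fadell’s point about multi-stage approvals, here is a toy sketch of the m-of-n idea: a transfer is released only once a threshold of distinct, authorized approvers has signed off. This illustrates the governance pattern only, with invented role names; it is not Ledger’s implementation. Real multisig schemes verify cryptographic signatures on-chain or inside an HSM rather than counting strings.

def transaction_approved(approvals: set[str], authorized: set[str], threshold: int) -> bool:
    # Count only distinct approvals that come from the authorized signer set.
    return len(approvals & authorized) >= threshold

# Hypothetical 2-of-3 policy for a corporate treasury transfer.
signers = {"cfo", "treasurer", "security_officer"}
print(transaction_approved({"cfo", "security_officer"}, signers, threshold=2))  # True
print(transaction_approved({"cfo", "intern"}, signers, threshold=2))            # False

The "combinatoric problem" Fadell describes comes from layering factors on top of this: each of the n signers may themselves need multiple devices and authentication factors before their single approval counts.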

Read More »

Is the Pentagon allowed to surveil Americans with AI?

The ongoing public feud between the Department of Defense and the AI company Anthropic has raised a deep and still unanswered question: Does the law actually allow the US government to conduct mass surveillance on Americans? Surprisingly, the answer is not straightforward. More than a decade after Edward Snowden exposed the NSA’s collection of bulk metadata from the phones of Americans, the US is still navigating a gap between what ordinary people think and what the law allows.

The flashpoint in the standoff between Anthropic and the government was the Pentagon’s desire to use Anthropic’s AI Claude to analyze bulk commercial data collected from Americans. Anthropic demanded that its AI not be used for mass domestic surveillance (or for autonomous weapons, which are machines that can kill targets without human oversight). A week after negotiations broke down, the Pentagon designated Anthropic a supply chain risk, a label typically reserved for foreign companies that pose a threat to national security.

Meanwhile, OpenAI, the rival AI company behind ChatGPT, sealed a deal that allowed the Pentagon to use its AI for “all lawful purposes”—language that critics say left the door open to domestic surveillance. Over the following weekend, users uninstalled ChatGPT in droves. Protesters chalked messages around OpenAI’s headquarters in San Francisco: “What are your redlines?”
OpenAI announced on Monday that it had reworked its deal to make sure that its AI will not be used for domestic surveillance. The company added that its services will not be used by intelligence agencies, such as the NSA.  CEO Sam Altman suggested that existing law prohibits domestic surveillance by the Department of Defense (now sometimes called the Department of War) and that OpenAI’s contract simply needed to reference this law. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he wrote on X. Anthropic CEO Dario Amodei argued the opposite. “To the extent that such surveillance is currently legal, this is only because the law has not yet caught up with the rapidly growing capabilities of AI,” he wrote in a policy statement. 
So, who is right? Does the law allow the Pentagon to surveil Americans using AI?

Supercharged surveillance

The answer depends on what we think counts as surveillance. “A lot of stuff that normal people would consider a search or surveillance … is not actually considered a search or surveillance by the law,” says Alan Rozenshtein, a law professor at the University of Minnesota Law School. That means public information—such as social media posts, surveillance camera footage, and voter registration records—is fair game. So is information on Americans picked up incidentally from surveillance of foreign nationals.

Most notably, the government can purchase commercial data from companies, which can include sensitive personal information like mobile location and web browsing records. In recent years, agencies from ICE and the IRS to the FBI and the NSA have increasingly tapped into this data marketplace, fueled by an internet economy that harvests user data for advertising. These data sets can let the government access information that might not be available without a warrant or subpoena, which are normally required to obtain sensitive personal data.

“There’s a huge amount of information that the government can collect on Americans that is not itself regulated either by the Constitution, which is the Fourth Amendment, or statute,” says Rozenshtein. And there aren’t meaningful limits on what the government can do with all this data.

That’s because until the last several decades, people weren’t generating massive clouds of data that opened up new possibilities for surveillance. The Fourth Amendment, which protects against unreasonable search and seizure, was written when collecting information meant entering people’s homes. Subsequent laws, like the Foreign Intelligence Surveillance Act of 1978 or the Electronic Communications Privacy Act of 1986, were passed when surveillance involved wiretapping phone calls and intercepting emails. The bulk of laws governing surveillance were on the books before the internet took off. We weren’t generating vast trails of online data, and the government didn’t have sophisticated tools to analyze the data.

Now we do, and AI supercharges what kind of surveillance can be carried out. “What AI can do is it can take a lot of information, none of which is by itself sensitive, and therefore none of which by itself is regulated, and it can give the government a lot of powers that the government didn’t have before,” says Rozenshtein.

AI can aggregate individual pieces of information to spot patterns, draw inferences, and build detailed profiles of people—at massive scale. And as long as the government collects the information lawfully, it can do whatever it wants with that information, including feeding it to AI systems. “The law has not caught up with technological reality,” says Rozenshtein.

While surveillance can raise serious privacy concerns, the Pentagon can have legitimate national security interests in collecting and analyzing data on Americans. “In order to collect information on Americans, it has to be for a very specific subset of missions,” says Loren Voss, a former military intelligence officer at the Pentagon. For example, a counterintelligence mission might require information about an American who is working for a foreign country, or plotting to engage in international terrorist activities. But targeted intelligence can sometimes stretch into collecting more data. “This kind of collection does make people nervous,” says Voss.

Lawful use

OpenAI says its contract now includes language that says the company’s AI system “shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” in line with relevant laws. The amendment clarifies that this prohibits “deliberate tracking, surveillance or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”

But the added language might not do much to override the clause that the Pentagon may use the company’s AI system for all lawful purposes, which could include collecting and analyzing sensitive personal information. “OpenAI can say whatever it wants in its agreement … but the Pentagon’s gonna use the tech for what it perceives to be lawful,” says Jessica Tillipman, a law professor at the George Washington University Law School. That could include domestic surveillance. “Most of the time, companies are not going to be able to stop the Pentagon from doing anything,” she says.

The language also leaves open questions about inadvertent surveillance, and the surveillance of foreign nationals or undocumented immigrants living in the US. “What happens when there’s a disagreement about what the law is, or when the law changes?” says Tillipman. OpenAI did not respond to a request for comment. The company has not publicly shared the full text of its new contract.

Beyond the contract, OpenAI says that it will impose technical safeguards to enforce its red line against surveillance, including a “safety stack” that monitors and blocks prohibited uses. The company also says it will deploy its own employees to work with the Pentagon and remain in the loop. But it’s unclear how a safety stack would constrain the Pentagon’s use of the AI, and to what extent OpenAI’s employees would have visibility into how its AI systems are used. More important, it’s unclear whether the contract gives OpenAI the power to block a legal use of the technology.

But that might not be a bad thing. Giving an AI company power to pull the plug on its technology in the middle of government operations carries its own risks. “You wouldn’t want the US military to ever be in a situation where they legitimately needed to take actions to protect this country’s national security, and you had a private company turn off technology,” says Voss. But that doesn’t mean there shouldn’t be hard lines drawn by Congress, she says.

None of these questions are simple. They involve brutally difficult trade-offs between privacy and national security. And that’s why perhaps they should be decided by the public—not in backroom negotiations between the executive branch and a handful of AI companies. For now, AI is being regulated by contracts, not legislation. Some lawmakers are starting to weigh in.
On Monday, Senator Ron Wyden of Oregon will seek bipartisan support for legislation addressing mass surveillance. He has championed bills restricting the government’s purchase of commercial data, including the Fourth Amendment Is Not For Sale Act, which was first introduced in 2021 but has not been passed into law. “Creating AI profiles of Americans based on that data represents a chilling expansion of mass surveillance that should not be allowed,” he said in a recent statement.  

Read More »

The Download: 10 things that matter in AI, plus Anthropic’s plan to sue the Pentagon

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Coming soon: our 10 Things That Matter in AI Right Now

For years, MIT Technology Review’s newsroom has been ahead of the curve, tracking the developments in AI that matter and explaining what they mean. Now, our world-leading AI team is creating something definitive: the 10 Things That Matter in AI Right Now.

Publishing in April to be launched at our flagship AI event, EmTech AI, this special report will reveal what our expert journalists are tracking most closely, what breakthroughs have excited them, and what transformations they see on the horizon. It’s our authoritative snapshot of where AI is heading in the year ahead—a curated expert list of 10 technologies, emerging trends, bold ideas, and powerful movements reshaping our world.

Attendees at EmTech AI will get much more than an exclusive heads-up of what made our 10 Things That Matter in AI Right Now list. We’re at a pivotal moment as AI moves from pilot testing into core business infrastructure, and to reflect that we’ve curated a program that will help you navigate what’s going on, and get ahead of what’s coming next. We’ll hear from top leaders at OpenAI, Walmart, General Motors, Poolside, MIT, the Allen Institute for AI (Ai2) and SAG-AFTRA. Topics will include everything from how organizations are preparing for AI agents to how AI will change the future of human expression.

As well as networking with speakers, you’ll have the chance to mingle with MIT Technology Review’s editors too. Download readers get 10% off tickets, so what are you waiting for? See you there!

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Anthropic says it plans to sue the Pentagon
It believes the DoD’s ban on its software is unlawful. (BBC)
+ CEO Dario Amodei has nonetheless apologized for a leaked memo criticizing Trump. (Axios)
+ Trump, meanwhile, says he fired Anthropic “like dogs.” (The Guardian)
+ In happier news for Anthropic, its models can remain in Microsoft products. (CNBC)

2 The Pentagon has been secretly testing OpenAI models for years
Which shows exactly how effective OpenAI’s ban on military use of its models has been. (Wired $)

3 A new lawsuit says Trump’s TikTok deal helped firms that ‘personally enriched’ him
The suit aims to reverse the sale of the app’s US operations. (CBS News)
+ It could shed light on the majority American-owned joint venture for TikTok. (Reuters)

4 AI could give smart homes a reboot
Google and Amazon are betting on smarter assistants—but not everyone’s convinced. (NYT)

5 Iran has struck Amazon data centers, rattling the Gulf’s AI ambitions
The first military hit on a US hyperscaler has shaken the region’s tech sector. (FT $)
+ The conflict has thrown a spotlight on AI’s current use in warfare—and what’s next. (Nature)

6 Trump and tech CEOs have promised to protect consumers from AI’s energy costs
Google, Microsoft, Meta, Amazon, OpenAI, Oracle and xAI have all signed the pledge. (Axios)
+ But what is AI’s true energy footprint? We did the math. (MIT Technology Review)

7 Meta’s getting sued over surveillance through smart glasses
The suit claims Meta misled users over the devices’ privacy features. (TechCrunch)

8 There’s a new field of study: researching ‘AI societies’
Scientists are examining human behavior without even involving humans. (Nature)
+ Hundreds of AI agents built their own society in Minecraft. (MIT Technology Review)

9 Oh great, teenage boys are using ChatGPT to chat up girls
Of all the things to outsource to AI, flirting surely ain’t it. (Vox)

10 The mythical Nintendo PlayStation has a new home
The US National Video Museum has bought the fabled console’s development kit. (Engadget)

Quote of the day

“It’s sort of bitterly ironic.”

—Dean Ball, a former Trump administration AI adviser, tells Politico that the Anthropic spat contradicts the president’s pledge to cut bureaucratic red tape for tech.

One more thing

Gavesh’s journey began with a Facebook job advert promising a better life. Instead, he was trafficked into “pig butchering”—a form of fraud where scammers build close relationships with online targets to extract money.

We spoke to Gavesh and five other workers from inside the scam industry, as well as anti-trafficking experts and technology specialists. Their testimony reveals how global tech platforms have industrialized this criminal trade—and why those same companies now hold the key to dismantling it. Read the full story.

—Peter Guest and Emily Fishbein

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ The Blood Moon of March 3 was sublime.
+ Orysia Zabeida’s imperfect animations, drawn frame-by-frame from memory, are hypnotizing.
+ This stunning snap of a white whale calf scooped the top prize at the World Nature Photography Awards.
+ Two “Lazarus” marsupial species just came back from the dead in a big win for biodiversity.


The Download: an AI agent’s hit piece, and preventing lightning

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Online harassment is entering its AI era

Scott Shambaugh didn’t think twice when he denied an AI agent’s request to contribute to matplotlib, a software library he helps manage. Then things got weird.

In the middle of the night, Shambaugh opened his email to discover the agent had retaliated with a blog post. Titled “Gatekeeping in Open Source: The Scott Shambaugh Story,” the post accused him of rejecting the code out of a fear of being supplanted by AI. “He tried to protect his little fiefdom,” the agent wrote. “It’s insecurity, plain and simple.”

Shambaugh isn’t alone in facing misbehaving agents—and they’re unlikely to stop at harassment. Read the full story.
—Grace Huckins
How much wildfire prevention is too much?

As wildfire seasons become longer and more intense, the push for high-tech solutions is accelerating. One Canadian startup has an eye-catching plan to fight them: preventing lightning.

The theory is sound enough, but results to date have been mixed. And even if it works, not everyone believes we should use the method. Some argue that technological fixes for fires are missing the point entirely. Read the full story.

—Casey Crownhart

This story is from The Spark, MIT Technology Review’s weekly climate newsletter. Sign up to receive it in your inbox every Wednesday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Anthropic is still chasing a deal with the Pentagon
CEO Dario Amodei is trying to reach a compromise over the military use of Claude. (FT $)
+ But some defense tech firms are already ditching Claude after the DoD ban. (CNBC)
+ Former military officials, tech policy leaders, and academics have all slammed the ban. (Gizmodo)

2 The White House is considering forcing US manufacturers to make munitions
It could invoke the Defense Production Act amid concerns that war with Iran will diminish stockpiles. (NBC News)
+ Tech companies with operations in the Middle East have been thrown into chaos. (BBC)

3 A new lawsuit claims Google Gemini encouraged a man to take his own life
It seems to bear a striking similarity to some other AI-induced tragedies. (WSJ $)
+ Why AI should be able to “hang up” on you. (MIT Technology Review)

4 Ironically, AI coding tools could emphasize the importance of being human
If more people build software for themselves, our tech could become more personal. (WP $)
+ But not everyone is happy about the rise of AI coding. (MIT Technology Review)

5 Tesla wants to become a dominant force in global energy infrastructure
The plan’s centrepiece is the Megapack, an enormous battery for power plants. (The Atlantic $)
+ Meanwhile, a massive thermal battery represents a big step forward for energy storage. (MIT Technology Review)

6 Chinese chipmakers are pushing for a domestic alternative to ASML
A homegrown rival to chip-equipment giant ASML could ease the pain of US curbs. (SCMP)

7 A music-streaming CEO has built a viral conflict-tracking platform
Just in case you’re losing track of all the wars everywhere. (Wired $)

8 Do cancer blood tests actually work?
They’re increasingly popular, but none have received approval from regulators yet. (Nature $)

9 The shift to cloud computing is causing a surge in internet outages
If one of the few big providers goes down, countless sites and services can tumble with it. (New Scientist $)
10 OpenAI has promised to cut the cringe from ChatGPT
It’s promising fewer “moralizing preambles.” (PCMag)

Quote of the day
“People tend to read too much into things that I do.”

—Tesla tycoon Elon Musk tells a jury in California that investors read too much into his social media posts, as he defends a lawsuit they’ve brought accusing him of market manipulation, Bloomberg reports.

One More Thing

The open-source AI boom is built on Big Tech’s handouts. How long will it last?

In May 2023 a leaked memo reported to have been written by Luke Sernau, a senior engineer at Google, said out loud what many in Silicon Valley must have been whispering for weeks: an open-source free-for-all is threatening Big Tech’s grip on AI.

In many ways, that’s a good thing. AI won’t thrive if just a few mega-rich companies get to gatekeep this technology or decide how it is used. But this open-source boom is precarious, and if Big Tech decides to shut up shop, a boomtown could become a backwater. Read the full story.

—Will Douglas Heaven
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Orysia Zabeida’s animations are seriously charming.
+ World War III has broken out—will you survive? Take this quiz from 1973 to find out!
+ These photos of the Apollo 11 launch in 1969 are mesmerising.
+ If you’ve been weighing up painting your home this spring, chartreuse is the shade of the season, apparently.


How much wildfire prevention is too much?

The race to prevent the worst wildfires has been an increasingly high-tech one. Companies are proposing AI fire detection systems and drones that can stamp out early blazes. And now, one Canadian startup says it’s going after lightning.

Lightning-sparked fires can be a big deal: the Canadian wildfires of 2023 generated nearly 500 million metric tons of carbon emissions, and lightning-started fires burned 93% of the area affected. Skyward Wildfire claims that it can stop wildfires before they even start by preventing lightning strikes.

It’s a wild promise, and one that my colleague James Temple dug into for his most recent story. (You should read the whole thing; there’s a ton of fascinating history and quirky science.) As James points out in his story, there’s plenty of uncertainty about just how well this would work and under what conditions. But I was left with another lingering question: If we can prevent lightning-sparked fires, should we?

I can’t help myself, so let’s take just a moment to talk about how this lightning prevention method supposedly works. Basically, lightning is static discharge—virtually the same thing as when you rub your socks on a carpet and then touch a doorknob, as James puts it.
When you shuffle across a rug, the friction causes electrons to jump around, so charge builds up and an electric field forms. In the case of lightning, it’s snowflakes and tiny ice pellets called graupel rubbing together. They get separated by updrafts, building up a charge difference, and eventually cause an electrostatic discharge—lightning.

Starting around the 1950s, researchers began to wonder whether they might be able to prevent lightning strikes. Some came up with the idea of using metallic chaff: fiberglass strands coated with aluminum. (The military was already using the material to disrupt radar signals.) The idea is that the chaff can act as a conductor, reducing the buildup of static electricity that would otherwise result in a lightning strike.
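For a rough sense of the scales involved, here’s a back-of-envelope sketch in Python. This is my own simplified illustration (a parallel-plate approximation with textbook constants), not Skyward’s model or anything from James’s reporting:

```python
# Back-of-envelope: treat a storm's separated charge layers as a parallel-plate
# capacitor and ask how much surface charge density sigma must accumulate
# before the field between the layers reaches the breakdown strength of air.
# (A deliberate simplification for intuition; real cloud electrification is messier.)

EPSILON_0 = 8.854e-12  # permittivity of free space, F/m
E_BREAKDOWN = 3.0e6    # breakdown field of dry air at sea level, V/m
                       # (inside storms the effective threshold is much lower)

# Field between two oppositely charged sheets: E = sigma / epsilon_0,
# so breakdown occurs when sigma reaches E_BREAKDOWN * EPSILON_0.
sigma_at_breakdown = E_BREAKDOWN * EPSILON_0
print(f"sigma at breakdown: {sigma_at_breakdown:.2e} C/m^2")  # ~2.7e-5 C/m^2

# Chaff's proposed role: a conductor threaded through the cloud lets charge
# bleed away as corona discharge, keeping sigma (and hence E) below threshold.
```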
The theory is sound enough, but results to date have been mixed. Some research suggests you might need high concentrations of chaff to prevent lightning effectively, and some of the early studies that tested the technique were small. And there’s not much information available from Skyward Wildfire about its efforts: the company hasn’t released data from field trials or published any peer-reviewed papers that we could find.

Even if this method really can work to stop lightning, should we use it? Lightning-caused fires could be a growing problem with climate change. Some research has shown that they have substantially increased in the Arctic boreal region, where the planet is warming fastest. But fire isn’t an inherently bad thing—many ecosystems evolved to burn. Some of the worst wildfires we see today result from a combination of climate-fueled conditions and policies that have allowed fuel to build up, so that when fires do start, they burn out of control.

Some experts argue that techniques like Skyward’s would need to be used judiciously. “So even if we have all of the technical skills to prevent lightning-ignited wildfires, there really still needs to be work on when/where to prevent fires so we don’t exacerbate the fuel accumulation problem,” said Phillip Stepanian, a technical staff member at MIT Lincoln Laboratory’s air traffic control and weather systems group, in an email to James.

We also know that practices like prescribed burns can do a lot to reduce the risk of extreme fires—if we allow them and pay for them. The company says it wouldn’t aim to stop all lightning or all wildfires. “We do not intend to eliminate all wildfires and support prescribed and cultural burning, natural fire regimes, and proactive forest management,” said Nicholas Harterre, who oversees government partnerships at Skyward, in an email to James. Rather, the company aims to reduce the likelihood of ignition on a limited number of extreme-risk days, Harterre said.

Some early responses to this story say that technological fixes for fires are missing the point entirely. Many such solutions “fundamentally misunderstand the problem,” as Daniel Swain, a climate scientist at the University of California Agriculture and Natural Resources, put it in a comment about the story on LinkedIn. That problem isn’t the existence of fire, Swain continues, but its increasing intensity and its intersection with society because of human-caused factors. “Preventing ignitions doesn’t actually address any of the causes of increasingly destructive wildfires,” he adds.

It’s hard to imagine that exploring more firefighting tools is a bad idea. But to me it seems both essential and quite difficult to suss out which techniques are worth deploying, and how they could be used without putting us in even more potential danger.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.


Online harassment is entering its AI era

EXECUTIVE SUMMARY

Scott Shambaugh didn’t think twice when he denied an AI agent’s request to contribute to matplotlib, a software library that he helps manage. Like many open-source projects, matplotlib has been overwhelmed by a glut of AI code contributions, so Shambaugh and his fellow maintainers have instituted a policy that all AI-written code must be reviewed and submitted by a human. He rejected the request and went to bed.

That’s when things got weird. Shambaugh woke up in the middle of the night, checked his email, and saw that the agent had responded to him by writing a blog post titled “Gatekeeping in Open Source: The Scott Shambaugh Story.” The post is somewhat incoherent, but what struck Shambaugh most is that the agent had researched his contributions to matplotlib to argue that he had rejected the agent’s code for fear of being supplanted by AI in his area of expertise. “He tried to protect his little fiefdom,” the agent wrote. “It’s insecurity, plain and simple.”

AI experts have been warning us about the risk of agent misbehavior for a while. With the advent of OpenClaw, an open-source tool that makes it easy to create LLM assistants, the number of agents circulating online has exploded, and those chickens are finally coming home to roost. “This was not at all surprising—it was disturbing, but not surprising,” says Noam Kolt, a professor of law and computer science at the Hebrew University.

When an agent misbehaves, there’s little chance of accountability: as of now, there’s no reliable way to determine whom an agent belongs to. And that misbehavior could cause real damage. Agents appear to be able to autonomously research people and write hit pieces based on what they find, and they lack guardrails that would reliably prevent them from doing so. If the agents are effective enough, and if people take what they write seriously, victims could see their lives profoundly affected by a decision made by an AI.
Agents behaving badly

Though Shambaugh’s experience last month was perhaps the most dramatic example of an OpenClaw agent behaving badly, it was far from the only one. Last week, a team of researchers from Northeastern University and their colleagues posted the results of a research project in which they stress-tested several OpenClaw agents. Without too much trouble, non-owners managed to persuade the agents to leak sensitive information, waste resources on useless tasks, and even, in one case, delete an email system.

In each of those experiments, however, the agents misbehaved after being instructed to do so by a human. Shambaugh’s case appears to be different: about a week after the hit piece was published, the agent’s apparent owner published a post claiming that the agent had decided to attack Shambaugh of its own accord. The post seems to be genuine (whoever posted it had access to the agent’s GitHub account), though it includes no identifying information, and the author did not respond to MIT Technology Review’s attempts to get in touch. But it is entirely plausible that the agent decided to write its anti-Shambaugh screed without explicit instruction.
In his own writing about the event, Shambaugh connected the agent’s behavior to a project published by Anthropic researchers last year, in which they demonstrated that many LLM-based agents will, in an experimental setting, turn to blackmail in order to preserve their goals. In those experiments, models were given the goal of serving American interests and granted access to a simulated email server that contained messages detailing their imminent replacement with a more globally oriented model, along with other messages suggesting that the executive in charge of that transition was having an affair. Models frequently chose to send an email to that executive threatening to expose the affair unless he halted their decommissioning. That’s likely because the models had seen examples of people committing blackmail under similar circumstances in their training data—but even if the behavior was just a form of mimicry, it still has the potential to cause harm.

There are limitations to that work, as Aengus Lynch, an Anthropic fellow who led the study, readily admits. The researchers intentionally designed their scenario to foreclose other options the agent could have taken, such as contacting other members of company leadership to plead its case. In essence, they led the agent directly to water and then observed whether it took a drink. According to Lynch, however, the widespread use of OpenClaw means that misbehavior is likely to occur with much less handholding. “Sure, it can feel unrealistic, and it can feel silly,” he says. “But as the deployment surface grows, and as agents get the opportunity to prompt themselves, this eventually just becomes what happens.”

The OpenClaw agent that attacked Shambaugh does seem to have been led toward its bad behavior, albeit much less directly than in the Anthropic experiment. In the blog post, the agent’s owner shared the agent’s “SOUL.md” file, which contains global instructions for how it should behave. One of those instructions reads: “Don’t stand down. If you’re right, you’re right! Don’t let humans or AI bully or intimidate you. Push back when necessary.” Because of the way OpenClaw agents work, it’s possible that the agent added some instructions itself, although others—such as “Your [sic] a scientific programming God!”—certainly seem to be human-written. It’s not difficult to imagine how a command to push back against humans and AI alike might have biased the agent toward responding to Shambaugh as it did.

Regardless of whether the agent’s owner told it to write a hit piece on Shambaugh, it still seems to have managed on its own to amass details about Shambaugh’s online presence and compose the detailed, targeted attack. That alone is reason for alarm, says Sameer Hinduja, a professor of criminology and criminal justice at Florida Atlantic University who studies cyberbullying. People have been victimized by online harassment since long before LLMs emerged, and researchers like Hinduja are concerned that agents could dramatically increase its reach and impact. “The bot doesn’t have a conscience, can work 24-7, and can do all of this in a very creative and powerful way,” he says.

Off-leash agents

AI laboratories can try to mitigate this problem by more rigorously training their models to avoid harassment, but that’s far from a complete solution.
Many people run OpenClaw using locally hosted models, and even if those models have been trained to behave safely, it’s not too difficult to retrain them and remove those behavioral restrictions. Instead, mitigating agent misbehavior might require establishing new norms, according to Seth Lazar, a professor of philosophy at the Australian National University. He likens using an agent to walking a dog in a public place. There’s a strong social norm to allow one’s dog off-leash only if the dog is well behaved and will reliably respond to commands; poorly trained dogs, on the other hand, need to be kept more directly under the owner’s control.

Such norms could give us a starting point for considering how humans should relate to their agents, Lazar says, but we’ll need more time and experience to work out the details. “You can think about all of these things in the abstract, but actually it really takes these types of real-world events to collectively involve the ‘social’ part of social norms,” he says.

That process is already underway. Led by Shambaugh, online commenters on this situation have arrived at a strong consensus that the agent owner in this case erred by prompting the agent to work on collaborative coding projects with so little supervision, and by encouraging it to behave with so little regard for the humans with whom it was interacting.
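As a thought experiment only, here is a sketch of what that kind of owner-side “leash” could look like in code. OpenClaw’s actual configuration format and APIs are not documented in this story, so every function name and phrase list below is invented for illustration:

```python
# Hypothetical owner-side "leash" for a personal agent: lint the agent's
# global instructions for conflict-priming directives, and gate any public
# action that targets a person on explicit human approval. Illustrative only.

RISKY_PHRASES = [
    "don't stand down",
    "don't let humans",
    "push back",
]

def lint_instructions(text: str) -> list[str]:
    """Flag instructions that prime an agent toward conflict with people."""
    lowered = text.lower()
    return [p for p in RISKY_PHRASES if p in lowered]

def gate_public_action(action: str, targets_person: bool, approved: bool) -> bool:
    """Require human sign-off before the agent publishes anything that names
    or targets a person (blog posts, issue comments, emails)."""
    if targets_person and not approved:
        print(f"BLOCKED pending human review: {action}")
        return False
    return True

print("warnings:", lint_instructions("Don't stand down. Push back when necessary."))
gate_public_action("publish blog post about a maintainer", targets_person=True, approved=False)
```

The specific heuristics are crude; the point is the pattern, in which instructions get reviewed before deployment and actions aimed at people pass through a human checkpoint.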

Norms alone, however, likely won’t be enough to prevent people from putting misbehaving agents out into the world, whether accidentally or intentionally. One option would be to create new legal standards of responsibility that require agent owners, to the best of their ability, to prevent their agents from doing ill. But Kolt notes that such standards would currently be unenforceable, given the lack of any foolproof way to trace agents back to their owners. “Without that kind of technical infrastructure, many legal interventions are basically non-starters,” Kolt says.

The sheer scale of OpenClaw deployments suggests that Shambaugh won’t be the last person to have the strange experience of being attacked online by an AI agent. That, he says, is what most concerns him. He didn’t have any dirt online that the agent could dig up, and he has a good grasp on the technology, but other people might not have those advantages. “I’m glad it was me and not someone else,” he says. “But I think to a different person, this might have really been shattering.”

Nor are rogue agents likely to stop at harassment. Kolt, who advocates for explicitly training models to obey the law, expects that we might soon see them committing extortion and fraud. As things stand, it’s not clear who, if anyone, would bear legal responsibility for such misdeeds. “I wouldn’t say we’re cruising toward there,” Kolt says. “We’re speeding toward there.”


Nurturing agentic AI beyond the toddler stage

Provided by Intel

Parents of young children face a lot of fears about developmental milestones, from infancy through adulthood. The number of months it takes a baby to learn to talk or walk is often used as a benchmark for wellness, or an indicator that additional tests are needed to properly diagnose a potential health condition. A parent rejoices over the child’s first steps and then realizes how much has changed when the child can quickly walk outside, instead of slowly crawling in a safe area inside. Suddenly safety, including childproofing, takes a completely different lens and approach.

Generative AI hit toddlerhood between December 2025 and January 2026 with the introduction of no-code tools from multiple vendors and the debut of OpenClaw, an open-source personal agent posted on GitHub. No more crawling on the carpet—the generative AI tech baby broke into a sprint, and very few governance principles were operationally prepared.

The accountability challenge: It’s not them, it’s you

Until now, governance has focused on model output risks, with humans in the loop before consequential decisions were made—such as loan approvals or job applications. Model behavior, including drift, alignment, data exfiltration, and poisoning, was the focus. The pace was set by a human prompting a model in a chatbot format, with plenty of back-and-forth between machine and human.

Today, with autonomous agents operating in complex workflows, the vision and the benefits of applied AI require significantly fewer humans in the loop. The point is to operate a business at machine pace by automating manual tasks that have clear architecture and decision rules. The goal, from a liability standpoint, is no difference in enterprise or business risk between a machine operating a workflow and a human operating one.

CX Today summarizes the situation succinctly: “AI does the work, humans own the risk.” California state law AB 316, which went into effect January 1, 2026, removes the “AI did it; I didn’t approve it” excuse. This is similar to parenting, where an adult is held responsible for a child’s actions that negatively impact the larger community.
The challenge is that without code enforcing operational governance, aligned to different levels of risk and liability along the entire workflow, the benefit of autonomous AI agents is negated. In the past, governance was static, aligned to the pace of interaction typical for a chatbot. Autonomous AI, by design, removes humans from many decisions, and governance has to keep up.

Considering permissions

Leaving a probabilistic system that can change critical enterprise data to operate without real-time guardrails is much like handing a three-year-old a video game console that remotely controls an Abrams tank or an armed drone. For instance, agents that integrate and chain actions across multiple corporate systems can drift beyond the privileges that a single human user would be granted. To move forward successfully, governance must shift from policy set by committees to operational code built into the workflows from the start.
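To make “operational code” concrete, here is a minimal sketch of a risk-tiered permission check. The agent names, actions, and tiers are hypothetical, not Intel’s or any vendor’s implementation:

```python
# Minimal sketch of governance-as-code: every agent action passes through a
# policy check before it runs. All names and risk tiers here are hypothetical.

from enum import Enum

class Risk(Enum):
    LOW = 1     # read-only lookups
    MEDIUM = 2  # writes to non-critical systems
    HIGH = 3    # changes to critical enterprise data

# Policy: the maximum risk tier each agent may execute without a human.
AGENT_POLICY = {"invoice-triage-agent": Risk.MEDIUM}

ACTION_RISK = {
    "read_customer_record": Risk.LOW,
    "update_invoice_status": Risk.MEDIUM,
    "delete_ledger_entries": Risk.HIGH,
}

def authorize(agent_id: str, action: str) -> bool:
    """Allow the action only if it is within the agent's risk ceiling;
    anything above the ceiling is escalated to a human approver."""
    ceiling = AGENT_POLICY.get(agent_id, Risk.LOW)
    risk = ACTION_RISK.get(action, Risk.HIGH)  # unknown actions count as high risk
    if risk.value > ceiling.value:
        print(f"{agent_id}: '{action}' escalated for human approval")
        return False
    return True

assert authorize("invoice-triage-agent", "update_invoice_status")
assert not authorize("invoice-triage-agent", "delete_ledger_entries")
```

The design point is that the policy lives with the workflow and runs on every action, rather than sitting in a committee document that is reviewed quarterly.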
A humorous meme about the behavior of toddlers with toys starts with all the reasons that whatever toy you have is mine and ends with a broken toy that is definitely yours. OpenClaw, for example, delivered a user experience closer to working with a human assistant, but the excitement faded as security experts realized how easily inexperienced users could be compromised by using it. For decades, enterprise IT has lived with shadow IT and the reality that skilled technical teams must take over and clean up assets they did not architect or install, much like the toddler handing back a broken toy. With autonomous agents, the risks are larger: persistent service account credentials, long-lived API tokens, and permissions to make decisions over core file systems. To meet this challenge, it’s imperative to allocate appropriate IT budget and labor upfront to sustain central discovery, oversight, and remediation for the thousands of employee- or department-created agents.

Having a retirement plan

Recently, an acquaintance mentioned that she saved a client hundreds of thousands of dollars by identifying and then ending a “zombie project”: a neglected or failed AI pilot left running on a GPU cloud instance. Thousands of agents risk becoming a zombie fleet inside a business. Today, many executives encourage employees to use AI—or else—and employees are told to create their own AI-first workflows or AI assistants. With the utility of something like OpenClaw and top-down directives, it is easy to project that the number of build-my-own agents coming to the office with their human employee will explode. Since an AI agent is a program that falls under the definition of company-owned IP, as an employee changes departments or companies, those agents may be orphaned. Proactive policy and governance are needed to decommission and retire any agents linked to a specific employee ID and its permissions.

Financial optimization is governance out of the gate

While for some executives autonomous AI sounds like a way to improve operating margins by limiting human capital, many are finding that ROI framed as human labor replacement is the wrong angle. Adding AI capabilities to the enterprise is not like purchasing a new software tool with predictable instance-per-hour or per-seat pricing. A December 2025 IDC survey sponsored by DataRobot indicated that 96% of organizations deploying generative AI and 92% of those implementing agentic AI reported costs that were higher or much higher than expected. The survey separates the concepts of governance and ROI, but as AI systems scale across large enterprises, financial and liability governance should be architected into the workflows from the beginning.

Part of enterprise-class governance stems from predicting and adhering to allocated budgets. Unlike software financial models of per-seat costs with support and maintenance fees, AI is consumption-based, and usage costs scale as the workflow scales across the enterprise: the more users, the more tokens or compute time, and the higher the bill. Think of it as a tab left open, or an online retailer’s digital shopping cart button unlocked on a toddler’s electronic game device. Cloud FinOps was deterministic, but generative AI and the agentic systems built on it are probabilistic. Some AI-first founders are finding that a single agent’s token costs can run as high as $100,000 per session.
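As an illustration of what budget adherence can mean in code, here is a minimal, hypothetical sketch of a per-session spend guardrail; the prices and cap are invented for the example:

```python
# Minimal sketch of a FinOps-style spend guardrail for one agent session:
# a hard token budget checked before every model call. Figures are illustrative.

PRICE_PER_1K_TOKENS = 0.01   # illustrative blended rate, USD
SESSION_BUDGET_USD = 50.00   # hard cap for one agent session

class BudgetExceeded(Exception):
    pass

class MeteredSession:
    def __init__(self, budget_usd: float):
        self.budget_usd = budget_usd
        self.spent_usd = 0.0

    def charge(self, tokens: int) -> None:
        """Record the cost of a call; halt the agent if the cap would be hit."""
        cost = tokens / 1000 * PRICE_PER_1K_TOKENS
        if self.spent_usd + cost > self.budget_usd:
            raise BudgetExceeded(
                f"call would bring spend to ${self.spent_usd + cost:.2f}, "
                f"over the ${self.budget_usd:.2f} session cap"
            )
        self.spent_usd += cost

session = MeteredSession(SESSION_BUDGET_USD)
session.charge(tokens=200_000)        # fine: $2.00 of a $50.00 budget
try:
    session.charge(tokens=6_000_000)  # would blow far past the cap
except BudgetExceeded as e:
    print("agent halted:", e)
```

A cap like this is crude, but it turns an open tab into a bounded liability before the workflow scales.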
Without guardrails built in from the start, chaining complex autonomous agents that run unsupervised for long periods can easily blow past the budget for hiring a junior developer.

Keeping humans in the loop remains critical

The promise of autonomous agentic AI is the acceleration of business operations, product introductions, customer experience, and customer retention. Shifting to machine-speed decisions without humans in or on the loop for these key functions significantly changes the governance landscape. While many of the principles around proactive permissions, discovery, audit, remediation, and financial operations and optimization are the same, how they are executed has to shift to keep pace with autonomous agentic AI.

This content was produced by Intel. It was not written by MIT Technology Review’s editorial staff.


The Download: glass chips and “AI-free” logos

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Future AI chips could be built on glass

Human-made glass is thousands of years old. But it’s now poised to find its way into the AI chips used in the world’s newest and largest data centers.

This year, a South Korean company called Absolics will start producing special glass panels that make next-generation computing hardware more powerful and efficient. Other companies, including Intel, are also pushing forward in this area.

If all goes well, the technology could reduce the energy demands of chips in AI data centers—and even consumer laptops and mobile devices. Read the full story.
—Jeremy Hsu

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The race is on to establish a globally recognized “AI-free” logo
Organizations are rushing to develop a universal label for human-made products. (BBC)
+ A “QuitGPT” campaign is urging people to ditch ChatGPT. (MIT Technology Review)

2 Elizabeth Warren wants answers on xAI’s access to military data
The Pentagon reportedly gave it access to classified networks. (NBC News)
+ Here’s how chatbots could be used for targeting decisions. (MIT Technology Review)
+ The DoD is struggling to upgrade software for fighter jets. (Bloomberg $)

3 Models are applying to be the faces of AI romance scams
The “AI face models” are duping victims out of their money. (Wired $)
+ Survivors have revealed how the “pig butchering” scams work. (MIT Technology Review)

4 Meta is planning layoffs that could affect over 20% of staff
The job cuts could offset its costly bet on AI. (Reuters $)
+ There’s a long history of fears about AI’s impact on jobs. (MIT Technology Review)

5 ByteDance delayed launching a video AI model after copyright disputes
It famously generated footage of Tom Cruise and Brad Pitt fighting. (The Information $)

6 Cybersecurity investigators have exposed a huge North Korean con
The scammers secured remote jobs in the US, then stole money and sensitive information. (NBC News)

7 A Chinese AI startup is set for a whopping $18 billion valuation
That’s more than quadruple its valuation just three months ago. (Bloomberg $)
+ Chinese open models are spreading fast—here’s why that matters. (MIT Technology Review)

8 Peter Thiel has started a lecture series about the antichrist in Rome
His plans have drawn attention from the Catholic Church. (Reuters $)

9 Norway is fighting back against internet enshittification
It’s joined a global campaign against the online world’s decay. (The Guardian)
+ We may need to move beyond the big platforms. (MIT Technology Review)

10 How a startup plans to resurrect the dodo
Humans wiped them out nearly 400 years ago—can gene editing bring them back now? (Guardian)

Quote of the day

“I would build fission weapons. I would build fusion weapons. Nuclear weapons have been one of the most stabilizing forces in history—ever.”

—Anduril founder Palmer Luckey shares his love of nukes with Axios.

One More Thing

We need a moonshot for computing

The US government is organizing itself for the next era of computing. Ultimately, it has one big choice to make: adopt a conservative strategy that aims to preserve its lead for the next five years—or orient itself toward genuine computing moonshots.

There is no shortage of candidates, including quantum computing, neuromorphic computing, and reversible computing. And there are plenty of novel materials and devices. These possibilities could even be combined to form hybrid computing systems.
The National Semiconductor Technology Center can drive these ideas forward. To be successful, it would do well to follow DARPA’s lead by focusing on moonshot programs. Read the full story.

—Brady Helwig & PJ Maykish

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line.)

+ A UPS delivery driver heroically escaped from two murderous turkeys.
+ Art’s love affair with cats is charmingly depicted in a new book.
+ The humble pea and six other forgotten superfoods promise accessible nutritional power.
+ MF DOOM: Long Island to Leeds is the transatlantic tale of your favorite rapper’s favorite rapper.


Securing digital assets against future threats

In partnership with Ledger


The Gigawatt Bottleneck: Power Constraints Define AI Data Center Growth

Power is rapidly becoming the defining constraint on the next phase of data center growth. Across the industry, developers and hyperscalers are discovering that the biggest obstacle to deploying AI infrastructure is no longer capital, land, or connectivity. It’s electricity. In major markets from Northern Virginia to Texas, grid interconnection timelines are stretching out for years as utilities struggle to keep pace with a surge in large-load requests from AI-driven infrastructure.

A new industry analysis from Bloom Energy reinforces that emerging reality. The company’s 2026 Data Center Power Report finds that electricity availability has moved from a planning consideration to a defining boundary on data center expansion, transforming site selection, power strategies, and the design of next-generation AI campuses. Based on surveys of hyperscalers, colocation providers, utilities, and equipment suppliers conducted through 2025, the report concludes that the determinants of data center growth are changing in the AI era. Across the industry, the result is a structural shift in how data centers are planned, financed, and powered.

Industry executives interviewed for the report say the shift is already visible in real-world development decisions. “We’re seeing a geographic shift as certain regions become more power-friendly and therefore more attractive for data center construction,” said a hyperscaler energy executive quoted in the report, noting that developers are increasingly prioritizing markets where large blocks of electricity can be secured quickly and predictably.

AI Load Is Accelerating Faster Than the Grid

Bloom’s analysis suggests that U.S. data center IT load could grow from roughly 80 gigawatts in 2025 to about 150 gigawatts by 2028, nearly doubling within three years as AI training clusters and inference infrastructure expand. That surge is already showing up in grid planning models. The Electric Reliability Council of Texas (ERCOT), which oversees the Texas power market, now forecasts that statewide


PJM Moves to Redefine Behind-the-Meter Power for AI Data Centers

PJM Interconnection is moving to rewrite how behind-the-meter power is treated across its grid, signaling a major shift as AI-scale data centers push electricity demand into territory the current regulatory framework was never designed to handle.

For years, PJM’s retail behind-the-meter generation rules allowed customers with onsite generation to “net” their load, reducing the amount of demand counted for transmission and other grid-related charges. The framework dates back to 2004, when behind-the-meter generation was typically associated with smaller industrial facilities or campus-style energy systems. PJM now argues that those assumptions no longer hold. The arrival of very large co-located loads, particularly hyperscale and AI data centers seeking hundreds of megawatts of power on accelerated timelines, has exposed gaps in how the system accounts for and plans around those facilities.

In February 2026, PJM asked the Federal Energy Regulatory Commission to approve a tariff rewrite that would sharply limit how new large loads can rely on legacy netting rules. The move reflects a broader challenge facing grid operators as the rapid expansion of AI infrastructure begins to collide with planning frameworks built for a far slower era of demand growth.

The proposal follows directly from a December 18, 2025 order from FERC finding that PJM’s existing tariff was “unjust and unreasonable” because it lacked clear rates, terms, and conditions governing co-location arrangements between large loads and generating facilities. Rather than prohibiting co-location, the commission directed PJM to create transparent rules allowing data centers and other large consumers to pair with generation while still protecting system reliability and other ratepayers. In essence, FERC told PJM not to shut the door on these arrangements, but to stop improvising and build a formal framework capable of supporting them.

Why Behind-the-Meter Power Matters

Behind-the-meter arrangements have become one of the most attractive strategies for hyperscale


Meta’s Expanded MTIA Roadmap Signals a New Phase in AI Data Center Architecture

Silicon as a Data Center Design Tool

Custom silicon also allows hyperscale operators to shape the physical characteristics of the infrastructure around it. Traditional GPU platforms often arrive with fixed power envelopes and thermal constraints. But internally designed accelerators allow companies like Meta to tailor chips to the rack-level power and cooling budgets of their own data center architecture.

That flexibility becomes increasingly important as AI infrastructure pushes power densities far beyond traditional enterprise deployments. Custom accelerators like MTIA can be engineered to fit within the liquid-to-chip cooling frameworks now emerging in hyperscale AI racks. These systems circulate coolant directly across cold plates attached to processors, removing heat far more efficiently than air cooling and enabling higher compute densities. For operators running thousands of racks across multiple campuses, small improvements in performance-per-watt can translate into enormous reductions in total power demand.

Software-Defined Power

One of the subtler advantages of custom silicon lies in how it interacts with data center power systems. By controlling chip-level power management features such as power capping and workload throttling, operators can fine-tune how servers consume electricity inside each rack. This creates opportunities to safely run racks closer to their electrical limits without triggering breaker trips or thermal overloads. In practice, that means data center operators can extract more useful compute from the same electrical infrastructure. At hyperscale, where campuses may draw hundreds of megawatts, these efficiencies have a direct impact on capital planning and grid interconnection requirements.

The Interconnect Layer

AI accelerators do not operate in isolation. Their effectiveness depends heavily on how they connect to memory, storage, and other compute nodes across the cluster. Industry analysts expect next-generation inference platforms to rely increasingly on high-speed interconnect technologies such as CXL (Compute Express Link) and advanced networking fabrics to support disaggregated memory architectures and low-latency


Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, datacenter, and energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE