Your Gateway to Power, Energy, Datacenters, Bitcoin and AI
Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.
Discover What Matters Most to You

AI:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:
Lorem Ipsum is simply dummy text of the printing and typesetting industry.
Featured Articles

Nvidia partners with optics technology vendors Lumentum and Coherent to enhance AI infrastructure
Jackson added, “It also looks like the bet will be on photon transfer optics. Photonics-based computers have been in development as prototypes for more than a decade, and seek to address the physical limitations of copper as an electrical conduit.” By relying on the transfer of light through glass, he said, “this architectural approach is more energy efficient and promises to be much faster than current chips. If Nvidia can mass-manufacture a next-generation GPU that integrates photonics right into its silicon, then they can solve a couple of big problems for AI developers: power consumption and speed.”

Sanchit Vir Gogia, chief analyst at Greyhound Research, said that the dual $2 billion investment “sends a signal about AI infrastructure bottlenecks: this is the moment where the industry quietly admits that AI scaling is no longer primarily a chip story. It is a communication story.” For the last few years, he said, “the visible constraint was straightforward. Enterprises could not get enough GPUs. Hyperscalers reserved allocation. Vendors rationed supply. That was the first choke point. But once accelerators are deployed at scale, the bottleneck moves. It does not disappear.”

Gogia added that in today’s AI clusters, “each accelerator depends on dozens of high-speed links to talk to its neighbours. Multiply that across the rack and you end up with thousands of interconnects operating continuously. Every one of those links draws power. Every one introduces latency and signal integrity considerations. Every one carries a probability of failure.”

What Nvidia is signalling is that the next bottleneck is the fabric itself, he pointed out. “You can add more GPUs, but if the network layer cannot scale proportionally, utilisation falls and economics deteriorate,” he said. “The company is moving upstream to ensure the arteries of AI infrastructure do not become the new point of scarcity.”
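A rough sketch of the scaling Gogia describes. The numbers below (rack size, links per accelerator, per-link failure probability) are hypothetical assumptions for illustration, not figures from the article; the point is only that link counts and aggregate failure risk compound quickly.

```python
# Hypothetical illustration: interconnect count and aggregate reliability
# in an AI rack. All input numbers are assumptions, not reported figures.
gpus_per_rack = 72          # assumed accelerators per rack
links_per_gpu = 18          # "dozens of high-speed links" per accelerator
link_failure_prob = 1e-4    # assumed independent per-link failure probability

total_links = gpus_per_rack * links_per_gpu
# Probability that at least one link in the rack fails, assuming independence:
p_any_failure = 1 - (1 - link_failure_prob) ** total_links

print(total_links)              # over a thousand links in a single rack
print(round(p_any_failure, 3))  # ~12% chance of at least one link failure
```

Even with a very reliable individual link, the fabric as a whole becomes the dominant risk once link counts reach the thousands, which is the dynamic Gogia points to.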

U.S. Department of Energy Brings Together Vertical Gas Corridor Countries to Strengthen Energy Coordination
WASHINGTON, DC — The U.S. Department of Energy (DOE) today hosted officials from Bulgaria, Greece, Romania, Moldova, Ukraine, and the European Commission to advance work on the Vertical Gas Corridor. The meeting built on progress made at the Partnership for Transatlantic Energy Cooperation Summit in Athens in November 2025 and the Transatlantic Gas Security Summit in Washington, D.C. in February 2026.

“By partnering with the countries of the Vertical Corridor, we are opening major opportunities to expand U.S. LNG exports to Central and Eastern Europe,” said Joshua Volz. “This effort is so important to our President and Secretary because it aligns with our nation’s strengths and commitment to supporting friends and allies across Europe.”

The technical discussion brought together Energy Ministries, national regulators, and Transmission System Operators (TSOs) to address key objectives essential to unlocking the Vertical Gas Corridor’s capacity to enable the northbound flow of regasified U.S. LNG from Greece and expand access to European markets:
- Resolving regulatory friction points that impact long-term planning
- Harmonizing tariffs to achieve cost competitiveness
- Reviewing strategic infrastructure investments necessary to enable full corridor capacity

Today’s meeting reinforces DOE’s commitment to strengthening U.S. energy leadership and helping allies secure reliable alternatives to adversarial energy suppliers. By reducing barriers to U.S. LNG exports, DOE continues to support America’s role as a leading global energy provider.

Intel aims advanced Xeon 6+ at AI edge computing
At the Mobile World Congress show in Barcelona, Intel showcased its most advanced processor yet, the Xeon 6+ processor, codenamed “Clearwater Forest.” Technically, it is one of Intel’s most complex chiplet designs, with a package that combines a total of 12 compute chiplets manufactured on a mix of the Intel 18A, Intel 7, and Intel 3 manufacturing processes. Clearwater Forest supports the existing Xeon server platform socket, 12 memory channels, 96 PCIe 5.0 lanes, and 64 CXL 2.0 lanes, and it supports memory speeds up to DDR5-8000. The chip contains 288 efficiency cores (E-cores), with a high-bandwidth on-chip fabric to link two chips in a two-socket design.
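The memory configuration above implies substantial theoretical bandwidth. As a back-of-envelope sketch using standard DDR arithmetic (a 64-bit data bus per channel is the usual DDR5 assumption; this is not a figure quoted by Intel):

```python
# Theoretical peak memory bandwidth for 12 channels of DDR5-8000.
# Standard DDR arithmetic; assumes a 64-bit (8-byte) bus per channel.
channels = 12
transfers_per_sec = 8000e6   # DDR5-8000 = 8000 MT/s
bytes_per_transfer = 8       # 64-bit data bus per channel

peak_gb_s = channels * transfers_per_sec * bytes_per_transfer / 1e9
print(peak_gb_s)             # theoretical peak in GB/s
```

That works out to 768 GB/s of theoretical peak bandwidth across the socket; sustained real-world throughput would of course be lower.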

OpenAI’s “compromise” with the Pentagon is what Anthropic feared
On February 28, OpenAI announced it had reached a deal that will allow the US military to use its technologies in classified settings. CEO Sam Altman said the negotiations, which the company began pursuing only after the Pentagon’s public reprimand of Anthropic, were “definitely rushed.” In its announcements, OpenAI took great pains to say that it had not caved to allow the Pentagon to do whatever it wanted with its technology. The company published a blog post explaining that its agreement protected against use for autonomous weapons and mass domestic surveillance, and Altman said the company did not simply accept the same terms that Anthropic refused. You could read this to say that OpenAI won both the contract and the moral high ground, but reading between the lines and the legalese makes something else clear: Anthropic pursued a moral approach that won it many supporters but failed, while OpenAI pursued a pragmatic and legal approach that is ultimately softer on the Pentagon. It’s not yet clear if OpenAI can build in the safety precautions it promises as the military rushes out a politicized AI strategy during strikes on Iran, or if the deal will be seen as good enough by employees who wanted the company to take a harder line. Walking that tightrope will be tricky. (OpenAI did not immediately respond to requests for additional information about its agreement.)
But the devil is also in the details. The reason OpenAI was able to make a deal when Anthropic could not was less about boundaries, Altman said, than about approach. “Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with,” he wrote. OpenAI says one basis for its willingness to work with the Pentagon is simply an assumption that the government won’t break the law. The company, which has shared a limited excerpt of its contract, cites a number of laws and policies related to autonomous weapons and surveillance. They are as specific as a 2023 directive from the Pentagon on autonomous weapons (which does not prohibit them but issues guidelines for their design and testing) and as broad as the Fourth Amendment, which has supported protections for Americans against mass surveillance.
However, the published excerpt “does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use,” wrote Jessica Tillipman, associate dean for government procurement law studies at George Washington University’s law school. It simply states that the Pentagon can’t use OpenAI’s tech to break any of those laws and policies as they’re stated today. The whole reason Anthropic earned so many supporters in its fight—including some of OpenAI’s own employees—is that they don’t believe these rules are good enough to prevent the creation of AI-enabled autonomous weapons or mass surveillance. And an assumption that federal agencies won’t break the law is little assurance to anyone who remembers that the surveillance practices exposed by Edward Snowden had been deemed legal by internal agencies and were ruled unlawful only after drawn-out battles (not to mention the many surveillance tactics allowed under current law that AI could expand). On this front, we’ve essentially ended up back where we started: allowing the Pentagon to use its AI for any lawful use. OpenAI could say, as its head of national security partnerships wrote yesterday, that if you believe the government won’t follow the law, then you should also not be confident it would honor the red lines that Anthropic was proposing. But that’s not an argument against setting them. Imperfect enforcement doesn’t make constraints meaningless, and contract terms still shape behavior, oversight, and political consequences. OpenAI claims a second line of defense. The company says it maintains control over the safety rules governing its models and will not give the military a version of its AI stripped of those safety controls. “We can embed our red lines—no mass surveillance and no directing weapons systems without human involvement—directly into model behavior,” wrote Boaz Barak, an OpenAI employee Altman deputized to speak on the issue, on X.
But the company doesn’t specify how its safety rules for the military differ from its rules for normal users. Enforcement is also never perfect, and it is especially unlikely to be when OpenAI is rolling out these protections in a classified setting for the first time and is expected to do so in just six months. There’s another question beneath all this: Should it be down to tech companies to prohibit things that are legal but that they find morally objectionable? The government certainly viewed Anthropic’s willingness to play this role as unacceptable. On Friday evening, eight hours before the US launched strikes in Tehran, Defense Secretary Pete Hegseth issued harsh remarks on X. “Anthropic delivered a master class in arrogance and betrayal,” he wrote, and echoed President Trump’s order for the government to cease working with the AI company after Anthropic sought to keep its model Claude from being used for autonomous weapons or mass domestic surveillance. “The Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose,” Hegseth wrote. But unless OpenAI’s full contract reveals more, it’s hard not to see the company as sitting on an ideological seesaw, promising that it does have leverage it will proudly use to do what it sees as the right thing while deferring to the law as the main backstop for what the Pentagon can do with its tech. There are three things to be watching here. One is whether this position will be good enough for OpenAI’s most critical employees. With AI companies spending so heavily on talent, it’s possible that some at OpenAI see in Altman’s justification an unforgivable compromise.
Second, there is the scorched-earth campaign that Hegseth has promised to wage against Anthropic. Going far beyond simply canceling the government’s contract with the company, he announced that it would be classified as a supply chain risk, and that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” There is significant debate about whether this death blow is legally possible, and Anthropic has said it will sue if the threat is pursued. OpenAI has also come out against the move. Lastly, how will the Pentagon swap out Claude—the only AI model it actively uses in classified operations, including some in Venezuela—while it escalates strikes against Iran? Hegseth granted the agency six months to do so, during which the military will phase in OpenAI’s models as well as those from Elon Musk’s xAI. But Claude was reportedly used in the strikes on Iran hours after the ban was issued, suggesting that a phase-out will be anything but simple. Even if the months-long feud between Anthropic and the Pentagon is over (which I doubt it is), we are now seeing the Pentagon’s AI acceleration plan put pressure on companies to relinquish lines in the sand they had once drawn, with new tensions in the Middle East as the primary testing ground. If you have information to share about how this is unfolding, reach out to me via Signal (username: jamesodonnell.22).

Nvidia partners with telecom providers for open 6G networks
Nvidia has partnered with a variety of global telecom providers for a commitment to build 6G on open and secure artificial intelligence-native platforms, bringing software-defined networking to telecommunications. Announced at the Mobile World Congress conference, the list of Nvidia partners is a who’s who of telecom — Booz Allen, BT Group, Cisco, Deutsche Telekom, Ericsson, MITRE, Nokia, OCUDU Ecosystem Foundation, ODC, SK Telecom, SoftBank Corp. and T-Mobile. Initial trials for 6G are expected to start as early as 2028, and the new network is expected to launch commercially around 2030. “Unlike 5G, 6G is being born in the AI era, and the networks of today simply aren’t ready for the use cases of tomorrow,” said Ronnie Vasishta, senior vice president of telecommunications at Nvidia, on a conference call with the tech media. “Remember, AI did not exist when 5G was being defined. So using AI to even improve the networks wasn’t possible in that definitional phase.”

Why network bandwidth matters a lot
One interesting point about VPNs is raised by fully a third of capacity-hungry enterprises: SD-WAN is the cheapest and easiest way to increase capacity to remote sites. Yes, service reliability of broadband Internet access for these sites is highly variable, so enterprises say they need to pilot test in a target area to determine whether even business-broadband Internet is reliable enough, but if it is, high capacity is both available and cheap.

Clearly data center networking is taking the prime position in enterprise network planning, even without any contribution from AI. Will AI contribute? Enterprises generally believe that self-hosted AI will indeed require more network bandwidth, but again think this will be largely confined to the data center. AI, they say, has a broader and less predictable appetite for data, and business applications involving data that’s subject to governance, or that’s already data-center hosted, are likely to be hosted proximate to that data. That was true for traditional software, and it’s likely just as true for AI. Yes, but… today, three times as many enterprises say they’d cite AI needs simply to bolster the justification for capacity expansion as say they currently have those needs. AI hype has entered, and perhaps even dominates, capital network project justifications.

These capacity trends don’t impact enterprises alone; they also reshape the equipment space. Only 9% of enterprises say they have invested in white-box devices to build capacity and data center configuration flexibility, but the number that say they would evaluate them in 2026 is double that. This may be what’s behind Cisco’s decision to push its new G300 chip. AI’s role in capital project justifications may also be why Cisco positions the G300 so aggressively as an AI facilitator. Make no mistake, though; this is really all about capacity and QoE, even for AI.

Equinor lets EPC contract for Gullfaks field
Equinor Energy AS has let an engineering, procurement, and construction (EPC) contract to SLB to upgrade the subsea compression system for Gullfaks field in the Norwegian North Sea. Under the contract, SLB OneSubsea will deliver two next-generation compressor modules to replace the units originally supplied in 2015 as part of the world’s first multiphase subsea compression system. The upgraded modules will increase differential pressure and flow capacity, enhancing recovery and extending field life, SLB said, while installation within the existing subsea infrastructure will minimize downtime and reduce overall campaign costs, the company continued. Gullfaks field lies in block 34/10 in the northern part of the North Sea.
Three large production platforms with concrete substructures make up the development solution for the main field.

Oxy cutting oil-and-gas capex by $300 million, eyes 1% production growth
Occidental Petroleum Corp., Houston, will spend $5.5-5.9 billion on capital projects this year, an 8% drop from 2025 and $800 million less than executives’ early forecast late last year, as the company continues to emphasize efficiency gains. Spending on oil-and-gas operations will be $300 million less than last year. Sunil Mathew, chief financial officer, late last week told investors and analysts that Occidental’s capital spending budget for 2026 (adjusted for the recently completed divestiture of OxyChem) will focus on short-cycle projects and be roughly 70% devoted to US onshore assets. Still, onshore capex will drop by $400 million from last year in part because of a drop in Permian basin activities and efficiency improvements.
Other elements of Occidental’s spending plan include:
● A reduction of about $100 million compared to last year for exploration work
● A $250 million drop in spending at the company’s Low Carbon Ventures group housing Stratos
Mathew said capex, which will be weighted a little to the first half, sets up Occidental’s production to average 1.45 MMboe/d for the full year, a tick up from 2025’s average of 1.434 MMboe/d but down from the roughly 1.48

Diamondback’s Van’t Hof growing ‘more confident about the macro’
The early Barnett production will help Diamondback slightly increase its oil production this year from 2025’s average of 497,200 b/d. Van’t Hof and his team are eyeing 505,000 b/d this year with total expected production of 926,000-962,000 boe/d versus last year’s 921,000 boe/d. On a Feb. 24 conference call with analysts and investors, Van’t Hof said he’s feeling better than in recent quarters about that production number possibly moving up. The bigger picture for the oil-and-gas sector, he said, has grown a bit brighter. “Some people have been talking about [oversupplying the market] for 2 years. It just hasn’t seemed to happen as aggressively as some expected,” Van’t Hof said. “As we turn to higher demand in the summer and driving season […] people will start to find reasons to be less bearish […] In general, we just feel more confident about the macro after a couple of big shocks last year on the supply side and the demand side.” In the last 3 months of 2025, Diamondback posted a net loss of more than $1.4 billion due to a $3.6 billion impairment charge stemming from lower commodity prices’ effect on the company’s reserves. Adjusted EBITDA fell to $2.0 billion from $2.5 billion in late 2024, and revenues during the quarter slipped to nearly $3.4 billion from $3.7 billion. Shares of Diamondback (ticker: FANG) were essentially flat at $173.68 in early-afternoon trading on Feb. 24. Over the past 6 months, they are still up more than 20%, and the company’s market value is now $50 billion.

Vaalco Energy advances offshore drilling, development in Gabon and Ivory Coast
Vaalco Energy Inc. is drilling Etame field offshore Gabon and preparing a field development plan (FDP) off Ivory Coast. In Gabon, Vaalco drilled, completed, and placed the Etame 15H-ST development well on production in Etame oil field in the 1V block. The well has a 250-m lateral interval of net pay in high-quality Gamba sands near the top of the reservoir. It flowed at a stabilized rate of about 2,000 gross b/d of oil with a 38% water cut through a 42/64-in. choke with the ESP at 54 Hz, confirming expectations from the ET-15P pilot well results. The company is working to stabilize pressure and manage the reservoir. The West Etame step-out exploration well spudded in mid-February. Being drilled from the S1 slot on the Etame platform, the Etame West (ET-14P) exploration prospect has a 57% chance of geologic success, and the well is expected to reach the target zone by mid-March. Etame Marin block lies in the Congo basin about 32 km off the coast of Gabon. The license area spreads over five fields covering about 187 sq km. Vaalco is operator of the block with 58.8% interest. In Ivory Coast, Vaalco has been confirmed as operator (60%) of Kossipo field on the CI-40 block southwest of Baobab field, with partner PetroCI holding the remaining 40%. An FDP is expected to be completed in second-half 2026. New ocean bottom node (OBN) seismic data is expected to drive and derisk Vaalco’s updated evaluation and development plan. Estimated gross 2C resources are 102-293 MMboe in place. The Baobab Ivorien (formerly MV10) floating production, storage, and offloading vessel (FPSO) is currently off the east coast of Africa and is expected to return to Ivory Coast by late March.

Ovintiv sets 2026 plan around Permian, Montney after declaring portfolio shift ‘complete’
2026 guidance For 2026, Ovintiv plans to invest $2.25–2.35 billion, up slightly from the $2.147 billion spent in 2025. McCracken said capital spend will be highest in first-quarter 2026 at about $625 million, “largely due to $50 million of capital allocated to the Anadarko and some drilling activity in the Montney that we inherited from NuVista.” The program is designed to deliver 205,000–212,000 b/d of oil and condensate, some 2 bcfd of natural gas, and 620,000–645,000 boe/d total company production. For full-year 2025, the company produced 614,500 boe/d. The company is pursuing a “stay‑flat” oil strategy, maintaining liquids output through steady activity rather than aggressive volume growth. Permian Ovintiv plans to run 5 rigs and 1-2 frac crews in the Permian basin this year, bringing 125–135 net wells online. Oil and condensate volumes are expected to average 117,000–123,000 b/d, with natural gas production of 270–295 MMcfd. The company projects 2026 drilling and completion costs below $600/ft, about $25/ft lower than 2025. Chief operating officer Gregory Givens credited faster cycle times and ongoing application of surfactant technology. Ovintiv has now deployed surfactants in about 300 Permian wells, generating a 9% uplift in oil productivity versus comparable control wells. Givens also reiterated that Ovintiv remains committed to its established cube‑development model. Responding to an analyst question, he said the company continues completing entire cubes at once, then returning “18 months later” to develop adjacent cubes—an approach that stabilizes well performance and reduces parent‑child degradation, he said. “We are getting the whole cube at the same time, and that is working quite well for us,” he said. The company plans to drill its first Barnett Woodford test well across Midland basin acreage in 2026. Ovintiv holds Barnett rights across roughly 100,000 acres and intends to move cautiously given the zone’s depth, higher pressure,

Microsoft will invest $80B in AI data centers in fiscal 2025
And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

John Deere unveils more autonomous farm machines to address skilled labor shortage
Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.) John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

2025 playbook for enterprise AI success, from agents to evals
2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for businesses and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era
OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends. It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Three Aberdeen oil company headquarters sell for £45m
Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but decided to sell them as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024. Trio of buildings snapped up London-headquartered EEH Ventures was founded in 2013 and owns a number of residential properties, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. The Aberdeen headquarters of Taqa. Image: CBRE. The North Sea headquarters of Middle East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year that it is planning to exit the North Sea by the end of 2029, blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030. Aberdeen big deals The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. Hammerson, which also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

2025 ransomware predictions, trends, and how to prepare
Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks. Top Ransomware Predictions for 2025:
● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices, which sound more and more realistic as they adopt local accents and dialects to enhance credibility and success rates.
● The Trifecta of Social Engineering Attacks: Vishing, Ransomware and Data Exfiltration. Additionally, sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks, preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny.
● Targeted Industries Under Siege: Manufacturing, healthcare, education, and energy will remain primary targets, with no slowdown in attacks expected.
● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days.
● Ransomware Payouts Are on the Rise: In 2025, ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups specializing in designated attack tactics, and collaboration among these groups, which have entered a sophisticated profit-sharing model using Ransomware-as-a-Service.
To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies.
● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats. ● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops

OpenAI’s “compromise” with the Pentagon is what Anthropic feared
On February 28, OpenAI announced it had reached a deal that will allow the US military to use its technologies in classified settings. CEO Sam Altman said the negotiations, which the company began pursuing only after the Pentagon’s public reprimand of Anthropic, were “definitely rushed.” In its announcements, OpenAI took great pains to say that it had not caved to allow the Pentagon to do whatever it wanted with its technology. The company published a blog post explaining that its agreement protected against use for autonomous weapons and mass domestic surveillance, and Altman said the company did not simply accept the same terms that Anthropic refused. You could read this to say that OpenAI won both the contract and the moral high ground, but reading between the lines and the legalese makes something else clear: Anthropic pursued a moral approach that won it many supporters but failed, while OpenAI pursued a pragmatic and legal approach that is ultimately softer on the Pentagon. It’s not yet clear if OpenAI can build in the safety precautions it promises as the military rushes out a politicized AI strategy during strikes on Iran, or if the deal will be seen as good enough by employees who wanted the company to take a harder line. Walking that tightrope will be tricky. (OpenAI did not immediately respond to requests for additional information about its agreement.)
But the devil is also in the details. The reason OpenAI was able to make a deal when Anthropic could not was less about boundaries, Altman said, than about approach. “Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with,” he wrote. OpenAI says one basis for its willingness to work with the Pentagon is simply an assumption that the government won’t break the law. The company, which has shared a limited excerpt of its contract, cites a number of laws and policies related to autonomous weapons and surveillance. They are as specific as a 2023 directive from the Pentagon on autonomous weapons (which does not prohibit them but issues guidelines for their design and testing) and as broad as the Fourth Amendment, which has supported protections for Americans against mass surveillance.
However, the published excerpt “does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use,” wrote Jessica Tillipman, associate dean for government procurement law studies at George Washington University’s law school. It simply states that the Pentagon can’t use OpenAI’s tech to break any of those laws and policies as they’re stated today. The whole reason Anthropic earned so many supporters in its fight—including some of OpenAI’s own employees—is that they don’t believe these rules are good enough to prevent the creation of AI-enabled autonomous weapons or mass surveillance. And an assumption that federal agencies won’t break the law is little assurance to anyone who remembers that the surveillance practices exposed by Edward Snowden had been deemed legal by internal agencies and were ruled unlawful only after drawn-out battles (not to mention the many surveillance tactics allowed under current law that AI could expand). On this front, we’ve essentially ended up back where we started: allowing the Pentagon to use its AI for any lawful use. OpenAI could say, as its head of national security partnerships wrote yesterday, that if you believe the government won’t follow the law, then you should also not be confident it would honor the red lines that Anthropic was proposing. But that’s not an argument against setting them. Imperfect enforcement doesn’t make constraints meaningless, and contract terms still shape behavior, oversight, and political consequences. OpenAI claims a second line of defense. The company says it maintains control over the safety rules governing its models and will not give the military a version of its AI stripped of those safety controls. “We can embed our red lines—no mass surveillance and no directing weapons systems without human involvement—directly into model behavior,” wrote Boaz Barak, an OpenAI employee Altman deputized to speak on the issue, in a post on X.
But the company doesn’t specify how its safety rules for the military differ from its rules for normal users. Enforcement is also never perfect, and it is especially unlikely to be when OpenAI is rolling out these protections in a classified setting for the first time and is expected to do so in just six months. There’s another question beneath all this: Should it be down to tech companies to prohibit things that are legal but that they find morally objectionable? The government certainly viewed Anthropic’s willingness to play this role as unacceptable. On Friday evening, eight hours before the US launched strikes in Tehran, Defense Secretary Pete Hegseth issued harsh remarks on X. “Anthropic delivered a master class in arrogance and betrayal,” he wrote, and echoed President Trump’s order for the government to cease working with the AI company after Anthropic sought to keep its model Claude from being used for autonomous weapons or mass domestic surveillance. “The Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose,” Hegseth wrote. But unless OpenAI’s full contract reveals more, it’s hard not to see the company as sitting on an ideological seesaw, promising that it does have leverage it will proudly use to do what it sees as the right thing while deferring to the law as the main backstop for what the Pentagon can do with its tech. There are three things to be watching here. One is whether this position will be good enough for OpenAI’s most critical employees. With AI companies spending so heavily on talent, it’s possible that some at OpenAI see in Altman’s justification an unforgivable compromise.
Second, there is the scorched-earth campaign that Hegseth has promised to wage against Anthropic. Going far beyond simply canceling the government’s contract with the company, he announced that it would be classified as a supply chain risk, and that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” There is significant debate about whether this death blow is legally possible, and Anthropic has said it will sue if the threat is pursued. OpenAI has also come out against the move. Lastly, how will the Pentagon swap out Claude—the only AI model it actively uses in classified operations, including some in Venezuela—while it escalates strikes against Iran? Hegseth granted the agency six months to do so, during which the military will phase in OpenAI’s models as well as those from Elon Musk’s xAI. But Claude was reportedly used in the strikes on Iran hours after the ban was issued, suggesting that a phase-out will be anything but simple. Even if the months-long feud between Anthropic and the Pentagon is over (which I doubt it is), we are now seeing the Pentagon’s AI acceleration plan put pressure on companies to relinquish lines in the sand they had once drawn, with new tensions in the Middle East as the primary testing ground. If you have information to share about how this is unfolding, reach out to me via Signal (username: jamesodonnell.22).

The Download: protesting AI, and what’s floating in space
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. I checked out one of the biggest anti-AI protests ever Pull the plug! Pull the plug! Stop the slop! Stop the slop! For a few hours this Saturday, February 28, I watched as a couple hundred anti-AI protesters marched through London’s King’s Cross tech hub, home to the UK headquarters of OpenAI, Meta and Google DeepMind, chanting slogans and waving signs. The march was organized by a coalition of two separate activist groups, Pause AI and Pull the Plug, who billed it as the largest protest of its kind yet. This is all familiar stuff. Researchers have been calling out the harms, both real and hypothetical, caused by generative AI—especially models such as OpenAI’s ChatGPT and Google DeepMind’s Gemini—for years. What’s changed is that those concerns are now being taken up by protest movements that can rally significant crowds of people to take to the streets and shout about it. Read the full story. —Will Douglas Heaven
We’re putting more stuff into space than ever. Here’s what’s up there.
Earth’s a medium-size rock with some water on top, enveloped by gases that keep everything that lives here alive. Just at the edge of that envelope begins a thin but dense layer of human-built, high-tech stuff. People started putting gear up there in 1957, and now it’s a real habit. Telescopes look up and out at the wild universe. Humans live in an orbiting metal bubble. In the last five years, the number of active satellites in space has increased from barely 3,000 to about 14,000—and climbing. And then there’s the garbage. Here’s a closer look at Earth’s ever-thickening shell of human-made matter—the anthroposphere. —Jonathan O’Callaghan
This story is from the latest print issue of MIT Technology Review magazine. If you haven’t already, subscribe now to receive future issues once they land.

MIT Technology Review is a 2026 ASME finalist in reporting
The American Society of Magazine Editors has named MIT Technology Review as a finalist for a 2026 National Magazine Award in the reporting category. The shortlisted story—“We did the math on AI’s energy footprint. Here’s the story you haven’t heard”—is part of our Power Hungry package on AI’s energy burden.
In a rigorous investigation, senior AI reporter James O’Donnell and senior climate reporter Casey Crownhart spent six months digging through hundreds of pages of reports, interviewing experts, and crunching the numbers. Read more about what they found out.

What comes after the LLMs?
The AI industry is organized around LLMs: tools, products, and business models. Yet many researchers believe the next breakthroughs may not look like language models at all. Join us for a LinkedIn Live discussion at 12.30pm ET on Tuesday March 3 to dive into the emerging directions that could define AI’s next era. Register here!

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Pentagon wanted Anthropic to analyze bulk data collected from Americans
It proved the sticking point in talks as OpenAI swooped in to ink a new deal. (The Atlantic $)
+ Anthropic has vowed to legally challenge its “security risk” label. (FT $)
+ Here’s a blow-by-blow look at how negotiations fell apart. (NYT $)
+ Downloads of Claude are on the up. (TechCrunch)

2 Iranian apps and websites were hacked in the wake of the US-Israeli strikes
News sites and a religious app were co-opted to display anti-military messages. (Reuters)
+ They urged personnel to abandon the regime and to liberate the country. (WSJ $)
+ Unsurprisingly, X is rife with disinformation about the attacks. (Wired $)
+ The campaign has disrupted online delivery orders across the Middle East. (Bloomberg $)

3 DeepSeek is poised to release a new AI model this week
The multimodal V4 is being released ahead of China’s annual parliamentary meetings. (FT $)

4 The UK is trialing a social media ban for under-16s
Hundreds of teens will test overnight digital curfews and screen time limits. (The Guardian)
+ What it’s like to attend a phone addiction meeting. (Boston Globe $)
5 Celebrities are winning huge sums playing on this major crypto casino’s slots
In fact, their lucky wins appear to spike while they’re livestreaming. (Bloomberg $)

6 America is desperate to steal China’s critical mineral lead
The victor essentially controls global computing, aerospace and defense. (Economist $)
+ This rare earth metal shows us the future of our planet’s resources. (MIT Technology Review)

7 How lasers became the military’s weapon of choice
From Ukraine to the US, soldiers are deploying laser guns. But why? (The Atlantic $)
+ They’re a key part of America’s arsenal in manning the southern border. (New Yorker $)
+ This giant microwave may change the future of war. (MIT Technology Review)
8 How quantum entanglement became big business
It promises unhackable communication—but is it too good to be true? (New Scientist $)
+ Useful quantum computing is inevitable—and increasingly imminent. (MIT Technology Review)

9 The iPod is proving a hit among Gen Z
Even though Apple discontinued the music player four years ago. (NYT $)

10 Chinese parents are joining matchmaking apps in their droves
In a bid to marry off their adult children as soon as humanly possible. (Nikkei Asia)

Quote of the day
“Day to day it just feels untenable…Some managers know this is the case, but executives just keep pointing to some bigger AI picture.”
—An anonymous Amazon employee describes the stresses of trying to increase productivity amid the company’s commitment to reducing headcount, to the Financial Times.
One more thing

The iPad was meant to revolutionize accessibility. What happened?

On April 3, 2010, Steve Jobs debuted the iPad. What for most people was basically a more convenient form factor was something far more consequential for non-speakers: a life-changing revolution in access to a portable, powerful communication device for just a few hundred dollars.

But a piece of hardware, however impressively designed and engineered, is only as valuable as what a person can do with it. After the iPad’s release, the flood of new, easy-to-use augmentative and alternative communication apps that users were in desperate need of never came.

Today, there are only around half a dozen apps, each retailing for $200 to $300, that ask users to select from menus of crudely drawn icons to produce text and synthesized speech. It’s a depressingly slow pace of development for such an essential human function. Read the full story.

—Julie Kim

I checked out one of the biggest anti-AI protests ever
“Pull the plug! Pull the plug! Stop the slop! Stop the slop!” For a few hours this Saturday, February 28, I watched as a couple hundred anti-AI protesters marched through London’s King’s Cross tech hub, home to the UK headquarters of OpenAI, Meta and Google DeepMind, chanting slogans and waving signs. The march was organized by a coalition of two separate activist groups, Pause AI and Pull the Plug, which billed it as the largest protest of its kind yet. The range of concerns on show covered everything from online slop and abusive images to killer robots and human extinction. One woman wore a large homemade billboard on her head that read “WHO WILL BE WHOSE TOOL?” (with the Os in “TOOL” cut out as eye holes). There were signs that said “Pause before there’s cause” and “EXTINCTION=BAD” and “Demis the Menace” (referring to Demis Hassabis, the CEO of Google DeepMind). Another simply stated: “Stop using AI.” An older man wearing a sandwich board that read “AI? Over my dead body” told me he was concerned about the negative impact of AI on society: “It’s about the dangers of unemployment,” he said. “The devil finds work for idle hands.” This is all familiar stuff. Researchers have been calling out the harms, both real and hypothetical, caused by generative AI—especially models such as OpenAI’s ChatGPT and Google DeepMind’s Gemini—for years. What’s changed is that those concerns are now being taken up by protest movements that can rally significant crowds of people to take to the streets and shout about it.
The first time I ran into anti-AI protesters was in May 2023, outside a London lecture hall where Sam Altman was speaking. Two or three people stood heckling an audience of hundreds. In June last year Pause AI, a small but international organization set up in 2023 and funded by private donors, drew a crowd of a few dozen people for a protest outside Google DeepMind’s London office. This felt like a significant escalation. “We want people to know Pause AI exists,” Joseph Miller, who heads up Pause AI’s UK branch and co-organized Saturday’s march, told me on a call the day before the protest: “We’ve been growing very rapidly. In fact, we also appear to be on a somewhat exponential path, matching the progress of AI itself.”
Miller is a PhD student at Oxford University, where he studies mechanistic interpretability, a new field of research that involves trying to understand exactly what goes on inside LLMs when they carry out a task. His work has led him to believe that the technology may forever be beyond our control and that this could have catastrophic consequences. It doesn’t have to be a rogue superintelligence, he said. You just need someone to put AI in charge of nuclear weapons. “The more silly decisions that humanity makes the less powerful the AI has to be before things go bad,” he said. After a week in which the US government tried to force Anthropic to let it use the firm’s LLM Claude for any “legal” military purposes, such fears seem a little less farfetched. Anthropic stood its ground, but OpenAI signed a deal with the DoD instead. (OpenAI declined an invitation to comment on Saturday’s protest.)

For Matilda da Rui, a member of Pause AI and co-organizer of the protest, AI is the last problem that humans will face. She thinks the technology will either allow us to solve—once and for all—every other problem that we have, or it will wipe us out and there will be nobody left to have problems any more. “It’s a mystery to me that anyone would really focus on anything else if they actually understood the problem,” she told me.

And yet despite that urgency, the atmosphere at the march was pleasant, even fun. There was no sense of anger and little sense that lives—let alone the survival of our species—were at stake. That could be down to the broad coalition of interests and demands that protesters brought with them. A chemistry researcher I spoke to ticked off a litany of complaints, which ranged from the conspiracy-adjacent (that data centers emitted infrasound below the threshold of human hearing, inducing paranoia in people who lived near them) to the reasonable (that the spread of AI slop online was making it hard to find reliable academic sources).
The researcher’s solution was to make it illegal for companies to profit from the technology: “If you couldn’t make money from AI, it wouldn’t be such a problem.” Most people I spoke to agreed that technology companies probably wouldn’t take any notice of this kind of protest. “I don’t think that the pressure on companies will ever work,” Maxime Fournes, the global head of Pause AI, told me when I bumped into him at the march: “They are optimized to just not care about this problem.” But Fournes, who worked in the AI industry for 12 years before joining Pause AI, thinks he can make it harder for those companies. “We can slow down the race by creating protection for whistleblowers or showing the public that working in AI is not a sexy job, that actually it’s a terrible job—you can dry up the talent pipeline.”
In general, most protesters hoped to make as many people as possible aware of the issues and to use that groundswell to push for government regulation. The organizers had pitched the march as a social event, encouraging anyone curious about the cause to come along. It seemed to have worked. I met a man who worked in finance who had tagged along with his roommate. I asked why he was there. “Sometimes you don’t have that much to do on a Saturday anyway,” he said. “If you can see the logic of the argument, it sort of makes sense to you, then it’s like ‘Yeah, sure, I’ll come along and see what it’s like.’” He thought the concerns around AI were hard for anyone to fully oppose. It’s not like a pro-Palestine protest, he said, where you’d have people who might disagree with the cause. “With this, I feel like it’s very hard for someone to totally oppose what you’re marching for.”

After winding its way through King’s Cross, the march ended in a church hall in Bloomsbury, where tables and chairs had been set up in rows. The protesters wrote their names on stickers, stuck them to their chests and made awkward introductions to their neighbors. They were here to figure out how to save the world. But I had a train to catch and I left them to it.
MIT Technology Review is a 2026 ASME finalist in reporting
AI is often described as a black box, but it’s not just its inner workings that are mysterious. Leading AI companies have kept figures on energy use closely guarded, making it hard to determine its climate impact. In a rigorous investigation, senior AI reporter James O’Donnell and senior climate reporter Casey Crownhart spent six months digging through hundreds of pages of reports, interviewing experts, and crunching the numbers. The team drilled down into the energy cost of a single prompt, and then zoomed out to build a broader picture illustrating the potential impacts of AI’s current and future energy demand. Their work revealed just how big AI’s energy footprint is, where that energy comes from, and who will pay for it. In the months following the project’s publication, major AI companies including OpenAI, Mistral, and Google published details about their models’ energy and water usage. The 2026 awards will be presented in New York City on May 19.

The Download: how AI is shaking up Go, and a cybersecurity mystery
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

AI is rewiring how the world’s best Go players think

Ten years ago AlphaGo, Google DeepMind’s AI program, stunned the world by defeating the South Korean Go player Lee Sedol. And in the years since, AI has upended the game. It’s overturned centuries-old principles about the best moves and introduced entirely new ones. Players now train to replicate AI’s moves as closely as they can rather than inventing their own, even when the machine’s thinking remains mysterious to them. Meanwhile, AI is democratizing access to training, and more female players are climbing the ranks as a result.

Today, it is essentially impossible to compete professionally without using AI. Some say the technology has drained the game of its creativity, while others think there is still room for human invention. Read the full story.

—Michelle Kim
MIT Technology Review Narrated: Hackers made death threats against this security researcher. Big mistake.
In April 2024, a mysterious someone using the online handles “Waifu” and “Judische” began posting death threats on Telegram and Discord channels aimed at a cybersecurity researcher named Allison Nixon.

As chief research officer at the cyber investigations firm Unit 221B, Nixon had built a career tracking cybercriminals and helping get them arrested. And although she had taken an interest in the Waifu persona in years past for crimes he boasted about committing, he hadn’t been on her radar for a while when the threats began, because she was tracking other targets. Now Nixon resolved to unmask Waifu/Judische and others responsible for the death threats—and take them down for crimes they admitted to committing.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Anthropic has refused the Pentagon’s AI demands
It’s holding firm on its stance: no mass surveillance of Americans, and no lethal autonomous weapons. (The Verge)
+ Anthropic said “virtually no progress” had been made during recent talks. (The Hill)
+ Here’s how relations between the US government and the company started to dissolve. (Vox)
2 Instagram will alert parents if teens repeatedly search for suicide material
But campaigners fear the measure could do more harm than good. (BBC)
+ Instagram is working on a similar alert feature for its AI tools. (Engadget)
+ Poland is weighing up banning under-15s from accessing social media. (Bloomberg $)
3 ChatGPT Health regularly fails to recognize medical emergencies
In more than half of serious cases, it advised users to delay seeking treatment. (The Guardian)
+ “Dr. Google” had its issues. Can ChatGPT Health do better? (MIT Technology Review)
4 The Islamic State’s online warriors are posting beyond the grave
The group is using AI to resurrect dead leaders and port them to new platforms. (404 Media)
5 Vegetarians are at lower risk from five types of cancer
It suggests that avoiding meat could help to avoid certain cancers, including breast and pancreatic. (FT $)
+ Interestingly, the same doesn’t apply for vegans. (Bloomberg $)
+ RFK Jr. follows a carnivore diet. That doesn’t mean you should. (MIT Technology Review)
6 Activists combating online abuse have been barred from America
Authorities accused HateAid of participating in a “global censorship-industrial complex.” (NYT $)
+ What it’s like to be banned from the US for fighting online hate. (MIT Technology Review)
7 Russians are looking for missing soldiers on Google Maps
They’re posting reviews pleading for information about missing loved ones. (New Yorker $)
+ Google Maps has finally gained approval to operate in South Korea. (FT $)
+ It’s hellbent on closing its final few global gaps. (Economist $)
8 Burger King’s new AI assistant will evaluate workers’ friendliness
It’ll check interactions to make sure they’re saying please and thank you. (The Verge)
+ Perplexity’s bossy new AI agent assigns work to fellow agents. (Ars Technica)
9 NASA still hasn’t made it back to the moon
The mission has been dogged by delays and issues. (WP $)
10 Are you Chinamaxxing yet?
Everyone on TikTok is, c’mon. (Insider $)

Quote of the day

“This is as much of a political fight as a military use issue.”

—Steven Feldstein, a senior fellow at the Carnegie Endowment, who researches AI in warfare, explains to the Washington Post why ideological differences are likely to be worsening the rift between Anthropic and the Pentagon.

One more thing

One city’s fight to solve its sewage problem with sensors

In the city of South Bend, Indiana, wastewater from people’s kitchens, sinks, washing machines, and toilets flows through 35 neighborhood sewer lines. On good days, just before each line ends, a vertical throttle pipe diverts the sewage into an interceptor tube, which carries it to a treatment plant where solid pollutants and bacteria are filtered out.

As in many American cities, those pipes are combined with storm drains, which can fill rivers and lakes with toxic sludge when heavy rains or melted snow overwhelms them, endangering wildlife and drinking water supplies. But city officials have a plan to make the city’s aging sewers significantly smarter. Read the full story.

—Andrew Zaleski
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ This is a fascinating insight into Jimi Hendrix’s technical guitar wizardry 🎸
+ The Romans: their lives really weren’t so different to ours, y’know.
+ How the Beatles kicked back and relaxed at home when they weren’t shaping history.
+ Disney composer Alan Menken is an undisputed talent.

AI is rewiring how the world’s best Go players think
Burrowed in the alleys of Hongik-dong, a hushed residential neighborhood in eastern Seoul, is a faded stone-tiled building stamped “Korea Baduk Association,” the governing body for professional Go. The game is an ancient one, with sacred stature in South Korea. But inside the building, rooms once filled with the soft clatter of hands dipping into wooden bowls of stones now echo with mouse clicks. Players hunch over their monitors and replay their matches in an AI program. Others huddle around a Go board and debate the best next move, while coaches report how their choices stack up against the AI’s. Some sit in silence, watching AI programs play against each other. Ten years ago AlphaGo, Google DeepMind’s AI program, stunned the world by defeating the South Korean Go player Lee Sedol. And in the years since, AI has upended the game. It’s overturned centuries-old principles about the best moves and introduced entirely new ones. Players now train to replicate AI’s moves as closely as they can rather than inventing their own, even when the machine’s thinking remains mysterious to them. Today, it is essentially impossible to compete professionally without using AI. Some say the technology has drained the game of its creativity, while others think there is still room for human invention. Meanwhile, AI is democratizing access to training, and more female players are climbing the ranks as a result. For Shin Jin-seo, the top-ranked Go player in the world, AI is an invaluable training partner. Every morning, he sits at his computer and opens a program called KataGo. Nicknamed “Shintelligence” for how closely his moves mimic AI’s, he traces the glowing “blue spot” that represents the program’s suggestion for the best next move, rearranging the stones on the digital grid to try to understand the machine’s thinking. “I constantly think about why AI chose a move,” he says.
When training for a match, Shin spends most of his waking hours poring over KataGo. “It’s almost like an ascetic practice,” he says. According to a study in 2022 by the Korean Baduk League, Shin’s moves match AI’s 37.5% of the time, well above the 28.5% average the study found among all players. “My game has changed a lot,” says Shin, “because I have to follow the directions suggested by AI to some extent.” The Korea Baduk Association says it has reached out to Google DeepMind in the hopes of arranging a match between Shin and AlphaGo, to commemorate the 10th anniversary of its victory over Lee. A spokesperson for Google DeepMind said the company could not provide information at this time. But if a new match does happen, Shin, who has trained on more advanced AI programs, is optimistic that he’d win. “AlphaGo still had some flaws then, so I think I could beat it if I target those weaknesses,” he says.
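The study routine described above, replaying positions in an engine and examining its suggested reply, can be sketched against any engine that speaks the Go Text Protocol (GTP), which KataGo supports. This is a minimal illustration, not KataGo's own tooling; the engine command and file names in the usage comment are placeholders.

```python
# Hedged sketch: querying a GTP-speaking Go engine for its preferred
# next move, the way training tools built on KataGo typically do.
import subprocess

def build_gtp_session(moves):
    """Build a GTP command sequence that replays `moves`
    (e.g. [("black", "Q16"), ("white", "D4")]) and then asks the
    engine to generate the next move itself."""
    cmds = ["boardsize 19", "clear_board"]
    cmds += [f"play {color} {coord}" for color, coord in moves]
    next_color = "black" if len(moves) % 2 == 0 else "white"
    cmds += [f"genmove {next_color}", "quit"]
    return "\n".join(cmds) + "\n"

def ask_engine(engine_cmd, moves):
    # GTP replies are plain text; successful responses begin with "= ".
    out = subprocess.run(engine_cmd, input=build_gtp_session(moves),
                         capture_output=True, text=True).stdout
    return [line[2:].strip() for line in out.splitlines()
            if line.startswith("= ")]

# Usage (assumes a local KataGo install; the model and config file
# names below are placeholders):
#   ask_engine(["katago", "gtp", "-model", "model.bin.gz",
#               "-config", "gtp.cfg"], [("black", "Q16")])
```

Because GTP is a plain line-oriented protocol, the same sketch works with other open-source engines; only the `engine_cmd` changes.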
AI rewrites the Go playbook

Go is an abstract strategy board game invented in China more than 2,500 years ago. Two players take turns placing black and white stones on a 19×19 grid, aiming to conquer territory by surrounding their opponent’s stones. It’s a game of striking mathematical complexity. The number of possible board configurations—roughly 10^170—dwarfs the number of atoms in the universe. If chess is a battle, Go is a war. You suffocate your enemy in one corner while fending off an invasion in another.

To train AI to play Go, a vast trove of human Go moves is fed into a neural network, a computing system that mimics the web of neurons in the human brain. AlphaGo, which was later christened AlphaGo Lee after its victory over Lee Sedol, was trained on 30 million Go moves and refined by playing millions of games against itself. In 2017, its successor, AlphaGo Zero, picked up Go from scratch. Without studying any human games, it learned by playing against itself, with moves based only on the rules of the game. The blank-slate approach proved more powerful, unconstrained by the limits of human knowledge. After three days of training, it beat AlphaGo Lee 100 games to zero.

Google DeepMind retired AlphaGo that same year. But then a wave of open-source models inspired by AlphaGo Zero emerged. Today, KataGo is the program most widely used by professional Go players in South Korea. It’s faster and sharper than AlphaGo. It’s learned to predict not just who will win, but also who owns each point on the board at any given moment. While AlphaGo Zero pieced together its understanding of the board by looking at small sections, KataGo learned to read the whole board, developing better judgment for long-term strategies. Instead of just learning how to win, it learned to maximize its score. The software has reshaped how people play.
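The complexity figure cited here can be sanity-checked with a few lines of arithmetic: treating each of the 361 intersections as independently empty, black, or white gives a naive upper bound, and the exact count of legal positions (about 2.1 × 10^170, computed by John Tromp in 2016) sits only a couple of orders of magnitude below it.

```python
# Sanity check on Go's "roughly 10^170 board configurations".
# Naive count: each of the 19x19 = 361 points is empty, black, or
# white, ignoring the capture rules that make some positions illegal.
import math

points = 19 * 19                              # 361 intersections
naive_digits = math.floor(points * math.log10(3)) + 1
print(naive_digits)                           # 173, i.e. ~1.7e172

# The observable universe holds on the order of 1e80 atoms, so even
# the smaller count of legal positions (~2.1e170) exceeds it by
# roughly 90 orders of magnitude.
```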
For hundreds of years, professional Go players have navigated the game’s astronomical complexity by developing heuristics that replaced brute calculation. Elegant opening strategies imposed abstract order on the empty grid. Invading corners early was a bad bargain. Each generation of Go players added new principles to the canon. But “AI has changed everything,” says Park Jeong-sang, a South Korean Go commentator. “Fundamental moves that were once considered common sense aren’t played at all today, and techniques that didn’t exist before have become popular.” The starkest shift has been in opening moves. Go starts on a blank grid, and the first 50 moves were canvases for abstract thinking and creativity, where players etched their personalities and philosophies. Lee Sedol fashioned provocative moves that invited chaos. Ke Jie, a Chinese player who was defeated by AlphaGo Master in 2017, dazzled with agile, imaginative moves. Now, players memorize the same strain of efficient, calculated opening moves suggested by AI. The crux of the game has shifted to the middle moves, where raw calculation matters more than creativity. Training with AI has led to a homogenization of playing styles. Ke Jie has lamented the strain of watching the same opening moves recycled endlessly. “I feel the exact same way as the fans watching. It’s very tiring and painful to watch,” he told a Chinese news outlet in 2021. Fans revel when a player breaks from the script with offbeat moves, but those moments have become rarer. Over a third of moves by the top Go players replicate AI’s recommendations, according to a study in 2023. The first 50 moves of each game are often identical to what AI suggests, many players say. “Go has become a mind sport,” says Lee Sedol, who retired three years after his 2016 defeat to AlphaGo. “Before AI, we sought something greater. I learned Go as an art,” he says. “But if you copy your moves from an answer key, that’s no longer art.”
Playing Go is no longer about charting new frontiers, some players say, but about following the dictates of a superhuman oracle. “I used to inspire fans by advancing the techniques of Go and presenting a new paradigm,” says Lee. “My reason for playing Go has vanished.”

A mysterious mind

The players who have stayed in the game are trying to reinvent their craft. But it can be hard to discern what the new principles are. Disarmingly slight and formidably calm, Kim Chae-young, one of the top female Go players in the world, grew up learning the game from her father, who was also a professional Go player. But when AI began to reshape the game, she found herself starting over. “I needed time to abandon everything I had learned before,” says Kim, who shared her screen with me as she pointed her cursor to the blue spots suggested by KataGo. “The intuition I had built up over the years turned out to be wrong.” As she leaned close to her monitor, her blinking screen showed the winning probabilities of each move, with no explanations. Even top players like Kim and Shin don’t understand all of AI’s moves. “It seems like it’s thinking in a higher dimension,” she says. When she tries to learn from AI, she adds, “it’s less about rationally thinking through each move, but more about developing a gut feeling—an intuition.” Researchers are trying to discover the superhuman knowledge encoded in game-playing AI programs so that humans can learn it too. In 2024, researchers at Google DeepMind extracted new chess concepts from AlphaZero, a generalized version of AlphaGo Zero that can also play chess, and taught them to chess grandmasters using chess puzzles. The Go concepts that players have picked up from AI systems so far are “probably only a small portion of what you could potentially learn,” says Nicholas Tomlin, a computer scientist at Toyota Technological Institute at Chicago, who coauthored a study probing Go concepts encoded in AlphaGo Zero.
But extracting those lessons remains a struggle. “Top-tier players haven’t yet been able to deduce the general principles behind AI moves,” says Nam Chi-hyung, a Go professor at Myongji University. Although they can emulate AI’s moves, they have yet to glean a new paradigm for the game because its reasoning is a black box, she says. Go may be in an epistemic limbo. Even if AI is an opaque teacher, it’s a democratic one. It has supercharged training for female Go players, who have long been underdogs of the game. For decades, training meant studying under top male players, and the most competitive matches took place in male circles that were difficult for women to break into, says Nam. “Female players never had access to that experience,” she says. “But now they can study with AI, which has made their training environment much more favorable.” More broadly, AI has narrowed the gap between players by helping everyone perfect their opening moves. Female players have climbed the ranks over the last few years as a result. In 2022, Choi Jeong, then the top female player in the world, became the first woman to reach the finals of a major international Go tournament. Dubbed “Girl Wrestler” for her fierce, combative style of play, she took on Shin. She lost, but the match broke new ground for women in Go. In 2024, Kim made headlines for winning the Korean Go League’s postseason playoffs. She was the only female player in the tournament.
Training with AI has given Kim newfound confidence. Analyzing male players’ moves with AI has shattered their veneer of infallibility. “Before, I couldn’t gauge just how strong top male players were—they felt invincible. Now, I know that they make mistakes, and their moves aren’t always brilliant,” she says. “AI broke the psychological barrier.”

Go players find a new identity

Although AI has mastered Go far better than any player, fans continue to prefer watching people play. “A Go game between AI programs is not very fun for fans to watch,” says Park, the Go commentator. Such matches are too complex for fans to follow, too flawless to be thrilling, he says.
Players can mimic AI’s opening moves, but in the middle game—where the board branches into too many possibilities to memorize—their own judgment takes over. Fans revel in watching players make mistakes and mount comebacks, exuding personality in every stone on the board. Shin’s playing style is combative but marked by machinelike poise. Kim deftly navigates the most chaotic positions on the board. “In Go, every move is a choice you make, and your opponent responds with a choice of their own,” says Kim Dae-hui, 27, a Go fan and amateur player. “Watching that process unfold is fun.” With fans like Kim still watching, Shin finds meaning in his game. “I can play a kind of Go that tells a story that only a human can,” he says. After his retirement, Lee searched for a new job where he could have an edge as a human. He started making board games, giving speeches, and teaching students at a university. “I’m looking for a new domain that I can enjoy and excel at,” he says. But lately, he feels more hopeful for the game he left behind. “It’s every Go player’s dream to play a masterpiece game,” he says—a game of technical brilliance, with no mistakes, fought to a razor’s edge between evenly matched players. “It’s like a mirage,” Lee says, chuckling. “Maybe AI can help us play a masterpiece.” Shin hopes he can do that. To Shin, AI is a teacher, a companion, and a North Star. “I may be one of the strongest human players, but with AI around, I can’t be so arrogant,” he says. “AI gives me a reason to keep improving.”

Nvidia partners with optics technology vendors Lumentum and Coherent to enhance AI infrastructure
Jackson added, “It also looks like the bet will be on photon transfer optics. Photonics-based computers have been in development as prototypes for more than a decade, and seek to address the physical limitations of copper as an electrical conduit.” By relying on the transfer of light through glass, he said, “this architectural approach is more energy efficient and promises to be much faster than current chips. If Nvidia can mass-manufacture a next-generation GPU that integrates photonics right into its silicon, then they can solve a couple of big problems for AI developers: power consumption and speed.”

Sanchit Vir Gogia, chief analyst at Greyhound Research, said that the dual $2 billion investment “sends a signal about AI infrastructure bottlenecks: this is the moment where the industry quietly admits that AI scaling is no longer primarily a chip story. It is a communication story.” For the last few years, he said, “the visible constraint was straightforward. Enterprises could not get enough GPUs. Hyperscalers reserved allocation. Vendors rationed supply. That was the first choke point. But once accelerators are deployed at scale, the bottleneck moves. It does not disappear.”

Gogia added that in today’s AI clusters, “each accelerator depends on dozens of high-speed links to talk to its neighbours. Multiply that across the rack and you end up with thousands of interconnects operating continuously. Every one of those links draws power. Every one introduces latency and signal integrity considerations. Every one carries a probability of failure.”

What Nvidia is signalling is that the next bottleneck is the fabric itself, he pointed out. “You can add more GPUs, but if the network layer cannot scale proportionally, utilisation falls and economics deteriorate,” he said. “The company is moving upstream to ensure the arteries of AI infrastructure do not become the new point of scarcity. This is

U.S. Department of Energy Brings Together Vertical Gas Corridor Countries to Strengthen Energy Coordination
WASHINGTON, DC — The U.S. Department of Energy (DOE) today hosted officials from Bulgaria, Greece, Romania, Moldova, Ukraine, and the European Commission to advance work on the Vertical Gas Corridor. The meeting built on progress made at the Partnership for Transatlantic Energy Cooperation Summit in Athens in November 2025 and the Transatlantic Gas Security Summit in Washington, D.C. in February 2026.

“By partnering with the countries of the Vertical Corridor, we are opening major opportunities to expand U.S. LNG exports to Central and Eastern Europe,” said Joshua Volz. “This effort is so important to our President and Secretary because it aligns with our nation’s strengths and commitment to supporting friends and allies across Europe.”

The technical discussion brought together Energy Ministries, national regulators, and Transmission System Operators (TSOs) to address key objectives essential to unlocking the Vertical Gas Corridor’s capacity to enable the northbound flow of regasified U.S. LNG from Greece and expand access to European markets:

- Resolving regulatory friction points that impact long-term planning
- Harmonizing tariffs to achieve cost competitiveness
- Reviewing strategic infrastructure investments necessary to enable full corridor capacity

Today’s meeting reinforces DOE’s commitment to strengthening U.S. energy leadership and helping allies secure reliable alternatives to adversarial energy suppliers. By reducing barriers to U.S. LNG exports, DOE continues to support America’s role as a leading global energy provider.

###

Intel aims advanced Xeon 6+ at AI edge computing
At the Mobile World Congress show in Barcelona, Intel showcased its most advanced server processor yet, the Xeon 6+, codenamed “Clearwater Forest.” Technically, it is one of Intel’s most complex chiplet designs, with a package that combines a total of 12 compute chiplets manufactured on a mix of the Intel 18A, Intel 7, and Intel 3 manufacturing processes. Clearwater Forest supports the existing Xeon server platform socket, 12 memory channels, 96 PCIe 5.0 lanes, and 64 CXL 2.0 lanes. It supports memory up to DDR5-8000. The chip contains 288 efficiency cores (E-cores), with a high-bandwidth on-chip fabric to link two chips in a two-socket design.

OpenAI’s “compromise” with the Pentagon is what Anthropic feared
On February 28, OpenAI announced it had reached a deal that will allow the US military to use its technologies in classified settings. CEO Sam Altman said the negotiations, which the company began pursuing only after the Pentagon’s public reprimand of Anthropic, were “definitely rushed.” In its announcements, OpenAI took great pains to say that it had not caved to allow the Pentagon to do whatever it wanted with its technology. The company published a blog post explaining that its agreement protected against use for autonomous weapons and mass domestic surveillance, and Altman said the company did not simply accept the same terms that Anthropic refused. You could read this to say that OpenAI won both the contract and the moral high ground, but reading between the lines and the legalese makes something else clear: Anthropic pursued a moral approach that won it many supporters but failed, while OpenAI pursued a pragmatic and legal approach that is ultimately softer on the Pentagon. It’s not yet clear if OpenAI can build in the safety precautions it promises as the military rushes out a politicized AI strategy during strikes on Iran, or if the deal will be seen as good enough by employees who wanted the company to take a harder line. Walking that tightrope will be tricky. (OpenAI did not immediately respond to requests for additional information about its agreement.)
But the devil is also in the details. The reason OpenAI was able to make a deal when Anthropic could not was less about boundaries than about approach, Altman said. “Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with,” he wrote. OpenAI says one basis for its willingness to work with the Pentagon is simply an assumption that the government won’t break the law. The company, which has shared a limited excerpt of its contract, cites a number of laws and policies related to autonomous weapons and surveillance. These range from something as specific as a 2023 Pentagon directive on autonomous weapons (which does not prohibit them but sets guidelines for their design and testing) to something as broad as the Fourth Amendment, which has supported protections for Americans against mass surveillance.
However, the published excerpt “does not give OpenAI an Anthropic-style, free-standing right to prohibit otherwise-lawful government use,” wrote Jessica Tillipman, associate dean for government procurement law studies at George Washington University’s law school. It simply states that the Pentagon can’t use OpenAI’s tech to break any of those laws and policies as they’re stated today. The whole reason Anthropic earned so many supporters in its fight—including some of OpenAI’s own employees—is that they don’t believe these rules are good enough to prevent the creation of AI-enabled autonomous weapons or mass surveillance. And an assumption that federal agencies won’t break the law is little assurance to anyone who remembers that the surveillance practices exposed by Edward Snowden had been deemed legal by internal agencies and were ruled unlawful only after drawn-out battles (not to mention the many surveillance tactics allowed under current law that AI could expand). On this front, we’ve essentially ended up back where we started: allowing the Pentagon to use its AI for any lawful use. OpenAI could say, as its head of national security partnerships wrote yesterday, that if you believe the government won’t follow the law, then you should also not be confident it would honor the red lines that Anthropic was proposing. But that’s not an argument against setting them. Imperfect enforcement doesn’t make constraints meaningless, and contract terms still shape behavior, oversight, and political consequences. OpenAI claims a second line of defense. The company says it maintains control over the safety rules governing its models and will not give the military a version of its AI stripped of those safety controls. “We can embed our red lines—no mass surveillance and no directing weapons systems without human involvement—directly into model behavior,” wrote Boaz Barak, an OpenAI employee Altman deputized to speak on the issue on X.
But the company doesn’t specify how its safety rules for the military differ from its rules for normal users. Enforcement is also never perfect, and it is especially unlikely to be when OpenAI is rolling out these protections in a classified setting for the first time and is expected to do so in just six months. There’s another question beneath all this: Should it be down to tech companies to prohibit things that are legal but that they find morally objectionable? The government certainly viewed Anthropic’s willingness to play this role as unacceptable. On Friday evening, eight hours before the US launched strikes in Tehran, Defense Secretary Pete Hegseth issued harsh remarks on X. “Anthropic delivered a master class in arrogance and betrayal,” he wrote, and echoed President Trump’s order for the government to cease working with the AI company after Anthropic sought to keep its model Claude from being used for autonomous weapons or mass domestic surveillance. “The Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose,” Hegseth wrote. But unless OpenAI’s full contract reveals more, it’s hard not to see the company as sitting on an ideological seesaw, promising that it does have leverage it will proudly use to do what it sees as the right thing while deferring to the law as the main backstop for what the Pentagon can do with its tech. There are three things to watch here. One is whether this position will be good enough for OpenAI’s most critical employees. With AI companies spending so heavily on talent, it’s possible that some at OpenAI see in Altman’s justification an unforgivable compromise.
Second, there is the scorched-earth campaign that Hegseth has promised to wage against Anthropic. Going far beyond simply canceling the government’s contract with the company, he announced that it would be classified as a supply chain risk, and that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” There is significant debate about whether this death blow is legally possible, and Anthropic has said it will sue if the threat is pursued. OpenAI has also come out against the move. Lastly, how will the Pentagon swap out Claude—the only AI model it actively uses in classified operations, including some in Venezuela—while it escalates strikes against Iran? Hegseth granted the agency six months to do so, during which the military will phase in OpenAI’s models as well as those from Elon Musk’s xAI. But Claude was reportedly used in the strikes on Iran hours after the ban was issued, suggesting that a phase-out will be anything but simple. Even if the months-long feud between Anthropic and the Pentagon is over (which I doubt it is), we are now seeing the Pentagon’s AI acceleration plan put pressure on companies to relinquish lines in the sand they had once drawn, with new tensions in the Middle East as the primary testing ground. If you have information to share about how this is unfolding, reach out to me via Signal (username: jamesodonnell.22).

Nvidia partners with telecom providers for open 6G networks
Nvidia has partnered with a variety of global telecom providers in a commitment to build 6G on open and secure artificial intelligence-native platforms, bringing software-defined networking to telecommunications. The list of Nvidia partners, announced at the Mobile World Congress conference, is a who’s who of telecom — Booz Allen, BT Group, Cisco, Deutsche Telekom, Ericsson, MITRE, Nokia, OCUDU Ecosystem Foundation, ODC, SK Telecom, SoftBank Corp. and T-Mobile. Initial trials for 6G are expected to start as early as 2028, and the new network is expected to launch commercially around 2030. “Unlike 5G, 6G is being born in the AI era, and the networks of today simply aren’t ready for the use cases of tomorrow,” said Ronnie Vasishta, senior vice president of telecommunications at Nvidia, on a conference call with the tech media. “Remember, AI did not exist when 5G was being defined. So using AI to even improve the networks wasn’t possible in that definitional phase.”

Why network bandwidth matters a lot
One interesting point about VPNs is raised by fully a third of capacity-hungry enterprises: SD-WAN is the cheapest and easiest way to increase capacity to remote sites. Yes, the service reliability of broadband Internet access for these sites is highly variable, so enterprises say they need to pilot-test in a target area to determine whether even business-grade broadband Internet is reliable enough, but if it is, high capacity is both available and cheap.

Clearly, data center networking is taking the prime position in enterprise network planning, even without any contribution from AI. Will AI contribute? Enterprises generally believe that self-hosted AI will indeed require more network bandwidth, but again think this will be largely confined to the data center. AI, they say, has a broader and less predictable appetite for data, and business applications involving data that’s subject to governance, or that’s already data-center hosted, are likely to be hosted proximate to the data. That was true for traditional software, and it’s likely just as true for AI. Yes, but: today, three times as many enterprises say they would cite AI needs simply to bolster the justification for capacity expansion as say they actually need the capacity now. AI hype has entered, and perhaps even dominates, network capital project justifications.

These capacity trends don’t affect enterprises alone; they also reshape the equipment space. Only 9% of enterprises say they have invested in white-box devices to build capacity and data center configuration flexibility, but the number that say they would evaluate them in 2026 is double that. This may be what’s behind Cisco’s decision to push its new G300 chip, and AI’s role in capital project justifications may also be why Cisco positions the G300 so aggressively as an AI facilitator. Make no mistake, though: this is really all about capacity and QoE, even for AI.
Stay Ahead with the Paperboy Newsletter
Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on 1 week of news.