Your Gateway to Power, Energy, Datacenters, Bitcoin and AI
Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.
Discover What Matters Most to You

AI

Bitcoin

Datacenter

Energy
Featured Articles

Former DeepSeeker and collaborators release new method for training reliable AI agents: RAGEN
2025 was, by many expert accounts, supposed to be the year of AI agents: task-specific AI implementations powered by leading large language and multimodal models (LLMs) like those offered by OpenAI, Anthropic, Google, and DeepSeek. But so far, most AI agents remain stuck as experimental pilots in a kind of corporate purgatory, according to a recent poll conducted by VentureBeat on the social network X.

Help may be on the way: a collaborative team from Northwestern University, Microsoft, Stanford, and the University of Washington, including former DeepSeek researcher Zihan Wang, currently completing a computer science PhD at Northwestern, has introduced RAGEN, a new system for training and evaluating AI agents that they hope makes them more reliable and less brittle for real-world, enterprise-grade use. Unlike static tasks such as math solving or code generation, RAGEN focuses on multi-turn, interactive settings where agents must adapt, remember, and reason in the face of uncertainty.

Built on a custom RL framework called StarPO (State-Thinking-Actions-Reward Policy Optimization), the system explores how LLMs can learn through experience rather than memorization, focusing on entire decision-making trajectories rather than one-step responses. StarPO operates in two interleaved phases: a rollout stage, in which the LLM generates complete interaction sequences guided by reasoning, and an update stage, in which the model is optimized using normalized cumulative rewards. This structure supports a more stable and interpretable learning loop than standard policy optimization approaches.

The authors implemented and tested the framework using fine-tuned variants of Alibaba’s Qwen models, including Qwen 1.5 and Qwen 2.5, chosen for their open weights and robust instruction-following capabilities.
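The two interleaved phases described above can be sketched as a toy loop. Everything here (the random per-turn rewards, the batch size, and the function names) is an illustrative assumption, not RAGEN’s actual code:

```python
import random

def rollout(num_turns=5):
    """Rollout stage: generate a complete multi-turn interaction
    sequence. Per-turn rewards here are random stand-ins for an
    LLM acting in an environment."""
    return [random.uniform(0, 1) for _ in range(num_turns)]

def normalized_cumulative_rewards(trajectories):
    """Update stage: sum each trajectory's rewards, then normalize
    across the batch, so the policy update weights whole trajectories
    rather than single-step responses."""
    totals = [sum(traj) for traj in trajectories]
    mean = sum(totals) / len(totals)
    var = sum((t - mean) ** 2 for t in totals) / len(totals)
    std = var ** 0.5 or 1.0  # guard against a zero-variance batch
    return [(t - mean) / std for t in totals]

random.seed(0)
batch = [rollout() for _ in range(4)]               # phase 1: rollouts
advantages = normalized_cumulative_rewards(batch)   # phase 2: update signal
print(len(advantages))  # → 4, one scalar weight per trajectory
```

In a real implementation the advantages would feed a policy-gradient update of the LLM; the point of the sketch is only the trajectory-level (rather than token- or step-level) credit assignment.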

Enphase to absorb bulk of China tariff hit this year: CEO
Dive Brief:

Enphase Energy expects to absorb most of the impact of the Trump administration’s China tariffs this year as it works to line up non-China battery cell supplies by early 2026, CEO Badri Kothandaraman said Tuesday on the company’s first-quarter earnings call. Though Enphase could raise battery prices by 6% to 8% later this year, it plans to bear the brunt of triple-digit duties on cells and other battery materials imported from China, which Kothandaraman said accounts for 90% to 95% of global battery cell supply. Enphase reported a 13% decline in U.S. revenue from Q4 2024 due to seasonality and softening demand, amid broader uncertainty around U.S. trade policy and the fate of U.S. tax credits that benefit domestic battery manufacturers and installers.

Dive Insight:

Enphase’s geographically diversified manufacturing base provides some tariff protection for non-battery products, such as microinverters and electric vehicle charging equipment, Kothandaraman said on the call. Its battery business does face significant cost increases, however, due to China’s dominance of the battery supply chain. Though the company makes about 25% of its batteries in the United States and plans to further increase that share, it remains reliant on China-made cells for now, Kothandaraman said.

U.S. battery distributors and energy storage developers were already bracing for higher import duties on Chinese inputs thanks to an expected increase in tariffs imposed during the Biden administration, but the 145% duty on a range of Chinese imports far exceeds the double-digit tariffs Trump threatened during the 2024 campaign. Administration officials suggested this week that China tariffs could decline to 50% to 65% in the near term, without offering details on the timing or scope of the potential change. Looking ahead, Enphase must weigh the impacts of import duties against the higher cost of U.S. …

Amazon’s SWE-PolyBench just exposed the dirty secret about your AI coding assistant
Amazon Web Services today introduced SWE-PolyBench, a comprehensive multi-language benchmark designed to evaluate AI coding assistants across a diverse range of programming languages and real-world scenarios. The benchmark addresses significant limitations in existing evaluation frameworks and offers researchers and developers new ways to assess how effectively AI agents navigate complex codebases.

“Now they have a benchmark that they can evaluate on to assess whether the coding agents are able to solve complex programming tasks,” said Anoop Deoras, Director of Applied Sciences for Generative AI Applications and Developer Experiences at AWS, in an interview with VentureBeat. “The real world offers you more complex tasks. In order to fix a bug or do feature building, you need to touch multiple files, as opposed to a single file.”

The release comes as AI-powered coding tools have exploded in popularity, with major technology companies integrating them into development environments and standalone products. While these tools show impressive capabilities, evaluating their performance has remained challenging, particularly across different programming languages and varying task complexities.

SWE-PolyBench contains over 2,000 curated coding challenges derived from real GitHub issues spanning four languages: Java (165 tasks), JavaScript (1,017 tasks), TypeScript (729 tasks), and Python (199 tasks). The benchmark also includes a stratified subset of 500 issues (SWE-PolyBench500) designed for quicker experimentation.

“The task diversity and the diversity of the programming languages was missing,” Deoras explained of existing benchmarks. “In SWE-Bench today, there is only a single programming language, Python, and there is a single task: bug fixes. In PolyBench, as opposed to SWE-Bench, we have expanded this benchmark to include three additional languages.” The new benchmark directly addresses limitations in SWE-Bench, which has emerged as the de facto standard …
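The per-language counts reported above can be tallied to check the headline figure:

```python
# Per-language task counts reported for SWE-PolyBench
tasks = {"Java": 165, "JavaScript": 1017, "TypeScript": 729, "Python": 199}
total = sum(tasks.values())
print(total)  # → 2110, consistent with "over 2,000 curated coding challenges"
```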

Roundtables: Brain-Computer Interfaces: From Promise to Product
Available only for MIT Alumni and subscribers.
Recorded on April 23, 2025
[embedded content]
Speakers: David Rotman, editor at large, and Antonio Regalado, senior editor for biomedicine.

Brain-computer interfaces (BCIs) have been crowned the 11th Breakthrough Technology of 2025 by MIT Technology Review’s readers. BCIs are electrodes implanted into the brain to send neural commands to computers, primarily to assist paralyzed people. Hear from MIT Technology Review editor at large David Rotman and senior editor for biomedicine Antonio Regalado as they explore the past, present, and future of BCIs.

OpenAI makes ChatGPT’s image generation available as API
People can now natively incorporate Studio Ghibli-inspired pictures generated by ChatGPT into their businesses. OpenAI has added the model behind its wildly popular image generation tool, used in ChatGPT, to its API. The gpt-image-1 model will allow developers and enterprises to “integrate high-quality, professional-grade image generation directly into their own tools and platforms.”

“The model’s versatility allows it to create images across diverse styles, faithfully follow custom guidelines, leverage world knowledge, and accurately render text — unlocking countless practical applications across multiple domains,” OpenAI said in a blog post.

Pricing for the API separates tokens for text and images. Text input tokens (the prompt text) will cost $5 per 1 million tokens. Image input tokens will be $10 per million tokens, while image output tokens (the generated image) will be a whopping $40 per million tokens. Competitor Stability AI offers a credit-based system for its API in which one credit equals $0.01; using its flagship Stable Image Ultra costs eight credits per generation. Google’s image generation model, Imagen, charges paying users $0.03 per image generated using the Gemini API.

Image generation in one place

OpenAI allowed ChatGPT users to generate and edit images directly in the chat interface in April, a few months after adding image generation to ChatGPT through the GPT-4o model. The company said image generation in the chat platform “quickly became one of our most popular features.” OpenAI said over 130 million users have accessed the feature and created 700 million images in the first week alone. However, this popularity also presented OpenAI with some challenges. Social media users quickly discovered that they could prompt ChatGPT to generate images inspired by the Japanese animation juggernaut Studio Ghibli …
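To make the pricing concrete, here is a small cost calculator using the per-million-token rates quoted above. The token counts in the usage example are hypothetical, since the article does not say how many tokens a generated image consumes:

```python
# Rates quoted in the article, in dollars per 1 million tokens
TEXT_INPUT = 5.00
IMAGE_INPUT = 10.00
IMAGE_OUTPUT = 40.00

def gpt_image_cost(text_in, image_in, image_out):
    """Estimated dollar cost of one gpt-image-1 API call,
    given token counts in each pricing category."""
    return (text_in * TEXT_INPUT
            + image_in * IMAGE_INPUT
            + image_out * IMAGE_OUTPUT) / 1_000_000

# Hypothetical call: a 200-token prompt producing a 4,000-token image
print(round(gpt_image_cost(200, 0, 4000), 4))  # → 0.161
```

As the example suggests, image output tokens dominate the bill at these rates, which is why the $40-per-million output price is the figure to watch.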

Baker Hughes Posts $402MM Q1 Profit
Baker Hughes Co. has reported $402 million in net income for the first quarter (Q1), down $777 million from the prior three-month period and $53 million against Q1 2024. Net earnings adjusted for nonrecurring or extraordinary items fell 27 percent quarter-on-quarter but rose 19 percent year-on-year to $509 million, or 51 cents per share. Adjustments totaled $108 million. The adjusted figure beat the average estimate of 47 cents from analysts surveyed by Zacks. The Houston, Texas-based oilfield and energy tech heavyweight closed higher at $38.36 on Nasdaq on results day.

Meanwhile, Baker Hughes’ adjusted earnings before interest, taxes, depreciation and amortization (EBITDA) dropped 21 percent sequentially but grew 10 percent year-over-year to $1.04 billion. Adjustments totaled $140 million. The quarter-on-quarter decline in adjusted net income and adjusted EBITDA primarily resulted from lower volumes in both the oilfield services and equipment (OFSE) segment and the industrial and energy technology (IET) segment. The decrease in volumes was partially offset by “productivity and structural cost-out initiatives”, Baker Hughes said in an online statement. “The year-over-year increase in adjusted net income and adjusted EBITDA was driven by increased volume in IET including higher proportionate growth in Gas Technology Equipment and productivity, structural cost-out initiatives and higher pricing in both segments, partially offset by decreased volume and business mix in OFSE and cost inflation in both segments”.

Revenue totaled $6.43 billion, down 13 percent sequentially but stable year-on-year. Operating activities in the January-March 2025 period generated $709 million in cash flow. Free cash flow landed at $454 million.

“In our IET segment, we booked $3.2 billion of orders, including our first data center awards, totaling more than 350 MW of power solutions for this rapidly evolving market”, highlighted chair and chief executive Lorenzo Simonelli. “In addition to expanding opportunities for data centers, we have a strong pipeline …

USA Widens Sanctions on Iran to Target Lucrative Gas Exports
The US’s campaign to impose “maximum pressure” on Iran’s economy now includes the Islamic Republic’s liquefied petroleum gas exports, as Washington broadens its focus beyond crude oil. The Treasury Department on Tuesday sanctioned Iranian national Seyed Asadoollah Emamjomeh, who’s known to ship liquefied petroleum gas and crude oil from the country to foreign markets, along with some of his trading companies, an LPG tanker, and his son, Meisam Emamjomeh. It marks a step-up in Washington’s actions against individuals or entities involved in the trade of Iran’s non-crude energy exports.

LPG is a major source of revenue for Tehran, which uses the proceeds to fund its nuclear ambitions and support regional groups including Hezbollah, the Houthis and Hamas, the Treasury said in a statement. Tehran and Washington have restarted talks over Iran’s nuclear program, with Iranian officials asking for guarantees that US sanctions will be lifted in order to address US concerns.

China is a big buyer of Iranian LPG. The Islamic Republic was the No. 2 source for China’s imports of propane, a type of LPG, last year, according to the Energy Information Administration. The US was China’s biggest propane supplier, though that relationship is now threatened by the trade war between the two countries, which has already disrupted flows.

Washington has long targeted Iran’s crude exports. Several rounds of sanctions have affected how the country’s oil was delivered to buyers in China, though flows appear to have recovered. China’s purchases of Iranian oil are often labeled as coming from Malaysia, with the barrels transferred between ships in the waters off the Southeast Asian nation to mask their origins.

Calpine, Constellation, others seek settlement talks over PJM colocation rules
Calpine, Constellation Energy Generation, LS Power and generator trade groups on Tuesday asked the Federal Energy Regulatory Commission to order settlement talks to resolve issues surrounding the PJM Interconnection’s rules for colocating data centers at power plants. FERC should declare that PJM’s colocation rules must be replaced because they lack adequate clarity or consistency on the rates, terms or conditions of service, according to the joint filing by the Electric Power Supply Association; the PJM Power Providers Group, or P3; Calpine; Cogentrix Energy Power Management; Constellation; and LS Power.

The request for 90 days of settlement talks responds to FERC’s review of PJM’s colocation rules, launched by a “show cause” order on Feb. 20. “The Commission should direct parties to this settlement process to identify an acceptable replacement rate that reasonably establishes the services, if any, used by co-located loads, and allocates any costs to such loads (or the generator serving them) consistent with cost causation principles,” the companies and trade groups said.

PJM and transmission owners appear to expect that FERC will order settlement talks, according to the filing. In responses to FERC’s show cause order, PJM offered alternate approaches to colocation that would need stakeholder input, and PJM transmission owners said they “anticipate the potential for further discussion regarding possible changes to tariffs,” the companies and trade groups noted. “An attempt to settle these disputes is clearly worth the effort,” they said. “There is value to a prompt resolution of these heavily contested co-location issues to ensure that the United States does not fall behind in the Artificial Intelligence revolution.”

Colocation arrangements, in which large loads such as data centers are sited at power plants, are becoming popular, but clarity about the rules for the practice is needed …

Industry Body Looks at March Texas Upstream Employment
According to the Texas Independent Producers and Royalty Owners Association’s (TIPRO) analysis, direct Texas upstream employment for March totaled 204,400. That’s what TIPRO said in a statement sent to Rigzone recently, citing the latest Current Employment Statistics (CES) report from the U.S. Bureau of Labor Statistics (BLS). TIPRO highlighted that the March figure was “a decrease of 700 industry positions from February employment numbers, subject to revisions”, representing a decline of 900 jobs in the services sector and an increase of 200 jobs in oil and gas extraction.

“TIPRO’s new workforce data still indicated strong job postings for the Texas oil and natural gas industry,” the organization said in its statement. “According to the association, there were 10,120 active unique job postings for the Texas oil and natural gas industry last month, including 3,458 new postings,” it added. “In comparison, the state of California had 2,777 unique job postings in March, followed by New York (2,892), Florida (1,781), and Colorado (1,438). TIPRO reported a total of 53,285 unique job postings nationwide last month within the oil and natural gas sector,” it continued.

Among the 19 specific industry sectors TIPRO uses to define the Texas oil and natural gas industry, “Gasoline Stations with Convenience Stores led in the ranking for unique job listings in March with 2,806 postings, followed by Support Activities for Oil and Gas Operations (2,247), and Petroleum Refineries (820)”. The leading three cities by total unique oil and natural gas job postings were Houston, with 2,212 postings, Midland, with 635 postings, and Odessa, with 412 postings, TIPRO highlighted. The top three companies ranked by unique job postings in March were Cefco, with 1,200, Love’s, with 726, and Energy Transfer, with 307, according to TIPRO. “Of the top ten companies listed by …

Iberdrola Puts Onstream 3 Solar Projects in US
Spain’s Iberdrola S.A., through its unit Avangrid, commenced commercial operations at the True North Solar photovoltaic plant and started exporting energy from its Camino and Powell Creek solar farms in the United States.

True North Solar, with over 488,000 solar panels and a capacity of 321 MW, can provide energy for nearly 60,000 U.S. homes, making it the company’s largest photovoltaic project in the United States, Iberdrola said. The initiative represents a $369 million investment (EUR 340 million) and created around 300 jobs during peak construction, mainly filled by local residents, according to the company.

The Camino Solar facility in California will begin commercial operation in late spring, the company said. It is equipped with 105,000 panels and represents an investment of $100 million (more than EUR 90 million). The 200-MW Powell Creek facility features 300,000 panels; Iberdrola said it is the company’s second project in the state of Ohio, following the construction of the 304-MW Blue Creek in 2012.

Iberdrola said Avangrid turned to U.S. companies during the construction of these projects. “In addition, Camino Solar has generated $15 million in state taxes (around EUR 14 million), Powell Creek $31 million (more than EUR 27 million) and True North Solar more than $40 million (more than EUR 37 million), directly benefiting public services and surrounding communities, especially schools”, the company said.

True North’s production has expanded Avangrid’s installed capacity in the state of Texas, where it has operated for more than 15 years, Iberdrola said. The company now has seven projects there and a combined installed capacity of nearly 1.6 gigawatts (GW). True North supports the operations of Meta, with which it has signed a long-term power purchase agreement, and will also supply energy to Meta’s upcoming data center in the city of Temple …

Phillips exits FERC, leaving a seat for Trump to fill
Willie Phillips, Federal Energy Regulatory Commission commissioner and former chair, has resigned from the five-member agency, giving President Donald Trump a vacant seat to fill. The move leaves FERC with two Democrats and two Republicans. Phillips, a Democrat, was sworn in as a FERC commissioner on Dec. 3, 2021. He served as chair from Jan. 3, 2023, until January 20. His term was set to end on June 30, 2026. FERC Chairman Mark Christie said in a statement on Tuesday that Phillips was a “dedicated and selfless” public servant. “He and I worked together on many contentious issues to find common ground and get things done to serve the public interest,” Christie said. During Phillips’ tenure as agency head, his stated top priorities were grid reliability, transmission expansion and environmental justice and equity. Under Phillips’ leadership, FERC issued key rulemakings on grid interconnection reform and transmission planning and cost allocation. The agency also expanded its Office of Public Participation to make it easier for the public to take part in FERC proceedings and bolstered its environmental justice efforts. “There’s a general view that he did a good job as chairman,” William Scherman, a partner at Vinson & Elkins, said Tuesday, noting Phillips had bipartisan support. “Willie was somebody who brought a renewed focus on collegiality and accommodation of different points of view.” Although having four sitting commissioners opens the possibility for deadlocked, 2-2 votes, Scherman, who previously worked at FERC as general counsel and chief of staff, said he doubts that will be a problem for the agency. The remaining members, Christie and Lindsay See, both Republicans, and David Rosner and Judy Chang, Democrats, appear to work well together, he said. “They’re all smart and hard-working and competent people who are trying to do the right thing, even when they don’t

West of Orkney developers helped support 24 charities last year
The developers of the 2GW West of Orkney wind farm paid out a total of £18,000 to 24 organisations from its small donations fund in 2024. The money went to projects across Caithness, Sutherland and Orkney, including a mental health initiative in Thurso and a scheme by Dunnet Community Forest to improve the quality of meadows through the use of traditional scythes. Established in 2022, the fund offers up to £1,000 per project towards programmes in the far north. In addition to the small donations fund, the West of Orkney developers intend to follow other wind farms by establishing a community benefit fund once the project is operational. West of Orkney wind farm project director Stuart McAuley said: “Our donations programme is just one small way in which we can support some of the many valuable initiatives in Caithness, Sutherland and Orkney. “In every case we have been immensely impressed by the passion and professionalism each organisation brings, whether their focus is on sport, the arts, social care, education or the environment, and we hope the funds we provide help them achieve their goals.” In addition to the local donations scheme, the wind farm developers have helped fund a £1 million research and development programme led by EMEC in Orkney and a £1.2m education initiative led by UHI. It also provided £50,000 to support the FutureSkills apprenticeship programme in Caithness, with funds going to employment and training costs to help tackle skill shortages in the North of Scotland. The West of Orkney wind farm is being developed by Corio Generation, TotalEnergies and Renewable Infrastructure Development Group (RIDG). The project is among the leaders of the ScotWind cohort, having been the first to submit its offshore consent documents in late 2023. In addition, the project’s onshore plans were approved by the

Biden bans US offshore oil and gas drilling ahead of Trump’s return
US President Joe Biden has announced a ban on offshore oil and gas drilling across vast swathes of the country’s coastal waters. The decision comes just weeks before his successor Donald Trump, who has vowed to increase US fossil fuel production, takes office. The drilling ban will affect 625 million acres of federal waters across America’s eastern and western coasts, the eastern Gulf of Mexico and Alaska’s Northern Bering Sea. The decision does not affect the western Gulf of Mexico, where much of American offshore oil and gas production occurs and is set to continue. In a statement, President Biden said he is taking action to protect the regions “from oil and natural gas drilling and the harm it can cause”. “My decision reflects what coastal communities, businesses, and beachgoers have known for a long time: that drilling off these coasts could cause irreversible damage to places we hold dear and is unnecessary to meet our nation’s energy needs,” Biden said. “It is not worth the risks. “As the climate crisis continues to threaten communities across the country and we are transitioning to a clean energy economy, now is the time to protect these coasts for our children and grandchildren.”
Offshore drilling ban
The White House said Biden used his authority under the 1953 Outer Continental Shelf Lands Act, which allows presidents to withdraw areas from mineral leasing and drilling. However, the law does not give a president the right to unilaterally reverse a drilling ban without congressional approval. This means that Trump, who pledged to “unleash” US fossil fuel production during his re-election campaign, could find it difficult to overturn the ban after taking office. Trump
The Download: our 10 Breakthrough Technologies for 2025
This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
Introducing: MIT Technology Review’s 10 Breakthrough Technologies for 2025
Each year, we spend months researching and discussing which technologies will make the cut for our 10 Breakthrough Technologies list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more. We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. It’s hard to think of another industry that has as much of a hype machine behind it as tech does, so the real secret of the TR10 is really what we choose to leave off the list. Check out the full list of our 10 Breakthrough Technologies for 2025, which is front and center in our latest print issue. It’s all about the exciting innovations happening in the world right now, and includes some fascinating stories, such as:
+ How digital twins of human organs are set to transform medical treatment and shake up how we trial new drugs.
+ What will it take for us to fully trust robots? The answer is a complicated one.
+ Wind is an underutilized resource that has the potential to steer the notoriously dirty shipping industry toward a greener future. Read the full story.
+ After decades of frustration, machine-learning tools are helping ecologists to unlock a treasure trove of acoustic bird data—and to shed much-needed light on their migration habits. Read the full story.
+ How poop could help feed the planet—yes, really. Read the full story.
Roundtables: Unveiling the 10 Breakthrough Technologies of 2025
Last week, Amy Nordrum, our executive editor, joined our news editor Charlotte Jee to unveil our 10 Breakthrough Technologies of 2025 in an exclusive Roundtable discussion. Subscribers can watch their conversation back here. And, if you’re interested in previous discussions about topics ranging from mixed reality tech to gene editing to AI’s climate impact, check out some of the highlights from the past year’s events.
This international surveillance project aims to protect wheat from deadly diseases
For as long as there’s been domesticated wheat (about 8,000 years), there has been harvest-devastating rust. Breeding efforts in the mid-20th century led to rust-resistant wheat strains that boosted crop yields, and rust epidemics receded in much of the world. But now, after decades, rusts are considered a reemerging disease in Europe, at least partly due to climate change. An international initiative hopes to turn the tide by scaling up a system to track wheat diseases and forecast potential outbreaks to governments and farmers in close to real time. And by doing so, they hope to protect a crop that supplies about one-fifth of the world’s calories. Read the full story. —Shaoni Bhattacharya
The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 Meta has taken down its creepy AI profiles
Following a big backlash from unhappy users. (NBC News)
+ Many of the profiles were likely to have been live from as far back as 2023. (404 Media)
+ It also appears they were never very popular in the first place. (The Verge)
2 Uber and Lyft are racing to catch up with their robotaxi rivals
After abandoning their own self-driving projects years ago. (WSJ $)
+ China’s Pony.ai is gearing up to expand to Hong Kong. (Reuters)
3 Elon Musk is going after NASA
He’s largely veered away from criticising the space agency publicly—until now. (Wired $)
+ SpaceX’s Starship rocket has a legion of scientist fans. (The Guardian)
+ What’s next for NASA’s giant moon rocket? (MIT Technology Review)
4 How Sam Altman actually runs OpenAI
Featuring three-hour meetings and a whole lot of Slack messages. (Bloomberg $)
+ ChatGPT Pro is a pricey loss-maker, apparently. (MIT Technology Review)
5 The dangerous allure of TikTok
Migrants’ online portrayals of their experiences in America aren’t always reflective of their realities. (New Yorker $)
6 Demand for electricity is skyrocketing
And AI is only a part of it. (Economist $)
+ AI’s search for more energy is growing more urgent. (MIT Technology Review)
7 The messy ethics of writing religious sermons using AI
Skeptics aren’t convinced the technology should be used to channel spirituality. (NYT $)
8 How a wildlife app became an invaluable wildfire tracker
Watch Duty has become a safeguarding sensation across the US west. (The Guardian)
+ How AI can help spot wildfires. (MIT Technology Review)
9 Computer scientists just love oracles 🔮
Hypothetical devices are a surprisingly important part of computing. (Quanta Magazine)
10 Pet tech is booming 🐾
But not all gadgets are made equal. (FT $)
+ These scientists are working to extend the lifespan of pet dogs—and their owners. (MIT Technology Review)
Quote of the day
“The next kind of wave of this is like, well, what is AI doing for me right now other than telling me that I have AI?” —Anshel Sag, principal analyst at Moor Insights and Strategy, tells Wired a lot of companies’ AI claims are overblown.
The big story
Broadband funding for Native communities could finally connect some of America’s most isolated places
September 2022
Rural and Native communities in the US have long had lower rates of cellular and broadband connectivity than urban areas, where four out of every five Americans live. Outside the cities and suburbs, which occupy barely 3% of US land, reliable internet service can still be hard to come by.
The covid-19 pandemic underscored the problem as Native communities locked down and moved school and other essential daily activities online. But it also kicked off an unprecedented surge of relief funding to solve it. Read the full story. —Robert Chaney
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Rollerskating Spice Girls is exactly what your Monday morning needs.
+ It’s not just you, some people really do look like their dogs!
+ I’m not sure if this is actually the world’s healthiest meal, but it sure looks tasty.
+ Ah, the old “bitten by a rabid fox” chestnut.

Equinor Secures $3 Billion Financing for US Offshore Wind Project
Equinor ASA has announced a final investment decision on Empire Wind 1 and financial close for $3 billion in debt financing for the under-construction project offshore Long Island, expected to power 500,000 New York homes. The Norwegian majority state-owned energy major said in a statement it intends to farm down ownership “to further enhance value and reduce exposure”. Equinor has taken full ownership of Empire Wind 1 and 2 since last year, in a swap transaction with 50 percent co-venturer BP PLC that allowed the former to exit the Beacon Wind lease, also a 50-50 venture between the two. Equinor has yet to complete a portion of the transaction under which it would also acquire BP’s 50 percent share in the South Brooklyn Marine Terminal lease, according to the latest transaction update on Equinor’s website. The lease involves a terminal conversion project that was intended to serve as an interconnection station for Beacon Wind and Empire Wind, as agreed on by the two companies and the state of New York in 2022. “The expected total capital investments, including fees for the use of the South Brooklyn Marine Terminal, are approximately $5 billion including the effect of expected future tax credits (ITCs)”, said the statement on Equinor’s website announcing financial close. Equinor did not disclose its backers, only saying, “The final group of lenders includes some of the most experienced lenders in the sector along with many of Equinor’s relationship banks”. “Empire Wind 1 will be the first offshore wind project to connect into the New York City grid”, the statement added. “The redevelopment of the South Brooklyn Marine Terminal and construction of Empire Wind 1 will create more than 1,000 union jobs in the construction phase”, Equinor said. On February 22, 2024, the Bureau of Ocean Energy Management (BOEM) announced

USA Crude Oil Stocks Drop Week on Week
U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 1.2 million barrels from the week ending December 20 to the week ending December 27, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report, which was released on January 2. Crude oil stocks, excluding the SPR, stood at 415.6 million barrels on December 27, 416.8 million barrels on December 20, and 431.1 million barrels on December 29, 2023, the report revealed. Crude oil in the SPR came in at 393.6 million barrels on December 27, 393.3 million barrels on December 20, and 354.4 million barrels on December 29, 2023, the report showed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.623 billion barrels on December 27, the report revealed. This figure was up 9.6 million barrels week on week and up 17.8 million barrels year on year, the report outlined. “At 415.6 million barrels, U.S. crude oil inventories are about five percent below the five year average for this time of year,” the EIA said in its latest report. “Total motor gasoline inventories increased by 7.7 million barrels from last week and are slightly below the five year average for this time of year. Finished gasoline inventories decreased last week while blending components inventories increased last week,” it added. “Distillate fuel inventories increased by 6.4 million barrels last week and are about six percent below the five year average for this time of year. Propane/propylene inventories decreased by 0.6 million barrels from last week and are 10 percent above the five year average for this time of year,” it went on to state. In the report, the EIA noted
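The week-on-week and year-on-year moves quoted above follow directly from the inventory levels in the report. A quick sketch of that arithmetic, using only the crude-oil figures (excluding the SPR) cited in the article:

```python
# Crude oil inventory levels quoted in the EIA report above, in million barrels.
crude_ex_spr = {
    "2024-12-27": 415.6,  # latest week
    "2024-12-20": 416.8,  # prior week
    "2023-12-29": 431.1,  # year-ago week
}

wow_change = round(crude_ex_spr["2024-12-27"] - crude_ex_spr["2024-12-20"], 1)
yoy_change = round(crude_ex_spr["2024-12-27"] - crude_ex_spr["2023-12-29"], 1)

print(wow_change)  # -1.2, matching the reported 1.2-million-barrel draw
print(yoy_change)  # -15.5, i.e. stocks sit well below year-ago levels
```

The same subtraction applied to the total-petroleum figures (1.623 billion barrels versus the prior week) reproduces the reported 9.6-million-barrel weekly build.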

More telecom firms were breached by Chinese hackers than previously reported
Broader implications for US infrastructure
The Salt Typhoon revelations follow a broader pattern of state-sponsored cyber operations targeting the US technology ecosystem. The telecom sector, serving as a backbone for industries including finance, energy, and transportation, remains particularly vulnerable to such attacks. While Chinese officials have dismissed the accusations as disinformation, the recurring breaches underscore the pressing need for international collaboration and policy enforcement to deter future attacks. The Salt Typhoon campaign has uncovered alarming gaps in the cybersecurity of US telecommunications firms, with breaches now extending to over a dozen networks. Federal agencies and private firms must act swiftly to mitigate risks as adversaries continue to evolve their attack strategies. Strengthening oversight, fostering industry-wide collaboration, and investing in advanced defense mechanisms are essential steps toward safeguarding national security and public trust.

How the brain, with sleep, maps space
Scientists have known for decades that certain neurons in the hippocampus are dedicated to remembering specific locations where an animal has been. More useful, though, is remembering where places are relative to each other, and it hasn’t been clear how those mental maps are formed. A study by MIT neuroscientist Matthew Wilson and colleagues sheds light on that question. The researchers let mice explore mazes freely for about 30 minutes a day for several days. While the animals were wandering and while they were sleeping, the team monitored hundreds of neurons that they had engineered to flash when electrically active. Wilson’s lab has shown that animals essentially refine their memories by dreaming about their experiences. The recordings showed that the “place cells” were equally active for days. But activity in another group of cells, which were only weakly attuned to individual places, gradually changed so that it correlated not with locations, but with activity patterns among other neurons in the network. As this happened, an increasingly accurate cognitive map of the maze took shape. Sleep played a crucial role in this process: When mice explored a new maze twice with a siesta in between, the mental maps of those allowed to sleep during the break showed significant refinement, while those of mice that stayed awake did not. “On day 1, the brain doesn’t represent the space very well,” says research scientist Wei Guo, the study’s lead author. “Neurons represent individual locations, but together they don’t form a map. But on day 5 they form a map. If you want a map, you need all these neurons to work together.”

Odd new tricks from a massive black hole
In 2018 astronomers at MIT and elsewhere observed previously unseen behavior from a black hole known as 1ES 1927+654, which is about as massive as a million suns and sits in a galaxy 270 million light-years away. Its corona—a cloud of whirling, white-hot plasma—suddenly disappeared before reassembling months later. Now members of the team have caught the same object exhibiting another strange pattern: Flashes of x-rays are coming from it at a steadily increasing clip. By looking through observations of the black hole taken by the European Space Agency’s XMM-Newton, a space-based observatory that detects and measures x-ray emissions from extreme cosmic sources, they found that the flashes increased from every 18 minutes to every seven minutes over a two-year period. One possible explanation is that the corona is oscillating. But the researchers believe the most likely culprit is a spinning white dwarf—an extremely compact core of a dead star orbiting around the black hole and getting closer to its event horizon, the boundary beyond which nothing can escape its gravitational pull. Circling closer would mean moving faster, explaining the increasing frequency of x-ray oscillations. If this is the case, the white dwarf could be coming right up to the black hole’s edge without falling in. “This would be the closest thing we know of around any black hole,” says Megan Masterson, a graduate student in physics at MIT, who reported the findings with associate professor Erin Kara and others. If a white dwarf is at the root of the mysterious flashing, it can also be expected to give off gravitational waves, detectable by next-generation observatories such as ESA’s Laser Interferometer Space Antenna (LISA). Its launch is currently planned for the mid-2030s. “The one thing I’ve learned with this source is to never stop looking at it, because it will probably teach us something new,” Masterson says. “The next step is just to keep our eyes open.”
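A rough back-of-the-envelope calculation shows why a seven-minute period puts an orbiting object remarkably close to the event horizon. This is a Newtonian estimate only (a full treatment this close to a black hole needs general relativity), and the million-solar-mass figure is the article's approximation:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M = 1.0e6 * 1.989e30   # ~1 million solar masses, in kg
T = 7 * 60             # flash period of 7 minutes, in seconds

# Newtonian circular orbit via Kepler's third law: r^3 = G * M * T^2 / (4 pi^2)
r_orbit = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)

# Schwarzschild radius of the black hole: r_s = 2 G M / c^2
r_s = 2 * G * M / c**2

print(r_orbit / r_s)  # ~2.8: the orbit sits only a few horizon radii out
```

That the implied orbit lies within a handful of Schwarzschild radii is consistent with the researchers' description of an object "coming right up to the black hole's edge without falling in."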

Cheaper buildings, courtesy of mud
One costly and time-consuming step in constructing a concrete building is creating the “formwork,” the wooden mold into which the concrete is poured. Now MIT researchers have developed a way to replace the wood with lightly treated mud. “What we’ve demonstrated is that we can essentially take the ground we’re standing on, or waste soil from a construction site, and transform it into accurate, highly complex, and flexible formwork for customized concrete structures,” says Sandy Curth, a PhD student in MIT’s Department of Architecture, who has helped spearhead the project. The EarthWorks method, as it’s known, introduces some additives, such as straw, and a waxlike coating to the soil material. Then it’s 3D-printed into a custom-designed shape. “We found a way to make formwork that is infinitely recyclable,” Curth says. “It’s just dirt.” A particular advantage of the technique is that the material’s flexibility makes it easier to create unique shapes optimized so that the resulting buildings use no more concrete than structurally necessary. This can significantly reduce the carbon emissions associated with concrete construction. “What’s cool here is we’re able to make shape-optimized building elements for the same amount of time and energy it would take to make rectilinear building elements,” says Curth, who recently coauthored a paper on the work with MIT professors Lawrence Sass, SM ’94, PhD ’00; Caitlin Mueller ’07, SM ’14, PhD ’14; and others. He has also founded a firm, Forma Systems, through which he hopes to take EarthWorks into the construction industry.

A worldwide road trip for the Institute’s president
Soon after MIT’s 18th president, Sally Kornbluth, was inaugurated in May 2023, she made it a priority to expand her early on-campus listening tour to alumni living and working around the world. She wanted to learn more about their priorities and their connections with MIT, while also engaging them in her expansive vision for its future. This international “presidential welcome tour” brought Kornbluth to cities with large alumni communities, including New York, San Francisco, and Washington, DC, as well as London and Singapore. She mingled with alumni and friends, including MIT donors and the families of current students, at receptions that were followed by fireside chats with MIT alumni leaders. At these events, she underscored the ways alumni and friends can help promote the Institute’s mission—such as volunteering, donating, and spreading positive news from MIT throughout the world. “I think that communication about the wonderful things that are going on at MIT to the broader community is actually really important,” she said. “There’s no place like MIT to address the serious problems of our time.” The impact of the alumni community on MIT’s mission was further articulated by MIT Alumni Association CEO Whitney T. Espich, HM ’24. “You are the walking embodiment of MIT’s values and potential in the world,” she told alums. “It is this community that keeps taking on our toughest problems, healing our planet, leading on AI, and finding grand solutions in tiny quantum dots.” Past MITAA president R. Robert Wickham ’93, SM ’95, who moderated the conversation with President Kornbluth in London, noted that the event gave him and his peers a renewed sense of MIT’s role in meeting the world’s greatest challenges, such as combating climate change, ensuring ethical AI, and treating and curing disease. 
“Energizing the global connectivity of our community is something that’s very important to me as an international alum, so having Sally come to London and meet with so many of our European-based alums was very special,” says Wickham. Natalie Lorenz Anderson ’84, the MITAA’s 2024–’25 president, traveled to Singapore for the tour’s final event. “I have found the president to be an excellent listener, very empathetic, attuned to the audience, and very wise in what she communicates,” says Lorenz Anderson. “There was such palpable energy, and alumni enjoyed hearing from her about the future of MIT. All five of these events have been a terrific way for alumni to get to know her.”

Gooey greatness
A new type of glue developed by researchers from MIT and Germany combines sticky polymers inspired by the mussel with the germ-fighting properties of another natural material: mucus. To stick to a rock or a ship, mussels secrete a fluid full of proteins connected by chemical cross-links. As it happens, similar cross-linking features are found in mucin—a large protein that, besides water, is the primary component of mucus. George Degen, a postdoc in MIT’s Department of Mechanical Engineering and a coauthor of a paper on the work, wondered whether mussel-inspired polymers could link with chemical groups in mucin. To test this idea, he combined solutions of natural mucin proteins with synthetic mussel-inspired polymers and observed how the resulting mixture solidified and stuck to surfaces over time. “It’s like a two-part epoxy. You combine two liquids together, and chemistry starts to occur so that the liquid solidifies while the substance is simultaneously gluing itself to the surface,” Degen says. The resulting gel strongly adheres even to wet surfaces while preventing the buildup of bacteria. The researchers envision that it could be injected or sprayed as a liquid, which would soon turn into a sticky gel. The material might coat medical implants, for example, to help prevent infection. The approach could also be adapted to incorporate other natural materials such as keratin, which might be used in sustainable packaging materials.

Batch data processing is too slow for real-time AI: How open-source Apache Airflow 3.0 solves the challenge with event-driven data orchestration
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Moving data from diverse sources to the right location for AI use is a challenging task. That’s where data orchestration technologies like Apache Airflow fit in. Today, the Apache Airflow community is out with its biggest update in years, with the debut of the 3.0 release. The new release marks the first major version update in four years. Airflow has remained active, though, steadily iterating on the 2.x series, including the 2.9 and 2.10 updates in 2024, which both had a heavy focus on AI. In recent years, data engineers have adopted Apache Airflow as their de facto standard tool: it has established itself as the leading open-source workflow orchestration platform, with over 3,000 contributors and widespread adoption across Fortune 500 companies. There are also multiple commercial services based on the platform, including Astronomer Astro, Google Cloud Composer, Amazon Managed Workflows for Apache Airflow (MWAA) and Microsoft Azure Data Factory Managed Airflow, among others. As organizations struggle to coordinate data workflows across disparate systems, clouds and, increasingly, AI workloads, their orchestration needs have grown. Apache Airflow 3.0 addresses those critical enterprise needs with an architectural redesign that could improve how organizations build and deploy data applications. “To me, Airflow 3 is a new beginning, it is a foundation for a much greater sets of capabilities,” Vikram Koka, Apache Airflow PMC (project management committee) member and Chief Strategy Officer at Astronomer, told VentureBeat in an exclusive interview. 
“This is almost a complete refactor based on what enterprises told us they needed for the next level of mission-critical adoption.” Enterprise data complexity has changed data orchestration needs As businesses increasingly rely on data-driven decision-making, the complexity of data workflows has exploded. Organizations now
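The event-driven orchestration the headline refers to can be caricatured in a few lines. This is a toy sketch, not the actual Airflow API: the `EventDrivenScheduler` class and all names in it are invented purely to contrast event-triggered task runs with fixed batch windows:

```python
from dataclasses import dataclass, field
from typing import Callable

# Toy illustration (not the real Airflow API): an event-driven scheduler runs a
# downstream task as soon as the dataset it depends on is updated, instead of
# waiting for the next fixed cron/batch window.

@dataclass
class EventDrivenScheduler:
    # maps a dataset name to the tasks triggered when that dataset is updated
    subscriptions: dict = field(default_factory=dict)
    run_log: list = field(default_factory=list)

    def on_update(self, dataset: str, task: Callable[[str], str]) -> None:
        # register `task` to fire whenever `dataset` gets fresh data
        self.subscriptions.setdefault(dataset, []).append(task)

    def publish(self, dataset: str) -> None:
        # a producer signals that `dataset` has been updated; subscribers run now
        for task in self.subscriptions.get(dataset, []):
            self.run_log.append(task(dataset))

sched = EventDrivenScheduler()
sched.on_update("sales_raw", lambda ds: f"retrain-model({ds})")
sched.publish("sales_raw")   # the task fires immediately on the update event
print(sched.run_log)         # ['retrain-model(sales_raw)']
```

The contrast with batch processing is the trigger: a cron-style scheduler would run `retrain-model` at, say, midnight regardless of when `sales_raw` actually landed, while the event-driven version reacts the moment the data arrives.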

Former DeepSeeker and collaborators release new method for training reliable AI agents: RAGEN
Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 was, by many expert accounts, supposed to be the year of AI agents — task-specific AI implementations powered by leading large language and multimodal models (LLMs) like the kinds offered by OpenAI, Anthropic, Google, and DeepSeek. But so far, most AI agents remain stuck as experimental pilots in a kind of corporate purgatory, according to a recent poll conducted by VentureBeat on the social network X. Help may be on the way: a collaborative team from Northwestern University, Microsoft, Stanford, and the University of Washington — including a former DeepSeek researcher named Zihan Wang, currently completing a computer science PhD at Northwestern — has introduced RAGEN, a new system for training and evaluating AI agents that they hope makes them more reliable and less brittle for real-world, enterprise-grade usage. Unlike static tasks like math solving or code generation, RAGEN focuses on multi-turn, interactive settings where agents must adapt, remember, and reason in the face of uncertainty. Built on a custom RL framework called StarPO (State-Thinking-Actions-Reward Policy Optimization), the system explores how LLMs can learn through experience rather than memorization. The focus is on entire decision-making trajectories, not just one-step responses. StarPO operates in two interleaved phases: a rollout stage where the LLM generates complete interaction sequences guided by reasoning, and an update stage where the model is optimized using normalized cumulative rewards. This structure supports a more stable and interpretable learning loop compared to standard policy optimization approaches. The authors implemented and tested the framework using fine-tuned variants of Alibaba’s Qwen models, including Qwen 1.5 and Qwen 2.5. 
These models served as the base LLMs for all experiments and were chosen for their open weights and robust instruction-following capabilities. This
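The rollout-then-update loop described above can be caricatured in plain Python. This is not the authors' implementation: the environment and per-step rewards below are invented stand-ins, and the sketch shows only one ingredient the article attributes to StarPO, normalizing cumulative rewards across a batch of trajectories before the policy update:

```python
import random

random.seed(0)

def rollout(horizon: int = 5) -> list:
    # Stand-in for the rollout stage: the "agent" takes `horizon` steps and
    # collects a per-step reward (random here, purely for illustration).
    return [random.random() - 0.4 for _ in range(horizon)]

def normalized_returns(trajectories: list) -> list:
    # Update-stage ingredient: cumulative reward per trajectory, normalized
    # across the batch (zero mean, unit variance) before the policy update.
    totals = [sum(traj) for traj in trajectories]
    mean = sum(totals) / len(totals)
    var = sum((t - mean) ** 2 for t in totals) / len(totals)
    std = var ** 0.5
    if std == 0:
        std = 1.0  # degenerate batch: leave centered values unscaled
    return [(t - mean) / std for t in totals]

batch = [rollout() for _ in range(8)]          # rollout stage
advantages = normalized_returns(batch)         # update-stage preprocessing
# After normalization the batch advantages average to ~0 with unit variance,
# which is what makes the subsequent policy-gradient step more stable.
print(round(sum(advantages), 6))
```

Note that the whole cumulative return of each trajectory is scored, matching the article's point that StarPO optimizes entire decision-making trajectories rather than one-step responses.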

Enphase to absorb bulk of China tariff hit this year: CEO
Dive Brief: Enphase Energy expects to absorb most of the impact of the Trump administration's China tariffs this year as it works to line up non-China battery cell supplies by early 2026, CEO Badri Kothandaraman said Tuesday on the company's first quarter earnings call. Though Enphase could raise battery prices by 6% to 8% later this year, it plans to bear the brunt of triple-digit duties on cells and other battery materials imported from China, which Kothandaraman said accounts for 90% to 95% of global battery cell supply. Enphase reported a 13% decline in U.S. revenue from Q4 2024 due to seasonality and softening demand, it said, amid broader uncertainty around U.S. trade policy and the fate of U.S. tax credits that benefit domestic battery manufacturers and installers.

Dive Insight: Enphase's geographically diversified manufacturing base provides some tariff protection for non-battery products, such as microinverters and electric vehicle charging equipment, Kothandaraman said on the call. Its battery business does face significant cost increases due to China's dominance of the battery supply chain, however. Though the company makes about 25% of its batteries in the United States and plans to further increase that share, it remains reliant on China-made cells for now, Kothandaraman said. U.S. battery distributors and energy storage developers were already bracing for higher import duties on Chinese inputs thanks to an expected increase in tariffs imposed during the Biden administration — but the 145% duty on a range of Chinese imports far exceeds the double-digit tariffs Trump threatened during the 2024 campaign. Administration officials suggested this week that China tariffs could decline to 50% to 65% in the near term without offering details on the timing or scope of the potential change. Looking ahead, Enphase must weigh the impacts of import duties against the higher cost of U.S.

Amazon’s SWE-PolyBench just exposed the dirty secret about your AI coding assistant
Amazon Web Services today introduced SWE-PolyBench, a comprehensive multi-language benchmark designed to evaluate AI coding assistants across a diverse range of programming languages and real-world scenarios. The benchmark addresses significant limitations in existing evaluation frameworks and offers researchers and developers new ways to assess how effectively AI agents navigate complex codebases. “Now they have a benchmark that they can evaluate on to assess whether the coding agents are able to solve complex programming tasks,” said Anoop Deoras, Director of Applied Sciences for Generative AI Applications and Developer Experiences at AWS, in an interview with VentureBeat. “The real world offers you more complex tasks. In order to fix a bug or do feature building, you need to touch multiple files, as opposed to a single file.” The release comes as AI-powered coding tools have exploded in popularity, with major technology companies integrating them into development environments and standalone products. While these tools show impressive capabilities, evaluating their performance has remained challenging — particularly across different programming languages and varying task complexities. SWE-PolyBench contains over 2,000 curated coding challenges derived from real GitHub issues spanning four languages: Java (165 tasks), JavaScript (1,017 tasks), TypeScript (729 tasks), and Python (199 tasks). The benchmark also includes a stratified subset of 500 issues (SWE-PolyBench500) designed for quicker experimentation. “The task diversity and the diversity of the programming languages was missing,” Deoras explained about existing benchmarks. “In SWE-Bench today, there is only a single programming language, Python, and there is a single task: bug fixes. 
In PolyBench, as opposed to SWE-Bench, we have expanded this benchmark to include three additional languages.” The new benchmark directly addresses limitations in SWE-Bench, which has emerged as the de facto standard
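The per-language task counts quoted above can be tallied to see the benchmark's composition. This is a small sketch using only the figures reported in the article; the variable names are illustrative, not part of any AWS tooling.

```python
# Task counts per language in SWE-PolyBench, as reported in the article.
tasks = {"Java": 165, "JavaScript": 1017, "TypeScript": 729, "Python": 199}

# Total is consistent with the article's "over 2,000 curated coding challenges".
total = sum(tasks.values())

# Share of the benchmark each language represents, in percent.
shares = {lang: round(100 * n / total, 1) for lang, n in tasks.items()}
```

The breakdown makes the contrast with SWE-Bench concrete: Python, the only language SWE-Bench covers, accounts for under 10% of SWE-PolyBench's tasks, while JavaScript and TypeScript together make up the large majority.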

Roundtables: Brain-Computer Interfaces: From Promise to Product
Available only to MIT alumni and subscribers.
Recorded on April 23, 2025
Speakers: David Rotman, editor at large, and Antonio Regalado, senior editor for biomedicine. Brain-computer interfaces (BCIs) have been crowned the 11th Breakthrough Technology of 2025 by MIT Technology Review's readers. BCIs are electrodes implanted into the brain that send neural commands to computers, primarily to assist paralyzed people. Hear MIT Technology Review editor at large David Rotman and senior editor for biomedicine Antonio Regalado explore the past, present, and future of BCIs.

OpenAI makes ChatGPT’s image generation available as API
People can now natively incorporate Studio Ghibli-inspired pictures generated by ChatGPT into their businesses. OpenAI has added the model behind its wildly popular image generation tool, used in ChatGPT, to its API. The gpt-image-1 model will allow developers and enterprises to “integrate high-quality, professional-grade image generation directly into their own tools and platforms.” “The model’s versatility allows it to create images across diverse styles, faithfully follow custom guidelines, leverage world knowledge, and accurately render text — unlocking countless practical applications across multiple domains,” OpenAI said in a blog post.

Pricing for the API separates tokens for text and images. Text input tokens (the prompt text) will cost $5 per 1 million tokens. Image input tokens will be $10 per million tokens, while image output tokens, which make up the generated image, will be a whopping $40 per million tokens. Stability AI, by comparison, offers a credit-based system for its API in which one credit equals $0.01; a generation with its flagship Stable Image Ultra costs eight credits. Google’s image generation model, Imagen, charges paying users $0.03 per image generated through the Gemini API.

Image generation in one place

OpenAI allowed ChatGPT users to generate and edit images directly in the chat interface in April, a few months after adding image generation to ChatGPT through the GPT-4o model. The company said image generation in the chat platform “quickly became one of our most popular features.” OpenAI said over 130 million users accessed the feature and created 700 million images in the first week alone. That popularity also presented OpenAI with some challenges: social media users quickly discovered that they could prompt ChatGPT to generate images inspired by the Japanese animation juggernaut Studio Ghibli,
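The per-million-token rates quoted above can be turned into a rough cost estimate. This is a back-of-the-envelope sketch using only the prices in the article; the function and constant names are illustrative, not part of OpenAI's API.

```python
# Rough gpt-image-1 cost math from the per-million-token rates quoted in
# the article (names here are illustrative, not OpenAI API identifiers).
RATES_PER_MILLION = {
    "text_in": 5.00,    # text input tokens (the prompt)
    "image_in": 10.00,  # image input tokens
    "image_out": 40.00, # image output tokens (the generated image)
}

def estimate_cost(text_in=0, image_in=0, image_out=0):
    """Estimate USD cost for a request from its token counts."""
    usage = {"text_in": text_in, "image_in": image_in, "image_out": image_out}
    return sum(RATES_PER_MILLION[k] * n / 1_000_000 for k, n in usage.items())

# e.g. a short prompt plus roughly 50k output tokens of generated image:
cost = estimate_cost(text_in=1_000, image_out=50_000)
```

The asymmetry is the point of the pricing table: output tokens cost 8x text input tokens, so for typical requests the generated image, not the prompt, dominates the bill.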
Stay Ahead with the Paperboy Newsletter
Your weekly dose of insights into AI, Bitcoin mining, Datacenter and Energy industry news. Spend 3-5 minutes and catch up on a week of news.