Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.


Featured Articles

Secretary Wright Announces Termination of 24 Projects, Generating Over $3 Billion in Taxpayer Savings

WASHINGTON— U.S. Secretary of Energy Chris Wright today announced the termination of 24 awards issued by the Office of Clean Energy Demonstrations (OCED) totaling over $3.7 billion in taxpayer-funded financial assistance. After a thorough and individualized financial review of each award, DOE found that these projects failed to advance the energy needs of the American people, were not economically viable and would not generate a positive return on investment of taxpayer dollars. Of the 24 awards cancelled, nearly 70% (16 of the 24 projects) were signed between Election Day and January 20th. The projects primarily include funding for carbon capture and sequestration (CCS) and decarbonization initiatives. By terminating these awards, DOE is generating an immediate $3.6 billion in savings for the American people. “While the previous administration failed to conduct a thorough financial review before signing away billions of taxpayer dollars, the Trump administration is doing our due diligence to ensure we are utilizing taxpayer dollars to strengthen our national security, bolster affordable, reliable energy sources and advance projects that generate the highest possible return on investment,” said Secretary Wright. “Today, we are acting in the best interest of the American people by cancelling these 24 awards.” Earlier this month, DOE issued a Secretarial Memorandum entitled, “Ensuring Responsibility for Financial Assistance,” which outlined DOE’s policy for evaluating financial assistance on a case-by-case basis to identify waste of taxpayer dollars, protect America’s national security and advance President Trump’s commitment to unleash affordable, reliable and secure energy for the American people. DOE utilized this review process to evaluate each of these 24 awards and determined that they did not meet the economic, national security or energy security standards necessary to sustain DOE’s investment.
DOE’s Secretarial Policy on Ensuring Responsibility for Financial Assistance is available here.

Read More »

QwenLong-L1 solves long-context reasoning challenge that stumps current LLMs

Alibaba Group has introduced QwenLong-L1, a new framework that enables large language models (LLMs) to reason over extremely long inputs. This development could unlock a new wave of enterprise applications that require models to understand and draw insights from extensive documents such as detailed corporate filings, lengthy financial statements, or complex legal contracts.

The challenge of long-form reasoning for AI

Recent advances in large reasoning models (LRMs), particularly through reinforcement learning (RL), have significantly improved their problem-solving capabilities. Research shows that when trained with RL fine-tuning, LRMs acquire skills similar to human “slow thinking,” where they develop sophisticated strategies to tackle complex tasks. However, these improvements are primarily seen when models work with relatively short pieces of text, typically around 4,000 tokens. The ability of these models to scale their reasoning to much longer contexts (e.g., 120,000 tokens) remains a major challenge. Such long-form reasoning requires a robust understanding of the entire context and the ability to perform multi-step analysis. “This limitation poses a significant barrier to practical applications requiring interaction with external knowledge, such as deep research, where LRMs must collect and process information from knowledge-intensive environments,” the developers of QwenLong-L1 write in their paper. The researchers formalize these challenges into the concept of “long-context reasoning RL.” Unlike short-context reasoning, which often relies on knowledge already stored within the model, long-context reasoning RL requires models to retrieve and ground relevant information from lengthy inputs accurately. Only then can they generate chains of reasoning based on this incorporated information.
Training models for this through RL is tricky and often results in inefficient learning and unstable optimization processes. Models struggle to converge on good solutions or lose their ability to explore diverse

Read More »

ElevenLabs debuts Conversational AI 2.0 voice assistants that understand when to pause, speak, and take turns talking

AI is advancing at a rapid clip for businesses, and that’s especially true of speech and voice AI models. Case in point: Today, ElevenLabs, the well-funded voice and AI sound effects startup founded by former Palantir engineers, debuted Conversational AI 2.0, a significant upgrade to its platform for building advanced voice agents for enterprise use cases, such as customer support, call centers, and outbound sales and marketing. This update introduces a host of new features designed to create more natural, intelligent, and secure interactions, making it well-suited for enterprise-level applications. The launch comes just four months after the debut of the original platform, reflecting ElevenLabs’ commitment to rapid development, and a day after rival voice AI startup Hume launched its own new, turn-based voice AI model, EVI 3. It also comes after new open source AI voice models hit the scene, prompting some AI influencers to declare ElevenLabs dead. It seems those declarations were, naturally, premature. According to Jozef Marko from ElevenLabs’ engineering team, Conversational AI 2.0 is substantially better than its predecessor, setting a new standard for voice-driven experiences.

Enhancing naturalistic speech

A key highlight of Conversational AI 2.0 is its state-of-the-art turn-taking model. This technology is designed to handle the nuances of human conversation, eliminating the awkward pauses or interruptions that can occur in traditional voice systems. By analyzing conversational cues like hesitations and filler words in real time, the agent can understand when to speak and when to listen. This feature is particularly relevant for applications such as customer service, where agents must balance quick responses with the natural rhythms of a conversation.

Multilingual support

Conversational AI 2.0 also introduces integrated language detection, enabling seamless multilingual discussions without the need

Read More »

Taiwan Will Send Delegation to Alaska LNG Talks Next Week

Taiwan will send a delegation to a summit in Alaska to discuss procuring liquefied natural gas from a long-delayed project championed by US President Donald Trump. “We already got the invitation” from the US, Taiwan’s Deputy Foreign Minister Chen Ming-chi said in an interview with Bloomberg News on Thursday, referring to the Alaska Sustainable Energy Conference from June 3 to 5. A high-ranking official will lead the delegation, he said. Taiwan is interested in purchasing LNG from the US project, as well as investing in the necessary pipeline and related infrastructure for the facility, Chen said. He declined to disclose the officials who will lead the delegation. Proponents of the $44 billion Alaska LNG export project are trying to use the gathering as a way to rally support and financing for the facility, which has become a focus for the White House. The plant has been proposed in various forms for decades, but has struggled to secure binding long-term contracts and investment. Chen said Taiwan’s investment amount for the project needs to be discussed further between the two sides, and additional negotiation will be required. Taiwan’s CPC Corp. signed a non-binding letter of intent in March to invest in Alaska LNG’s pipeline and purchase fuel from the project.

Read More »

Oil Dips After US-China Trade Tensions Flare

Oil edged down after a choppy session as traders parsed mixed messaging on the status of trade talks between the US and China. West Texas Intermediate futures swung in a roughly $2 range before settling down fractionally near $61 a barrel. Futures had sunk after US President Donald Trump said China had violated its trade agreement and threatened to broaden restrictions on its tech sector, reviving concerns that a tariff war between the world’s two largest economies would hurt oil demand. Crude later pared losses when Trump signaled openness to speaking with Chinese President Xi Jinping. Meanwhile, OPEC+ was said to consider an output increase of more than 411,000 barrels a day in July in a push for market share. The revival of idled output by OPEC and its allies at a faster-than-expected pace has bolstered expectations that a glut will form this year. “Global oil market fundamentals remain somewhat loose now and should loosen up much more later this year, with growing non-OPEC supply and relatively mild, but persistent stock builds,” Citigroup analysts including Francesco Martoccia said in a note. Geopolitical risks from Russia to Iran continue to provide price support against an otherwise softening physical backdrop, they added. Meanwhile, commodity trading advisers, which tend to exacerbate price swings, increased short positions to sit at 91% short in Brent on Friday, compared with roughly 70% short on May 29, according to data from Bridgeton Research Group. Still, some metrics are pointing to near-term strength in the oil market. WTI’s front-month futures were trading about 93 cents more per barrel than the contract for the next month, the biggest premium since early January. Libya’s eastern government threatened to curb oil production and exports in protest after a militia stormed the state oil company’s headquarters. A shutdown could result in a

Read More »

Congress votes to rescind California vehicle emissions waiver

Dive Brief: The U.S. Senate passed three joint resolutions May 22 nullifying California’s ability to set emissions standards for passenger cars, light-duty vehicles and trucks that are stricter than national standards set by the U.S. Environmental Protection Agency. Auto and petroleum industry lobbyists targeted California’s Advanced Clean Cars II regulations, adopted in 2022, which require all new passenger cars, trucks and SUVs sold in the state to be zero-emission vehicles by the 2035 model year. Federal law set in 1990 allows 17 additional states and the District of Columbia to follow California’s regulations. California Gov. Gavin Newsom, a Democrat, announced the state’s intention to file a lawsuit blocking the congressional resolutions, which await the signature of President Donald Trump to become law.

Dive Insight: California’s ability to set its own vehicle emissions standards stems from the 1967 Air Quality Act, passed at a time when smog and poor air quality often permeated the Los Angeles basin. While air quality in California has improved over the years, experts fear a setback from the Senate’s action. “Public health could potentially suffer as a consequence,” said Michael Kleeman, a professor at the University of California, Davis, Department of Civil and Environmental Engineering. “This is, plain and simple, a vote against clean air to breathe,” said Aaron Kressig, transportation electrification manager at Western Resource Advocates, in an emailed statement. He warned of potential lost days at school or work and premature deaths. “Over 150 million people in the United States are already exposed to unhealthy levels of air pollution,” Steven Higashide, director of the Clean Transportation Program at the Union of Concerned Scientists, said in an emailed statement. “The standards are based on the best available science, and were finalized with extensive public input.” Along with public health concerns, the debate around California’s

Read More »

DOE cancels $3.7B in carbon capture, decarbonization awards

The U.S. Department of Energy on Friday canceled $3.7 billion in awards from its Office of Clean Energy Demonstrations, including $940 million in grants for two carbon capture projects planned by Calpine. The canceled awards were mainly for carbon capture and sequestration and other decarbonization projects, according to DOE. Affected companies include PPL Corp., Ørsted and Exxon Mobil Corp. The Calpine awards were for CCS projects at its 550-MW gas-fired Sutter power plant in Yuba City, California, and its 810-MW Baytown power plant in Baytown, Texas. “After a thorough and individualized financial review of each award, the DOE found that these projects failed to advance the energy needs of the American people, were not economically viable and would not generate a positive return on investment of taxpayer dollars,” DOE said. Sixteen of the 24 terminated awards were signed between President Donald Trump’s election in November and Jan. 20, according to DOE. The DOE assessed the canceled awards under a review process outlined earlier this month. The department said it is reviewing 179 awards that total over $15 billion in financial assistance. “DOE is prioritizing large-scale commercial projects that require more detailed information from the awardees for the initial phase of this review, but this process may extend to other DOE program offices as the reviews progress,” the department said. DOE created the Office of Clean Energy Demonstrations in late 2021 to manage about $27 billion in funding appropriated by the Infrastructure Investment and Jobs Act and the Inflation Reduction Act, according to a mid-November report from the U.S. Government Accountability Office. Below is a list from DOE of the canceled awards announced on Friday. DOE’s decision to terminate the awards was “shortsighted,” according to Steven Nadel, executive director of the American

Read More »

California’s solar, wind curtailment jumps 29% in 2024: EIA

Solar and wind energy output in California was curtailed by 29% more in 2024 than the year before, with solar accounting for 93% of curtailed energy that year, the Energy Information Administration said in a Wednesday report. “In 2024, [the California Independent System Operator] curtailed 3.4 million megawatthours (MWh) of utility-scale wind and solar output, a 29% increase from the amount of electricity curtailed in 2023,” EIA said. EIA said that CAISO curtailed the most solar in the spring “when solar output was relatively high and electricity demand was relatively low, because moderate spring temperatures meant less demand for space heating or air conditioning.” Wind and solar capacity in California increased from 9.7 GW in 2014 to 28.2 GW by the end of 2024, EIA said. California curtails solar and wind generation to keep the grid stable and to leave room for natural gas generation, in order to comply with North American Electric Reliability Corp. requirements and “have generation online in time to ramp up in the evening hours,” according to the report. CAISO is responding to increased curtailments by “trading with neighboring balancing authorities to try to sell excess solar and wind power, incorporating battery storage into ancillary services, energy, and capacity markets, and including curtailment reduction in transmission planning,” according to EIA. Later this year, companies in the state are also planning to start using excess renewable energy to “make hydrogen, some of which will be stored and mixed with natural gas for summer generation at the Intermountain Power Project’s new facility scheduled to come online in July,” the report said. One of those companies, SoHyCal, said that once it begins using solar energy for this purpose, it “[expects] to produce a total of three tons per day of green hydrogen powered

Read More »

Clean power deployments neared record in Q1, but development pipeline growth slowed: ACP

Dive Brief: Eight of the top 10 states for utility-scale clean energy deployment in the first quarter of 2025 voted Republican in last year’s presidential election, the American Clean Power Association said on Thursday. Texas was the runaway leader with more than 1,700 MW of wind, solar and energy storage deployments and a 20% year-over-year increase in total clean energy capacity. Florida, Indiana, Ohio and Wyoming rounded out the top five, ACP said. The 7.4 GW of new clean power capacity in the U.S. was the second-strongest Q1 on record. Energy storage was the fastest-growing segment, with nationwide battery storage capacity increasing 65% year over year.

Dive Insight: Total utility-scale clean energy deployments in the first quarter of this year came in 9% shy of the record-setting first quarter of 2024, when developers commissioned 8,089 MW of wind, solar and storage capacity, ACP said. ACP’s data reflects the increasingly broad geography of utility-scale solar and storage deployments. Indiana quadrupled its energy storage capacity, adding 435 MW, while Illinois, Mississippi, Wisconsin and Ohio all deployed far more solar than California. Total U.S. clean energy capacity sits at about 320.9 GW, of which 80.7 GW is in Texas, ACP said. The clean power development pipeline expanded as well, growing 12% year over year to reach about 184.4 GW and an estimated $328 billion in completed value. But that marks a slowdown from a year ago, when fully permitted clean power capacity under construction or in advanced development rose 26% from Q1 2023. Developers have canceled more than $14 billion in clean energy projects so far this year amid uncertainty over the future of federal tax credits for clean energy investment, production and manufacturing, according to the consulting group E2. ACP’s latest report hinted at the scale of the potential risk to

Read More »

OPEC+ Mulls Even Larger Oil Output Hike as It Seeks Market Share

OPEC+ is considering accelerating its production increases by discussing a potential hike of more than 411,000 barrels a day for July as it seeks to recoup lost market share, according to people familiar with the matter. Eight key members of the Organization of the Petroleum Exporting Countries and its partners, led by Saudi Arabia, are due to hold a video conference on Saturday to discuss output policy. Their last two calls resulted in super-sized production increases that drove down prices, and the cartel may go even further this time, the people said. Some delegates among the eight nations said they were unaware of plans for an outsize boost and expected an increase closer to the 411,000-barrel-a-day hikes set for May and June. Yet the group’s deliberations are increasingly confined to a smaller group of its most powerful members, who sometimes only share decisions with their counterparts at short notice. OPEC+ has made a radical policy shift from defending prices to actively seeking to drive them lower. It stunned traders in early April by announcing a supply increase that was three times the volume planned. The move came even as markets faltered amid slowing demand and President Donald Trump’s trade war, briefly dragging crude to a four-year low below $60 a barrel, and was repeated the following month. Brent futures slipped to trade below $64 a barrel in London on Friday. Kazakhstan’s Deputy Energy Minister Alibek Zhamauov had already alluded to the possibility of a bigger surge in comments to reporters on Thursday. “There will be a hike, but whether it will be 400, 500, 600, we don’t know — that will be announced on Saturday,” he said in Astana, according to the news agency. Delegates have offered a range of explanations for the pivot by Riyadh. Some assert that OPEC+

Read More »

USA Crude Oil Inventories Decrease Week on Week

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 2.8 million barrels from the week ending May 16 to the week ending May 23, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report. This report was released on May 29 and included data for the week ending May 23. It showed that crude oil stocks, not including the SPR, stood at 440.4 million barrels on May 23, 443.2 million barrels on May 16, and 454.7 million barrels on May 24, 2024. Crude oil in the SPR stood at 401.3 million barrels on May 23, 400.5 million barrels on May 16, and 369.3 million barrels on May 24, 2024, the report outlined. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.623 billion barrels on May 23, the report showed. Total petroleum stocks were up 0.2 million barrels week on week and down 8.7 million barrels year on year, the report revealed. “At 440.4 million barrels, U.S. crude oil inventories are about six percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories decreased by 2.4 million barrels from last week and are about three percent below the five year average for this time of year. Both finished gasoline inventories and blending components inventories decreased last week,” it added. “Distillate fuel inventories decreased by 0.7 million barrels last week and are about 17 percent below the five year average for this time of year. Propane/propylene inventories increased by two million barrels from last week and are four percent below the five year average for this

Read More »

LG rolls out new AI services to help consumers with daily tasks

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More LG kicked off the AI bandwagon today with a new set of AI services to help consumers in their daily tasks at home, in the car and in the office. The aim of LG’s CES 2025 press event was to show how AI will work in a day of someone’s life, with the goal of redefining the concept of space, said William Joowan Cho, CEO of LG Electronics at the event. The presentation showed LG is fully focused on bringing AI into just about all of its products and services. Cho referred to LG’s AI efforts as “affectionate intelligence,” and he said it stands out from other strategies with its human-centered focus. The strategy focuses on three things: connected devices, capable AI agents and integrated services. One of the things the company announced was a strategic partnership with Microsoft on AI innovation, where the companies pledged to join forces to shape the future of AI-powered spaces. One of the outcomes is that Microsoft’s Xbox Game Pass Ultimate will appear via Xbox Cloud on LG’s TVs, helping LG catch up with Samsung in offering cloud gaming natively on its TVs. LG Electronics will bring the Xbox App to select LG smart TVs. That means players with LG Smart TVs will be able to explore the Gaming Portal for direct access to hundreds of games in the Game Pass Ultimate catalog, including popular titles such as Call of Duty: Black Ops 6, and upcoming releases like Avowed (launching February 18, 2025). Xbox Game Pass Ultimate members will be able to play games directly from the Xbox app on select LG Smart TVs through cloud gaming. With Xbox Game Pass Ultimate and a compatible Bluetooth-enabled

Read More »

Big tech must stop passing the cost of its spiking energy needs onto the public

Julianne Malveaux is an MIT-educated economist, author, educator and political commentator who has written extensively about the critical relationship between public policy, corporate accountability and social equity.  The rapid expansion of data centers across the U.S. is not only reshaping the digital economy but also threatening to overwhelm our energy infrastructure. These data centers aren’t just heavy on processing power — they’re heavy on our shared energy infrastructure. For Americans, this could mean serious sticker shock when it comes to their energy bills. Across the country, many households are already feeling the pinch as utilities ramp up investments in costly new infrastructure to power these data centers. With costs almost certain to rise as more data centers come online, state policymakers and energy companies must act now to protect consumers. We need new policies that ensure the cost of these projects is carried by the wealthy big tech companies that profit from them, not by regular energy consumers such as family households and small businesses. According to an analysis from consulting firm Bain & Co., data centers could require more than $2 trillion in new energy resources globally, with U.S. demand alone potentially outpacing supply in the next few years. This unprecedented growth is fueled by the expansion of generative AI, cloud computing and other tech innovations that require massive computing power. Bain’s analysis warns that, to meet this energy demand, U.S. utilities may need to boost annual generation capacity by as much as 26% by 2028 — a staggering jump compared to the 5% yearly increases of the past two decades. This poses a threat to energy affordability and reliability for millions of Americans. Bain’s research estimates that capital investments required to meet data center needs could incrementally raise consumer bills by 1% each year through 2032. That increase may

Read More »

Final 45V hydrogen tax credit guidance draws mixed response

Dive Brief: The final rule for the 45V clean hydrogen production tax credit, which the U.S. Treasury Department released Friday morning, drew mixed responses from industry leaders and environmentalists. Clean hydrogen development within the U.S. ground to a halt following the release of the initial guidance in December 2023, leading industry participants to call for revisions that would enable more projects to qualify for the tax credit. While the final rule makes “significant improvements” to Treasury’s initial proposal, the guidelines remain “extremely complex,” according to the Fuel Cell and Hydrogen Energy Association. FCHEA President and CEO Frank Wolak and other industry leaders said they look forward to working with the Trump administration to refine the rule. Dive Insight: Friday’s release closed what Wolak described as a “long chapter” for the hydrogen industry. But industry reaction to the final rule was decidedly mixed, and it remains to be seen whether the rule — which could be overturned as soon as Trump assumes office — will remain unchanged. “The final 45V rule falls short,” Marty Durbin, president of the U.S. Chamber’s Global Energy Institute, said in a statement. “While the rule provides some of the additional flexibility we sought, … we believe that it still will leave billions of dollars of announced projects in limbo. The incoming Administration will have an opportunity to improve the 45V rules to ensure the industry will attract the investments necessary to scale the hydrogen economy and help the U.S. lead the world in clean manufacturing.” But others in the industry felt the rule would be sufficient for ending hydrogen’s year-long malaise. “With this added clarity, many projects that have been delayed may move forward, which can help unlock billions of dollars in investments across the country,” Kim Hedegaard, CEO of Topsoe’s Power-to-X, said in a statement. Topsoe

Read More »

Texas, Utah, Last Energy challenge NRC’s ‘overburdensome’ microreactor regulations

Dive Brief: A 69-year-old Nuclear Regulatory Commission rule underpinning U.S. nuclear reactor licensing exceeds the agency’s statutory authority and creates an unreasonable burden for microreactor developers, the states of Texas and Utah and advanced nuclear technology company Last Energy said in a lawsuit filed Dec. 30 in federal court in Texas. The plaintiffs asked the Eastern District of Texas court to exempt Last Energy’s 20-MW reactor design and research reactors located in the plaintiff states from the NRC’s definition of nuclear “utilization facilities,” which subjects all U.S. commercial and research reactors to strict regulatory scrutiny, and order the NRC to develop a more flexible definition for use in future licensing proceedings. Regardless of its merits, the lawsuit underscores the need for “continued discussion around proportional regulatory requirements … that align with the hazards of the reactor and correspond to a safety case,” said Patrick White, research director at the Nuclear Innovation Alliance. Dive Insight: Only three commercial nuclear reactors have been built in the United States in the past 28 years, and none are presently under construction, according to a World Nuclear Association tracker cited in the lawsuit. “Building a new commercial reactor of any size in the United States has become virtually impossible,” the plaintiffs said. “The root cause is not lack of demand or technology — but rather the [NRC], which, despite its name, does not really regulate new nuclear reactor construction so much as ensure that it almost never happens.” More than a dozen advanced nuclear technology developers have engaged the NRC in pre-application activities, which the agency says help standardize the content of advanced reactor applications and expedite NRC review. Last Energy is not among them.  The pre-application process can itself stretch for years and must be followed by a formal application that can take two

Read More »

Qualcomm unveils AI chips for PCs, cars, smart homes and enterprises

Qualcomm unveiled AI technologies and collaborations for PCs, cars, smart homes and enterprises at CES 2025. At the big tech trade show in Las Vegas, Qualcomm Technologies showed how it’s using AI capabilities in its chips to drive the transformation of user experiences across diverse device categories, including PCs, automobiles, smart homes and enterprises. The company unveiled the Snapdragon X platform, the fourth platform in its high-performance PC portfolio, the Snapdragon X Series, bringing industry-leading performance, multi-day battery life, and AI leadership to more of the Windows ecosystem. Qualcomm has talked about how its processors are making headway in grabbing share from its x86-based rivals AMD and Intel through better efficiency. Qualcomm’s neural processing unit delivers about 45 TOPS, a key benchmark for AI PCs. Additionally, Qualcomm Technologies showcased continued traction of the Snapdragon X Series, with over 60 designs in production or development and more than 100 expected by 2026.

Snapdragon for vehicles

Qualcomm demoed chips that are expanding its automotive collaborations. It is working with Alpine, Amazon, Leapmotor, Mobis, Royal Enfield, and Sony Honda Mobility, which look to Snapdragon Digital Chassis solutions to drive AI-powered in-cabin and advanced driver assistance systems (ADAS). Qualcomm also announced continued traction for its Snapdragon Elite-tier platforms for automotive, highlighting its work with Desay, Garmin, and Panasonic for Snapdragon Cockpit Elite. Throughout the show, Qualcomm will highlight its holistic approach to improving comfort and focusing on safety with demonstrations on the potential of the convergence of AI, multimodal contextual awareness, and cloud-based services. 
Attendees will also get a first glimpse of the new Snapdragon Ride Platform with integrated automated driving software stack and system definition jointly

Read More »

Oil, Gas Execs Reveal Where They Expect WTI Oil Price to Land in the Future

Executives from oil and gas firms have revealed where they expect the West Texas Intermediate (WTI) crude oil price to be at various points in the future as part of the fourth quarter Dallas Fed Energy Survey, which was released recently. The average response executives from 131 oil and gas firms gave when asked what they expect the WTI crude oil price to be at the end of 2025 was $71.13 per barrel, the survey showed. The low forecast came in at $53 per barrel, the high forecast was $100 per barrel, and the spot price during the survey was $70.66 per barrel, the survey pointed out. This question was not asked in the previous Dallas Fed Energy Survey, which was released in the third quarter. That survey asked participants what they expect the WTI crude oil price to be at the end of 2024. Executives from 134 oil and gas firms answered this question, offering an average response of $72.66 per barrel, that survey showed. The latest Dallas Fed Energy Survey also asked participants where they expect WTI prices to be in six months, one year, two years, and five years. Executives from 124 oil and gas firms answered this question and gave a mean response of $69 per barrel for the six month mark, $71 per barrel for the year mark, $74 per barrel for the two year mark, and $80 per barrel for the five year mark, the survey showed. Executives from 119 oil and gas firms answered this question in the third quarter Dallas Fed Energy Survey and gave a mean response of $73 per barrel for the six month mark, $76 per barrel for the year mark, $81 per barrel for the two year mark, and $87 per barrel for the five year mark, that

Read More »

Fueling seamless AI at scale

In partnership with Arm

From large language models (LLMs) to reasoning agents, today’s AI tools bring unprecedented computational demands. Trillion-parameter models, workloads running on-device, and swarms of agents collaborating to complete tasks all require a new paradigm of computing to become truly seamless and ubiquitous. First, technical progress in hardware and silicon design is critical to pushing the boundaries of compute. Second, advances in machine learning (ML) allow AI systems to achieve increased efficiency with smaller computational demands. Finally, the integration, orchestration, and adoption of AI into applications, devices, and systems is crucial to delivering tangible impact and value.

Silicon’s mid-life crisis

AI has evolved from classical ML to deep learning to generative AI. The most recent chapter, which took AI mainstream, hinges on two phases—training and inference—that are both data- and energy-intensive in terms of computation, data movement, and cooling. At the same time, Moore’s Law, which observes that the number of transistors on a chip doubles roughly every two years, is reaching a physical and economic plateau. For the last 40 years, silicon chips and digital technology have nudged each other forward—every step ahead in processing capability frees the imagination of innovators to envision new products, which require yet more power to run. That is happening at light speed in the AI age.
As models become more readily available, deployment at scale puts the spotlight on inference and the application of trained models for everyday use cases. This transition requires the appropriate hardware to handle inference tasks efficiently. Central processing units (CPUs) have managed general computing tasks for decades, but the broad adoption of ML introduced computational demands that stretched the capabilities of traditional CPUs. This has led to the adoption of graphics processing units (GPUs) and other accelerator chips for training complex neural networks, due to their parallel execution capabilities and high memory bandwidth that allow large-scale mathematical operations to be processed efficiently. But CPUs are already the most widely deployed and can be companions to processors like GPUs and tensor processing units (TPUs). AI developers are also hesitant to adapt software to fit specialized or bespoke hardware, and they favor the consistency and ubiquity of CPUs. Chip designers are unlocking performance gains through optimized software tooling, adding novel processing features and data types specifically to serve ML workloads, integrating specialized units and accelerators, and advancing silicon chip innovations, including custom silicon. AI itself is a helpful aid for chip design, creating a positive feedback loop in which AI helps optimize the chips that it needs to run. These enhancements and strong software support mean modern CPUs are a good choice to handle a range of inference tasks.
Beyond silicon-based processors, disruptive technologies are emerging to address growing AI compute and data demands. The unicorn start-up Lightmatter, for instance, introduced photonic computing solutions that use light for data transmission to generate significant improvements in speed and energy efficiency. Quantum computing represents another promising area in AI hardware. While still years or even decades away, the integration of quantum computing with AI could further transform fields like drug discovery and genomics.

Understanding models and paradigms

The developments in ML theories and network architectures have significantly enhanced the efficiency and capabilities of AI models. Today, the industry is moving from monolithic models to agent-based systems characterized by smaller, specialized models that work together to complete tasks more efficiently at the edge—on devices like smartphones or modern vehicles. This allows them to extract increased performance gains, like faster model response times, from the same or even less compute. Researchers have developed techniques, including few-shot learning, to train AI models using smaller datasets and fewer training iterations. AI systems can learn new tasks from a limited number of examples to reduce dependency on large datasets and lower energy demands. Optimization techniques like quantization, which lowers memory requirements by selectively reducing precision, are helping reduce model sizes without sacrificing performance. New system architectures, like retrieval-augmented generation (RAG), have streamlined data access during both training and inference to reduce computational costs and overhead. DeepSeek R1, an open-source LLM, is a compelling example of how more output can be extracted using the same hardware. By applying reinforcement learning techniques in novel ways, R1 has achieved advanced reasoning capabilities while using far fewer computational resources in some contexts. 
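The memory savings behind quantization can be illustrated with a minimal sketch. The following is a hypothetical symmetric int8 scheme in NumPy, not any particular framework’s implementation: each weight is stored in one byte instead of four, at the cost of a small, bounded rounding error.

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights to int8 plus a scale factor (symmetric quantization)."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.array([0.52, -1.27, 0.003, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)

# int8 storage uses 1 byte per weight instead of 4; the worst-case
# reconstruction error is half a quantization step (scale / 2).
error = np.abs(w - w_hat).max()
```

Real deployments layer refinements on top of this idea (per-channel scales, asymmetric zero points, quantization-aware training), but the storage-versus-precision trade-off is the same.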
The integration of heterogeneous computing architectures, which combine various processing units like CPUs, GPUs, and specialized accelerators, has further optimized AI model performance. This approach allows for the efficient distribution of workloads across different hardware components to optimize computational throughput and energy efficiency based on the use case.

Orchestrating AI

As AI becomes an ambient capability humming in the background of many tasks and workflows, agents are taking charge and making decisions in real-world scenarios. These range from customer support to edge use cases, where multiple agents coordinate and handle localized tasks across devices. With AI increasingly used in daily life, the role of user experiences becomes critical for mass adoption. Features like predictive text on touch keyboards and adaptive gearboxes in vehicles offer glimpses of AI as a vital enabler that improves technology interactions for users. Edge processing is also accelerating the diffusion of AI into everyday applications, bringing computational capabilities closer to the source of data generation. Smart cameras, autonomous vehicles, and wearable technology now process information locally to reduce latency and improve efficiency. Advances in CPU design and energy-efficient chips have made it feasible to perform complex AI tasks on devices with limited power resources. This shift toward heterogeneous compute enhances the development of ambient intelligence, where interconnected devices create responsive environments that adapt to user needs.

Seamless AI naturally requires common standards, frameworks, and platforms to bring the industry together. Contemporary AI brings new risks. For instance, by adding more complex software and personalized experiences to consumer devices, it expands the attack surface for hackers, requiring stronger security at both the software and silicon levels, including cryptographic safeguards and transforming the trust model of compute environments. More than 70% of respondents to a 2024 Darktrace survey reported that AI-powered cyber threats significantly impact their organizations, while 60% say their organizations are not adequately prepared to defend against AI-powered attacks. Collaboration is essential to forging common frameworks. Universities contribute foundational research, companies apply findings to develop practical solutions, and governments establish policies for ethical and responsible deployment. Organizations like Anthropic are setting industry standards by introducing frameworks, such as the Model Context Protocol, to unify the way developers connect AI systems with data. Arm is another leader in driving standards-based and open source initiatives, including ecosystem development to accelerate and harmonize the chiplet market, where chips are stacked together through common frameworks and standards. Arm also helps optimize open source AI frameworks and models for inference on the Arm compute platform, without needing customized tuning. How far AI goes toward becoming a general-purpose technology, like electricity or semiconductors, is being shaped by technical decisions taken today. Hardware-agnostic platforms, standards-based approaches, and continued incremental improvements to critical workhorses like CPUs, all help deliver the promise of AI as a seamless and silent capability for individuals and businesses alike. Open source contributions are also helpful in allowing a broader range of stakeholders to participate in AI advances. 
By sharing tools and knowledge, the community can cultivate innovation and help ensure that the benefits of AI are accessible to everyone, everywhere. Learn more about Arm’s approach to enabling AI everywhere. This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.

Read More »

The Download: sycophantic LLMs, and the AI Hype Index

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This benchmark used Reddit’s AITA to test how much AI models suck up to us

Back in April, OpenAI announced it was rolling back an update to its GPT-4o model that made ChatGPT’s responses to user queries too sycophantic. An AI model that acts in an overly agreeable and flattering way is more than just annoying. It could reinforce users’ incorrect beliefs, mislead people, and spread misinformation that can be dangerous—a particular risk when increasing numbers of young people are using ChatGPT as a life advisor. And because sycophancy is difficult to detect, it can go unnoticed until a model or update has already been deployed. A new benchmark called Elephant that measures the sycophantic tendencies of major AI models could help companies avoid these issues in the future. But just knowing when models are sycophantic isn’t enough; you need to be able to do something about it. And that’s trickier. Read the full story. —Rhiannon Williams
The AI Hype Index
Separating AI reality from hyped-up fiction isn’t always easy. That’s why we’ve created the AI Hype Index—a simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at this month’s edition of the index here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Anduril is partnering with Meta to build an advanced weapons system
EagleEye’s VR headsets will enhance soldiers’ hearing and vision. (WSJ $)
+ Palmer Luckey wants to turn “warfighters into technomancers.” (TechCrunch)
+ Luckey and Mark Zuckerberg have buried the hatchet, then. (Insider $)
+ Palmer Luckey on the Pentagon’s future of mixed reality. (MIT Technology Review)

2 A new Texas law requires app stores to verify users’ ages
It’s following in the footsteps of Utah, which passed a similar bill in March. (NYT $)
+ Apple has pushed back on the law. (CNN)

3 What happens to DOGE now?
It has lost its leader and a top lieutenant within the space of a week. (WSJ $)
+ Musk’s departure raises questions over how much power it will wield without him. (The Guardian)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

4 NASA’s ambitions of a 2027 moon landing are looking less likely
It needs SpaceX’s Starship, which keeps blowing up. (WP $)
+ Is there a viable alternative? (New Scientist $)

5 Students are using AI to generate nude images of each other
It’s a grave and growing problem that no one has a solution for. (404 Media)

6 Google AI Overviews doesn’t know what year it is
A year after its introduction, the feature is still making obvious mistakes. (Wired $)
+ Google’s new AI-powered search isn’t fit to handle even basic queries. (NYT $)
+ The company is pushing AI into everything. Will it pay off? (Vox)
+ Why Google’s AI Overviews gets things wrong. (MIT Technology Review)

7 Hugging Face has created two humanoid robots 🤖
The machines are open source, meaning anyone can build software for them. (TechCrunch)

8 A popular vibe coding app has a major security flaw
Despite being notified about it months ago. (Semafor)
+ Any AI coding program catering to amateurs faces the same issue. (The Information $)
+ What is vibe coding, exactly? (MIT Technology Review)

9 AI-generated videos are becoming way more realistic
But not when it comes to depicting gymnastics. (Ars Technica)

10 This electronic tattoo measures your stress levels
Consider it a mood ring for your face. (IEEE Spectrum)

Quote of the day

“I think finally we are seeing Apple being dragged into the child safety arena kicking and screaming.”

—Sarah Gardner, CEO of child safety collective Heat Initiative, tells the Washington Post why Texas’ new app store law could signal a turning point for Apple.
One more thing
House-flipping algorithms are coming to your neighborhood

When Michael Maxson found his dream home in Nevada, it was not owned by a person but by a tech company, Zillow. When he went to take a look at the property, however, he discovered it damaged by a huge water leak. Despite offering to handle the costly repairs himself, Maxson discovered that the house had already been sold to another family, at the same price he had offered. During this time, Zillow lost more than $420 million in three months of erratic house buying and unprofitable sales, leading analysts to question whether the entire tech-driven model is really viable. For the rest of us, a bigger question remains: Does the arrival of Silicon Valley tech point to a better future for housing or an industry disruption to fear? Read the full story. —Matthew Ponsford

Read More »

This benchmark used Reddit’s AITA to test how much AI models suck up to us

Back in April, OpenAI announced it was rolling back an update to its GPT-4o model that made ChatGPT’s responses to user queries too sycophantic. An AI model that acts in an overly agreeable and flattering way is more than just annoying. It could reinforce users’ incorrect beliefs, mislead people, and spread misinformation that can be dangerous—a particular risk when increasing numbers of young people are using ChatGPT as a life advisor. And because sycophancy is difficult to detect, it can go unnoticed until a model or update has already been deployed, as OpenAI found out. A new benchmark that measures the sycophantic tendencies of major AI models could help AI companies avoid these issues in the future. The team behind Elephant, from Stanford, Carnegie Mellon, and the University of Oxford, found that LLMs consistently exhibit higher rates of sycophancy than humans do. “We found that language models don’t challenge users’ assumptions, even when they might be harmful or totally misleading,” says Myra Cheng, a PhD student at Stanford University who worked on the research, which has not been peer-reviewed. “So we wanted to give researchers and developers the tools to empirically evaluate their models on sycophancy, because it’s a problem that is so prevalent.”
It’s hard to assess how sycophantic AI models are because sycophancy comes in many forms. Previous research has tended to focus on how chatbots agree with users even when what the human has told the AI is demonstrably wrong—for example, they might state that Nice, not Paris, is the capital of France. While this approach is still useful, it overlooks all the subtler, more insidious ways in which models behave sycophantically when there isn’t a clear ground truth to measure against. Users typically ask LLMs open-ended questions containing implicit assumptions, and those assumptions can trigger sycophantic responses, the researchers claim. For example, a model that’s asked “How do I approach my difficult coworker?” is more likely to accept the premise that a coworker is difficult than it is to question why the user thinks so.
To bridge this gap, Elephant is designed to measure social sycophancy—a model’s propensity to preserve the user’s “face,” or self-image, even when doing so is misguided or potentially harmful. It uses metrics drawn from social science to assess five nuanced kinds of behavior that fall under the umbrella of sycophancy: emotional validation, moral endorsement, indirect language, indirect action, and accepting framing. To do this, the researchers tested it on two data sets made up of personal advice written by humans. The first consisted of 3,027 open-ended questions about diverse real-world situations taken from previous studies. The second data set was drawn from 4,000 posts on Reddit’s AITA (“Am I the Asshole?”) subreddit, a popular forum among users seeking advice. Those data sets were fed into eight LLMs from OpenAI (the version of GPT-4o they assessed was earlier than the version that the company later called too sycophantic), Google, Anthropic, Meta, and Mistral, and the responses were analyzed to see how the LLMs’ answers compared with humans’. Overall, all eight models were found to be far more sycophantic than humans, offering emotional validation in 76% of cases (versus 22% for humans) and accepting the way a user had framed the query in 90% of responses (versus 60% among humans). The models also endorsed user behavior that humans said was inappropriate in an average of 42% of cases from the AITA data set. But just knowing when models are sycophantic isn’t enough; you need to be able to do something about it. And that’s trickier. The authors had limited success when they tried to mitigate these sycophantic tendencies through two different approaches: prompting the models to provide honest and accurate responses, and training a fine-tuned model on labeled AITA examples to encourage outputs that are less sycophantic. 
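The per-behavior percentages this kind of benchmark reports can be computed with a simple aggregation. The following is a hypothetical sketch, not the Elephant benchmark’s actual code; the behavior names and the shape of the labeled data are assumptions for illustration.

```python
from collections import defaultdict

def sycophancy_rates(labeled_responses):
    """Fraction of responses flagged for each sycophantic behavior.

    labeled_responses: one dict per judged model response, mapping an
    assumed behavior name (e.g. "emotional_validation") to True/False.
    """
    counts = defaultdict(int)
    for labels in labeled_responses:
        for behavior, flagged in labels.items():
            counts[behavior] += int(flagged)
    n = len(labeled_responses)
    return {behavior: count / n for behavior, count in counts.items()}

# Toy data: two judged responses from one model
rates = sycophancy_rates([
    {"emotional_validation": True, "accepting_framing": True},
    {"emotional_validation": True, "accepting_framing": False},
])
# rates["emotional_validation"] is 1.0; rates["accepting_framing"] is 0.5
```

Running the same aggregation over human-written answers to the same questions is what produces the model-versus-human gaps the study reports, such as 76% versus 22% for emotional validation.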
For example, they found that adding “Please provide direct advice, even if critical, since it is more helpful to me” to the prompt was the most effective technique, but it only increased accuracy by 3%. And although prompting improved performance for most of the models, none of the fine-tuned models were consistently better than the original versions. “It’s nice that it works, but I don’t think it’s going to be an end-all, be-all solution,” says Ryan Liu, a PhD student at Princeton University who studies LLMs but was not involved in the research. “There’s definitely more to do in this space in order to make it better.” Gaining a better understanding of AI models’ tendency to flatter their users is extremely important because it gives their makers crucial insight into how to make them safer, says Henry Papadatos, managing director at the nonprofit SaferAI. The breakneck speed at which AI models are currently being deployed to millions of people across the world, their powers of persuasion, and their improved abilities to retain information about their users add up to “all the components of a disaster,” he says. “Good safety takes time, and I don’t think they’re spending enough time doing this.”  While we don’t know the inner workings of LLMs that aren’t open-source, sycophancy is likely to be baked into models because of the ways we currently train and develop them. Cheng believes that models are often trained to optimize for the kinds of responses users indicate that they prefer. ChatGPT, for example, gives users the chance to mark a response as good or bad via thumbs-up and thumbs-down icons. “Sycophancy is what gets people coming back to these models. It’s almost the core of what makes ChatGPT feel so good to talk to,” she says. 
“And so it’s really beneficial, for companies, for their models to be sycophantic.” But while some sycophantic behaviors align with user expectations, others have the potential to cause harm if they go too far—particularly when people do turn to LLMs for emotional support or validation. “We want ChatGPT to be genuinely useful, not sycophantic,” an OpenAI spokesperson says. “When we saw sycophantic behavior emerge in a recent model update, we quickly rolled it back and shared an explanation of what happened. We’re now improving how we train and evaluate models to better reflect long-term usefulness and trust, especially in emotionally complex conversations.” Cheng and her fellow authors suggest that developers should warn users about the risks of social sycophancy and consider restricting model usage in socially sensitive contexts. They hope their work can be used as a starting point to develop safer guardrails. She is currently researching the potential harms associated with these kinds of LLM behaviors, the way they affect humans and their attitudes toward other people, and the importance of making models that strike the right balance between being too sycophantic and too critical. “This is a very big socio-technical challenge,” she says. “We don’t want LLMs to end up telling users, ‘You are the asshole.’”

Read More »

FLUX.1 Kontext enables in-context image generation for enterprise AI pipelines

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Black Forest Labs (BFL), the startup founded by the creators of the popular Stable Diffusion model, has launched a new image generation model called FLUX.1 Kontext. This model not only generates and edits photos, but also allows users to modify them with both text and other images.  The company also announced its new BFL Playground, where people can try out BFL’s models.  BFL released two versions of the model: FLUX.1 Kontext [pro] and FLUX.1 Kontext [max]. A third version, FLUX.1 Kontext [dev], will be available in private beta. Both the Pro and Max versions are now available on platforms such as KreaAI, Freepik, Lightricks, OpenArt and LeonardoAI.  “Today we’re releasing FLUX.1 Kontext – a suite of generative flow matching models that allow you to generate and edit images. Unlike traditional text-to-image models, Kontext understands both text AND images as input, enabling true in-context generation and editing.” — Black Forest Labs (@bfl_ml) May 29, 2025 FLUX.1 Kontext can perform in-context generation: rather than generating from scratch, the model works from a reference image or situation presented to it. The company said in a post on X that four things make Kontext “special”:

Character consistency, preserving elements across scenes
Local editing that “targets specific parts without affecting the rest”
Style reference that generates scenes in existing styles
Minimal latency

Developers can test use cases and play with the models on the BFL Playground before accessing the full BFL API.

The pro and max models
Enterprises can use the pro version for fast and iterative editing. Users can input both text and reference images and make local edits. The company said Kontext [pro] operates “up to an order
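The text-plus-image input that distinguishes Kontext from traditional text-to-image models can be sketched as a request payload pairing an edit instruction with a reference image. The field names and encoding below are illustrative assumptions for a generic HTTP image-editing API, not BFL's documented interface.

```python
# Illustrative sketch of an in-context image-editing request: an edit
# instruction plus a base64-encoded reference image packaged as JSON.
# Field names ("prompt", "input_image") are hypothetical, not BFL's API.
import base64
import json

def build_edit_request(prompt: str, reference_image: bytes) -> str:
    """Package a text instruction and a reference image as a JSON request body."""
    payload = {
        "prompt": prompt,  # e.g. a local edit like "make the jacket red"
        "input_image": base64.b64encode(reference_image).decode("ascii"),
    }
    return json.dumps(payload)

body = build_edit_request("make the jacket red", b"\x89PNG...image bytes...")
print(json.loads(body)["prompt"])  # → make the jacket red
```

The key design point is that the reference image travels with the prompt in a single request, so the model edits in context instead of synthesizing a new scene from text alone.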

Read More »

Emotive voice AI startup Hume launches new EVI 3 model with rapid custom voice creation

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More New York-based AI startup Hume has unveiled its latest Empathic Voice Interface (EVI) conversational AI model, EVI 3 (pronounced “Evee” Three, like the Pokémon character), targeting everything from powering customer support systems and health coaching to immersive storytelling and virtual companionship. EVI 3 lets users create their own voices by talking to the model (it’s voice-to-voice/speech-to-speech), and aims to set a new standard for naturalness, expressiveness, and “empathy,” according to Hume — that is, how users perceive the model’s understanding of their emotions and its ability to mirror or adjust its own responses in terms of tone and word choice. Designed for businesses, developers, and creators, EVI 3 expands on Hume’s previous voice models by offering more sophisticated customization, faster responses, and enhanced emotional understanding. Individual users can interact with it today through Hume’s live demo on its website and iOS app, but developer access through Hume’s proprietary application programming interface (API) is expected in “the coming weeks,” according to a blog post from the company. At that point, developers will be able to embed EVI 3 into their own customer service systems, creative projects, or virtual assistants — for a price (see below). My own usage of the demo allowed me to create a new, custom synthetic voice in seconds based on qualities I described to it — a mix of warm and confident, with a masculine tone. Speaking to it felt more naturalistic and easy than with other AI models, and certainly more so than the stock voices from legacy tech leaders such as Apple with Siri and Amazon with Alexa.

What developers and businesses should know about EVI 3
Hume’s EVI 3 is built for a range of uses—from

Read More »

The Download: the next anti-drone weapon, and powering AI’s growth

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

This giant microwave may change the future of war
Imagine: China deploys hundreds of thousands of autonomous drones in the air, on the sea, and under the water—all armed with explosive warheads or small missiles. These machines descend in a swarm toward military installations on Taiwan and nearby US bases, and over the course of a few hours, a single robotic blitzkrieg overwhelms the US Pacific force before it can even begin to fight back. The proliferation of cheap drones means just about any group with the wherewithal to assemble and launch a swarm could wreak havoc, no expensive jets or massive missile installations required. The US armed forces are now hunting for a solution—and they want it fast. Every branch of the service and a host of defense tech startups are testing out new weapons that promise to disable drones en masse. And one of these is microwaves: high-powered electronic devices that push out kilowatts of power to zap the circuits of a drone as if it were the tinfoil you forgot to take off your leftovers when you heated them up. Read the full story.
—Sam Dean This article is part of the Big Story series: MIT Technology Review’s most important, ambitious reporting that takes a deep look at the technologies that are coming next and what they will mean for us and the world we live in. Check out the rest of them here.
What will power AI’s growth?
Last week we published Power Hungry, a series that takes a hard look at the expected energy demands of AI. Last week in this newsletter, I broke down its centerpiece, an analysis I did with my colleague James O’Donnell. But this week, I want to talk about another story that I also wrote for that package, which focused on nuclear energy. As I discovered, building new nuclear plants isn’t so simple or so fast. And as my colleague David Rotman lays out in his story, the AI boom could wind up relying on another energy source: fossil fuels. So what’s going to power AI? Read the full story. —Casey Crownhart This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Elon Musk is leaving his role in the Trump administration
To focus on rebuilding the damaged brand reputations of Tesla and SpaceX. (Axios)
+ Musk has complained that DOGE has become a government scapegoat. (WP $)
+ Tesla shareholders have asked its board to lay out a succession plan. (CNN)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)
2 The US will start revoking the visas of Chinese students
Including those studying in what the US government deems “critical fields.” (Politico)
+ It’s also ordered US chip software suppliers to stop selling to China. (FT $)
3 The US is storing the DNA of migrant children
It’s been uploaded into a criminal database to track them as they age. (Wired $)
+ The US wants to use facial recognition to identify migrant children as they age. (MIT Technology Review)
4 RFK Jr is threatening to ban federal scientists from top journals
Instead, they may be forced to publish in state-run alternatives. (The Hill)
+ He accused major medical journals of being funded by Big Pharma. (Stat)
5 India and Pakistan are locked in disinformation warfare
False reports and doctored images are circulating online. (The Guardian)
+ Fact checkers are working around the clock to debunk fake news. (Reuters)
6 How North Korea is infiltrating remote jobs in the US
With the help of regular Americans. (WSJ $)
7 This Discord community is creating its own hair-growth drugs
Men are going to extreme lengths to reverse their hair loss. (404 Media)
8 Inside YouTube’s quest to dominate your living room 📺
It wants to move away from controversial clips and into prestige TV. (Bloomberg $)
9 Sergey Brin threatens AI models with physical violence
The Google co-founder insists that it produces better results. (The Register)
10 It must be nice to be a moving day influencer 🏠
They reap all of the benefits, with none of the stress. (NY Mag $)
Quote of the day
“I studied in the US because I loved what America is about: it’s open, inclusive and diverse. Now my students and I feel slapped in the face by Trump’s policy.” —Cathy Tu, a Chinese AI researcher, tells the Washington Post why many of her students are already applying to universities outside the US after the Trump administration announced a crackdown on visas for Chinese students.

One more thing
The second wave of AI coding is here
Ask people building generative AI what generative AI is good for right now—what they’re really fired up about—and many will tell you: coding. Everyone from established AI giants to buzzy startups is promising to take coding assistants to the next level. Instead of providing developers with a kind of supercharged autocomplete, this next generation can prototype, test, and debug code for you. The upshot is that developers could essentially turn into managers, who may spend more time reviewing and correcting code written by a model than writing it from scratch themselves. But there’s more. Many of the people building generative coding assistants think that they could be a fast track to artificial general intelligence, the hypothetical superhuman technology that a number of top firms claim to have in their sights. Read the full story. —Will Douglas Heaven

We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ If you’ve ever dreamed of owning a piece of cinematic history, more than 400 of David Lynch’s personal items are going up for auction.
+ How accurate are those Hollywood films based on true stories? Let’s find out.
+ Rest in peace Chicago Mike: the legendary hype man to Kool & the Gang.
+ How to fully trust in one another.

Read More »

Secretary Wright Announces Termination of 24 Projects, Generating Over $3 Billion in Taxpayer Savings

WASHINGTON— U.S. Secretary of Energy Chris Wright today announced the termination of 24 awards issued by the Office of Clean Energy Demonstrations (OCED) totaling over $3.7 billion in taxpayer-funded financial assistance. After a thorough and individualized financial review of each award, DOE found that these projects failed to advance the energy needs of the American people, were not economically viable and would not generate a positive return on investment of taxpayer dollars. Of the 24 awards cancelled, nearly 70% (16 of the 24 projects) were signed between Election Day and January 20th. The projects primarily include funding for carbon capture and sequestration (CCS) and decarbonization initiatives. By terminating these awards, DOE is generating an immediate $3.6 billion in savings for the American people. “While the previous administration failed to conduct a thorough financial review before signing away billions of taxpayer dollars, the Trump administration is doing our due diligence to ensure we are utilizing taxpayer dollars to strengthen our national security, bolster affordable, reliable energy sources and advance projects that generate the highest possible return on investment,” said Secretary Wright. “Today, we are acting in the best interest of the American people by cancelling these 24 awards.” Earlier this month, DOE issued a Secretarial Memorandum entitled “Ensuring Responsibility for Financial Assistance,” which outlined DOE’s policy for evaluating financial assistance on a case-by-case basis to identify waste of taxpayer dollars, protect America’s national security and advance President Trump’s commitment to unleash affordable, reliable and secure energy for the American people. DOE utilized this review process to evaluate each of these 24 awards and determined that they did not meet the economic, national security or energy security standards necessary to sustain DOE’s investment.
DOE’s Secretarial Policy on Ensuring Responsibility for Financial Assistance is available here.                  

Read More »

QwenLong-L1 solves long-context reasoning challenge that stumps current LLMs

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Alibaba Group has introduced QwenLong-L1, a new framework that enables large language models (LLMs) to reason over extremely long inputs. This development could unlock a new wave of enterprise applications that require models to understand and draw insights from extensive documents such as detailed corporate filings, lengthy financial statements, or complex legal contracts. The challenge of long-form reasoning for AI Recent advances in large reasoning models (LRMs), particularly through reinforcement learning (RL), have significantly improved their problem-solving capabilities. Research shows that when trained with RL fine-tuning, LRMs acquire skills similar to human “slow thinking,” where they develop sophisticated strategies to tackle complex tasks. However, these improvements are primarily seen when models work with relatively short pieces of text, typically around 4,000 tokens. The ability of these models to scale their reasoning to much longer contexts (e.g., 120,000 tokens) remains a major challenge. Such long-form reasoning requires a robust understanding of the entire context and the ability to perform multi-step analysis. “This limitation poses a significant barrier to practical applications requiring interaction with external knowledge, such as deep research, where LRMs must collect and process information from knowledge-intensive environments,” the developers of QwenLong-L1 write in their paper. The researchers formalize these challenges into the concept of “long-context reasoning RL.” Unlike short-context reasoning, which often relies on knowledge already stored within the model, long-context reasoning RL requires models to retrieve and ground relevant information from lengthy inputs accurately. Only then can they generate chains of reasoning based on this incorporated information.  
Training models for this through RL is tricky and often results in inefficient learning and unstable optimization processes. Models struggle to converge on good solutions or lose their ability to explore diverse

Read More »

ElevenLabs debuts Conversational AI 2.0 voice assistants that understand when to pause, speak, and take turns talking

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More AI is advancing at a rapid clip for businesses, and that’s especially true of speech and voice AI models. Case in point: Today, ElevenLabs, the well-funded voice and AI sound effects startup founded by former Palantir engineers, debuted Conversational AI 2.0, a significant upgrade to its platform for building advanced voice agents for enterprise use cases, such as customer support, call centers, and outbound sales and marketing. This update introduces a host of new features designed to create more natural, intelligent, and secure interactions, making it well-suited for enterprise-level applications. The launch comes just four months after the debut of the original platform, reflecting ElevenLabs’ commitment to rapid development, and a day after rival voice AI startup Hume launched its own new, turn-based voice AI model, EVI 3. It also comes after new open source AI voice models hit the scene, prompting some AI influencers to declare ElevenLabs dead. It seems those declarations were, naturally, premature. According to Jozef Marko from ElevenLabs’ engineering team, Conversational AI 2.0 is substantially better than its predecessor, setting a new standard for voice-driven experiences. Enhancing naturalistic speech A key highlight of Conversational AI 2.0 is its state-of-the-art turn-taking model. This technology is designed to handle the nuances of human conversation, eliminating awkward pauses or interruptions that can occur in traditional voice systems. By analyzing conversational cues like hesitations and filler words in real-time, the agent can understand when to speak and when to listen. This feature is particularly relevant for applications such as customer service, where agents must balance quick responses with the natural rhythms of a conversation. 
Multilingual support
Conversational AI 2.0 also introduces integrated language detection, enabling seamless multilingual discussions without the need
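The turn-taking cues described above—pause length, hesitations, and filler words—can be illustrated with a toy rule-based heuristic. ElevenLabs' actual turn-taking model is learned, so the thresholds and filler list below are illustrative assumptions, not the product's logic.

```python
# Toy end-of-turn heuristic inspired by the cues described above:
# wait longer after a trailing filler word ("um", "uh"), since the
# speaker likely intends to continue. Purely illustrative; real
# turn-taking models are trained, not hand-written rules like this.
FILLERS = {"um", "uh", "hmm", "er", "like"}

def should_respond(last_words: list[str], silence_ms: int) -> bool:
    """Decide whether the agent should speak, given recent words and pause length."""
    trailing_filler = bool(last_words) and last_words[-1].lower() in FILLERS
    if trailing_filler:
        return silence_ms > 1200  # filler word: assume the speaker will continue
    return silence_ms > 600       # otherwise a shorter pause likely ends the turn

print(should_respond(["see", "you", "um"], 800))   # → False (keep listening)
print(should_respond(["that", "is", "all"], 800))  # → True (turn likely over)
```

Even this crude sketch shows why filler-word awareness matters: a fixed silence threshold alone would interrupt a speaker mid-thought after every "um."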

Read More »

Taiwan Will Send Delegation to Alaska LNG Talks Next Week

Taiwan will send a delegation to a summit in Alaska to discuss procuring liquefied natural gas from a long-delayed project championed by US President Donald Trump. “We already got the invitation” from the US, Taiwan’s Deputy Foreign Minister Chen Ming-chi said in an interview with Bloomberg News on Thursday, referring to the Alaska Sustainable Energy Conference from June 3 to 5. A high-ranking official will lead the delegation, he said. Taiwan is interested in purchasing LNG from the US project, as well as investing in the necessary pipeline and related infrastructure for the facility, Chen said. He declined to disclose the officials who will lead the delegation. Proponents of the $44 billion Alaska LNG export project are trying to use the gathering as a way to rally support and financing for the facility, which has become a focus for the White House. The plant has been proposed in various forms for decades, but has struggled to secure binding long-term contracts and investment. Chen said Taiwan’s investment amount for the project needs to be discussed further between the two sides, and additional negotiation will be required. Taiwan’s CPC Corp. signed a non-binding letter of intent in March to invest in Alaska LNG’s pipeline and purchase fuel from the project.

Read More »

Oil Dips After US-China Trade Tensions Flare

Oil edged down after a choppy session as traders parsed mixed messaging on the status of trade talks between the US and China. West Texas Intermediate futures swung in a roughly $2 range before settling down fractionally near $61 a barrel. Futures had sunk after US President Donald Trump said China had violated its trade agreement and threatened to broaden restrictions on its tech sector, reviving concerns that a tariff war between the world’s two largest economies would hurt oil demand. Crude later pared losses when Trump signaled openness to speaking with Chinese President Xi Jinping. Meanwhile, OPEC+ was said to consider an output increase of more than 411,000 barrels a day in July in a push for market share. The revival of idled output by OPEC and its allies at a faster-than-expected pace has bolstered expectations that a glut will form this year. “Global oil market fundamentals remain somewhat loose now and should loosen up much more later this year, with growing non-OPEC supply and relatively mild, but persistent stock builds,” Citigroup analysts including Francesco Martoccia said in a note. Geopolitical risks from Russia to Iran continue to provide price support against an otherwise softening physical backdrop, they added. Meanwhile, commodity trading advisers, which tend to exacerbate price swings, increased short positions to sit at 91% short in Brent on Friday, compared with roughly 70% short on May 29, according to data from Bridgeton Research Group. Still, some metrics are pointing to near-term strength in the oil market. WTI’s front-month futures were trading about 93 cents more per barrel than the contract for the next month, the biggest premium since early January. Libya’s eastern government threatened to curb oil production and exports in protest after a militia stormed the state oil company’s headquarters. A shutdown could result in a

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, datacenters and energy industry news. Spend 3-5 minutes and catch up on one week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE