Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.


Featured Articles

AI agents are hitting a liability wall. Mixus has a plan to overcome it using human overseers on high-risk workflows

While enterprises face the challenges of deploying AI agents in critical applications, a new, more pragmatic model is emerging that puts humans back in control as a strategic safeguard against AI failure. One such example is Mixus, a platform that uses a “colleague-in-the-loop” approach to make AI agents reliable for mission-critical work. This approach is a response to the growing evidence that fully autonomous agents are a high-stakes gamble.

The high cost of unchecked AI

The problem of AI hallucinations has become a tangible risk as companies explore AI applications. In a recent incident, the AI-powered code editor Cursor saw its own support bot invent a fake policy restricting subscriptions, sparking a wave of public customer cancellations. Similarly, the fintech company Klarna famously reversed course on replacing customer service agents with AI after admitting the move resulted in lower quality. In a more alarming case, New York City’s AI-powered business chatbot advised entrepreneurs to engage in illegal practices, highlighting the catastrophic compliance risks of unmonitored agents. These incidents are symptoms of a larger capability gap. According to a May 2025 Salesforce research paper, today’s leading agents succeed only 58% of the time on single-step tasks and just 35% of the time on multi-step ones, highlighting “a significant gap between current LLM capabilities and the multifaceted demands of real-world enterprise scenarios.”

The colleague-in-the-loop model

To bridge this gap, a new approach focuses on structured human oversight. “An AI agent should act at your direction and on your behalf,” Mixus co-founder Elliot Katz told VentureBeat. “But without built-in organizational oversight, fully autonomous agents often create more problems than they solve.” This philosophy underpins Mixus’s colleague-in-the-loop model, which embeds human

Read More »

New York Gov. Hochul hints at ‘fleet-style approach’ to nuclear deployments

Dive Brief:

New York could take a page from Ontario’s playbook and deploy multiple reactors to reach, and possibly exceed, the 1-GW target Democratic Gov. Kathy Hochul announced on Monday, analysts with the Clean Air Task Force said in an interview. Whether the New York Power Authority ultimately selects a large light-water reactor like the Westinghouse AP1000 or multiple units of a small modular design like the GE Hitachi BWRX-300, lessons learned on recent and ongoing nuclear builds could translate to lower final costs, said John Carlson, CATF’s senior Northeast regional policy manager. That could enable a “fleet-style approach” to deployment similar to Ontario Power Generation’s plan to build four 300-MW BWRX-300 reactors in sequence, lowering the final cost per unit, said Victor Ibarra, senior manager for CATF’s advanced nuclear energy program. On Monday, Hochul said the plan would “allow for future collaboration with other states and Ontario.”

Dive Insight:

Gov. Hochul on Monday directed NYPA and the New York Department of Public Service “to develop at least one new nuclear energy facility with a combined capacity of no less than one gigawatt of electricity, either alone or in partnership with private entities,” in upstate New York. As governor, Hochul has considerable influence over NYPA, the state-owned electric utility. In February, for example, she “demand[ed]” NYPA suspend a proposed rate hike. Hochul’s announcement made no mention of specific reactor types or designs, but the suggestion that multiple plants could be in the offing suggests NYPA could consider small modular designs alongside a large light-water reactor, Ibarra said. “It’s good that they’re taking a minute to explore both options,” Carlson said. “I don’t think they know which one is most beneficial yet.” Hochul said NYPA would immediately begin evaluating “technologies, business models and locations” for the first plant. The preconstruction process will

Read More »

FERC’s Christie calls for dispatchable resources after grid operators come ‘close to the edge’

The ability of Midcontinent and East Coast grid operators to narrowly handle this week’s extreme heat and humidity without blackouts reflects the urgent need to ensure the United States has adequate power supplies, according to Mark Christie, chairman of the Federal Energy Regulatory Commission. “We’re simply not building generation fast enough, and we’re not keeping generation that we need to keep,” Christie said Thursday during a media briefing after the agency’s open meeting. “Some of our systems really came close to the edge.” The PJM Interconnection, the largest U.S. grid operator, hit a peak load of about 161 GW on Monday, nearly 5% above its 154 GW peak demand forecast for this summer and the highest demand on its system since 2011. The grid operator had about 10 GW to spare at the peak, according to Christie. At that peak, PJM’s fuel mix included gas at about 44%, nuclear at 20%, coal at 19%, solar at 5% and wind at 4%, according to Christie. Also, PJM told Christie that demand response was “essential” in reducing load, he said. PJM used nearly 4,000 MW of demand response to reduce its load, according to FERC Commissioner Judy Chang. “I see load flexibility as a key tool for grid operators to meet the challenges that we face,” Chang said. PJM called on demand response resources on Monday in its mid-Atlantic and Dominion regions, on Tuesday across its footprint and on Wednesday in its eastern zones, according to Dan Lockwood, a PJM spokesman. PJM was within its reserve requirements, but used DR to provide additional resources for the grid, he said in an email. Resource adequacy is the “central issue” facing the U.S., according to Christie, who said blackouts during the extreme heat could have been deadly. “You never know about the next time,

Read More »
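The PJM figures Christie cited hang together arithmetically. A quick back-of-the-envelope check, using only the rounded numbers quoted in the article above, confirms them:

```python
# Sanity-check the PJM numbers cited by Christie (all rounded, per the article).
peak_load_gw = 161.0   # Monday peak load
forecast_gw = 154.0    # summer peak demand forecast
spare_gw = 10.0        # headroom PJM had at the peak, per Christie

# "nearly 5% above its 154 GW peak demand forecast"
overshoot_pct = (peak_load_gw - forecast_gw) / forecast_gw * 100

# Implied resources online, and the reserve margin at the peak
available_gw = peak_load_gw + spare_gw
reserve_margin_pct = spare_gw / peak_load_gw * 100

print(f"Demand exceeded forecast by {overshoot_pct:.1f}%")       # ≈ 4.5%, i.e. "nearly 5%"
print(f"Implied available capacity: {available_gw:.0f} GW")      # ≈ 171 GW
print(f"Reserve margin at the peak: {reserve_margin_pct:.1f}%")  # ≈ 6.2%
```

The roughly 4,000 MW of demand response Chang cited is, on these numbers, a meaningful fraction of the ~10 GW of headroom PJM had at the peak.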

Dangote Plans to List Africa’s Biggest Oil Refinery by Next Year

Aliko Dangote, Africa’s richest person, plans a stock listing for his Nigerian crude oil refinery by the end of next year to widen the company’s investor base. The billionaire also plans this year to list the group’s urea plant, which has the capacity to produce 2.8 million tons of the crop nutrient per annum, Dangote told the African Export-Import Bank’s annual general meeting in Nigeria’s capital, Abuja, on Friday. The oil facility can process 650,000 barrels of crude a day, making it the continent’s biggest refinery. Nigeria’s downstream regulator and fuel marketers have accused Dangote of seeking to become a monopoly with his new refinery. A listing — through an initial public offering — could help woo investors including state-owned pension funds. The $20 billion Dangote Refinery outside the commercial hub Lagos, which became operational last year, currently produces aviation fuel, naphtha, diesel and gasoline.

Monopoly Accusation

It’s “important to list the refinery so that people will not be calling us a monopoly,” Dangote said. “They will now say we have shares, so let everybody have a part of it.” The tycoon, who had planned to start construction of a 5,000-ton steel plant after completing the refinery, last year scrapped the proposal because of the allegations. Dangote earlier this year said his group is on track to generate total revenue of $30 billion in 2026. On Friday, he said that the company plans to surpass Qatar as the world’s biggest exporter of urea within four years. The facility currently exports 37% of its output to the US.

Read More »

Energy Department Withdraws from Biden-Era Columbia River System Memorandum of Understanding

WASHINGTON — U.S. Secretary of Energy Chris Wright today announced that the Department of Energy, in coordination with the White House Council on Environmental Quality (CEQ), the Departments of Commerce and the Interior and the U.S. Army Corps of Engineers, has officially withdrawn from the Columbia River System Memorandum of Understanding (MOU). Today’s action follows President Trump’s Memorandum directing the federal government to halt the Biden Administration’s radical Columbia River basin policy and will ensure Americans living in the Pacific Northwest can continue to rely on affordable hydropower from the Lower Snake River dams to help meet their growing power needs.

“The Pacific Northwest deserves energy security, not energy scarcity. Dams in the Columbia River Basin have provided affordable and reliable electricity to millions of American families and businesses for decades,” said Energy Secretary Chris Wright. “Thanks to President Trump’s leadership, American taxpayer dollars will not be spent dismantling critical infrastructure, reducing our energy-generating capacity or on radical nonsense policies that dramatically raise prices on the American people. This Administration will continue to protect America’s critical energy infrastructure and ensure reliable, affordable power for all Americans.”

BACKGROUND:

On June 10, 2025, President Trump signed the Presidential Memorandum, Stopping Radical Environmentalism to Generate Power for the Columbia River Basin, revoking the prior Presidential Memorandum, Restoring Healthy and Abundant Salmon, Steelhead, and Other Native Fish Populations in the Columbia River Basin, part of the radical green energy agenda calling for “equitable treatment for fish.” The Biden-era MOU required the federal government to spend over $1 billion and comply with 36 pages of costly, onerous commitments aimed at replacing services provided by the Lower Snake River Dams and advancing the possibility of breaching them. Breaching the dams would have doubled the region’s risk of power shortages, driven wholesale electricity rates up by as much

Read More »

CTGT wins Best Presentation Style award at VB Transform 2025

San Francisco-based CTGT, a startup focused on making AI more trustworthy through feature-level model customization, won the Best Presentation Style award at VB Transform 2025 in San Francisco. Founded by 23-year-old Cyril Gorlla, the company showcased how its technology helps enterprises overcome AI trust barriers by directly modifying model features instead of using traditional fine-tuning or prompt engineering methods. During his presentation, Gorlla highlighted the “AI Doom Loop” faced by many enterprises: 54% of businesses cite AI as their highest tech risk, according to Deloitte, while McKinsey reports 44% of organizations have experienced negative consequences from AI implementation. “A large part of this conference has been about the AI doom loop,” Gorlla explained during his presentation. “Unfortunately, a lot of these [AI investments] don’t pan out. J&J just canceled hundreds of AI pilots because they didn’t really deliver ROI due to no fundamental trust in these systems.”

Breaking the AI compute wall

CTGT’s approach represents a significant departure from conventional AI customization techniques. The company was founded on research Gorlla conducted while holding an endowed chair at the University of California San Diego. In 2023, Gorlla published a paper at the International Conference on Learning Representations (ICLR) describing a method for evaluating and training AI models that was up to 500 times faster than existing approaches while achieving “three nines” (99.9%) of accuracy. Rather than relying on brute-force scaling or traditional deep learning methods, CTGT has developed what it calls an “entirely new AI stack” that fundamentally reimagines how neural networks learn. The company’s innovation focuses on understanding and intervening at the feature level of AI models. The company’s approach differs fundamentally from standard interpretability solutions that

Read More »

Russian Fuel Flows Decline to Lowest in 8 Months on Baltic Slump

Russia’s oil product exports dropped in June to the lowest in eight months amid extended work at refineries supplying Baltic ports, coupled with efforts to stabilize domestic fuel supplies before the upcoming seasonal surge in agricultural and holiday consumption. Seaborne shipments of refined fuels totaled 2 million barrels a day in the first 20 days of June, according to data compiled by Bloomberg from analytics firm Vortexa Ltd. That’s the lowest monthly tally since October and an 8% decline compared with both the previous month and June of last year. Flows from Baltic ports recorded the sharpest drop, of more than 15% from May levels. Russian seaborne oil flows are closely watched by the market to assess the country’s production, since official data has been classified. Crude outflows slid to the lowest since mid-April, led by maintenance-related disruptions at a key Pacific port and compounded by a decline from the Baltic. Oil processing rates have ramped up this month as refineries wrap up seasonal maintenance. However, volumes available for export may be curbed by government initiatives to boost stockpiles to meet growing fuel demand from agricultural activity and summer travel. Diesel exports were largely flat, while flows of refinery feedstocks like vacuum gasoil, used by secondary units such as fluid catalytic crackers, jumped this month. Outflows of all other major fuels slumped. Most of the decline in fuel flows was concentrated in the Baltic ports, indicating extended turnarounds at refineries that usually supply these terminals. “Drone strikes earlier this year could have extended the turnaround time for both primary and secondary units,” according to Mick Strautmann, a market analyst at Vortexa. The spike in vacuum gasoil flows out of Ust-Luga in the Baltic suggests more serious disruptions at downstream units in the region, he

Read More »

Oil Steady as OPEC+ Weighs Output Hike

Oil held steady as traders weighed the uncertain status of nuclear talks between the US and Iran against reports that OPEC+ may extend its run of super-sized production increases. West Texas Intermediate edged up to settle above $65 a barrel after swinging between gains and losses. Bloomberg reported that several OPEC delegates, who asked not to be identified, said their countries are ready to consider another 411,000 barrel-a-day increase for August when they convene on July 6, following similarly sized hikes agreed upon in each of the previous three months. While that figure is broadly in line with expectations, “the indications are that the group may go beyond the 411,000 barrel-a-day increase,” said John Kilduff, a partner at Again Capital. “Next, we should hear about the voluntary cuts undershooting the goal from the group laggards. I expect the ultimate decision to be bearish for prices.” Crude had earlier advanced as much as 1.3% after US Energy Secretary Chris Wright told Bloomberg that sanctions against Iran will remain in place for now, and US President Donald Trump said he dropped plans to ease Iran sanctions. The statement came just days after the president claimed that Iran and the US would meet for nuclear talks as soon as next week, which Iran denied. Oil still ended the week down roughly 13% — snapping three weeks of gains — after a ceasefire in the Israel-Iran conflict was reached, easing concerns about supply disruptions from a region that pumps about a third of the world’s crude. The focus has largely reverted to fundamental catalysts, including OPEC moves. Russia now also appears more receptive to a fresh output boost, in a reversal of an earlier stance, raising concerns of a supply overhang in the second half of the year. Investors have also turned their attention to progress on

Read More »

Oil Tanker Rates Collapse as Conflict in Middle East Abates

The cost of shipping Middle East crude to customers in Asia collapsed on Thursday, the latest sign of oil markets returning to normal after conflict eased in the world’s top petroleum-exporting region. Charter rates slumped by 17% to 55.50 industry-standard Worldscale points, according to data from the Baltic Exchange in London. That works out at roughly $1.60 a barrel. “Risk premiums have naturally faded,” said Fredrik Dybwad, an analyst at Fearnley Securities AS. “There is ample vessel availability, and considering normal seasonality, rates should naturally find a lower level.” Shipping prices soared two weeks ago amid concern Iran might disrupt maritime traffic around the Strait of Hormuz, a vital waterway through which 20% of the world’s oil and liquefied natural gas must pass. After almost two weeks of fighting between Iran and Israel that began on June 13, a ceasefire has taken hold, hitting oil prices and lowering the risks for ships that enter the region. The Joint Maritime Information Center, a naval liaison with commercial shipping in the region, said Thursday that no hostilities had been reported in the Strait of Hormuz over the past 48 hours and that traffic had returned to normal levels. “A sustained period of inactivity and strengthening of the ceasefire agreement will stabilize maritime tension in the Arabian Gulf,” it said in a note. “Now that the market has become sanguine about Iran shutting the Strait of Hormuz, ships are running fluidly again, the premium has been removed, and rates are correcting lower meaningfully,” said Jonathan Chappell, senior managing director at Evercore ISI. The Worldscale system is designed to let owners and charterers quickly calculate the latest earnings and per-barrel costs on thousands of trade routes. Vessels on the industry’s benchmark Saudi Arabia-to-China route are earning $35,281 a day, according to the Baltic Exchange. They were

Read More »
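The article notes that Worldscale exists so owners and charterers can quickly convert published points into per-barrel costs. As a rough sketch of that conversion: WS points are a percentage of a route's published "flat rate". The ~$2.88/bbl flat rate below is backed out from the article's own figures (WS 55.5 ≈ $1.60 a barrel); it is an inference for illustration, not a published number.

```python
def freight_cost_per_bbl(ws_points: float, flat_rate_per_bbl: float) -> float:
    """Worldscale conversion: charter cost = (WS points / 100) x route flat rate."""
    return flat_rate_per_bbl * ws_points / 100

# Back out an approximate per-barrel flat rate from the article's figures
# (55.5 WS points "works out at roughly $1.60 a barrel") -- an inference.
implied_flat_rate = 1.60 / (55.5 / 100)   # ≈ $2.88 per barrel

# The 17% slump implies a pre-ceasefire rate of roughly WS 67.
pre_slump_ws = 55.50 / (1 - 0.17)
print(f"Implied flat rate: ${implied_flat_rate:.2f}/bbl")
print(f"Pre-slump cost at WS {pre_slump_ws:.0f}: "
      f"${freight_cost_per_bbl(pre_slump_ws, implied_flat_rate):.2f}/bbl")  # ≈ $1.93
```

In practice flat rates are published annually per route in $/tonne; the per-barrel framing here simply mirrors the article's own units.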

Equinor, Shell Unveil Name of UK North Sea JV

Shell PLC and Equinor ASA have named their United Kingdom North Sea joint venture Adura, which they announced in December as the biggest independent producer on the UK side of the sea. “Work continues towards securing regulatory approvals, with launch of the IJV [incorporated JV] expected by the end of this year”, Norway’s majority state-owned Equinor said in an online statement. Adura, which will be equally owned, combines the two companies’ offshore assets in the UK, where Shell currently produces over 100,000 barrels of oil equivalent a day (boed) and Equinor about 38,000 boed. “Adura is expected to produce over 140,000 barrels of oil equivalent per day in 2025”, Equinor said. The name Adura is “rooted in their [the companies’] respective heritage and focused on shaping the future of the basin in the years ahead”, Equinor explained. “Adura has been created to bring together the A of Aberdeen and the dura of durability. It’s a company built on firm foundations, much like the strong granite synonymous with the city”. “Adura will sustain domestic oil and gas production and security of energy supply in the UK and beyond”, Equinor added. Adura will include Equinor’s 29.89 percent stake in the CNOOC Ltd.-operated Buzzard field, which started production in 2007; a 65.11 percent operating stake in Mariner, online since 2019; and an 80 percent operating stake in Rosebank, expected to come onstream in 2026. Shell will contribute its 27.97 percent ownership in the BP PLC-operated Clair field, which began production in 2005; a 50 percent operating stake in Gannet, started up in 1992; a 100 percent stake in Jackdaw, for which Shell plans to seek a new consent following a court nullification; a 21.23 percent operating stake in Nelson, which started production in 1994; a 50 percent operating stake in Penguins, which started production in 2003; and a 92.52 percent operating stake in Pierce,

Read More »

New York offering up to $750K for facility decarbonization projects

Dive Brief: New York state is offering up to $750,000 in state cost-sharing funding for building and campus decarbonization efforts that use ground-source heat pumps, waste heat recovery, thermal energy storage and other low-emissions technologies. Applications are due July 31. The New York State Energy Research and Development Authority’s Large-Scale Thermal program encourages property owners to pursue high-efficiency, “grid-friendly” electrification projects, NYSERDA Program Manager Sue Dougherty said in a presentation at the International District Energy Association annual conference earlier this month. The $10 million program is open to systems that provide heating, cooling and hot water to single buildings with at least 100,000 square feet of conditioned space or multibuilding campuses with at least 250,000 conditioned square feet, NYSERDA says.  Dive Insight: State funding opportunities like the Large-Scale Thermal program are key to New York’s efforts to significantly reduce the environmental impact of its roughly 6 million buildings in the coming decades, Dougherty said. The state wants 85% of its buildings to use clean heating technologies like heat pumps and thermal energy networks by 2050, the same year its statutory net-zero statewide GHG emissions target kicks in. “We’re not going to do all 6 million buildings, and we really don’t have to,” Dougherty said. “But we will need to do a significant number, and our solutions will need to address existing, older buildings and newer buildings getting built [today].”  The Large-Scale Thermal program is accepting applications for its third funding round through July. Successful applicants will receive state funding equal to 50% of total project design costs, with maximum funding up to $300,000 for new construction and $750,000 for existing buildings. 
The project economics tend to work best for existing facilities with aging heating and cooling infrastructure, new construction and larger buildings or campuses that can achieve “economies of scale,”

Read More »

Electrical manufacturers publish ‘digital substation’ standards

The National Electrical Manufacturers Association on Wednesday announced the publication of three standards aimed at helping utilities and equipment manufacturers develop the “digital substations” they say will underpin a modern, reliable and self-healing grid. More standards will be published in the future, with the initial set covering fault isolation and restoration issues in distributed energy resources, and on a looped single line feeder, potentially in instances of communications loss. The standards can assist utilities that want to install grid sensors, switches and automated reclosers, NEMA officials said. “You can think of them as basically blueprints for utilities to use when they are going to implement one of these systems,” NEMA Senior Vice President of Strategy, Technical and Industry Affairs Patrick Hughes said. “We’ve seen and heard from utilities that there are interoperability questions and there’s a need for technical guidance for utilities interested in installing these … self-healing grid technologies,” Hughes said. “These give utilities a very clear blueprint for how to implement these systems.” Modernized substations and automated restoration equipment will be key to managing the increasing demand for electricity across a strained grid, experts say. NEMA anticipates electricity demand in the U.S. will increase 2% annually and 50% by 2050, as more data centers are built, buildings are electrified and transportation shifts away from fossil fuels. “When you’re dealing with modern grid technologies, and it includes communication systems and smart technologies, you have a number of different vendors that are offering these technologies and the service,” Hughes said. 
“And it can be a little overwhelming with different manufacturers and different communication protocols and all the different specifications that go into implementing one of these systems.” Add to that, utilities must convince regulators who are concerned about how the investments are made, and whether the systems will have long-term support

Read More »

LG rolls out new AI services to help consumers with daily tasks

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More LG kicked off the AI bandwagon today with a new set of AI services to help consumers in their daily tasks at home, in the car and in the office. The aim of LG’s CES 2025 press event was to show how AI will work in a day of someone’s life, with the goal of redefining the concept of space, said William Joowan Cho, CEO of LG Electronics at the event. The presentation showed LG is fully focused on bringing AI into just about all of its products and services. Cho referred to LG’s AI efforts as “affectionate intelligence,” and he said it stands out from other strategies with its human-centered focus. The strategy focuses on three things: connected devices, capable AI agents and integrated services. One of the things the company announced was a strategic partnership with Microsoft on AI innovation, where the companies pledged to join forces to shape the future of AI-powered spaces. One of the outcomes is that Microsoft’s Xbox Ultimate Game Pass will appear via Xbox Cloud on LG’s TVs, helping LG catch up with Samsung in offering cloud gaming natively on its TVs. LG Electronics will bring the Xbox App to select LG smart TVs. That means players with LG Smart TVs will be able to explore the Gaming Portal for direct access to hundreds of games in the Game Pass Ultimate catalog, including popular titles such as Call of Duty: Black Ops 6, and upcoming releases like Avowed (launching February 18, 2025). Xbox Game Pass Ultimate members will be able to play games directly from the Xbox app on select LG Smart TVs through cloud gaming. With Xbox Game Pass Ultimate and a compatible Bluetooth-enabled

Read More »

Big tech must stop passing the cost of its spiking energy needs onto the public

Julianne Malveaux is an MIT-educated economist, author, educator and political commentator who has written extensively about the critical relationship between public policy, corporate accountability and social equity.  The rapid expansion of data centers across the U.S. is not only reshaping the digital economy but also threatening to overwhelm our energy infrastructure. These data centers aren’t just heavy on processing power — they’re heavy on our shared energy infrastructure. For Americans, this could mean serious sticker shock when it comes to their energy bills. Across the country, many households are already feeling the pinch as utilities ramp up investments in costly new infrastructure to power these data centers. With costs almost certain to rise as more data centers come online, state policymakers and energy companies must act now to protect consumers. We need new policies that ensure the cost of these projects is carried by the wealthy big tech companies that profit from them, not by regular energy consumers such as family households and small businesses. According to an analysis from consulting firm Bain & Co., data centers could require more than $2 trillion in new energy resources globally, with U.S. demand alone potentially outpacing supply in the next few years. This unprecedented growth is fueled by the expansion of generative AI, cloud computing and other tech innovations that require massive computing power. Bain’s analysis warns that, to meet this energy demand, U.S. utilities may need to boost annual generation capacity by as much as 26% by 2028 — a staggering jump compared to the 5% yearly increases of the past two decades. This poses a threat to energy affordability and reliability for millions of Americans. Bain’s research estimates that capital investments required to meet data center needs could incrementally raise consumer bills by 1% each year through 2032. That increase may

Read More »

Final 45V hydrogen tax credit guidance draws mixed response

Dive Brief: The final rule for the 45V clean hydrogen production tax credit, which the U.S. Treasury Department released Friday morning, drew mixed responses from industry leaders and environmentalists. Clean hydrogen development within the U.S. ground to a halt following the release of the initial guidance in December 2023, leading industry participants to call for revisions that would enable more projects to qualify for the tax credit. While the final rule makes “significant improvements” to Treasury’s initial proposal, the guidelines remain “extremely complex,” according to the Fuel Cell and Hydrogen Energy Association. FCHEA President and CEO Frank Wolak and other industry leaders said they look forward to working with the Trump administration to refine the rule. Dive Insight: Friday’s release closed what Wolak described as a “long chapter” for the hydrogen industry. But industry reaction to the final rule was decidedly mixed, and it remains to be seen whether the rule — which could be overturned as soon as Trump assumes office — will remain unchanged. “The final 45V rule falls short,” Marty Durbin, president of the U.S. Chamber’s Global Energy Institute, said in a statement. “While the rule provides some of the additional flexibility we sought, … we believe that it still will leave billions of dollars of announced projects in limbo. The incoming Administration will have an opportunity to improve the 45V rules to ensure the industry will attract the investments necessary to scale the hydrogen economy and help the U.S. lead the world in clean manufacturing.” But others in the industry felt the rule would be sufficient for ending hydrogen’s year-long malaise. “With this added clarity, many projects that have been delayed may move forward, which can help unlock billions of dollars in investments across the country,” Kim Hedegaard, CEO of Topsoe’s Power-to-X, said in a statement. Topsoe

Read More »

Texas, Utah, Last Energy challenge NRC’s ‘overburdensome’ microreactor regulations

Dive Brief: A 69-year-old Nuclear Regulatory Commission rule underpinning U.S. nuclear reactor licensing exceeds the agency’s statutory authority and creates an unreasonable burden for microreactor developers, the states of Texas and Utah and advanced nuclear technology company Last Energy said in a lawsuit filed Dec. 30 in federal court in Texas. The plaintiffs asked the Eastern District of Texas court to exempt Last Energy’s 20-MW reactor design and research reactors located in the plaintiff states from the NRC’s definition of nuclear “utilization facilities,” which subjects all U.S. commercial and research reactors to strict regulatory scrutiny, and order the NRC to develop a more flexible definition for use in future licensing proceedings. Regardless of its merits, the lawsuit underscores the need for “continued discussion around proportional regulatory requirements … that align with the hazards of the reactor and correspond to a safety case,” said Patrick White, research director at the Nuclear Innovation Alliance. Dive Insight: Only three commercial nuclear reactors have been built in the United States in the past 28 years, and none are presently under construction, according to a World Nuclear Association tracker cited in the lawsuit. “Building a new commercial reactor of any size in the United States has become virtually impossible,” the plaintiffs said. “The root cause is not lack of demand or technology — but rather the [NRC], which, despite its name, does not really regulate new nuclear reactor construction so much as ensure that it almost never happens.” More than a dozen advanced nuclear technology developers have engaged the NRC in pre-application activities, which the agency says help standardize the content of advanced reactor applications and expedite NRC review. Last Energy is not among them.  The pre-application process can itself stretch for years and must be followed by a formal application that can take two

Read More »

Qualcomm unveils AI chips for PCs, cars, smart homes and enterprises

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Qualcomm unveiled AI technologies and collaborations for PCs, cars, smart homes and enterprises at CES 2025. At the big tech trade show in Las Vegas, Qualcomm Technologies showed how it’s using AI capabilities in its chips to drive the transformation of user experiences across diverse device categories, including PCs, automobiles, smart homes and into enterprises. The company unveiled the Snapdragon X platform, the fourth platform in its high-performance PC portfolio, the Snapdragon X Series, bringing industry-leading performance, multi-day battery life, and AI leadership to more of the Windows ecosystem. Qualcomm has talked about how its processors are making headway grabbing share from the x86-based AMD and Intel rivals through better efficiency. Qualcomm’s neural processing unit gets about 45 TOPS, a key benchmark for AI PCs. The Snapdragon X family of AI PC processors. Additionally, Qualcomm Technologies showcased continued traction of the Snapdragon X Series, with over 60 designs in production or development and more than 100 expected by 2026. Snapdragon for vehicles Qualcomm demoed chips that are expanding its automotive collaborations. It is working with Alpine, Amazon, Leapmotor, Mobis, Royal Enfield, and Sony Honda Mobility, who look to Snapdragon Digital Chassis solutions to drive AI-powered in-cabin and advanced driver assistance systems (ADAS). Qualcomm also announced continued traction for its Snapdragon Elite-tier platforms for automotive, highlighting its work with Desay, Garmin, and Panasonic for Snapdragon Cockpit Elite. Throughout the show, Qualcomm will highlight its holistic approach to improving comfort and focusing on safety with demonstrations on the potential of the convergence of AI, multimodal contextual awareness, and cloud-based services. 
Attendees will also get a first glimpse of the new Snapdragon Ride Platform with integrated automated driving software stack and system definition jointly

Read More »

Oil, Gas Execs Reveal Where They Expect WTI Oil Price to Land in the Future

Executives from oil and gas firms have revealed where they expect the West Texas Intermediate (WTI) crude oil price to be at various points in the future as part of the fourth quarter Dallas Fed Energy Survey, which was released recently. The average response executives from 131 oil and gas firms gave when asked what they expect the WTI crude oil price to be at the end of 2025 was $71.13 per barrel, the survey showed. The low forecast came in at $53 per barrel, the high forecast was $100 per barrel, and the spot price during the survey was $70.66 per barrel, the survey pointed out. This question was not asked in the previous Dallas Fed Energy Survey, which was released in the third quarter. That survey asked participants what they expect the WTI crude oil price to be at the end of 2024. Executives from 134 oil and gas firms answered this question, offering an average response of $72.66 per barrel, that survey showed. The latest Dallas Fed Energy Survey also asked participants where they expect WTI prices to be in six months, one year, two years, and five years. Executives from 124 oil and gas firms answered this question and gave a mean response of $69 per barrel for the six month mark, $71 per barrel for the year mark, $74 per barrel for the two year mark, and $80 per barrel for the five year mark, the survey showed. Executives from 119 oil and gas firms answered this question in the third quarter Dallas Fed Energy Survey and gave a mean response of $73 per barrel for the six month mark, $76 per barrel for the year mark, $81 per barrel for the two year mark, and $87 per barrel for the five year mark, that

Read More »

Retail Resurrection: David’s Bridal bets its future on AI after double bankruptcy

Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more Inside a new David’s Bridal store in Delray Beach, Florida, a bride-to-be carefully taps images on a 65-inch touchscreen, curating a vision board for her wedding. Behind the scenes, an AI system automatically analyzes her selections, building a knowledge graph that will match her with vendors, recommend products and generate a personalized wedding plan. For the overwhelmed bride facing 300-plus wedding planning tasks, this AI assistant promises to automate the process: suggesting what to do next, reorganizing timelines when plans change and eliminating the need to manually update spreadsheets that inevitably break when wedding plans evolve. That’s the vision David’s Bridal is racing to fully implement with Pearl Planner, its new beta AI-powered wedding planning platform. For the twice-bankrupt retailer, this technology-driven transformation represents a high-stakes bet that AI can accomplish what traditional retail strategies couldn’t: Survival in an industry where 15,000 stores are projected to close this year alone. David’s Bridal is hardly alone in the dramatic and ongoing wave of store closures, bankruptcies and disruptions sweeping through the U.S. retail industry since the mid-2010s. Dubbed the “retail apocalypse,” there were at least 133 major retail bankruptcies and 57,000 store closures between 2018 and 2024. The company narrowly survived liquidation in its second bankruptcy in 2023 when business development company CION Investment Corporation — which has more than $6.1 billion in assets and a portfolio of 100 companies — acquired substantially all of its assets and invested $20 million in new funding. David’s AI-led transformation is driven from the top down by new CEO Kelly Cook, who originally joined the company as CMO in 2019. Her vision of taking the company from “aisle to algorithm” led her

Read More »

How runtime attacks turn profitable AI into budget black holes

This article is part of VentureBeat’s special issue, “The Real Cost of AI: Performance, Efficiency and ROI at Scale.” Read more from this special issue.

AI’s promise is undeniable, but so are its blindsiding security costs at the inference layer. New attacks targeting AI’s operational side are quietly inflating budgets, jeopardizing regulatory compliance and eroding customer trust, all of which threaten the return on investment (ROI) and total cost of ownership of enterprise AI deployments.

AI has captivated the enterprise with its potential for game-changing insights and efficiency gains. Yet, as organizations rush to operationalize their models, a sobering reality is emerging: The inference stage, where AI translates investment into real-time business value, is under siege. This critical juncture is driving up the total cost of ownership (TCO) in ways that initial business cases failed to predict.

Security executives and CFOs who greenlit AI projects for their transformative upside are now grappling with the hidden expenses of defending these systems. Adversaries have discovered that inference is where AI “comes alive” for a business, and it’s precisely where they can inflict the most damage. The result is a cascade of cost inflation: Breach containment can exceed $5 million per incident in regulated sectors, compliance retrofits run into the hundreds of thousands and trust failures can trigger stock hits or contract cancellations that decimate projected AI ROI. Without cost containment at inference, AI becomes an ungovernable budget wildcard.

The unseen battlefield: AI inference and exploding TCO

AI inference is rapidly becoming the “next insider risk,” Cristian Rodriguez, field CTO for the Americas at CrowdStrike, told the audience at RSAC 2025.

Other technology leaders echo this perspective and see a common blind spot in enterprise strategy. Vineet Arora, CTO at WinWire, notes that many organizations “focus intensely on securing the infrastructure around AI while inadvertently sidelining inference.” This oversight, he explains, “leads to underestimated costs for continuous monitoring systems, real-time threat analysis and rapid patching mechanisms.”

Another critical blind spot, according to Steffen Schreier, SVP of product and portfolio at Telesign, is “the assumption that third-party models are thoroughly vetted and inherently safe to deploy.”

He warned that in reality, “these models often haven’t been evaluated against an organization’s specific threat landscape or compliance needs,” which can lead to harmful or non-compliant outputs that erode brand trust. Schreier told VentureBeat that “inference-time vulnerabilities — like prompt injection, output manipulation or context leakage — can be exploited by attackers to produce harmful, biased or non-compliant outputs. This poses serious risks, especially in regulated industries, and can quickly erode brand trust.”

When inference is compromised, the fallout hits multiple fronts of TCO. Cybersecurity budgets spiral, regulatory compliance is jeopardized and customer trust erodes. Executive sentiment reflects this growing concern. In CrowdStrike’s State of AI in Cybersecurity survey, only 39% of respondents felt generative AI’s rewards clearly outweigh the risks, while 40% judged them comparable. This ambivalence underscores a critical finding: Safety and privacy controls have become top requirements for new gen AI initiatives, with a striking 90% of organizations now implementing or developing policies to govern AI adoption. The top concerns are no longer abstract; 26% cite sensitive data exposure and 25% fear adversarial attacks as key risks.

Security leaders exhibit mixed sentiments regarding the overall safety of gen AI, with top concerns centered on the exposure of sensitive data to LLMs (26%) and adversarial attacks on AI tools (25%).

Anatomy of an inference attack

The unique attack surface exposed by running AI models is being aggressively probed by adversaries. To defend against this, Schreier advises, “it is critical to treat every input as a potential hostile attack.” Frameworks like the OWASP Top 10 for Large Language Model (LLM) Applications catalogue these threats, which are no longer theoretical but active attack vectors impacting the enterprise:

Prompt injection (LLM01) and insecure output handling (LLM02): Attackers manipulate models via inputs or outputs. Malicious inputs can cause the model to ignore instructions or divulge proprietary code. Insecure output handling occurs when an application blindly trusts AI responses, allowing attackers to inject malicious scripts into downstream systems.

Training data poisoning (LLM03) and model poisoning: Attackers corrupt training data by sneaking in tainted samples, planting hidden triggers. Later, an innocuous input can unleash malicious outputs.

Model denial of service (LLM04): Adversaries can overwhelm AI models with complex inputs, consuming excessive resources to slow or crash them, resulting in direct revenue loss.

Supply chain and plugin vulnerabilities (LLM05 and LLM07): The AI ecosystem is built on shared components. For instance, a vulnerability in the Flowise LLM tool exposed private AI dashboards and sensitive data, including GitHub tokens and OpenAI API keys, on 438 servers.

Sensitive information disclosure (LLM06): Clever querying can extract confidential information from an AI model if it was part of its training data or is present in the current context.

Excessive agency (LLM08) and Overreliance (LLM09): Granting an AI agent unchecked permissions to execute trades or modify databases is a recipe for disaster if manipulated.

Model theft (LLM10): An organization’s proprietary models can be stolen through sophisticated extraction techniques — a direct assault on its competitive advantage.

Underpinning these threats are foundational security failures. Adversaries often log in with leaked credentials. In early 2024, 35% of cloud intrusions involved valid user credentials, and new, unattributed cloud attack attempts spiked 26%, according to the CrowdStrike 2025 Global Threat Report. A deepfake campaign resulted in a fraudulent $25.6 million transfer, while AI-generated phishing emails have demonstrated a 54% click-through rate, more than four times higher than those written by humans.

The OWASP framework illustrates how various LLM attack vectors target different components of an AI application, from prompt injection at the user interface to data poisoning in the training models and sensitive information disclosure from the datastore.

Back to basics: Foundational security for a new era

Securing AI requires a disciplined return to security fundamentals — but applied through a modern lens. “I think that we need to take a step back and ensure that the foundation and the fundamentals of security are still applicable,” Rodriguez argued. “The same approach you would have to securing an OS is the same approach you would have to securing that AI model.”

This means enforcing unified protection across every attack path, with rigorous data governance, robust cloud security posture management (CSPM), and identity-first security through cloud infrastructure entitlement management (CIEM) to lock down the cloud environments where most AI workloads reside. As identity becomes the new perimeter, AI systems must be governed with the same strict access controls and runtime protections as any other business-critical cloud asset.

The specter of “shadow AI”: Unmasking hidden risks

Shadow AI, or the unsanctioned use of AI tools by employees, creates a massive, unknown attack surface. A financial analyst using a free online LLM for confidential documents can inadvertently leak proprietary data. As Rodriguez warned, queries to public models can “become another’s answers.” Addressing this requires a combination of clear policy, employee education, and technical controls like AI security posture management (AI-SPM) to discover and assess all AI assets, sanctioned or not.

Fortifying the future: Actionable defense strategies

While adversaries have weaponized AI, the tide is beginning to turn. As Mike Riemer, Field CISO at Ivanti, observes, defenders are beginning to “harness the full potential of AI for cybersecurity purposes to analyze vast amounts of data collected from diverse systems.” This proactive stance is essential for building a robust defense, which requires several key strategies:

Budget for inference security from day zero: The first step, according to Arora, is to begin with “a comprehensive risk-based assessment.” He advises mapping the entire inference pipeline to identify every data flow and vulnerability. “By linking these risks to possible financial impacts,” he explains, “we can better quantify the cost of a security breach” and build a realistic budget.

To structure this more systematically, CISOs and CFOs should start with a risk-adjusted ROI model. One approach:

Security ROI = (estimated breach cost × annual risk probability) – total security investment

For example, if an LLM inference attack could result in a $5 million loss and the likelihood is 10%, the expected loss is $500,000. A $350,000 investment in inference-stage defenses would yield a net gain of $150,000 in avoided risk. This model enables scenario-based budgeting tied directly to financial outcomes.
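The worked example above can be checked in a few lines of Python. This is a sketch of the article's formula only; the `security_roi` helper is an illustrative name, not part of any framework.

```python
# Risk-adjusted security ROI, per the model above:
#   Security ROI = (estimated breach cost x annual risk probability) - total security investment
# The figures below are the article's worked example.

def security_roi(breach_cost: float, annual_risk_probability: float,
                 security_investment: float) -> float:
    """Net gain in avoided risk from a security investment."""
    expected_loss = breach_cost * annual_risk_probability
    return expected_loss - security_investment

expected_loss = 5_000_000 * 0.10            # $500,000 expected annual loss
net_gain = security_roi(5_000_000, 0.10, 350_000)
print(f"Expected loss: ${expected_loss:,.0f}")   # Expected loss: $500,000
print(f"Net gain:      ${net_gain:,.0f}")        # Net gain:      $150,000
```

A negative result flags a control whose cost exceeds the risk it retires, which is exactly the scenario-based conversation the model is meant to enable.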

Enterprises allocating less than 8 to 12% of their AI project budgets to inference-stage security are often blindsided later by breach recovery and compliance costs. A Fortune 500 healthcare provider CIO, interviewed by VentureBeat and requesting anonymity, now allocates 15% of their total gen AI budget to post-training risk management, including runtime monitoring, AI-SPM platforms and compliance audits. A practical budgeting model should allocate across four cost centers: runtime monitoring (35%), adversarial simulation (25%), compliance tooling (20%) and user behavior analytics (20%).

Here’s a sample allocation snapshot for a $2 million enterprise AI deployment based on VentureBeat’s ongoing interviews with CFOs, CIOs and CISOs actively budgeting to support AI projects:

Budget category           Allocation    Use case example
Runtime monitoring        $300,000      Behavioral anomaly detection (API spikes)
Adversarial simulation    $200,000      Red team exercises to probe prompt injection
Compliance tooling        $150,000      EU AI Act alignment, SOC 2 inference validations
User behavior analytics   $150,000      Detect misuse patterns in internal AI use

These investments reduce downstream breach remediation costs, regulatory penalties and SLA violations, all helping to stabilize AI TCO.
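A minimal sketch of the four-way split described above, assuming the 35/25/20/20 percentages are applied to a dedicated inference-security budget. The $800,000 total and the function name are illustrative, not figures from any cited deployment.

```python
# Split an inference-security budget across the four cost centers named above:
# runtime monitoring 35%, adversarial simulation 25%, compliance tooling 20%,
# user behavior analytics 20%. The budget amount is an illustrative assumption.

COST_CENTERS = {
    "runtime_monitoring": 0.35,
    "adversarial_simulation": 0.25,
    "compliance_tooling": 0.20,
    "user_behavior_analytics": 0.20,
}

def allocate(security_budget: float) -> dict:
    """Return the dollar allocation per cost center."""
    return {name: security_budget * share for name, share in COST_CENTERS.items()}

for center, dollars in allocate(800_000).items():
    print(f"{center:<25} ${dollars:,.0f}")
```

Keeping the shares in one table makes it easy to rerun the split as the overall AI budget, and the 8 to 12% security carve-out, changes year over year.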

Implement runtime monitoring and validation: Begin by tuning anomaly detection to detect behaviors at the inference layer, such as abnormal API call patterns, output entropy shifts or query frequency spikes. Vendors like DataDome and Telesign now offer real-time behavioral analytics tailored to gen AI misuse signatures.

Teams should monitor entropy shifts in outputs, track token irregularities in model responses and watch for atypical frequency in queries from privileged accounts. Effective setups include streaming logs into SIEM tools (such as Splunk or Datadog) with tailored gen AI parsers and establishing real-time alert thresholds for deviations from model baselines.
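As a rough illustration of the entropy-shift check described above, the sketch below computes Shannon entropy over an output's tokens and flags deviations from a model baseline. The baseline statistics and the 3-sigma threshold are assumptions for the example, not vendor defaults.

```python
# Sketch: flag inference outputs whose token-distribution entropy deviates
# from the model's baseline. Baseline mean/std and the sigma threshold are
# illustrative assumptions; production baselines come from observed traffic.
import math
from collections import Counter

def shannon_entropy(tokens: list) -> float:
    """Shannon entropy (bits) of the token frequency distribution."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def is_anomalous(tokens: list, baseline_mean: float,
                 baseline_std: float, sigma: float = 3.0) -> bool:
    """Flag outputs whose entropy deviates from the baseline by > sigma stds."""
    return abs(shannon_entropy(tokens) - baseline_mean) > sigma * baseline_std

# A degenerate, repetitive output has near-zero entropy and stands out against
# a (hypothetical) baseline of mean 4.2 bits / std 0.4 bits:
print(is_anomalous(["ok"] * 50, baseline_mean=4.2, baseline_std=0.4))  # True
```

The same deviation-from-baseline pattern applies to the other signals named above, such as query frequency per privileged account or API call rates.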

Adopt a zero-trust framework for AI: Zero-trust is non-negotiable for AI environments. It operates on the principle of “never trust, always verify.” By adopting this architecture, Riemer notes, organizations can ensure that “only authenticated users and devices gain access to sensitive data and applications, regardless of their physical location.”

Inference-time zero-trust should be enforced at multiple layers:

Identity: Authenticate both human and service actors accessing inference endpoints.

Permissions: Scope LLM access using role-based access control (RBAC) with time-boxed privileges.

Segmentation: Isolate inference microservices with service mesh policies and enforce least-privilege defaults through cloud workload protection platforms (CWPPs).
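The "time-boxed privileges" idea in the permissions layer above can be sketched as a role grant that expires automatically. The `Grant` structure, role string and API shape are illustrative assumptions, not any specific product's interface.

```python
# Sketch of time-boxed RBAC for an inference endpoint: a grant names a
# principal (human or service), a role, and a hard expiry. Every request
# re-verifies all three, per the "never trust, always verify" principle.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class Grant:
    principal: str        # human user or service account
    role: str             # e.g. "inference:invoke" (illustrative role name)
    expires_at: datetime

def authorize(grant: Grant, principal: str, role: str,
              now: Optional[datetime] = None) -> bool:
    """Verify identity and role, and reject expired grants."""
    now = now or datetime.now(timezone.utc)
    return (grant.principal == principal
            and grant.role == role
            and now < grant.expires_at)

start = datetime(2025, 1, 1, tzinfo=timezone.utc)
g = Grant("analyst@example.com", "inference:invoke", start + timedelta(hours=8))
print(authorize(g, "analyst@example.com", "inference:invoke", now=start))  # True
print(authorize(g, "analyst@example.com", "inference:invoke",
                now=start + timedelta(hours=9)))                           # False
```

Because access lapses by default, a leaked credential, the entry point in 35% of the cloud intrusions cited earlier, buys an attacker a bounded window rather than standing access.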

A proactive AI security strategy requires a holistic approach, encompassing visibility and supply chain security during development, securing infrastructure and data and implementing robust safeguards to protect AI systems in runtime during production.

Protecting AI ROI: A CISO/CFO collaboration model

Protecting the ROI of enterprise AI requires actively modeling the financial upside of security. Start with a baseline ROI projection, then layer in cost-avoidance scenarios for each security control. Mapping cybersecurity investments to avoided costs (incident remediation, SLA violations and customer churn) turns risk reduction into a measurable ROI gain.

Enterprises should model three ROI scenarios: baseline, with security investment and post-breach recovery, to show cost avoidance clearly. For example, a telecom deploying output validation prevented 12,000-plus misrouted queries per month, saving $6.3 million annually in SLA penalties and call center volume. Tie investments to avoided costs across breach remediation, SLA non-compliance, brand impact and customer churn to build a defensible ROI argument to CFOs.
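The telecom figures above imply a per-query avoided cost, which makes the cost-avoidance arithmetic easy to audit. The derivation below is a sketch of that arithmetic; the helper name is an illustrative assumption.

```python
# Derive the per-query avoided cost implied by the telecom example above:
# 12,000+ misrouted queries prevented per month, $6.3M saved annually.

PREVENTED_PER_MONTH = 12_000
ANNUAL_SAVINGS = 6_300_000

# Avoided cost per prevented query (SLA penalties + call center volume).
per_query_cost = ANNUAL_SAVINGS / (PREVENTED_PER_MONTH * 12)  # $43.75

def annual_cost_avoidance(prevented_per_month: int, cost_per_query: float) -> float:
    """Annualized savings from queries the control prevents."""
    return prevented_per_month * 12 * cost_per_query

print(f"${annual_cost_avoidance(PREVENTED_PER_MONTH, per_query_cost):,.0f}")
```

Expressed per query, the same figure can be replayed against next year's projected traffic to forecast cost avoidance before the budget cycle, rather than after an incident.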

Checklist: CFO-Grade ROI protection model

CFOs need to communicate with clarity on how security spending protects the bottom line. To safeguard AI ROI at the inference layer, security investments must be modeled like any other strategic capital allocation: with direct links to TCO, risk mitigation and revenue preservation.

Use this checklist to make AI security investments defensible in the boardroom — and actionable in the budget cycle.

Link every AI security spend to a projected TCO reduction category (compliance, breach remediation, SLA stability).

Run cost-avoidance simulations with 3-year horizon scenarios: baseline, protected and breach-reactive.

Quantify financial risk from SLA violations, regulatory fines, brand trust erosion and customer churn.

Co-model inference-layer security budgets with both CISOs and CFOs to break organizational silos.

Present security investments as growth enablers, not overhead, showing how they stabilize AI infrastructure for sustained value capture.

This model doesn’t just defend AI investments; it defends budgets and brands, and it can protect and even grow boardroom credibility.

Concluding analysis: A strategic imperative

CISOs must present AI risk management as a business enabler, quantified in terms of ROI protection, brand trust preservation and regulatory stability. As AI inference moves deeper into revenue workflows, protecting it isn’t a cost center; it’s the control plane for AI’s financial sustainability. Strategic security investments at the infrastructure layer must be justified with financial metrics that CFOs can act on.

The path forward requires organizations to balance investment in AI innovation with an equal investment in its protection. This necessitates a new level of strategic alignment. As Ivanti CIO Robert Grazioli told VentureBeat: “CISO and CIO alignment will be critical to effectively safeguard modern businesses.” This collaboration is essential to break down the data and budget silos that undermine security, allowing organizations to manage the true cost of AI and turn a high-risk gamble into a sustainable, high-ROI engine of growth.

Telesign’s Schreier added: “We view AI inference risks through the lens of digital identity and trust. We embed security across the full lifecycle of our AI tools — using access controls, usage monitoring, rate limiting and behavioral analytics to detect misuse and protect both our customers and their end users from emerging threats.”

He continued: “We approach output validation as a critical layer of our AI security architecture, particularly because many inference-time risks don’t stem from how a model is trained, but how it behaves in the wild.”


Model minimalism: The new AI strategy saving companies millions

This article is part of VentureBeat’s special issue, “The Real Cost of AI: Performance, Efficiency and ROI at Scale.” Read more from this special issue.

The advent of large language models (LLMs) has made it easier for enterprises to envision the kinds of projects they can undertake, leading to a surge in pilot programs now transitioning to deployment. 

However, as these projects gained momentum, enterprises realized that the earlier LLMs they had used were unwieldy and, worse, expensive. 

Enter small language models and distillation. Models like Google’s Gemma family, Microsoft’s Phi and Mistral’s Small 3.1 allowed businesses to choose fast, accurate models that work for specific tasks. Enterprises can opt for a smaller model for particular use cases, allowing them to lower the cost of running their AI applications and potentially achieve a better return on investment. 

LinkedIn distinguished engineer Karthik Ramgopal told VentureBeat that companies opt for smaller models for a few reasons. 

“Smaller models require less compute, memory and faster inference times, which translates directly into lower infrastructure OPEX (operational expenditures) and CAPEX (capital expenditures) given GPU costs, availability and power requirements,” Ramgopal said. “Task-specific models have a narrower scope, making their behavior more aligned and maintainable over time without complex prompt engineering.”

Model developers price their small models accordingly. OpenAI’s o4-mini costs $1.10 per million input tokens and $4.40 per million output tokens, compared to the full o3 at $10 per million for inputs and $40 per million for outputs.
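A back-of-the-envelope calculation shows how quickly the pricing gap compounds. The per-token prices are those quoted above; the monthly token volumes are a hypothetical workload.

```python
# Monthly API cost given token volumes (in millions) and $/1M-token prices.
def monthly_cost(input_tokens_m, output_tokens_m, in_price, out_price):
    return input_tokens_m * in_price + output_tokens_m * out_price

# Hypothetical workload: 500M input and 100M output tokens per month.
o4_mini = monthly_cost(500, 100, 1.10, 4.40)   # roughly $990/month
o3      = monthly_cost(500, 100, 10.0, 40.0)   # roughly $9,000/month
print(f"o4-mini: ${o4_mini:,.0f}/mo  o3: ${o3:,.0f}/mo  ratio: {o3 / o4_mini:.1f}x")
```

At these list prices the smaller model is roughly an order of magnitude cheaper at any volume, which is why right-sizing is the first lever enterprises pull.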

Enterprises today have a larger pool of small models, task-specific models and distilled models to choose from. These days, most flagship model families offer a range of sizes. For example, the Claude family of models from Anthropic comprises Claude Opus, the largest model; Claude Sonnet, the all-purpose model; and Claude Haiku, the smallest version. The smallest of these are compact enough to operate on portable devices, such as laptops or mobile phones.

The savings question

When discussing return on investment, though, the question is always: What does ROI look like? Should it be a return on the costs incurred, or the time savings that ultimately mean dollars saved down the line? Experts VentureBeat spoke to said ROI can be difficult to judge: some companies believe they’ve already reached ROI by cutting time spent on a task, while others are waiting for actual dollars saved, or more business brought in, before saying whether their AI investments have actually worked.

Normally, enterprises calculate ROI with a simple formula, as described by Cognizant chief technologist Ravi Naarla in a post: ROI = (Benefits − Costs) / Costs. But with AI programs, the benefits are not immediately apparent. He suggests enterprises identify the benefits they expect to achieve, estimate them based on historical data, be realistic about the overall cost of AI (including hiring, implementation and maintenance) and understand that they have to be in it for the long haul.
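The formula is simple, but applying it honestly means itemizing the cost categories Naarla lists. A worked example, with every figure invented purely to show the mechanics:

```python
# Naarla's formula applied to a hypothetical AI program. All dollar
# amounts are illustrative, not from the article.
costs = {
    "hiring": 400_000,
    "implementation": 250_000,
    "maintenance": 150_000,   # ongoing, and often underestimated
}
benefits = 1_200_000          # estimated from historical data, per the advice above

total_cost = sum(costs.values())                 # 800,000
roi_value = (benefits - total_cost) / total_cost # (1.2M - 0.8M) / 0.8M = 0.5
print(f"ROI = {roi_value:.0%}")                  # 50%
```

Leaving maintenance out of the denominator here would inflate the result to 85%, which is exactly the kind of optimistic math the long-haul warning is about.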

With small models, experts argue that these reduce implementation and maintenance costs, especially when fine-tuning models to provide them with more context for your enterprise.

Arijit Sengupta, founder and CEO of Aible, said that how people bring context to the models dictates how much cost savings they can get. For individuals who require additional context for prompts, such as lengthy and complex instructions, this can result in higher token costs. 

“You have to give models context one way or the other; there is no free lunch. But with large models, that is usually done by putting it in the prompt,” he said. “Think of fine-tuning and post-training as an alternative way of giving models context. I might incur $100 of post-training costs, but it’s not astronomical.”

Sengupta said they’ve seen about 100X cost reductions just from post-training alone, often dropping model use cost “from single-digit millions to something like $30,000.” He did point out that this number includes software operating expenses and the ongoing cost of the model and vector databases. 

“In terms of maintenance cost, if you do it manually with human experts, it can be expensive to maintain because small models need to be post-trained to produce results comparable to large models,” he said.

Experiments Aible conducted showed that a task-specific, fine-tuned model performs well for some use cases, just like LLMs, making the case that deploying several use-case-specific models rather than large ones to do everything is more cost-effective. 

The company compared a post-trained version of Llama-3.3-70B-Instruct to a smaller 8B-parameter option of the same model. The 70B model, post-trained for $11.30, was 84% accurate in automated evaluations and 92% in manual evaluations. The 8B model, fine-tuned at a cost of $4.58, achieved 82% accuracy in manual assessment, which would be suitable for smaller, more targeted use cases.

Cost factors fit for purpose

Right-sizing models does not have to come at the cost of performance. These days, organizations understand that model choice doesn’t just mean choosing between GPT-4o or Llama-3.1; it’s knowing that some use cases, like summarization or code generation, are better served by a small model.

Daniel Hoske, chief technology officer at contact center AI products provider Cresta, said starting development with LLMs informs potential cost savings better. 

“You should start with the biggest model to see if what you’re envisioning even works at all, because if it doesn’t work with the biggest model, it doesn’t mean it would work with smaller models,” he said. 

Ramgopal said LinkedIn follows a similar pattern because prototyping is the only way these issues can start to emerge.

“Our typical approach for agentic use cases begins with general-purpose LLMs as their broad generalization ability allows us to rapidly prototype, validate hypotheses and assess product-market fit,” LinkedIn’s Ramgopal said. “As the product matures and we encounter constraints around quality, cost or latency, we transition to more customized solutions.”

In the experimentation phase, organizations can determine what they value most from their AI applications. Figuring this out enables developers to plan better what they want to save on and select the model size that best suits their purpose and budget. 

The experts cautioned that while it is important to build with the models that work best for what they’re developing, high-parameter LLMs will always be more expensive: large models will always require significant computing power.

However, overusing small and task-specific models also poses issues. Rahul Pathak, vice president of data and AI GTM at AWS, said in a blog post that cost optimization comes not just from using a model with low compute power needs, but rather from matching a model to tasks. Smaller models may not have a sufficiently large context window to understand more complex instructions, leading to increased workload for human employees and higher costs. 

Sengupta also cautioned that some distilled models could be brittle, so long-term use may not result in savings. 

Constantly evaluate

Regardless of the model size, industry players emphasized the flexibility to address any potential issues or new use cases. If they start with a large model and later find a smaller model that delivers similar or better performance at lower cost, organizations cannot be precious about their chosen model.

Tessa Burg, CTO and head of innovation at brand marketing company Mod Op, told VentureBeat that organizations must understand that whatever they build now will always be superseded by a better version. 

“We started with the mindset that the tech underneath the workflows that we’re creating, the processes that we’re making more efficient, are going to change. We knew that whatever model we use will be the worst version of a model.”

Burg said that smaller models helped save her company and its clients time in researching and developing concepts. That time saved, she said, does lead to budget savings over time. She added that it’s a good idea to break out high-cost, high-frequency use cases for lightweight models.

Sengupta noted that vendors are now making it easier to switch between models automatically, but cautioned users to find platforms that also facilitate fine-tuning, so they don’t incur additional costs. 


The inference trap: How cloud providers are eating your AI margins

This article is part of VentureBeat’s special issue, “The Real Cost of AI: Performance, Efficiency and ROI at Scale.” Read more from this special issue.

AI has become the holy grail of modern companies. Whether it’s customer service or something as niche as pipeline maintenance, organizations in every domain are now implementing AI technologies — from foundation models to VLAs — to make things more efficient. The goal is straightforward: automate tasks to deliver outcomes more efficiently and save money and resources simultaneously.

However, as these projects transition from the pilot to the production stage, teams encounter a hurdle they hadn’t planned for: cloud costs eroding their margins. The sticker shock is so bad that what once felt like the fastest path to innovation and competitive edge becomes an unsustainable budgetary black hole in no time. 

This prompts CIOs to rethink everything—from model architecture to deployment models—to regain control over financial and operational aspects. Sometimes, they even shutter the projects entirely, starting over from scratch.

But here’s the fact: while cloud can take costs to unbearable levels, it is not the villain. You just have to understand what type of vehicle (AI infrastructure) to choose to go down which road (the workload).

The cloud story — and where it works 

The cloud is very much like public transport (your subways and buses). You get on board with a simple rental model, and it instantly gives you all the resources—right from GPU instances to fast scaling across various geographies—to take you to your destination, all with minimal work and setup. 

The fast and easy access via a service model ensures a seamless start, paving the way to get the project off the ground and do rapid experimentation without the huge up-front capital expenditure of acquiring specialized GPUs. 

Most early-stage startups find this model lucrative as they need fast turnaround more than anything else, especially when they are still validating the model and determining product-market fit.

“You make an account, click a few buttons, and get access to servers. If you need a different GPU size, you shut down and restart the instance with the new specs, which takes minutes. If you want to run two experiments at once, you initialise two separate instances. In the early stages, the focus is on validating ideas quickly. Using the built-in scaling and experimentation frameworks provided by most cloud platforms helps reduce the time between milestones,” Rohan Sarin, who leads voice AI product at Speechmatics, told VentureBeat.

The cost of “ease”

While cloud makes perfect sense for early-stage usage, the infrastructure math becomes grim as the project transitions from testing and validation to real-world volumes. The scale of workloads makes the bills brutal — so much so that the costs can surge over 1000% overnight. 

This is particularly true in the case of inference, which not only has to run 24/7 to ensure service uptime but also scale with customer demand. 

On most occasions, Sarin explains, the inference demand spikes when other customers are also requesting GPU access, increasing the competition for resources. In such cases, teams either keep a reserved capacity to make sure they get what they need — leading to idle GPU time during non-peak hours — or suffer from latencies, impacting downstream experience.

Christian Khoury, the CEO of AI compliance platform EasyAudit AI, described inference as the new “cloud tax,” telling VentureBeat that he has seen companies go from $5K to $50K/month overnight, just from inference traffic.

It’s also worth noting that inference workloads involving LLMs, with token-based pricing, can trigger the steepest cost increases. This is because these models are non-deterministic and can generate different outputs when handling long-running tasks (involving large context windows). With continuous updates, it gets really difficult to forecast or control LLM inference costs.

Training these models, on its part, happens to be “bursty” (occurring in clusters), which does leave some room for capacity planning. However, even in these cases, especially as growing competition forces frequent retraining, enterprises can have massive bills from idle GPU time, stemming from overprovisioning.

“Training credits on cloud platforms are expensive, and frequent retraining during fast iteration cycles can escalate costs quickly. Long training runs require access to large machines, and most cloud providers only guarantee that access if you reserve capacity for a year or more. If your training run only lasts a few weeks, you still pay for the rest of the year,” Sarin explained.

And it’s not just this. Cloud lock-in is very real. Suppose you have made a long-term reservation and bought credits from a provider. In that case, you’re locked into their ecosystem and have to use whatever they have on offer, even when other providers have moved to newer, better infrastructure. And when you finally get the ability to move, you may have to bear massive egress fees.

“It’s not just compute cost. You get…unpredictable autoscaling, and insane egress fees if you’re moving data between regions or vendors. One team was paying more to move data than to train their models,” Sarin emphasized.

So, what’s the workaround?

Given the constant infrastructure demand of scaling AI inference and the bursty nature of training, enterprises are moving to split their workloads: taking inference to colocation or on-prem stacks, while leaving training to the cloud with spot instances.

This isn’t just theory — it’s a growing movement among engineering leaders trying to put AI into production without burning through runway.

“We’ve helped teams shift to colocation for inference using dedicated GPU servers that they control. It’s not sexy, but it cuts monthly infra spend by 60–80%,” Khoury added. “Hybrid’s not just cheaper—it’s smarter.”

In one case, he said, a SaaS company reduced its monthly AI infrastructure bill from approximately $42,000 to just $9,000 by moving inference workloads off the cloud. The switch paid for itself in under two weeks.

Another team requiring consistent sub-50ms responses for an AI customer support tool discovered that cloud-based inference couldn’t meet the latency requirement. Shifting inference closer to users via colocation not only solved the performance bottleneck but also halved the cost.

The setup typically works like this: inference, which is always-on and latency-sensitive, runs on dedicated GPUs either on-prem or in a nearby data center (colocation facility). Meanwhile, training, which is compute-intensive but sporadic, stays in the cloud, where you can spin up powerful clusters on demand, run for a few hours or days, and shut down. 

Broadly, it is estimated that renting from hyperscale cloud providers can cost three to four times more per GPU hour than working with smaller providers, with the difference being even more significant compared to on-prem infrastructure.

The other big bonus? Predictability. 

With on-prem or colocation stacks, teams also have full control over how many resources to provision for the expected baseline of inference workloads. This brings predictability to infrastructure costs and eliminates surprise bills. It also cuts the aggressive engineering effort otherwise needed to tune scaling and keep cloud infrastructure costs within reason. 

Hybrid setups also help reduce latency for time-sensitive AI applications and enable better compliance, particularly for teams operating in highly regulated industries like finance, healthcare, and education — where data residency and governance are non-negotiable.

Hybrid complexity is real—but rarely a dealbreaker

As has always been the case, the shift to a hybrid setup comes with its own ops tax. Setting up your own hardware or renting a colocation facility takes time, and managing GPUs outside the cloud requires a different kind of engineering muscle. 

However, leaders argue that the complexity is often overstated and is usually manageable in-house or through external support, unless one is operating at an extreme scale.

“Our calculations show that an on-prem GPU server costs about the same as six to nine months of renting the equivalent instance from AWS, Azure, or Google Cloud, even with a one-year reserved rate. Since the hardware typically lasts at least three years, and often more than five, this becomes cost-positive within the first nine months. Some hardware vendors also offer operational pricing models for capital infrastructure, so you can avoid upfront payment if cash flow is a concern,” Sarin explained.
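The payback math Sarin describes is easy to sanity-check. Only the "six to nine months of rent" ratio and the three-to-five-year hardware life come from his quote; the monthly rental figure below is a hypothetical placeholder.

```python
# Break-even sketch: an on-prem server costing N months of cloud rent
# pays for itself once usage passes N months.
cloud_monthly = 8_000            # hypothetical equivalent-instance cloud rent
onprem_capex = cloud_monthly * 7.5  # midpoint of "six to nine months"
hardware_life_months = 36        # conservative three-year lifespan

breakeven_months = onprem_capex / cloud_monthly            # 7.5 months
savings = cloud_monthly * hardware_life_months - onprem_capex
print(f"break-even after {breakeven_months:.1f} months; "
      f"${savings:,.0f} saved over {hardware_life_months} months vs cloud")
```

The structure of the result holds regardless of the rental price chosen: if the capex equals 7.5 months of rent and the hardware lasts 36 months, roughly 80% of the three-year cloud bill is avoided.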

Prioritize by need

For any company, whether a startup or an enterprise, the key to success when architecting – or re-architecting – AI infrastructure lies in working according to the specific workloads at hand. 

If you’re unsure about the load of different AI workloads, start with the cloud and keep a close eye on the associated costs by tagging every resource with the responsible team. You can share these cost reports with all managers and do a deep dive into what they are using and its impact on the resources. This data will then give clarity and help pave the way for driving efficiencies.
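The tagging discipline above can be as simple as a per-team rollup of billing line items. The records below are invented to show the shape of such a report.

```python
# Sketch of per-team cost reporting: every resource carries a team tag,
# so monthly spend can be grouped and reviewed. Records are hypothetical.
from collections import defaultdict

line_items = [
    {"resource": "gpu-a100-1", "team": "search", "cost": 12_400},
    {"resource": "gpu-a100-2", "team": "support-bot", "cost": 9_800},
    {"resource": "vector-db", "team": "search", "cost": 2_100},
]

by_team = defaultdict(float)
for item in line_items:
    by_team[item["team"]] += item["cost"]

# Highest spenders first, for the manager review described above.
for team, cost in sorted(by_team.items(), key=lambda kv: -kv[1]):
    print(f"{team}: ${cost:,.0f}")
```

Cloud providers expose this natively via resource tags and cost-allocation reports; the point is that the tags have to exist before the report can.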

That said, remember that it’s not about ditching the cloud entirely; it’s about optimizing its use to maximize efficiencies. 

“Cloud is still great for experimentation and bursty training. But if inference is your core workload, get off the rent treadmill. Hybrid isn’t just cheaper… It’s smarter,” Khoury added. “Treat cloud like a prototype, not the permanent home. Run the math. Talk to your engineers. The cloud will never tell you when it’s the wrong tool. But your AWS bill will.”


Why your enterprise AI strategy needs both open and closed models: The TCO reality check

This article is part of VentureBeat’s special issue, “The Real Cost of AI: Performance, Efficiency and ROI at Scale.” Read more from this special issue.

For the last two decades, enterprises have had a choice between open-source and closed proprietary technologies.

The original choice for enterprises was primarily centered on operating systems, with Linux offering an open-source alternative to Microsoft Windows. In the developer realm, open-source languages like Python and JavaScript dominate, as open-source technologies, including Kubernetes, are standards in the cloud.

The same type of choice between open and closed is now facing enterprises for AI, with multiple options for both types of models. On the proprietary closed-model front are some of the biggest, most widely used models on the planet, including those from OpenAI and Anthropic. On the open-source side are models like Meta’s Llama, IBM Granite, Alibaba’s Qwen and DeepSeek.

Understanding when to use an open or closed model is a critical choice for enterprise AI decision-makers in 2025 and beyond. The choice has both financial and customization implications for either option that enterprises need to understand and consider.

Understanding the difference between open and closed licenses

There is no shortage of hyperbole around the decades-old rivalry between open and closed licenses. But what does it all actually mean for enterprise users?

A closed-source proprietary technology, like OpenAI’s GPT-4o, for example, does not have model code, training data or model weights open or available for anyone to see. The model is not easily available to be fine-tuned and, generally speaking, it is only available for real enterprise usage at a cost (sure, ChatGPT has a free tier, but that’s not going to cut it for a real enterprise workload).

An open technology, like Meta Llama, IBM Granite, or DeepSeek, has openly available code. Enterprises can use the models freely, generally without restrictions, including fine-tuning and customizations.

Rohan Gupta, a principal with Deloitte, told VentureBeat that the open vs. closed source debate isn’t unique or native to AI, nor is it likely to be resolved anytime soon. 

Gupta explained that closed-source providers typically offer several wrappers around their models that enable ease of use, simplified scaling, more seamless upgrades and downgrades, and a steady stream of enhancements. They also provide significant developer support, including documentation as well as hands-on advice, and often deliver tighter integrations with both infrastructure and applications. In exchange, an enterprise pays a premium for these services.

 “Open-source models, on the other hand, can provide greater control, flexibility and customization options, and are supported by a vibrant, enthusiastic developer ecosystem,” Gupta said. “These models are increasingly accessible via fully managed APIs across cloud vendors, broadening their distribution.”

Making the choice between open and closed models for enterprise AI

The question that many enterprise users might ask is what’s better: an open or a closed model? The answer, however, is not necessarily one or the other.

“We don’t view this as a binary choice,” David Guarrera, Generative AI Leader at EY Americas, told VentureBeat. “Open vs closed is increasingly a fluid design space, where models are selected, or even automatically orchestrated, based on tradeoffs between accuracy, latency, cost, interpretability and security at different points in a workflow.” 

Guarrera noted that closed models limit how deeply organizations can optimize or adapt behavior. Proprietary model vendors often restrict fine-tuning, charge premium rates, or hide the process in black boxes. While API-based tools simplify integration, they abstract away much of the control, making it harder to build highly specific or interpretable systems.

In contrast, open-source models allow for targeted fine-tuning, guardrail design and optimization for specific use cases. This matters more in an agentic future, where models are no longer monolithic general-purpose tools, but interchangeable components within dynamic workflows. The ability to finely shape model behavior, at low cost and with full transparency, becomes a major competitive advantage when deploying task-specific agents or tightly regulated solutions.

“In practice, we foresee an agentic future where model selection is abstracted away,” Guarrera said.

For example, a user may draft an email with one AI tool, summarize legal docs with another, search enterprise documents with a fine-tuned open-source model and interact with AI locally through an on-device LLM, all without ever knowing which model is doing what. 

“The real question becomes: what mix of models best suits your workflow’s specific demands?” Guarrera said.

Considering total cost of ownership

With open models, the basic idea is that the model is freely available for use. Closed models, in contrast, always come at a cost for enterprises.

The reality when it comes to considering total cost of ownership (TCO) is more nuanced.

Praveen Akkiraju, Managing Director at Insight Partners, explained to VentureBeat that TCO has many different layers. A few key considerations include infrastructure hosting costs and engineering: Are the open-source models self-hosted by the enterprise or the cloud provider? How much engineering, including fine-tuning, guardrailing and security testing, is needed to operationalize the model safely? 

Akkiraju noted that fine-tuning an open weights model can also sometimes be a very complex task. Closed frontier model companies spend enormous engineering effort to ensure performance across multiple tasks. In his view, unless enterprises deploy similar engineering expertise, they will face a complex balancing act when fine-tuning open source models. This creates cost implications when organizations choose their model deployment strategy. For example, enterprises can fine-tune multiple model versions for different tasks or use one API for multiple tasks.

Ryan Gross, Head of Data & Applications at cloud-native services provider Caylent, told VentureBeat that from his perspective, licensing terms don’t matter, except in edge-case scenarios. The largest restrictions often pertain to model availability when data residency requirements are in place. In this case, deploying an open model on infrastructure like Amazon SageMaker may be the only way to get a state-of-the-art model that still complies. When it comes to TCO, Gross noted that the tradeoff lies between per-token costs and hosting and maintenance costs. 

“There is a clear break-even point where the economics switch from closed to open models being cheaper,” Gross said. 

In his view, for most organizations, closed models, with hosting and scaling solved on the organization’s behalf, will have a lower TCO. However, for large enterprises and SaaS companies with very high demand on their LLMs but simpler use cases not requiring frontier performance, or for AI-centric product companies, hosting distilled open models can be more cost-effective.
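The break-even Gross describes reduces to comparing a roughly per-token cost (closed, API-priced) against a roughly fixed cost (open, self-hosted). A minimal sketch, with both prices as hypothetical placeholders:

```python
# Break-even between API per-token pricing and fixed self-hosting cost.
closed_cost_per_m_tokens = 5.00   # hypothetical blended $/1M tokens, API pricing
open_hosting_monthly = 20_000     # hypothetical GPUs + engineering to self-host

# Monthly token volume (in millions) above which self-hosting is cheaper:
breakeven_m_tokens = open_hosting_monthly / closed_cost_per_m_tokens
print(f"self-hosting wins above {breakeven_m_tokens:,.0f}M tokens/month")
```

Below that volume the closed model's pay-as-you-go pricing wins; above it the fixed hosting cost amortizes, which matches Gross's point that the economics flip only at high demand.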

How one enterprise software developer evaluated open vs closed models

Second Front Systems, where Josh Bosquez is CTO, is among the many firms that have had to consider and evaluate open vs closed models. 

“We use both open and closed AI models, depending on the specific use case, security requirements and strategic objectives,” Bosquez told VentureBeat.

Bosquez explained that open models allow his firm to integrate cutting-edge capabilities without the time or cost of training models from scratch. For internal experimentation or rapid prototyping, open models help his firm to iterate quickly and benefit from community-driven advancements.

“Closed models, on the other hand, are our choice when data sovereignty, enterprise-grade support and security guarantees are essential, particularly for customer-facing applications or deployments involving sensitive or regulated environments,” he said. “These models often come from trusted vendors, who offer strong performance, compliance support, and self-hosting options.”

Bosquez said that the model selection process is cross-functional and risk-informed, evaluating not only technical fit but also data handling policies, integration requirements and long-term scalability.

Looking at TCO, he said that it varies significantly between open and closed models and neither approach is universally cheaper. 

“It depends on the deployment scope and organizational maturity,” Bosquez said. “Ultimately, we evaluate TCO not just on dollars spent, but on delivery speed, compliance risk and the ability to scale securely.”

What this means for enterprise AI strategy

For smart tech decision-makers evaluating AI investments in 2025, the open vs. closed debate isn’t about picking sides. It’s about building a strategic portfolio approach that optimizes for different use cases within your organization.

The immediate action items are straightforward. First, audit your current AI workloads and map them against the decision framework outlined by the experts, considering accuracy requirements, latency needs, cost constraints, security demands and compliance obligations for each use case. Second, honestly assess your organization’s engineering capabilities for model fine-tuning, hosting and maintenance, as this directly impacts your true total cost of ownership.

Third, begin experimenting with model orchestration platforms that can automatically route tasks to the most appropriate model, whether open or closed. This positions your organization for the agentic future that industry leaders, such as EY’s Guarrera, predict, where model selection becomes invisible to end-users.


CFOs want AI that pays: real metrics, not marketing demos

This article is part of VentureBeat’s special issue, “The Real Cost of AI: Performance, Efficiency and ROI at Scale.” Read more from this special issue.

Recent surveys and VentureBeat’s conversations with CFOs suggest the honeymoon phase of AI is rapidly drawing to a close. While 2024 was dominated by pilot programs and proof-of-concept demonstrations, in mid-2025, the pressure for measurable results is intensifying, even as CFO interest in AI remains high. 

According to a KPMG survey of 300 U.S. financial executives, investor pressure to demonstrate ROI on generative AI investments has increased significantly. For 90% of organizations, investor pressure is considered “important or very important” for demonstrating ROI in Q1 2025, a sharp increase from 68% in Q4 2024. This indicates a strong and intensifying demand for measurable returns.

Meanwhile, according to a Bain Capital Ventures survey of 50 CFOs, 79% plan to increase their AI budgets this year, with 94% believing gen AI can strongly benefit at least one finance activity. This reveals a telling pattern in how CFOs are currently measuring AI value. Those who have adopted gen AI tools report seeing initial returns primarily through efficiency gains.

“We created a custom workflow that automates vendor identification to quickly prepare journal entries,” said Andrea Ellis, CFO of Fanatics Betting and Gaming. “This process used to take 20 hours during month-end close, and now, it takes us just 2 hours each month.”

Jason Whiting, CFO of Mercury Financial, echoed this efficiency focus: “Across the board, [the biggest benefit] has been the ability to increase speed of analysis. Gen AI hasn’t replaced anything, but it has made our existing processes and people better.”

But CFOs are now looking beyond simple time savings toward more strategic applications. 

The Bain data shows CFOs are most excited about applying AI to “long-standing pain points that prior generations of technology have been unable to solve.” Cosmin Pitigoi, CFO of Flywire, explained: “Forecasting trends based on large data sets has been around for a long time, but the issue has always been the model’s ability to explain the assumptions behind the forecast. AI can help not just with forecasting, but also with explaining what assumptions have changed over time.”

These recent surveys suggest that CFOs are becoming the primary gatekeepers for AI investment; however, they’re still developing the financial frameworks necessary to evaluate these investments properly. Those who develop robust evaluation methodologies first will likely gain significant competitive advantages. Those who don’t may find their AI enthusiasm outpacing their ability to measure and manage the returns.

Efficiency metrics: The first wave of AI value

The initial wave of AI value capture by finance departments has focused predominantly on efficiency metrics, with CFOs prioritizing measurable time and cost savings that deliver immediate returns. This focus on efficiency represents the low-hanging fruit of AI implementation — clear, quantifiable benefits that are easily tracked and communicated to stakeholders.

Drip Capital, a Silicon Valley-based fintech, exemplifies this approach with its AI implementation in trade finance operations. According to chief business officer Karl Boog, “We’ve been able to 30X our capacity with what we’ve done so far.” By automating document processing and enhancing risk assessment through large language models (LLMs), the company achieved a remarkable 70% productivity boost while maintaining critical human oversight for complex decisions.

KPMG research indicates this approach is widespread, with one retail company audit committee director noting how automation has improved operational efficiency and ROI. This sentiment is echoed across industries as finance leaders seek to justify their AI investments with tangible productivity improvements.

These efficiency improvements translate directly to the bottom line. Companies across sectors — from insurance to oil and gas — report that AI helps identify process inefficiencies, leading to substantial organizational cost savings and improved expense management.

Beyond simple cost reduction, CFOs are developing more sophisticated efficiency metrics to evaluate AI investments. These include time-to-completion ratios comparing pre- and post-AI implementation timelines, cost-per-transaction analyses measuring reductions in resource expenditure and labor hour reallocation metrics tracking how team members shift from manual data processing to higher-value analytical work.
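As a rough illustration, the efficiency metrics described above reduce to a few lines of arithmetic. In this sketch, the 20-hour-to-2-hour month-end close is taken from the Fanatics example earlier in the article; the cost-per-transaction figures are invented for illustration.

```python
# Sketch of first-wave AI efficiency metrics. Only the 20-hour-to-2-hour
# close is from the article; all other inputs are hypothetical.

def time_to_completion_ratio(hours_before: float, hours_after: float) -> float:
    """Pre- vs. post-AI timeline ratio; values above 1 mean the task got faster."""
    return hours_before / hours_after

def cost_per_transaction(total_cost: float, transactions: int) -> float:
    """Resource expenditure per processed transaction."""
    return total_cost / transactions

# Month-end close example cited in the article: 20 hours down to 2 hours.
print(time_to_completion_ratio(20, 2))       # 10.0
# Hypothetical before/after comparison at the same spend:
print(cost_per_transaction(50_000, 1_000))   # 50.0
print(cost_per_transaction(50_000, 5_000))   # 10.0
```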

However, leading CFOs recognize that while efficiency metrics provide a solid foundation for initial ROI calculations, they represent just the beginning of AI’s potential value. As finance leaders gain confidence in measuring these direct returns, they’re developing more comprehensive frameworks to capture AI’s full strategic value — moving well beyond the efficiency calculations that characterized early adoption phases.

Beyond efficiency: The new financial metrics

As CFOs move beyond the initial fascination with AI-driven efficiency gains, they’re developing new financial metrics that more comprehensively capture AI’s business impact. This evolution reflects a maturing approach to AI investments, with finance leaders adopting more sophisticated evaluation frameworks that align with broader corporate objectives.

The surveys highlight a notable shift in primary ROI metrics. While efficiency gains remain important, productivity metrics are now overtaking pure profitability measures as the chief priority for AI initiatives in 2025. This represents a fundamental change in how CFOs assess value, focusing on AI’s ability to enhance human capabilities rather than simply reduce costs.

Time to value (TTV) is emerging as a critical new metric in investment decisions. Only about one-third of AI leaders anticipate being able to evaluate ROI within six months, making rapid time-to-value a key consideration when comparing different AI opportunities. This metric will help CFOs prioritize quick-win projects that can deliver measurable returns while building organizational confidence in larger AI initiatives.

Data quality measurements will increasingly be incorporated into evaluation frameworks, with 64% of leaders citing data quality as their most significant AI challenge. Forward-thinking CFOs now incorporate data readiness assessments and ongoing data quality metrics into their AI business cases, recognizing that even the most promising AI applications will fail without high-quality data inputs.

Adoption rate metrics have also become standard in AI evaluation. Finance leaders track how quickly and extensively AI tools are being utilized across departments, using this as a leading indicator of potential value realization. These metrics help identify implementation challenges early and inform decisions about additional training or system modifications.

“The biggest benefit has been the ability to increase speed of analysis,” noted Jason Whiting of Mercury Financial. This perspective represents the bridge between simple efficiency metrics and more sophisticated value assessments — recognizing that AI’s value often comes not from replacing existing processes but enhancing them.

Some CFOs are implementing comprehensive ROI formulas that incorporate both direct and indirect benefits (VAI Consulting):

ROI = (Net Benefit / Total Cost) × 100

Where net benefit equals the sum of direct financial benefits plus an estimated value of indirect benefits, minus total investment costs. This approach acknowledges that AI’s full value encompasses both quantifiable savings and intangible strategic advantages, such as improved decision quality and enhanced customer experience.
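Under that definition, the formula translates directly into a minimal calculation. The dollar amounts below are invented for illustration:

```python
# ROI = (net benefit / total cost) x 100, where net benefit is direct
# benefits plus estimated indirect benefits minus total investment costs.
# All figures below are hypothetical.

def ai_roi(direct_benefit: float, indirect_benefit_estimate: float,
           total_cost: float) -> float:
    net_benefit = direct_benefit + indirect_benefit_estimate - total_cost
    return (net_benefit / total_cost) * 100

# Example: $500k direct savings, $150k estimated indirect value, $400k cost.
print(ai_roi(500_000, 150_000, 400_000))  # 62.5
```

Note that the indirect term is an estimate by construction, which is why the approach is best suited to scorecards rather than audited financials.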

For companies with more mature AI implementations, these new metrics are becoming increasingly standardized and integrated into regular financial reporting. The most sophisticated organizations now produce AI value scorecards that track multiple dimensions of performance, linking AI system outputs directly to business outcomes and financial results.

As CFOs refine these new financial metrics, they’re creating a more nuanced picture of AI’s true value — one that extends well beyond the simple time and cost savings that dominated early adoption phases.

Amortization timelines: Recalibrating investment horizons

CFOs are fundamentally rethinking how they amortize AI investments, developing new approaches that acknowledge the unique characteristics of these technologies. Unlike traditional IT systems with predictable depreciation schedules, AI investments often yield evolving returns that increase as systems learn and improve over time. Leading finance executives now evaluate AI investments through the lens of sustainable competitive advantage — asking not just “How much will this save?” but “How will this transform our market position?”

“ROI directly correlates with AI maturity,” according to KPMG, which found that 61% of AI leaders report higher-than-expected ROI, compared with only 33% of beginners and implementers. This correlation is prompting CFOs to develop more sophisticated amortization models that anticipate accelerating returns as AI deployments mature.

The difficulty in establishing accurate amortization timelines remains a significant barrier to AI adoption. “Uncertain ROI/difficulty developing a business case” is cited as a challenge by 33% of executives, particularly those in the early stages of AI implementation. This uncertainty has led to a more cautious, phased approach to investment.

To address this challenge, leading finance teams are implementing pilot-to-scale methodologies to validate ROI before full deployment. This approach enables CFOs to gather accurate performance data, refine their amortization estimates, and make more informed scaling decisions.

The timeframe for expected returns varies significantly based on the type of AI implementation. Automation-focused AI typically delivers more predictable short-term returns, whereas strategic applications, such as improved forecasting, may have longer, less certain payback periods. Progressive CFOs are developing differentiated amortization schedules that reflect these variations rather than applying one-size-fits-all approaches.

Some finance leaders are adopting rolling amortization models that are adjusted quarterly based on actual performance data. This approach acknowledges the dynamic nature of AI returns and allows for ongoing refinement of financial projections. Rather than setting fixed amortization schedules at the outset, these models incorporate learning curves and performance improvements into evolving financial forecasts.
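One way such a rolling model could work, sketched with assumed numbers (the blending weight and benefit figures are illustrative, not drawn from the article): each quarter, the next-quarter projection is re-estimated from actual results rather than fixed at the outset.

```python
# Hypothetical rolling forecast: blend the original projection with the
# trailing average of actual quarterly benefits. Weight and figures are
# illustrative assumptions.

def rolling_benefit_forecast(projected_quarterly_benefit: float,
                             actual_benefits: list,
                             weight: float = 0.5) -> float:
    trailing_avg = sum(actual_benefits) / len(actual_benefits)
    return weight * projected_quarterly_benefit + (1 - weight) * trailing_avg

# An AI system outperforming its $100k/quarter projection:
print(rolling_benefit_forecast(100_000, [110_000, 125_000, 140_000]))  # 112500.0
```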

One entertainment company implemented a gen AI-driven tool that scans financial developments, identifies anomalies and automatically generates executive-ready alerts. While the immediate ROI stemmed from efficiency gains, the CFO developed an amortization model that also factored in the system’s increasing accuracy over time and its expanding application across various business units.

Many CFOs are also factoring in how AI investments contribute to building proprietary data assets that appreciate rather than depreciate over time. Unlike traditional technology investments that lose value as they age, AI systems and their associated data repositories often become more valuable as they accumulate training data and insights.

This evolving approach to amortization represents a significant departure from traditional IT investment models. By developing more nuanced timelines that reflect AI’s unique characteristics, CFOs are creating financial frameworks that better capture the true economic value of these investments and support a more strategic allocation of resources.

Strategic value integration: Linking AI to shareholder returns

Forward-thinking CFOs are moving beyond operational metrics to integrate AI investments into broader frameworks for creating shareholder value. This shift represents a fundamental evolution in how financial executives evaluate AI — positioning it not merely as a cost-saving technology but as a strategic asset that drives enterprise growth and competitive differentiation.

This more sophisticated approach assesses AI’s impact on three critical dimensions of shareholder value: revenue acceleration, risk reduction and strategic optionality. Each dimension requires different metrics and evaluation frameworks, creating a more comprehensive picture of AI’s contribution to enterprise value.

Revenue acceleration metrics focus on how AI enhances top-line growth by improving customer acquisition, increasing the share of wallet and expanding market reach. These metrics track AI’s influence on sales velocity, conversion rates, customer lifetime value and price optimization — connecting algorithmic capabilities directly to revenue performance.

Risk reduction frameworks assess how AI enhances forecasting accuracy, improves scenario planning, strengthens fraud detection and optimizes capital allocation. By quantifying risk-adjusted returns, CFOs can demonstrate how AI investments reduce earnings volatility and improve business resilience — factors that directly impact valuation multiples.

Perhaps most importantly, leading CFOs are developing methods to value strategic optionality — the capacity of AI investments to create new business possibilities that didn’t previously exist. This approach recognizes that AI often delivers its most significant value by enabling entirely new business models or unlocking previously inaccessible market opportunities.

To effectively communicate this strategic value, finance leaders are creating new reporting mechanisms tailored to different stakeholders. Some are establishing comprehensive AI value scorecards that link system performance to tangible business outcomes, incorporating both lagging indicators (financial results) and leading indicators (operational improvements) that predict future financial performance.

Executive dashboards now regularly feature AI-related metrics alongside traditional financial KPIs, making AI more visible to senior leadership. These integrated views enable executives to understand how AI investments align with strategic priorities and shareholder expectations.

For board and investor communication, CFOs are developing structured approaches that highlight both immediate financial returns and long-term strategic advantages. Rather than treating AI as a specialized technology investment, these frameworks position it as a fundamental business capability that drives sustainable competitive differentiation.

By developing these integrated strategic value frameworks, CFOs ensure that AI investments are evaluated not only on their immediate operational impact but also on their contribution to the company’s long-term competitive position and shareholder returns. This more sophisticated approach is rapidly becoming a key differentiator between companies that treat AI as a tactical tool and those that leverage it as a strategic asset.

Risk-adjusted returns: The risk management equation

As AI investments grow in scale and strategic importance, CFOs are incorporating increasingly sophisticated risk assessments into their financial evaluations. This evolution reflects the unique challenges AI presents — balancing unprecedented opportunities against novel risks that traditional financial models often fail to capture.

The risk landscape for AI investments is multifaceted and evolving rapidly. Recent surveys indicate that risk management, particularly in relation to data privacy, is expected to be the biggest challenge to generative AI strategies for 82% of leaders in 2025. This concern is followed closely by data quality issues (64%) and questions of trust in AI outputs (35%).

Forward-thinking finance leaders are developing comprehensive risk-adjusted return frameworks that quantify and incorporate these various risk factors. Rather than treating risk as a binary go/no-go consideration, these frameworks assign monetary values to different risk categories and integrate them directly into ROI calculations.

Data security and privacy vulnerabilities represent a primary concern, with 57% of executives citing these as top challenges. CFOs are now calculating potential financial exposure from data breaches or privacy violations and factoring these costs into their investment analyses. This includes estimating potential regulatory fines, litigation expenses, remediation costs and reputational damage.

Regulatory compliance represents another significant risk factor. With many executives concerned about ensuring compliance with changing regulations, financial evaluations increasingly include contingency allocations for regulatory adaptation. An aerospace company executive noted that “complex regulations make it difficult for us to achieve AI readiness,” highlighting how regulatory uncertainty complicates financial planning.

Beyond these external risks, CFOs are quantifying implementation risks such as adoption failures, integration challenges and technical performance issues. By assigning probability-weighted costs to these scenarios, they create more realistic projections that acknowledge the inherent uncertainties in AI deployment.
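Probability-weighting is simple expected-value arithmetic. A hypothetical sketch, with invented scenarios and figures:

```python
# Illustrative probability-weighted cost of implementation risks, as
# described above. Scenario names, probabilities and costs are hypothetical.
risk_scenarios = [
    # (scenario, probability, cost if it occurs)
    ("adoption failure",       0.15, 800_000),
    ("integration delays",     0.30, 250_000),
    ("performance shortfall",  0.10, 400_000),
]

expected_risk_cost = sum(p * cost for _, p, cost in risk_scenarios)
print(expected_risk_cost)  # 235000.0
```

That expected cost can then be subtracted from projected benefits to produce the more realistic projections the article describes.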

The “black box” nature of certain AI technologies presents unique challenges for risk assessment. As stakeholders become increasingly wary of trusting AI results without understanding the underlying logic, CFOs are developing frameworks to evaluate transparency risks and their potential financial implications. This includes estimating the costs of additional validation procedures, explainability tools and human oversight mechanisms.

Some companies are adopting formal risk-adjustment methodologies borrowed from other industries. One approach applies a modified weighted average cost of capital (WACC) that incorporates AI-specific risk premiums. Others use risk-adjusted net present value calculations that explicitly account for the unique uncertainty profiles of different AI applications.
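A toy version of that risk-adjusted calculation might look like the following; the base WACC, AI risk premium and cash flows are all assumed values, not figures from the article.

```python
# NPV discounted at a modified WACC that adds an AI-specific risk premium.
# All rates and cash flows are illustrative assumptions.

def risk_adjusted_npv(cash_flows: list, base_wacc: float,
                      ai_risk_premium: float) -> float:
    """cash_flows[0] is the upfront investment (negative); later entries
    are annual net benefits, discounted at base WACC plus the premium."""
    rate = base_wacc + ai_risk_premium
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# $1M upfront, three years of benefits, 8% WACC plus a 4% AI risk premium:
print(round(risk_adjusted_npv([-1_000_000, 400_000, 500_000, 600_000],
                              0.08, 0.04), 2))
```

Raising the premium shrinks the NPV, which is exactly how the adjustment penalizes higher-uncertainty AI applications.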

The transportation sector provides an illustrative example of this evolving approach. As one chief data officer noted, “The data received from AI requires human verification, and this is an important step that we overlook.” This recognition has led transportation CFOs to build verification costs directly into their financial models rather than treating them as optional add-ons.

By incorporating these sophisticated risk adjustments into their financial evaluations, CFOs are creating more realistic assessments of AI’s true economic value. This approach enables more confident investment decisions and helps organizations maintain appropriate risk levels as they scale their AI capabilities.

The CFO’s AI evaluation playbook: From experiments to enterprise value

As AI transitions from experimental projects to enterprise-critical systems, CFOs are developing more disciplined, comprehensive frameworks for evaluating these investments. The most successful approaches strike a balance between rigor and flexibility, acknowledging both the unique characteristics of AI and its integration into broader business strategy.

The emerging CFO playbook for AI evaluation contains several key elements that differentiate leaders from followers.

First is the implementation of multi-dimensional ROI frameworks that capture both efficiency gains and strategic value creation. Rather than focusing exclusively on cost reduction, these frameworks incorporate productivity enhancements, decision quality improvements and competitive differentiation into a holistic value assessment.

Second is the adoption of phased evaluation approaches that align with AI’s evolutionary nature. Leading CFOs establish clear metrics for each development stage — from initial pilots to scaled deployment — with appropriate risk adjustments and expected returns for each phase. This approach recognizes that AI investments often follow a J-curve, with value accelerating as systems mature and applications expand.

Third is the integration of AI metrics into standard financial planning and reporting processes. Rather than treating AI as a special category with unique evaluation criteria, forward-thinking finance leaders are incorporating AI performance indicators into regular budget reviews, capital allocation decisions and investor communications. This normalization signals AI’s transition from experimental technology to core business capability.

The most sophisticated organizations are also implementing formal governance structures that connect AI investments directly to strategic objectives. These governance frameworks ensure that AI initiatives remain aligned with enterprise priorities while providing the necessary oversight to manage risks effectively. By establishing clear accountability for both technical performance and business outcomes, these structures help prevent the disconnection between AI capabilities and business value that has plagued many early adopters.

As investors and boards increasingly scrutinize AI investments, CFOs are developing more transparent reporting approaches that clearly communicate both current returns and future potential. These reports typically include standardized metrics that track AI’s contribution to operational efficiency, customer experience, employee productivity and strategic differentiation — providing a comprehensive view of how these investments enhance shareholder value.

The organizations gaining a competitive advantage through AI are those where CFOs have moved to become strategic partners in AI transformation. These finance leaders work closely with technology and business teams to identify high-value use cases, establish appropriate success metrics and create financial frameworks that support responsible innovation while maintaining appropriate risk management.

The CFOs who master these new evaluation frameworks will drive the next wave of AI adoption — one characterized not by speculative experimentation but by disciplined investment in capabilities that deliver sustainable competitive advantage. As AI continues to transform business models and market dynamics, these financial frameworks will become increasingly critical to organizational success.

The CFO’s AI evaluation framework: Key metrics and considerations

Efficiency
• Traditional metrics: cost reduction, time savings, headcount impact
• Emerging AI metrics: cost-per-output, process acceleration ratio, labor reallocation value
• Key considerations: measure both direct and indirect efficiency gains; establish clear pre-implementation baselines; track productivity improvements beyond cost savings

Amortization
• Traditional metrics: fixed depreciation schedules, standard ROI timelines, uniform capital allocation
• Emerging AI metrics: learning curve adjustments, value acceleration factors, pilot-to-scale validation
• Key considerations: recognize AI’s improving returns over time; apply different timelines for different AI applications; implement phase-gated funding tied to performance

Strategic Value
• Traditional metrics: revenue impact, margin improvement, market share
• Emerging AI metrics: decision quality metrics, data asset appreciation, strategic optionality value
• Key considerations: connect AI investments to competitive differentiation; quantify both current and future strategic benefits; measure contribution to innovation capabilities

Risk management
• Traditional metrics: implementation risk, technical performance risk, financial exposure
• Emerging AI metrics: data privacy risk premium, regulatory compliance factor, explainability/transparency risk
• Key considerations: apply risk-weighted adjustments to projected returns; quantify mitigation costs and residual risk; factor in emerging regulatory and ethical considerations

Governance
• Traditional metrics: project-based oversight, technical success metrics, siloed accountability
• Emerging AI metrics: enterprise AI governance, cross-functional value metrics, integrated performance dashboards
• Key considerations: align AI governance with corporate governance; establish clear ownership of business outcomes; create transparent reporting mechanisms for all stakeholders

Read More »

AI agents are hitting a liability wall. Mixus has a plan to overcome it using human overseers on high-risk workflows

Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more

While enterprises face the challenges of deploying AI agents in critical applications, a new, more pragmatic model is emerging that puts humans back in control as a strategic safeguard against AI failure. One such example is Mixus, a platform that uses a “colleague-in-the-loop” approach to make AI agents reliable for mission-critical work. This approach is a response to the growing evidence that fully autonomous agents are a high-stakes gamble.

The high cost of unchecked AI

The problem of AI hallucinations has become a tangible risk as companies explore AI applications. In a recent incident, the AI-powered code editor Cursor saw its own support bot invent a fake policy restricting subscriptions, sparking a wave of public customer cancellations. Similarly, the fintech company Klarna famously reversed course on replacing customer service agents with AI after admitting the move resulted in lower quality. In a more alarming case, New York City’s AI-powered business chatbot advised entrepreneurs to engage in illegal practices, highlighting the catastrophic compliance risks of unmonitored agents. These incidents are symptoms of a larger capability gap. According to a May 2025 Salesforce research paper, today’s leading agents succeed only 58% of the time on single-step tasks and just 35% of the time on multi-step ones, highlighting “a significant gap between current LLM capabilities and the multifaceted demands of real-world enterprise scenarios.”

The colleague-in-the-loop model

To bridge this gap, a new approach focuses on structured human oversight. “An AI agent should act at your direction and on your behalf,” Mixus co-founder Elliot Katz told VentureBeat.
“But without built-in organizational oversight, fully autonomous agents often create more problems than they solve.”  This philosophy underpins Mixus’s colleague-in-the-loop model, which embeds human

Read More »

New York Gov. Hochul hints at ‘fleet-style approach’ to nuclear deployments

Dive Brief:

New York could take a page from Ontario’s playbook and deploy multiple reactors to reach and possibly exceed the 1-GW target Democratic Gov. Kathy Hochul announced on Monday, analysts with Clean Air Task Force said in an interview. Whether the New York Power Authority ultimately selects a large light-water reactor like the Westinghouse AP1000 or multiple units of a small modular design like the GE Hitachi BWRX-300, lessons learned on recent and ongoing nuclear builds could translate to lower final costs, said John Carlson, CATF’s senior Northeast regional policy manager. That could enable a “fleet-style approach” to deployment similar to Ontario Power Generation’s plan to build four 300-MW BWRX-300 reactors in sequence, lowering the final cost per unit, said Victor Ibarra, senior manager for CATF’s advanced nuclear energy program. On Monday, Hochul said the plan would “allow for future collaboration with other states and Ontario.”

Dive Insight:

Gov. Hochul on Monday directed NYPA and the New York Department of Public Service “to develop at least one new nuclear energy facility with a combined capacity of no less than one gigawatt of electricity, either alone or in partnership with private entities,” in upstate New York. As governor, Hochul has considerable influence over NYPA, the state-owned electric utility. In February, for example, she “demand[ed]” NYPA suspend a proposed rate hike. Hochul’s announcement made no mention of specific reactor types or designs, but the suggestion that multiple plants could be in the offing suggests NYPA could consider small modular designs alongside a large light-water reactor, Ibarra said. “It’s good that they’re taking a minute to explore both options,” Carlson said. “I don’t think they know which one is most beneficial yet.” Hochul said NYPA would immediately begin evaluating “technologies, business models and locations” for the first plant. The preconstruction process will

Read More »

FERC’s Christie calls for dispatchable resources after grid operators come ‘close to the edge’

The ability of Midcontinent and East Coast grid operators to narrowly handle this week’s extreme heat and humidity without blackouts reflects the urgent need to ensure the United States has adequate power supplies, according to Mark Christie, chairman of the Federal Energy Regulatory Commission. “We’re simply not building generation fast enough, and we’re not keeping generation that we need to keep,” Christie said Thursday during a media briefing after the agency’s open meeting. “Some of our systems really came close to the edge.” The PJM Interconnection, the largest U.S. grid operator, hit a peak load of about 161 GW on Monday, nearly 5% above its 154 GW peak demand forecast for this summer and the highest demand on its system since 2011. The grid operator had about 10 GW to spare at the peak, according to Christie. At that peak, PJM’s fuel mix included gas at about 44%, nuclear at 20%, coal at 19%, solar at 5% and wind at 4%, according to Christie. Also, PJM told Christie that demand response was “essential” in reducing load, he said. PJM used nearly 4,000 MW of demand response to reduce its load, according to FERC Commissioner Judy Chang. “I see load flexibility as a key tool for grid operators to meet the challenges that we face,” Chang said. PJM called on demand response resources on Monday in its mid-Atlantic and Dominion regions, on Tuesday across its footprint and on Wednesday in its eastern zones, according to Dan Lockwood, a PJM spokesman. PJM was within its reserve requirements, but used DR to provide additional resources for the grid, he said in an email. Resource adequacy is the “central issue” facing the U.S., according to Christie, who said blackouts during the extreme heat could have been deadly. “You never know about the next time,

Read More »

Dangote Plans to List Africa’s Biggest Oil Refinery by Next Year

Aliko Dangote, Africa’s richest person, plans a stock listing for his Nigerian crude oil refinery by the end of next year to widen the company’s investor base. The billionaire also plans this year to list the group’s urea plant, which has a capacity to produce 2.8 million tons of the crop nutrient per annum, Dangote told the African Export-Import Bank’s annual general meeting in Nigeria’s capital, Abuja, on Friday. The oil facility can process 650,000 barrels of crude a day, making it the continent’s biggest refinery. Nigeria’s downstream regulator and fuel marketers have accused Dangote of seeking to become a monopoly with his new refinery. A listing — through an initial public offering — could help woo investors including state-owned pension funds. The $20 billion Dangote Refinery outside the commercial hub Lagos, which became operational last year, currently produces aviation fuel, naphtha, diesel and gasoline.

Monopoly Accusation

It’s “important to list the refinery so that people will not be calling us a monopoly,” Dangote said. “They will now say we have shares, so let everybody have a part of it.” The tycoon, who had planned to start construction of a 5,000 ton steel plant after completing the refinery, last year scrapped the proposal because of the allegations. Dangote earlier this year said his group is on track to generate total revenue of $30 billion in 2026. On Friday, he said that the company plans to surpass Qatar as the world’s biggest exporter of urea within four years. The facility currently exports 37% of its output to the US.

Read More »

Energy Department Withdraws from Biden-Era Columbia River System Memorandum of Understanding

WASHINGTON— U.S. Secretary of Energy Chris Wright today announced that the Department of Energy, in coordination with the White House Council on Environmental Quality (CEQ), the Departments of Commerce and the Interior and the U.S. Army Corps of Engineers, has officially withdrawn from the Columbia River System Memorandum of Understanding (MOU). Today’s action follows President Trump’s Memorandum directing the federal government to halt the Biden Administration’s radical Columbia River basin policy and will ensure Americans living in the Pacific Northwest can continue to rely on affordable hydropower from the Lower Snake River dams to help meet their growing power needs. “The Pacific Northwest deserves energy security, not energy scarcity. Dams in the Columbia River Basin have provided affordable and reliable electricity to millions of American families and businesses for decades,” said Energy Secretary Chris Wright. “Thanks to President Trump’s leadership, American taxpayer dollars will not be spent dismantling critical infrastructure, reducing our energy-generating capacity or on radical nonsense policies that dramatically raise prices on the American people. This Administration will continue to protect America’s critical energy infrastructure and ensure reliable, affordable power for all Americans.”

BACKGROUND: On June 10, 2025, President Trump signed the Presidential Memorandum, Stopping Radical Environmentalism to Generate Power for the Columbia River Basin, revoking the prior Presidential Memorandum, Restoring Healthy and Abundant Salmon, Steelhead, and Other Native Fish Populations in the Columbia River Basin, part of the radical green energy agenda calling for “equitable treatment for fish.” The Biden-era MOU required the federal government to spend over $1 billion and comply with 36 pages of costly, onerous commitments aimed at replacing services provided by the Lower Snake River Dams and advancing the possibility of breaching them. 
Breaching the dams would have doubled the region’s risk of power shortages, driven wholesale electricity rates up by as much

Read More »

CTGT wins Best Presentation Style award at VB Transform 2025

Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more

San Francisco-based CTGT, a startup focused on making AI more trustworthy through feature-level model customization, won the Best Presentation Style award at VB Transform 2025 in San Francisco. Founded by 23-year-old Cyril Gorlla, the company showcased how its technology helps enterprises overcome AI trust barriers by directly modifying model features instead of using traditional fine-tuning or prompt engineering methods.

During his presentation, Gorlla highlighted the “AI Doom Loop” faced by many enterprises: 54% of businesses cite AI as their highest tech risk, according to Deloitte, while McKinsey reports that 44% of organizations have experienced negative consequences from AI implementation. “A large part of this conference has been about the AI doom loop,” Gorlla explained during his presentation. “Unfortunately, a lot of these [AI investments] don’t pan out. J&J just canceled hundreds of AI pilots because they didn’t really deliver ROI due to no fundamental trust in these systems.”

Breaking the AI compute wall

CTGT’s approach represents a significant departure from conventional AI customization techniques. The company was founded on research Gorlla conducted while holding an endowed chair at the University of California San Diego. In 2023, Gorlla published a paper at the International Conference on Learning Representations (ICLR) describing a method for evaluating and training AI models that was up to 500 times faster than existing approaches while achieving “three nines” (99.9%) of accuracy. Rather than relying on brute-force scaling or traditional deep learning methods, CTGT has developed what it calls an “entirely new AI stack” that fundamentally reimagines how neural networks learn. The company’s innovation focuses on understanding and intervening at the feature level of AI models.
The company’s approach differs fundamentally from standard interpretability solutions that

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenter and Energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE