Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Insights on artificial intelligence, from enterprise adoption to the infrastructure that powers it.

Bitcoin

Coverage of Bitcoin mining, markets, and the energy economics behind the network.

Datacenter

News and analysis on datacenter capacity, costs, and construction.

Energy

Updates across power, oil and gas, and the wider energy landscape.


Featured Articles

OpenAI’s API lead explains how enterprises are already succeeding with its Agents SDK and Responses API

At VentureBeat’s Transform 2025 conference, Olivier Godement, Head of Product for OpenAI’s API platform, provided a behind-the-scenes look at how enterprise teams are adopting and deploying AI agents at scale. In a 20-minute panel discussion I hosted exclusively with Godement, the former Stripe researcher and current OpenAI API boss unpacked OpenAI’s latest developer tools—the Responses API and Agents SDK—while highlighting real-world patterns, security considerations, and cost-return examples from early adopters like Stripe and Box. For enterprise leaders unable to attend the session live, here are the eight most important takeaways: Agents Are Rapidly Moving From Prototype to Production According to Godement, 2025 marks a real shift in how AI is being deployed at scale. With over a million monthly active developers now using OpenAI’s API platform globally, and token usage up 700% year over year, AI is moving beyond experimentation. “It’s been five years since we launched essentially GPT-3… and man, the past five years has been pretty wild.” Godement emphasized that current demand isn’t just about chatbots anymore. “AI use cases are moving from simple Q&A to actually use cases where the application, the agent, can do stuff for you.” This shift prompted OpenAI to launch two major developer-facing tools in March: the Responses API and the Agents SDK. When to Use Single Agents vs. Sub-Agent Architectures A major theme was architectural choice. Godement noted that single-agent loops, which encapsulate full tool access and context in one model, are conceptually elegant but often impractical at scale. “Building accurate and reliable single agents is hard. Like, it’s really hard.” As complexity increases—more tools, more possible user inputs, more logic—teams often move toward modular architectures with specialized sub-agents.
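To make the single-agent vs. sub-agent trade-off concrete, here is a minimal sketch of the triage-and-handoff pattern, assuming the Agents SDK's documented Agent/Runner interface from the `openai-agents` Python package and an `OPENAI_API_KEY` in the environment. The agent names and instructions are illustrative, not taken from the session.

```python
# A minimal sub-agent sketch: a triage agent hands off to specialized
# sub-agents instead of one agent carrying every tool and instruction.
# Assumes `pip install openai-agents` and OPENAI_API_KEY set; all names
# below are hypothetical.
from agents import Agent, Runner

billing_agent = Agent(
    name="Billing agent",
    instructions="Answer questions about invoices and payments only.",
)
support_agent = Agent(
    name="Support agent",
    instructions="Troubleshoot product issues only.",
)

# The triage agent owns routing, not domain logic; each sub-agent keeps a
# narrow scope, which is the modular architecture described above.
triage_agent = Agent(
    name="Triage agent",
    instructions="Route each request to the billing or support agent.",
    handoffs=[billing_agent, support_agent],
)

result = Runner.run_sync(triage_agent, "I was charged twice this month.")
print(result.final_output)
```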

Read More »

How Highmark Health and Google Cloud are using Gen AI to streamline medical claims and improve care: 6 key lessons

Among the numerous educational and startlingly insightful panel discussions on AI enterprise integrations featuring industry leaders at VentureBeat’s Transform 2025 conference this week was one led by Google Cloud Platform Vice President and Chief Technology Officer (CTO) Will Grannis and Richard Clarke, Highmark Health’s Senior Vice President and Chief Data and Analytics Officer. That session, “The New AI Stack in Healthcare: Architecting for Multi-Model, Multi-Modal Environments,” delivered a pragmatic look at how the two organizations are collaborating to deploy AI at scale across more than 14,000 employees at Highmark Health, the large U.S. healthcare system based in Western Pennsylvania. The collaboration has onboarded all of these employees and turned them into active users without losing sight of complexity, regulation, or clinician trust. So, how did Google Cloud and Highmark go about it? Read on to find out. A Partnership Built on Prepared Foundations Highmark Health, an integrated payer-provider system serving over 6 million members, is using Google Cloud’s AI models and infrastructure to modernize legacy systems, boost internal efficiency, and improve patient outcomes. What sets this initiative apart is its focus on platform engineering—treating AI as a foundational shift in how work gets done, not just another tech layer. Richard Clarke, Highmark’s Chief Data and Analytics Officer, emphasized the importance of building flexible infrastructure early. “There’s nothing more legacy than an employment platform coded in COBOL,” Clarke noted, but Highmark has integrated even those systems with cloud-based AI models. The result: up to 90% workload replication without systemic disruption, enabling smoother transitions and real-time insights into complex administrative processes. Google Cloud CTO Will Grannis echoed that success begins with groundwork. “This may take three, four,

Read More »

Data center costs surge up to 18% as enterprises face two-year capacity drought

“AI workloads, especially training and archival, can absorb 10-20ms latency variance if offset by 30-40% cost savings and assured uptime,” said Gogia. “Des Moines and Richmond offer better interconnection diversity today than some saturated Tier-1 hubs.” Contract flexibility is also crucial. Rather than traditional long-term leases, enterprises are negotiating shorter agreements with renewal options and exploring revenue-sharing arrangements tied to business performance. Maximizing what you have With expansion becoming more costly, enterprises are getting serious about efficiency through aggressive server consolidation, sophisticated virtualization and AI-driven optimization tools that squeeze more performance from existing space. The companies performing best in this constrained market are focusing on optimization rather than expansion. Some embrace hybrid strategies blending existing on-premises infrastructure with strategic cloud partnerships, reducing dependence on traditional colocation while maintaining control over critical workloads. The long wait When might relief arrive? CBRE’s analysis shows primary markets had a record 6,350 MW under construction at year-end 2024, more than double 2023 levels. However, power capacity constraints are forcing aggressive pre-leasing and extending construction timelines to 2027 and beyond. The implications for enterprises are stark: with construction timelines extending years due to power constraints, companies are essentially locked into current infrastructure for at least the next few years. Those adapting their strategies now will be better positioned when capacity eventually returns.

Read More »

Oil Tanker Rates Collapse as Conflict in Middle East Abates

The cost of shipping Middle East crude to customers in Asia collapsed on Thursday, the latest sign of oil markets returning to normal after conflict eased in the world’s top petroleum-exporting region. Charter rates slumped by 17% to 55.50 industry-standard Worldscale points, according to data from the Baltic Exchange in London. It works out at roughly $1.60 a barrel. “Risk premiums have naturally faded,” said Fredrik Dybwad, an analyst at Fearnley Securities AS. “There is ample vessel availability, and considering normal seasonality, rates should naturally find a lower level.” Shipping prices soared two weeks ago amid concern Iran might disrupt maritime traffic around the Strait of Hormuz, a vital waterway through which 20% of the world’s oil and liquefied natural gas must pass. After almost two weeks of fighting between Iran and Israel that began on June 13, there’s since been a ceasefire, hitting oil prices and lowering the risks for ships that enter the region. The Joint Maritime Information Center, a naval liaison with commercial shipping in the region, said Thursday that no hostilities had been reported in the Strait of Hormuz over the past 48 hours and that traffic had returned to normal levels. “A sustained period of inactivity and strengthening of the ceasefire agreement will stabilize maritime tension in the Arabian Gulf,” it said in a note. “Now that the market has become sanguine about Iran shutting the Strait of Hormuz, ships are running fluidly again, the premium has been removed, and rates are correcting lower meaningfully,” said Jonathan Chappell, senior managing director at Evercore ISI. The Worldscale system is designed to let owners and charterers quickly calculate the latest earnings and per-barrel costs on thousands of trade routes. Vessels on the industry’s benchmark Saudi Arabia-to-China route are earning $35,281 a day, according to the Baltic Exchange. They were
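For readers unfamiliar with Worldscale, the points-to-dollars arithmetic is simple: a WS number is a percentage of a route's published flat rate in dollars per tonne. A small sketch under stated assumptions (the flat rate below is hypothetical; Worldscale publishes the real ones annually) reproduces the roughly $1.60-a-barrel figure quoted above.

```python
# Worldscale points are a percentage of a route's flat rate (USD per metric
# tonne). The flat rate here is an assumed ~$21/tonne for the Middle
# East-to-Asia route, purely illustrative; 7.33 barrels per tonne is a
# typical crude conversion factor.
ASSUMED_FLAT_RATE_USD_PER_TONNE = 21.0
BARRELS_PER_TONNE = 7.33

def cost_per_barrel(worldscale_points: float) -> float:
    usd_per_tonne = ASSUMED_FLAT_RATE_USD_PER_TONNE * worldscale_points / 100.0
    return usd_per_tonne / BARRELS_PER_TONNE

# The article's WS 55.50 lands near the quoted ~$1.60 a barrel.
print(f"${cost_per_barrel(55.50):.2f} per barrel")  # -> $1.59 per barrel
```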

Read More »

Equinor, Shell Unveil Name of UK North Sea JV

Shell PLC and Equinor ASA have named their United Kingdom North Sea joint venture Adura, which they announced in December as the biggest independent producer on the UK’s side of the sea. “Work continues towards securing regulatory approvals, with launch of the IJV [incorporated JV] expected by the end of this year”, Norway’s majority state-owned Equinor said in an online statement. Adura, which will be equally owned, combines the two companies’ offshore assets in the UK, where Shell currently produces over 100,000 barrels of oil equivalent a day (boed) and Equinor about 38,000 boed. “Adura is expected to produce over 140,000 barrels of oil equivalent per day in 2025”, Equinor said. The name Adura is “rooted in their [the companies’] respective heritage and focused on shaping the future of the basin in the years ahead”, Equinor explained. “Adura has been created to bring together the A of Aberdeen and the dura of durability. It’s a company built on firm foundations, much like the strong granite synonymous with the city”. “Adura will sustain domestic oil and gas production and security of energy supply in the UK and beyond”, Equinor added. Adura will include Equinor’s 29.89 percent stake in the CNOOC Ltd.-operated Buzzard field, which started production in 2007; a 65.11 percent operating stake in Mariner, online since 2019; and an 80 percent operating stake in Rosebank, expected to come onstream in 2026. Shell will contribute its 27.97 percent ownership in the BP PLC-operated Clair field, which began production in 2005; a 50 percent operating stake in Gannet, started up in 1992; a 100 percent stake in Jackdaw, for which Shell plans to seek a new consent following a court nullification; a 21.23 percent operating stake in Nelson, which started production in 1994; a 50 percent operating stake in Penguins, which started production in 2003; a 92.52 percent operating stake in Pierce,

Read More »

Can oil and gas solve the AI power dilemma?

Joe Brettell is a partner at Prosody Group. The promise, peril and possibilities of artificial intelligence continue to capture the cultural and business zeitgeist worldwide. Hardly a conference or long-form interview can be held these days without a panelist or pundit commenting on the technology’s implications for their profession. Yet despite being the hottest topic in every circle, AI’s ultimate challenge isn’t technological but physical. After years of breathless speculation and prediction, the issue remains the same: AI needs more energy. Against this backdrop, the oil and gas industry faces a similarly fundamental challenge: a shifting production frontier and evolving path to continued growth. After a decade of efficiency-driven growth, the era of easy barrels is waning. Diamondback Energy CEO Travis Stice captured the new reality in a recent letter, warning of the increasingly dim prospects for expanding production amid geological constraints and rising costs. Other energy majors have issued similar cautions, a sharp departure from the boom years of the shale revolution when abundant, low-cost reserves, followed by shareholder-focused production, made the industry a market favorite. Now, with resource intensity rising, global volatility accelerating and economic conditions tightening, the industry is under pressure to find its next value horizon. That horizon may be converging with AI. The pairing makes increasing sense. While initially circling one another warily, major players in energy and technology have become increasingly intertwined. At major gatherings like CERAWeek, energy executives and tech leaders now share the same stage — and increasingly, the same strategic questions. How do we scale the infrastructure to match exponential AI growth? Who will supply the energy to power it? And how do we do so fast enough while dealing with rising environmental, social and regulatory concerns? These challenges come amid a stark reality: AI’s computational appetite isn’t just increasing —

Read More »

WTI Edges Up as OPEC+ Meeting Looms

Oil held steady as traders grappled with conflicting signals about how much US military strikes hobbled Iran’s nuclear program and whether Washington will continue to target Tehran’s oil flows. West Texas Intermediate edged up to settle near $65 a barrel while Brent closed little changed near $68. WTI had climbed as much as 2.3% earlier after the Financial Times reported that European capitals believe that Iran’s highly enriched uranium stockpile remains largely intact following US strikes. President Donald Trump, in a social media post, denied reports that Iran successfully moved nuclear material from its sites before the attacks. At the same time, an Iranian law to suspend cooperation with the UN nuclear watchdog came into effect. Prices eased off intraday highs after CNN reported that Washington has discussed offering incentives to restart talks with Iran, including possibly easing sanctions. At a news conference on Wednesday, Trump indicated that US financial penalties are doing little to stop China from buying Iran’s supplies, contradicting earlier comments that he’s “not giving up” on a strategy of targeting Tehran’s petrodollars. “If they’re going to sell oil, they’re going to sell oil,” Trump said. “China is going to want to buy oil. They can buy it from us. They can buy it from other people.” In another headwind, White House Press Secretary Karoline Leavitt told reporters that there are no imminent plans to refill the Strategic Petroleum Reserve. She also said there were no plans for nuclear talks between Washington and Tehran and reiterated that the US destroyed Iran’s nuclear capabilities. The developments highlight the fragility of a ceasefire between Israel and Iran, with traders still on edge over potential disruptions to energy supplies from the Middle East. Still, the large geopolitical risk premium in the market only a few sessions ago has mostly

Read More »

Carlyle, Diversified Energy to Jointly Invest Up To $2B in US PDP Assets

Diversified Energy Co. PLC and Carlyle have formed a partnership to invest up to $2 billion in proven developed producing (PDP) natural gas and oil assets across the United States. “This exclusive partnership will combine Carlyle’s deep credit and structuring expertise, led by Carlyle’s asset-backed finance (ABF) team, with Diversified’s market-leading operating capabilities and differentiated business model of acquiring and optimizing portfolios of existing long-life oil and gas assets to generate reliable production and consistent cash flow”, a joint statement said. “Under the terms of the agreement, Diversified will serve as the operator and servicer of the newly acquired assets”, the companies added. “As investments occur, Carlyle intends to pursue opportunities to securitize these assets, seeking to unlock long-term, resilient financing for this critical segment of the nation’s energy infrastructure”. Diversified Energy chief executive Rusty Hutson Jr said, “This arrangement significantly enhances our ability to pursue and scale strategic acquisitions in what we believe is a highly compelling environment for PDP asset consolidation”. “Diversified is a leading operator of long-life energy assets and a pioneer in bringing PDP securitizations to institutional markets”, commented Akhil Bansal, head of Carlyle ABF. “We are excited to bring institutional capital to high-quality, cash-yielding energy assets that are core to US domestic energy production and energy security. “This partnership underscores Carlyle’s ability to originate differentiated investment opportunities through proprietary sourcing channels and seek access to stable, yield-oriented energy exposure”. Carlyle ABF, part of Carlyle’s Global Credit platform, focuses on private fixed income and asset-backed investments. Carlyle ABF supports businesses, specialty finance companies, banks, asset managers and other originators and owners of diversified pools of assets. It has deployed around $8 billion since 2021 and has about $9 billion in assets under management as of the first quarter of 2025, according to Washington-based Carlyle. Birmingham, Alabama-based Diversified Energy

Read More »

Eight utility regulators challenge DOE order keeping Michigan coal plant open

Eight utility commissions in the Midcontinent Independent System Operator’s footprint are challenging the U.S. Department of Energy’s emergency order to keep a coal-fired power plant running in Michigan past its May 30 retirement date. DOE failed to show that emergency conditions exist in the MISO footprint warranting its May 23 order directing Consumers Energy to delay retiring the 1,560-MW J.H. Campbell power plant in West Olive, the Organization of MISO States said in a June 23 rehearing request to the department. In its decision ordering the Campbell plant to run through Aug. 21, DOE cited the North American Electric Reliability Corp.’s 2025 Summer Reliability Assessment as the main evidence that there is an emergency in MISO, according to the rehearing request. However, the OMS-MISO Resource Adequacy Survey, MISO’s 2025/2026 Planning Resource Auction, MISO’s summer readiness assessment and Consumers Energy’s plans do not show a regional reliability emergency, shortfall or an unmet reliability criterion that justifies reversing the approved power plant retirement, OMS said. MISO’s most recent capacity auction cleared above its reserve margin target for this summer, the utility commissions noted. The auction cleared with a 10.1% summer reserve margin — a buffer above expected power supply needs — for MISO’s north and central regions compared to a 7.9% reserve target, according to MISO. Also, NERC’s long-term reliability and seasonal assessments have limited value due to the inconsistent data collection methods between regional transmission organizations, unverified data inputs and doubtful evaluation metrics, according to the OMS commissions signing the rehearing request. “At their core, the NERC [Long Term Reliability Assessment] and seasonal assessments are undependable because they lack stakeholder input and verification,” the utility commissions said, noting they have been called into question in recent years. Most recently, MISO’s independent market monitor raised his concerns about NERC’s assessments at
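The reserve-margin figures at the center of this dispute are simple arithmetic: the margin is surplus capacity expressed as a share of expected peak demand. A small sketch follows; the peak-demand number is hypothetical, and only the 10.1% cleared margin and 7.9% target come from the article.

```python
# Reserve margin = (capacity - expected peak demand) / expected peak demand.
# ASSUMED_PEAK_MW is hypothetical; the 10.1% cleared margin and 7.9% target
# are the figures reported for MISO's north and central regions.
def reserve_margin(capacity_mw: float, peak_demand_mw: float) -> float:
    return (capacity_mw - peak_demand_mw) / peak_demand_mw

ASSUMED_PEAK_MW = 100_000.0
cleared_mw = ASSUMED_PEAK_MW * 1.101  # capacity implying the 10.1% margin

margin = reserve_margin(cleared_mw, ASSUMED_PEAK_MW)
target = 0.079
print(f"cleared margin {margin:.1%} vs target {target:.1%}: "
      f"{'above' if margin > target else 'below'} target")
```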

Read More »

Senate negotiating IRA tax credits, aims to vote on budget bill Friday

The Senate is planning for gentler cuts to the Inflation Reduction Act’s clean energy tax credits as it continues to negotiate its version of the ‘One Big Beautiful Bill’ budget legislation, with Republican leadership hoping to start voting on the bill by Friday, senators said Tuesday. Sen. Kevin Cramer, R-N.D., said the Senate’s cuts to the IRA may ultimately be “a little more generous” than the House’s cuts, Reuters reported. Those cuts sought to claw back funding by winding down most credits much earlier than originally laid out in the IRA. Cramer also said there is “work being done” to revise the cuts to the residential solar tax credit, or 25D, which was eliminated in both the final bill passed by the House of Representatives and the version of the bill passed out of the Senate Finance Committee. Senate Majority Leader John Thune, R-S.D., told Axios Tuesday that he expects the Senate will begin voting on the bill Friday and “grind it out until … whenever.” Congressional Republicans have set a self-imposed deadline to deliver final legislation to President Donald Trump by July 4. House Speaker Mike Johnson, R-La., warned House Republicans on Tuesday “not to leave town” in the coming days in case of a vote, Politico reported. Senate Parliamentarian Elizabeth MacDonough announced on Monday that multiple provisions included in the Senate Finance Committee’s version of the bill wouldn’t qualify for budget reconciliation under the Byrd Rule, and must be removed or they will trigger a 60-vote threshold requirement. Several of those provisions are energy-related, including one deeming offshore oil and gas projects “automatically compliant” with the National Environmental Policy Act, and one which would remove the Secretary of the Interior’s “discretion to reduce fees for solar and wind projects on Bureau of Land Management land,” according to Senate

Read More »

How risk-spend efficiency puts AI at the center of utility investment strategy

Adam Cornille is a senior director of advanced analytics at Logic20/20. Today’s utilities face a broad array of challenges — from aging infrastructure to escalating climate emergencies to rising costs — all while working with limited resources. Traditional methods of risk management, often driven by standard schedules and corrective maintenance, are no longer sufficient to meet the demands of the moment. As threats become more frequent, severe, and interconnected, utilities need a data-driven strategy for making effective, defensible investment decisions. Risk-spend efficiency, or RSE, has emerged as a standard for evaluating and prioritizing risk mitigation investments, allowing utilities to maximize the impact per dollar. For RSE to be effective, utilities must have a full and accurate understanding of the potential costs and impacts of mitigation efforts under consideration. This is where data and AI are critical to the process. AI enables utilities to integrate and analyze data from multiple facets of operations, delivering the dynamic risk profiles needed to yield informed investment insights. Having a full range of operational data included in RSE analysis allows it to accurately reflect real-world impacts of hardening, asset management, vegetation management and other risk reduction programs. RSE gives utilities a scalable tool that adapts to changing organizational needs and environmental risks. Delivering on the full potential of RSE supported by AI requires the right technical foundation — starting with integrated data pipelines and scalable platforms. Building the foundation AI-driven RSE requires two foundational elements: high-integrity data pipelines and modular, flexible platforms. The pipelines are essential for preparing and delivering varied data from diverse sources (asset specifications, weather forecasts, etc.), while the platforms make this data usable at scale to support comprehensive analysis and reliable outputs. Our team recently helped a large utility upgrade the foundation of its RSE program with modular data pipelines so
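Since RSE is, at bottom, a ratio of modeled risk reduction to dollars spent, a short sketch shows how candidate programs get scored and ranked by impact per dollar. The programs and numbers below are hypothetical, purely to illustrate the prioritization step.

```python
# Risk-spend efficiency (RSE) ranking sketch: score each candidate program
# by modeled risk reduction per dollar, then prioritize the highest ratios.
# All program names and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Mitigation:
    name: str
    risk_reduction: float  # modeled risk avoided, e.g. expected loss in $
    cost: float            # program cost in $

    @property
    def rse(self) -> float:
        return self.risk_reduction / self.cost

candidates = [
    Mitigation("Vegetation management, zone A", risk_reduction=4.2e6, cost=1.0e6),
    Mitigation("Covered conductor, feeder 12", risk_reduction=9.0e6, cost=3.5e6),
    Mitigation("Asset replacement, substation B", risk_reduction=2.0e6, cost=1.2e6),
]

# Highest impact per dollar first.
for m in sorted(candidates, key=lambda m: m.rse, reverse=True):
    print(f"{m.name}: RSE = {m.rse:.2f}")
```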

Read More »

Iraq Says USA Company Among Bidders as It Plans First LNG Imports

Iraq is in advanced talks with Excelerate Energy Inc. to facilitate its first-ever liquefied natural gas imports as the country, which struggles with power outages, seeks to cut dependence on Iran for supplies. Texas-based Excelerate is one of several companies to have submitted a bid to provide an LNG import ship and secure supplies of the fuel, said Ali Salman, acting director general for the state-run South Gas Co., which has been charged with overseeing the development of the project. An Excelerate representative declined to comment. Iraq’s push to buy LNG comes as the country halted electricity imports from neighboring Iran earlier this year after the US ended a sanction waiver ordered by President Donald Trump to tighten pressure on Tehran. Gas imports from Iran have since continued and weren’t affected by the US and Israeli strikes on Iran’s energy sites this month, although this highlighted the fragility of the supply route. Gas Supplies Iraq, OPEC’s second-biggest oil producer, doesn’t have enough gas to operate its mostly gas-fired power plants and suffers from crippling blackouts every summer when demand peaks. It’s also been trying to reduce the amount of wasteful gas flaring from its own fields, and has been looking at buying LNG for years as a way to fill the shortages. The US has urged Iraq to diversify the origins of its gas imports, and Baghdad signed an agreement with Turkmenistan last year for gas supplies. Iraq has been in talks with Qatar and others to find more solutions, Farhad Alaaldin, an adviser on foreign affairs to Iraq’s prime minister, said on March 12. Iraq receives Iranian natural gas from two pipelines, but flows have been interrupted several times in recent years. In 2023, Iran cut volumes in half because of unpaid bills, which Baghdad said arose due to US sanctions on Iran. Iraq’s

Read More »

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few relevant words in a search box and in return get a list of blue links to the most relevant results. Maybe some quick explanations up top. Maybe some maps or sports scores or a video. But fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in some sort of structured way.  But all that is up for grabs. We are at a new inflection point. The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.  Of course, Google—the company that has defined search for the past 25 years—is trying to be out front on this. In May of 2023, it began testing AI-generated responses to search queries, using its large language model (LLM) to deliver the kinds of answers you might expect from an expert source or trusted friend. It calls these AI Overviews. Google CEO Sundar Pichai described this to MIT Technology Review as “one of the most positive changes we’ve done to search in a long, long time.”
AI Overviews fundamentally change the kinds of queries Google can address. You can now ask it things like “I’m going to Japan for one week next month. I’ll be staying in Tokyo but would like to take some day trips. Are there any festivals happening nearby? How will the surfing be in Kamakura? Are there any good bands playing?” And you’ll get an answer—not just a link to Reddit, but a built-out answer with current results.  More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.
And it’s not just Google. OpenAI’s ChatGPT now has access to the web, making it far better at finding up-to-date answers to your queries. Microsoft released generative search results for Bing in September. Meta has its own version. The startup Perplexity was doing the same, but with a “move fast, break things” ethos. Literal trillions of dollars are at stake in the outcome as these players jockey to become the next go-to source for information retrieval—the next Google. Not everyone is excited for the change. Publishers are completely freaked out. The shift has heightened fears of a “zero-click” future, where search referral traffic—a mainstay of the web since before Google existed—vanishes from the scene.

I got a vision of that future last June, when I got a push alert from the Perplexity app on my phone. Perplexity is a startup trying to reinvent web search. But in addition to delivering deep answers to queries, it will create entire articles about the news of the day, cobbled together by AI from different sources.

On that day, it pushed me a story about a new drone company from Eric Schmidt. I recognized the story. Forbes had reported it exclusively, earlier in the week, but it had been locked behind a paywall. The image on Perplexity’s story looked identical to one from Forbes. The language and structure were quite similar. It was effectively the same story, but freely available to anyone on the internet. I texted a friend who had edited the original story to ask if Forbes had a deal with the startup to republish its content. But there was no deal. He was shocked and furious and, well, perplexed. He wasn’t alone. Forbes, the New York Times, and Condé Nast have now all sent the company cease-and-desist orders. News Corp is suing for damages.

It was precisely the nightmare scenario publishers have been so afraid of: The AI was hoovering up their premium content, repackaging it, and promoting it to its audience in a way that didn’t really leave any reason to click through to the original. In fact, on Perplexity’s About page, the first reason it lists to choose the search engine is “Skip the links.” But this isn’t just about publishers (or my own self-interest).

People are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Language models have a tendency to make stuff up—they can hallucinate nonsense. Moreover, generative AI can serve up an entirely new answer to the same question every time, or provide different answers to different people on the basis of what it knows about them. It could spell the end of the canonical answer. But make no mistake: This is the future of search. Try it for a bit yourself, and you’ll see.

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.

Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?

In the beginning there was Archie. It was the first real internet search engine, and it crawled files previously hidden in the darkness of remote servers. It didn’t tell you what was in those files—just their names. It didn’t preview images; it didn’t have a hierarchy of results, or even much of an interface. But it was a start. And it was pretty good.

Then Tim Berners-Lee created the World Wide Web, and all manner of web pages sprang forth. The Mosaic home page and the Internet Movie Database and Geocities and the Hampster Dance and web rings and Salon and eBay and CNN and federal government sites and some guy’s home page in Turkey. Until finally, there was too much web to even know where to start. We really needed a better way to navigate our way around, to actually find the things we needed.

And so in 1994 Jerry Yang created Yahoo, a hierarchical directory of websites. It quickly became the home page for millions of people. And it was … well, it was okay. TBH, and with the benefit of hindsight, I think we all thought it was much better back then than it actually was.

But the web continued to grow and sprawl and expand, every day bringing more information online. Rather than just a list of sites by category, we needed something that actually looked at all that content and indexed it. By the late ’90s that meant choosing from a variety of search engines: AltaVista and AlltheWeb and WebCrawler and HotBot. And they were good—a huge improvement. At least at first.

But alongside the rise of search engines came the first attempts to exploit their ability to deliver traffic. Precious, valuable traffic, which web publishers rely on to sell ads and retailers use to get eyeballs on their goods. Sometimes this meant stuffing pages with keywords or nonsense text designed purely to push pages higher up in search results. It got pretty bad.
And then came Google. It’s hard to overstate how revolutionary Google was when it launched in 1998. Rather than just scanning the content, it also looked at the sources linking to a website, which helped evaluate its relevance. To oversimplify: The more something was cited elsewhere, the more reliable Google considered it, and the higher it would appear in results. This breakthrough made Google radically better at retrieving relevant results than anything that had come before. It was amazing.

For 25 years, Google dominated search. Google was search, for most people. (The extent of that domination is currently the subject of multiple legal probes in the United States and the European Union.)
But Google has long been moving away from simply serving up a series of blue links, notes Pandu Nayak, Google’s chief scientist for search.  “It’s not just so-called web results, but there are images and videos, and special things for news. There have been direct answers, dictionary answers, sports, answers that come with Knowledge Graph, things like featured snippets,” he says, rattling off a litany of Google’s steps over the years to answer questions more directly.  It’s true: Google has evolved over time, becoming more and more of an answer portal. It has added tools that allow people to just get an answer—the live score to a game, the hours a café is open, or a snippet from the FDA’s website—rather than being pointed to a website where the answer may be.  But once you’ve used AI Overviews a bit, you realize they are different.  Take featured snippets, the passages Google sometimes chooses to highlight and show atop the results themselves. Those words are quoted directly from an original source. The same is true of knowledge panels, which are generated from information stored in a range of public databases and Google’s Knowledge Graph, its database of trillions of facts about the world. While these can be inaccurate, the information source is knowable (and fixable). It’s in a database. You can look it up. Not anymore: AI Overviews can be entirely new every time, generated on the fly by a language model’s predictive text combined with an index of the web. 
“I think it’s an exciting moment where we have obviously indexed the world. We built deep understanding on top of it with Knowledge Graph. We’ve been using LLMs and generative AI to improve our understanding of all that,” Pichai told MIT Technology Review. “But now we are able to generate and compose with that.” The result feels less like querying a database than like asking a very smart, well-read friend. (With the caveat that the friend will sometimes make things up if she does not know the answer.)

“[The company’s] mission is organizing the world’s information,” Liz Reid, Google’s head of search, tells me from its headquarters in Mountain View, California. “But actually, for a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you.”

That second concept—accessibility—is what Google is really keying in on with AI Overviews. It’s a sentiment I hear echoed repeatedly while talking to Google execs: They can address more complicated types of queries more efficiently by bringing in a language model to help supply the answers. And they can do it in natural language.
That will become even more important for a future where search goes beyond text queries. For example, Google Lens, which lets people take a picture or upload an image to find out more about something, uses AI-generated answers to tell you what you may be looking at. Google has even shown off the ability to query live video. “We are definitely at the start of a journey where people are going to be able to ask, and get answered, much more complex questions than where we’ve been in the past decade,” says Pichai.

There are some real hazards here. First and foremost: Large language models will lie to you. They hallucinate. They get shit wrong. When it doesn’t have an answer, an AI model can blithely and confidently spew back a response anyway. For Google, which has built its reputation over the past 20 years on reliability, this could be a real problem. For the rest of us, it could actually be dangerous.

In May 2024, AI Overviews were rolled out to everyone in the US. Things didn’t go well. Google, long the world’s reference desk, told people to eat rocks and to put glue on their pizza. These answers were mostly in response to what the company calls adversarial queries—those designed to trip it up. But still. It didn’t look good. The company quickly went to work fixing the problems—for example, by deprecating so-called user-generated content from sites like Reddit, where some of the weirder answers had come from.

Yet while its errors telling people to eat rocks got all the attention, the more pernicious danger might arise when it gets something less obviously wrong. For example, in doing research for this article, I asked Google when MIT Technology Review went online. It helpfully responded that “MIT Technology Review launched its online presence in late 2022.” This was clearly wrong to me, but for someone completely unfamiliar with the publication, would the error leap out? I came across several examples like this, both in Google and in OpenAI’s ChatGPT search. Stuff that’s just far enough off the mark not to be immediately seen as wrong.

Google is banking that it can continue to improve these results over time by relying on what it knows about quality sources. “When we produce AI Overviews,” says Nayak, “we look for corroborating information from the search results, and the search results themselves are designed to be from these reliable sources whenever possible. These are some of the mechanisms we have in place that assure that if you just consume the AI Overview, and you don’t want to look further … we hope that you will still get a reliable, trustworthy answer.” In the case above, the 2022 answer seemingly came from a reliable source—a story about MIT Technology Review’s email newsletters, which launched in 2022. But the machine fundamentally misunderstood.

This is one of the reasons Google uses human beings—raters—to evaluate the results it delivers for accuracy. Ratings don’t correct or control individual AI Overviews; rather, they help train the model to build better answers. But human raters can be fallible. Google is working on that too. “Raters who look at your experiments may not notice the hallucination because it feels sort of natural,” says Nayak.
“And so you have to really work at the evaluation setup to make sure that when there is a hallucination, someone’s able to point out and say, That’s a problem.”

The new search

Google has rolled out its AI Overviews to upwards of a billion people in more than 100 countries, but it is facing upstarts with new ideas about how search should work.

Google: The search giant has added AI Overviews to search results. These overviews take information from around the web and Google’s Knowledge Graph and use the company’s Gemini language model to create answers to search queries. What it’s good at: giving an easily digestible summary in response to even the most complex queries, with sourcing boxes adjacent to the answers. Among the major options, its deep web index feels the most “internety.” But web publishers fear its summaries will give people little reason to click through to the source material.

Perplexity: A conversational search engine that uses third-party large language models from OpenAI and Anthropic to answer queries. What it’s good at: putting together deeper dives in response to user queries, producing answers that are like mini white papers on complex topics. It’s also excellent at summing up current events. But it has gotten a bad rep with publishers, who say it plays fast and loose with their content.

ChatGPT: While Google brought AI to search, OpenAI brought search to ChatGPT. Queries that the model determines will benefit from a web search automatically trigger one, or users can manually select the option to add a web search. What it’s good at: thanks to its ability to preserve context across a conversation, ChatGPT works well for performing searches that benefit from follow-up questions—like planning a vacation through multiple search sessions. OpenAI says users sometimes go “20 turns deep” in researching queries. Of these three, it makes links out to publishers least prominent.

When I talked to Pichai about this, he expressed optimism about the company’s ability to maintain accuracy even with the LLM generating responses. That’s because AI Overviews is based on Google’s flagship large language model, Gemini, but also draws from Knowledge Graph and what it considers reputable sources around the web. “You’re always dealing in percentages. What we have done is deliver it at, like, what I would call a few nines of trust and factuality and quality. I’d say 99-point-few-nines. I think that’s the bar we operate at, and it is true with AI Overviews too,” he says. “And so the question is, are we able to do this again at scale? And I think we are.”

There’s another hazard as well, though, which is that people ask Google all sorts of weird things. If you want to know someone’s darkest secrets, look at their search history. Sometimes the things people ask Google about are extremely dark. Sometimes they are illegal. Google doesn’t just have to be able to deploy its AI Overviews when an answer can be helpful; it has to be extremely careful not to deploy them when an answer may be harmful. “If you go and say ‘How do I build a bomb?’ it’s fine that there are web results. It’s the open web. You can access anything,” Reid says. “But we do not need to have an AI Overview that tells you how to build a bomb, right? We just don’t think that’s worth it.”

But perhaps the greatest hazard—or biggest unknown—is for anyone downstream of a Google search. Take publishers, who for decades now have relied on search queries to send people their way.
What reason will people have to click through to the original source, if all the information they seek is right there in the search result?

Rand Fishkin, cofounder of the market research firm SparkToro, publishes research on so-called zero-click searches. As Google has moved increasingly into the answer business, the proportion of searches that end without a click has gone up and up. His sense is that AI Overviews are going to explode this trend. “If you are reliant on Google for traffic, and that traffic is what drove your business forward, you are in long- and short-term trouble,” he says.

Don’t panic, is Pichai’s message. He argues that even in the age of AI Overviews, people will still want to click through and go deeper for many types of searches. “The underlying principle is people are coming looking for information. They’re not looking for Google always to just answer,” he says. “Sometimes yes, but the vast majority of the times, you’re looking at it as a jumping-off point.” Reid, meanwhile, argues that because AI Overviews allow people to ask more complicated questions and drill down further into what they want, they could even be helpful to some types of publishers and small businesses, especially those operating in the niches: “You essentially reach new audiences, because people can now express what they want more specifically, and so somebody who specializes doesn’t have to rank for the generic query.”

“I’m going to start with something risky,” Nick Turley tells me from the confines of a Zoom window. Turley is the head of product for ChatGPT, and he’s showing off OpenAI’s new web search tool a few weeks before it launches. “I should normally try this beforehand, but I’m just gonna search for you,” he says. “This is always a high-risk demo to do, because people tend to be particular about what is said about them on the internet.” He types my name into a search field, and the prototype search engine spits back a few sentences, almost like a speaker bio. It correctly identifies me and my current role. It even highlights a particular story I wrote years ago that was probably my best known. In short, it’s the right answer. Phew?

A few weeks after our call, OpenAI incorporated search into ChatGPT, supplementing answers from its language model with information from across the web. If the model thinks a response would benefit from up-to-date information, it will automatically run a web search (OpenAI won’t say who its search partners are) and incorporate those responses into its answer, with links out if you want to learn more. You can also opt to manually force it to search the web if it does not do so on its own. OpenAI won’t reveal how many people are using its web search, but it says some 250 million people use ChatGPT weekly, all of whom are potentially exposed to it. According to Fishkin, these newer forms of AI-assisted search aren’t yet challenging Google’s search dominance. “It does not appear to be cannibalizing classic forms of web search,” he says. OpenAI insists it’s not really trying to compete on search—although frankly this seems to me like a bit of expectation setting.
Rather, it says, web search is mostly a means to get more current information than the data its models were trained on, which has specific cutoff dates that are often months, or even a year or more, in the past. As a result, while ChatGPT may be great at explaining how a West Coast offense works, it has long been useless at telling you what the latest 49ers score is. No more.

“I come at it from the perspective of ‘How can we make ChatGPT able to answer every question that you have? How can we make it more useful to you on a daily basis?’ And that’s where search comes in for us,” Kevin Weil, the chief product officer with OpenAI, tells me. “There’s an incredible amount of content on the web. There are a lot of things happening in real time. You want ChatGPT to be able to use that to improve its answers and to be able to be a better super-assistant for you.”

Today ChatGPT is able to generate responses for very current news events, as well as near-real-time information on things like stock prices. And while ChatGPT’s interface has long been, well, boring, search results bring in all sorts of multimedia—images, graphs, even video. It’s a very different experience.

Weil also argues that ChatGPT has more freedom to innovate and go its own way than competitors like Google—even more than its partner Microsoft does with Bing. Both of those are ad-dependent businesses. OpenAI is not. (At least not yet.) It earns revenue from the developers, businesses, and individuals who use it directly. It’s mostly setting large amounts of money on fire right now—it’s projected to lose $14 billion in 2026, by some reports. But one thing it doesn’t have to worry about is putting ads in its search results as Google does.

“For a while what we did was organize web pages. Which is not really the same thing as organizing the world’s information or making it truly useful and accessible to you,” says Google head of search Liz Reid.

Like Google, ChatGPT is pulling in information from web publishers, summarizing it, and including it in its answers. But it has also struck financial deals with publishers, a payment for providing the information that gets rolled into its results. (MIT Technology Review has been in discussions with OpenAI, Google, Perplexity, and others about publisher deals but has not entered into any agreements. Editorial was neither party to nor informed about the content of those discussions.)

But the thing is, for web search to accomplish what OpenAI wants—to be more current than the language model—it also has to bring in information from all sorts of publishers and sources that it doesn’t have deals with. OpenAI’s head of media partnerships, Varun Shetty, told MIT Technology Review that it won’t give preferential treatment to its publishing partners. Instead, OpenAI told me, the model itself finds the most trustworthy and useful source for any given question. And that can get weird too.

In that very first example it showed me—when Turley ran that name search—it described a story I wrote years ago for Wired about being hacked. That story remains one of the most widely read I’ve ever written. But ChatGPT didn’t link to it. It linked to a short rewrite from The Verge. Admittedly, this was on a prototype version of search, which was, as Turley said, “risky.”

When I asked him about it, he couldn’t really explain why the model chose the sources that it did, because the model itself makes that evaluation.
The company helps steer it by identifying—sometimes with the help of users—what it considers better answers, but the model actually selects them.

“And in many cases, it gets it wrong, which is why we have work to do,” said Turley. “Having a model in the loop is a very, very different mechanism than how a search engine worked in the past.”

Indeed!

The model, whether it’s OpenAI’s GPT-4o or Google’s Gemini or Anthropic’s Claude, can be very, very good at explaining things. But the rationale behind its explanations, its reasons for selecting a particular source, and even the language it may use in an answer are all pretty mysterious. Sure, a model can explain very many things, but not when it comes to its own answers.

It was almost a decade ago, in 2016, when Pichai wrote that Google was moving from “mobile first” to “AI first”: “But in the next 10 years, we will shift to a world that is AI-first, a world where computing becomes universally available—be it at home, at work, in the car, or on the go—and interacting with all of these surfaces becomes much more natural and intuitive, and above all, more intelligent.”

We’re there now—sort of. And it’s a weird place to be. It’s going to get weirder. That’s especially true as these things we now think of as distinct—querying a search engine, prompting a model, looking for a photo we’ve taken, deciding what we want to read or watch or hear, asking for a photo we wish we’d taken, and didn’t, but would still like to see—begin to merge.

The search results we see from generative AI are best understood as a waypoint rather than a destination. What’s most important may not be search in itself; rather, it’s that search has given AI model developers a path to incorporating real-time information into their inputs and outputs. And that opens up all sorts of possibilities.

“A ChatGPT that can understand and access the web won’t just be about summarizing results. It might be about doing things for you. And I think there’s a fairly exciting future there,” says OpenAI’s Weil. “You can imagine having the model book you a flight, or order DoorDash, or just accomplish general tasks for you in the future. It’s just once the model understands how to use the internet, the sky’s the limit.”

This is the agentic future we’ve been hearing about for some time now, and the more AI models make use of real-time data from the internet, the closer it gets.

Let’s say you have a trip coming up in a few weeks. An agent that can get data from the internet in real time can book your flights and hotel rooms, make dinner reservations, and more, based on what it knows about you and your upcoming travel—all without your having to guide it. Another agent could, say, monitor the sewage output of your home for certain diseases, and order tests and treatments in response. You won’t have to search for that weird noise your car is making, because the agent in your vehicle will already have done it and made an appointment to get the issue fixed.

“It’s not always going to be just doing search and giving answers,” says Pichai. “Sometimes it’s going to be actions. Sometimes you’ll be interacting within the real world. So there is a notion of universal assistance through it all.”

And the ways these things will be able to deliver answers are evolving rapidly now too. For example, today Google can not only search text, images, and even video; it can create them. Imagine overlaying that ability with search across an array of formats and devices.
“Show me what a Townsend’s warbler looks like in the tree in front of me.” Or “Use my existing family photos and videos to create a movie trailer of our upcoming vacation to Puerto Rico next year, making sure we visit all the best restaurants and top landmarks.”

“We have primarily done it on the input side,” he says, referring to the ways Google can now search for an image or within a video. “But you can imagine it on the output side too.”

This is the kind of future Pichai says he is excited to bring online. Google has already shown off a bit of what that might look like with NotebookLM, a tool that lets you upload large amounts of text and have it converted into a chatty podcast. He imagines this type of functionality—the ability to take one type of input and convert it into a variety of outputs—transforming the way we interact with information.

In a demonstration of a tool called Project Astra this summer at its developer conference, Google showed one version of this outcome, where cameras and microphones in phones and smart glasses understand the context all around you—online and off, audible and visual—and have the ability to recall and respond in a variety of ways. Astra can, for example, look at a crude drawing of a Formula One race car and not only identify it, but also explain its various parts and their uses.

But you can imagine things going a bit further (and they will). Let’s say I want to see a video of how to fix something on my bike. The video doesn’t exist, but the information does. AI-assisted generative search could theoretically find that information somewhere online—in a user manual buried in a company’s website, for example—and create a video to show me exactly how to do what I want, just as it could explain that to me with words today.

These are the kinds of things that start to happen when you put the entire compendium of human knowledge—knowledge that’s previously been captured in silos of language and format; maps and business registrations and product SKUs; audio and video and databases of numbers and old books and images and, really, anything ever published, ever tracked, ever recorded; things happening right now, everywhere—and introduce a model into all that. A model that maybe can’t understand, precisely, but has the ability to put that information together, rearrange it, and spit it back in a variety of different, hopefully helpful, ways. Ways that a mere index could not.

That’s what we’re on the cusp of, and what we’re starting to see. And as Google rolls this out to a billion people, many of whom will be interacting with a conversational AI for the first time, what will that mean? What will we do differently? It’s all changing so quickly. Hang on, just hang on.

Read More »

Subsea7 Scores Various Contracts Globally

Subsea 7 S.A. has secured what it calls a “sizeable” contract from Turkish Petroleum Offshore Technology Center AS (TP-OTC) to provide inspection, repair and maintenance (IRM) services for the Sakarya gas field development in the Black Sea. The contract scope includes project management and engineering executed and managed from Subsea7 offices in Istanbul, Türkiye, and Aberdeen, Scotland. The scope also includes the provision of equipment, including two work class remotely operated vehicles, and construction personnel onboard TP-OTC’s light construction vessel Mukavemet, Subsea7 said in a news release. The company defines a sizeable contract as having a value between $50 million and $150 million. Offshore operations will be executed in 2025 and 2026, Subsea7 said. Hani El Kurd, Senior Vice President of UK and Global Inspection, Repair, and Maintenance at Subsea7, said: “We are pleased to have been selected to deliver IRM services for TP-OTC in the Black Sea. This contract demonstrates our strategy to deliver engineering solutions across the full asset lifecycle in close collaboration with our clients. We look forward to continuing to work alongside TP-OTC to optimize gas production from the Sakarya field and strengthen our long-term presence in Türkiye”.

North Sea Project

Subsea7 also announced the award of a “substantial” contract by Inch Cape Offshore Limited to Seaway7, which is part of the Subsea7 Group. The contract is for the transport and installation of pin-pile jacket foundations and transition pieces for the Inch Cape Offshore Wind Farm. The 1.1-gigawatt Inch Cape project offshore site is located in the Scottish North Sea, 9.3 miles (15 kilometers) off the Angus coast, and will comprise 72 wind turbine generators. Seaway7’s scope of work includes the transport and installation of 18 pin-pile jacket foundations and 54 transition pieces with offshore works expected to begin in 2026, according to a separate news

Read More »

Driving into the future

Welcome to our annual breakthroughs issue. If you’re an MIT Technology Review superfan, you may already know that putting together our 10 Breakthrough Technologies (TR10) list is one of my favorite things we do as a publication. We spend months researching and discussing which technologies will make the list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more.

We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. When you look back over the years, you’ll find items like natural-language processing (2001), wireless power (2008), and reusable rockets (2016)—spot-on in terms of horizon scanning. You’ll also see the occasional miss, or moments when maybe we were a little bit too far ahead of ourselves. (See our Magic Leap entry from 2015.)

But the real secret of the TR10 is what we leave off the list. It is hard to think of another industry, aside from maybe entertainment, that has as much of a hype machine behind it as tech does. Which means that being too conservative is rarely the wrong call. But it does happen.

Last year, for example, we were going to include robotaxis on the TR10. Autonomous vehicles have been around for years, but 2023 seemed like a real breakthrough moment; both Cruise and Waymo were ferrying paying customers around various cities, with big expansion plans on the horizon. And then, last fall, after a series of mishaps (including an incident when a pedestrian was caught under a vehicle and dragged), Cruise pulled its entire fleet of robotaxis from service. Yikes.
The timing was pretty miserable, as we were in the process of putting some of the finishing touches on the issue. I made the decision to pull it. That was a mistake.  What followed turned out to be a banner year for the robotaxi. Waymo, which had previously been available only to a select group of beta testers, opened its service to the general public in San Francisco and Los Angeles in 2024. Its cars are now ubiquitous in the City by the Bay, where they have not only become a real competitor to the likes of Uber and Lyft but even created something of a tourist attraction. Which is no wonder, because riding in one is delightful. They are still novel enough to make it feel like a kind of magic. And as you can read, Waymo is just a part of this amazing story. 
The item we swapped into the robotaxi’s place was the Apple Vision Pro, an example of both a hit and a miss. We’d included it because it is truly a revolutionary piece of hardware, and we zeroed in on its micro-OLED display. Yet a year later, it has seemingly failed to find a market fit, and its sales are reported to be far below what Apple predicted. I’ve been covering this field for well over a decade, and I would still argue that the Vision Pro (unlike the Magic Leap vaporware of 2015) is a breakthrough device. But it clearly did not have a breakthrough year. Mea culpa.  Having said all that, I think we have an incredible and thought-provoking list for you this year—from a new astronomical observatory that will allow us to peer into the fourth dimension to new ways of searching the internet to, well, robotaxis. I hope there’s something here for everyone.

Read More »

Oil Holds at Highest Levels Since October

Crude oil futures slightly retreated but continue to hold at their highest levels since October, supported by colder weather in the Northern Hemisphere and China’s economic stimulus measures. That’s what George Pavel, General Manager at Naga.com Middle East, said in a market analysis sent to Rigzone this morning, adding that Brent and WTI crude “both saw modest declines, yet the outlook remains bullish as colder temperatures are expected to increase demand for heating oil”.

“Beijing’s fiscal stimulus aims to rejuvenate economic activity and consumer demand, further contributing to fuel consumption expectations,” Pavel said in the analysis. “This economic support from China could help sustain global demand for crude, providing upward pressure on prices,” he added.

Looking at supply, Pavel noted in the analysis that “concerns are mounting over potential declines in Iranian oil production due to anticipated sanctions and policy changes under the incoming U.S. administration”. “Forecasts point to a reduction of 300,000 barrels per day in Iranian output by the second quarter of 2025, which would weigh on global supply and further support prices,” he said. “Moreover, the U.S. oil rig count has decreased, indicating a potential slowdown in future output,” he added.

“With supply-side constraints contributing to tightening global inventories, this situation is likely to reinforce the current market optimism, supporting crude prices at elevated levels,” Pavel continued. “Combined with the growing demand driven by weather and economic factors, these supply dynamics point to a favorable environment for oil prices in the near term,” Pavel went on to state.

Rigzone has contacted the Trump transition team and the Iranian ministry of foreign affairs for comment on Pavel’s analysis. At the time of writing, neither had responded to Rigzone’s request. In a separate market analysis sent to Rigzone earlier this morning, Antonio Di Giacomo, Senior Market Analyst at

Read More »

What to expect from NaaS in 2025

Shamus McGillicuddy, vice president of research at EMA, says that network execs today have a fuller understanding of the potential benefits of NaaS, beyond simply a different payment model. NaaS can deliver access to new technologies faster and keep enterprises up to date as technologies evolve over time; it can help mitigate skills gaps for organizations facing a shortage of networking talent. For example, in a retail scenario, an organization can offload deployment and management of its Wi-Fi networks at all of its stores to a NaaS vendor, freeing up IT staffers for higher-level activities. Also, it can help organizations manage rapidly fluctuating demands on the network, he says.

2. Frameworks help drive adoption

Industry standards can help accelerate the adoption of new technologies. MEF, a nonprofit industry forum, has developed a framework that combines standardized service definitions, extensive automation frameworks, security certifications, and multi-cloud integration capabilities—all aimed at enabling service providers to deliver what MEF calls a true cloud experience for network services. The blueprint serves as a guide for building an automated, federated ecosystem where enterprises can easily consume NaaS services from providers. It details the APIs, service definitions, and certification programs that MEF has developed to enable this vision. The four components of NaaS, according to the blueprint, are on-demand automated transport services, SD-WAN overlays and network slicing for application assurance, SASE-based security, and multi-cloud on-ramps (see the sketch after this excerpt).

3. The rise of campus/LAN NaaS

Until very recently, the most popular use cases for NaaS were on-demand WAN connectivity, multi-cloud connectivity, SD-WAN, and SASE. However, campus/LAN NaaS, which includes both wired and wireless networks, has emerged as the breakout star in the overall NaaS market. Dell’Oro Group analyst Sian Morgan predicts: “In 2025, Campus NaaS revenues will grow over eight times faster than the overall LAN market. Startups offering purpose-built CNaaS technology will
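To make the blueprint’s “on-demand automated transport services” and “multi-cloud on-ramps” components concrete, here is a minimal sketch of what programmatically ordering connectivity through a NaaS provider might look like. The endpoint, payload fields, and site identifiers are hypothetical illustrations, not MEF-defined APIs:

```python
import requests

# Hypothetical NaaS provider endpoint -- illustrative only, not a MEF-defined API.
NAAS_API = "https://naas.example-provider.com/v1/transport-orders"

order = {
    "service_type": "ethernet-transport",   # on-demand automated transport service
    "a_end": "site-london-01",              # hypothetical enterprise site ID
    "z_end": "cloud-onramp-aws-eu-west-2",  # hypothetical multi-cloud on-ramp
    "bandwidth_mbps": 500,
    "term_days": 30,                        # short, consumption-style commitment
}

resp = requests.post(NAAS_API, json=order, timeout=30)
resp.raise_for_status()
print("Order accepted, status:", resp.json().get("status"))
```

The point of the federated-ecosystem vision is that an order like this, placed through a standard API, replaces a weeks-long manual provisioning cycle.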

Read More »

UK battery storage industry ‘back on track’

UK battery storage investor Gresham House Energy Storage Fund (LON:GRID) has said the industry is “back on track” as trading conditions improved, particularly in December. The UK’s largest fund specialising in battery energy storage systems (BESS) highlighted improvements in service by the UK government’s National Energy System Operator (NESO) as well as its renewed commitment to the sector as part of clean power aims by 2030. It also revealed that revenues exceeding £60,000 per MW from the electricity its facilities provided in the second half of 2024 meant it would meet or even exceed revenue targets. This comes after the fund said it had faced a “weak revenue environment” in the first part of the year. In April it reported a £110 million loss compared to a £217m profit the previous year and paused dividends. Fund manager Ben Guest said the organisation was “working hard” on refinancing and a plan to “re-instate dividend payments”. In a further update, the fund said its 40MW BESS project at Shilton Lane, 11 miles from Glasgow, was fully built and in the final stages of the NESO compliance process, which was expected to complete in February 2025. Fund chair John Leggate welcomed “solid progress” in the company’s performance, “as well as improvements in NESO’s control room, and commitment to further change, that should see BESS increasingly well utilised”. He added: “We thank our shareholders for their patience as the battery storage industry gets back on track with the most environmentally appropriate and economically competitive energy storage technology (Li-ion) being properly prioritised. “Alongside NESO’s backing of BESS, it is encouraging to see the government’s endorsement of a level playing field for battery storage – the only proven, commercially viable technology that can dynamically manage renewable intermittency at national scale.” Guest, who in addition to managing the fund is also

Read More »

Get paid faster: How Intuit’s new AI agents help businesses get funds up to 5 days faster and save 12 hours a month with autonomous workflows

Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more

Intuit has been on a journey over the last several years with generative AI, incorporating the technology as part of its services at QuickBooks, Credit Karma, TurboTax and Mailchimp. Today the company is taking the next step with a series of AI agents that go beyond that to transform how small and mid-market businesses operate. These new agents work as a virtual team that automates workflows and provides real-time business insights. They include capabilities for payments, accounts and finance that will directly impact business operations. According to Intuit, customers save up to 12 hours per month and, on average, will get paid up to five days faster thanks to the new agents.

“If you look at the trajectory of our AI experiences at Intuit in the early years, AI was built into the background, and with Intuit Assist, you saw a shift to provide information back to the customer,” Ashok Srivastava, chief data officer at Intuit, told VentureBeat. “Now what you’re seeing is a complete redesign. The agents are actually doing work on behalf of the customer, with their permission.”

Technical architecture: From starter kit to production agents

Intuit has been working on the path from assistants to agentic AI for some time. In September 2024, the company detailed its plans to use AI to automate complex tasks. It’s an approach built firmly on the company’s generative AI operating system (GenOS) platform, the foundation of its AI efforts. Earlier this month, Intuit announced a series of efforts that further extend its capabilities. The company has developed its own prompt optimization service that will optimize queries for any large language model (LLM). It has also developed what it calls

Read More »

The Download: Google DeepMind’s DNA AI, and heatwaves’ impact on the grid

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Google’s new AI will help researchers understand how our genes work

When scientists first sequenced the human genome in 2003, they revealed the full set of DNA instructions that make a person. But we still didn’t know what all those 3 billion genetic letters actually do.

Now Google’s DeepMind division says it’s made a leap in trying to understand the code with AlphaGenome, an AI model that predicts what effects small changes in DNA will have on an array of molecular processes, such as whether a gene’s activity will go up or down.

It’s just the sort of question biologists regularly assess in lab experiments, and is an attempt to further smooth biologists’ work by answering basic questions about how changing DNA letters alters gene activity and, eventually, how genetic mutations affect our health. Read the full story.

—Antonio Regalado
It’s officially summer, and the grid is stressed
It’s crunch time for the grid this week. Large swaths of the US have reached or exceeded record-breaking temperatures. Spain recently went through a dramatic heat wave too, as did the UK, which is bracing for another one soon.

We rely on electricity to keep ourselves comfortable, and more to the point, safe. These are the moments we design the grid for: when need is at its very highest. The key to keeping everything running smoothly during these times might be just a little bit of flexibility. But demand for electricity from major grids is already peaking, and that’s a good reason to be a little nervous. Read the full story.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

MIT Technology Review Narrated: How did China come to dominate the world of electric cars?

From generous government subsidies to support for lithium batteries, here are the keys to understanding how China managed to build a world-leading industry in electric vehicles.

This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

Inside OpenAI’s empire with Karen Hao

Journalist Karen Hao’s newly released book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, tells the story of OpenAI’s rise to power and its far-reaching impact all over the world.

Hao, a former MIT Technology Review senior editor, will join our executive editor Niall Firth in an intimate subscriber-exclusive Roundtable conversation exploring the AI arms race, what it means for all of us, and where it’s headed. Register here to join us at 9am ET on Monday, June 30. Special giveaway: Attendees will have the chance to receive a free copy of Hao’s book. See registration form for details.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Meta has won an AI copyright case against authors
The judge said the authors hadn’t presented enough evidence to back up their case. (TechCrunch)
+ It’s not an entirely decisive victory for Meta, though. (Wired $)
+ It’s the second lawsuit in favor of AI giants this week. (Insider $)

2 The US will stop contributing towards a global vaccine alliance
RFK Jr made unsubstantiated claims about Gavi’s safety record. (WP $)
+ Kennedy’s newly-assembled vaccine panel is reviewing its guidelines for children. (Vox)
+ Experts are worried the once-influential panel will cause irreparable harm. (Ars Technica)
+ How measuring vaccine hesitancy could help health professionals tackle it. (MIT Technology Review)

3 Jeff Bezos is cozying up to Donald Trump
If the Trump administration happens to need a new space company, he’s ready and willing to supply it. (WSJ $)
+ Meanwhile, a private astronaut mission is on its way to the ISS. (CNN)
4 Taiwan is working on suicide drones to defend itself from China
The country is taking a leaf out of Ukraine’s defense book. (FT $)
+ This giant microwave may change the future of war. (MIT Technology Review)

5 Biohackers are feeling emboldened by the Trump administration
They welcome lower barriers to entry for their unorthodox treatments. (Wired $)
+ The first US hub for experimental medical treatments is coming. (MIT Technology Review)
6 A UK cyberattack on a health firm contributed to a patient’s death
The ransomware attack disrupted blood services at London hospitals. (BBC)
+ A Russian hacking gang is to blame for the incident. (Bloomberg $)

7 Take a look inside Amazon’s colossal new data center
Four construction teams are working around the clock to finish it. (NYT $)
+ Generating video is the most energy-intensive AI prompt. (WSJ $)
+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)

8 The debate around dark energy is intensifying
New research suggests it evolves over time. But not everyone agrees. (Undark)

9 Trump Mobile is no longer claiming to be ‘made in the USA’
It’s now “designed with American values in mind” instead. (Ars Technica)

10 It’s official: The Social Network is getting a sequel
Zuck goes MAGA? (Deadline $)
Quote of the day

“By training generative AI models with copyrighted works, companies are creating something that often will dramatically undermine the market for those works, and thus dramatically undermine the incentive for human beings to create things the old-fashioned way.”

—US district judge Vince Chhabria, who presided over a copyright lawsuit brought against Meta by a group of authors, warns of the implications of the company’s actions, the Guardian reports.
One more thing

Beyond gene-edited babies: the possible paths for tinkering with human evolution

Editing human embryos is restricted in much of the world—and making an edited baby is fully illegal in most countries surveyed by legal scholars. But advancing technology could render the embryo issue moot. New ways of adding CRISPR, the revolutionary gene editing tool, to the bodies of people already born could let them easily receive changes as well.

It’s possible that in 125 years, many people will be the beneficiaries of multiple rare, but useful, gene mutations currently found in only small segments of the population. These could protect us against common diseases and infections, but eventually they could also yield improvements in other traits, such as height, metabolism, or even cognition. But humanity won’t necessarily do things the right way. Read the full story.

—Antonio Regalado

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Amazing things are happening in New York’s Central Park.
+ A newly-discovered species of dinosaur has gone on display in London, and it’s small but perfectly formed.
+ Cool—Bob Dylan is releasing a new art book, this time of his drawings.
+ Iron Maiden bassist Steve Harris has a secret second career—as a footballer ⚽

Read More »

It’s officially summer, and the grid is stressed

It’s crunch time for the grid this week. As I’m writing this newsletter, it’s 100 °F (nearly 38 °C) here in New Jersey, and I’m huddled in the smallest room in my apartment with the shades drawn and a single window air conditioner working overtime.

Large swaths of the US have seen brutal heat this week, with multiple days in a row nearing or exceeding record-breaking temperatures. Spain recently went through a dramatic heat wave too, as did the UK, which is unfortunately bracing for another one soon. As I’ve been trying to stay cool, I’ve had my eyes on a website tracking electricity demand, which is also hitting record highs.

We rely on electricity to keep ourselves comfortable, and more to the point, safe. These are the moments we design the grid for: when need is at its very highest. The key to keeping everything running smoothly during these times might be just a little bit of flexibility.

While heat waves happen all over the world, let’s take my local grid as an example. I’m one of the roughly 65 million people covered by PJM Interconnection, the largest grid operator in the US. PJM covers Virginia, West Virginia, Ohio, Pennsylvania, and New Jersey, as well as bits of a couple of neighboring states.
Earlier this year, PJM forecast that electricity demand would peak at 154 gigawatts (GW) this summer. On Monday, just a few days past the official start of the season, the grid blew past that, averaging over 160 GW between 5 p.m. and 6 p.m.

The fact that we’ve already passed both last year’s peak and this year’s forecasted one isn’t necessarily a disaster (PJM says the system’s total capacity is over 179 GW this year). But it is a good reason to be a little nervous. Usually, PJM sees its peak in July or August. As a reminder, it’s June. So we shouldn’t be surprised if we see electricity demand creep to even higher levels later in the summer.
It’s not just PJM, either. MISO, the grid that covers most of the Midwest and part of the US South, put out a notice that it expected to be close to its peak demand this week. And the US Department of Energy released an emergency order for parts of the Southeast, which allows the local utility to boost generation and skirt air pollution limits while demand is high.

This pattern of maxing out the grid is only going to continue. That’s because climate change is pushing temperatures higher, and electricity demand is simultaneously swelling (in part because of data centers like those that power AI). PJM’s forecasts show that the summer peak in 2035 could reach nearly 210 GW, well beyond the 179 GW it can provide today.

Of course, we need more power plants to be built and connected to the grid in the coming years (at least if we don’t want to keep ancient, inefficient, expensive coal plants running, as we covered last week). But there’s a quiet strategy that could limit the new construction needed: flexibility.

The power grid has to be built for moments of the absolute highest demand we can predict, like this heat wave. But most of the time, a decent chunk of capacity that exists to get us through these peaks sits idle—it only has to come online when demand surges. Another way to look at that, however, is that by shaving off demand during the peak, we can reduce the total infrastructure required to run the grid.

If you live somewhere that’s seen a demand crunch during a heat wave, you might have gotten an email from your utility asking you to hold off on running the dishwasher in the early evening or to set your air conditioner a few degrees higher. These are called demand response programs. Some utilities run more organized versions, where they pay customers to ramp down their usage during periods of peak demand. PJM’s demand response programs add up to almost eight gigawatts of power—enough to power over 6 million homes. With these programs, PJM basically avoids having to fire up the equivalent of multiple massive nuclear power plants. (It did activate these programs on Monday afternoon during the hottest part of the day.)

As electricity demand goes up, building in and automating this sort of flexibility could go a long way toward reducing the amount of new generation needed. One report published earlier this year found that if data centers agreed to have their power curtailed for just 0.5% of the time (around 40 hours out of a year of continuous operation), the grid could handle about 18 GW of new power demand in the PJM region without adding generation capacity.

For the whole US, this level of flexibility would allow the grid to take on an additional 98 gigawatts of new demand without building any new power plants to meet it. To give you a sense of just how significant that would be, all the nuclear reactors in the US add up to 97 gigawatts of capacity.

Tweaking the thermostat and ramping down data centers during hot summer days won’t solve the demand crunch on their own, but it certainly won’t hurt to have more flexibility. This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
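As a quick sanity check on those flexibility numbers, here is a small back-of-the-envelope script; the inputs are the figures quoted above, and the homes-per-gigawatt conversion is a rough assumption rather than a quoted value:

```python
# Back-of-the-envelope check of the grid figures quoted above.
HOURS_PER_YEAR = 24 * 365

# 0.5% curtailment of continuous operation, per the report cited above.
curtailed_hours = 0.005 * HOURS_PER_YEAR
print(f"0.5% curtailment = {curtailed_hours:.0f} hours/year")  # ~44, i.e. 'around 40'

# PJM summer numbers quoted in the piece (GW).
observed_peak, capacity = 160, 179
print(f"Headroom at Monday's peak: {capacity - observed_peak} GW")

# Demand response: ~8 GW, 'enough to power over 6 million homes'.
HOMES_PER_GW = 750_000  # rough rule of thumb, an assumption, not from the article
print(f"8 GW of demand response = {8 * HOMES_PER_GW / 1e6:.1f} million homes")
```

The arithmetic holds up: half a percent of a year is about 44 hours, and the 19 GW of headroom between Monday's peak and PJM's stated capacity is roughly what the demand response programs plus normal reserves are there to protect.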

Read More »

IBM sees enterprise customers are using ‘everything’ when it comes to AI, the challenge is matching the LLM to the right use case

Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more

Over the last 100 years, IBM has seen many different tech trends rise and fall. What tends to win out are technologies where there is choice. At VB Transform 2025 today, Armand Ruiz, VP of AI Platform at IBM, detailed how Big Blue is thinking about generative AI and how its enterprise users are actually deploying the technology. A key theme Ruiz emphasized is that at this point, it’s not about choosing a single large language model (LLM) provider or technology. Increasingly, enterprise customers are systematically rejecting single-vendor AI strategies in favor of multi-model approaches that match specific LLMs to targeted use cases.

IBM has its own open-source AI models with the Granite family, but it is not positioning that technology as the only choice, or even the right choice, for all workloads. This enterprise behavior is driving IBM to position itself not as a foundation model competitor, but as what Ruiz referred to as a control tower for AI workloads.

“When I sit in front of a customer, they’re using everything they have access to, everything,” Ruiz explained. “For coding, they love Anthropic, and for some other use cases, like reasoning, they like o3. And then for LLM customization, with their own data and fine-tuning, they like either our Granite series or Mistral with their small models, or even Llama…it’s just matching the LLM to the right use case. And then we help them as well to make recommendations.”

The Multi-LLM gateway strategy

IBM’s response to this market reality is a newly released model gateway that provides enterprises with a single API to switch between different LLMs while maintaining observability and governance across all deployments.
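To illustrate the pattern Ruiz describes, matching each use case to a model behind one API while keeping an observability trail, here is a minimal gateway sketch. The routing table mirrors the preferences he lists, but the model names, default fallback, and logging scheme are illustrative assumptions, not IBM's actual gateway API:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-gateway")

# Illustrative use-case -> model routing table, echoing the preferences quoted above.
ROUTES = {
    "coding": "anthropic-claude",
    "reasoning": "openai-o3",
    "customization": "ibm-granite",  # or a fine-tuned Mistral / Llama variant
}
DEFAULT_MODEL = "ibm-granite"

def call_model(model: str, prompt: str) -> str:
    """Stand-in for the provider-specific client call."""
    return f"[{model}] response to: {prompt!r}"

def route(use_case: str, prompt: str) -> str:
    model = ROUTES.get(use_case, DEFAULT_MODEL)
    log.info("use_case=%s routed_to=%s", use_case, model)  # observability/governance hook
    return call_model(model, prompt)

print(route("coding", "Write a unit test for the invoice parser."))
```

The design point is that the application talks only to `route()`; swapping a model for a given use case is a one-line change to the table, and every call leaves a governed audit record.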

Read More »

Boston Consulting Group: To unlock enterprise AI value, start with the data you’ve been ignoring

Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more

When building enterprise AI, some companies are finding the hardest part is deciding what to build and how to address the various processes involved. At VentureBeat Transform 2025, data quality and governance were front and center as companies look beyond the experimental phase of AI and explore ways to productize and scale agents and other applications.

>>See all our Transform 2025 coverage here<<

Organizations are dealing with the pain of thinking through how tech intersects with people, processes and design, said Braden Holstege, managing director and partner at Boston Consulting Group. He added that companies need to think about a range of complexities related to data exposure, per-person AI budgets, access permissions and how to manage external and internal risks. Sometimes, new solutions involve ways of using previously unusable data. Speaking onstage Tuesday afternoon, Holstege gave an example of one client that used large language models (LLMs) to analyze millions of insights about people churn, product complaints and positive feedback—discovering insights that weren’t possible a few years ago with natural language processing (NLP).

“The broader lesson here is that data are not monolithic,” Holstege said. “You have everything from transaction records to documents to customer feedback to trace data which is produced in the course of application development and a million other types of data.”

Some of these new possibilities are thanks to improvements in AI-ready data, said Susan Etlinger, Microsoft’s senior director of strategy and thought leadership of Azure AI. “Once you’re in it, you start getting that sense of the art of the possible,” Etlinger said. “It’s a balancing act between that and coming in with a clear sense of what you’re

Read More »

For Replit’s CEO, the future of software is ‘agents all the way down’

Join the event trusted by enterprise leaders for nearly two decades. VB Transform brings together the people building real enterprise AI strategy. Learn more

Can enterprise teams truly vibe code their way out of expensive SaaS contracts? Replit CEO and co-founder Amjad Masad seems to think so, and the ambitious vision could mean “agents all the way down.” Speaking at VB Transform on Tuesday, Masad touted how his startup’s agents could help a non-developer design and code a live polling app in a mere 15 minutes—using a written prompt to create databases, login authentication and even quality checks. “This is sort of like an almost semi-autonomous agent,” Masad said. “You can watch it, you can also go get a coffee and it’ll send you a notification when it’s ready to show you the future.”

>>See all our Transform 2025 coverage here<<

Scaling apps, sites and software without coding

A polling app might not seem all that necessary for most enterprise teams. However, the process illustrates how some platforms are allowing individuals and teams to quickly and cheaply build and scale various websites, apps and software in ways that could potentially cut timelines or even replace some outside vendors—all without knowing much or any code. The road map for Replit includes building more APIs and abstractions of primitives that agents can use to quickly set up databases, payment processes and other features. Masad also mentioned other updates for Replit v3, including a way for users to add generative models directly to their app and have agents autonomously run tests of AI-generated apps. In recent months, vibe coding has become increasingly popular to help non-developers quickly design and code a new website, app, or agent from scratch using natural language prompts. Giants like Anthropic and Google have rolled out new

Read More »

Data center costs surge up to 18% as enterprises face two-year capacity drought

“AI workloads, especially training and archival, can absorb 10-20ms latency variance if offset by 30-40% cost savings and assured uptime,” said Gogia. “Des Moines and Richmond offer better interconnection diversity today than some saturated Tier-1 hubs.” Contract flexibility is also crucial. Rather than traditional long-term leases, enterprises are negotiating shorter agreements with renewal options and exploring revenue-sharing arrangements tied to business performance.

Maximizing what you have

With expansion becoming more costly, enterprises are getting serious about efficiency through aggressive server consolidation, sophisticated virtualization and AI-driven optimization tools that squeeze more performance from existing space. The companies performing best in this constrained market are focusing on optimization rather than expansion. Some embrace hybrid strategies blending existing on-premises infrastructure with strategic cloud partnerships, reducing dependence on traditional colocation while maintaining control over critical workloads.

The long wait

When might relief arrive? CBRE’s analysis shows primary markets had a record 6,350 MW under construction at year-end 2024, more than double 2023 levels. However, power capacity constraints are forcing aggressive pre-leasing and extending construction timelines to 2027 and beyond. The implications for enterprises are stark: with construction timelines extending years due to power constraints, companies are essentially locked into current infrastructure for at least the next few years. Those adapting their strategies now will be better positioned when capacity eventually returns.

Read More »

Oil Tanker Rates Collapse as Conflict in Middle East Abates

The cost of shipping Middle East crude to customers in Asia collapsed on Thursday, the latest sign of oil markets returning to normal after conflict eased in the world’s top petroleum-exporting region. Charter rates slumped by 17% to 55.50 industry-standard Worldscale points, according to data from the Baltic Exchange in London. It works out at roughly $1.60 a barrel. “Risk premiums have naturally faded,” said Fredrik Dybwad, an analyst at Fearnley Securities AS. “There is ample vessel availability, and considering normal seasonality, rates should naturally find a lower level.” Shipping prices soared two weeks ago amid concern Iran might disrupt maritime traffic around the Strait of Hormuz, a vital waterway through which 20% of the world’s oil and liquefied natural gas must pass. After almost two weeks of fighting between Iran and Israel that began on June 13, there’s since been a ceasefire, hitting oil prices and lowering the risks for ships that enter the region. The Joint Maritime Information Center, a naval liaison with commercial shipping in the region, said Thursday that no hostilities had been reported in the Strait of Hormuz over the past 48 hours and that traffic had returned to normal levels. “A sustained period of inactivity and strengthening of the ceasefire agreement will stabilize maritime tension in the Arabian Gulf,” it said in a note. “Now that the market has become sanguine about Iran shutting the Strait of Hormuz, ships are running fluidly again, the premium has been removed, and rates are correcting lower meaningfully,” said Jonathan Chappell, senior managing director at Evercore ISI. The Worldscale system is designed to let owners and charterers quickly calculate the latest earnings and per-barrel costs on thousands of trade routes. Vessels on the industry’s benchmark Saudi Arabia-to-China route are earning $35,281 a day, according to the Baltic Exchange. They were
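For readers unfamiliar with the system: Worldscale points are a percentage of a route's published flat rate (WS100), quoted in dollars per tonne. Here is a rough conversion sketch; the flat rate below is an assumption chosen so the output lines up with the roughly $1.60-a-barrel figure above, not a quoted value:

```python
# Rough Worldscale-points-to-$/bbl conversion.
FLAT_RATE_USD_PER_TONNE = 21.0  # assumed flat rate for a Mideast Gulf-to-Asia route
BBL_PER_TONNE = 7.33            # typical crude oil conversion factor

def ws_to_usd_per_bbl(ws_points: float) -> float:
    usd_per_tonne = FLAT_RATE_USD_PER_TONNE * ws_points / 100.0
    return usd_per_tonne / BBL_PER_TONNE

print(f"WS 55.50 = ${ws_to_usd_per_bbl(55.50):.2f} per barrel")  # ~$1.59
```

Because the flat rates are republished annually, quoting in points lets owners and charterers compare earnings across thousands of routes with a single number, which is the convenience the article alludes to.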

Read More »

Equinor, Shell Unveil Name of UK North Sea JV

Shell PLC and Equinor ASA have named their United Kingdom North Sea joint venture Adura, which they announced in December as the biggest independent producer on the UK’s side of the sea. “Work continues towards securing regulatory approvals, with launch of the IJV [incorporated JV] expected by the end of this year”, Norway’s majority state-owned Equinor said in an online statement. Adura, which will be equally owned, combines the two companies’ offshore assets in the UK, where Shell currently produces over 100,000 barrels of oil equivalent a day (boed) and Equinor about 38,000 boed. “Adura is expected to produce over 140,000 barrels of oil equivalent per day in 2025”, Equinor said. The name Adura is “rooted in their [the companies’] respective heritage and focused on shaping the future of the basin in the years ahead”, Equinor explained. “Adura has been created to bring together the A of Aberdeen and the dura of durability. It’s a company built on firm foundations, much like the strong granite synonymous with the city”. “Adura will sustain domestic oil and gas production and security of energy supply in the UK and beyond”, Equinor added. Adura will include Equinor’s 29.89 percent stake in the CNOOC Ltd.-operated Buzzard field, which started production in 2007; a 65.11 percent operating stake in Mariner, online since 2019; and an 80 percent operating stake in Rosebank, expected to come onstream in 2026. Shell will contribute its 27.97 percent ownership in BP PLC-operated Clair, which began production in 2005; a 50 percent operating stake in Gannet, started up in 1992; a 100 percent stake in Jackdaw, for which Shell plans to seek a new consent following a court nullification; a 21.23 percent operating stake in Nelson, which started production in 1994; a 50 percent operating stake in Penguins, which started production in 2003; and a 92.52 percent operating stake in Pierce,

Read More »

Can oil and gas solve the AI power dilemma?

Joe Brettell is a partner at Prosody Group. The promise, peril and possibilities of artificial intelligence continue to capture the cultural and business zeitgeist worldwide. Hardly a conference or long-form interview can be held these days without a panelist or pundit commenting on the technology’s implications for their profession. Yet despite being the hottest topic in every circle, AI’s ultimate challenge isn’t technological but physical. After years of breathless speculation and prediction, the issue remains the same: AI needs more energy. Amidst this backdrop, the oil and gas industry faces a similarly fundamental challenge: a shifting production frontier and evolving path to continued growth. After a decade of efficiency-driven growth, the era of easy barrels is waning. Diamondback Energy CEO Travis Stice captured the new reality in a recent letter, warning of the increasingly dim prospects for expanding production amid geological constraints and rising costs. Other energy majors have issued similar cautions, a sharp departure from the boom years of the shale revolution when abundant, low-cost reserves, followed by shareholder-focused production, made the industry a market favorite. Now, with resource intensity rising, global volatility accelerating and economic conditions tightening, the industry is under pressure to find its next value horizon. That horizon may be converging with AI. The pairing makes increasing sense. While initially circling one another warily, major players in energy and technology have become increasingly intertwined. At major gatherings like CERAWeek, energy executives and tech leaders now share the same stage — and increasingly, the same strategic questions. How do we scale the infrastructure to match exponential AI growth? Who will supply the energy to power it? And how do we do so fast enough while dealing with rising environmental, social and regulatory concerns? These challenges come amid a stark reality: AI’s computational appetite isn’t just increasing —

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on 1 week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE