Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.


Featured Articles

I asked an AI swarm to fill out a March Madness bracket — here’s what happened

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Imagine if a large team of 200 people could hold a thoughtful real-time conversation in which they efficiently brainstorm ideas, share knowledge, debate alternatives and quickly converge on AI-optimized solutions. Is this possible — and if so, would it amplify their collective intelligence? There is a new generative AI technology, conversational swarm intelligence (or simply hyperchat), that enables teams of potentially any size to engage in real-time conversations and quickly converge on AI-optimized solutions. To put this to the test, I asked the research team at Unanimous AI to bring together 50 random sports fans and task that large group with quickly creating a March Madness bracket through real-time conversational deliberation. Before I tell you how the experiment is going, I need to explain why we can’t just bring 50 people into a Zoom meeting and have them quickly create a bracket together. Research shows that the ideal size for a productive real-time conversation is only 4 to 7 people. In small groups, each individual gets a good amount of airtime to express their views and has low wait time to respond to others. But as group size grows, airtime drops, wait-time rises — and by a dozen people it devolves into a series of monologues. Above 20 people, it’s chaos.  So how can 50 people hold a conversation, or 250, or even 2,500?  Hyperchat works by breaking any large group into a set of parallel subgroups. It then adds an AI agent into each subgroup called a “conversational surrogate” tasked with distilling the human insights within its local group and quickly sharing those insights as natural dialog with other groups. These surrogate agents enable all the subgroups to overlap, weaving
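The hyperchat mechanics described above, parallel subgroups of conversational size each seeded with a surrogate agent that distills local insights and relays them to the other groups, can be sketched in a few lines. This is an illustrative sketch only: the `partition` helper, the `Surrogate` class, and the crude "keep the last few messages" distillation are assumptions for demonstration, not Unanimous AI's implementation.

```python
import random

SUBGROUP_SIZE = 6  # within the 4-7 range research suggests for productive conversation

def partition(participants, size=SUBGROUP_SIZE):
    """Split a large group into parallel subgroups of conversational size."""
    shuffled = participants[:]
    random.shuffle(shuffled)
    return [shuffled[i:i + size] for i in range(0, len(shuffled), size)]

class Surrogate:
    """Illustrative stand-in for the AI agent embedded in each subgroup."""
    def __init__(self, subgroup):
        self.subgroup = subgroup
        self.inbox = []

    def distill(self, messages):
        # A real surrogate would use an LLM to summarize its subgroup's
        # dialog; keeping the most recent points is just a placeholder.
        return messages[-3:]

    def relay(self, summary, other_surrogates):
        # Share the distilled insights with every other subgroup,
        # weaving the parallel conversations together.
        for other in other_surrogates:
            other.inbox.append(summary)

fans = [f"fan_{i}" for i in range(50)]
subgroups = partition(fans)
surrogates = [Surrogate(g) for g in subgroups]
print(len(subgroups))  # 50 fans -> 9 subgroups of up to 6
```

With 50 participants this yields nine overlapping subgroups, each small enough for everyone to get airtime, while the surrogates carry insights across group boundaries.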

Read More »

Trump Vents Anger at Putin Over Ukraine, Hints at Oil Curbs

President Donald Trump said he was “very angry” at Vladimir Putin and threatened “secondary tariffs” on buyers of his country’s oil if the Russian leader refuses a ceasefire with Ukraine. In comments reported by NBC News, Trump said he was “pissed off” at Putin for casting doubt on Ukrainian President Volodymyr Zelenskiy’s legitimacy as a negotiating partner, and threatened curbs on “all oil coming out of Russia.” He later added that he didn’t think the Russian president would “go back on his word.”  While the US president appeared to temper his remarks, the threats mark a significant change of tone for Washington and suggest a possible souring in relations with his Russian counterpart over the pace of ceasefire talks. Before taking office, Trump said he could resolve the war quickly, but the conflict rages on more than two months later.  “I certainly wouldn’t want to put secondary tariffs on Russia,” Trump later clarified in comments to reporters on Air Force One, adding he was “disappointed” with some of Putin’s recent comments on Zelenskiy. “He’s supposed to be making a deal with him, whether you like him or don’t like him. So I wasn’t happy with that. But I think he’s going to be good.” Trump’s frustration was sparked by comments Putin made on Friday that implicitly challenged Zelenskiy’s legitimacy by proposing the United Nations should take over Ukraine with a temporary government overseen by the US and possibly even some European countries.  The Kremlin on Monday said that Putin remained open to contacts with Trump.  “If necessary, their conversation will be organized very quickly,” spokesman Dmitry Peskov told reporters, according to the state-run Tass news agency, though he said no call had been scheduled yet. Peskov also said that Russia was continuing to work with the US to build bilateral

Read More »

Emergence AI’s new system automatically creates AI agents in real time based on the work at hand

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Another day, another announcement about AI agents. Hailed by various market research reports as the big tech trend of 2025, especially in the enterprise, it seems we can’t go more than 12 hours or so without the debut of another way to make, orchestrate (link together), or otherwise optimize purpose-built AI tools and workflows designed to handle routine white-collar work. Yet Emergence AI, a startup founded by IBM Research veterans that late last year debuted its own cross-platform AI agent orchestration framework, is out with something that stands apart from the rest: an AI agent creation platform that lets the human user specify, via text prompts, what work they are trying to accomplish, then turns it over to AI models to create the agents they believe are necessary to accomplish that work. The new system is a no-code, natural-language, AI-powered multi-agent builder, and it works in real time. Emergence AI describes it as a milestone in recursive intelligence and says it aims to simplify and accelerate complex data workflows for enterprise users. “Recursive intelligence paves the path for agents to create agents,” said Satya Nitta, co-founder and CEO of Emergence AI. “Our systems allow creativity and intelligence to scale fluidly, without human bottlenecks, but always within human-defined boundaries.” Image of Dr. Satya Nitta, Co-founder and CEO of Emergence AI, during his keynote at the AI Engineer World’s Fair 2024, where he unveiled Emergence’s Orchestrator meta-agent and introduced the open-source web agent, Agent-E. (photo courtesy AI Engineer World’s Fair) The platform is designed to evaluate incoming tasks, check its existing agent registry, and, if necessary, autonomously generate new agents tailored to fulfill specific enterprise needs. It can
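The evaluate-then-reuse-or-create loop described here can be sketched roughly as follows. The registry layout, the capability-matching rule, and the `fulfil` method are hypothetical names invented for illustration, not Emergence AI's actual API; in the real platform an LLM would synthesize the new agent from the natural-language task description.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    capabilities: set

@dataclass
class AgentRegistry:
    agents: list = field(default_factory=list)

    def find(self, required):
        """Return an existing agent covering the required capabilities, if any."""
        for agent in self.agents:
            if required <= agent.capabilities:
                return agent
        return None

    def fulfil(self, task_name, required):
        """Reuse a registered agent when possible; otherwise create and register one."""
        agent = self.find(required)
        if agent is None:
            # Stand-in for LLM-driven agent synthesis: register a stub agent
            # tailored to the capabilities the task needs.
            agent = Agent(name=f"agent_for_{task_name}", capabilities=set(required))
            self.agents.append(agent)
        return agent

registry = AgentRegistry()
a1 = registry.fulfil("extract_invoices", {"pdf_parsing", "table_extraction"})
a2 = registry.fulfil("extract_receipts", {"pdf_parsing"})  # covered by a1, so reused
print(a1.name, a1 is a2)  # agent_for_extract_invoices True
```

The key design point the article describes is the registry check: agents are only created when no existing agent already covers the incoming task, which is what keeps "agents creating agents" bounded.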

Read More »

Where battery and hydrogen-powered trains are coming to US commuter rail

As U.S. transit agencies increasingly order buses powered by batteries or hydrogen fuel cells, some of these same agencies are beginning to look at trains that use similar technologies. Stadler, an international train manufacturer, already has trains in testing and on order in two states, while other manufacturers of such trains operating in Canada and Europe are eyeing U.S. opportunities, too. California puts Stadler hydrogen trains to the test California announced a $310 billion plan in January to develop a zero-emission passenger rail network across much of the state by 2050. A hydrogen-powered passenger train built by Stadler, a Swiss company, began testing on San Bernardino County’s Metrolink commuter line between San Bernardino and Redlands, California, in November. The San Bernardino County Transportation Authority expects the train to go into regular service this year. “We’re confident that once that train goes into revenue service soon, that we’ll see a lot of positive feedback,” said Stadler’s Martin Ritter, executive vice president for North America. Ritter said California signed a contract with Stadler to provide up to 29 hydrogen fuel cell trains; it had ordered 10 as of a year ago. The state is bundling the procurement contract and will assign trains to different transit agencies, he said. Prior to its arrival in California, the SBCTA hydrogen train underwent testing at the Ensco Transportation Technology Center in Pueblo, Colorado. During that process, the train set a Guinness World Record for traveling 1,741.7 miles around a test loop without refueling or recharging.  Ritter said zero-emission trains are quieter and produce fewer vibrations than conventional fuel trains as they speed through communities along the line. He noted that the only byproduct of a fuel cell train is water vapor. Electric trains and streetcars have existed for more than a century. Passenger railroads like the

Read More »

EPA denies harm from GGRF freeze in court filing

The U.S. Environmental Protection Agency filed a motion Wednesday opposing motions for injunctive relief filed by three nonprofits that have had their access to Greenhouse Gas Reduction Fund grant money frozen, arguing that their monetary harm does not warrant an injunction and is not irreparable. The nonprofit Climate United Fund, which received a $6.97 billion National Clean Investment Fund grant, was the first to sue over the frozen funds last month, targeting EPA and fund holder Citibank. The Coalition for Green Capital, which received $5 billion from the NCIF, and Power Forward Communities, which received $2 billion from it, have each filed lawsuits against Citibank. EPA argued for the injunction requests filed by each to be denied, as “an injunction should be denied when Plaintiffs’ alleged harms are monetary and may be remedied by damages” and “in terminating Plaintiffs’ grants, EPA has not prohibited or made it unlawful for Plaintiffs (or their subgrantees) to carry out their work.” “Nor has any other government action,” EPA said. “The government is not preventing Plaintiffs from providing services; EPA has just terminated the contracts under which the government would provide reimbursement for those services.” In a joint response filed Friday, the three plaintiffs argued that they have already “demonstrated several forms of irreparable harm, including potentially fatal disruption to Plaintiffs’ operations; irreplaceable loss of clients, partnerships, and opportunities; devastating reputational injury; interference with Plaintiffs’ missions; and an immediate risk of insolvency for some of the Plaintiffs and their subgrantees.” “Many of these injuries have already materialized and will worsen if Plaintiffs continue to be deprived of access to their funds,” they said. The plaintiffs argue that the U.S.
District Court for the District of Columbia, where the case is being heard, has previously held that financial harm can constitute irreparable harm when the existence

Read More »

FERC review of PJM colocation rules for data centers, large loads may extend past mid-year: analysts

The PJM Interconnection’s response to the Federal Energy Regulatory Commission’s investigation into the grid operator’s rules for colocated loads indicates FERC may not approve new regulations by mid-year, as some people initially thought, according to utility-sector analysts. FERC on Feb. 20 launched a review of issues related to colocating large loads, such as data centers, at power plants in PJM’s footprint. The outcome of the review could set a precedent for colocated load in the power markets FERC oversees. Talen Energy, Constellation Energy and PSEG Power, a Public Service Enterprise Group subsidiary, are among the companies that are considering hosting data centers at their nuclear power plants in PJM. In its “show cause” order, FERC asked PJM and stakeholders to explain why the grid operator’s colocation rules are just and reasonable or to offer rules that would pass agency muster. FERC established a comment schedule that enables the agency to issue a response by June 20. The agency said it could make a decision on a PJM proposal within three months. However, instead of proposing new colocation rules, PJM on March 24 said its existing rules are just and reasonable. The grid operator also offered five conceptual colocation options that have been proposed by stakeholders or developed by PJM. PJM urged FERC to issue “detailed guiding principles” that the grid operator could use to craft colocation rules for the agency’s approval. The lack of a proposal from PJM likely extends FERC’s review process, according to analysts. “FERC may still act on the show cause order in June, but we don’t rule out a new iteration of process instead of a clear policy decision,” ClearView Energy Partners analysts said in a client note on Friday. It will likely take FERC until late this year to approve changes to PJM’s colocation rules,

Read More »


ISO New England issues transmission RFP to access new wind resources

The New England grid operator on Monday published a request for proposals to address the region’s longer-term transmission needs, aimed at upgrading the electric system between anticipated wind generation in northern Maine and demand centers to the south. ISO New England said it published the RFP at the direction of the New England States Committee on Electricity (NESCOE). Proposals are due in September, though the schedule is subject to change, the ISO said. After evaluation by the ISO, a preferred solution may be selected by NESCOE as early as September 2026. Proposals must aim to increase the amount of power that can flow across the Maine–New Hampshire and Surowiec–South transmission interfaces, and develop new infrastructure around Pittsfield, Maine, that could accommodate the interconnection of 1,200 MW of land-based wind generation, the ISO said. “A strong preference will be given to proposals with an in-service date on or before December 31, 2035, or as close as possible,” according to the RFP. Massachusetts officials celebrated the announcement, noting that the first competitive RFP for longer-term transmission investments has been “a long-time goal of the New England states.” “This RFP will address long-standing constraints on the New England power system and integrate new, affordable, onshore wind resources in the coming years,” according to a statement from Massachusetts Gov. Maura Healey, D. Previously, New England lacked a mechanism to enable the ISO to procure transmission at the states’ request. The RFP process was developed in collaboration between the ISO and regional stakeholders, allowing the states to request that the grid operator pursue transmission investment “that is grounded in the evaluation of broad regional benefits and consumer interests,” according to the Massachusetts statement.
“This milestone represents what can happen when we work together — innovative and cost-effective solutions to our region’s most pressing energy challenges,” Healey said. “We are grateful

Read More »

Macquarie Strategists Forecast USA Crude Inventory Rise

In an oil and gas report sent to Rigzone late Monday by the Macquarie team, Macquarie strategists revealed that they are forecasting that U.S. crude inventories will be up 4.2 million barrels for the week ending March 28. “This follows a 3.3 million barrel draw for the week ending March 21 and compares to our initial expectation for a larger crude build this week,” the strategists said in the report. “For this week’s crude balance, from refineries, we model crude runs down meaningfully (-0.4 million barrels per day) following a strong print last week,” they added. “Among net imports, we model a moderate increase, with exports (-1.0 million barrels per day) and imports (-0.7 million barrels per day) much lower on a nominal basis,” they continued. The strategists warned in the report that timing of cargoes remains a source of potential volatility in this week’s crude balance. “From implied domestic supply (prod.+adj.+transfers), we look for a bounce (+0.3 million barrels per day) this week,” they said in the report. “Rounding out the picture, we anticipate another small increase in SPR [Strategic Petroleum Reserve] stocks (+0.3 MM BBL) this week,” they added. The strategists also noted in the report that, “among products”, they “look for draws in gasoline (-0.9 million barrels) and distillate (-4.1 million barrels), with jet stocks effectively flat”. “We model implied demand for these three products at ~14.4 million barrels per day for the week ending March 28,” they said. In its latest weekly petroleum status report at the time of writing, which was released on March 26 and included data for the week ending March 21, the U.S. Energy Information Administration (EIA) highlighted that U.S. commercial crude oil inventories, excluding those in the SPR, decreased by 3.3 million barrels from the week ending March 14 to the
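Underneath the strategists' line items is simple stock-flow arithmetic: whatever crude is supplied domestically or imported but not run through refineries has to go into (or come out of) storage. A minimal sketch of that weekly bookkeeping, using made-up round numbers rather than Macquarie's figures:

```python
# Illustrative weekly crude stock-flow identity.
# Inputs are in million barrels per day; the result is in million barrels.
# All numbers below are hypothetical placeholders, not Macquarie's model.
DAYS = 7

def implied_stock_change(domestic_supply, net_imports, refinery_runs):
    """Commercial inventory change over one week: supply plus net imports
    that refineries do not consume accumulates in (or draws from) storage."""
    return (domestic_supply + net_imports - refinery_runs) * DAYS

change = implied_stock_change(domestic_supply=13.5, net_imports=2.0, refinery_runs=15.0)
print(round(change, 1))  # 0.5 MM bbl/d surplus over 7 days -> 3.5 million barrels
```

This is why the report walks through runs, net imports, and implied domestic supply one by one: each term shifts the weekly build or draw directly, and cargo timing moves the net-imports term enough to make the balance volatile.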

Read More »

NEO Energy seeks contractors for Donan, Balloch and Lochranza decommissioning

NEO Energy has released five tenders seeking contractors to help decommission its Donan, Balloch and Lochranza fields, along with the Global Producer III floating production, storage and offloading (FPSO) vessel. According to data from the North Sea Transition Authority’s (NSTA’s) Pathfinder database, the decommissioning campaign is expected to start in the second quarter of 2026 at the earliest, when work to disconnect the subsea infrastructure is expected to commence. This will also see the FPSO unmoored and towed to an unspecified location. By 2027, NEO plans to begin recovering the subsea infrastructure, followed by plugging and abandoning a total of 19 wells in 2028. To help with this, NEO Energy is looking for a contractor to perform P&A activities on the wells. The tender is expected to take place on 31 December 2025 and has a value of over £25 million. The company also announced four additional tenders, each with a value of less than £25m, covering recycling the FPSO; flushing, isolating and disconnecting the subsea infrastructure from the FPSO; disconnecting the moorings and towing the FPSO; and bulk seabed clearance. NEO Energy recently announced plans to merge its North Sea operations with Repsol Resources UK’s. The deal will see Repsol retain $1.8 billion (£1.4bn) in decommissioning liabilities related to its legacy assets, which NEO said will enhance the cash flows of the merged business. NEO said it expects to complete the deal during the third quarter of 2025, subject to regulatory approvals. Canadian Natural Resources Ltd (CNRL) has issued two tenders to assist with decommissioning its Ninian field in the Northern North Sea, located east of Shetland. The decommissioning scope consists of three areas, covering the Ninian South Platform, Ninian Central Platform and the Ninian subsea infrastructure, which includes the Strathspey, Lyell, Columba

Read More »

Eni, Saipem Extend Biorefining Collaboration

Eni SpA and Saipem SpA have extended a deal to collaborate on building biorefineries and converting traditional refineries. The agreement, first signed in 2023, combines Eni’s technological expertise with Saipem’s expertise in the design and construction of such plants. Italian state-backed integrated energy company Eni holds a 21.19 percent stake in energy engineering company Saipem. “The agreement concerns, in particular, the construction of new biorefineries, the conversion of traditional refineries into biorefineries and, generally, the development of new initiatives by Eni in the field of industrial transformation”, Eni said in an online statement. “Through this agreement, Eni, in line with its goal of decarbonizing processes and products, intends to further develop its biorefining capacity through the development of new initiatives to produce biofuels both for aviation (SAF, Sustainable Aviation Fuel) and for land and sea mobility (HVO, Hydrotreated Vegetable Oil). “At the same time, Saipem further strengthens its distinctive expertise in biorefining and decarbonization”. Under the agreement, Eni recently awarded Saipem a contract for engineering, procurement services and the purchase of critical equipment for the upgrade of a biorefinery in Porto Marghera. The project will increase the plant’s capacity from 400,000 metric tons per year to 600,000 metric tons per year. The upgrade will also enable the facility to produce SAF from 2027. In November 2024 Eni also picked Saipem for the conversion of the Livorno refinery into a biorefinery, as part of their biorefining collaboration. In both projects Saipem also carried out preparatory engineering activities such as feasibility studies and front-end engineering design. The two contracts are valued at about EUR 320 million ($345.4 million), according to Eni. Eni, through subsidiary Enilive, has a biorefining production capacity of 1.65 million metric tons per annum (MMtpa).
Eni aims to raise this to over 5 MMtpa by 2030 as part of its efforts

Read More »

National Grid, Con Edison urge FERC to adopt gas pipeline reliability requirements

The Federal Energy Regulatory Commission should adopt reliability-related requirements for gas pipeline operators to ensure fuel supplies during cold weather, according to National Grid USA and affiliated utilities Consolidated Edison Co. of New York and Orange and Rockland Utilities. In the wake of power outages in the Southeast and the near collapse of New York City’s gas system during Winter Storm Elliott in December 2022, voluntary efforts to bolster gas pipeline reliability are inadequate, the utilities said in two separate filings on Friday at FERC. The filings were in response to a gas-electric coordination meeting held in November by the Federal-State Current Issues Collaborative between FERC and the National Association of Regulatory Utility Commissioners. National Grid called for FERC to use its authority under the Natural Gas Act to require pipeline reliability reporting, coupled with enforcement mechanisms, and pipeline tariff reforms. “Such data reporting would enable the commission to gain a clearer picture into pipeline reliability and identify any problematic trends in the quality of pipeline service,” National Grid said. “At that point, the commission could consider using its ratemaking, audit, and civil penalty authority preemptively to address such identified concerns before they result in service curtailments.” On pipeline tariff reforms, FERC should develop tougher provisions for force majeure events — an unforeseen occurrence that prevents a contract from being fulfilled — reservation charge crediting, operational flow orders, scheduling and confirmation enhancements, improved real-time coordination, and limits on changes to nomination rankings, National Grid said. FERC should support efforts in New England and New York to create financial incentives for gas-fired generators to enter into winter contracts for imported liquefied natural gas supplies, or other long-term firm contracts with suppliers and pipelines, National Grid said. 
Con Edison and O&R said they were encouraged by recent efforts such as North American Energy Standard

Read More »

US BOEM Seeks Feedback on Potential Wind Leasing Offshore Guam

The United States Bureau of Ocean Energy Management (BOEM) on Monday issued a Call for Information and Nominations to help it decide on potential leasing areas for wind energy development offshore Guam. The call concerns a contiguous area around the island that comprises about 2.1 million acres. The area’s water depths range from 350 meters (1,148.29 feet) to 2,200 meters (7,217.85 feet), according to a statement on BOEM’s website. Closing April 7, the comment period seeks “relevant information on site conditions, marine resources, and ocean uses near or within the call area”, the BOEM said. “Concurrently, wind energy companies can nominate specific areas they would like to see offered for leasing. “During the call comment period, BOEM will engage with Indigenous Peoples, stakeholder organizations, ocean users, federal agencies, the government of Guam, and other parties to identify conflicts early in the process as BOEM seeks to identify areas where offshore wind development would have the least impact”. The next step would be the identification of specific WEAs, or wind energy areas, in the larger call area. BOEM would then conduct environmental reviews of the WEAs in consultation with different stakeholders. “After completing its environmental reviews and consultations, BOEM may propose one or more competitive lease sales for areas within the WEAs”, the Department of the Interior (DOI) sub-agency said. BOEM Director Elizabeth Klein said, “Responsible offshore wind development off Guam’s coast offers a vital opportunity to expand clean energy, cut carbon emissions, and reduce energy costs for Guam residents”. Late last year the DOI announced the approval of the 2.4-gigawatt (GW) SouthCoast Wind Project, raising the total capacity of federally approved offshore wind power projects to over 19 GW. The project owned by a joint venture between EDP Renewables and ENGIE received a positive Record of Decision, the DOI said in

Read More »

Biden Bars Offshore Oil Drilling in USA Atlantic and Pacific

President Joe Biden is indefinitely blocking offshore oil and gas development in more than 625 million acres of US coastal waters, warning that drilling there is simply “not worth the risks” and “unnecessary” to meet the nation’s energy needs.  Biden’s move is enshrined in a pair of presidential memoranda being issued Monday, burnishing his legacy on conservation and fighting climate change just two weeks before President-elect Donald Trump takes office. Yet unlike other actions Biden has taken to constrain fossil fuel development, this one could be harder for Trump to unwind, since it’s rooted in a 72-year-old provision of federal law that empowers presidents to withdraw US waters from oil and gas leasing without explicitly authorizing revocations.  Biden is ruling out future oil and gas leasing along the US East and West Coasts, the eastern Gulf of Mexico and a sliver of the Northern Bering Sea, an area teeming with seabirds, marine mammals, fish and other wildlife that indigenous people have depended on for millennia. The action doesn’t affect energy development under existing offshore leases, and it won’t prevent the sale of more drilling rights in Alaska’s gas-rich Cook Inlet or the central and western Gulf of Mexico, which together provide about 14% of US oil and gas production.  The president cast the move as achieving a careful balance between conservation and energy security. “It is clear to me that the relatively minimal fossil fuel potential in the areas I am withdrawing do not justify the environmental, public health and economic risks that would come from new leasing and drilling,” Biden said. “We do not need to choose between protecting the environment and growing our economy, or between keeping our ocean healthy, our coastlines resilient and the food they produce secure — and keeping energy prices low.” Some of the areas Biden is protecting

Read More »

Biden Admin Finalizes Hydrogen Tax Credit Favoring Cleaner Production

The Biden administration has finalized rules for a tax incentive promoting hydrogen production using renewable power, with lower credits for processes using abated natural gas. The Clean Hydrogen Production Credit is based on carbon intensity, which must not exceed four kilograms of carbon dioxide equivalent per kilogram of hydrogen produced. Qualified facilities are those whose start of construction falls before 2033. These facilities can claim credits for 10 years of production starting on the date the facility is placed in service, according to the draft text on the Federal Register’s portal. The final text is scheduled for publication Friday. Established by the 2022 Inflation Reduction Act, the four-tier scheme gives producers that meet wage and apprenticeship requirements a credit of up to $3 per kilogram of “qualified clean hydrogen”, to be adjusted for inflation. Hydrogen whose production process generates higher lifecycle emissions receives a smaller credit. The scheme will use the Energy Department’s Greenhouse Gases, Regulated Emissions and Energy Use in Transportation (GREET) model in tiering production processes for credit computation. “In the coming weeks, the Department of Energy will release an updated version of the 45VH2-GREET model that producers will use to calculate the section 45V tax credit”, the Treasury Department said in a statement announcing the finalization of rules, a process that it said had considered roughly 30,000 public comments. However, producers may use the GREET model that was the most recent when their facility began construction. “This is in consideration of comments that the prospect of potential changes to the model over time reduces investment certainty”, explained the statement on the Treasury’s website. “Calculation of the lifecycle GHG analysis for the tax credit requires consideration of direct and significant indirect emissions”, the statement said. 
For electrolytic hydrogen, electrolyzers covered by the scheme include not only those using renewables-derived electricity (green hydrogen) but
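The tiered structure can be sketched in a few lines. The article gives the $3/kg ceiling and the 4 kg CO2e/kg cutoff; the intermediate tier boundaries and percentages below (0.45, 1.5 and 2.5 kg, at 33.4%, 25% and 20% of the full rate) are the commonly reported IRA section 45V values and should be treated as an assumption here, not as figures from this report.

```python
# Sketch of the four-tier 45V credit. The $3.00/kg ceiling and the
# 4 kg CO2e/kg cutoff come from the article; the intermediate tier
# boundaries and percentages are assumed from the commonly cited IRA text.

def clean_hydrogen_credit(ci_kg_co2e_per_kg_h2, max_credit=3.00):
    """Return the per-kilogram credit for a given lifecycle carbon intensity."""
    if ci_kg_co2e_per_kg_h2 < 0.45:
        return max_credit              # cleanest tier: full credit
    elif ci_kg_co2e_per_kg_h2 < 1.5:
        return max_credit * 0.334
    elif ci_kg_co2e_per_kg_h2 < 2.5:
        return max_credit * 0.25
    elif ci_kg_co2e_per_kg_h2 <= 4.0:
        return max_credit * 0.20
    return 0.0                         # above 4 kg CO2e/kg: not "qualified clean hydrogen"

print(clean_hydrogen_credit(0.3))   # full $3.00/kg
print(clean_hydrogen_credit(5.0))   # ineligible: $0.00
```

Producers not meeting the wage and apprenticeship requirements would see a much lower `max_credit`; the inflation adjustment the article mentions is omitted here.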

Read More »

Xthings unveils Ulticam home security cameras powered by edge AI

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Xthings announced that its Ulticam security camera brand has a new model out today: the Ulticam IQ Floodlight, an edge AI-powered home security camera. The company also plans to showcase two additional cameras, Ulticam IQ, an outdoor spotlight camera, and Ulticam Dot, a portable, wireless security camera. All three cameras offer free cloud storage (seven days rolling) and subscription-free edge AI-powered person detection and alerts. The AI at the edge means that it doesn’t have to go out to an internet-connected data center to tap AI computing to figure out what is in front of the camera. Rather, the processing for the AI is built into the camera itself, and that sets a new standard for value and performance in home security cameras. It can identify people, faces and vehicles. CES 2025 attendees can experience Ulticam’s entire lineup at Pepcom’s Digital Experience event on January 6, 2025, and at the Venetian Expo, Halls A-D, booth #51732, from January 7 to January 10, 2025. These new security cameras will be available for purchase online in the U.S. in Q1 and Q2 2025 at U-tec.com, Amazon, and Best Buy. The Ulticam IQ Series: smart edge AI-powered home security cameras Ulticam IQ home security camera. The Ulticam IQ Series, which includes IQ and IQ Floodlight, takes home security to the next level with the most advanced AI-powered recognition. Among the very first consumer cameras to use edge AI, the IQ Series can quickly and accurately identify people, faces and vehicles, without uploading video for server-side processing, which improves speed, accuracy, security and privacy. Additionally, the Ulticam IQ Series is designed to improve over time with over-the-air updates that enable new AI features. Both cameras

Read More »

Intel unveils new Core Ultra processors with 2X to 3X performance on AI apps

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Intel unveiled new Intel Core Ultra 9 processors today at CES 2025 with as much as two or three times the edge performance on AI apps as before. The chips under the Intel Core Ultra 9 and Core i9 labels were previously codenamed Arrow Lake H, Meteor Lake H, Arrow Lake S and Raptor Lake S Refresh. Intel said it is pushing the boundaries of AI performance and power efficiency for businesses and consumers, ushering in the next era of AI computing. In other performance metrics, Intel said the Core Ultra 9 processors are up to 5.8 times faster in media performance, 3.4 times faster in video analytics end-to-end workloads with media and AI, and 8.2 times better in terms of performance per watt than prior chips. Intel hopes to kick off the year better than in 2024. CEO Pat Gelsinger resigned last month without a permanent successor after a variety of struggles, including mass layoffs, manufacturing delays and poor execution on chips, including gaming bugs in processors launched during the summer. Intel Core Ultra Series 2 Michael Masci, vice president of product management at the Edge Computing Group at Intel, said in a briefing that AI, once the domain of research labs, is integrating into every aspect of our lives, including AI PCs where the AI processing is done in the computer itself, not the cloud. AI is also being processed in data centers in big enterprises, from retail stores to hospital rooms. “As CES kicks off, it’s clear we are witnessing a transformative moment,” he said. “Artificial intelligence is moving at an unprecedented pace.” The new processors include the Intel Core 9 Ultra 200 H/U/S models, with up to

Read More »

Beyond encryption: Why quantum computing might be more of a science boom than a cybersecurity bust

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Last August, the National Institute of Standards and Technology (NIST) released the first three “post-quantum encryption standards” designed to withstand an attack from a quantum computer. For years, cryptography experts have worried that the advent of quantum computing could spell doom for traditional encryption methods. With the technology now firmly on the horizon, the new NIST standards represent the first meaningful step toward post-quantum protections.  But is quantum computing the threat to encryption it’s been made out to be? While it’s true that quantum computers will be able to break traditional encryption more quickly and easily, we’re still a long way from the “No More Secrets” decryption box imagined in the 1992 movie Sneakers. With energy demands and computing power still limiting factors, those with access to quantum computers are likely considering putting the technology to better use elsewhere — such as science, pharmaceuticals and healthcare. Remember the electron microscope theory? I’ve spent a long time working in digital forensics, and it’s given me a unique perspective on the challenges of quantum computing. In 1996, Peter Gutmann published a white paper, “Secure Deletion of Data from Magnetic and Solid-State Memory”, which theorized that deleted data could be recovered from a hard drive using an electron microscope. Was this possible? Maybe — but ultimately, the process would be incredibly laborious, resource-intensive and unreliable. More importantly, it wasn’t long before hard drives were storing information in such a densely-packed manner that even an electron microscope had no hope of recovering deleted data.  In fact, there is almost no evidence that such an electron microscope was ever successfully used for that purpose, and modern testing confirms that the method is neither practical nor reliable.

Read More »

Why businesses judge AI like humans — and what that means for adoption

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More As businesses rush to adopt AI, they’re discovering an unexpected truth: Even the most rational enterprise buyers aren’t making purely rational decisions — their subconscious requirements go far beyond the conventional software evaluation standards. Let me share an anecdote: It’s November 2024; I’m sitting in a New York City skyscraper, working with a fashion brand on their first AI assistant. The avatar, Nora, is a 25-year-old digital assistant displayed on a six-foot-tall kiosk. She has sleek brown hair, a chic black suit and a charming smile. She waves “hi” when recognizing a client’s face, nods as they speak and answers questions about company history and tech news. I came prepared with a standard technical checklist: response accuracy, conversation latency, face recognition precision… But my client didn’t even glance at the checklist. Instead, they asked, “Why doesn’t she have her own personality? I asked her favorite handbag, and she didn’t give me one!” Changing how we evaluate technology It’s striking how quickly we forget these avatars aren’t human. While many worry about AI blurring the lines between humans and machines, I see a more immediate challenge for businesses: A fundamental shift in how we evaluate technology. When software begins to look and act human, users stop evaluating it as a tool and begin judging it as a human being. This phenomenon — judging non-human entities by human standards — is anthropomorphism, which has been well-studied in human-pet relationships, and is now emerging in the human-AI relationship. When it comes to procuring AI products, enterprise decisions are not as rational as you might think because decision-makers are still humans. Research has shown that unconscious perceptions shape most human-to-human interactions, and enterprise buyers are

Read More »

Google’s Gemini 2.5 Pro is the smartest model you’re not using – and 4 reasons it matters for enterprise AI

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More The release of Gemini 2.5 Pro on Tuesday didn’t exactly dominate the news cycle. It landed the same week OpenAI’s image-generation update lit up social media with Studio Ghibli-inspired avatars and jaw-dropping instant renders. But while the buzz went to OpenAI, Google may have quietly dropped the most enterprise-ready reasoning model to date. Gemini 2.5 Pro marks a significant leap forward for Google in the foundational model race – not just in benchmarks, but in usability. Based on early experiments, benchmark data, and hands-on developer reactions, it’s a model worth serious attention from enterprise technical decision-makers, particularly those who’ve historically defaulted to OpenAI or Claude for production-grade reasoning. Here are four major takeaways for enterprise teams evaluating Gemini 2.5 Pro. 1. Transparent, structured reasoning – a new bar for chain-of-thought clarity What sets Gemini 2.5 Pro apart isn’t just its intelligence – it’s how clearly that intelligence shows its work. Google’s step-by-step training approach results in a structured chain of thought (CoT) that doesn’t feel like the rambling or guesswork we’ve seen from models like DeepSeek. And these CoTs aren’t truncated into the shallow summaries you see in OpenAI’s models. The new Gemini model presents ideas in numbered steps, with sub-bullets and internal logic that’s remarkably coherent and transparent. In practical terms, this is a breakthrough for trust and steerability. Enterprise users evaluating output for critical tasks – like reviewing policy implications, coding logic, or summarizing complex research – can now see how the model arrived at an answer. That means they can validate, correct, or redirect it with more confidence. It’s a major evolution from the “black box” feel that still plagues many LLM outputs. For a deeper

Read More »

Credit where credit’s due: Inside Experian’s AI framework that’s changing financial access

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More While many enterprises are now racing to adopt and deploy AI, credit bureau giant Experian has taken a very measured approach. Experian has developed its own internal processes, frameworks and governance models that have helped it test out generative AI, deploy it at scale and have an impact. The company’s journey has helped to transform operations from a traditional credit bureau into a sophisticated AI-powered platform company. Its approach—blending advanced machine learning (ML), agentic AI architectures and grassroots innovation—has improved business operations and expanded financial access to an estimated 26 million Americans. Experian’s AI journey contrasts sharply with companies that only began exploring machine learning after ChatGPT’s emergence in 2022. The credit giant has been methodically developing AI capabilities for nearly two decades, creating a foundation allowing it to capitalize on generative AI breakthroughs rapidly. “AI has been part of the fabric at Experian way beyond when it was cool to be in AI,” Shri Santhanam, EVP and GM, Software, Platforms and AI products at Experian, told VentureBeat in an exclusive interview. “We’ve used AI to unlock the power of our data to create a better impact for businesses and consumers for the past two decades.” From traditional machine learning to AI innovation engine Before the modern gen AI era, Experian was already using and innovating with ML. Santhanam explained that instead of relying on basic, traditional statistical models, Experian pioneered the use of Gradient-Boosted Decision Trees alongside other machine learning techniques for credit underwriting. The company also developed explainable AI systems—crucial for regulatory compliance in financial services—that could articulate the reasoning behind automated lending decisions. 
Most significantly, the Experian Innovation Lab (formerly Data Lab) experimented with language models and transformer
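The gradient-boosted decision trees the article credits Experian with pioneering for underwriting follow a simple additive idea: fit a weak learner (here a one-split "stump") to the current residuals, add it to the ensemble, and repeat. The toy sketch below illustrates that mechanism on made-up data; it is not Experian's model, and the "debt ratio" feature and targets are invented for illustration.

```python
# Toy illustration of the gradient-boosting idea behind GBDTs: repeatedly
# fit a one-split "stump" to the residuals under squared loss and add it
# to the ensemble. 1-D feature, invented data; nothing here is Experian's.

def fit_stump(xs, residuals):
    """Pick the threshold minimizing squared error, predicting the mean residual per side."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def boost(xs, ys, rounds=10, lr=0.5):
    """Build an additive ensemble of stumps fit to successive residuals."""
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for x, p in zip(xs, pred)]
    return lambda x: sum(lr * s(x) for s in stumps)

# Invented toy target: default risk rises sharply past a debt ratio of 0.5
xs = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
ys = [0.0, 0.0, 0.1, 0.1, 0.9, 0.9, 1.0, 1.0]
model = boost(xs, ys)
print(round(model(0.2), 2), round(model(0.8), 2))  # low risk vs. high risk
```

One reason stump-based ensembles suit regulated lending is that each split is individually inspectable, which is consistent with the explainability requirement the article describes.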

Read More »

New approach to agent reliability, AgentSpec, forces agents to follow rules

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More AI agents have a safety and reliability problem. Agents would allow enterprises to automate more steps in their workflows, but they can take unintended actions while executing a task, are not very flexible, and are difficult to control. Organizations have already sounded the alarm about unreliable agents, worried that once deployed, agents might forget to follow instructions.  OpenAI even admitted that ensuring agent reliability would involve working with outside developers, so it opened up its Agents SDK to help solve this issue.  But researchers from the Singapore Management University (SMU) have developed a new approach to solving agent reliability. AgentSpec is a domain-specific framework that lets users “define structured rules that incorporate triggers, predicates and enforcement mechanisms.” The researchers said AgentSpec will make agents work only within the parameters that users want. Guiding LLM-based agents with a new approach AgentSpec is not a new LLM but rather an approach to guide LLM-based AI agents. The researchers believe AgentSpec can be used not only for agents in enterprise settings but also for self-driving applications. The first AgentSpec tests were integrated with the LangChain framework, but the researchers said they designed it to be framework-agnostic, meaning it can also run on ecosystems such as AutoGen and Apollo.
Experiments using AgentSpec showed it prevented “over 90% of unsafe code executions, ensures full compliance in autonomous driving law-violation scenarios, eliminates hazardous actions in embodied agent tasks, and operates with millisecond-level overhead.” LLM-generated AgentSpec rules, produced with OpenAI’s o1, also performed strongly, enforcing 87% of risky code and preventing “law-breaking in 5 out of 8 scenarios.” Current methods are a little lacking AgentSpec is not the only method to help developers bring more control and reliability
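The trigger/predicate/enforcement structure the researchers describe can be sketched as a small rule engine. AgentSpec's actual DSL is not reproduced in the article, so the names, the `before_tool_call` trigger, and the rule shape below are all hypothetical, illustrating only the general pattern.

```python
# Hypothetical sketch of a trigger/predicate/enforcement rule in the
# spirit of AgentSpec. Names and structure are invented, not the real DSL.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    trigger: str                          # event that activates the rule
    predicate: Callable[[dict], bool]     # condition checked on the proposed action
    enforce: Callable[[dict], dict]       # transformation applied when the predicate fires

def apply_rules(rules, event, action):
    """Run every rule whose trigger matches the event against the action."""
    for rule in rules:
        if rule.trigger == event and rule.predicate(action):
            action = rule.enforce(action)
    return action

# Example: block any shell command that would delete files
block_deletes = Rule(
    trigger="before_tool_call",
    predicate=lambda a: a.get("tool") == "shell" and "rm " in a.get("cmd", ""),
    enforce=lambda a: {**a, "blocked": True},
)

risky = {"tool": "shell", "cmd": "rm -rf /tmp/x"}
result = apply_rules([block_deletes], "before_tool_call", risky)
print(result["blocked"])  # True
```

Because enforcement is an ordinary function of the action, rules like this can rewrite, block, or escalate an agent step, which matches the "work only within the parameters that users want" framing above.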

Read More »

Hands on with Gemini 2.5 Pro: why it might be the most useful reasoning model yet

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Unfortunately for Google, the release of its latest flagship language model, Gemini 2.5 Pro, got buried under the Studio Ghibli AI image storm that sucked the air out of the AI space. And perhaps fearful of its previous failed launches, Google cautiously presented it as “Our most intelligent AI model” instead of the approach of other AI labs, which introduce their new models as the best in the world. However, practical experiments with real-world examples show that Gemini 2.5 Pro is really impressive and might currently be the best reasoning model. This opens the way for many new applications and possibly puts Google at the forefront of the generative AI race.  Long context with good coding capabilities The outstanding feature of Gemini 2.5 Pro is its very long context window and output length. The model can process up to 1 million tokens (with 2 million coming soon), making it possible to fit multiple long documents and entire code repositories into the prompt when necessary. The model also has an output limit of 64,000 tokens instead of around 8,000 for other Gemini models.  The long context window also allows for extended conversations, as each interaction with a reasoning model can generate tens of thousands of tokens, especially if it involves code, images and video (I’ve run into this issue with Claude 3.7 Sonnet, which has a 200,000-token context window). For example, software engineer Simon Willison used Gemini 2.5 Pro to create a new feature for his website. Willison said in a blog, “It crunched through my entire codebase and figured out all of the places I needed to change—18 files in total, as you can see in the resulting PR.

Read More »

I asked an AI swarm to fill out a March Madness bracket — here’s what happened

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Imagine if a large team of 200 people could hold a thoughtful real-time conversation in which they efficiently brainstorm ideas, share knowledge, debate alternatives and quickly converge on AI-optimized solutions. Is this possible — and if so, would it amplify their collective intelligence? There is a new generative AI technology, conversational swarm intelligence (or simply hyperchat), that enables teams of potentially any size to engage in real-time conversations and quickly converge on AI-optimized solutions. To put this to the test, I asked the research team at Unanimous AI to bring together 50 random sports fans and task that large group with quickly creating a March Madness bracket through real-time conversational deliberation. Before I tell you how the experiment is going, I need to explain why we can’t just bring 50 people into a Zoom meeting and have them quickly create a bracket together. Research shows that the ideal size for a productive real-time conversation is only 4 to 7 people. In small groups, each individual gets a good amount of airtime to express their views and has low wait time to respond to others. But as group size grows, airtime drops, wait-time rises — and by a dozen people it devolves into a series of monologues. Above 20 people, it’s chaos.  So how can 50 people hold a conversation, or 250, or even 2,500?  Hyperchat works by breaking any large group into a set of parallel subgroups. It then adds an AI agent into each subgroup called a “conversational surrogate” tasked with distilling the human insights within its local group and quickly sharing those insights as natural dialog with other groups. These surrogate agents enable all the subgroups to overlap, weaving
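The topology described above — a large group partitioned into conversation-sized subgroups, each bridged by a surrogate agent that distills its group's discussion and relays it to the others — can be sketched as a partition plus a message relay. This is a purely illustrative sketch, not Unanimous AI's implementation; the subgroup size of five follows the 4-to-7-person figure cited in the article.

```python
# Minimal sketch of the hyperchat topology: partition N participants into
# subgroups of ~5, with one "surrogate" digest per subgroup relayed to all
# the others each round. Illustrative only; not Unanimous AI's system.

def partition(participants, size=5):
    """Split a large group into conversation-sized subgroups."""
    return [participants[i:i + size] for i in range(0, len(participants), size)]

def relay_round(subgroup_messages, summarize):
    """Each surrogate distills its subgroup, then shares the digest with every other subgroup."""
    digests = [summarize(msgs) for msgs in subgroup_messages]
    return [
        [d for j, d in enumerate(digests) if j != i]   # what subgroup i hears
        for i in range(len(digests))
    ]

fans = [f"fan{i}" for i in range(50)]
groups = partition(fans)
print(len(groups), len(groups[0]))  # 10 subgroups of 5
```

Keeping each subgroup at conversational scale while the surrogates weave the groups together is the mechanism that lets airtime stay high even as total group size grows.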

Read More »

Trump Vents Anger at Putin Over Ukraine, Hints at Oil Curbs

President Donald Trump said he was “very angry” at Vladimir Putin and threatened “secondary tariffs” on buyers of his country’s oil if the Russian leader refuses a ceasefire with Ukraine. In comments reported by NBC News, Trump said he was “pissed off” at Putin for casting doubt on Ukrainian President Volodymyr Zelenskiy’s legitimacy as a negotiating partner, and threatened curbs on “all oil coming out of Russia.” He later added that he didn’t think the Russian president would “go back on his word.”  While the US president appeared to temper his remarks, the threats mark a significant change of tone for Washington and suggest a possible souring in relations with his Russian counterpart over the pace of ceasefire talks. Before taking office, Trump said he could resolve the war quickly, but the conflict rages on more than two months later.  “I certainly wouldn’t want to put secondary tariffs on Russia,” Trump later clarified in comments to reporters on Air Force One, adding he was “disappointed” with some of Putin’s recent comments on Zelenskiy. “He’s supposed to be making a deal with him, whether you like him or don’t like him. So I wasn’t happy with that. But I think he’s going to be good.” Trump’s frustration was sparked by comments Putin made on Friday that implicitly challenged Zelenskiy’s legitimacy by proposing the United Nations should take over Ukraine with a temporary government overseen by the US and possibly even some European countries.  The Kremlin on Monday said that Putin remained open to contacts with Trump.  “If necessary, their conversation will be organized very quickly,” spokesman Dmitry Peskov told reporters, according to the state-run Tass news agency, though he said no call had been scheduled yet. Peskov also said that Russia was continuing to work with the US to build bilateral

Read More »

Emergence AI’s new system automatically creates AI agents in real time based on the work at hand

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Another day, another announcement about AI agents. Hailed by various market research reports as the big tech trend in 2025 — especially in the enterprise — it seems we can’t go more than 12 hours or so without the debut of another way to make, orchestrate (link together), or otherwise optimize purpose-built AI tools and workflows designed to handle routine white-collar work. Yet Emergence AI, a startup founded by former IBM Research veterans that late last year debuted its own cross-platform AI agent orchestration framework, is out with something novel: an AI agent creation platform that lets the human user specify, via text prompts, what work they are trying to accomplish, then turns it over to AI models to create the agents they believe are necessary to accomplish that work. This new system is a no-code, natural-language, AI-powered multi-agent builder, and it works in real time. Emergence AI describes it as a milestone in recursive intelligence that aims to simplify and accelerate complex data workflows for enterprise users. “Recursive intelligence paves the path for agents to create agents,” said Satya Nitta, co-founder and CEO of Emergence AI. “Our systems allow creativity and intelligence to scale fluidly, without human bottlenecks, but always within human-defined boundaries.” Image of Dr. Satya Nitta, co-founder and CEO of Emergence AI, during his keynote at the AI Engineer World’s Fair 2024, where he unveiled Emergence’s Orchestrator meta-agent and introduced the open-source web agent, Agent-E. (Photo courtesy AI Engineer World’s Fair.) The platform is designed to evaluate incoming tasks, check its existing agent registry, and, if necessary, autonomously generate new agents tailored to fulfill specific enterprise needs. It can
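The evaluate-task, check-registry, generate-if-missing loop described above can be sketched as follows. This is a hedged illustration of the general pattern, not Emergence AI's actual platform: the `Agent`, `AgentRegistry`, and `create_agent` names are hypothetical, and `create_agent` stands in for what would be an LLM-driven step that writes and deploys a new agent.

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    capability: str

    def run(self, task):
        return f"{self.name} handled: {task}"


@dataclass
class AgentRegistry:
    """Registry consulted before any new agent is generated."""
    agents: dict = field(default_factory=dict)

    def find(self, capability):
        return self.agents.get(capability)

    def register(self, agent):
        self.agents[agent.capability] = agent


def create_agent(capability):
    # Placeholder for an LLM call that generates a purpose-built agent.
    return Agent(name=f"auto_{capability}_agent", capability=capability)


def handle_task(task, capability, registry):
    """Evaluate the task, reuse a registered agent if one covers the
    needed capability, otherwise generate and register a new one."""
    agent = registry.find(capability)
    if agent is None:
        agent = create_agent(capability)
        registry.register(agent)
    return agent.run(task)


registry = AgentRegistry()
handle_task("dedupe customer records", "data_cleaning", registry)  # creates agent
handle_task("dedupe supplier records", "data_cleaning", registry)  # reuses it
```

The registry check is what keeps the recursion bounded: new agents are only minted when no existing one covers the requested capability, which is one way to keep "agents creating agents" within human-defined limits.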

Read More »

Where battery and hydrogen-powered trains are coming to US commuter rail

As U.S. transit agencies increasingly order buses powered by batteries or hydrogen fuel cells, some of these same agencies are beginning to look at trains that use similar technologies. Stadler, an international train manufacturer, already has trains in testing and on order in two states, while other manufacturers of such trains operating in Canada and Europe are eyeing U.S. opportunities, too. California puts Stadler hydrogen trains to the test California announced a $310 billion plan in January to develop a zero-emission passenger rail network across much of the state by 2050. A hydrogen-powered passenger train built by Stadler, a Swiss company, began testing on San Bernardino County’s Metrolink commuter line between San Bernardino and Redlands, California, in November. The San Bernardino County Transportation Authority expects the train to go into regular service this year. “We’re confident that once that train goes into revenue service soon, that we’ll see a lot of positive feedback,” said Stadler’s Martin Ritter, executive vice president for North America. Ritter said California signed a contract with Stadler to provide up to 29 hydrogen fuel cell trains; it had ordered 10 as of a year ago. The state is bundling the procurement contract and will assign trains to different transit agencies, he said. Prior to its arrival in California, the SBCTA hydrogen train underwent testing at the Ensco Transportation Technology Center in Pueblo, Colorado. During that process, the train set a Guinness World Record for traveling 1,741.7 miles around a test loop without refueling or recharging.  Ritter said zero-emission trains are quieter and produce fewer vibrations than conventional fuel trains as they speed through communities along the line. He noted that the only byproduct of a fuel cell train is water vapor. Electric trains and streetcars have existed for more than a century. Passenger railroads like the

Read More »

EPA denies harm from GGRF freeze in court filing

The U.S. Environmental Protection Agency filed a motion Wednesday opposing motions for injunctive relief filed by three nonprofits that have had their access to Greenhouse Gas Reduction Fund grant money frozen, arguing that their monetary harm does not warrant an injunction and is not irreparable. The nonprofit Climate United Fund, which received a $6.97 billion National Clean Investment Fund grant, was the first to sue over the frozen funds last month, targeting EPA and fund holder Citibank. The Coalition for Green Capital, which received $5 billion from the NCIF, and Power Forward Communities, which received $2 billion from it, have each filed lawsuits against Citibank. EPA argued for the injunction requests filed by each to be denied, as “an injunction should be denied when Plaintiffs’ alleged harms are monetary and may be remedied by damages” and “in terminating Plaintiffs’ grants, EPA has not prohibited or made it unlawful for Plaintiffs (or their subgrantees) to carry out their work.” “Nor has any other government action,” EPA said. “The government is not preventing Plaintiffs from providing services; EPA has just terminated the contracts under which the government would provide reimbursement for those services.” In a joint response filed Friday, the three plaintiffs argued that they have already “demonstrated several forms of irreparable harm, including potentially fatal disruption to Plaintiffs’ operations; irreplaceable loss of clients, partnerships, and opportunities; devastating reputational injury; interference with Plaintiffs’ missions; and an immediate risk of insolvency for some of the Plaintiffs and their subgrantees.” “Many of these injuries have already materialized and will worsen if Plaintiffs continue to be deprived of access to their funds,” they said. The plaintiffs argue that the U.S.
District Court for the District of Columbia, where the case is being heard, has previously held that financial harm can constitute irreparable harm when the existence

Read More »

FERC review of PJM colocation rules for data centers, large loads may extend past mid-year: analysts

The PJM Interconnection’s response to the Federal Energy Regulatory Commission’s investigation into the grid operator’s rules for colocated loads indicates FERC may not approve new regulations by mid-year, as some people initially thought, according to utility-sector analysts. FERC on Feb. 20 launched a review of issues related to colocating large loads, such as data centers, at power plants in PJM’s footprint. The outcome of the review could set a precedent for colocated load in the power markets FERC oversees. Talen Energy, Constellation Energy and PSEG Power, a Public Service Enterprise Group subsidiary, are among the companies that are considering hosting data centers at their nuclear power plants in PJM. In its “show cause” order, FERC asked PJM and stakeholders to explain why the grid operator’s colocation rules are just and reasonable or to offer rules that would pass agency muster. FERC established a comment schedule that enables the agency to issue a response by June 20. The agency said it could make a decision on a PJM proposal within three months. However, instead of proposing new colocation rules, PJM on March 24 said its existing rules are just and reasonable. The grid operator also offered five conceptual colocation options that have been proposed by stakeholders or developed by PJM. PJM urged FERC to issue “detailed guiding principles” that the grid operator could use to craft colocation rules for the agency’s approval. The lack of a proposal from PJM likely extends FERC’s review process, according to analysts. “FERC may still act on the show cause order in June, but we don’t rule out a new iteration of process instead of a clear policy decision,” ClearView Energy Partners analysts said in a client note on Friday. It will likely take FERC until late this year to approve changes to PJM’s colocation rules,

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on 1 week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE