Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Featured Articles

HPE cuts 2,500 workers, expects Juniper buy to close end of ’25, faces tariff issues

AI systems backlog rose 29% quarter over quarter to $3.1 billion, and total server revenue totaled $4.29 billion, CFO Marie Myers said. The company reported Intelligent Edge revenue was down 5% from the prior-year period to $1.1 billion, while Hybrid Cloud revenue was $1.4 billion, up 10% from the prior-year period.

Then there’s the matter of HPE’s proposed $14 billion acquisition of Juniper Networks, now being held up by the U.S. Justice Department. A trial has been set for July 9, CEO Antonio Neri said. “The DOJ analysis of the market is fundamentally flawed. We strongly believe this transaction will positively change the dynamics in the networking market by enhancing competition. HPE and Juniper remain fully committed to the transaction, which we expect will deliver at least $450 million in gross annual run rate synergies to shareholders within three years of the deal closing,” Neri said. “We believe we have a compelling case and expect to be able to close the transaction before the end of fiscal 2025.”

Like other industry players, HPE is trying to navigate the tariffs that the U.S. has threatened or implemented against China, Mexico, and other countries.

Read More »

Mistral releases new optical character recognition (OCR) API claiming top performance globally

Well-funded French AI startup Mistral is content to go its own way. In a sea of competing reasoning models, the company today introduced Mistral OCR, a new optical character recognition (OCR) API designed to provide advanced document understanding capabilities. The API extracts content, including handwritten notes, typed text, images, tables, and equations, from unstructured PDFs and images with high accuracy, presenting it in a structured format.

Structured data is information organized in a predefined manner, typically using rows and columns, making it easy to search and analyze. Common examples include names, addresses, and financial transactions stored in databases or spreadsheets. In contrast, unstructured data lacks a specific format or structure, making it more challenging to process and analyze. This category encompasses a wide range of data types, such as emails, social media posts, videos, images, and audio files. Since unstructured data doesn’t fit neatly into traditional databases, specialized tools and techniques, like natural language processing and machine learning, are often employed to extract meaningful insights from it. Understanding the distinction between these data types is crucial for businesses aiming to manage and leverage their information assets effectively.

With multilingual support, fast processing speeds, and integration with large language models for document understanding, Mistral OCR is positioned to help organizations make their documentation AI-ready. Given that, according to Mistral’s blog post announcing the new API, 90% of all business information is unstructured, the new API should be a huge boon to organizations seeking to digitize and catalog their data for use in AI applications or internal and external knowledge bases.
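The structured-versus-unstructured distinction above is easy to see in miniature: free text becomes useful once its fields are pulled into rows. The snippet below is a hypothetical illustration in Python, not Mistral’s API; the regex and field names are invented for the example.

```python
import re

# Unstructured: facts buried in free text, hard to query directly.
unstructured = "Paid $42.50 to Alice on 2025-03-01. Paid $7.25 to Bob on 2025-03-02."

# Structured: the same facts as rows with named fields, ready for a
# database or spreadsheet.
rows = [
    {"payee": m.group(2), "amount": float(m.group(1)), "date": m.group(3)}
    for m in re.finditer(r"Paid \$([\d.]+) to (\w+) on (\d{4}-\d{2}-\d{2})", unstructured)
]
```

An OCR pipeline like the one described does this at document scale, emitting text blocks, tables, and equations instead of regex captures.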
A new gold standard for OCR

Mistral OCR aims to improve how organizations process and analyze complex documents. Unlike traditional OCR solutions that primarily

Read More »

Oil Recovers on Trade Tariff Reprieve

Oil eked out a marginal gain after a session of whipsawing as US President Donald Trump moved to delay tariffs on imports from Canada and Mexico. West Texas Intermediate settled little changed above $66 a barrel, snapping a four-day losing streak by a hair. Brent edged up slightly to top $69 after touching its lowest since late 2021 on Wednesday.

Diverging supply signals sparked price fluctuations on Thursday. On the one hand, Trump’s tariff threats have prompted some analysts to reconsider how low crude may tumble if the trade wars weigh on economic growth and energy demand. On the other, the prospect of levies has also been interpreted as somewhat supportive for WTI prices, as the benchmark may see increased demand to replace diverted Canadian and Mexican supplies.

Trump said that he’ll defer tariffs on Mexico and Canada for all goods covered by the North American trade agreement known as USMCA, which includes energy. Commerce Secretary Howard Lutnick telegraphed the decision earlier in the day, saying Trump was weighing the move. The White House estimates that 62% of Canadian imports will still be subject to the tariffs, most of them energy products tariffed at a 10% rate, along with half of goods coming from Mexico.

WTI’s prompt spread (the difference between its two nearest contracts) narrowed to 36 cents following the tariff reprieve, a sign of potentially looser market conditions. Canadian heavy crudes rallied on the tariff postponement. Futures have tumbled since mid-January as Trump’s trade policies rattle global markets and America’s neighbors ready countermeasures. OPEC+ also surprised markets with plans to start reviving idled production in April, adding to the bearish headwinds. Brent futures fell into oversold territory for the first time since September based on one technical gauge. The term implies the recent
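The prompt spread mentioned in the piece is simple arithmetic: the front-month contract price minus the next contract’s price. A minimal sketch with illustrative prices (not actual quotes):

```python
def prompt_spread(front_month: float, second_month: float) -> float:
    """Difference between the two nearest futures contracts, in dollars.
    A positive value (backwardation) signals tighter near-term supply;
    a narrowing spread suggests looser market conditions."""
    return round(front_month - second_month, 2)

# Illustrative prices only; a 36-cent spread matches the figure reported.
spread = prompt_spread(66.36, 66.00)
```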

Read More »

Tanzania Plans to Kick Off 5th Oil, Gas Licensing Round in May

Tanzania aims to start a licensing round for dozens of oil and gas exploration blocks in May, the first in more than a decade for the nation, which holds an estimated 57 trillion cubic feet of natural gas reserves. Three of the 26 blocks are in Lake Tanganyika and the rest in the Indian Ocean. The country’s last licensing round was in May 2014.

“We are proceeding with promotion activities because the blocks have already been identified and the data is in place. We are waiting for government approval of the Model Production Sharing Agreement, which outlines the fiscal terms,” said Charles Sangweni, director general of the Petroleum Upstream Regulatory Authority. “Our plan is to launch during the Africa Energy Summit in London from 13th to 15th May.”

Tanzania already produces natural gas, which it uses to generate electricity, and plans a $42 billion liquefied natural gas facility to be built by a consortium comprising Shell Plc, Equinor ASA and Exxon Mobil Corp. That long-delayed plan is still under negotiation as Tanzania’s government is “trying to align just a few key outstanding issues,” Sangweni said in an interview in Dar es Salaam. “An agreement is coming, that’s my hope. When, I can’t tell you.”

Read More »

Anthropic just launched a new platform that lets everyone in your company collaborate on AI — not just the tech team

Anthropic launched a significant overhaul of its developer platform today, introducing team collaboration features and extended reasoning capabilities for its Claude AI assistant that aim to solve major pain points for organizations implementing artificial intelligence solutions. The upgraded Anthropic Console now enables cross-functional teams to collaborate on AI prompts (the text instructions that guide AI models) while also supporting the company’s latest Claude 3.7 Sonnet model with new controls for complex problem-solving.

“We built our shareable prompts to help our customers and developers work together effectively on prompt development,” an Anthropic spokesperson told VentureBeat. “What we learned from talking to customers was that prompt creation rarely happens in isolation. It’s a team effort involving developers, subject matter experts, product managers, and QA folks all trying to get the best results.”

The move addresses a growing challenge for enterprises adopting AI: coordinating prompt engineering work across technical and business teams. Before this update, companies often resorted to sharing prompts through documents or messaging apps, creating version control issues and knowledge silos.

How Claude’s new thinking controls balance advanced AI power with budget-friendly cost management

The updated platform also introduces “extended thinking controls” for Claude 3.7 Sonnet, allowing developers to specify when the AI should use deeper reasoning while setting budget limits to control costs. “Claude 3.7 Sonnet gives you two modes in one model: standard mode for quick responses and extended thinking mode when you need deeper problem-solving,” the spokesperson told VentureBeat. “In extended thinking mode, Claude takes time to work through problems step-by-step, similar to how humans approach complex challenges.” This dual approach helps companies balance performance with expenditure, a key consideration as AI implementation costs come under greater scrutiny amid widespread adoption.
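The budgeted extended-thinking control described above maps onto a request payload in a straightforward way. This sketch builds the payload as a plain dict; the model identifier and the `thinking` field follow Anthropic’s published Messages API at the time of writing, and should be treated as assumptions rather than a definitive reference.

```python
def build_request(prompt: str, thinking_budget_tokens=None) -> dict:
    """Assemble a Messages API payload, enabling extended thinking
    only when a reasoning-token budget is supplied."""
    payload = {
        "model": "claude-3-7-sonnet-20250219",  # assumed model identifier
        "max_tokens": 4096,
        "messages": [{"role": "user", "content": prompt}],
    }
    if thinking_budget_tokens is not None:
        # Extended thinking mode, with a token cap to control cost.
        payload["thinking"] = {"type": "enabled", "budget_tokens": thinking_budget_tokens}
    return payload

# Standard mode: no thinking block. Extended mode: reasoning capped at 2,048 tokens.
quick = build_request("Summarize this memo.")
deep = build_request("Plan a database migration.", thinking_budget_tokens=2048)
```

Omitting the budget keeps the model in standard mode, which is the cost lever the article describes.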

Read More »

‘EPA’s action is illegal,’ says green group attorney about $20B GGRF funding freeze

An attorney representing the Climate United Fund sent a Tuesday letter to the Environmental Protection Agency asking the agency to immediately reinstate the fund’s access to its nearly $7 billion Greenhouse Gas Reduction Fund grant, calling the freeze illegal.

“As already explained, Climate United’s preferred path forward is direct communication to find common ground,” said Adam Unikowsky, a partner at Jenner & Block. “But if the EPA adheres to its decision to suspend or terminate Climate United’s grant, it should stay its decision pending judicial review …. Climate United is likely to succeed in showing that the EPA’s action is illegal.”

Review of the matter is urgent, Unikowsky said, as Climate United faces “immediate, irreparable harm” if the funding is not reinstated. If the group can’t find another source of funding, it will “shortly” run out of cash for operating expenses, employee pay, rent for some offices, and payments to contractors who provide services such as IT and legal support, the letter said. In addition, Climate United won’t be able to meet its commitments for already approved loans and awards, which “would cause profound harm to its subawardees,” Unikowsky alleged. With its GGRF funding, Climate United has so far made investments that include a $10.8 million pre-development loan for utility-scale solar projects on tribal lands in eastern Oregon and Idaho and $250 million toward electric truck manufacturing.

EPA Administrator Lee Zeldin commented on the freeze Wednesday, saying that his team had “uncovered extensive troubling developments with $20 billion in ‘gold bars’ that the Biden EPA ‘tossed off the Titanic.’” “These taxpayer funds were parked at an outside financial institution to rush out the door and circumvent proper oversight; $20 billion was given to just eight pass-through nongovernmental entities in an effort riddled with self-dealing, conflicts of interest, and an extreme

Read More »

Enfinium unveils next phase of CCS programme

UK energy-from-waste operator Enfinium has announced the next phase of its carbon capture and storage (CCS) pilot programme. As part of the new phase, the company will relocate the CCS pilot plant currently in place at its Ferrybridge 1 facility in West Yorkshire to Parc Adfer, North Wales, in April. The pilot plant will be installed and operated by clean technology company Kanadevia Inova.

A new pilot plant will then be installed at Ferrybridge by UK technology company Nuada, which is in the process of scaling an innovative metal-organic framework (MOF) technology that captures carbon dioxide (CO2) from point sources through a vacuum swing process. Both pilot projects will run for at least six months as part of Enfinium’s plan to deploy CCS across all six of its UK facilities at a total cost of around £1.7 billion.

Enfinium noted that the plant at Parc Adfer would be the only active carbon capture pilot in Wales and the first pilot to be deployed within the wider HyNet industrial cluster. The Parc Adfer facility is also a candidate for grant support through the UK government’s Track-1 HyNet Expansion programme. Enfinium’s announcement indicated that the company was hoping for a positive decision on this from the government in the coming months.

The CCS pilot currently operating at Ferrybridge was launched in September 2024, becoming the first pilot project of its kind. The pilot entailed use of a containerised technology that Enfinium said at the time was a scaled-down version of CCS technology that could subsequently be deployed across all of its sites. The technology was supplied by green technology player Hitachi Zosen Inova (HZI) and was being used to capture 1 tonne per day of CO2 emissions at the site. At the time, Enfinium said that trial would run for at

Read More »

Basin Electric urges Congress to support clean energy tax credits

Congress should maintain Inflation Reduction Act clean energy tax credits to provide utilities with certainty about their investment decisions, Todd Brickhouse, CEO and general manager of Basin Electric Power Cooperative, said during a House hearing on Wednesday. “The immediate removal of [the production tax credit] will not allow utilities to plan for and avoid increased costs, and this will also immediately harm ratepayers,” Brickhouse said during a hearing held by the Energy and Commerce Committee’s subcommittee on energy on the challenges of responding to rising demand growth.

Basin Electric, a generation and transmission wholesale cooperative based in Bismarck, North Dakota, is building 1,500 MW of solar, partly based on the assumption that the capacity would be eligible for PTCs, according to Brickhouse. Congressional Republicans are looking for ways to trim federal spending to pay for their budget plans, potentially including changes to tax credit provisions contained in the Inflation Reduction Act.

Rep. Mariannette Miller-Meeks, R-Iowa, said the IRA’s tax credits can help spur the buildout of energy infrastructure to meet growing electric demand. “Tax incentives like the tech-neutral clean energy credits under [sections] 45Y and 48E, and the 45Q carbon sequestration credit, and the 45X advanced manufacturing credit aim to strengthen American manufacturing capability and reduce the engineering, procurement and construction risks that have plagued major energy projects,” Miller-Meeks said. She joined 17 other House Republicans in an Aug. 6 letter to House Speaker Mike Johnson, R-La., supporting the IRA’s tax credits.

Those tax credits are “incredibly helpful in ensuring that we can get those projects built and online in a manner that’s affordable for our customers,” said Noel Black, senior vice president of federal regulatory affairs for Southern Co., which owns utilities in the Southeast.
Renewable energy can help meet electricity demand, in part because it can be built relatively quickly, according

Read More »

USA Crude Oil Inventories Rise WoW

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), increased by 3.6 million barrels from the week ending February 21 to the week ending February 28, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report. This report was released on March 5 and included data for the week ending February 28. The report showed that crude oil stocks, not including the SPR, stood at 433.8 million barrels on February 28, 430.2 million barrels on February 21, and 448.5 million barrels on March 1, 2024. Crude oil in the SPR stood at 395.3 million barrels on February 28 and February 21, and 361.0 million barrels on March 1, 2024, the report outlined. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.600 billion barrels on February 28, the report showed. Total petroleum stocks were down 4.6 million barrels week on week and up 16.8 million barrels year on year, the report revealed. “At 433.8 million barrels, U.S. crude oil inventories are about four percent below the five year average for this time of year,” the EIA stated in its latest weekly petroleum status report. “Total motor gasoline inventories decreased by 1.4 million barrels from last week and are one percent above the five year average for this time of year. Finished gasoline inventories increased, while blending components inventories decreased last week,” it added. “Distillate fuel inventories decreased by 1.3 million barrels last week and are about six percent below the five year average for this time of year. Propane/propylene inventories decreased by 2.9 million barrels from last week and are four percent below the five year average for this time of year,”

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More 2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. 
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »

Three Aberdeen oil company headquarters sell for £45m

Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell them as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024. Trio of buildings snapped up London-headquartered EEH Ventures was founded in 2013 and owns a number of residential properties, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. © Supplied by CBRE. The Aberdeen headquarters of Taqa. Image: CBRE. The North Sea headquarters of Middle East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year that it is planning to exit the North Sea by the end of 2029, blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030. Aberdeen big deals The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. © Shutterstock. Aberdeen city centre. Hammerson, which also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

Read More »

2025 ransomware predictions, trends, and how to prepare

Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks. Top Ransomware Predictions for 2025: ● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices, which sound ever more realistic by adopting local accents and dialects to enhance credibility and success rates. ● The Trifecta of Social Engineering Attacks: Vishing, Ransomware and Data Exfiltration. Additionally, sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks, preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny. ● Targeted Industries Under Siege: Manufacturing, healthcare, education, and energy will remain primary targets, with no slowdown in attacks expected. ● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days. ● Ransomware Payouts Are on the Rise: In 2025, ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups specializing in designated attack tactics, and collaboration among groups that have adopted a sophisticated profit-sharing model using Ransomware-as-a-Service. To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies.
● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats. ● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops

Read More »

Mistral releases new optical character recognition (OCR) API claiming top performance globally

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Well-funded French AI startup Mistral is content to go its own way. In a sea of competing reasoning models, the company today introduced Mistral OCR, a new Optical Character Recognition (OCR) API designed to provide advanced document understanding capabilities. The API extracts content—including handwritten notes, typed text, images, tables, and equations—from unstructured PDFs and images with high accuracy, presenting it in a structured format. Structured data is information that is organized in a predefined manner, typically using rows and columns, making it easy to search and analyze. Common examples include names, addresses, and financial transactions stored in databases or spreadsheets. In contrast, unstructured data lacks a specific format or structure, making it more challenging to process and analyze. This category encompasses a wide range of data types, such as emails, social media posts, videos, images, and audio files. Since unstructured data doesn’t fit neatly into traditional databases, specialized tools and techniques, like natural language processing and machine learning, are often employed to extract meaningful insights from it. Understanding the distinction between these data types is crucial for businesses aiming to effectively manage and leverage their information assets. With multilingual support, fast processing speeds, and integration with large language models for document understanding, Mistral OCR is positioned to assist organizations in making their documentation AI-ready. Given that, according to Mistral’s blog post announcing the new API, 90% of all business information is unstructured, the new API should be a huge boon to organizations seeking to digitize and catalog their data for use in AI applications or internal/external knowledge bases.
A new gold standard for OCR Mistral OCR aims to improve how organizations process and analyze complex documents. Unlike traditional OCR solutions that primarily

Read More »

Anthropic just launched a new platform that lets everyone in your company collaborate on AI — not just the tech team

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Anthropic has launched a significant overhaul to its developer platform today, introducing team collaboration features and extended reasoning capabilities for its Claude AI assistant that aim to solve major pain points for organizations implementing artificial intelligence solutions. The upgraded Anthropic Console now enables cross-functional teams to collaborate on AI prompts—the text instructions that guide AI models—while also supporting the company’s latest Claude 3.7 Sonnet model with new controls for complex problem-solving. “We built our shareable prompts to help our customers and developers work together effectively on prompt development,” an Anthropic spokesperson told VentureBeat. “What we learned from talking to customers was that prompt creation rarely happens in isolation. It’s a team effort involving developers, subject matter experts, product managers, and QA folks all trying to get the best results.” The move addresses a growing challenge for enterprises adopting AI: coordinating prompt engineering work across technical and business teams. Before this update, companies often resorted to sharing prompts through documents or messaging apps, creating version control issues and knowledge silos. How Claude’s new thinking controls balance advanced AI power with budget-friendly cost management The updated platform also introduces “extended thinking controls” for Claude 3.7 Sonnet, allowing developers to specify when the AI should use deeper reasoning while setting budget limits to control costs. “Claude 3.7 Sonnet gives you two modes in one model: standard mode for quick responses and extended thinking mode when you need deeper problem-solving,” the spokesperson told VentureBeat. 
“In extended thinking mode, Claude takes time to work through problems step-by-step, similar to how humans approach complex challenges.” This dual approach helps companies balance performance with expenditure—a key consideration as AI implementation costs come under greater scrutiny amid widespread adoption.

Read More »

How to Spot and Prevent Model Drift Before it Impacts Your Business

Despite the AI hype, many tech companies still rely heavily on machine learning to power critical applications, from personalized recommendations to fraud detection. 

I’ve seen firsthand how undetected drifts can result in significant costs — missed fraud detection, lost revenue, and suboptimal business outcomes, just to name a few. So, it’s crucial to have robust monitoring in place if your company has deployed or plans to deploy machine learning models into production.

Undetected Model Drift can lead to significant financial losses, operational inefficiencies, and even damage to a company’s reputation. To mitigate these risks, it’s important to have effective model monitoring, which involves:

Tracking model performance

Monitoring feature distributions

Detecting both univariate and multivariate drifts

A well-implemented monitoring system can help identify issues early, saving considerable time, money, and resources.

In this comprehensive guide, I’ll provide a framework on how to think about and implement effective model monitoring, helping you stay ahead of potential issues and ensure the stability and reliability of your models in production.

What’s the difference between feature drift and score drift?

Score drift refers to a gradual change in the distribution of model scores. If left unchecked, this could lead to a decline in model performance, making the model less accurate over time.

On the other hand, feature drift occurs when one or more features experience changes in the distribution. These changes in feature values can affect the underlying relationships that the model has learned, and ultimately lead to inaccurate model predictions.

Simulating score shifts

To model real-world fraud detection challenges, I created a synthetic dataset with five financial transaction features.

The reference dataset represents the original distribution, while the production dataset introduces shifts to simulate an increase in high-value transactions without PIN verification on newer accounts, indicating an increase in fraud.

Each feature has different underlying distributions:

Transaction Amount: Log-normal distribution (right-skewed with a long tail)

Account Age (months): Clipped normal distribution between 0 and 60 (assuming a 5-year-old company)

Time Since Last Transaction: Exponential distribution

Transaction Count: Poisson distribution

Entered PIN: Binomial distribution

To approximate model scores, I randomly assigned weights to these features and applied a sigmoid function to constrain predictions between 0 and 1. This mimics how a logistic regression fraud model generates risk scores.
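As a minimal sketch of this setup (the sample size, distribution parameters, shift magnitudes, and weights below are illustrative stand-ins, not the exact values behind the article's plots):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 10_000

def make_dataset(shift=False):
    """Generate synthetic transactions; shift=True mimics the production drift."""
    return pd.DataFrame({
        # Log-normal amounts (right-skewed); production sees higher-value transactions
        "transaction_amount": rng.lognormal(5.5 if shift else 5.0, 0.8, n),
        # Clipped normal between 0 and 60; production skews towards newer accounts
        "account_age_in_months": np.clip(rng.normal(18 if shift else 30, 12, n), 0, 60),
        # Exponential; intentionally left unchanged (the stable feature)
        "time_since_last_transaction": rng.exponential(5.0, n),
        # Poisson transaction counts
        "transaction_count": rng.poisson(4 if shift else 3, n),
        # Binomial; production has fewer PIN-verified transactions
        "entered_pin": rng.binomial(1, 0.5 if shift else 0.8, n),
    })

def score(df, weights):
    # Weighted sum of standardized features squashed through a sigmoid,
    # mimicking a logistic-regression fraud model's risk score in (0, 1)
    z = (df - df.mean()) / df.std()
    return 1 / (1 + np.exp(-(z.values @ weights)))

weights = rng.normal(size=5)           # randomly assigned feature weights
ref_data = make_dataset(shift=False)   # reference distribution
prod_data = make_dataset(shift=True)   # shifted production distribution
ref_data["model_score"] = score(ref_data, weights)
prod_data["model_score"] = score(prod_data, weights)
```

Because the same weights score both datasets, any change in the score distribution is driven entirely by the feature shifts.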

As shown in the plot below:

Drifted features: Transaction Amount, Account Age, Transaction Count, and Entered PIN all experienced shifts in distribution, scale, or relationships.

Distribution of drifted features (image by author)

Stable feature: Time Since Last Transaction remained unchanged.

Distribution of stable feature (image by author)

Drifted scores: As a result of the drifted features, the distribution in model scores has also changed.

Distribution of model scores (image by author)

This setup allows us to analyze how feature drift impacts model scores in production.

Detecting model score drift using PSI

To monitor model scores, I used population stability index (PSI) to measure how much model score distribution has shifted over time.

PSI works by binning continuous model scores and comparing the proportion of scores in each bin between the reference and production datasets. It compares the differences in proportions and their logarithmic ratios to compute a single summary statistic to quantify the drift.

Python implementation:

import numpy as np

# Define function to calculate PSI given two datasets
def calculate_psi(reference, production, bins=10):
    # Discretize scores into bins
    min_val, max_val = 0, 1
    bin_edges = np.linspace(min_val, max_val, bins + 1)

    # Calculate proportions in each bin
    ref_counts, _ = np.histogram(reference, bins=bin_edges)
    prod_counts, _ = np.histogram(production, bins=bin_edges)

    ref_proportions = ref_counts / len(reference)
    prod_proportions = prod_counts / len(production)

    # Avoid division by zero
    ref_proportions = np.clip(ref_proportions, 1e-8, 1)
    prod_proportions = np.clip(prod_proportions, 1e-8, 1)

    # Calculate PSI for each bin
    psi = np.sum((ref_proportions - prod_proportions) * np.log(ref_proportions / prod_proportions))

    return psi

# Calculate PSI
psi_value = calculate_psi(ref_data['model_score'], prod_data['model_score'], bins=10)
print(f"PSI Value: {psi_value}")

Below is a summary of how to interpret PSI values:

PSI < 0.1: No drift, or very minor drift (distributions are almost identical).

0.1 ≤ PSI < 0.25: Some drift. The distributions are somewhat different.

0.25 ≤ PSI < 0.5: Moderate drift. A noticeable shift between the reference and production distributions.

PSI ≥ 0.5: Significant drift. There is a large shift, indicating that the distribution in production has changed substantially from the reference data.
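These rule-of-thumb bands are easy to encode in a small helper for automated alerting (a sketch using the cutoffs listed above; teams often tune these to their own risk tolerance):

```python
def interpret_psi(psi: float) -> str:
    """Map a PSI value to the rule-of-thumb drift band."""
    if psi < 0.1:
        return "no or very minor drift"
    if psi < 0.25:
        return "some drift"
    if psi < 0.5:
        return "moderate drift"
    return "significant drift"
```

For example, `interpret_psi(0.6374)` returns `"significant drift"`.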

Histogram of model score distributions (image by author)

The PSI value of 0.6374 suggests a significant drift between our reference and production datasets. This aligns with the histogram of model score distributions, which visually confirms the shift towards higher scores in production — indicating an increase in risky transactions.

Detecting feature drift

Kolmogorov-Smirnov test for numeric features

The Kolmogorov-Smirnov (K-S) test is my preferred method for detecting drift in numeric features, because it is non-parametric, meaning it doesn’t assume a normal distribution.

The test compares a feature’s distribution in the reference and production datasets by measuring the maximum difference between the empirical cumulative distribution functions (ECDFs). The resulting K-S statistic ranges from 0 to 1:

0 indicates no difference between the two distributions.

Values closer to 1 suggest a greater shift.

Python implementation:

from scipy.stats import ks_2samp
import pandas as pd

# Create an empty dataframe
ks_results = pd.DataFrame(columns=['Feature', 'KS Statistic', 'p-value', 'Drift Detected'])

# Loop through all features and perform the K-S test
for col in numeric_cols:
    ks_stat, p_value = ks_2samp(ref_data[col], prod_data[col])
    drift_detected = p_value < 0.05

    # Store results in the dataframe
    ks_results = pd.concat([
        ks_results,
        pd.DataFrame({
            'Feature': [col],
            'KS Statistic': [ks_stat],
            'p-value': [p_value],
            'Drift Detected': [drift_detected]
        })
    ], ignore_index=True)

Below are ECDF charts of the four numeric features in our dataset:

ECDFs of four numeric features (image by author)

Let’s look at the account age feature as an example: the x-axis represents account age (0-50 months), while the y-axis shows the ECDF for both reference and production datasets. The production dataset skews towards newer accounts, with a larger proportion of observations at lower account ages.

Chi-Square test for categorical features

To detect shifts in categorical and boolean features, I like to use the Chi-Square test.

This test compares the frequency distribution of a categorical feature in the reference and production datasets, and returns two values:

Chi-Square statistic: A higher value indicates a greater shift between the reference and production datasets.

P-value: A p-value below 0.05 suggests that the difference between the reference and production datasets is statistically significant, indicating potential feature drift.

Python implementation:

import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

# Create empty dataframe with corresponding column names
chi2_results = pd.DataFrame(columns=['Feature', 'Chi-Square Statistic', 'p-value', 'Drift Detected'])

for col in categorical_cols:
    # Get normalized value counts for both reference and production datasets
    ref_counts = ref_data[col].value_counts(normalize=True)
    prod_counts = prod_data[col].value_counts(normalize=True)

    # Ensure all categories are represented in both
    all_categories = set(ref_counts.index).union(set(prod_counts.index))
    ref_counts = ref_counts.reindex(all_categories, fill_value=0)
    prod_counts = prod_counts.reindex(all_categories, fill_value=0)

    # Create contingency table
    contingency_table = np.array([ref_counts * len(ref_data), prod_counts * len(prod_data)])

    # Perform Chi-Square test
    chi2_stat, p_value, _, _ = chi2_contingency(contingency_table)
    drift_detected = p_value < 0.05

    # Store results in chi2_results dataframe
    chi2_results = pd.concat([
        chi2_results,
        pd.DataFrame({
            'Feature': [col],
            'Chi-Square Statistic': [chi2_stat],
            'p-value': [p_value],
            'Drift Detected': [drift_detected]
        })
    ], ignore_index=True)

The Chi-Square statistic of 57.31 with a p-value of 3.72e-14 confirms a large shift in our categorical feature, Entered PIN. This finding aligns with the histogram below, which visually illustrates the shift:

Distribution of categorical feature (image by author)

Detecting multivariate shifts

Spearman Correlation for shifts in pairwise interactions

In addition to monitoring individual feature shifts, it’s important to track shifts in relationships or interactions between features, known as multivariate shifts. Even if the distributions of individual features remain stable, multivariate shifts can signal meaningful differences in the data.

By default, Pandas’ .corr() function calculates Pearson correlation, which only captures linear relationships between variables. However, relationships between features are often non-linear yet still follow a consistent trend.

To capture this, we use Spearman correlation to measure monotonic relationships between features. It captures whether features change together in a consistent direction, even if their relationship isn’t strictly linear.

To assess shifts in feature relationships, we compare:

Reference correlation (ref_corr): Captures historical feature relationships in the reference dataset.

Production correlation (prod_corr): Captures new feature relationships in production.

Absolute difference in correlation: Measures how much feature relationships have shifted between the reference and production datasets. Higher values indicate more significant shifts.

Python implementation:

# Calculate correlation matrices
ref_corr = ref_data.corr(method='spearman')
prod_corr = prod_data.corr(method='spearman')

# Calculate correlation difference
corr_diff = abs(ref_corr - prod_corr)
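To make `corr_diff` actionable, one option is to rank the unique feature pairs by how much their correlation changed and flag the largest shifts (a sketch; the 0.1 alert threshold is an illustrative choice, not a standard):

```python
import numpy as np
import pandas as pd

def top_correlation_shifts(corr_diff: pd.DataFrame, threshold: float = 0.1) -> pd.DataFrame:
    """Return unique feature pairs sorted by absolute correlation change."""
    # Keep only the upper triangle so each pair appears exactly once
    mask = np.triu(np.ones(corr_diff.shape, dtype=bool), k=1)
    pairs = corr_diff.where(mask).stack().dropna()  # MultiIndex of (feature_a, feature_b)
    pairs = pairs.sort_values(ascending=False).rename("abs_corr_change").reset_index()
    pairs.columns = ["feature_a", "feature_b", "abs_corr_change"]
    pairs["flagged"] = pairs["abs_corr_change"] > threshold
    return pairs
```

Scanning the top of this table each monitoring run is an easy way to catch relationship shifts without eyeballing full correlation matrices.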

Example: Change in correlation

Now, let’s look at the correlation between transaction_amount and account_age_in_months:

In ref_corr, the correlation is 0.00095, indicating a weak relationship between the two features.

In prod_corr, the correlation is -0.0325, indicating a weak negative correlation.

Absolute difference in the Spearman correlation is 0.0335, which is a small but noticeable shift.

The absolute difference in correlation indicates a shift in the relationship between transaction_amount and account_age_in_months.

There used to be no relationship between these two features, but the production dataset indicates that there is now a weak negative correlation, meaning that newer accounts have higher transaction amounts. This is spot on!

Autoencoder for complex, high-dimensional multivariate shifts

In addition to monitoring pairwise interactions, we can also look for shifts across more dimensions in the data.

Autoencoders are powerful tools for detecting high-dimensional multivariate shifts, where multiple features collectively change in ways that may not be apparent from looking at individual feature distributions or pairwise correlations.

An autoencoder is a neural network that learns a compressed representation of data through two components:

Encoder: Compresses input data into a lower-dimensional representation.

Decoder: Reconstructs the original input from the compressed representation.

To detect shifts, we compare the reconstructed output to the original input and compute the reconstruction loss.

Low reconstruction loss → The autoencoder successfully reconstructs the data, meaning the new observations are similar to what it has seen and learned during training.

High reconstruction loss → The production data deviates significantly from the learned patterns, indicating potential drift.

Unlike traditional drift metrics that focus on individual features or pairwise relationships, autoencoders capture complex, non-linear dependencies across multiple variables simultaneously.

Python implementation:

import numpy as np
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

ref_features = ref_data[numeric_cols + categorical_cols]
prod_features = prod_data[numeric_cols + categorical_cols]

# Normalize the data
scaler = StandardScaler()
ref_scaled = scaler.fit_transform(ref_features)
prod_scaled = scaler.transform(prod_features)

# Split reference data into train and validation
np.random.shuffle(ref_scaled)
train_size = int(0.8 * len(ref_scaled))
train_data = ref_scaled[:train_size]
val_data = ref_scaled[train_size:]

# Build autoencoder
input_dim = ref_features.shape[1]
encoding_dim = 3

# Input layer
input_layer = Input(shape=(input_dim,))
# Encoder
encoded = Dense(8, activation="relu")(input_layer)
encoded = Dense(encoding_dim, activation="relu")(encoded)
# Decoder
decoded = Dense(8, activation="relu")(encoded)
decoded = Dense(input_dim, activation="linear")(decoded)
# Autoencoder
autoencoder = Model(input_layer, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

# Train autoencoder
history = autoencoder.fit(
    train_data, train_data,
    epochs=50,
    batch_size=64,
    shuffle=True,
    validation_data=(val_data, val_data),
    verbose=0
)

# Calculate reconstruction error
ref_pred = autoencoder.predict(ref_scaled, verbose=0)
prod_pred = autoencoder.predict(prod_scaled, verbose=0)

ref_mse = np.mean(np.power(ref_scaled - ref_pred, 2), axis=1)
prod_mse = np.mean(np.power(prod_scaled - prod_pred, 2), axis=1)
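To turn these per-row errors into a drift signal, one simple convention (an illustrative sketch, not part of the original analysis) is to flag drift when an unusually large fraction of production rows exceed a high percentile of the reference error:

```python
import numpy as np

def reconstruction_drift(ref_mse, prod_mse, pct=95, alert_rate=0.25):
    """Flag drift when too many production rows exceed a reference-error percentile."""
    threshold = np.percentile(ref_mse, pct)             # "normal" error ceiling from reference data
    exceed_rate = float(np.mean(prod_mse > threshold))  # fraction of production rows above it
    # By construction, roughly (100 - pct)% of reference rows exceed the threshold,
    # so an exceed rate far above that baseline suggests the data has shifted
    return {"threshold": float(threshold),
            "exceed_rate": exceed_rate,
            "drift_detected": exceed_rate > alert_rate}
```

Both `pct` and `alert_rate` are tunable: tighter values catch drift earlier at the cost of more false alarms.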

The charts below show the distribution of reconstruction loss between both datasets.

Distribution of reconstruction loss between actuals and predictions (image by author)

The production dataset has a higher mean reconstruction error than the reference dataset, indicating a shift in the overall data. This aligns with the changes introduced in the production dataset, where newer accounts make more high-value transactions.

Summarizing

Model monitoring is an essential, yet often overlooked, responsibility for data scientists and machine learning engineers.

All the statistical methods led to the same conclusion, which aligns with the observed shifts in the data: they detected a trend in production towards newer accounts making higher-value transactions. This shift resulted in higher model scores, signaling an increase in potential fraud.

In this post, I covered techniques for detecting drift on three different levels:

Model score drift: Using Population Stability Index (PSI)

Individual feature drift: Using Kolmogorov-Smirnov test for numeric features and Chi-Square test for categorical features

Multivariate drift: Using Spearman correlation for pairwise interactions and autoencoders for high-dimensional, multivariate shifts.
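To make the first item concrete, here is a minimal PSI sketch (my own illustration, not the post's implementation). It bins both score samples on the reference distribution's quantiles and sums (actual% − expected%) · ln(actual% / expected%); `ref_scores` and `prod_scores` are hypothetical model-score samples:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between two score samples."""
    # Bin edges from quantiles of the expected (reference) distribution
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production scores
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Guard against empty bins before taking the log
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return np.sum((a_pct - e_pct) * np.log(a_pct / e_pct))

rng = np.random.default_rng(0)
ref_scores = rng.beta(2, 5, size=10_000)
prod_scores = rng.beta(2.5, 4.5, size=10_000)  # slightly shifted scores
# Rule of thumb: PSI > 0.1 is often read as moderate drift, > 0.25 as major
print(f"PSI: {psi(ref_scores, prod_scores):.3f}")
```

The drift thresholds are conventions rather than hard statistical guarantees, so they should be tuned to the score distribution and business tolerance at hand.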

These are just a few of the techniques I rely on for comprehensive monitoring — there are plenty of other equally valid statistical methods that can also detect drift effectively.

Detected shifts often point to underlying issues that warrant further investigation. The root cause could be as serious as a data collection bug, or as minor as a time change, such as a daylight saving time adjustment.

There are also fantastic Python packages, like evidently.ai, that automate many of these comparisons. However, I believe there’s significant value in deeply understanding the statistical techniques behind drift detection, rather than relying solely on these tools.

What’s the model monitoring process like at places you’ve worked?

Want to build your AI skills?

👉🏻 I run the AI Weekender and write weekly blog posts on data science, AI weekend projects, and career advice for data professionals.

Resources

Read More »

Peer raises $10.5M for metaverse engine, launches 3D personal planets

Peer Global Inc announced today that it has raised $10.5 million in its latest round of funding for its metaverse game engine, which it plans to use to build out its team and to accelerate AI product development. In addition to the funding, the company also launched its personal planets feature — an in-engine feature that allows users to create their own 3D social hubs. According to founder Tony Tran, this new form of social engagement is intended to be a tonic to more addictive, static forms of social media.

Peer’s total investment numbers sit at $65.5 million, all from angel investors. The Family Office of Tommy Mai is the sole investor in this round of funding. The company will build out its AI features, which make up the backbone of its persistent world. AI is also one of the tools that the company offers for developers who wish to build their experiences on Peer. Within the game’s engine, all games and experiences would be connected to each other.

Mai said in a statement, “Websites, social networks, and digital brand experiences today are flat. People have short attention spans. AI will push everything into spatial experiences, and Peer is leading the way. We’re really excited about the potential for this technology and think Tony and team are the ones to get this right.”

Peer wants to redefine the social experience

Speaking with GamesBeat, Tran said that Peer reshapes those same social engagement forces into something that gets users going outside and engaging with the world around them. “We use location sharing dynamically within the platform, to create a living map where people can see each other moving in real-time, sparking spontaneous interactions and collaboration rather than passive consumption. This transforms the energy of traditional FOMO into something constructive towards exploration, discovery, and shared experiences. Instead of feeling left out, users are invited into the action, whether it’s meeting up with friends, joining an event, or co-creating within the AI-driven world.”

Tran also notes the advantages of using AI for developers: “What we have is a social interface where AI can create to its maximum potential—generating games, characters, and entire experiences on demand—for mass consumption. Peer leverages AI to bring the visual side of the metaverse to life in a way that other experiences can’t. All other metaverses exist in isolation, where in Peer, AI acts as the connective tissue. It links people, places, and experiences in real time, forming an instant information layer that keeps everything fluid, responsive, and intelligent.”

According to Tran, Peer plans to launch its location-based mechanics in the near future, along with nascent monetization mechanics such as digital property sales and premium experiences. Over the long term, the company plans to make the Peer experience accessible on any device. To build at scale, it plans to offer subscription tiers, AI-based advertising, and a full digital economy.

Tran told GamesBeat, “Peer’s AI integration allows for dynamic, procedurally generated environments, meaning developers can create living worlds that adapt to player actions… Peer gives developers a platform to build games that are not just played but lived in—unlocking new possibilities for immersive, connected gameplay.”

Read More »

A standard, open framework for building AI agents is coming from Cisco, LangChain and Galileo

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More One goal for an agentic future is for AI agents from different organizations to freely and seamlessly talk to one another. But getting to that point requires interoperability, and these agents may have been built with different LLMs, data frameworks and code. To achieve interoperability, developers of these agents must agree on how they can communicate with each other. This is a challenging task.  A group of companies, including Cisco, LangChain, LlamaIndex, Galileo and Glean, have now created AGNTCY, an open-source collective with the goal of creating an industry-standard agent interoperability language. AGNTCY aims to make it easy for any AI agent to communicate and exchange data with another. Uniting AI Agents “Just like when the cloud and the internet came about and accelerated applications and all social interactions at a global scale, we want to build the Internet of Agents that accelerate all of human work at a global scale,” said Vijoy Pandey, head of Outshift by Cisco, Cisco’s incubation arm, in an interview with VentureBeat.  Pandey likened AGNTCY to the advent of the Transmission Control Protocol/Internet Protocol (TCP/IP) and the domain name system (DNS), which helped organize the internet and allowed for interconnections between different computer systems.  “The way we are thinking about this problem is that the original internet allowed for humans and servers and web farms to all come together,” he said. “This is the Internet of Agents, and the only way to do that is to make it open and interoperable.” Cisco, LangChain and Galileo will act as AGNTCY’s core maintainers, with Glean and LlamaIndex as contributors. However, this structure may change as the collective adds more members.  Standardizing a fast-moving industry AI agents cannot be

Read More »

Hugging Face co-founder Thomas Wolf just challenged Anthropic CEO’s vision for AI’s future — and the $130 billion industry is taking notice

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Thomas Wolf, co-founder of AI company Hugging Face, has issued a stark challenge to the tech industry’s most optimistic visions of artificial intelligence, arguing that today’s AI systems are fundamentally incapable of delivering the scientific revolutions their creators promise. In a provocative blog post published on his personal website this morning, Wolf directly confronts the widely circulated vision of Anthropic CEO Dario Amodei, who predicted that advanced AI would deliver a “compressed 21st century” where decades of scientific progress could unfold in just years. “I’m afraid AI won’t give us a ‘compressed 21st century,’” Wolf writes in his post, arguing that current AI systems are more likely to produce “a country of yes-men on servers” rather than the “country of geniuses” that Amodei envisions. The exchange highlights a growing divide in how AI leaders think about the technology’s potential to transform scientific discovery and problem-solving, with major implications for business strategies, research priorities, and policy decisions. From straight-A student to ‘mediocre researcher’: Why academic excellence doesn’t equal scientific genius Wolf grounds his critique in personal experience. Despite being a straight-A student who attended MIT, he describes discovering he was a “pretty average, underwhelming, mediocre researcher” when he began his PhD work. This experience shaped his view that academic success and scientific genius require fundamentally different mental approaches — the former rewarding conformity, the latter demanding rebellion against established thinking. “The main mistake people usually make is thinking Newton or Einstein were just scaled-up good students,” Wolf explains. 
“A real science breakthrough is Copernicus proposing, against all the knowledge of his days — in ML terms we would say ‘despite all his training dataset’ — that the earth may orbit the sun

Read More »


Mistral releases new optical character recognition (OCR) API claiming top performance globally

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Well-funded French AI startup Mistral is content to go its own way. In a sea of competing reasoning models, the company today introduced Mistral OCR, a new Optical Character Recognition (OCR) API designed to provide advanced document understanding capabilities. The API extracts content—including handwritten notes, typed text, images, tables, and equations—from unstructured PDFs and images with high accuracy, presenting it in a structured format. Structured data is information that is organized in a predefined manner, typically using rows and columns, making it easy to search and analyze. Common examples include names, addresses, and financial transactions stored in databases or spreadsheets. In contrast, unstructured data lacks a specific format or structure, making it more challenging to process and analyze. This category encompasses a wide range of data types, such as emails, social media posts, videos, images, and audio files. Since unstructured data doesn’t fit neatly into traditional databases, specialized tools and techniques, like natural language processing and machine learning, are often employed to extract meaningful insights from it. Understanding the distinction between these data types is crucial for businesses aiming to effectively manage and leverage their information assets. With multilingual support, fast processing speeds, and integration with large language models for document understanding, Mistral OCR is positioned to assist organizations in making their documentation AI-ready. Given that, according to Mistral’s blog post announcing the new API, 90% of all business information is unstructured, the new API should be a huge boon to organizations seeking to digitize and catalog their data for use in AI applications or internal/external knowledge bases.
A new gold standard for OCR Mistral OCR aims to improve how organizations process and analyze complex documents. Unlike traditional OCR solutions that primarily

Read More »

Oil Recovers on Trade Tariff Reprieve

Oil eked out a marginal gain after a session of whipsawing as US President Donald Trump moved to delay tariffs on imports from Canada and Mexico. West Texas Intermediate settled little changed above $66 a barrel, snapping a four-day losing streak by a hair. Brent edged up slightly to top $69 after touching its lowest since late 2021 on Wednesday. Diverging supply signals sparked price fluctuations on Thursday. On the one hand, Trump’s tariff threats have prompted some analysts to reconsider how low crude may tumble if the trade wars weigh on economic growth and energy demand. However, the prospect of levies has also been interpreted as somewhat supportive for WTI prices, as the benchmark may see increased demand to replace diverted Canadian and Mexican supplies. Trump said that he’ll defer tariffs on Mexico and Canada for all goods covered by the North American trade agreement known as USMCA, which includes energy. Commerce Secretary Howard Lutnick telegraphed the decision earlier in the day, saying Trump was weighing the move. The White House estimates that 62% of Canadian imports will still be subject to the tariffs, most of which are energy products being tariffed at a 10% rate, along with half of goods coming from Mexico. WTI’s prompt spread — the difference between its two nearest contracts — narrowed to 36 cents following the tariff reprieve, a sign of potentially looser market conditions. Canadian heavy crudes rallied on the tariff postponement. Futures have tumbled since mid-January as Trump’s trade policies rattle global markets and America’s neighbors ready countermeasures. OPEC+ also surprised markets with plans to start reviving idled production in April, adding to the bearish headwinds. Brent futures fell into oversold territory for the first time since September based on one technical gauge. The term implies the recent

Read More »

Tanzania Plans to Kick Off 5th Oil, Gas Licensing Round in May

Tanzania aims to start a licensing round for dozens of oil and gas exploration blocks in May, the first in more than a decade for the nation with an estimated 57 trillion cubic feet of natural gas reserves. Three of the 26 blocks are in Lake Tanganyika and the rest in the Indian Ocean. The country’s last licensing round was in May 2014. “We are proceeding with promotion activities because the blocks have already been identified and the data is in place. We are waiting for government approval of the Model Production Sharing Agreement, which outlines the fiscal terms,” said Charles Sangweni, director general of the Petroleum Upstream Regulatory Authority. “Our plan is to launch during the Africa Energy Summit in London from 13th to 15th May.” Tanzania already produces natural gas, which it uses to generate electricity, and plans a $42 billion liquefied natural gas facility to be built by a consortium comprising Shell Plc, Equinor ASA and Exxon Mobil Corp. That long-delayed plan is still under negotiation as Tanzania’s government is “trying to align just a few key outstanding issues,” Sangweni said in an interview in Dar es Salaam. “An agreement is coming, that’s my hope. When, I can’t tell you.” Bloomberg

Read More »

Anthropic just launched a new platform that lets everyone in your company collaborate on AI — not just the tech team

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Anthropic has launched a significant overhaul to its developer platform today, introducing team collaboration features and extended reasoning capabilities for its Claude AI assistant that aim to solve major pain points for organizations implementing artificial intelligence solutions. The upgraded Anthropic Console now enables cross-functional teams to collaborate on AI prompts—the text instructions that guide AI models—while also supporting the company’s latest Claude 3.7 Sonnet model with new controls for complex problem-solving. “We built our shareable prompts to help our customers and developers work together effectively on prompt development,” an Anthropic spokesperson told VentureBeat. “What we learned from talking to customers was that prompt creation rarely happens in isolation. It’s a team effort involving developers, subject matter experts, product managers, and QA folks all trying to get the best results.” The move addresses a growing challenge for enterprises adopting AI: coordinating prompt engineering work across technical and business teams. Before this update, companies often resorted to sharing prompts through documents or messaging apps, creating version control issues and knowledge silos. How Claude’s new thinking controls balance advanced AI power with budget-friendly cost management The updated platform also introduces “extended thinking controls” for Claude 3.7 Sonnet, allowing developers to specify when the AI should use deeper reasoning while setting budget limits to control costs. “Claude 3.7 Sonnet gives you two modes in one model: standard mode for quick responses and extended thinking mode when you need deeper problem-solving,” the spokesperson told VentureBeat. 
“In extended thinking mode, Claude takes time to work through problems step-by-step, similar to how humans approach complex challenges.” This dual approach helps companies balance performance with expenditure—a key consideration as AI implementation costs come under greater scrutiny amid widespread adoption.

Read More »

‘EPA’s action is illegal,’ says green group attorney about $20B GGRF funding freeze

An attorney representing the Climate United Fund sent a Tuesday letter to the Environmental Protection Agency asking the agency to immediately reinstate the fund’s access to its nearly $7 billion Greenhouse Gas Reduction Fund grant, calling the freeze illegal. “As already explained, Climate United’s preferred path forward is direct communication to find common ground,” said Adam Unikowsky, a partner at Jenner & Block. “But if the EPA adheres to its decision to suspend or terminate Climate United’s grant, it should stay its decision pending judicial review …. Climate United is likely to succeed in showing that the EPA’s action is illegal.” Review of the matter is urgent, Unikowsky said, as Climate United faces “immediate, irreparable harm” if the funding is not reinstated. If the group can’t find another source of funding, it will “shortly” run out of cash for operating expenses, employee pay, rent for some offices, and pay for contractors who provide services such as IT and legal, the letter said. In addition, Climate United won’t be able to meet its commitments for already approved loans and awards, which “would cause profound harm to its subawardees,” Unikowsky alleged. With its GGRF funding, Climate United has so far made investments that include a $10.8 million pre-development loan for utility-scale solar projects on tribal lands in eastern Oregon and Idaho and $250 million toward electric truck manufacturing. EPA Administrator Lee Zeldin commented on the freeze Wednesday, saying that his team had “uncovered extensive troubling developments with $20 billion in ‘gold bars’ that the Biden EPA ‘tossed off the Titanic.’” “These taxpayer funds were parked at an outside financial institution to rush out the door and circumvent proper oversight; $20 billion was given to just eight pass-through nongovernmental entities in an effort riddled with self-dealing, conflicts of interest, and an extreme

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters, and Energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE