Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.


Featured Articles

5 key questions your developers should be asking about MCP

The Model Context Protocol (MCP) has become one of the most talked-about developments in AI integration since its introduction by Anthropic in late 2024. If you’re tuned into the AI space at all, you’ve likely been inundated with developer “hot takes” on the topic. Some think it’s the best thing ever; others are quick to point out its shortcomings. In reality, there’s some truth to both. One pattern I’ve noticed with MCP adoption is that skepticism typically gives way to recognition: This protocol solves genuine architectural problems that other approaches don’t. I’ve gathered a list of questions below that reflect the conversations I’ve had with fellow builders who are considering bringing MCP to production environments. 1. Why should I use MCP over other alternatives? Of course, most developers considering MCP are already familiar with implementations like OpenAI’s custom GPTs, vanilla function calling, Responses API with function calling, and hardcoded connections to services like Google Drive. The question isn’t really whether MCP fully replaces these approaches — under the hood, you could absolutely use the Responses API with function calling that still connects to MCP. What matters here is the resulting stack. Despite all the hype about MCP, here’s the straight truth: It’s not a massive technical leap. MCP essentially “wraps” existing APIs in a way that’s understandable to large language models (LLMs). Sure, a lot of services already have an OpenAPI spec that models can use. For small or personal projects, the objection that MCP “isn’t that big a deal” is pretty fair. …
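
To make the “wraps existing APIs” point concrete, here is a minimal sketch (not from the article) of exposing an existing REST endpoint as an MCP tool using the official Python SDK; the server name, endpoint URL, and tool are illustrative assumptions.

```python
# Hypothetical sketch: wrap an existing REST API as an MCP tool.
# Uses the official `mcp` Python SDK (FastMCP helper) plus httpx.
from mcp.server.fastmcp import FastMCP
import httpx

mcp = FastMCP("weather-wrapper")  # server name is illustrative

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a short forecast for a city by calling an existing REST API."""
    # `api.example.com` is a placeholder endpoint, not a real service.
    resp = httpx.get("https://api.example.com/forecast", params={"city": city})
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP-capable client can call it
```

Any MCP-capable client could then discover and call get_forecast without bespoke glue code written for that one service.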

Read More »

WTI Flat as EU Targets Russian Refined Fuels

Oil ended the day little changed as traders weighed fresh efforts from the European Union to crimp Russian energy exports. West Texas Intermediate crude held steady to close near $67 a barrel after the EU agreed to a lower price cap for Moscow’s crude as part of a broader sanctions package. The measures include curbs on fuels made from Russian petroleum, additional banking limitations and a ban on a large oil refinery in India. The Asian country, which buys large amounts of Russian crude, is a major exporter of refined products to Europe, where markets for fuels like diesel have been tight. “While the EU measures may not drastically impact crude flows, the restrictions on refined products and expanded shadow fleet targeting are fueling concern in the diesel complex,” said Rebecca Babin, a senior energy trader at CIBC Private Wealth Group. Oil has trended higher since early May, with both Morgan Stanley and Goldman Sachs Group Inc. making the case that a buildup in global crude stockpiles has occurred in regions that don’t hold much sway in price-setting. Meanwhile, spreads in the diesel market are indicating tightness. The gap between the first and second month of New York heating oil futures climbed to $4.17 a gallon at one point in the session, up from $2.99 on Thursday. (Diesel and heating oil are the same product in the US, just taxed differently.) “The logic of diesel tightness propping up crude flat prices remains unchanged,” said Huang Wanzhe, an analyst at Dadi Futures Co., who added that the peak-demand season had seen a solid start. “The key question is how long this strength can last,” she said. In wider markets, strong US data on consumer sentiment eased concerns about the world’s largest economy, helping to underpin a risk-on mood. Crude …

Read More »

SLB Sees ‘Constructive’ Second Half of 2025

SLB, the world’s largest oil-services provider, sees resiliency in the industry and remains constructive about the second half of 2025 despite uncertainties in customer demand. “Despite pockets of activity adjustments in key markets, the industry has shown that it can operate through uncertainty without a significant drop in upstream spending,” SLB Chief Executive Officer Olivier Le Peuch said in a statement Friday. “This has been driven by the combination of capital discipline and the need for energy security.” His comments came as SLB posted second-quarter adjusted profit of 74 cents a share, exceeding analyst expectations. SLB, which gets about 82% of its revenue from international markets, has mitigated some of the negative impacts facing smaller peers that are more levered to domestic production. The company is seen as a gauge for the health of the sector through its broad footprint in all major crude-producing theaters. US oil drilling has dropped 12% this year to the lowest since September 2021, driven by demand concerns triggered by US President Donald Trump’s tariff proposals and faster-than-expected increases in OPEC+ production. Government forecasters have trimmed domestic crude-production estimates for 2025, signaling a lower-for-longer activity environment for service companies. “Looking ahead, assuming commodity prices stay range bound, we remain constructive for the second half of the year,” Le Peuch said. Traders and analysts will also be listening closely to SLB’s quarterly conference call Friday for more details on the completion of the merger with ChampionX Corp., which the company announced Wednesday. SLB is a “leader in digital services for the energy industry and could soon become a leader in production services and equipment post the close of the acquisition,” Citigroup Global Markets Inc. analyst Scott Gruber wrote in a note to clients. SLB is the first of the biggest oilfield contractors …

Read More »

How OpenAI’s red team made ChatGPT agent into an AI fortress

Called the “ChatGPT agent,” this new feature is an optional mode that ChatGPT paying subscribers can engage by clicking “Tools” in the prompt entry box and selecting “agent mode,” at which point, they can ask ChatGPT to log into their email and other web accounts; write and respond to emails; download, modify, and create files; and do a host of other tasks on their behalf, autonomously, much like a real person using a computer with their login credentials. Obviously, this also requires the user to trust the ChatGPT agent not to do anything problematic or nefarious, or to leak their data and sensitive information. It also poses greater risks for a user and their employer than the regular ChatGPT, which can’t log into web accounts or modify files directly. Keren Gu, a member of the Safety Research team at OpenAI, commented on X that “we’ve activated our strongest safeguards for ChatGPT Agent. It’s the first model we’ve classified as High capability in biology & chemistry under our Preparedness Framework. Here’s why that matters–and what we’re doing to keep it safe.” So how did OpenAI handle all these security issues? The red team’s mission: Looking at OpenAI’s ChatGPT agent system card, the “red team” employed by the company to test the feature faced a challenging mission: specifically, 16 …

Read More »

Meet AnyCoder, a new Kimi K2-powered tool for fast prototyping and deploying web apps

AnyCoder, an open-source web app development environment developed by Hugging Face ML Growth Lead Ahsen Khaliq (@_akhaliq on X), has launched on Hugging Face Spaces. The tool, now available for all users of the AI code sharing repository Hugging Face, integrates live previews, multimodal input, and one-click deployment — all within a hosted environment, allowing indie creators without much technical expertise, or those working on behalf of clients or large enterprises, to get started “vibe coding” web apps rapidly using the assistance of Hugging Face-hosted AI models. It therefore also acts as an alternative to services such as Lovable, which also allow users to type in plain English and begin coding apps without formal programming knowledge. Free vibe coding available to all, powered by Kimi K2: Khaliq built AnyCoder as a personal project within the Hugging Face ecosystem and as “one of the first vibe coding apps” to support Moonshot’s powerful yet small and efficient Kimi K2 model launched last week. AnyCoder’s main functionality allows users to enter plain-text descriptions to generate HTML, CSS, and JavaScript. These are displayed in a live preview pane and can be edited or directly deployed. It also includes example templates for todo apps, dashboards, calculators, and more. [Screenshot of AnyCoder on Hugging Face] Built entirely using Hugging Face’s open-source Python development environment Gradio, AnyCoder allows users to describe applications in plain English or upload images, and instantly generate working frontend code. In a direct message conversation with this VentureBeat journalist, Khaliq described it as a “free open source vibe coding app.” However, he also noted …
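
The described flow (a plain-English prompt in, generated frontend code out, inside a Gradio UI backed by a Hugging Face-hosted model) can be sketched roughly as follows. This is an illustrative guess at the pattern, not AnyCoder’s actual source; the model ID, system prompt, and token limit are assumptions.

```python
# Illustrative sketch only -- not AnyCoder's actual implementation.
# A Gradio UI that sends a plain-English description to a Hugging Face-hosted
# model and displays the generated HTML.
import gradio as gr
from huggingface_hub import InferenceClient

# Model ID is an assumption; substitute any hosted code-capable chat model.
client = InferenceClient("moonshotai/Kimi-K2-Instruct")

def generate_frontend(description: str) -> str:
    messages = [
        {"role": "system", "content": "Return a single self-contained HTML page."},
        {"role": "user", "content": description},
    ]
    result = client.chat_completion(messages=messages, max_tokens=2048)
    return result.choices[0].message.content

demo = gr.Interface(
    fn=generate_frontend,
    inputs=gr.Textbox(label="Describe the app you want"),
    outputs=gr.Code(language="html", label="Generated frontend"),
)

if __name__ == "__main__":
    demo.launch()
```

AnyCoder’s hosted version layers the features the article describes (live preview, image input, one-click Space deployment) on top of a loop like this one.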

Read More »

Germany’s Top Performing Smallcap Surges Again

A breakneck rally in the shares of a German pipeline builder accelerated this week after the company won a role plugging LNG terminals on the coast into the nation’s gas grid. Friedrich Vorwerk Group SE’s stock is up 24% since last Friday’s close, the biggest gain on Germany’s small-cap SDAX index. The bulk of the advance came after it secured a contract valued in the hundreds of millions of euros to build an 86km-long pipeline with a consortium of companies. It’s an example of how European firms are benefiting from the wall of money Chancellor Friedrich Merz has unleashed to overhaul the nation’s infrastructure and military. The contract is the latest deal to help revive the fortunes of the builder of underground gas, electricity and hydrogen pipes, sending its stock price to a record high. It’s “more like an add-on. It’s just nice to have,” said Nikolas Demeter, an analyst at B Metzler Seel Sohn & Co AG. For now, the company still has three buy ratings out of five from analysts. That may change because their targets trail the company’s current share price after this week’s contract win took its year-to-date advance past 200%. The shares now trade at almost 32 times forward blended earnings, compared with about 14 times for the SDAX index and the Stoxx 600 Index, the European benchmark. Labor Challenge: Leon Mühlenbruch at mwb research AG, who has a valuation-driven sell rating on the stock, warns that Vorwerk’s full order book could become a problem. “Capacity constraints are becoming increasingly relevant,” Mühlenbruch said. “Further growth depends on expanding that capacity, a challenge due to the persistent shortage of specialized skilled labor.” But for now the Tostedt-based company is on a roll, and its rebound in recent years has been dramatic. After an initial …

Read More »

EU Slaps New Sanctions on Russia and Its Oil Trade

European Union states have approved a fresh sanctions package on Russia over its war against Ukraine including a revised oil price cap, new banking restrictions, and curbs on fuels made from Russian petroleum. The package, the bloc’s 18th since Moscow’s full-scale invasion, will see about 20 more Russian banks cut off from the international payments system SWIFT and face a full transaction ban, as well as restrictions imposed on Russian petroleum refined in third countries. A large oil refinery in India, part-owned by Russia’s state-run oil company, Rosneft PJSC, was also blacklisted. The cap on Russian oil, currently set at $60 per barrel, will be set dynamically at 15 percent below market rates moving forward. The new mechanism will see the threshold start off somewhere between $45 and $50 and be automatically revised at least twice a year based on market prices, Bloomberg previously reported. The latest sanctions by the European Union are aimed at further crimping the Kremlin’s energy revenue, the bulk of which comes from oil exports to India and China. However, the original price cap imposed by the Group of Seven has had a limited impact on Russia’s oil flows, as the nation has built up a huge shadow fleet of tankers to haul its oil without using western services. The EU has also so far failed to convince the US to offer crucial support to the lower cap. Discussions are ongoing with other G-7 members but the US opposition is making it hard to reach agreement, according to people familiar with the matter. The UK, however, is expected to be on board with the move, the people said. The EU’s move to restrict fuels such as diesel made from Russian crude could have some market impact, as Europe imports the fuel from India, which in turn buys large amounts of …
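
As a quick arithmetic illustration of the dynamic cap described above (the market prices below are assumptions for the sake of the example, not reported figures), a 15 percent discount lands in the reported $45–$50 starting range when market prices sit in the mid-to-high $50s:

```python
# Illustrative check of the "15 percent below market" cap mechanism.
def dynamic_cap(market_price: float, discount: float = 0.15) -> float:
    """Cap set a fixed percentage below the prevailing market price."""
    return round(market_price * (1 - discount), 2)

# Assumed market prices of roughly $53-$59 a barrel put the cap
# in the reported $45-$50 starting range.
for price in (53, 56, 59):
    print(price, "->", dynamic_cap(price))
# 53 -> 45.05, 56 -> 47.6, 59 -> 50.15
```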

Read More »

Aramco Nears $10B Jafurah Pipeline Stake Sale to GIP

Saudi Aramco is in advanced talks to sell a roughly $10 billion stake in midstream infrastructure serving the giant Jafurah natural gas project to a group led by BlackRock Inc., according to people with knowledge of the matter. The consortium is backed by BlackRock’s Global Infrastructure Partners unit and could reach an agreement as soon as the coming days, said the people, who asked not to be identified discussing confidential information. The deal will involve pipelines and other infrastructure serving the $100 billion-plus Jafurah project, which Aramco is developing to supply domestic power plants as well as for export. It’s an unconventional field, meaning the gas is trapped in hard-to-access rock formations and requires special techniques to extract. Reuters reported on Thursday that GIP was nearing a deal, citing unidentified people. Aramco didn’t respond to emailed queries outside regular business hours in Saudi Arabia. Bloomberg News first revealed in 2021 that Aramco was considering introducing outside investors into parts of the Jafurah project. Aramco was approaching infrastructure funds to gauge their interest in the midstream assets, people with knowledge of the matter said the next year. State-controlled Aramco has been seeking to bring in international capital and sell stakes in some assets as the government pursues massive projects to build futuristic cities and diversify its economy. The kingdom is pushing ahead with a vast expansion, including developing new tourism destinations and building up a manufacturing base, to prepare for a future in which oil demand will begin to wane. BlackRock was earlier among investors that bought stakes in Aramco’s national gas pipeline network.

Read More »

Trump wants to use AI to prevent wildfires. Utilities are trying. Will it work?

The United States has already experienced more wildfires this year than it has over the same period in any other year this decade, according to the National Interagency Fire Center. With the risk of fire expected to grow due to climate change and other factors, utilities have increasingly turned to technology to help them keep up. And those efforts could get a boost following President Donald Trump’s June 12 executive order calling on federal agencies to deploy technology to address “a slow and inadequate response to wildfires.” The order directed agencies to create a roadmap for using “artificial intelligence, data sharing, innovative modeling and mapping capabilities, and technology to identify wildland fire ignitions and weather forecasts to inform response and evacuation.” It also told federal authorities to declassify historical satellite datasets that could be used to improve wildfire prediction, and called for strengthening coordination among agencies and improving wildland and vegetation management. Additionally, the order laid out a vision for consolidating federal wildfire prevention and suppression efforts that are currently spread across agencies. The White House’s proposed 2026 budget blueprint would create a new, unified federal wildland fire service under the Department of the Interior. So far, Trump’s directive has drawn a mixed response from wildfire experts. While some said it could empower local governments and save utilities money, others said the order’s impact will be limited. “I think some people read into the order more than is there, and some people read less,” said Chet Wade, a spokesperson for the Partners in Wildfire Prevention coalition. “I don’t know exactly what will come of it, but getting technology into the right hands could be very helpful.” Fire prevention goes high tech: Since the 2018 Camp Fire that bankrupted PG&E and set a nationwide precedent for suing utilities that trigger large fires, energy companies around …

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote $200 billion between them to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Microsoft President Brad Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it has become a regular at the big tech trade show in Las Vegas as a non-tech company showing off technology, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). [Image: John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.] While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do …

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation. AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to …

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends. It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle …

Read More »

Three Aberdeen oil company headquarters sell for £45m

Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell them as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024. Trio of buildings snapped up: London-headquartered EEH Ventures was founded in 2013 and owns a number of residential properties, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. [Image: The Aberdeen headquarters of Taqa. Supplied by CBRE] The North Sea headquarters of Middle East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year that it is planning to exit the North Sea by the end of 2029, blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030. Aberdeen big deals: The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. [Image: Aberdeen city centre. Shutterstock] Hammerson, who also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based …

Read More »

2025 ransomware predictions, trends, and how to prepare

Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks.

Top Ransomware Predictions for 2025:
● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices, which sound more and more realistic by adopting local accents and dialects to enhance credibility and success rates.
● The Trifecta of Social Engineering Attacks: Vishing, ransomware and data exfiltration. Additionally, sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks, preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny.
● Targeted Industries Under Siege: Manufacturing, healthcare, education, and energy will remain primary targets, with no slowdown in attacks expected.
● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days.
● Ransomware Payouts Are on the Rise: In 2025, ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups specializing in designated attack tactics, and collaboration by these groups that have entered a sophisticated profit-sharing model using Ransomware-as-a-Service.

To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies:
● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats.
● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops …

Read More »

A major AI training data set contains millions of examples of personal data

Millions of images of passports, credit cards, birth certificates, and other documents containing personally identifiable information are likely included in one of the biggest open-source AI training sets, new research has found. Thousands of images—including identifiable faces—were found in a small subset of DataComp CommonPool, a major AI training set for image generation scraped from the web. Because the researchers audited just 0.1% of CommonPool’s data, they estimate that the real number of images containing personally identifiable information, including faces and identity documents, is in the hundreds of millions. The study that details the breach was published on arXiv earlier this month. The bottom line, says William Agnew, a postdoctoral fellow in AI ethics at Carnegie Mellon University and one of the coauthors, is that “anything you put online can [be] and probably has been scraped.” The researchers found thousands of instances of validated identity documents—including images of credit cards, driver’s licenses, passports, and birth certificates—as well as over 800 validated job application documents (including résumés and cover letters), which were confirmed through LinkedIn and other web searches as being associated with real people. (In many more cases, the researchers did not have time to validate the documents or were unable to because of issues like image clarity.) 
A number of the résumés disclosed sensitive information including disability status, the results of background checks, birth dates and birthplaces of dependents, and race. When résumés were linked to people with online presences, researchers also found contact information, government identifiers, sociodemographic information, face photographs, home addresses, and the contact information of other people (like references).

[Image: Examples of identity-related documents found in CommonPool’s small-scale dataset, showing a credit card, social security number, and a driver’s license. For each sample, the type of URL site is shown at the top, the image in the middle, and the caption in quotes below. All personal information has been replaced, and text has been paraphrased to avoid direct quotations. Images have been redacted to show the presence of faces without identifying the individuals. Courtesy of the researchers.]

When it was released in 2023, DataComp CommonPool, with its 12.8 billion data samples, was the largest existing data set of publicly available image-text pairs, which are often used to train generative text-to-image models. While its curators said that CommonPool was intended for academic research, its license does not prohibit commercial use either.
CommonPool was created as a follow-up to the LAION-5B data set, which was used to train models including Stable Diffusion and Midjourney. It draws on the same data source: web scraping done by the nonprofit Common Crawl between 2014 and 2022. While commercial models often do not disclose what data sets they are trained on, the shared data sources of DataComp CommonPool and LAION-5B mean that the datasets are similar, and that the same personally identifiable information likely appears in LAION-5B, as well as in other downstream models trained on CommonPool data. CommonPool researchers did not respond to emailed questions. And since DataComp CommonPool has been downloaded more than 2 million times over the past two years, it is likely that “there [are] many downstream models that are all trained on this exact data set,” says Rachel Hong, a PhD student in computer science at the University of Washington and the paper’s lead author. Those models would carry similar privacy risks.

Good intentions are not enough

“You can assume that any large scale web-scraped data always contains content that shouldn’t be there,” says Abeba Birhane, a cognitive scientist and tech ethicist who leads Trinity College Dublin’s AI Accountability Lab—whether it’s personally identifiable information (PII), child sexual abuse imagery, or hate speech (which Birhane’s own research into LAION-5B has found). Indeed, the curators of DataComp CommonPool were themselves aware it was likely that PII would appear in the data set and did take some measures to preserve privacy, including automatically detecting and blurring faces. But in their limited data set, Hong’s team found and validated over 800 faces that the algorithm had missed, and they estimated that overall, the algorithm had missed 102 million faces in the entire data set. The curators also did not apply filters that could have recognized known PII strings, like emails or social security numbers. “Filtering is extremely hard to do well,” says Agnew. “They would have had to make very significant advancements in PII detection and removal that they haven’t made public to be able to effectively filter this.”

[Image: Examples of resume documents and personal disclosures found in CommonPool’s small-scale dataset. For each sample, the type of URL site is shown at the top, the image in the middle, and the caption in quotes below. All personal information has been replaced, and text has been paraphrased to avoid direct quotations. Images have been redacted to show the presence of faces without identifying the individuals. Courtesy of the researchers.]

There are other privacy issues that the face blurring doesn’t address. While the face blurring filter is automatically applied, it is optional and can be removed. Additionally, the captions that often accompany the photos, as well as the photos’ metadata, often contain even more personal information, such as names and exact locations. Another privacy mitigation measure comes from Hugging Face, a platform that distributes training data sets and hosts CommonPool, which integrates with a tool that theoretically allows people to search for and remove their own information from a data set. But as the researchers note in their paper, this would require people to know that their data is there to start with.
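
To illustrate the earlier point about string-based filters for known PII patterns (and why Agnew calls thorough filtering so hard), here is a deliberately simple, hypothetical sketch of caption-level screening; the regex patterns and the example caption are illustrative assumptions and would miss far more than they catch.

```python
# Hypothetical sketch of string-based PII screening for dataset captions.
# Real-world filtering is far harder: obfuscated text, image-embedded PII,
# and false positives all limit what simple patterns like these can do.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def flag_pii(caption: str) -> list[str]:
    """Return the names of any PII patterns found in a caption string."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(caption)]

# Example with a synthetic caption, not taken from the data set.
print(flag_pii("Contact jane.doe@example.com or 555-867-5309 for the resume"))
# -> ['email', 'phone']
```
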
When asked for comment, Florent Daudens of Hugging Face said that “maximizing the privacy of data subjects across the AI ecosystem takes a multilayered approach, which includes but is not limited to the widget mentioned,” and that the platform is “working with our community of users to move the needle in a more privacy-grounded direction.” 

In any case, just getting your data removed from one data set probably isn’t enough. “Even if someone finds out their data was used in a training data set and … exercises their right to deletion, technically the law is unclear about what that means,” says Tiffany Li, an assistant professor of law at the University of New Hampshire School of Law. “If the organization only deletes data from the training data sets—but does not delete or retrain the already trained model—then the harm will nonetheless be done.”

The bottom line, says Agnew, is that “if you web-scrape, you’re going to have private data in there. Even if you filter, you’re still going to have private data in there, just because of the scale of this. And that’s something that we [machine-learning researchers], as a field, really need to grapple with.”

Reconsidering consent

CommonPool was built on web data scraped between 2014 and 2022, meaning that many of the images likely date to years before ChatGPT was released in late 2022. So even if it’s theoretically possible that some people consented to having their information publicly available to anyone on the web, they could not have consented to having their data used to train large AI models that did not yet exist. And because web scrapers often scrape data from each other, an image originally uploaded by its owner to one specific location can easily find its way into other image repositories. “I might upload something onto the internet, and then … a year or so later, [I] want to take it down, but then that [removal] doesn’t necessarily do anything anymore,” says Agnew.

The researchers also found numerous examples of children’s personal information, including depictions of birth certificates, passports, and health status, but in contexts suggesting that they had been shared for limited purposes. “It really illuminates the original sin of AI systems built off public data—it’s extractive, misleading, and dangerous to people who have been using the internet with one framework of risk, never assuming it would all be hoovered up by a group trying to create an image generator,” says Ben Winters, the director of AI and privacy at the Consumer Federation of America.

Finding a policy that fits

Ultimately, the paper calls for the machine-learning community to rethink the common practice of indiscriminate web scraping. It also lays out how the presence of PII in massive machine-learning data sets may violate current privacy laws, as well as the limits of those laws’ ability to protect privacy. “We have the GDPR in Europe, we have the CCPA in California, but there’s still no federal data protection law in America, which also means that different Americans have different rights protections,” says Marietje Schaake, a Dutch lawmaker turned tech policy expert who currently serves as a fellow at Stanford’s Cyber Policy Center.
Moreover, these privacy laws apply only to companies that meet certain criteria for size and other characteristics. They do not necessarily apply to researchers like those responsible for creating and curating DataComp CommonPool. And even state laws that do address privacy, like the California Consumer Privacy Act, have carve-outs for “publicly available” information.

Machine-learning researchers have long operated on the principle that if it’s available on the internet, it is public and no longer private information, but Hong, Agnew, and their colleagues hope that their research challenges this assumption. “What we found is that ‘publicly available’ includes a lot of stuff that a lot of people might consider private—résumés, photos, credit card numbers, various IDs, news stories from when you were a child, your family blog. These are probably not things people want to just be used anywhere, for anything,” says Hong.

Hopefully, Schaake says, this research “will raise alarm bells and create change.”

Read More »

Salesforce used AI to cut support load by 5% — but the real win was teaching bots to say ‘I’m sorry’

Salesforce has crossed a significant threshold in the enterprise AI race, surpassing 1 million autonomous agent conversations on its help portal — a milestone that offers a rare glimpse into what it takes to deploy AI agents at massive scale and the surprising lessons learned along the way.

The achievement, confirmed by company executives in exclusive interviews with VentureBeat, comes just nine months after Salesforce launched Agentforce on its Help Portal in October. The platform now resolves 84% of customer queries autonomously, has led to a 5% reduction in support case volume, and enabled the company to redeploy 500 human support engineers to higher-value roles.

But perhaps more valuable than the raw numbers are the hard-won insights Salesforce gleaned from being what executives call “customer zero” for their own AI agent technology — lessons that challenge conventional wisdom about enterprise AI deployment and reveal the delicate balance required between technological capability and human empathy.

“We started really small. We launched basically to a cohort of customers on our Help Portal. It had to be English to start with. You had to be logged in and we released it to about 10% of our traffic,” explains Bernard Shaw, SVP of Digital Customer Success at Salesforce, who led the Agentforce implementation. “The first week, I think there was 126 conversations, if I remember rightly. So me and my team could read through each one of them.”

Read More »

The Download: how to run an LLM, and a history of “three-parent babies”

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How to run an LLM on your laptop

In the early days of large language models, there was a high barrier to entry: it used to be impossible to run anything useful on your own computer without investing in pricey GPUs. But researchers have had so much success in shrinking down and speeding up models that anyone with a laptop, or even a smartphone, can now get in on the action.

For people who are concerned about privacy, want to break free from the control of the big LLM companies, or just enjoy tinkering, local models offer a compelling alternative to ChatGPT and its web-based peers. Here’s how to get started running a useful model from the safety and comfort of your own computer. Read the full story.

—Grace Huckins

This story is part of MIT Technology Review’s How To series, helping you get things done. You can check out the rest of the series here.
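For readers who want a concrete starting point, here is a minimal sketch of one common route to local inference: running a quantized model with the llama-cpp-python bindings. The model file path is a placeholder (any small instruction-tuned GGUF model downloaded to disk would do), and the settings are illustrative assumptions, not recommendations from the article.

```python
# Minimal local-LLM sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF path below is a placeholder; point it at any small instruction-tuned
# model you have downloaded. Runs on CPU, so it works on an ordinary laptop.
from llama_cpp import Llama

llm = Llama(
    model_path="models/small-instruct.Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,      # context window in tokens
    n_threads=8,     # CPU threads; tune for your machine
    verbose=False,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In two sentences, why run an LLM locally?"}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

Smaller quantized models trade some quality for the privacy and control the story describes, which is exactly the trade-off most laptop setups make.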
A brief history of “three-parent babies”
This week we heard that eight babies have been born in the UK following an experimental form of IVF that involves DNA from three people. The approach was used to prevent women with genetic mutations from passing mitochondrial diseases to their children.

But these eight babies aren’t the first “three-parent” children out there. Over the last decade, several teams have been using variations of this approach to help people have babies. But the procedure is not without controversy. Read the full story.

—Jessica Hamzelou

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 OpenAI has launched ChatGPT Agent
It undertakes tasks on your behalf by building its own “virtual computer.” (The Verge)
+ It may take a while to actually complete them. (Wired $)
+ Are we ready to hand AI agents the keys? (MIT Technology Review)

2 The White House is going after “woke AI”
It’s preparing an executive order preventing companies with “liberal bias” in their models from landing federal contracts. (WSJ $)
+ Why it’s impossible to build an unbiased AI language model. (MIT Technology Review)

3 A new law in Russia criminalizes certain online searches
Looking up LGBT content, for example, could land Russians in big trouble. (WP $)
+ Dozens of Russian regions have been hit with cellphone internet shutdowns. (ABC News)

4 Elon Musk wants to detonate SpaceX rockets over Hawaii’s waters
Even though the proposed area is a sacred Hawaiian religious site. (The Guardian)
+ Rivals are rising to challenge the dominance of SpaceX. (MIT Technology Review)

5 Meta’s privacy violation trial is over
The shareholders suing Mark Zuckerberg and other officials have settled for a (likely very hefty) payout. (Reuters)

6 Inside ICE’s powerful facial recognition app
Mobile Fortify can check a person’s face against a database of 200 million images. (404 Media)
+ The department has unprecedented access to Medicaid data, too. (Wired $)

7 DOGE has left federal workers exhausted and anxious
Six months in, workers are struggling to cope with the fallout. (Insider $)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

8 Netflix has used generative AI in a show for the first time
To cut costs, apparently. (BBC)

9 Does AI really spell the end of loneliness?
Virtual companions aren’t always what they’re cracked up to be. (New Yorker $)
+ The AI relationship revolution is already here. (MIT Technology Review)

10 Flip phones are back with a vengeance
At least they’re more interesting to look at than a conventional smartphone. (Vox)
+ Triple-folding phones might be a bridge too far, though. (The Verge)
Quote of the day
“It is far from perfect.”

—Kevin Weil, OpenAI’s chief product officer, acknowledges that its new agent still requires a lot of work, Bloomberg reports.

One more thing

GMOs could reboot chestnut trees

Living as long as a thousand years, the American chestnut tree once dominated parts of the Eastern forest canopy, with many Native American nations relying on them for food. But by 1950, the tree had largely succumbed to a fungal blight probably introduced by Japanese chestnuts.

As recently as last year, it seemed the 35-year effort to revive the American chestnut might grind to a halt. Now, American Castanea, a new biotech startup, has created more than 2,500 transgenic chestnut seedlings—likely the first genetically modified trees to be considered for federal regulatory approval as a tool for ecological restoration. Read the full story.

—Anya Kamenetz
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ This stained glass embedded into a rusted old Porsche is strangely beautiful.
+ Uh oh: here comes the next annoying group of people to avoid, the Normans.
+ I bet Dolly Parton knows a thing or two about how to pack for a trip.
+ Aww—orcas have been known to share food with humans in the wild.

Read More »



SLB Sees ‘Constructive’ Second Half of 2025

SLB, the world’s largest oil-services provider, sees resiliency in the industry and remains constructive about the second half of 2025 despite uncertainties in customer demand.

“Despite pockets of activity adjustments in key markets, the industry has shown that it can operate through uncertainty without a significant drop in upstream spending,” SLB Chief Executive Officer Olivier Le Peuch said in a statement Friday. “This has been driven by the combination of capital discipline and the need for energy security.”

His comments came as SLB posted second-quarter adjusted profit of 74 cents a share, exceeding analyst expectations. SLB, which gets about 82% of its revenue from international markets, has mitigated some of the negative impacts facing smaller peers that are more levered to domestic production. The company is seen as a gauge for the health of the sector through its broad footprint in all major crude-producing theaters.

US oil drilling has dropped 12% this year to the lowest since September 2021, driven by demand concerns triggered by US President Donald Trump’s tariff proposals and faster-than-expected increases in OPEC+ production. Government forecasters have trimmed domestic crude-production estimates for 2025, signaling a lower-for-longer activity environment for service companies. “Looking ahead, assuming commodity prices stay range bound, we remain constructive for the second half of the year,” Le Peuch said.

Traders and analysts will also be listening closely to SLB’s quarterly conference call Friday for more details on the completion of the merger with ChampionX Corp., which the company announced Wednesday, according to a statement. SLB is a “leader in digital services for the energy industry and could soon become a leader in production services and equipment post the close of the acquisition,” Citigroup Global Markets Inc. analyst Scott Gruber wrote in a note to clients. SLB is the first of the biggest oilfield contractors

Read More »

How OpenAI’s red team made ChatGPT agent into an AI fortress

Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now

Called the “ChatGPT agent,” this new feature is an optional mode that ChatGPT paying subscribers can engage by clicking “Tools” in the prompt entry box and selecting “agent mode,” at which point they can ask ChatGPT to log into their email and other web accounts; write and respond to emails; download, modify, and create files; and do a host of other tasks on their behalf, autonomously, much like a real person using a computer with their login credentials.

Obviously, this also requires the user to trust the ChatGPT agent not to do anything problematic or nefarious, or to leak their data and sensitive information. It also poses greater risks for a user and their employer than the regular ChatGPT, which can’t log into web accounts or modify files directly.

Keren Gu, a member of the Safety Research team at OpenAI, commented on X that “we’ve activated our strongest safeguards for ChatGPT Agent. It’s the first model we’ve classified as High capability in biology & chemistry under our Preparedness Framework. Here’s why that matters–and what we’re doing to keep it safe.”

The AI Impact Series Returns to San Francisco – August 5 The next phase of AI is here – are you ready? Join leaders from Block, GSK, and SAP for an exclusive look at how autonomous agents are reshaping enterprise workflows – from real-time decision-making to end-to-end automation. Secure your spot now – space is limited: https://bit.ly/3GuuPLF

So how did OpenAI handle all these security issues?

The red team’s mission

Looking at OpenAI’s ChatGPT agent system card, the “red team” employed by the company to test the feature faced a challenging mission: specifically, 16

Read More »

Meet AnyCoder, a new Kimi K2-powered tool for fast prototyping and deploying web apps

Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now

AnyCoder, an open-source web app development environment developed by Hugging Face ML Growth Lead Ahsen Khaliq (@_akhaliq on X), has launched on Hugging Face Spaces. The tool, now available to all users of the AI code-sharing repository Hugging Face, integrates live previews, multimodal input, and one-click deployment — all within a hosted environment, allowing indie creators without much technical expertise, or those working on behalf of clients or large enterprises, to get started “vibe coding” web apps rapidly with the assistance of Hugging Face-hosted AI models. It therefore also acts as an alternative to services such as Lovable, which likewise lets users type in plain English and begin coding apps without formal programming knowledge.

Free vibe coding available to all, powered by Kimi K2

Khaliq built AnyCoder as a personal project within the Hugging Face ecosystem and as “one of the first vibe coding apps” to support Moonshot’s powerful yet small and efficient Kimi K2 model launched last week. AnyCoder’s main functionality allows users to enter plain-text descriptions to generate HTML, CSS, and JavaScript. These are displayed in a live preview pane and can be edited or directly deployed. It also includes example templates for todo apps, dashboards, calculators, and more.

Screenshot of AnyCoder on Hugging Face

Built entirely with Hugging Face’s open-source Python app-building framework Gradio, AnyCoder allows users to describe applications in plain English or upload images, and instantly generate working frontend code. In a direct message conversation with this VentureBeat journalist, Khaliq described it as a “free open source vibe coding app.” However, he also noted
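To give a rough sense of the pattern described above (a Gradio front end that sends a plain-English description to a hosted model and renders the returned HTML), here is a minimal sketch. It is not AnyCoder's actual code; the model ID, prompt wording, and component choices are assumptions made for illustration.

```python
# Minimal prompt-to-frontend sketch in Gradio (not AnyCoder's actual code).
# Assumes gradio and huggingface_hub are installed and HF_TOKEN is set in the
# environment; the model ID below is an assumption for illustration.
import gradio as gr
from huggingface_hub import InferenceClient

client = InferenceClient()

def generate_frontend(description: str):
    """Ask a hosted chat model for a single-file HTML page and preview it."""
    response = client.chat_completion(
        messages=[
            {"role": "system", "content": "Return a complete single-file HTML page only."},
            {"role": "user", "content": description},
        ],
        model="moonshotai/Kimi-K2-Instruct",  # assumed model ID
        max_tokens=2048,
    )
    html = response.choices[0].message.content
    return html, html  # raw code for inspection, plus a rendered preview

demo = gr.Interface(
    fn=generate_frontend,
    inputs=gr.Textbox(label="Describe the app you want"),
    outputs=[gr.Code(label="Generated code"), gr.HTML(label="Live preview")],
)

if __name__ == "__main__":
    demo.launch()
```

Per the article, the real app adds image input, example templates, and one-click deployment to Spaces, but the core loop (describe, generate, preview) is essentially this shape.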

Read More »

Germany’s Top Performing Smallcap Surges Again

A breakneck rally in the shares of a German pipeline builder accelerated this week after the company won a role plugging LNG terminals on the coast into the nation’s gas grid.

Friedrich Vorwerk Group SE’s stock is up 24% since last Friday’s close, the biggest gain on Germany’s small-cap SDAX index. The bulk of the advance came after it secured a contract valued in the hundreds of millions of euros to build an 86-kilometer pipeline with a consortium of companies.

It’s an example of how European firms are benefiting from the wall of money Chancellor Friedrich Merz has unleashed to overhaul the nation’s infrastructure and military. The contract is the latest deal to help revive the fortunes of the builder of underground gas, electricity and hydrogen pipes, sending its stock price to a record high.

It’s “more like an add-on. It’s just nice to have,” said Nikolas Demeter, an analyst at B Metzler Seel Sohn & Co AG. For now, the company still has three buy ratings out of five from analysts. That may change, because analysts’ price targets trail the company’s current share price after this week’s contract win pushed its gain for the year past 200%. The shares now trade at almost 32 times forward blended earnings, compared with about 14 times for the SDAX index and the Stoxx 600 Index, the European benchmark.

Labor Challenge

Leon Mühlenbruch at mwb research AG, who has a valuation-driven sell rating on the stock, warns that Vorwerk’s full order book could become a problem. “Capacity constraints are becoming increasingly relevant,” Mühlenbruch said. “Further growth depends on expanding that capacity, a challenge due to the persistent shortage of specialized skilled labor.”

But for now the Tostedt-based company is on a roll, and its rebound in recent years has been dramatic. After an initial

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenter and Energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE