Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy:

Lorem Ipsum is simply dummy text of the printing and typesetting industry.


Featured Articles

BP to Exit $36B Australian Green Hydrogen Hub

BP Plc will exit its role in a massive green hydrogen production facility planned in Australia as the British oil major refocuses on the fossil fuels that drive its profits. The company told its partners in the Australian Renewable Energy Hub (AREH) that it plans to leave the project as both operator and equity holder, according to a statement from a BP spokesperson. It’s the latest setback for green hydrogen, a fuel once touted as a key way for Big Oil to profit from the energy transition but which has so far proved too costly for mass production and consumption.

The AREH project company will take over as operator over the coming months with support from founding partner InterContinental Energy, according to an AREH spokesperson. BP’s decision to exit the project doesn’t reflect the opportunity the hub presents to decarbonize the Pilbara and support the creation of a green iron industry, they said.

BP’s entry into the project – once estimated to cost about $36 billion – came at a time when the company sought to rapidly build up a business in low-carbon energy and shrink its oil business. But after years of stock underperformance compared with its peers and the departure of the plan’s architect – Chief Executive Officer Bernard Looney – BP has refined its strategy to focus more squarely on profits than green goals.

The company is far from alone in leaving its green hydrogen ambitions behind. Scores of companies that once saw the fuel as the next big thing in energy have cut back plans as hoped-for cost declines failed to materialize. Also on Thursday, Fortescue Ltd. said it would abandon plans for a $550 million Arizona Hydrogen Project in the US and a $150 million PEM50 Project in Gladstone, Australia – resulting in a pretax writedown of $150 million. Meanwhile, Woodside

Read More »

Sysadmins ready for AI, but skepticism abounds

The report shows that AI is typically being deployed in high-volume, repetitive tasks. Troubleshooting and log analysis lead the way, with 41% and 35% of sysadmins, respectively, reporting use of AI in those areas, up significantly from 2024.

Respondents reported that the following tasks are most likely to be automated with AI in the next two years:

Vulnerability prioritization: 67%
Monitoring of server CPU and memory utilization: 67%
Detecting and remediating incidents: 66%
Patch management processes: 66%
Security controls and compliance analysis: 65%
Performing post-incident reviews: 57%
Providing IT staff with guidance and training: 55%
Troubleshooting: 55%

In contrast, sysadmins surveyed reported that the following tasks are less likely to be automated with AI in the next two years:

Managing SSO and passwords: 48%
Administering user permissions: 44%
Managing files: 38%
Defining system usage policies and procedures: 34%
Providing end users with first-level IT support: 30%
Installing and maintaining software: 29%
Troubleshooting: 24%
Performing post-incident reviews: 23%
Providing IT staff with guidance and training: 23%

AI concerns persist

Sysadmins indicated they worry about falling behind peers. Some 40% expressed concern about being left behind by more AI-literate peers, a slight improvement from 45% in 2024. The top three reasons sysadmins are hesitant to embrace AI are:

79% worry about accuracy and reliability.
78% cite data privacy and security risks.
60% fear loss of control over automated actions.

As AI adoption increases, so does the visibility of its shortcomings. Troubleshooting, the most common use case, is also where AI struggles the most: 30% of sysadmins reported failures in this area, nearly double the figure from last year’s survey. Another 20% of sysadmins said AI implementations have caused operational disruptions. Other areas where AI isn’t yet performing up to expectations include log analysis, with 12% reporting failures, and first-level

Read More »

AI Project Stargate struggles to get off the ground

Analysts aren’t surprised at the news. “Big IT projects have a long history of dramatically overpromising and it appears that trend is quickly moving into the world of AI data center-based projects as well. The Stargate project, in particular, also seems to have more of a political bent to it than many other projects so that’s likely complicating matters as well,” said Bob O’Donnell, president and chief analyst with TECHnalysis Research. “There’s little doubt we will see massive investments by many different organizations to build out AI infrastructure here in the US, but I’m not convinced that individual projects will end up mattering that much in the long run,” he added. “I have always been skeptical about the huge number that was projected. In the hundreds of billions,” said Patrick Moorhead, CEO & chief analyst with Moor Insights & Strategy. “The only problem was that only a few billion in new funding was raised. And now there’s strife between OpenAI and SoftBank. To be fair, Oracle is part of Stargate now and OpenAI will soak up many GPUs in the Texas facility, but this was already in process when the Stargate announcement happened.”

Read More »

It’s Qwen’s summer: new open source Qwen3-235B-A22B-Thinking-2507 tops OpenAI, Gemini reasoning models on key benchmarks

If the AI industry had an equivalent to the recording industry’s “song of the summer” — a hit that catches on in the warmer months in the Northern Hemisphere and is heard playing everywhere — the clear honoree for that title would be Alibaba’s Qwen Team. Over just the past week, the frontier model AI research division of the Chinese e-commerce behemoth has released not one, not two, not three, but four new open source generative AI models that offer record-setting benchmarks, besting even some leading proprietary options.

Last night, Qwen Team capped it off with the release of Qwen3-235B-A22B-Thinking-2507, its updated reasoning large language model (LLM), which takes longer to respond than a non-reasoning or “instruct” LLM, engaging in “chains of thought,” self-reflection, and self-checking that ideally yield more correct and comprehensive responses on more difficult tasks. Indeed, the new Qwen3-Thinking-2507, as we’ll call it for short, now leads or closely trails top-performing models across several major benchmarks.

As AI influencer and news aggregator Andrew Curran wrote on X: “Qwen’s strongest reasoning model has arrived, and it is at the frontier.” On the AIME25 benchmark, designed to evaluate problem-solving ability in mathematical and logical contexts, Qwen3-Thinking-2507 posted a score of 92.3, closely trailing OpenAI’s o4-mini (92.7) and Gemini-2.5

Read More »

Unexpected costs drive on-premises computing

“This reversal challenges the assumption that cloud is always the end goal, and highlights growing concerns about cost predictability, control, and performance in shared cloud environments,” MacDonald told Network World.

The survey found that 86% of IT professionals report their organizations currently use dedicated servers, with government (93%), information technology (91%), and finance (90%) the industries most likely to do so. Fifty-three percent of IT professionals still view dedicated servers as essential, and nearly half (45%) expect their role to grow by 2030. The majority of respondents use dedicated servers for databases, with file storage and web hosting also in the mix.

Another finding, related to customization, is that 32% of IT professionals believe part of their current cloud spend is wasted on features or capacity they don’t fully use. Cloud service providers are notorious for bundling features whether customers want them or not, without giving them the choice to opt out and cut bills.

On-premises computing is not just holding steady but growing. More than one-third of respondents (34%) said their organizations increased spending on dedicated servers in 2024, compared to 24% in 2023 and 26% in 2022.

IT managers are finding their biggest challenge is explaining the value of dedicated infrastructure to management. Nearly one-third (31%) said they don’t feel confident explaining infrastructure choices to nontechnical leadership, and 70% of IT professionals surveyed believe that executive leadership underestimates the role dedicated servers play in a modern tech stack.

Read More »

Chevron to Cut Positions as Part of Hess Integration

Chevron will “consolidate or eliminate some positions” as part of its integration with Hess Corporation, a Chevron spokesperson told Rigzone. “Chevron completed the merger with Hess Corporation on July 18,” the spokesperson said. “We are working quickly to integrate the Hess workforce and are focused on maintaining safe and reliable operations throughout the transition period,” the spokesperson added. “As part of the integration, we will consolidate or eliminate some positions. As required by the WARN Act, Chevron has provided notice of a planned workforce reduction to appropriate state and local government representatives for Downtown Houston and North Dakota,” the spokesperson went on to state.

When asked by Rigzone to confirm how many positions will be affected, the Chevron spokesperson said, “for the WARN Notices issued on July 21, Chevron anticipates a reduction of approximately 575 employees in Downtown Houston and 70 employees in North Dakota”. The spokesperson told Rigzone that “these are difficult decisions which … [the company does] not make lightly”. “We understand the impact this news may have on employees, their families and the communities where we operate,” the spokesperson said. “Our priority is to support our employees through this transition. We are offering severance benefits and outplacement support,” the Chevron representative added.

In a statement posted on its website on July 18, Chevron announced that it had completed its acquisition of Hess Corporation following the satisfaction of all necessary closing conditions, including a favorable arbitration outcome regarding Hess’ offshore Guyana asset. “This merger of two great American companies brings together the best in the industry,” Chevron Chairman and CEO Mike Wirth said in that statement. “The combination enhances and extends our growth profile well into the next decade, which we believe will drive greater long-term value to shareholders,” he added. In this statement, former Hess Corporation CEO

Read More »


GM, Redwood Materials sign deal to deploy energy-storage batteries

Dive Brief: General Motors and battery recycler Redwood Materials have signed a non-binding memorandum of understanding to accelerate the deployment of stationary energy storage systems built using both new modules and second-life batteries from the automaker’s electric vehicles, according to a July 16 press release. The plans are part of a new business unit launched by Redwood Materials in June named Redwood Energy that’s focused on assembling and deploying low-cost stationary energy-storage systems to help meet growing power demands of AI data centers and other applications. “Electricity demand is climbing, and it’s only going to accelerate,” Kurt Kelty, VP of batteries, propulsion, and sustainability at GM, said in the release. “To meet that challenge, the U.S. needs energy storage solutions that can be deployed quickly, economically, and made right here at home.”

Dive Insight: Redwood Materials plans to establish a domestic supply chain to manufacture batteries to support energy storage applications via its Redwood Energy unit. The company currently repurposes around 20 gigawatt hours of batteries annually, the equivalent of 250,000 EVs or roughly 90% of all lithium-ion batteries and battery materials currently recycled in North America, according to its website.

GM is already providing the battery recycling company with used batteries to help power what Redwood calls “the largest second-life battery development in the world and the largest microgrid in North America” at an installation in Sparks, Nevada, per the release. The electricity produced by the microgrid is being used by AI infrastructure company Crusoe. “Electricity demand is accelerating at an unprecedented pace, driven by AI and the rapid electrification of everything from transportation to industry,” said JB Straubel, founder and CEO of Redwood Materials, in the release. “Both GM’s second-life EV batteries and new batteries can be deployed in Redwood’s energy storage systems, delivering fast,
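As a quick sanity check on the release’s equivalence (the average pack size is an inference, not a figure from the release), 20 GWh spread across 250,000 EVs works out to 80 kWh per vehicle, a plausible average EV pack:

```python
# Back out the implied average pack size from Redwood's figures:
# 20 GWh of batteries repurposed annually ~ 250,000 EVs.
gwh_recycled = 20
ev_count = 250_000

kwh_per_ev = gwh_recycled * 1_000_000 / ev_count  # GWh -> kWh
print(kwh_per_ev)  # 80.0 kWh per vehicle
```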

Read More »

Data centers seek flexible power solutions for resilience, sustainability

AI data centers in the United States could consume 33.8 gigawatts of power by 2030, or about 3% of the country’s generating capacity, Schneider Electric said earlier this year. Some AI power demand projections are even more aggressive, like a 2024 RAND Corporation forecast cited by Schneider that sees 130 GW of data center demand in 2030. With data center projects worth at least $64 billion delayed or blocked by local opposition, and state policymakers and regulators placing restrictions on development, a backlash appears to be brewing.

Data center opponents cite a litany of concerns, from noise to water pollution, but some of the most frequently cited center on the local impacts of onsite power generation and broader effects on the electric grid. Earlier this month, Elon Musk’s xAI overcame local opposition to secure an air permit for a fleet of gas generators at its AI training center in Memphis, Tenn. Texas lawmakers recently passed a law requiring new data centers to disconnect from the grid during periods of high power demand. Several states have implemented or are considering special utility tariffs requiring data center companies to cover the costs of grid upgrades they benefit from. North Carolina utility regulators plan a technical conference in October on data centers’ potential impacts on power reliability.

Data centers’ appetite for power and the growing recognition that unchecked growth would be a problem for the grid could cause short-term growing pains for the industry, said Kelcy Pegler, CEO of FlexGen, a battery management software company. “From a societal perspective, we are underestimating the impact that data centers will have on the grid,” Pegler said in an interview. “We’re going to have a break-in period where data center ambitions will have trouble coming to fruition.” Earlier this year, FlexGen partnered with electrical contractor Rosendin
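The 3% figure can be back-solved into an implied total US generating capacity (a rough consistency check, not a number from the article):

```python
# Schneider Electric projection: 33.8 GW of AI data center demand
# by 2030 ~ 3% of US generating capacity. Back-solve the total.
ai_demand_gw = 33.8
share_of_capacity = 0.03

implied_capacity_gw = ai_demand_gw / share_of_capacity
print(round(implied_capacity_gw))  # ~1127 GW, in line with roughly 1.2 TW of US capacity
```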

Read More »

Saskatchewan, Ontario, Alberta Sign MoU to Facilitate O&G Transportation

Three Canadian provinces are collaborating to facilitate the transportation of oil and gas across the country. Saskatchewan Premier Scott Moe, Ontario Premier Doug Ford, and Alberta Premier Danielle Smith signed a memorandum of understanding (MoU) to coordinate the safe transportation and export of Western Canadian oil, natural gas and critical minerals to refineries, seaports and storage facilities across Canada and beyond, according to a statement from the Canadian government. The MoU establishes a collaborative framework to explore multiple pipeline and rail corridors, and expansion of processing hubs for critical minerals. The framework aims to create new and critical avenues to reach domestic and international markets, the statement said. The agreement will help strengthen interprovincial trade by linking Saskatchewan, Ontario, and Alberta through shared infrastructure development and coordinated market strategies, according to the statement. “We are sending a clear signal that Canada’s energy future will be built by Canadians, for Canadians,” Moe said. “This agreement commits our provinces to work together to unlock new markets, shore up our supply chains from mine to port and advocate for the federal reforms our industry needs. By advancing pipelines, rail connections and critical-mineral processing capacity, we are safeguarding thousands of jobs, strengthening our energy security and fostering sustainable growth”. “As the world grapples with President Trump’s unfair tariffs, it is more important than ever to build a resilient and self-reliant economy here at home,” Ford said. “This agreement sends a clear message: Ontario, Alberta and Saskatchewan are ready to get shovels in the ground and move forward on projects that will secure our long-term prosperity”. “We are taking action to grow our economy, build real infrastructure and get major projects moving,” Smith said. 
“Alberta is proud to lead the way in uniting with provinces that share a vision for responsible development, economic freedom and

Read More »

Enbridge to Supply Meta with Power from 600-MW Solar Project in Texas

Enbridge Inc. said it has reached a final investment decision on Clear Fork, a 600-megawatt (MW) solar project in Texas, with Meta Platforms, Inc. signing a long-term contract for 100 percent of the renewable output of the project. Clear Fork is planned to be a utility-scale solar facility located near San Antonio. Construction is underway, and the facility is expected to enter service in the summer of 2027, the company said in a news release. Enbridge’s estimated project cost is $900 million, and the project is expected to be accretive to cash flow and earnings per share starting in 2027, the company said.

“Clear Fork demonstrates the growing demand for renewable power across North America from blue-chip companies who are involved in technology and data center operations,” Enbridge Executive Vice President Matthew Akman said. “Enbridge continues to advance its world-class renewables development portfolio using our financial strength, supply chain reach and construction expertise under a low-risk commercial model that delivers strong competitive returns”. “We are thrilled to partner with Enbridge to bring new renewable energy to Texas and help support our operations with 100 percent clean energy,” Meta Head of Global Energy Urvi Parekh said.

First Nations Stake Acquisition

Earlier in the month, Stonlasec8 Indigenous Alliance Limited, representing 38 Indigenous communities in British Columbia, completed a CAD 715 million ($512.75 million) equity investment to acquire a 12.5 percent ownership interest in Enbridge’s Westcoast natural gas pipeline system, the first to be completed under the program. The acquisition was supported by two separate bond financings by TD Securities Inc., as a bond placement agent, and a syndicate of institutional investors who provided CAD 400 million through a secured bond issuance to facilitate the Indigenous Partnership’s investment, according to a separate statement. The bond financing was backed by a CAD 400 million

Read More »

Canteen Workers at Exxon Refinery Suspend Strike

In a statement sent to Rigzone on Friday, UK union Unite announced that workers employed by the Compass Group at ExxonMobil’s Fawley oil refinery who were due to take strike action this weekend have suspended their industrial action. The statement highlighted that the workers were due to go on strike from July 25 until July 28 but noted that they will now not take industrial action, in order to consult on a pay offer. Unite added that strike action scheduled for August 1-4 will go ahead as scheduled if workers reject the current pay offer. “Unite has paused its industrial action to allow our members to consult on a pay offer,” Unite Regional Officer Rebeca Johns-Lawrence said in the statement. “If the offer is rejected then the strikes planned for early next month will go ahead as planned,” Unite added.

Rigzone has contacted Compass Group and ExxonMobil for comment on Unite’s statement. Rigzone has also contacted Bram Frankhuijzen, EMEA Procurement Director at CBRE Global Workplace Solutions (GWS) EMEA, for comment on Unite’s statement. ExxonMobil previously directed Rigzone to Frankhuijzen, describing him as the staff’s employer. At the time of writing, none of the above have responded to Rigzone.

In a statement sent to Rigzone last week, Unite revealed that canteen workers employed by Compass Group at ExxonMobil’s Fawley complex in the UK were due to strike. “Fawley workers will be famished when the oil refinery’s canteen staff strike over poverty pay,” Unite noted in that statement. “The predominantly female workers, employed by catering outsourcer Compass, are paid only slightly more than the minimum wage,” it added. “They are demanding a pay rise that reflects the rising cost of living and ensures the gap between their wages and the absolute legal minimum does not shrink,” it went on to state. Unite

Read More »

Karoon CEO Julian Fowles Stepping Down; Search for Replacement Starts

Karoon Energy Ltd said that its Managing Director and CEO Julian Fowles will be leaving the company by mid-2026. The move follows the company’s decision to relocate key corporate teams and roles to Brazil and the USA, Karoon said in a news release. The company said its board, supported by an international search firm, is leading a global search process to appoint a Houston-based CEO / managing director to succeed Fowles. Fowles will remain in his role until the appointment of the new CEO / managing director or the end of the year. He will then serve a notice period through to mid-2026, during which he will be available to provide continuity, if required, the company said. Karoon Chair Peter Botten said, “Following discussions with the Board, Julian will not be relocating as part of the planned transition of key corporate roles, including senior management, from Melbourne to Brazil and the USA”. “The decision to relocate these roles has not been taken lightly. The Board expects that this change, which includes simplifying Karoon’s structure, will increase efficiency and facilitate collaboration between the business units in Brazil and the USA. It will also reduce duplication and allow the Company to source high quality, local talent in our operational locations. The relocation of our corporate teams and roles, which has been carefully planned, will be phased over a period of 12 to 18 months to ensure that a meticulous handover of roles and responsibilities can be undertaken,” Botten continued. “On behalf of the Board, I would like to sincerely thank Julian for all his hard work over the past five years. Joining Karoon in 2020, Julian was instrumental in developing and delivering Karoon’s 2021 Strategic Plan. This included the successful Baúna intervention campaign, the development of the Patola field and the strategic Who

Read More »

LG rolls out new AI services to help consumers with daily tasks

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More LG jumped on the AI bandwagon today with a new set of AI services to help consumers in their daily tasks at home, in the car and in the office. The aim of LG’s CES 2025 press event was to show how AI will work throughout a day of someone’s life, with the goal of redefining the concept of space, said William Joowan Cho, CEO of LG Electronics at the event. The presentation showed LG is fully focused on bringing AI into just about all of its products and services. Cho referred to LG’s AI efforts as “affectionate intelligence,” and he said it stands out from other strategies with its human-centered focus. The strategy focuses on three things: connected devices, capable AI agents and integrated services. One of the things the company announced was a strategic partnership with Microsoft on AI innovation, where the companies pledged to join forces to shape the future of AI-powered spaces. One of the outcomes is that Microsoft’s Xbox Ultimate Game Pass will appear via Xbox Cloud on LG’s TVs, helping LG catch up with Samsung in offering cloud gaming natively on its TVs. LG Electronics will bring the Xbox App to select LG smart TVs. That means players with LG Smart TVs will be able to explore the Gaming Portal for direct access to hundreds of games in the Game Pass Ultimate catalog, including popular titles such as Call of Duty: Black Ops 6, and upcoming releases like Avowed (launching February 18, 2025). Xbox Game Pass Ultimate members will be able to play games directly from the Xbox app on select LG Smart TVs through cloud gaming. With Xbox Game Pass Ultimate and a compatible Bluetooth-enabled

Read More »

Big tech must stop passing the cost of its spiking energy needs onto the public

Julianne Malveaux is an MIT-educated economist, author, educator and political commentator who has written extensively about the critical relationship between public policy, corporate accountability and social equity.  The rapid expansion of data centers across the U.S. is not only reshaping the digital economy but also threatening to overwhelm our energy infrastructure. These data centers aren’t just heavy on processing power — they’re heavy on our shared energy infrastructure. For Americans, this could mean serious sticker shock when it comes to their energy bills. Across the country, many households are already feeling the pinch as utilities ramp up investments in costly new infrastructure to power these data centers. With costs almost certain to rise as more data centers come online, state policymakers and energy companies must act now to protect consumers. We need new policies that ensure the cost of these projects is carried by the wealthy big tech companies that profit from them, not by regular energy consumers such as family households and small businesses. According to an analysis from consulting firm Bain & Co., data centers could require more than $2 trillion in new energy resources globally, with U.S. demand alone potentially outpacing supply in the next few years. This unprecedented growth is fueled by the expansion of generative AI, cloud computing and other tech innovations that require massive computing power. Bain’s analysis warns that, to meet this energy demand, U.S. utilities may need to boost annual generation capacity by as much as 26% by 2028 — a staggering jump compared to the 5% yearly increases of the past two decades. This poses a threat to energy affordability and reliability for millions of Americans. Bain’s research estimates that capital investments required to meet data center needs could incrementally raise consumer bills by 1% each year through 2032. That increase may

Read More »

Final 45V hydrogen tax credit guidance draws mixed response

Dive Brief: The final rule for the 45V clean hydrogen production tax credit, which the U.S. Treasury Department released Friday morning, drew mixed responses from industry leaders and environmentalists. Clean hydrogen development within the U.S. ground to a halt following the release of the initial guidance in December 2023, leading industry participants to call for revisions that would enable more projects to qualify for the tax credit. While the final rule makes “significant improvements” to Treasury’s initial proposal, the guidelines remain “extremely complex,” according to the Fuel Cell and Hydrogen Energy Association. FCHEA President and CEO Frank Wolak and other industry leaders said they look forward to working with the Trump administration to refine the rule. Dive Insight: Friday’s release closed what Wolak described as a “long chapter” for the hydrogen industry. But industry reaction to the final rule was decidedly mixed, and it remains to be seen whether the rule — which could be overturned as soon as Trump assumes office — will remain unchanged. “The final 45V rule falls short,” Marty Durbin, president of the U.S. Chamber’s Global Energy Institute, said in a statement. “While the rule provides some of the additional flexibility we sought, … we believe that it still will leave billions of dollars of announced projects in limbo. The incoming Administration will have an opportunity to improve the 45V rules to ensure the industry will attract the investments necessary to scale the hydrogen economy and help the U.S. lead the world in clean manufacturing.” But others in the industry felt the rule would be sufficient for ending hydrogen’s year-long malaise. “With this added clarity, many projects that have been delayed may move forward, which can help unlock billions of dollars in investments across the country,” Kim Hedegaard, CEO of Topsoe’s Power-to-X, said in a statement. Topsoe

Read More »

Texas, Utah, Last Energy challenge NRC’s ‘overburdensome’ microreactor regulations

Dive Brief: A 69-year-old Nuclear Regulatory Commission rule underpinning U.S. nuclear reactor licensing exceeds the agency’s statutory authority and creates an unreasonable burden for microreactor developers, the states of Texas and Utah and advanced nuclear technology company Last Energy said in a lawsuit filed Dec. 30 in federal court in Texas. The plaintiffs asked the Eastern District of Texas court to exempt Last Energy’s 20-MW reactor design and research reactors located in the plaintiff states from the NRC’s definition of nuclear “utilization facilities,” which subjects all U.S. commercial and research reactors to strict regulatory scrutiny, and order the NRC to develop a more flexible definition for use in future licensing proceedings. Regardless of its merits, the lawsuit underscores the need for “continued discussion around proportional regulatory requirements … that align with the hazards of the reactor and correspond to a safety case,” said Patrick White, research director at the Nuclear Innovation Alliance. Dive Insight: Only three commercial nuclear reactors have been built in the United States in the past 28 years, and none are presently under construction, according to a World Nuclear Association tracker cited in the lawsuit. “Building a new commercial reactor of any size in the United States has become virtually impossible,” the plaintiffs said. “The root cause is not lack of demand or technology — but rather the [NRC], which, despite its name, does not really regulate new nuclear reactor construction so much as ensure that it almost never happens.” More than a dozen advanced nuclear technology developers have engaged the NRC in pre-application activities, which the agency says help standardize the content of advanced reactor applications and expedite NRC review. Last Energy is not among them.  The pre-application process can itself stretch for years and must be followed by a formal application that can take two

Read More »

Qualcomm unveils AI chips for PCs, cars, smart homes and enterprises

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Qualcomm unveiled AI technologies and collaborations for PCs, cars, smart homes and enterprises at CES 2025. At the big tech trade show in Las Vegas, Qualcomm Technologies showed how it’s using AI capabilities in its chips to drive the transformation of user experiences across diverse device categories, including PCs, automobiles, smart homes and into enterprises. The company unveiled the Snapdragon X platform, the fourth platform in its high-performance PC portfolio, the Snapdragon X Series, bringing industry-leading performance, multi-day battery life, and AI leadership to more of the Windows ecosystem. Qualcomm has talked about how its processors are making headway grabbing share from the x86-based AMD and Intel rivals through better efficiency. Qualcomm’s neural processing unit delivers about 45 TOPS, a key benchmark for AI PCs. The Snapdragon X family of AI PC processors. Additionally, Qualcomm Technologies showcased continued traction of the Snapdragon X Series, with over 60 designs in production or development and more than 100 expected by 2026. Snapdragon for vehicles Qualcomm demoed chips that are expanding its automotive collaborations. It is working with Alpine, Amazon, Leapmotor, Mobis, Royal Enfield, and Sony Honda Mobility, which look to Snapdragon Digital Chassis solutions to drive AI-powered in-cabin and advanced driver assistance systems (ADAS). Qualcomm also announced continued traction for its Snapdragon Elite-tier platforms for automotive, highlighting its work with Desay, Garmin, and Panasonic for Snapdragon Cockpit Elite. Throughout the show, Qualcomm will highlight its holistic approach to improving comfort and focusing on safety with demonstrations on the potential of the convergence of AI, multimodal contextual awareness, and cloud-based services.
Attendees will also get a first glimpse of the new Snapdragon Ride Platform with integrated automated driving software stack and system definition jointly

Read More »

Oil, Gas Execs Reveal Where They Expect WTI Oil Price to Land in the Future

Executives from oil and gas firms have revealed where they expect the West Texas Intermediate (WTI) crude oil price to be at various points in the future as part of the fourth quarter Dallas Fed Energy Survey, which was released recently. The average response executives from 131 oil and gas firms gave when asked what they expect the WTI crude oil price to be at the end of 2025 was $71.13 per barrel, the survey showed. The low forecast came in at $53 per barrel, the high forecast was $100 per barrel, and the spot price during the survey was $70.66 per barrel, the survey pointed out. This question was not asked in the previous Dallas Fed Energy Survey, which was released in the third quarter. That survey asked participants what they expect the WTI crude oil price to be at the end of 2024. Executives from 134 oil and gas firms answered this question, offering an average response of $72.66 per barrel, that survey showed. The latest Dallas Fed Energy Survey also asked participants where they expect WTI prices to be in six months, one year, two years, and five years. Executives from 124 oil and gas firms answered this question and gave a mean response of $69 per barrel for the six month mark, $71 per barrel for the year mark, $74 per barrel for the two year mark, and $80 per barrel for the five year mark, the survey showed. Executives from 119 oil and gas firms answered this question in the third quarter Dallas Fed Energy Survey and gave a mean response of $73 per barrel for the six month mark, $76 per barrel for the year mark, $81 per barrel for the two year mark, and $87 per barrel for the five year mark, that

Read More »

How nonprofits and academia are stepping up to salvage US climate programs

Nonprofits are striving to preserve a US effort to modernize greenhouse-gas measurements, amid growing fears that the Trump administration’s dismantling of federal programs will obscure the nation’s contributions to climate change. The Data Foundation, a Washington, DC, nonprofit that advocates for open data, is fundraising for an initiative that will coordinate efforts among nonprofits, technical experts, and companies to improve the accuracy and accessibility of climate emissions information. It will build on an effort to improve the collection of emissions data that former president Joe Biden launched in 2023—and which President Trump nullified on his first day in office.  The initiative will help prioritize responses to changes in federal greenhouse-gas monitoring and measurement programs, but the Data Foundation stresses that it will primarily serve a “long-standing need for coordination” of such efforts outside of government agencies. The new greenhouse-gas coalition is one of a growing number of nonprofit and academic groups that have spun up or shifted focus to keep essential climate monitoring and research efforts going amid the Trump administration’s assault on environmental funding, staffing, and regulations. Those include efforts to ensure that US scientists can continue to contribute to the UN’s major climate report and publish assessments of the rising domestic risks of climate change. Otherwise, the loss of these programs will make it increasingly difficult for communities to understand how more frequent or severe wildfires, droughts, heat waves, and floods will harm them—and how dire the dangers could become. 
Few believe that nonprofits or private industry can come close to filling the funding holes that the Trump administration is digging. But observers say it’s essential to try to sustain efforts to understand the risks of climate change that the federal government has historically overseen, even if the attempts are merely stopgap measures.  If we give up these sources of emissions data, “we’re flying blind,” says Rachel Cleetus, senior policy director with the climate and energy program at the Union of Concerned Scientists. “We’re deliberately taking away the very information that would help us understand the problem and how to address it best.”
Improving emissions estimates The Environmental Protection Agency, the National Oceanic and Atmospheric Administration, the US Forest Service, and other agencies have long collected information about greenhouse gases in a variety of ways. These include self-reporting by industry; shipboard, balloon, and aircraft readings of gas concentrations in the atmosphere; satellite measurements of the carbon dioxide and methane released by wildfires; and on-the-ground measurements of trees. The EPA, in turn, collects and publishes the data from these disparate sources as the Inventory of US Greenhouse Gas Emissions and Sinks. But that report comes out on a two-year lag, and studies show that some of the estimates it relies on could be way off—particularly the self-reported ones. A recent analysis using satellites to measure methane pollution from four large landfills found they produce, on average, six times more emissions than the facilities had reported to the EPA. Likewise, a 2018 study in Science found that the actual methane leaks from oil and gas infrastructure were about 60% higher than the self-reported estimates in the agency’s inventory. The Biden administration’s initiative—the National Strategy to Advance an Integrated US Greenhouse Gas Measurement, Monitoring, and Information System—aimed to adopt state-of-the-art tools and methods to improve the accuracy of these estimates, including satellites and other monitoring technologies that can replace or check self-reported information. The administration specifically sought to achieve these improvements through partnerships between government, industry, and nonprofits. The initiative called for the data collected across groups to be published to an online portal in formats that would be accessible to policymakers and the public. 
Moving toward a system that produces more current and reliable data is essential for understanding the rising risks of climate change and tracking whether industries are abiding by government regulations and voluntary climate commitments, says Ben Poulter, a former NASA scientist who coordinated the Biden administration effort as a deputy director in the Office of Science and Technology Policy. “Once you have this operational system, you can provide near-real-time information that can help drive climate action,” Poulter says. He is now a senior scientist at Spark Climate Solutions, a nonprofit focused on accelerating emerging methods of combating climate change, and he is advising the Data Foundation’s Climate Data Collaborative, which is overseeing the new greenhouse-gas initiative.  Slashed staffing and funding   But the momentum behind the federal strategy deflated when Trump returned to office. On his first day, he signed an executive order that effectively halted it. The White House has since slashed staffing across the agencies at the heart of the effort, sought to shut down specific programs that generate emissions data, and raised uncertainties about the fate of numerous other program components. 

In April, the administration missed a deadline to share the updated greenhouse-gas inventory with the United Nations, for the first time in three decades, as E&E News reported. It eventually did release the report in May, but only after the Environmental Defense Fund filed a Freedom of Information Act request. There are also indications that the collection of emissions data might be in jeopardy. In March, the EPA said it would “reconsider” the Greenhouse Gas Reporting Program, which requires thousands of power plants, refineries, and other industrial facilities to report emissions each year. In addition, the tax and spending bill that Trump signed into law earlier this month rescinds provisions in Biden’s Inflation Reduction Act that provided incentives or funding for corporate greenhouse-gas reporting and methane monitoring.  Meanwhile, the White House has also proposed slashing funding for the National Oceanic and Atmospheric Administration and shuttering a number of its labs. Those include the facility that supports the Mauna Loa Observatory in Hawaii, the world’s longest-running carbon dioxide measuring program, as well as the Global Monitoring Laboratory, which operates a global network of collection flasks that capture air samples used to measure concentrations of nitrous oxide, chlorofluorocarbons, and other greenhouse gases. Under the latest appropriations negotiations, Congress seems set to spare NOAA and other agencies the full cuts pushed by the Trump administration, but that may or may not protect various climate programs within them. As observers have noted, the loss of experts throughout the federal government, coupled with the priorities set by Trump-appointed leaders of those agencies, could still prevent crucial emissions data from being collected, analyzed, and published. 
“That’s a huge concern,” says David Hayes, a professor at the Stanford Doerr School of Sustainability, who previously worked on the effort to upgrade the nation’s emissions measurement and monitoring as special assistant to President Biden for climate policy. It’s not clear “whether they’re going to continue and whether the data availability will drop off.” ‘A natural disaster’ Amid all these cutbacks and uncertainties, those still hoping to make progress toward an improved system for measuring greenhouse gases have had to adjust their expectations: It’s now at least as important to simply preserve or replace existing federal programs as it is to move toward more modern tools and methods. But Ryan Alexander, executive director of the Data Foundation’s Climate Data Collaborative, is optimistic that there will be opportunities to do both. 
She says the new greenhouse-gas coalition will strive to identify the highest-priority needs and help other nonprofits or companies accelerate the development of new tools or methods. It will also aim to ensure that these organizations avoid replicating one another’s efforts and deliver data with high scientific standards, in open and interoperable formats.  The Data Foundation declines to say what other nonprofits will be members of the coalition or how much money it hopes to raise, but it plans to make a formal announcement in the coming weeks. 
Nonprofits and companies are already playing a larger role in monitoring emissions, including organizations like Carbon Mapper, which operates satellites and aircraft that detect and measure methane emissions from particular facilities. The EDF also launched a satellite last year, known as MethaneSAT, that could spot large and small sources of emissions—though it lost power earlier this month and probably cannot be recovered.  Alexander notes that shifting from self-reported figures to observational technology like satellites could not just replace but perhaps also improve on the EPA reporting program that the Trump administration has moved to shut down. Given the “dramatic changes” brought about by this administration, “the future will not be the past,” she says. “This is like a natural disaster. We can’t think about rebuilding in the way that things have been in the past. We have to look ahead and say, ‘What is needed? What can people afford?’” Organizations can also use this moment to test and develop emerging technologies that could improve greenhouse-gas measurements, including novel sensors or artificial intelligence tools, Hayes says.  “We are at a time when we have these new tools, new technologies for measurement, measuring, and monitoring,” he says. “To some extent it’s a new era anyway, so it’s a great time to do some pilot testing here and to demonstrate how we can create new data sets in the climate area.” Saving scientific contributions It’s not just the collection of emissions data that nonprofits and academic groups are hoping to save. Notably, the American Geophysical Union and its partners have taken on two additional climate responsibilities that traditionally fell to the federal government.
The US State Department’s Office of Global Change historically coordinated the nation’s contributions to the UN Intergovernmental Panel on Climate Change’s major reports on climate risks, soliciting and nominating US scientists to help write, oversee, or edit sections of the assessments. The US Global Change Research Program, an interagency group that ran much of the process, also covered the cost of trips to a series of in-person meetings with international collaborators.  But the US government seems to have relinquished any involvement as the IPCC kicks off the process for the Seventh Assessment Report. In late February, the administration blocked federal scientists including NASA’s Katherine Calvin, who was previously selected as a cochair for one of the working groups, from attending an early planning meeting in China. (Calvin was the agency’s chief scientist at the time but was no longer serving in that role as of April, according to NASA’s website.) The agency didn’t respond to inquiries from interested scientists after the UN panel issued a call for nominations in March, and it failed to present a list of nominations by the deadline in April, scientists involved in the process say. The Trump administration also canceled funding for the Global Change Research Program and, earlier this month, fired the last remaining staffers working at the Office of Global Change. In response, 10 universities came together in March to form the US Academic Alliance for the IPCC, in partnership with the AGU, to request and evaluate applications from US researchers. The universities—which include Yale, Princeton, and the University of California, San Diego—together nominated nearly 300 scientists, some of whom the IPCC has since officially selected. The AGU is now conducting a fundraising campaign to help pay for travel expenses. 
Pamela McElwee, a professor at Rutgers who helped establish the academic coalition, says it’s crucial for US scientists to continue participating in the IPCC process. “It is our flagship global assessment report on the state of climate, and it plays a really important role in influencing country policies,” she says. “To not be part of it makes it much more difficult for US scientists to be at the cutting edge and advance the things we need to do.”  The AGU also stepped in two months later, after the White House dismissed hundreds of researchers working on the National Climate Assessment, a congressionally mandated report analyzing the rising dangers of climate change across the country. The AGU and American Meteorological Society together announced plans to publish a “special collection” to sustain the momentum of that effort. “It’s incumbent on us to ensure our communities, our neighbors, our children are all protected and prepared for the mounting risks of climate change,” said Brandon Jones, president of the AGU, in an earlier statement. The AGU declined to discuss the status of the project. Stopgap solution The sheer number of programs the White House is going after will require organizations to make hard choices about what they attempt to save and how they go about it. Moreover, relying entirely on nonprofits and companies to take over these federal tasks is not viable over the long term.  Given the costs of these federal programs, it could prove prohibitive to even keep a minimum viable version of some essential monitoring systems and research programs up and running. Dispersing across various organizations the responsibility of calculating the nation’s emissions sources and sinks also creates concerns about the scientific standards applied and the accessibility of that data, Cleetus says. 
Plus, moving away from the records that NOAA, NASA, and other agencies have collected for decades would break the continuity of that data, undermining the ability to detect or project trends. More basically, publishing national emissions data should be a federal responsibility, particularly for the government of the world’s second-largest climate polluter, Cleetus adds. Failing to calculate and share its contributions to climate change sidesteps the nation’s global responsibilities and sends a terrible signal to other countries.  Poulter stresses that nonprofits and the private sector can do only so much, for so long, to keep these systems up and running. “We don’t want to give the impression that this greenhouse-gas coalition, if it gets off the ground, is a long-term solution,” he says. “But we can’t afford to have gaps in these data sets, so somebody needs to step in and help sustain those measurements.”

Read More »

Anthropic unveils ‘auditing agents’ to test for AI misalignment

Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now When models attempt to get their way or become overly accommodating to the user, it can mean trouble for enterprises. Therefore, it’s essential that, in addition to performance evaluations, organizations conduct alignment testing. However, alignment audits often present two major challenges: scalability and validation. Alignment testing takes considerable time for human researchers, and it’s difficult to be confident that an audit has caught everything.  In a paper, Anthropic researchers said they developed auditing agents that achieved “impressive performance at auditing tasks, while also shedding light on their limitations.” The researchers said these agents, which were created during the pre-deployment testing of Claude Opus 4, improved alignment validation tests and allowed researchers to run multiple parallel audits at scale. Anthropic also released a replication of its audit agents on GitHub.  New Anthropic research: Building and evaluating alignment auditing agents. We developed three AI agents to autonomously complete alignment auditing tasks. In testing, our agents successfully uncovered hidden goals, built safety evaluations, and surfaced concerning behaviors. pic.twitter.com/HMQhMaA4v0 — Anthropic (@AnthropicAI) July 24, 2025 “We introduce three agents that autonomously complete alignment auditing tasks. We also introduce three environments that formalize alignment auditing workflows as auditing games, and use them to evaluate our agents,” the researchers said in the paper.
The three agents they explored were: Tool-using investigator agent for

Read More »

Freed says 20,000 clinicians are using its medical AI transcription ‘scribe,’ but competition is rising fast

Even generative AI critics and detractors have to admit the technology is great for something: transcription. If you’ve joined a meeting on Zoom, Microsoft Teams, Google Meet or other video call platform of your choice at any point in the last year or so, you’ve likely noticed an increased number of AI notetakers joining the conference call as well. Indeed, not only do these platforms all have AI transcription features built in, but there are of course other standalone services like Otter AI (used by VentureBeat along with the Google Workspace suite of Apps), and models such as OpenAI’s new gpt-4o-transcribe and older open-source Whisper, aiOla, and many others with specific niches and roles. One such startup is San Francisco-based Freed AI, co-founded in 2022 by former Facebook engineers Erez Druk and Andrey Bannikov, now its CEO and CTO, respectively. The idea was simple: give doctors and medical professionals a way to automatically transcribe their conversations with patients, capture accurate health-specific terminology, and extract insights and action plans from the conversations without the physician having to lift a finger.

Read More »

Trump’s AI Action Plan is a distraction

On Wednesday, President Trump issued three executive orders, delivered a speech, and released an action plan, all on the topic of continuing American leadership in AI.  The plan contains dozens of proposed actions, grouped into three “pillars”: accelerating innovation, building infrastructure, and leading international diplomacy and security. Some of its recommendations are thoughtful even if incremental, some clearly serve ideological ends, and many enrich big tech companies, but the plan is just a set of recommended actions.  The three executive orders, on the other hand, actually operationalize one subset of actions from each pillar:  One aims to prevent “woke AI” by mandating that the federal government procure only large language models deemed “truth-seeking” and “ideologically neutral” rather than ones allegedly favoring DEI. This action purportedly accelerates AI innovation. A second aims to accelerate construction of AI data centers. A much more industry-friendly version of an order issued under President Biden, it makes available rather extreme policy levers, like effectively waiving a broad swath of environmental protections, providing government grants to the wealthiest companies in the world, and even offering federal land for private data centers. A third promotes and finances the export of US AI technologies and infrastructure, aiming to secure American diplomatic leadership and reduce international dependence on AI systems from adversarial countries. This flurry of actions made for glitzy press moments, including an hour-long speech from the president and onstage signings. But while the tech industry cheered these announcements (which will swell their coffers), they obscured the fact that the administration is currently decimating the very policies that enabled America to become the world leader in AI in the first place.
To maintain America’s leadership in AI, you have to understand what produced it. Here are four specific long-standing public policies that helped the US achieve this leadership—advantages that the administration is undermining.

Investing federal funding in R&D

Generative AI products released recently by American companies, like ChatGPT, were developed with industry-funded research and development. But the R&D that enables today’s AI was actually funded in large part by federal government agencies—like the Defense Department, the National Science Foundation, NASA, and the National Institutes of Health—starting in the 1950s. This includes the first successful AI program in 1956, the first chatbot in 1961, and the first expert systems for doctors in the 1970s, along with breakthroughs in machine learning, neural networks, backpropagation, computer vision, and natural-language processing.
American tax dollars also funded advances in hardware, communications networks, and other technologies underlying AI systems. Public research funding undergirded the development of lithium-ion batteries, micro hard drives, LCD screens, GPS, radio-frequency signal compression, and more in today’s smartphones, along with the chips used in AI data centers, and even the internet itself.

Instead of building on this world-class research history, the Trump administration is slashing R&D funding, firing federal scientists, and squeezing leading research universities. This week’s action plan recommends investing in R&D, but the administration’s actual budget proposes cutting nondefense R&D by 36%. It also proposed actions to better coordinate and guide federal R&D, but coordination won’t yield more funding. Some say that companies’ R&D investments will make up the difference. However, companies conduct research that benefits their bottom line, not necessarily the national interest. Public investment allows broad scientific inquiry, including basic research that lacks immediate commercial applications but sometimes ends up opening massive markets years or decades later. That’s what happened with today’s AI industry.

Supporting immigration and immigrants

Beyond public R&D investment, America has long attracted the world’s best researchers and innovators. Today’s generative AI is based on the transformer model (the T in ChatGPT), first described by a team at Google in 2017. Six of the eight researchers on that team were born outside the US, and the other two are children of immigrants.

This isn’t an exception. Immigrants have been central to American leadership in AI. Of the 42 American companies included in the 2025 Forbes ranking of the 50 top AI startups, 60% have at least one immigrant cofounder, according to an analysis by the Institute for Progress. 
Immigrants also cofounded or head the companies at the center of the AI ecosystem: OpenAI, Anthropic, Google, Microsoft, Nvidia, Intel, and AMD. “Brain drain” is a term that was first coined to describe scientists leaving other countries for the US after World War II—to the Americans’ benefit. Sadly, the trend has begun reversing this year. Recent studies suggest that the US is already losing its AI talent edge through the administration’s anti-immigration actions (including actions taken against AI researchers) and cuts to R&D funding.

Banning noncompetes

Attracting talented minds is only half the equation; giving them freedom to innovate is just as crucial.

Silicon Valley got its name because of mid-20th-century companies that made semiconductors from silicon, starting with the founding of Shockley Semiconductor in 1955. Two years later, a group of employees, the “Traitorous Eight,” quit to launch a competitor, Fairchild Semiconductor. By the end of the 1960s, successive groups of former Fairchild employees had left to start Intel, AMD, and others collectively dubbed the “Fairchildren.”

Software and internet companies eventually followed, again founded by people who had worked for their predecessors. Former Yahoo employees founded WhatsApp, Slack, and Cloudera; the “PayPal Mafia” created LinkedIn, YouTube, and fintech firms like Affirm. Former Google employees have launched more than 1,200 companies, including Instagram and Foursquare. AI is no different. OpenAI’s founders worked at other tech companies, and its alumni have gone on to launch over a dozen AI startups, including notable ones like Anthropic and Perplexity.

This labor fluidity and the innovation it has created were possible in large part, according to many historians, because California’s 1872 constitution has been interpreted to prohibit noncompete agreements in employment contracts—a statewide protection the state originally shared only with North Dakota and Oklahoma. These agreements bind one in five American workers. Last year, the Federal Trade Commission under President Biden moved to ban noncompetes nationwide, but a Trump-appointed federal judge has halted the action. The current FTC has signaled limited support for the ban and may be comfortable dropping it. If noncompetes persist, American AI innovation, especially outside California, will be limited. 
Pursuing antitrust actions

One of this week’s announcements requires the review of FTC investigations and settlements that “burden AI innovation.” During the last administration the agency was reportedly investigating Microsoft’s AI actions, and several big tech companies have settlements that their lawyers surely see as burdensome, meaning this one action could thwart recent progress in antitrust policy. That’s an issue because, in addition to the labor fluidity achieved by banning noncompetes, antitrust policy has also acted as a key lubricant to the gears of Silicon Valley innovation.

Major antitrust cases in the second half of the 1900s, against AT&T, IBM, and Microsoft, allowed innovation and a flourishing market for semiconductors, software, and internet companies, as the antitrust scholar Giovanna Massarotto has described. William Shockley was able to start the first semiconductor company in Silicon Valley only because AT&T had been forced to license its patent on the transistor as part of a consent decree resolving a DOJ antitrust lawsuit against the company in the 1950s. 
The early software market then took off because in the late 1960s, IBM unbundled its software and hardware offerings as a response to antitrust pressure from the federal government. As Massarotto explains, the 1950s AT&T consent decree also aided the flourishing of open-source software, which plays a major role in today’s technology ecosystem, including the operating systems for mobile phones and cloud computing servers. Meanwhile, many attribute the success of early 2000s internet companies like Google to the competitive breathing room created by the federal government’s antitrust lawsuit against Microsoft in the 1990s. 
Over and over, antitrust actions targeting the dominant actors of one era enabled the formation of the next. And today, big tech is stifling the AI market. While antitrust advocates were rightly optimistic about this administration’s posture given key appointments early on, this week’s announcements should dampen that excitement.  I don’t want to lose focus on where things are: We should want a future in which lives are improved by the positive uses of AI.  But if America wants to continue leading the world in this technology, we must invest in what made us leaders in the first place: bold public research, open doors for global talent, and fair competition.  Prioritizing short-term industry profits over these bedrock principles won’t just put our technological future at risk—it will jeopardize America’s role as the world’s innovation superpower.  Asad Ramzanali is the director of artificial intelligence and technology policy at the Vanderbilt Policy Accelerator. He previously served as the chief of staff and deputy director of strategy of the White House Office of Science and Technology Policy under President Biden.

Read More »

America’s AI watchdog is losing its bite

Most Americans encounter the Federal Trade Commission only if they’ve been scammed: It handles identity theft, fraud, and stolen data. During the Biden administration, the agency went after AI companies for scamming customers with deceptive advertising or harming people by selling irresponsible technologies. With yesterday’s announcement of President Trump’s AI Action Plan, that era may now be over.  In the final months of the Biden administration under chair Lina Khan, the FTC levied a series of high-profile fines and actions against AI companies for overhyping their technology and bending the truth—or in some cases making claims that were entirely false. It found that the security giant Evolv lied about the accuracy of its AI-powered security checkpoints, which are used in stadiums and schools but failed to catch a seven-inch knife that was ultimately used to stab a student. It went after the facial recognition company Intellivision, saying the company made unfounded claims that its tools operated without gender or racial bias. It fined startups promising bogus “AI lawyer” services and one that sold fake product reviews generated with AI. These actions did not result in fines that crippled the companies, but they did stop them from making false statements and offered customers ways to recover their money or get out of contracts. In each case, the FTC found, everyday people had been harmed by AI companies that let their technologies run amok.
The plan released by the Trump administration yesterday suggests it believes these actions went too far. In a section about removing “red tape and onerous regulation,” the White House says it will review all FTC actions taken under the Biden administration “to ensure that they do not advance theories of liability that unduly burden AI innovation.” In the same section, the White House says it will withhold AI-related federal funding from states with “burdensome” regulations. This move by the Trump administration is the latest in its evolving attack on the agency, which provides a significant route of redress for people harmed by AI in the US. It’s likely to result in faster deployment of AI with fewer checks on accuracy, fairness, or consumer harm.
Under Khan, a Biden appointee, the FTC found fans in unexpected places. Progressives called for it to break up monopolistic behavior in Big Tech, but some in Trump’s orbit, including Vice President JD Vance, also supported Khan in her fights against tech elites, albeit for the different goal of ending their supposed censorship of conservative speech.

But in January, with Khan out and Trump back in the White House, this dynamic all but collapsed. Trump released an executive order in February promising to “rein in” independent agencies like the FTC that wield influence without consulting the president. The next month, he started taking that vow to—and past—its legal limits. In March, he fired the only two Democratic commissioners at the FTC. On July 17 a federal court ruled that one of those firings, of commissioner Rebecca Slaughter, was illegal given the independence of the agency, which restored Slaughter to her position (the other fired commissioner, Alvaro Bedoya, opted to resign rather than battle the dismissal in court, so his case was dismissed). Slaughter now serves as the sole Democrat.

In naming the FTC in its action plan, the White House now goes a step further, painting the agency’s actions as a major obstacle to US victory in the “arms race” to develop better AI more quickly than China. It promises not just to change the agency’s tack moving forward, but to review and perhaps even repeal AI-related sanctions it has imposed in the past four years.

How might this play out? Leah Frazier, who worked at the FTC for 17 years before leaving in May and served as an advisor to Khan, says it’s helpful to think about the agency’s actions against AI companies as falling into two areas, each with very different levels of support across political lines. The first is about cases of deception, where AI companies mislead consumers. 
Consider the case of Evolv, or a recent case announced in April where the FTC alleges that a company called Workado, which offers a tool to detect whether something was written with AI, doesn’t have the evidence to back up its claims. Deception cases enjoyed fairly bipartisan support during her tenure, Frazier says. “Then there are cases about responsible use of AI, and those did not seem to enjoy too much popular support,” adds Frazier, who now directs the Digital Justice Initiative at the Lawyers’ Committee for Civil Rights Under Law. These cases don’t allege deception; rather, they charge that companies have deployed AI in a way that harms people. The most serious of these, which resulted in perhaps the most significant AI-related action ever taken by the FTC and was investigated by Frazier, was announced in 2023. The FTC banned Rite Aid from using AI facial recognition in its stores after it found the technology falsely flagged people, particularly women and people of color, as shoplifters. “Acting on false positive alerts,” the FTC wrote, Rite Aid’s employees “followed consumers around its stores, searched them, ordered them to leave, [and] called the police to confront or remove consumers.”

The FTC found that Rite Aid failed to protect people from these mistakes, did not monitor or test the technology, and did not properly train employees on how to use it. The company was banned from using facial recognition for five years.  This was a big deal. This action went beyond fact-checking the deceptive promises made by AI companies to make Rite Aid liable for how its AI technology harmed consumers. These types of responsible-AI cases are the ones Frazier imagines might disappear in the new FTC, particularly if they involve testing AI models for bias. “There will be fewer, if any, enforcement actions about how companies are deploying AI,” she says. The White House’s broader philosophy toward AI, referred to in the plan, is a “try first” approach that attempts to propel faster AI adoption everywhere from the Pentagon to doctor’s offices. The lack of FTC enforcement that is likely to ensue, Frazier says, “is dangerous for the public.”

Read More »

The Download: gas and oil’s role in climate tech, and using AI to decipher ancient Latin

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. What role should oil and gas companies play in climate tech? —Casey Crownhart After writing about Quaise, a geothermal startup that’s trying to commercialize new drilling technology, I’ve been thinking about the role oil and gas companies are playing in the energy transition. It’s becoming increasingly common in climate tech to see a startup join up with a bigger fossil fuel company in its field, like Quaise has with Nabors Industries, one of the biggest drilling firms in the world.
This industry has resources and energy expertise—but also a vested interest in fossil fuels. Can it really be part of addressing climate change? Read the full story. This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
Google DeepMind’s new AI can help historians understand ancient Latin inscriptions Google DeepMind has unveiled new artificial intelligence software that could help historians recover the meaning and context behind ancient Latin engravings. Aeneas can analyze words written in long-weathered stone to say when and where they were originally inscribed.  It follows Google’s previous archaeological tool Ithaca, which also used deep learning to reconstruct and contextualize ancient text, in its case Greek. But while Ithaca and Aeneas use some similar systems, Aeneas also promises to give researchers jumping-off points for further analysis. Read the full story. —Peter Hall The must-reads I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Donald Trump has unveiled his AI Action Plan
He signed multiple orders to boost US AI exports and loosen regulations. (Bloomberg $)
+ The plans could reshape how US tech firms train their models. (TechCrunch)
+ We’re living in the age of chatbot culture wars. (NYT $)

2 China hopes to sell its excess computing power
After rapidly building too many data centers. (Reuters)
+ China built hundreds of AI data centers to catch the AI boom. Now many stand unused. (MIT Technology Review)

3 How corn is worsening Indiana’s dangerous heatwave
Residents are increasingly at risk of severe heat illness, as the moisture from corn increases humidity levels. (Axios)
+ What is corn sweat, exactly? (Vox)
+ Here’s how much heat your body can take. (MIT Technology Review)

4 What’s next for Tesla?
Its sales are falling, and its push into robotaxis is coming at a steep cost. (TechCrunch)
+ Elon Musk appeared oddly upbeat on an analyst call. (The Information $)
+ Why scaling up robotaxi fleets is such a challenge. (FT $)

5 The US is poised to reinstate a banned herbicide
Dicamba has caused substantial damage to neighboring crops in the past. (WP $)
+ The weeds are winning. (MIT Technology Review)

6 Why Amazon is eyeing AI gadgets
A bracelet that records conversations is the latest addition to its roster. (WSJ $)
+ Why AI hardware needs to be open. (MIT Technology Review)

7 Americans love China’s short video dramas
Watch out Hollywood—duanju clips are on the rise. (Wired $)
+ China’s next cultural export could be TikTok-style short soap operas. (MIT Technology Review)

8 How a YouTube channel captured the spirit of rogue music discovery
Music Place has gained a cult following from sharing obscure gems. (Pitchfork)

9 Pinterest isn’t immune to AI slop
Good luck remodelling your home based on its fantastical designs. (FT $)

10 AI videos are coming to YouTube Shorts
It’ll do everything from creating backgrounds to conjuring up video elements from a text prompt. (Ars Technica)
+ What’s next for generative video. (MIT Technology Review)
Quote of the day

“You could throw out the results of all these papers.”

—Jeffrey Morris, a biostatistics professor at the University of Pennsylvania, criticizes scientific papers co-authored by the US government’s vaccine safety investigator and vaccine skeptic David Geier to the Atlantic.

One more thing

What is AI?
Artificial intelligence is the hottest technology of our time. But what is it? It sounds like a stupid question, but it’s one that’s never been more urgent. If you’re willing to buckle up and come for a ride, I can tell you why nobody really knows, why everybody seems to disagree, and why you’re right to care about it. Read the full story.
—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ What could be more fun than bumping into Billy Joel while cycling a pedicab around NYC.
+ Margarita ice cream is certainly one way to cool down in the summer heat.
+ The Denver Museum of Nature & Science has made an extremely unusual discovery—a 67.5-million-year-old dinosaur fossil under its parking lot.
+ In praise of Jane Austen, by way of Clueless.

Read More »

BP to Exit $36B Australian Green Hydrogen Hub

BP Plc will exit its role in a massive green hydrogen production facility planned in Australia as the British oil major refocuses on the fossil fuels that drive its profits.  The company told its partners in the Australian Renewable Energy Hub that it plans to leave the project as both operator and equity holder, according to a statement from a BP spokesperson. It’s the latest setback for green hydrogen, a fuel once touted as a key way for Big Oil to profit from the energy transition that has so far proved too costly for mass production and consumption.  The AREH project company will take over as operator over the coming months with support from founding partner InterContinental Energy, according to an AREH spokesperson. BP’s decision to exit the project doesn’t reflect the opportunity the hub presents to decarbonize the Pilbara and support the creation of a green iron industry, they said.  BP’s entry into the project – once estimated to cost about $36 billion – came at a time when the company sought to rapidly build up a business in low-carbon energy and shrink its oil business. But after years of stock under-performance compared with its peers and the departure of the plan’s architect – Chief Executive Officer Bernard Looney – BP has refined its strategy to focus more squarely on profits than green goals.  The company is far from alone in leaving its ambitions for green hydrogen behind. Scores of companies that once saw the fuel as the next big thing in energy have cut back plans as hoped-for cost declines failed to materialize. Also on Thursday, Fortescue Ltd. said it would abandon plans for a $550 million Arizona Hydrogen Project in the US and a $150 million PEM50 Project in Gladstone, Australia – resulting in a pretax writedown of $150 million. Meanwhile, Woodside

Read More »

Sysadmins ready for AI, but skepticism abounds

The report shows that AI is being deployed typically in high-volume, repetitive tasks. Troubleshooting and log analysis lead the way, with 41% and 35% of sysadmins, respectively, reporting use of AI in those areas—up significantly from 2024.

Respondents reported that the following tasks are most likely to be automated with AI in the next two years:

Vulnerability prioritization: 67%
Monitoring of server CPU and memory utilization: 67%
Detecting and remediating incidents: 66%
Patch management processes: 66%
Security controls and compliance analysis: 65%
Performing post-incident reviews: 57%
Providing IT staff with guidance and training: 55%
Troubleshooting: 55%

In contrast, sysadmins surveyed reported that the following tasks are less likely to be automated with AI in the next two years:

Managing SSO and passwords: 48%
Administering user permissions: 44%
Managing files: 38%
Defining system usage policies and procedures: 34%
Providing end users with first-level IT support: 30%
Installing and maintaining software: 29%
Troubleshooting: 24%
Performing post-incident reviews: 23%
Providing IT staff with guidance and training: 23%

AI concerns persist

Sysadmins indicated they worry about falling behind peers. Some 40% expressed concern about being left behind by more AI-literate peers, a slight improvement from 45% in 2024. The top three reasons sysadmins are hesitant to embrace AI are:

79% worry about accuracy and reliability.
78% cite data privacy and security risks.
60% fear loss of control over automated actions.

As AI adoption increases, so does the visibility of its shortcomings. Troubleshooting, the most common use case, is also the area where AI struggles the most—30% of sysadmins reported failures there, nearly double the figure from last year’s survey. Another 20% of sysadmins said AI implementations have caused operational disruptions. 
Other areas that show AI isn’t yet performing up to expectations are: log analysis with 12% failure reports, and first-level

Read More »

AI Project Stargate struggles to get off the ground

Analysts aren’t surprised at the news. “Big IT projects have a long history of dramatically overpromising and it appears that trend is quickly moving into the world of AI data center-based projects as well. The Stargate project, in particular, also seems to have more of a political bent to it than many other projects so that’s likely complicating matters as well,” said Bob O’Donnell, president and chief analyst with TECHnalysis Research. “There’s little doubt we will see massive investments by many different organizations to build out AI infrastructure here in the US, but I’m not convinced that individual projects will end up mattering that much in the long run,” he added. “I have always been skeptical about the huge number that was projected. In the hundreds of billions,” said Patrick Moorhead, CEO & chief analyst with Moor Insights & Strategy. “The only problem was that only a few billion in new funding was raised. And now there’s strife between OpenAI and SoftBank. To be fair, Oracle is part of Stargate now and OpenAI will soak up many GPUs in the Texas facility, but this was already in process when the Stargate announcement happened.”

Read More »

It’s Qwen’s summer: new open source Qwen3-235B-A22B-Thinking-2507 tops OpenAI, Gemini reasoning models on key benchmarks

If the AI industry had an equivalent to the recording industry’s “song of the summer” — a hit that catches on in the warmer months here in the Northern Hemisphere and is heard playing everywhere — the clear honoree for that title would go to Alibaba’s Qwen Team. Over just the past week, the frontier model AI research division of the Chinese e-commerce behemoth has released not one, not two, not three, but four (!!) new open source generative AI models that offer record-setting benchmarks, besting even some leading proprietary options.

Last night, Qwen Team capped it off with the release of Qwen3-235B-A22B-Thinking-2507, its updated reasoning large language model (LLM), which takes longer to respond than a non-reasoning or “instruct” LLM, engaging in “chains-of-thought” or self-reflection and self-checking that hopefully result in more correct and comprehensive responses on more difficult tasks. Indeed, the new Qwen3-Thinking-2507, as we’ll call it for short, now leads or closely trails top-performing models across several major benchmarks.

As AI influencer and news aggregator Andrew Curran wrote on X: “Qwen’s strongest reasoning model has arrived, and it is at the frontier.” In the AIME25 benchmark — designed to evaluate problem-solving ability in mathematical and logical contexts — Qwen3-Thinking-2507 posts a score of 92.3, closely trailing OpenAI’s o4-mini (92.7) and Gemini-2.5

Read More »

Unexpected costs drive on-premises computing

“This reversal challenges the assumption that cloud is always the end goal, and highlights growing concerns about cost predictability, control, and performance in shared cloud environments,” MacDonald told Network World.

The survey found 86% of IT professionals report that their organizations currently use dedicated servers, with government (93%), information technology (91%), and finance (90%) the industries most likely to do so. Fifty-three percent of IT professionals still view dedicated servers as essential, and nearly half (45%) expect their role to grow by 2030. The majority of respondents use dedicated servers for databases, with file storage and web hosting also in the mix.

Another finding, related to customization: 32% of IT professionals believe their current cloud spend is wasted on features or capacity they don’t fully use. Cloud service providers are notorious for bundling features whether customers want them or not, without giving them the choice to opt out and cut bills.

On-premises computing is not just holding steady but growing. More than one-third of respondents (34%) said their organizations increased spending on dedicated servers in 2024, compared to 24% in 2023 and 26% in 2022.

IT managers are finding their biggest challenge is explaining the value of dedicated infrastructure to management. Nearly one-third (31%) said they don’t feel confident explaining infrastructure choices to nontechnical leadership, and 70% of IT professionals surveyed believe that executive leadership underestimates the role dedicated servers play in a modern tech stack.

Read More »

Chevron to Cut Positions as Part of Hess Integration

Chevron will “consolidate or eliminate some positions” as part of its integration with Hess Corporation, a Chevron spokesperson told Rigzone. “Chevron completed the merger with Hess Corporation on July 18,” the spokesperson said. “We are working quickly to integrate the Hess workforce and are focused on maintaining safe and reliable operations throughout the transition period,” the spokesperson added. “As part of the integration, we will consolidate or eliminate some positions. As required by the WARN Act, Chevron has provided notice of a planned workforce reduction to appropriate state and local government representatives for Downtown Houston and North Dakota,” the spokesperson went on to state. When asked by Rigzone to confirm how many positions will be affected, the Chevron spokesperson said, “for the WARN Notices issued on July 21, Chevron anticipates a reduction of approximately 575 employees in Downtown Houston and 70 employees in North Dakota”. The spokesperson told Rigzone that “these are difficult decisions which … [the company does] not make lightly”. “We understand the impact this news may have on employees, their families and the communities where we operate,” the spokesperson said. “Our priority is to support our employees through this transition. We are offering severance benefits and outplacement support,” the Chevron representative added. In a statement posted on its website on July 18, Chevron announced that it had completed its acquisition of Hess Corporation following the satisfaction of all necessary closing conditions, including a favorable arbitration outcome regarding Hess’ offshore Guyana asset. “This merger of two great American companies brings together the best in the industry,” Chevron Chairman and CEO Mike Wirth said in that statement. “The combination enhances and extends our growth profile well into the next decade, which we believe will drive greater long-term value to shareholders,” he added. 
In this statement, former Hess Corporation CEO

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenter and Energy industry news. Spend 3-5 minutes and catch up on 1 week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE