Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy

Lorem Ipsum is simply dummy text of the printing and typesetting industry.


Featured Articles

Oil Falls as Saudi Price Cuts Signal Market Gloom

Oil extended declines after Saudi Arabia lowered the prices of its crude, signaling uncertainty surrounding the supply outlook, while an equities slump added further pressure on the commodity. West Texas Intermediate settled near $59, sliding around 0.3% on the day after falling in the previous two sessions. Volatility hit Wall Street, also weighing on oil prices. Saudi Arabia lowered the price of its main oil grade to Asia for December to the lowest level in 11 months. Even though the price cut met expectations, traders saw it as a bearish signal about the cartel’s confidence in the market’s ability to absorb new supply, with a glut widely expected to begin next year. Prices have given up ground since the US sanctioned Russia’s two largest oil producers last month over Moscow’s war in Ukraine, and abundant supplies have so far managed to cushion the impact of stunted flows from the OPEC+ member to major buyers India and China. Key price gauges indicate that supply perceptions are worsening, with the premium that front-month WTI futures command over the next month’s contract, known as the prompt spread, narrowing in the past few weeks to near February lows. That’s also true for Brent crude. Still, US shale companies are forging ahead with their production plans, with Diamondback Energy Inc., Coterra Energy Inc. and Ovintiv Inc. this week announcing they intend to raise output slightly for this year or 2026 despite oil prices falling close to the threshold needed for many US shale wells to break even. Oversupply gloom hasn’t permeated refined products markets, though. Traders are still assessing how those supplies may be impacted by the US clampdown on purchases of Russian crude and Ukraine’s strikes on its neighbor’s energy assets. Those factors, as well as diminishing global refining capacity, bolstered diesel futures and gasoil

Read More »

Clearway projects strong renewables outlook while adding gas assets

300 MW: Average size of Clearway’s current projects. Most of the projects slated for 2030 and beyond are 500 MW or more, the company said.
>90%: Percentage of projects planned for 2031-2032 that are focused in the Western U.S. or the PJM Interconnection, “where renewables are cost competitive and/or valued.”
1.8 GW: Capacity of power purchase agreements meant to support data center loads the company has signed so far this year.
Meeting digital infrastructure energy needs
Independent power producer Clearway Energy announced a U.S. construction pipeline of 27 GW of generation and storage resources following strong third-quarter earnings. The San Francisco-based company is owned by Global Infrastructure Partners and TotalEnergies. According to its earnings presentation, it has an operating portfolio of more than 12 GW of wind, solar, gas and storage. Clearway President and CEO Craig Cornelius said on an earnings call Tuesday that the company is positioning itself to serve large load data centers. “Growth in both the medium and long term reflects the strong traction we’ve made in supporting the energy needs of our country’s digital infrastructure build-out and reindustrialization,” Cornelius said during the call. “We expect this to be a core driver of Clearway’s growth outlook well into the 2030s.” Cornelius noted that Clearway executed and awarded 1.8 GW of power purchase agreements meant to support data center loads so far this year, and is currently developing generation aimed at serving “gigawatt class co-located data centers across five states.” Its 27 GW pipeline of projects in development or under construction includes 8.2 GW of solar, 4.6 GW of wind, 1.3 GW of wind repowering, 8 GW of standalone storage, 2.1 GW of paired storage and 2.6 GW of natural gas aimed at serving data centers. Under the OBBBA, wind and solar projects that begin construction by July 4,

Read More »

Google Cloud aims for more cost-effective Arm computing with Axion N4A

It’s not alone: AWS introduced its own Arm-based chip, Graviton, in 2018 to reduce the cost of running internal cloud workloads such as Amazon retail IT, and now 50% of new AWS instances run on it. Microsoft, too, recently developed an Arm chip, Cobalt, to run Microsoft 365 and to offer Azure services. Google’s N4A instances will be available across services including Compute Engine for running virtual machines directly, Google Kubernetes Engine (GKE) for running containerized workloads, and Dataproc for big data and analytics. They will be accessible in the us-central1 (Iowa), us-east4 (N. Virginia), europe-west3 (Frankfurt) and europe-west4 (Netherlands) regions initially. The company describes N4A as a complement to the C4A instances it launched last October. These are designed for heavier workloads such as high-traffic web and application servers, ad servers, game servers, data analytics, databases of any size, and CPU-based AI and machine learning. Also coming “soon” is C4A Metal, a bare-metal instance for specialized workloads in a non-virtualized environment, such as custom hypervisors, security workloads, or CI/CD pipelines.
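
For readers who want to try the new instances, here is a minimal sketch of how an Axion-based N4A VM might be created with the Compute Engine Python client. The machine type name (n4a-standard-4), the zone, and the disk settings are illustrative assumptions, not values confirmed by the article; check Google’s N4A documentation for the shapes actually offered in each region.

```python
# Minimal sketch: creating an Arm64 (Axion) VM on an assumed N4A machine type.
# Requires the google-cloud-compute package and default application credentials.
from google.cloud import compute_v1


def create_n4a_vm(project_id: str, zone: str = "us-central1-a", name: str = "arm-test-vm") -> None:
    # Boot disk built from an Arm64 Debian image, since Axion is an Arm CPU.
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-12-arm64",
            disk_size_gb=20,
        ),
    )
    nic = compute_v1.NetworkInterface(network="global/networks/default")
    instance = compute_v1.Instance(
        name=name,
        # "n4a-standard-4" is an assumed shape name patterned on Google's usual naming.
        machine_type=f"zones/{zone}/machineTypes/n4a-standard-4",
        disks=[boot_disk],
        network_interfaces=[nic],
    )
    compute_v1.InstancesClient().insert(
        project=project_id, zone=zone, instance_resource=instance
    )
```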

Read More »

Google’s cheaper, faster TPUs are here, while users of other AI processors face a supply crunch

Opportunities for the AI industry
LLM vendors such as OpenAI and Anthropic, which still have relatively young code bases and are continuously evolving them, also have much to gain from the arrival of Ironwood for training their models, said Forrester vice president and principal analyst Charlie Dai. In fact, Anthropic has already agreed to procure 1 million TPUs to train its models and use them for inferencing. Other, smaller vendors using Google’s TPUs for training models include Lightricks and Essential AI. Google has seen a steady increase in demand for its TPUs (which it also uses to run internal services), and is expected to buy $9.8 billion worth of TPUs from Broadcom this year, compared to $6.2 billion and $2.04 billion in 2024 and 2023 respectively, according to Harrowell. “This makes them the second-biggest AI chip program for cloud and enterprise data centers, just tailing Nvidia, with approximately 5% of the market. Nvidia owns about 78% of the market,” Harrowell said.
The legacy problem
While some analysts were optimistic about the prospects for TPUs in the enterprise, IDC research director Brandon Hoff said enterprises will most likely stay away from Ironwood or TPUs in general because of their existing code base written for other platforms. “For enterprise customers who are writing their own inferencing, they will be tied into Nvidia’s software platform,” Hoff said, referring to CUDA, the software platform that runs on Nvidia GPUs. CUDA was released to the public in 2007, while the first version of TensorFlow has only been around since 2015.

Read More »

Top network and data center events 2025 & 2026

Denise Dubie is a senior editor at Network World with nearly 30 years of experience writing about the tech industry. Her coverage areas include AIOps, cybersecurity, networking careers, network management, observability, SASE, SD-WAN, and how AI transforms enterprise IT. A seasoned journalist and content creator, Denise writes breaking news and in-depth features, and she delivers practical advice for IT professionals while making complex technology accessible to all. Before returning to journalism, she held senior content marketing roles at CA Technologies, Berkshire Grey, and Cisco. Denise is a trusted voice in the world of enterprise IT and networking.

Read More »

Cisco launches AI infrastructure, AI practitioner certifications

“This new certification focuses on artificial intelligence and machine learning workloads, helping technical professionals become AI-ready and successfully embed AI into their workflows,” said Pat Merat, vice president at Learn with Cisco, in a blog detailing the new AI Infrastructure Specialist certification. “The certification validates a candidate’s comprehensive knowledge in designing, implementing, operating, and troubleshooting AI solutions across Cisco infrastructure.” Separately, the AITECH certification is part of the Cisco AI Infrastructure track, which complements its existing networking, data center, and security certifications. Cisco says the AITECH cert training is intended for network engineers, system administrators, solution architects, and other IT professionals who want to learn how AI impacts enterprise infrastructure. The training curriculum covers topics such as:
● Utilizing AI for code generation, refactoring, and using modern AI-assisted coding workflows.
● Using generative AI for exploratory data analysis, data cleaning, transformation, and generating actionable insights.
● Designing and implementing multi-step AI-assisted workflows and understanding complex agentic systems for automation.
● Learning AI-powered requirements, evaluating customization approaches, considering deployment strategies, and designing robust AI workflows.
● Evaluating, fine-tuning, and deploying pre-trained AI models, and implementing Retrieval Augmented Generation (RAG) systems.
● Monitoring, maintaining, and optimizing AI-powered workflows, ensuring data integrity and security.
AITECH certification candidates will learn how to use AI to enhance productivity, automate routine tasks, and support the development of new applications. The training program includes hands-on labs and simulations to demonstrate practical use cases for AI within Cisco and multi-vendor environments.

Read More »

USA Crude Oil Stocks Rise More Than 5MM Barrels WoW

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), increased by 5.2 million barrels from the week ending October 24 to the week ending October 31. That’s what the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report, which was released on November 5 and included data for the week ending October 31. The EIA report showed that crude oil stocks, not including the SPR, stood at 421.2 million barrels on October 31, 416.0 million barrels on October 24, and 427.7 million barrels on November 1, 2024. Crude oil in the SPR stood at 409.6 million barrels on October 31, 409.1 million barrels on October 24, and 387.2 million barrels on November 1, 2024, the report highlighted. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.679 billion barrels on October 31, the report revealed. Total petroleum stocks were up 1.1 million barrels week on week and up 44.5 million barrels year on year, the report showed. “At 421.2 million barrels, U.S. crude oil inventories are about four percent below the five year average for this time of year,” the EIA said in its latest weekly petroleum status report. “Total motor gasoline inventories decreased by 4.7 million barrels from last week and are about five percent below the five year average for this time of year. Both finished gasoline and blending components inventories decreased last week,” it added. “Distillate fuel inventories decreased by 0.6 million barrels last week and are about nine percent below the five year average for this time of year. Propane/propylene inventories increased by 0.4 million barrels from last week and are 15 percent above the five year average
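
The week-on-week and year-on-year moves reported above follow directly from the EIA figures quoted in the article. A quick arithmetic check, with all values in million barrels restated from the text:

```python
# Inventory levels quoted in the article, in million barrels.
commercial_crude = {"oct_31": 421.2, "oct_24": 416.0, "nov_1_2024": 427.7}
spr = {"oct_31": 409.6, "oct_24": 409.1, "nov_1_2024": 387.2}

wow_build = commercial_crude["oct_31"] - commercial_crude["oct_24"]
yoy_change = commercial_crude["oct_31"] - commercial_crude["nov_1_2024"]
spr_build = spr["oct_31"] - spr["oct_24"]

print(f"Commercial crude, week on week: {wow_build:+.1f} million barrels")   # +5.2
print(f"Commercial crude, year on year: {yoy_change:+.1f} million barrels")  # -6.5
print(f"SPR, week on week: {spr_build:+.1f} million barrels")                # +0.5
```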

Read More »

Energy Transfer Bags 20-Year Deal to Deliver Gas for Entergy Louisiana

Energy Transfer LP has signed a 20-year transport agreement to deliver natural gas to Entergy Corp to support the power utility’s operations in Louisiana. “Under the agreement, Energy Transfer would initially provide 250,000 MMBtu per day of firm transportation service beginning in February 2028 and continuing through January 2048”, a joint statement said. “The deal structure also provides an option to Entergy to expand delivery capacity in the region to meet future energy demand and demonstrates both companies’ long-term commitment to meeting the region’s growing energy needs. “The natural gas supplied through this agreement, already in Entergy’s financial plan, will help fuel Entergy Louisiana’s combined-cycle combustion turbine facilities, which are being developed to provide efficient, cleaner energy for the company’s customers and to support projects like Meta’s new hyperscale data center in Richland Parish. “The project includes expanding Energy Transfer’s Tiger Pipeline with the construction of a 12-mile lateral with a capacity of up to one Bcfd. Natural gas supply for this project will be sourced from Energy Transfer’s extensive pipeline network which is connected to all the major producing basins in the U.S.” Entergy Louisiana had 1.1 million electric customers in 58 of Louisiana’s 64 parishes as of December 2024, Entergy Louisiana says on its website. Earlier Energy Transfer secured an agreement to deliver gas for a power-data center partnership between VoltaGrid LLC and Oracle Corp. VoltaGrid will deploy 2.3 gigawatts of “cutting-edge, ultra-low-emissions infrastructure, supplied by Energy Transfer’s pipeline network, to support the energy demands of Oracle Cloud Infrastructure’s (OCI) next-generation artificial intelligence data centers”, VoltaGrid said in a press release October 15. “The VoltaGrid power infrastructure will be delivered through the proprietary VoltaGrid platform – a modular, high-transient-response system developed by VoltaGrid with key suppliers, including INNIO Jenbacher and ABB”. “This power plant deployment is being supplied with firm natural gas from Energy Transfer’s expansive pipeline

Read More »

SRP to Convert Unit 4 of SGS Station in Arizona to Use Gas

Three of the four units at the coal-fired Springerville Generation Station (SGS) in Arizona will shift to natural gas in the early 2030s. This week Salt River Project’s (SRP) board approved the conversion of Unit 4. Earlier this year Tucson Electric Power (TEP), which operates all four units, said it will convert Units 1 and 2. Unit 3, owned by the Tri-State Generation and Transmission Association, is set to be retired. “Today’s decision is the lowest-cost option to preserve the plant’s 400-megawatt (MW) generating capacity, enough to serve 90,000 homes, which is important to meeting the Valley’s growing power need in the early 2030s”, SRP said in a statement on its website. “Converting SGS Unit 4 to run on natural gas is expected to save SRP customers about $45 million compared to building a new natural gas facility and about $826 million relative to adding new long-duration lithium-ion batteries over the same period”, the public power utility added. “The decision also provides a bridge to the mid-2040s, when other generating technology options, including advanced nuclear, are mature”. Gas for the converted Unit 4 will come from a new pipeline that SRP will build. The pipeline will also supply gas to the Coronado Generating Station (CGS). On June 24 SRP announced board approval for the conversion of CGS from coal to gas. “SRP is working to more than double the capacity of its power system in the next 10 years while maintaining reliability and affordability and making continued progress toward our sustainability goals”, SRP said. “SRP will accomplish this through an all-of-the-above approach that plans to add renewables in addition to natural gas and storage resources”.  SRP currently supplies 3,000 MW of “carbon-free energy” including over 1,500 MW of solar, with nearly 1,300 MW of battery and pumped hydro storage supporting its

Read More »

Libya NOC Announces New Oil Find

In a statement posted on its Facebook page this week, which was translated, Libya’s National Oil Corporation announced the discovery of oil in the Ghaddams basin. “The National Oil Corporation announced the discovery of a new oil for the Gulf Arab Oil Company in Al-Beer H1 – MN 4 (H1-NC4) located in the Ghadams Al-Rasoubi basin,” the translated statement said. “The daily production of this well is estimated at 4,675 barrels per day of crude oil, and about two million cubic feet of gas,” it added. In the statement, the National Oil Corporation highlighted that the project is 100 percent owned by the corporation. In a statement posted on its website on October 29, Libya’s National Oil Corporation announced a new oil discovery in the Sirte Basin. “The National Oil Corporation (NOC) has announced a new oil discovery by OMV Austria Ltd. – Libya Branch in the Sirte Basin, specifically at well B1 in Block 106/4,” the National Oil Corporation said in that statement. “Production tests show that this exploratory well, reaching a depth of 10,476 feet, is producing over 4,200 barrels of oil per day, with gas production expected to exceed 2.6 million cubic feet daily,” it added. “This well marks the first discovery for OMV in Block 106/4, under the Exploration and Production Sharing Agreement (EPSA) signed in 2008 between the NOC, as the owner, and OMV, as the operator,” Libya’s National Oil Corporation pointed out. When Rigzone asked OMV for comment on this statement, an OMV spokesperson directed Rigzone to an OMV post on LinkedIn about the discovery. “OMV has safely completed an onshore exploration well in the Contract Area 106/4 (EPSA C103) of Libya’s Sirte Basin,” OMV noted in that post. “The well, drilled in the ‘Essar’ prospect, encountered oil-bearing formations with estimated contingent recoverable volumes

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has been a regular among the non-tech companies showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.
1. Agents: the next generation of automation
AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.
Going all-in on red teaming pays practical, competitive dividends
It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »

Three Aberdeen oil company headquarters sell for £45m

Three Aberdeen oil company headquarters have been sold in a deal worth £45 million. The CNOOC, Apache and Taqa buildings at the Prime Four business park in Kingswells have been acquired by EEH Ventures. The trio of buildings, totalling 275,000 sq ft, were previously owned by Canadian firm BMO. The financial services powerhouse first bought the buildings in 2014 but took the decision to sell the buildings as part of a “long-standing strategy to reduce their office exposure across the UK”. The deal was the largest to take place throughout Scotland during the last quarter of 2024.
Trio of buildings snapped up
London-headquartered EEH Ventures was founded in 2013 and owns a number of residential properties, offices, shopping centres and hotels throughout the UK. All three Kingswells-based buildings were pre-let, designed and constructed by Aberdeen property developer Drum in 2012 on a 15-year lease. The North Sea headquarters of Middle-East oil firm Taqa has previously been described as “an amazing success story in the Granite City”. Taqa announced in 2023 that it intends to cease production from all of its UK North Sea platforms by the end of 2027. Meanwhile, Apache revealed at the end of last year it is planning to exit the North Sea by the end of 2029, blaming the windfall tax. The US firm first entered the North Sea in 2003 but will wrap up all of its UK operations by 2030.
Aberdeen big deals
The Prime Four acquisition wasn’t the biggest Granite City commercial property sale of 2024. American private equity firm Lone Star bought Union Square shopping centre from Hammerson for £111m. Hammerson, who also built the property, had originally been seeking £150m. BP’s North Sea headquarters in Stoneywood, Aberdeen, was also sold. Manchester-based

Read More »

2025 ransomware predictions, trends, and how to prepare

Zscaler ThreatLabz research team has revealed critical insights and predictions on ransomware trends for 2025. The latest Ransomware Report uncovered a surge in sophisticated tactics and extortion attacks. As ransomware remains a key concern for CISOs and CIOs, the report sheds light on actionable strategies to mitigate risks. Top ransomware predictions for 2025:
● AI-Powered Social Engineering: In 2025, GenAI will fuel voice phishing (vishing) attacks. With the proliferation of GenAI-based tooling, initial access broker groups will increasingly leverage AI-generated voices, which sound more and more realistic by adopting local accents and dialects to enhance credibility and success rates.
● The Trifecta of Social Engineering Attacks: Vishing, ransomware and data exfiltration. Additionally, sophisticated ransomware groups, like the Dark Angels, will continue the trend of low-volume, high-impact attacks, preferring to focus on an individual company, stealing vast amounts of data without encrypting files, and evading media and law enforcement scrutiny.
● Targeted Industries Under Siege: Manufacturing, healthcare, education, and energy will remain primary targets, with no slowdown in attacks expected.
● New SEC Regulations Drive Increased Transparency: 2025 will see an uptick in reported ransomware attacks and payouts due to new, tighter SEC requirements mandating that public companies report material incidents within four business days.
● Ransomware Payouts Are on the Rise: In 2025, ransom demands will most likely increase due to an evolving ecosystem of cybercrime groups specializing in designated attack tactics, and collaboration by these groups that have entered a sophisticated profit-sharing model using Ransomware-as-a-Service.
To combat damaging ransomware attacks, Zscaler ThreatLabz recommends the following strategies:
● Fighting AI with AI: As threat actors use AI to identify vulnerabilities, organizations must counter with AI-powered zero trust security systems that detect and mitigate new threats.
● Advantages of adopting a Zero Trust architecture: A Zero Trust cloud security platform stops

Read More »

The Download: how doctors fight conspiracy theories, and your AI footprint

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.
How conspiracy theories infiltrated the doctor’s office
As anyone who has googled their symptoms and convinced themselves that they’ve got a brain tumor will attest, the internet makes it very easy to self-(mis)diagnose your health problems. And although social media and other digital forums can be a lifeline for some people looking for a diagnosis or community, when that information is wrong, it can put their well-being and even lives in danger. We spoke to a number of health-care professionals who told us how this modern impulse to “do your own research” is changing their profession. Read the full story. —Rhiannon Williams
This story is part of MIT Technology Review’s series “The New Conspiracy Age,” on how the present boom in conspiracy theories is reshaping science and technology.
Stop worrying about your AI footprint. Look at the big picture instead.
—Casey Crownhart
As a climate technology reporter, I’m often asked by people whether they should be using AI, given how awful it is for the environment. Generally, I tell them not to worry—let a chatbot plan your vacation, suggest recipe ideas, or write you a poem if you want. That response might surprise some. I promise I’m not living under a rock, and I have seen all the concerning projections about how much electricity AI is using. But I feel strongly about not putting the onus on individuals. Here’s why. This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
A new ion-based quantum computer makes error correction simpler
A company called Quantinuum has just unveiled Helios, its third-generation quantum computer, which includes expanded computing power and error correction capability. Like all other existing quantum computers, Helios is not powerful enough to execute the industry’s dream money-making algorithms, such as those that would be useful for materials discovery or financial modeling. But Quantinuum’s machines, which use individual ions as qubits, could be easier to scale up than quantum computers that use superconducting circuits as qubits, such as Google’s and IBM’s. Read the full story. —Sophia Chen

The must-reads
I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 A new California law could change how all Americans browse online
It gives web users the chance to opt out of having their personal information sold or shared. (The Markup)
2 The FDA has fast-tracked a pill to treat pancreatic cancer
The experimental drug appears promising, but experts worry corners may be cut. (WP $)
+ Demand for AstraZeneca’s cancer and diabetes drugs is pushing profits up. (Bloomberg $)
+ A new cancer treatment kills cells using localized heat. (Wired $)
3 AI pioneers claim it is already superior to humans in many tasks
But not all tasks are created equal. (FT $)
+ Are we all wandering into an AGI trap? (Vox)
+ How AGI became the most consequential conspiracy theory of our time. (MIT Technology Review)
4 IBM is planning on cutting thousands of jobs
It’s shifting its focus to software and AI consulting, apparently. (Bloomberg $)
+ It’s keen to grow the number of its customers seeking AI advice. (NYT $)
5 Big Tech’s data centers aren’t the job-generators we were promised
The jobs they do create are largely in security and cleaning. (Rest of World)
+ We did the math on AI’s energy footprint. Here’s the story you haven’t heard. (MIT Technology Review)
6 Microsoft let AI shopping agents loose in a fake marketplace
They were easily manipulated into buying goods, it found. (TechCrunch)
+ When AIs bargain, a less advanced agent could cost you. (MIT Technology Review)
7 Sony has compiled a dataset to test the fairness of computer vision models
And it’s confident it’s been compiled in a fair and ethical way. (The Register)
+ These new tools could make AI vision systems less biased. (MIT Technology Review)
8 The social network is no more
We’re living in an age of anti-social media. (The Atlantic $)
+ Scam ads are rife across platforms, but these former Meta workers have a plan. (Wired $)
+ The ultimate online flex? Having no followers. (New Yorker $)
9 Vibe coding is Collins dictionary’s word of 2025 📖
Beating stiff competition from “clanker.” (The Guardian)
+ What is vibe coding, exactly? (MIT Technology Review)
10 These people found romance with their chatbot companions
The AI may not be real, but the humans’ feelings certainly are. (NYT $)
+ It’s surprisingly easy to stumble into a relationship with an AI chatbot. (MIT Technology Review)
Quote of the day
“The opportunistic side of me is realizing that your average accountant won’t be doing this.”
—Sal Abdulla, founder of accounting-software startup NixSheets, tells the Wall Street Journal he’s using AI tools to gain an edge on his competitors.
One more thing
Ethically sourced “spare” human bodies could revolutionize medicine
Many challenges in medicine stem, in large part, from a common root cause: a severe shortage of ethically-sourced human bodies. There might be a way to get out of this moral and scientific deadlock. Recent advances in biotechnology now provide a pathway to producing living human bodies without the neural components that allow us to think, be aware, or feel pain. Many will find this possibility disturbing, but if researchers and policymakers can find a way to pull these technologies together, we may one day be able to create “spare” bodies, both human and nonhuman. Read the full story. —Carsten T. Charlesworth, Henry T. Greely & Hiromitsu Nakauchi
We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)
+ Make sure to look up so you don’t miss November’s supermoon.
+ If you keep finding yourself mindlessly scrolling (and who doesn’t?), maybe this whopping six-pound phone case could solve your addiction.
+ Life lessons from a 101-year old who has no plans to retire.
+ Are you a fan of movement snacking?

Read More »

Stop worrying about your AI footprint. Look at the big picture instead.

Picture it: I’m minding my business at a party, parked by the snack table (of course). A friend of a friend wanders up, and we strike up a conversation. It quickly turns to work, and upon learning that I’m a climate technology reporter, my new acquaintance says something like: “Should I be using AI? I’ve heard it’s awful for the environment.”  This actually happens pretty often now. Generally, I tell people not to worry—let a chatbot plan your vacation, suggest recipe ideas, or write you a poem if you want.  That response might surprise some people, but I promise I’m not living under a rock, and I have seen all the concerning projections about how much electricity AI is using. Data centers could consume up to 945 terawatt-hours annually by 2030. (That’s roughly as much as Japan.)  But I feel strongly about not putting the onus on individuals, partly because AI concerns remind me so much of another question: “What should I do to reduce my carbon footprint?” 
That one gets under my skin because of the context: BP helped popularize the concept of a carbon footprint in a marketing campaign in the early 2000s. That framing effectively shifts the burden of worrying about the environment from fossil-fuel companies to individuals.  The reality is, no one person can address climate change alone: Our entire society is built around burning fossil fuels. To address climate change, we need political action and public support for researching and scaling up climate technology. We need companies to innovate and take decisive action to reduce greenhouse-gas emissions. Focusing too much on individuals is a distraction from the real solutions on the table. 
I see something similar today with AI. People are asking climate reporters at barbecues whether they should feel guilty about using chatbots too frequently when we need to focus on the bigger picture. Big tech companies are playing into this narrative by providing energy-use estimates for their products at the user level. A couple of recent reports put the electricity used to query a chatbot at about 0.3 watt-hours, the same as powering a microwave for about a second. That’s so small as to be virtually insignificant. But stopping with the energy use of a single query obscures the full truth, which is that this industry is growing quickly, building energy-hungry infrastructure at a nearly incomprehensible scale to satisfy the AI appetites of society as a whole. Meta is currently building a data center in Louisiana with five gigawatts of computational power—about the same demand as the entire state of Maine at the summer peak. (To learn more, read our Power Hungry series online.)
Increasingly, there’s no getting away from AI, and it’s not as simple as choosing to use or not use the technology. Your favorite search engine likely gives you an AI summary at the top of your search results. Your email provider’s suggested replies? Probably AI. Same for chatting with customer service while you’re shopping online. Just as with climate change, we need to look at this as a system rather than a series of individual choices. Massive tech companies using AI in their products should be disclosing their total energy and water use and going into detail about how they complete their calculations. Estimating the burden per query is a start, but we also deserve to see how these impacts add up for billions of users, and how that’s changing over time as companies (hopefully) make their products more efficient. Lawmakers should be mandating these disclosures, and we should be asking for them, too.
That’s not to say there’s absolutely no individual action that you can take. Just as you could meaningfully reduce your individual greenhouse-gas emissions by taking fewer flights and eating less meat, there are some reasonable things that you can do to reduce your AI footprint. Generating videos tends to be especially energy-intensive, as does using reasoning models to engage with long prompts and produce long answers. Asking a chatbot to help plan your day, suggest fun activities to do with your family, or summarize a ridiculously long email has relatively minor impact. Ultimately, as long as you aren’t relentlessly churning out AI slop, you shouldn’t be too worried about your individual AI footprint. But we should all be keeping our eye on what this industry will mean for our grid, our society, and our planet. This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
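
To make the "single query versus billions of users" point concrete, here is a rough back-of-envelope calculation. The 0.3 watt-hour figure is the per-query estimate cited above; the daily query volume is an assumed, illustrative number, not one from the article.

```python
# Back-of-envelope scaling of a per-query energy estimate to global usage.
WH_PER_QUERY = 0.3                 # reported estimate for one chatbot query, in watt-hours
QUERIES_PER_DAY = 2_500_000_000    # assumption: a few billion queries per day, for illustration

daily_mwh = WH_PER_QUERY * QUERIES_PER_DAY / 1e6   # Wh -> MWh
annual_gwh = daily_mwh * 365 / 1e3                 # MWh -> GWh

print(f"{daily_mwh:,.0f} MWh per day, roughly {annual_gwh:,.0f} GWh per year")
# -> 750 MWh per day, roughly 274 GWh per year: negligible per person, sizable in aggregate.
```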

Read More »

A new ion-based quantum computer makes error correction simpler

The US- and UK-based company Quantinuum today unveiled Helios, its third-generation quantum computer, which includes expanded computing power and error correction capability.  Like all other existing quantum computers, Helios is not powerful enough to execute the industry’s dream money-making algorithms, such as those that would be useful for materials discovery or financial modeling. But Quantinuum’s machines, which use individual ions as qubits, could be easier to scale up than quantum computers that use superconducting circuits as qubits, such as Google’s and IBM’s. “Helios is an important proof point in our road map about how we’ll scale to larger physical systems,” says Jennifer Strabley, vice president at Quantinuum, which formed in 2021 from the merger of Honeywell Quantum Solutions and Cambridge Quantum. Honeywell remains Quantinuum’s majority owner. Located at Quantinuum’s facility in Colorado, Helios comprises a myriad of components, including mirrors, lasers, and optical fiber. Its core is a thumbnail-size chip containing the barium ions that serve as the qubits, which perform the actual computing. Helios computes with 98 barium ions at a time; its predecessor, H2, used 56 ytterbium qubits. The barium ions are an upgrade, as they have proven easier to control than ytterbium.  These components all sit within a chamber that is cooled to about 15 Kelvin (-432.67 ℉), on top of an optical table. Users can access the computer by logging in remotely over the cloud.
Helios encodes information in the ions’ quantum states, which can represent not only 0s and 1s, like the bits in classical computing, but probabilistic combinations of both, known as superpositions. A hallmark of quantum computing, these superposition states are akin to the state of a coin flipping in the air—neither heads nor tails, but some probability of both.  Quantum computing exploits the unique mathematics of quantum-mechanical objects like ions to perform computations. Proponents of the technology believe this should enable commercially useful applications, such as highly accurate chemistry simulations for the development of batteries or better optimization algorithms for logistics and finance. 
In the last decade, researchers at companies and academic institutions worldwide have incrementally developed the technology with billions of dollars of private and public funding. Still, quantum computing is in an awkward teenage phase. It’s unclear when it will bring profitable applications. Of late, developers have focused on scaling up the machines.  A key challenge to making a more powerful quantum computer is implementing error correction. Like all computers, quantum computers occasionally make mistakes. Classical computers correct these errors by storing information redundantly. Owing to quirks of quantum mechanics, quantum computers can’t do this and require special correction techniques.  Quantum error correction involves storing a single unit of information in multiple qubits rather than in a single qubit. The exact methods vary depending on the specific hardware of the quantum computer, with some machines requiring more qubits per unit of information than others. The industry refers to an error-corrected unit of quantum information as a “logical qubit.” Helios needs two ions, or “physical qubits,” to create one logical qubit. This is fewer physical qubits than needed in recent quantum computers made of superconducting circuits. In 2024, Google used 105 physical qubits to create a logical qubit. This year, IBM used 12 physical qubits per single logical qubit, and Amazon Web Services used nine physical qubits to produce a single logical qubit. All three companies use variations of superconducting circuits as qubits. Helios is noteworthy for its qubits’ precision, says Rajibul Islam, a physicist at the University of Waterloo in Canada, who is not affiliated with Quantinuum. The computer’s qubit error rates are low to begin with, which means it doesn’t need to devote as much of its hardware to error correction. Quantinuum had pairs of qubits interact in an operation known as entanglement and found that they behaved as expected 99.921% of the time. “To the best of my knowledge, no other platform is at this level,” says Islam. This advantage comes from a design property of ions. Unlike superconducting circuits, which are affixed to the surface of a quantum computing chip, ions on Quantinuum’s Helios chip can be shuffled around. Because the ions can move, they can interact with every other ion in the computer, a capacity known as “all-to-all connectivity.” This connectivity allows for error correction approaches that use fewer physical qubits. In contrast, superconducting qubits can only interact with their direct neighbors, so a computation between two non-adjacent qubits requires several intermediate steps involving the qubits in between. “It’s becoming increasingly more apparent how important all-to-all-connectivity is for these high-performing systems,” says Strabley. Still, it’s not clear what type of qubit will win in the long run. Each type has design benefits that could ultimately make it easier to scale. Ions (which are used by the US-based startup IonQ as well as Quantinuum) offer an advantage because they produce relatively few errors, says Islam: “Even with fewer physical qubits, you can do more.” However, it’s easier to manufacture superconducting qubits. And qubits made of neutral atoms, such as the quantum computers built by the Boston-based startup QuEra, are “easier to trap” than ions, he says.  
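
The physical-to-logical qubit overheads reported above are easier to compare side by side. The short sketch below restates them and shows how many error-corrected logical qubits each overhead would yield from the same hypothetical pool of 1,000 physical qubits; the pool size is an assumption chosen only to make the ratios concrete.

```python
# Physical qubits required per logical qubit, as reported in the article.
physical_per_logical = {
    "Quantinuum Helios (trapped ions)": 2,
    "AWS (superconducting)": 9,
    "IBM (superconducting)": 12,
    "Google (superconducting, 2024)": 105,
}

PHYSICAL_BUDGET = 1_000  # assumed pool size, purely for illustration

for platform, overhead in physical_per_logical.items():
    logical = PHYSICAL_BUDGET // overhead
    print(f"{platform}: {overhead} physical per logical -> {logical} logical qubits")
```
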
Besides increasing the number of qubits on its chip, Quantinuum notched another notable achievement: it demonstrated error correction “on the fly,” says David Hayes, the company’s director of computational theory and design. That’s a new capability for its machines. Nvidia GPUs were used to identify errors in the qubits in parallel. Hayes thinks that GPUs are more effective for error correction than the chips known as FPGAs that are also used in the industry.

Quantinuum has used its computers to investigate the basic physics of magnetism and superconductivity. Earlier this year, it reported simulating a magnet on H2, Helios’s predecessor, with the claim that the work “rivals the best classical approaches in expanding our understanding of magnetism.” Along with announcing the introduction of Helios, the company said it has used the machine to simulate the behavior of electrons in a high-temperature superconductor.

“These aren’t contrived problems,” says Hayes. “These are problems that the Department of Energy, for example, is very interested in.”

Quantinuum plans to build another version of Helios in its facility in Minnesota. It has already begun to build a prototype for a fourth-generation computer, Sol, which it plans to deliver in 2027 with 192 physical qubits. Then, in 2029, the company hopes to release Apollo, which it says will have thousands of physical qubits and should be “fully fault tolerant,” or able to implement error correction at a large scale.
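For readers unfamiliar with what “simulating a magnet” involves, the sketch below is a deliberately tiny classical calculation, nothing like the scale Quantinuum reports: it builds the Hamiltonian of a three-spin transverse-field Ising model, a standard toy model of magnetism, and finds its ground-state energy by exact diagonalization. The coupling and field values are arbitrary illustrative choices. Quantum computers become interesting precisely because this brute-force approach grows exponentially harder as more spins are added.

```python
import numpy as np
from functools import reduce

# Pauli matrices and the 2x2 identity
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def op_on(site, op, n):
    """Tensor `op` acting on `site`, with identities on the other n-1 spins."""
    mats = [op if i == site else I for i in range(n)]
    return reduce(np.kron, mats)

n, J, h = 3, 1.0, 0.5  # 3 spins, coupling J, transverse field h (toy values)

# H = -J * sum_i Z_i Z_{i+1}  -  h * sum_i X_i
H = sum(-J * op_on(i, Z, n) @ op_on(i + 1, Z, n) for i in range(n - 1))
H = H + sum(-h * op_on(i, X, n) for i in range(n))

ground_energy = np.linalg.eigvalsh(H)[0]
print(f"ground-state energy of the 3-spin toy magnet: {ground_energy:.4f}")
```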

Read More »

The Download: the solar geoengineering race, and future gazing with The Simpsons

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why the for-profit race into solar geoengineering is bad for science and public trust

—David Keith is the professor of geophysical sciences at the University of Chicago and Daniele Visioni is an assistant professor of earth and atmospheric sciences at Cornell University

Last week, an American-Israeli company that claims it’s developed proprietary technology to cool the planet announced it had raised $60 million, by far the largest known venture capital round to date for a solar geoengineering startup. The company, Stardust, says the funding will enable it to develop a system that could be deployed by the start of the next decade, according to Heatmap, which broke the story.

As scientists who have worked on the science of solar geoengineering for decades, we have grown increasingly concerned about emerging efforts to start and fund private companies to deploy technologies that could alter the climate of the planet. We also strongly dispute some of the technical claims that certain companies have made about their offerings. Read the full story.
This story is part of Heat Exchange, MIT Technology Review’s guest opinion series offering expert commentary on legal, political and regulatory issues related to climate change and clean energy. You can read the rest of the series here.
Can “The Simpsons” really predict the future?

According to internet listicles, the animated sitcom The Simpsons has predicted the future anywhere from 17 to 55 times. The show foresaw Donald Trump becoming US president a full 17 years before the real estate mogul was inaugurated as the 45th leader of the United States. Earlier, in 1993, an episode of the show featured the “Osaka flu,” which some felt was eerily prescient of the coronavirus pandemic. And—somehow!—Simpsons writers just knew that the US Olympic curling team would beat Sweden eight whole years before they did it.

Al Jean has worked on The Simpsons on and off since 1989; he is the cartoon’s longest-serving showrunner. Here, he reflects on the conspiracy theories that have sprung from these apparent prophecies. Read the full story.

—Amelia Tait

This story is part of MIT Technology Review’s series “The New Conspiracy Age,” about how the present boom in conspiracy theories is reshaping science and technology.

MIT Technology Review Narrated: Therapists are secretly using ChatGPT. Clients are triggered.

Declan would never have found out his therapist was using ChatGPT had it not been for a technical mishap in which his therapist began inadvertently sharing his screen. For the rest of the session, Declan was privy to a real-time stream of ChatGPT analysis rippling across his therapist’s screen as the therapist took what Declan was saying, put it into ChatGPT, and then parroted its answers.

But Declan is not alone. In fact, a growing number of people are reporting receiving AI-generated communiqués from their therapists. Clients’ trust and privacy are being abandoned in the process.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Amazon is suing Perplexity over its Comet AI agent
It alleges Perplexity is committing computer fraud by not disclosing when Comet is shopping on a human’s behalf. (Bloomberg $)
+ In turn, Perplexity has accused Amazon of bullying. (CNBC)

2 Trump has nominated the billionaire entrepreneur Jared Isaacman to lead NASA
Five months after he withdrew Isaacman’s nomination for the same job. (WP $)
+ It was around the same time Elon Musk left the US government. (WSJ $)

3 Homeland Security has released an app for police forces to scan people’s faces
Mobile Fortify uses facial recognition to identify whether someone’s been given a deportation order. (404 Media)
+ Another effort to track ICE raids was just taken offline. (MIT Technology Review)

4 Scientific journals are being swamped with AI-written letters
Researchers are sifting through their inboxes trying to work out what to believe. (NYT $)
+ ArXiv is no longer accepting certain papers for fear they’ve been written by AI. (404 Media)
5 The AI boom has proved a major windfall for equipment makers
Makers of small turbines and fuel cells, rejoice. (WSJ $)

6 Chronic kidney disease may be the first chronic illness linked to climate change
Experts have linked a surge in the disease to hotter temperatures. (Undark)
+ The quest to find out how our bodies react to extreme temperatures. (MIT Technology Review)
7 Brazil is proposing a fund to protect tropical forests
It would pay countries not to fell their trees. (NYT $)

8 New York has voted for a citywide digital map
It’ll officially represent the five boroughs for the first time. (Fast Company $)

9 The internet could be at risk of catastrophic collapse
Meet the people preparing for that exact eventuality. (New Scientist $)

10 A Chinese spacecraft may have been hit by space junk
Three astronauts have been forced to remain on the Tiangong space station while the damage is investigated. (Ars Technica)

Quote of the day

“I am not sure how I earned the trust of so many, but I will do everything I can to live up to those expectations.”
—Jared Isaacman, Donald Trump’s renominated pick to lead NASA, doesn’t sound entirely sure of his own abilities to lead the agency, Ars Technica reports.

One more thing

Is the digital dollar dead?

In 2020, digital currencies were one of the hottest topics in town. China was well on its way to launching its own central bank digital currency, or CBDC, and many other countries launched CBDC research projects, including the US.

How things change. Years later, the digital dollar—even though it doesn’t exist—has become political red meat, as some politicians label it a dystopian tool for surveillance. And late last year, the Boston Fed quietly stopped working on its CBDC project. So is the dream of the digital dollar dead? Read the full story.

—Mike Orcutt

Read More »

From vibe coding to context engineering: 2025 in software development

Provided by Thoughtworks

This year, we’ve seen a real-time experiment playing out across the technology industry, one in which AI’s software engineering capabilities have been put to the test against human technologists. And although 2025 may have started with AI looking strong, the transition from vibe coding to what’s being termed context engineering shows that while the work of human developers is evolving, they nevertheless remain absolutely critical.

This is captured in the latest volume of the “Thoughtworks Technology Radar,” a report on the technologies used by our teams on projects with clients. In it, we see the emergence of techniques and tooling designed to help teams better tackle the problem of managing context when working with LLMs and AI agents.

Taken together, there’s a clear signal of the direction of travel in software engineering and even AI more broadly. After years of the industry assuming progress in AI is all about scale and speed, we’re starting to see that what matters is the ability to handle context effectively.

Vibes, antipatterns, and new innovations

In February 2025, Andrej Karpathy coined the term vibe coding. It took the industry by storm. It certainly sparked debate at Thoughtworks; many of us were skeptical. On an April episode of our technology podcast, we talked about our concerns and were cautious about how vibe coding might evolve.
Unsurprisingly, given the implied imprecision of vibe-based coding, antipatterns have been proliferating. We’ve once again noted, for instance, complacency with AI-generated code in the latest volume of the Technology Radar. But it’s also worth pointing out that early ventures into vibe coding exposed a degree of complacency about what AI models can actually handle: users demanded more and prompts grew larger, but model reliability started to falter.

Experimenting with generative AI

This is one of the drivers behind increasing interest in engineering context. We’re well aware of its importance from working with coding assistants like Claude Code and Augment Code. Providing the necessary context—or knowledge priming—is crucial. It ensures outputs are more consistent and reliable, which ultimately leads to better software that needs less work, reducing rewrites and potentially driving productivity.
When the context is effectively prepared, we’ve seen good results using generative AI to understand legacy codebases. Indeed, done effectively with the appropriate context, it can even help when we don’t have full access to source code.

It’s important to remember that context isn’t just about more data and more detail. This is one of the lessons we’ve taken from using generative AI for forward engineering. It might sound counterintuitive, but in this scenario we’ve found AI to be more effective when it’s further abstracted from the underlying system, or, in other words, further removed from the specifics of the legacy code. This is because the solution space becomes much wider, allowing us to better leverage the generative and creative capabilities of the AI models we use.

Context is critical in the agentic era

The backdrop to the changes of recent months is the growth of agents and agentic systems, both as products organizations want to develop and as technology they want to leverage. This has forced the industry to properly reckon with context and move away from a purely vibes-based approach. Indeed, far from simply getting on with tasks they’ve been programmed to do, agents require significant human intervention to ensure they are equipped to respond to complex and dynamic contexts.

There are a number of context-related technologies aimed at tackling this challenge, including agents.md, Context7, and Mem0. But it’s also a question of approach. For instance, we’ve found success with anchoring coding agents to a reference application, essentially providing agents with a contextual ground truth (see the sketch below). We’re also experimenting with using teams of coding agents; while this might sound like it increases complexity, it actually removes some of the burden of having to give a single agent all the dense layers of context it needs to do its job successfully.

Toward consensus

Hopefully the space will mature as practices and standards embed. It would be remiss not to mention the significance of the Model Context Protocol, which has emerged as the go-to protocol for connecting LLMs or agentic AI to sources of context. Relatedly, the agent2agent (A2A) protocol leads the way in standardizing how agents interact with one another.

It remains to be seen whether these standards win out. But in any case, it’s important to consider the day-to-day practices that allow us, as software engineers and technologists, to collaborate effectively even when dealing with highly complex and dynamic systems. Sure, AI needs context, but so do we. Techniques like curated shared instructions for software teams may not sound like the hottest innovation on the planet, but they can be remarkably powerful for helping teams work together. There’s perhaps also a conversation to be had about what these changes mean for agile software development. Spec-driven development is one idea that appears to have some traction, but there are still questions about how we remain adaptable and flexible while also building robust contextual foundations and ground truths for AI systems.
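As a purely illustrative sketch of what knowledge priming and anchoring to a reference application can look like in practice (the file names and the overall helper below are hypothetical, not part of any specific assistant, protocol, or the Technology Radar), the idea is simply to assemble curated, shared context, such as team conventions, a reference application, and the task itself, into the prompt before a model or agent starts work.

```python
from pathlib import Path

# Hypothetical, curated sources of context a team might maintain.
CONTEXT_FILES = [
    "agents.md",                 # shared instructions for coding agents
    "docs/architecture.md",      # how the system fits together
    "reference_app/service.py",  # the "ground truth" reference application
]

def build_context(task: str, root: str = ".") -> str:
    """Assemble a context-primed prompt: curated team knowledge first, the task last."""
    sections = []
    for rel_path in CONTEXT_FILES:
        path = Path(root) / rel_path
        if path.exists():  # tolerate missing files rather than failing the run
            sections.append(f"## {rel_path}\n{path.read_text()}")
    sections.append(f"## Task\n{task}")
    return "\n\n".join(sections)

if __name__ == "__main__":
    prompt = build_context("Add rate limiting to the orders endpoint.")
    print(prompt[:500])  # in a real setup this string would be handed to the model or agent
```

The point is less the code than the discipline: the same curated sources are given to every agent and every teammate, so outputs stay anchored to a shared ground truth.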

Software engineers can solve the context challenge

Clearly, 2025 has been a huge year in the evolution of software engineering as a practice. There’s a lot the industry needs to monitor closely, but it’s also an exciting time. And while fears about AI job automation may remain, the fact that the conversation has moved from questions of speed and scale to context puts software engineers right at the heart of things.

Once again, it will be down to them to experiment, collaborate, and learn. The future depends on it.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.

Read More »

Why the for-profit race into solar geoengineering is bad for science and public trust

Last week, an American-Israeli company that claims it’s developed proprietary technology to cool the planet announced it had raised $60 million, by far the largest known venture capital round to date for a solar geoengineering startup. The company, Stardust, says the funding will enable it to develop a system that could be deployed by the start of the next decade, according to Heatmap, which broke the story.

Heat Exchange

MIT Technology Review’s guest opinion series, offering expert commentary on legal, political and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here.

As scientists who have worked on the science of solar geoengineering for decades, we have grown increasingly concerned about the emerging efforts to start and fund private companies to build and deploy technologies that could alter the climate of the planet. We also strongly dispute some of the technical claims that certain companies have made about their offerings.
Given the potential power of such tools, the public concerns about them, and the importance of using them responsibly, we argue that they should be studied, evaluated, and developed mainly through publicly coordinated and transparently funded science and engineering efforts. In addition, any decisions about whether or how they should be used should be made through multilateral government discussions, informed by the best available research on the promise and risks of such interventions—not the profit motives of companies or their investors.

The basic idea behind solar geoengineering, or what we now prefer to call sunlight reflection methods (SRM), is that humans might reduce climate change by making the Earth a bit more reflective, partially counteracting the warming caused by the accumulation of greenhouse gases.
There is strong evidence, based on years of climate modeling and analyses by researchers worldwide, that SRM—while not perfect—could significantly and rapidly reduce climate changes and avoid important climate risks. In particular, it could ease the impacts in hot countries that are struggling to adapt.

The goals of doing research into SRM can be diverse: identifying risks as well as finding better methods. But research won’t be useful unless it’s trusted, and trust depends on transparency. That means researchers must be eager to examine pros and cons, committed to following the evidence where it leads, and driven by a sense that research should serve public interests, not be locked up as intellectual property.

In recent years, a handful of for-profit startup companies have emerged that are striving to develop SRM technologies or already trying to market SRM services. That includes Make Sunsets, which sells “cooling credits” for releasing sulfur dioxide in the stratosphere. A new company, Sunscreen, which hasn’t yet been announced, intends to use aerosols in the lower atmosphere to achieve cooling over small areas, purportedly to help farmers or cities deal with extreme heat.

Our strong impression is that people in these companies are driven by the same concerns about climate change that move us in our research. We agree that more research, and more innovation, is needed. However, we do not think startups—which by definition must eventually make money to stay in business—can play a productive role in advancing research on SRM.

Many people already distrust the idea of engineering the atmosphere—at whichever scale—to address climate change, fearing negative side effects, inequitable impacts on different parts of the world, or the prospect that a world expecting such solutions will feel less pressure to address the root causes of climate change. Adding business interests, profit motives, and rich investors into this situation just creates more cause for concern, complicating the ability of responsible scientists and engineers to carry out the work needed to advance our understanding.

The only way these startups will make money is if someone pays for their services, so there’s a reasonable fear that financial pressures could drive companies to lobby governments or other parties to use such tools. A decision that should be based on objective analysis of risks and benefits would instead be strongly influenced by financial interests and political connections.

The need to raise money or bring in revenue often drives companies to hype the potential or safety of their tools. Indeed, that’s what private companies need to do to attract investors, but it’s not how you build public trust—particularly when the science doesn’t support the claims.

Notably, Stardust says on its website that it has developed novel particles that can be injected into the atmosphere to reflect away more sunlight, asserting that they’re “chemically inert in the stratosphere, and safe for humans and ecosystems.” According to the company, “The particles naturally return to Earth’s surface over time and recycle safely back into the biosphere.”

But it’s nonsense for the company to claim it can make particles that are inert in the stratosphere. Even diamonds, which are extraordinarily nonreactive, would alter stratospheric chemistry. First of all, much of that chemistry depends on highly reactive radicals that react with any solid surface, and second, any particle may become coated by background sulfuric acid in the stratosphere. That could accelerate the loss of the protective ozone layer by spreading that existing sulfuric acid over a larger surface area. (Stardust didn’t provide a response to an inquiry about the concerns raised in this piece.)

In materials presented to potential investors, which we’ve obtained a copy of, Stardust further claims its particles “improve” on sulfuric acid, which is the most studied material for SRM. But the point of using sulfate for such studies was never that it was perfect, but that its broader climatic and environmental impacts are well understood. That’s because sulfate is widespread on Earth, and there’s an immense body of scientific knowledge about the fate and risks of sulfur that reaches the stratosphere through volcanic eruptions or other means.

If there’s one great lesson of 20th-century environmental science, it’s how crucial it is to understand the ultimate fate of any new material introduced into the environment. Chlorofluorocarbons and the pesticide DDT both offered safety advantages over competing technologies, but they both broke down into products that accumulated in the environment in unexpected places, causing enormous and unanticipated harms.

The environmental and climate impacts of sulfate aerosols have been studied in many thousands of scientific papers over a century, and this deep well of knowledge greatly reduces the chance of unknown unknowns.

Grandiose claims notwithstanding—and especially considering that Stardust hasn’t disclosed anything about its particles or research process—it would be very difficult to make a pragmatic, risk-informed decision to start SRM efforts with these particles instead of sulfate.
We don’t want to claim that every single answer lies in academia. We’d be fools not to be excited by profit-driven innovation in solar power, EVs, batteries, or other sustainable technologies. But the math for sunlight reflection is just different. Why?

Because the role of private industry was essential in improving the efficiency, driving down the costs, and increasing the market share of renewables and other forms of cleantech. When cost matters and we can easily evaluate the benefits of the product, competitive, for-profit capitalism can work wonders.
But SRM is already technically feasible and inexpensive, with deployment costs that are negligible compared with the climate damage it averts. The essential questions of whether or how to use it come down to far thornier societal issues: How can we best balance the risks and benefits? How can we ensure that it’s used in an equitable way? How do we make legitimate decisions about SRM on a planet with such sharp political divisions? Trust will be the most important single ingredient in making these decisions. And trust is the one product for-profit innovation does not naturally manufacture.

Ultimately, we’re just two researchers. We can’t make investors in these startups do anything differently. Our request is that they think carefully, and beyond the logic of short-term profit. If they believe geoengineering is worth exploring, could it be that their support will make it harder, not easier, to do that?

David Keith is the professor of geophysical sciences at the University of Chicago and founding faculty director of the school’s Climate Systems Engineering Initiative. Daniele Visioni is an assistant professor of earth and atmospheric sciences at Cornell University and head of data for Reflective, a nonprofit that develops tools and provides funding to support solar geoengineering research.

Read More »

Oil Falls as Saudi Price Cuts Signal Market Gloom

Oil extended declines after Saudi Arabia lowered the prices of its crude, signaling uncertainty surrounding the supply outlook, while equities slumped in another pressure on the commodity. West Texas Intermediate settled near $59, sliding around 0.3% on the day after falling in the previous two sessions. Volatility hit Wall Street, also weighing on oil prices. Saudi Arabia lowered the price of its main oil grade to Asia for December to the lowest level in 11 months. Even though the price cut met expectations, traders saw it as a bearish signal about the cartel’s confidence in the market’s ability to absorb new supply, with a glut widely expected to begin next year. Prices have given up ground since the US sanctioned Russia’s two largest oil producers last month over Moscow’s war in Ukraine, and abundant supplies have so far managed to cushion the impact of stunted flows from the OPEC+ member to major buyers India and China. Key price gauges indicate that supply perceptions are worsening, with the premium that front-month WTI futures command over the next month’s contract, known as the prompt spread, narrowing in the past few weeks to near February lows. That’s also true for Brent crude. Still, US shale companies are forging ahead with their production plans, with Diamondback Energy Inc., Coterra Energy Inc. and Ovintiv Inc. this week announcing they intend to raise output slightly for this year or 2026 despite oil prices falling close to the threshold needed for many US shale wells to break even. Oversupply gloom hasn’t permeated refined products markets, though. Traders are still assessing how those supplies may be impacted by the US clampdown on purchases of Russian crude and Ukraine’s strikes on its neighbor’s energy assets. Those factors, as well as diminishing global refining capacity, bolstered diesel futures and gasoil

Read More »

Clearway projects strong renewables outlook while adding gas assets

300 MW: Average size of Clearway’s current projects. Most of the projects slated for 2030 and beyond are 500 MW or more, the company said.

>90%: Percentage of projects planned for 2031-2032 that are focused in the Western U.S. or the PJM Interconnection, “where renewables are cost competitive and/or valued.”

1.8 GW: Capacity of power purchase agreements meant to support data center loads the company has signed so far this year.

Meeting digital infrastructure energy needs

Independent power producer Clearway Energy announced a U.S. construction pipeline of 27 GW of generation and storage resources following strong third-quarter earnings. The San Francisco-based company is owned by Global Infrastructure Partners and TotalEnergies. According to its earnings presentation, it has an operating portfolio of more than 12 GW of wind, solar, gas and storage.

Clearway President and CEO Craig Cornelius said on an earnings call Tuesday that the company is positioning itself to serve large load data centers. “Growth in both the medium and long term reflects the strong traction we’ve made in supporting the energy needs of our country’s digital infrastructure build-out and reindustrialization,” Cornelius said during the call. “We expect this to be a core driver of Clearway’s growth outlook well into the 2030s.”

Cornelius noted that Clearway executed and awarded 1.8 GW of power purchase agreements meant to support data center loads so far this year, and is currently developing generation aimed at serving “gigawatt class co-located data centers across five states.” Its 27 GW pipeline of projects in development or under construction includes 8.2 GW of solar, 4.6 GW of wind, 1.3 GW of wind repowering, 8 GW of standalone storage, 2.1 GW of paired storage and 2.6 GW of natural gas aimed at serving data centers. Under the OBBBA, wind and solar projects that begin construction by July 4,

Read More »

Google Cloud aims for more cost-effective Arm computing with Axion N4A

It’s not alone: AWS introduced its own Arm-based chip, Graviton, in 2018 to reduce the cost of running internal cloud workloads such as Amazon retail IT, and now 50% of new AWS instances run on it. Microsoft, too, recently developed an Arm chip, Cobalt, to run Microsoft 365 and to offer Azure services. Google’s N4A instances will be available across services including Compute Engine for running virtual machines directly, Google Kubernetes Engine (GKE) for running containerized workloads, and Dataproc for big data and analytics. They will be accessible in the us-central1 (Iowa), us-east4 (N. Virginia), europe-west3 (Frankfurt) and europe-west4 (Netherlands) regions initially. The company describes N4A as a complement to the C4A instances it launched last October. These are designed for heavier workloads such as high-traffic web and application servers, ad servers, game servers, data analytics, databases of any size, and CPU-based AI and machine learning. Also coming “soon” is C4A Metal, a bare-metal instance for specialized workloads in a non-virtualized environment, such as custom hypervisors, security workloads, or CI/CD pipelines.
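For developers who want to try Arm-based instances from code, the sketch below uses the google-cloud-compute Python client to request a VM. Treat it as a rough sketch: the machine-type string ("n4a-standard-4") and the Arm64 image family are assumptions to verify against Google’s documentation, since the article doesn’t list exact instance shapes.

```python
from google.cloud import compute_v1  # pip install google-cloud-compute

def create_arm_vm(project: str, zone: str = "us-central1-a",
                  name: str = "n4a-demo",
                  machine_type: str = "n4a-standard-4") -> None:
    """Create a VM on an Arm machine type; the names above are illustrative assumptions."""
    instance = compute_v1.Instance()
    instance.name = name
    instance.machine_type = f"zones/{zone}/machineTypes/{machine_type}"

    # Boot disk from an Arm64 image family (confirm the family name before using).
    disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-12-arm64",
            disk_size_gb=10,
        ),
    )
    instance.disks = [disk]
    instance.network_interfaces = [
        compute_v1.NetworkInterface(network="global/networks/default")
    ]

    operation = compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )
    operation.result()  # block until the create operation completes
```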

Read More »

Google’s cheaper, faster TPUs are here, while users of other AI processors face a supply crunch

Opportunities for the AI industry

LLM vendors such as OpenAI and Anthropic, which still have relatively young code bases and are continuously evolving them, also have much to gain from the arrival of Ironwood for training their models, said Forrester vice president and principal analyst Charlie Dai. In fact, Anthropic has already agreed to procure 1 million TPUs for training its models and running inference. Other, smaller vendors using Google’s TPUs for training models include Lightricks and Essential AI.

Google has seen a steady increase in demand for its TPUs (which it also uses to run internal services), and is expected to buy $9.8 billion worth of TPUs from Broadcom this year, compared to $6.2 billion and $2.04 billion in 2024 and 2023 respectively, according to Harrowell. “This makes them the second-biggest AI chip program for cloud and enterprise data centers, just tailing Nvidia, with approximately 5% of the market. Nvidia owns about 78% of the market,” Harrowell said.

The legacy problem

While some analysts were optimistic about the prospects for TPUs in the enterprise, IDC research director Brandon Hoff said enterprises will most likely stay away from Ironwood or TPUs in general because of their existing code bases written for other platforms. “For enterprise customers who are writing their own inferencing, they will be tied into Nvidia’s software platform,” Hoff said, referring to CUDA, the software platform that runs on Nvidia GPUs. CUDA was released to the public in 2007, while the first version of TensorFlow has only been around since 2015.

Read More »

Top network and data center events 2025 & 2026

Denise Dubie is a senior editor at Network World with nearly 30 years of experience writing about the tech industry. Her coverage areas include AIOps, cybersecurity, networking careers, network management, observability, SASE, SD-WAN, and how AI transforms enterprise IT. A seasoned journalist and content creator, Denise writes breaking news and in-depth features, and she delivers practical advice for IT professionals while making complex technology accessible to all. Before returning to journalism, she held senior content marketing roles at CA Technologies, Berkshire Grey, and Cisco. Denise is a trusted voice in the world of enterprise IT and networking.

Read More »

Cisco launches AI infrastructure, AI practitioner certifications

“This new certification focuses on artificial intelligence and machine learning workloads, helping technical professionals become AI-ready and successfully embed AI into their workflows,” said Pat Merat, vice president at Learn with Cisco, in a blog detailing the new AI Infrastructure Specialist certification. “The certification validates a candidate’s comprehensive knowledge in designing, implementing, operating, and troubleshooting AI solutions across Cisco infrastructure.”

Separately, the AITECH certification is part of the Cisco AI Infrastructure track, which complements its existing networking, data center, and security certifications. Cisco says the AITECH cert training is intended for network engineers, system administrators, solution architects, and other IT professionals who want to learn how AI impacts enterprise infrastructure. The training curriculum covers topics such as:

Utilizing AI for code generation, refactoring, and using modern AI-assisted coding workflows.
Using generative AI for exploratory data analysis, data cleaning, transformation, and generating actionable insights.
Designing and implementing multi-step AI-assisted workflows and understanding complex agentic systems for automation.
Learning AI-powered requirements, evaluating customization approaches, considering deployment strategies, and designing robust AI workflows.
Evaluating, fine-tuning, and deploying pre-trained AI models, and implementing Retrieval Augmented Generation (RAG) systems.
Monitoring, maintaining, and optimizing AI-powered workflows, ensuring data integrity and security.

AITECH certification candidates will learn how to use AI to enhance productivity, automate routine tasks, and support the development of new applications. The training program includes hands-on labs and simulations to demonstrate practical use cases for AI within Cisco and multi-vendor environments.

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE