Your Gateway to Power, Energy, Datacenters, Bitcoin and AI

Dive into the latest industry updates, our exclusive Paperboy Newsletter, and curated insights designed to keep you informed. Stay ahead with minimal time spent.

Discover What Matters Most to You

Explore ONMINE’s curated content, from our Paperboy Newsletter to industry-specific insights tailored for energy, Bitcoin mining, and AI professionals.

AI

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Bitcoin

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Datacenter

Lorem Ipsum is simply dummy text of the printing and typesetting industry.

Energy

Lorem Ipsum is simply dummy text of the printing and typesetting industry.


Featured Articles

SPE Offshore Europe starts countdown to event in Aberdeen

The success of the upcoming Offshore Europe event in Aberdeen will be a barometer for the industry as it navigates a “turbulent, interesting but difficult” period. Organisers officially launched the SPE Offshore Europe 2025 event, which will take place in September. Offshore Europe organiser RX Global, formerly known as Reed Exhibitions, said it expects 40,000 attendees at the biennial event, which has started a 140-day countdown until doors open at the free-to-attend trade show for the offshore energy industry. © Erikka Askeland/DCT Media. OEUK CEO David Whitehouse, chair of the SPE Offshore Europe committee. David Whitehouse, the chief executive of trade body Offshore Energies UK (OEUK) and the event committee chair, said Offshore Europe would retain focus on its “clear oil and gas heritage” but will also emphasise opportunities in wind, floating wind, carbon storage, hydrogen and geothermal, as well as engaging young people to consider careers in the energy industry. In addition to a wide slate of speakers, including senior politicians and C-suite executives from the likes of BP, Subsea7 and Harbour Energy, the event will also aim to be “impactful” and meaningful by welcoming representatives from trade unions and “voices that don’t always agree with people like myself”, he said. He added: “What we will be debating is the future role of oil and gas in Europe’s energy future. How do we turn those renewable energy targets into true delivery plans? How do we unlock the real opportunities for our supply chain? “Europe is going through what appears to be deindustrialisation masquerading as decarbonisation – how do we change that path?” The official theme of the event is “unlocking Europe’s potential in offshore energy”. But Whitehouse noted that between now and then the industry is facing major policy interventions that underpin the future of the North Sea industry, including oil and

Read More »

Business group plea for Statera Kintore hydrogen plan

Calls to support Statera’s 3GW hydrogen production scheme in the north east of Scotland have been made as councillors prepare to decide its fate. A decision on Kintore Hydrogen is expected to come before an Aberdeenshire Council meeting on Thursday 24 April after the proposal received 83 letters of objection. Council officers have recommended that the project is approved, citing sustainable economic growth and the potential for Aberdeenshire to play a leading role in the development of clean, green energy. The local authority also recently backed a large battery energy storage system (BESS) scheme nearby, thought to be led by Chinese firm CR Power. Statera has said the project’s size and scale are on a par with major hydrogen production schemes in Saudi Arabia and the Port of Rotterdam, with associated economic benefits. © Supplied by Statera. Kintore Hydrogen. The private equity-backed company has predicted the project will create over 3,000 jobs in construction and a further 300 when operational, generating £400 million in GVA for the region’s economy. Aberdeen Grampian Chamber of Commerce (AGCC) has called on councillors to give the green light to the “major energy transition project in Aberdeenshire”. The proposed scheme would use electricity from offshore wind to produce green hydrogen. AGCC chief executive Russell Borthwick said: “Kintore Hydrogen is vital to north east Scotland’s energy transition ambition – delivering an estimated £1 billion boost to the economy, £400 million of that in our region, supporting almost 3,500 jobs. “The local workforce and supply chain stand to be the biggest winners from this major investment in our region as Aberdeenshire moves towards a clean energy future. “That’s why it’s so important that the whole community – businesses, local residents and government at all levels – unites behind ensuring this vision becomes a reality. “The opportunity for our region to play

Read More »

Lower gas reserves expected this winter as UK’s largest storage facility halts

Centrica has ceased injecting natural gas into the UK’s largest energy storage facility, located in the North Sea, which is likely to mean lower gas reserves this winter. The company is believed to have stopped refilling the Rough gas storage facility off the Yorkshire coast this month, which comprises about half of the UK’s energy storage capacity. Centrica warned in December that the Rough facility was making a loss of between £50m and £100m for the Centrica Energy Storage+ business division. The company has indicated that the storage facility, which was reopened in 2022 during the energy crisis to help plug demand, was not financially viable in prevailing market conditions. British Gas owner Centrica has said that it needs a cap-and-floor mechanism to redevelop the facility with £2 billion of its own cash so that it can store hydrogen. The company has opened talks with government over the future operation of the plant and met with Ed Miliband in March to discuss options for keeping it open. A spokesperson for the Department of Energy Security and Net Zero (DESNZ) said the government is “open to discussing proposals on gas storage sites, as long as it provides value for money for taxpayers”. Clean power mission boss Chris Stark said at a parliamentary hearing in January that the government was considering a regulatory mechanism to support hydrogen storage from around 2030. Unabated gas is envisaged to comprise up to 5% of the UK’s energy demand by 2030 under a system operator study on the clean power mission. Centrica group chief executive Chris O’Shea said on a webinar with analysts at the release of its annual results in February that the company was considering all options for Rough and had not made a decision on its continuation. The impetus

Read More »


Swapping LLMs isn’t plug-and-play: Inside the hidden cost of model migration

Swapping large language models (LLMs) is supposed to be easy, isn’t it? After all, if they all speak “natural language,” switching from GPT-4o to Claude or Gemini should be as simple as changing an API key… right?

In reality, each model interprets and responds to prompts differently, making the transition anything but seamless. Enterprise teams who treat model switching as a “plug-and-play” operation often grapple with unexpected regressions: broken outputs, ballooning token costs or shifts in reasoning quality.

This story explores the hidden complexities of cross-model migration, from tokenizer quirks and formatting preferences to response structures and context window performance. Based on hands-on comparisons and real-world tests, this guide unpacks what happens when you switch from OpenAI to Anthropic or Google’s Gemini and what your team needs to watch for.

Understanding Model Differences

Each AI model family has its own strengths and limitations. Some key aspects to consider include:

Tokenization variations – Different models use different tokenization strategies, which impact the input prompt length and its total associated cost.

Context window differences – Most flagship models allow a context window of 128K tokens; however, Gemini extends this to 1M and 2M tokens.

Instruction following – Reasoning models prefer simpler instructions, while chat-style models require clear and explicit instructions.

Formatting preferences – Some models prefer markdown while others prefer XML tags for formatting.

Model response structure – Each model has its own style of generating responses, which affects verbosity and factual accuracy. Some models perform better when allowed to “speak freely,” i.e., without adhering to an output structure, while others prefer JSON-like output structures. Research suggests an interplay between structured response generation and overall model performance.

Migrating from OpenAI to Anthropic

Imagine a real-world scenario where you’ve just benchmarked GPT-4o, and now your CTO wants to try Claude 3.5. Before making any decision, consider the pointers below:

Tokenization variations

All model providers pitch extremely competitive per-token costs. For example, this post shows how tokenization costs for GPT-4 plummeted in just one year between 2023 and 2024. However, from a machine learning (ML) practitioner’s viewpoint, basing model choices on advertised per-token prices alone can be misleading.

A practical case study comparing GPT-4o and Claude 3.5 Sonnet exposes the verbosity of Anthropic models’ tokenizer: it tends to break the same text input into more tokens than OpenAI’s tokenizer.
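To see why a verbose tokenizer can erase a headline price advantage, here is a minimal cost sketch. The token counts, prices and request volume below are hypothetical placeholders for illustration, not published figures from any provider.

```python
# Sketch: why per-token price alone can mislead during migration.
# All numbers below are hypothetical, chosen only to illustrate the effect.

def monthly_cost(tokens_per_request: int, price_per_mtok: float,
                 requests_per_month: int) -> float:
    """Dollar cost of a month of input tokens at a given price per 1M tokens."""
    return tokens_per_request * requests_per_month * price_per_mtok / 1_000_000

# Suppose the same prompt tokenizes to 1,000 tokens on provider A but
# 1,300 tokens on provider B (a ~30% more verbose tokenizer).
cost_a = monthly_cost(1_000, price_per_mtok=2.50, requests_per_month=100_000)
cost_b = monthly_cost(1_300, price_per_mtok=2.00, requests_per_month=100_000)

# B's headline price is 20% lower, yet its verbose tokenizer makes it
# more expensive in practice.
print(f"A: ${cost_a:.2f}  B: ${cost_b:.2f}")  # A: $250.00  B: $260.00
```

In real migrations you would measure token counts with each provider's own tokenizer on a representative sample of your prompts rather than assume a fixed ratio.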

Context window differences

Each model provider is pushing the boundaries to allow longer and longer input text prompts. However, different models may handle different prompt lengths differently. For example, Sonnet 3.5 offers a larger context window of up to 200K tokens, compared to the 128K context window of GPT-4. Despite this, GPT-4 has been observed to perform best with contexts up to 32K tokens, whereas Sonnet 3.5’s performance declines on prompts longer than 8K–16K tokens.

Moreover, there is evidence that even models within the same family handle different context lengths differently, i.e., they perform better at short contexts and worse at longer contexts on the same task. This means that replacing one model with another (either from the same or a different family) might result in unexpected performance deviations.
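One practical guard is to check prompts against each candidate model's context budget before sending them, so a migration does not silently truncate or reject inputs. The sketch below uses a crude whitespace proxy for token counting and illustrative limits; in practice you would use each provider's own tokenizer and the limits in its documentation.

```python
# Sketch: guard against silent context overflow when switching models.
# Limits are illustrative; whitespace splitting is a crude stand-in
# for a real tokenizer.

CONTEXT_LIMITS = {
    "gpt-4o": 128_000,
    "claude-3-5-sonnet": 200_000,
}

def fits_context(model: str, prompt: str, reserved_output: int = 4_096) -> bool:
    """Rough check that the prompt plus reserved output tokens fits the window."""
    approx_tokens = len(prompt.split())  # crude proxy for a real tokenizer
    return approx_tokens + reserved_output <= CONTEXT_LIMITS[model]

print(fits_context("gpt-4o", "Summarize the attached report."))  # True
```

A check like this is also a natural place to decide between truncation, chunking, or routing long inputs to a larger-window model.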

Formatting preferences

Unfortunately, even current state-of-the-art LLMs are highly sensitive to minor prompt formatting. The presence or absence of formatting such as markdown and XML tags can significantly change a model’s performance on a given task.

Empirical results across multiple studies suggest that OpenAI models prefer markdown-formatted prompts, including sectional delimiters, emphasis and lists. In contrast, Anthropic models prefer XML tags for delineating different parts of the input prompt. This nuance is well known to data scientists, and it is widely discussed in public forums (Has anyone found that using markdown in the prompt makes a difference?, Formatting plain text to markdown, Use XML tags to structure your prompts).
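Because of this split in formatting preferences, a migration often involves mechanically re-wrapping prompt sections. A minimal sketch of one direction, converting markdown `##` headings into XML-style tags (the section names and the heading convention are assumptions about how your prompts are structured):

```python
# Sketch: re-wrap a markdown-sectioned prompt into XML-tagged sections
# for a model that responds better to XML delimiters.
import re

def markdown_sections_to_xml(prompt: str) -> str:
    """Convert '## Heading' sections into <heading>...</heading> blocks."""
    parts = re.split(r"^## +(.+)$", prompt, flags=re.MULTILINE)
    # parts alternates: [preamble, title1, body1, title2, body2, ...]
    out = [parts[0].strip()] if parts[0].strip() else []
    for title, body in zip(parts[1::2], parts[2::2]):
        tag = title.strip().lower().replace(" ", "_")
        out.append(f"<{tag}>\n{body.strip()}\n</{tag}>")
    return "\n".join(out)

md = "## Context\nSome background.\n## Task\nSummarize the context."
print(markdown_sections_to_xml(md))
```

Keeping such a transformation in one place means prompt content lives in a single canonical form and the delimiter style becomes a per-model rendering choice.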

For more insights, check out the official best prompt engineering practices released by OpenAI and Anthropic, respectively.  

Model response structure

OpenAI GPT-4o models are generally biased toward generating JSON-structured outputs, whereas Anthropic models tend to adhere about equally well to a requested JSON or XML schema specified in the user prompt. However, imposing or relaxing structure on a model’s outputs is a model-dependent, empirically driven decision based on the underlying task. During a model migration, modifying the expected output structure also entails slight adjustments to the post-processing of the generated responses.
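During the transition, post-processing can be made tolerant to both styles so that either model's output parses. A sketch of such a parser, assuming the response carries a JSON object either bare or inside an XML-style wrapper tag (the tag name `result` is an arbitrary example):

```python
# Sketch: tolerant post-processing for migrations. Accepts a bare JSON
# object or one wrapped in an XML-style tag, since different models
# honor the requested schema differently.
import json
import re

def parse_structured(response: str, tag: str = "result") -> dict:
    """Extract a JSON object from raw text or from inside <tag>...</tag>."""
    m = re.search(rf"<{tag}>(.*?)</{tag}>", response, flags=re.DOTALL)
    payload = m.group(1) if m else response
    start, end = payload.find("{"), payload.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in response")
    return json.loads(payload[start:end + 1])

print(parse_structured('Sure! <result>{"score": 7}</result>'))  # {'score': 7}
```

This kind of shim lets downstream code stay unchanged while you compare models, and can be deleted once a single output contract is settled.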

Cross-model platforms and ecosystems

LLM switching is more complicated than it looks. Recognizing the challenge, major enterprises are increasingly focusing on providing solutions to tackle it. Companies like Google (Vertex AI), Microsoft (Azure AI Studio) and AWS (Bedrock) are actively investing in tools to support flexible model orchestration and robust prompt management.

For example, at Google Cloud Next 2025, Google announced that Vertex AI allows users to work with more than 130 models through an expanded model garden, unified API access and a new feature, AutoSxS, which enables head-to-head comparisons of different model outputs with detailed insights into why one model’s output is better than another’s.

Standardizing model and prompt methodologies

Migrating prompts across AI model families requires careful planning, testing and iteration. By understanding the nuances of each model and refining prompts accordingly, developers can ensure a smooth transition while maintaining output quality and efficiency.

ML practitioners must invest in robust evaluation frameworks, maintain documentation of model behaviors and collaborate closely with product teams to ensure the model outputs align with end-user expectations. Ultimately, standardizing and formalizing the model and prompt migration methodologies will equip teams to future-proof their applications, leverage best-in-class models as they emerge, and deliver users more reliable, context-aware, and cost-efficient AI experiences.
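The evaluation framework mentioned above need not be elaborate to catch migration regressions. A minimal sketch: each test case pairs a prompt with a predicate on the output, and the harness runs any candidate model callable against the suite before cutover. The `fake_model` stand-in is purely illustrative; a real `model_fn` would wrap an API call.

```python
# Sketch: a minimal migration regression check. `model_fn` is any
# callable mapping a prompt string to a response string.

def run_regression(model_fn, cases):
    """Return (passed_count, failures) for (prompt, predicate) cases."""
    failures = []
    for prompt, predicate in cases:
        output = model_fn(prompt)
        if not predicate(output):
            failures.append((prompt, output))
    return len(cases) - len(failures), failures

# Stand-in model for illustration; swap in a real API call.
fake_model = lambda p: "PARIS" if "capital" in p else "unknown"
cases = [
    ("What is the capital of France?", lambda o: "paris" in o.lower()),
    ("List three primes.", lambda o: o != ""),
]
passed, failures = run_regression(fake_model, cases)
print(passed, len(failures))  # 2 0
```

Running the same suite against the incumbent and candidate models turns "the outputs feel different" into a concrete, diffable pass/fail report.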

Read More »

Keystone Restarts Oil Pipeline After Leak Prompted Shutdown

The operator of the Keystone oil pipeline brought the conduit back into service, putting an end to a week-long outage caused by an estimated 3,500-barrel spill in rural North Dakota. Most of the oil released has been recovered and remediation efforts have started, South Bow Corp. said in a statement Wednesday. The line will be able to operate at no more than 80% of the pressure levels at the time of the April 8 spill. At the time of failure, the line was transporting 17,844 barrels per hour, or the equivalent of 428,000 barrels a day. The restart, delayed by inclement weather, comes roughly two days after the company met all conditions imposed by the Pipeline and Hazardous Materials Safety Administration. South Bow will continue to monitor the system as an investigation into the causes of the spill continues, the company said. Keystone can transport as much as 620,000 barrels of Canadian crude daily to US Midwest and Gulf Coast markets.

Read More »


Humber carbon emitter wants government signal on Viking CCS

Power company VPI has called for clarity to progress the Viking carbon capture and storage (CCS) project and help drive the future of heavy industries in the Humber. VPI requested a signal from the UK government in its upcoming comprehensive spending review that it will be selected as an anchor emitter for the CCS project. The group owns the nearly 1.3GW Immingham thermal power plant, which provides power to the Humber’s two large oil refineries. VPI is planning a £1.5 billion carbon capture project, which will utilise Harbour Energy’s Viking CCS pipeline to transport carbon that will be buried in a depleted gas field in the North Sea. VPI chief executive Jorge Pikunic said: “Carbon capture and storage provides a once-in-a-generation opportunity to turn the Humber into a powerhouse of the future. If missed, it may not come again. “For the last five years, public officials have worked tirelessly with industry to set in motion the development of Viking CCS, a unique carbon capture and storage network, here in the Humber. “Proceeding with the next stage of Viking CCS now will demonstrate how a strategic, mission-driven government can successfully transition an industrial hub into a future powerhouse, in a prudent, value-for-money driven, just and meaningful way.”

Viking CCS

The Viking CCS pipeline will transport CO₂ captured from the industrial cluster at Immingham out to the Viking reservoirs via the Theddlethorpe gas terminal and an existing 75-mile (120km) pipeline that forms part of the Lincolnshire offshore gas gathering system (LOGGS). The project forms part of the UK’s track 2 CCS projects along with Scotland’s Acorn CCS project. While the UK government has backed the track 1 projects with around £22 billion of government funding, the track 2 proposals have not received similar pledges of support. Business leaders have warned

Read More »

APA Corp Makes Leadership Changes, Names Ben Rodgers as CFO

Oil and natural gas exploration and production company APA Corporation has recently made changes to its executive leadership team. The company said in a media release that Ben Rodgers has been named executive vice president (EVP) and chief financial officer (CFO), effective May 12, 2025. Furthermore, Steve Riney will continue in his role as president, while Shad Frazier joined the company as senior vice president, U.S. Onshore Operations. Additionally, Donald Martin will join the company as vice president, Decommissioning, effective May 26, 2025, APA said. In his role as EVP and CFO, Rodgers will oversee all financial activities and departments, including Accounting, Audit, Investor Relations, Planning, Tax, and Treasury. He joined APA in 2018 and previously served as SVP, Finance, and Treasurer. He also served as CFO of Altus Midstream and later as a director on the board of Kinetik Holdings Inc., APA said. He currently serves on the board of Khalda Petroleum Company, a joint venture between APA subsidiary Apache Corporation and Egypt Petroleum Company. In his position, Riney will continue overseeing asset development and operations. Both Frazier and Martin have been added to Riney’s team to help oversee operations. APA highlighted that Frazier has nearly 30 years of industry experience, most recently as vice president, Production Operations at Endeavor Energy Resources, LP. Previously, he held various leadership positions at Legacy Reserves and SandRidge Energy. Martin brings 20 years of operations and decommissioning portfolio experience, most recently as the head of decommissioning and projects at Spirit Energy. He has also managed decommissioning at Canadian Natural Resources, APA said. “I am pleased to welcome Ben to our executive leadership team. He has done a tremendous job and will bring valuable expertise to our financial operations”, John J. Christmann, APA Corporation CEO, said. “I am also excited to welcome both Shad and Donald

Read More »

TotalEnergies Agrees 15-year LNG Supply Deal with Enadom

Global energy major TotalEnergies SE signed a heads of agreement (HoA) with Energia Natural Dominicana Enadom, S.R.L. (Enadom) for the delivery of 400,000 tons of liquefied natural gas (LNG) per year. TotalEnergies said in a media release that the HoA with the joint venture between AES Dominicana and Energas in the Dominican Republic is subject to the finalization of sale and purchase agreements (SPAs). Once the SPAs are signed, the agreement will start in mid-2027, with a 15-year term, and the price will be indexed to Henry Hub. The deal enables Enadom to supply natural gas to the 470 MW combined-cycle power plant, currently under construction, which will increase the country’s electricity generation capacity, TotalEnergies said. This project contributes to the energy transition of the Dominican Republic by reducing its dependence on coal and fuel oil through the use of a less carbon-intensive energy source, natural gas, the company said. “We are pleased to have signed this agreement to answer, alongside AES and its partners, the energy needs of the Dominican Republic. This new contract underscores TotalEnergies’ leadership in the LNG sector and our commitment to supporting the island’s energy transition. It will be a natural outlet for our US LNG supply which will progressively increase”, Gregory Joffroy, Senior Vice President LNG at TotalEnergies, said. TotalEnergies said it is the world’s third largest LNG player with a global portfolio of 40 Mt/y in 2024 thanks to its interests in liquefaction plants in all geographies. “This agreement with TotalEnergies is the result of the confidence placed in the Dominican Republic’s energy sector and, specifically, in Enadom and AES. This partnership, alongside Enadom’s, has demonstrated investment capabilities in providing natural gas to the Dominican electricity market by ensuring a reliable, competitive, and environmentally responsible energy supply. Enadom is proud to play a pivotal

Read More »

Superdielectrics spies £60m for domestic energy storage, possible IPO

E.ON has partnered with Cambridge-based innovator Superdielectrics to lower the cost of consumer electricity bills by bringing energy storage into people’s homes. Led by Jim Heathcote, former chief executive of ITM Power, and backed by 190 investors including board member Michael Spencer, the company has its sights on raising up to £60 million to commercialise its technology. Heathcote, who floated the electrolyser business on the stock market in 2004, said his latest company would be “looking to raise some money for a pilot production plant, probably £40m to £60m would be the range”. Timing would depend on capital markets, he added. He compared Superdielectrics’ technology to the personal computer, describing traditional battery storage technology as similar to the computer mainframe. “Imagine grid balancing with large-scale battery technology,” he said in an interview. “That is like the mainframe computer in the 1960s.” The company’s vision is to move towards a distributed energy storage system, a shift so great that it would be comparable to the rollout of the home computer. He said listing the energy storage company on the public markets “would be good for accelerating the company’s development”. “What we’re trying to do is to develop a completely new energy system that is lower cost than the existing fossil fuel system,” Heathcote said. “The storage technology has got to be low cost and safe so that when you add the cost of the renewables to the storage technology, the cost of the energy that you store and use is cheaper than the existing electricity price.” Heathcote explained that the membrane-based energy storage technology developed by Superdielectrics can help combat fluctuations in solar and wind power as more renewable energy is adopted on the grid. While solar panels are not a prerequisite, homes with solar installed could make further cost savings

Read More »

EPA grants exemptions to mercury, air toxics rule to more than a third of US coal capacity

The Environmental Protection Agency is giving more than a third of U.S. coal-fired capacity two-year exemptions from the Mercury and Air Toxics Standards rule, according to a list of affected power plants the EPA appears to have released Monday. The exemptions are part of a broad Trump administration effort to bolster coal-fired generation, including by potentially revising the MATS rule. The EPA gave reprieves from the most recent version of the MATS rule to coal-fired power plants totaling about 71.3 GW, or about 37% of the U.S. coal fleet, which totaled about 193 GW at the start of 2024. Power plant owners receiving the largest exemptions are Southern Co., at about 11,285 MW; NRG, at about 7,100 MW; the Tennessee Valley Authority, at about 6,660 MW; and Basin Electric Power Cooperative, at about 3,960 MW, according to the units listed by the EPA and power plant data from the U.S. Energy Information Administration. Keystone-Conemaugh landed exemptions for two power plants in Pennsylvania totaling 3,823 MW, according to the EPA and EIA. Ameren Missouri received exemptions for two power plants totaling 3,490 MW, and Associated Electric Cooperative received exemptions for two power plants in Missouri totaling 2,482 MW. FirstEnergy’s Monongahela Power was granted exemptions for two power plants in West Virginia totaling 3,204 MW. Oklahoma Gas and Electric was granted exemptions for units in Oklahoma totaling 2,114 MW. President Donald Trump on April 8 signed an executive order directing the EPA to allow certain coal-fired plants to comply with a less stringent version of the MATS rule for two years after it takes effect on July 8, 2027. The most recent version of the rule, which imposes more stringent requirements for control of those emissions, was put in place by the Biden administration. Trump said pollution control equipment that would enable the power plants

Read More »

Solar advocates lobby on strong fundamentals amid political uncertainty

Dive Brief:

As industry advocates seek to keep the solar-boosting elements of the Inflation Reduction Act intact, they’re making the case to lawmakers that solar energy is boosting American manufacturing while helping meet spiking electricity demand. “One of the things that’s resonating with lawmakers now is that you don’t want to strand investments that have been made by American businesses in local economies,” said Sean Gallagher, senior vice president of policy at the Solar Energy Industries Association. “You don’t want these factories that have opened up in the last couple years to go dark.” At the same time, the Trump administration’s trade policies are putting pressure on the entire clean energy industry along with its investors, said Paul DeCotis, a senior partner and head of East Coast energy and utilities at West Monroe.

Dive Insight:

“As we start this policy of isolationism internationally, we’re going to lose access to the very same minerals, rare earths and materials we need to bolster the clean energy industry,” DeCotis said. “We can manufacture solar panels in the U.S.,” he added — or offshore wind turbines, or batteries. “But our supply chain comes from international partners, so unless we start building our own supply chain of rare earths and minerals and upstream materials needed for the clean energy industry … we don’t have the inputs necessary to support the manufacturing of those technologies.” The buildout of the clean energy economy that was envisioned by the Biden administration, with IRA and Infrastructure Investment and Jobs Act funding, offered a “glide path” for investors to predict their trajectory, DeCotis said — but the macroeconomic impacts of President Trump’s tariffs and other policy initiatives are disrupting that. “We’re beginning to see some supply chain concerns over availability of equipment for these large infrastructure projects, and for some

Read More »

West of Orkney developers helped support 24 charities last year

The developers of the 2GW West of Orkney wind farm paid out a total of £18,000 to 24 organisations from its small donations fund in 2024. The money went to projects across Caithness, Sutherland and Orkney, including a mental health initiative in Thurso and a scheme by Dunnet Community Forest to improve the quality of meadows through the use of traditional scythes. Established in 2022, the fund offers up to £1,000 per project towards programmes in the far north. In addition to the small donations fund, the West of Orkney developers intend to follow other wind farms by establishing a community benefit fund once the project is operational. West of Orkney wind farm project director Stuart McAuley said: “Our donations programme is just one small way in which we can support some of the many valuable initiatives in Caithness, Sutherland and Orkney. “In every case we have been immensely impressed by the passion and professionalism each organisation brings, whether their focus is on sport, the arts, social care, education or the environment, and we hope the funds we provide help them achieve their goals.” In addition to the local donations scheme, the wind farm developers have helped fund a £1 million research and development programme led by EMEC in Orkney and a £1.2m education initiative led by UHI. It also provided £50,000 to support the FutureSkills apprenticeship programme in Caithness, with funds going to employment and training costs to help tackle skill shortages in the North of Scotland. The West of Orkney wind farm is being developed by Corio Generation, TotalEnergies and Renewable Infrastructure Development Group (RIDG). The project is among the leaders of the ScotWind cohort, having been the first to submit its offshore consent documents in late 2023. In addition, the project’s onshore plans were approved by the

Read More »

Biden bans US offshore oil and gas drilling ahead of Trump’s return

US President Joe Biden has announced a ban on offshore oil and gas drilling across vast swathes of the country’s coastal waters. The decision comes just weeks before his successor Donald Trump, who has vowed to increase US fossil fuel production, takes office. The drilling ban will affect 625 million acres of federal waters across America’s eastern and western coasts, the eastern Gulf of Mexico and Alaska’s Northern Bering Sea. The decision does not affect the western Gulf of Mexico, where much of American offshore oil and gas production occurs and is set to continue. In a statement, President Biden said he is taking action to protect the regions “from oil and natural gas drilling and the harm it can cause”. “My decision reflects what coastal communities, businesses, and beachgoers have known for a long time: that drilling off these coasts could cause irreversible damage to places we hold dear and is unnecessary to meet our nation’s energy needs,” Biden said. “It is not worth the risks. “As the climate crisis continues to threaten communities across the country and we are transitioning to a clean energy economy, now is the time to protect these coasts for our children and grandchildren.”

Offshore drilling ban

The White House said Biden used his authority under the 1953 Outer Continental Shelf Lands Act, which allows presidents to withdraw areas from mineral leasing and drilling. However, the law does not give a president the right to unilaterally reverse a drilling ban without congressional approval. This means that Trump, who pledged to “unleash” US fossil fuel production during his re-election campaign, could find it difficult to overturn the ban after taking office.

Sunset shot of the Shell Olympus platform in the foreground and the Shell Mars platform in the background in the Gulf of Mexico

Trump

Read More »

The Download: our 10 Breakthrough Technologies for 2025

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Introducing: MIT Technology Review’s 10 Breakthrough Technologies for 2025

Each year, we spend months researching and discussing which technologies will make the cut for our 10 Breakthrough Technologies list. We try to highlight a mix of items that reflect innovations happening in various fields. We look at consumer technologies, large industrial-scale projects, biomedical advances, changes in computing, climate solutions, the latest in AI, and more. We’ve been publishing this list every year since 2001 and, frankly, have a great track record of flagging things that are poised to hit a tipping point. It’s hard to think of another industry that has as much of a hype machine behind it as tech does, so the real secret of the TR10 is really what we choose to leave off the list. Check out the full list of our 10 Breakthrough Technologies for 2025, which is front and center in our latest print issue. It’s all about the exciting innovations happening in the world right now, and includes some fascinating stories, such as:

+ How digital twins of human organs are set to transform medical treatment and shake up how we trial new drugs.
+ What will it take for us to fully trust robots? The answer is a complicated one.
+ Wind is an underutilized resource that has the potential to steer the notoriously dirty shipping industry toward a greener future. Read the full story.
+ After decades of frustration, machine-learning tools are helping ecologists to unlock a treasure trove of acoustic bird data—and to shed much-needed light on their migration habits. Read the full story.
+ How poop could help feed the planet—yes, really. Read the full story.
Roundtables: Unveiling the 10 Breakthrough Technologies of 2025

Last week, Amy Nordrum, our executive editor, joined our news editor Charlotte Jee to unveil our 10 Breakthrough Technologies of 2025 in an exclusive Roundtable discussion. Subscribers can watch their conversation back here. And, if you’re interested in previous discussions about topics ranging from mixed reality tech to gene editing to AI’s climate impact, check out some of the highlights from the past year’s events.

This international surveillance project aims to protect wheat from deadly diseases

For as long as there’s been domesticated wheat (about 8,000 years), there has been harvest-devastating rust. Breeding efforts in the mid-20th century led to rust-resistant wheat strains that boosted crop yields, and rust epidemics receded in much of the world. But now, after decades, rusts are considered a reemerging disease in Europe, at least partly due to climate change. An international initiative hopes to turn the tide by scaling up a system to track wheat diseases and forecast potential outbreaks to governments and farmers in close to real time. And by doing so, they hope to protect a crop that supplies about one-fifth of the world’s calories. Read the full story. —Shaoni Bhattacharya

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Meta has taken down its creepy AI profiles
Following a big backlash from unhappy users. (NBC News)
+ Many of the profiles were likely to have been live from as far back as 2023. (404 Media)
+ It also appears they were never very popular in the first place. (The Verge)

2 Uber and Lyft are racing to catch up with their robotaxi rivals
After abandoning their own self-driving projects years ago. (WSJ $)
+ China’s Pony.ai is gearing up to expand to Hong Kong. (Reuters)

3 Elon Musk is going after NASA
He’s largely veered away from criticising the space agency publicly—until now. (Wired $)
+ SpaceX’s Starship rocket has a legion of scientist fans. (The Guardian)
+ What’s next for NASA’s giant moon rocket? (MIT Technology Review)

4 How Sam Altman actually runs OpenAI
Featuring three-hour meetings and a whole lot of Slack messages. (Bloomberg $)
+ ChatGPT Pro is a pricey loss-maker, apparently. (MIT Technology Review)

5 The dangerous allure of TikTok
Migrants’ online portrayals of their experiences in America aren’t always reflective of their realities. (New Yorker $)

6 Demand for electricity is skyrocketing
And AI is only a part of it. (Economist $)
+ AI’s search for more energy is growing more urgent. (MIT Technology Review)

7 The messy ethics of writing religious sermons using AI
Skeptics aren’t convinced the technology should be used to channel spirituality. (NYT $)

8 How a wildlife app became an invaluable wildfire tracker
Watch Duty has become a safeguarding sensation across the US west. (The Guardian)
+ How AI can help spot wildfires. (MIT Technology Review)

9 Computer scientists just love oracles 🔮
Hypothetical devices are a surprisingly important part of computing. (Quanta Magazine)

10 Pet tech is booming 🐾
But not all gadgets are made equal. (FT $)
+ These scientists are working to extend the lifespan of pet dogs—and their owners. (MIT Technology Review)

Quote of the day

“The next kind of wave of this is like, well, what is AI doing for me right now other than telling me that I have AI?”

—Anshel Sag, principal analyst at Moor Insights and Strategy, tells Wired a lot of companies’ AI claims are overblown.
The big story

Broadband funding for Native communities could finally connect some of America’s most isolated places

September 2022

Rural and Native communities in the US have long had lower rates of cellular and broadband connectivity than urban areas, where four out of every five Americans live. Outside the cities and suburbs, which occupy barely 3% of US land, reliable internet service can still be hard to come by.

The covid-19 pandemic underscored the problem as Native communities locked down and moved school and other essential daily activities online. But it also kicked off an unprecedented surge of relief funding to solve it. Read the full story. —Robert Chaney

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Rollerskating Spice Girls is exactly what your Monday morning needs.
+ It’s not just you, some people really do look like their dogs!
+ I’m not sure if this is actually the world’s healthiest meal, but it sure looks tasty.
+ Ah, the old “bitten by a rabid fox” chestnut.

Read More »

Equinor Secures $3 Billion Financing for US Offshore Wind Project

Equinor ASA has announced a final investment decision on Empire Wind 1 and financial close for $3 billion in debt financing for the under-construction project offshore Long Island, expected to power 500,000 New York homes. The Norwegian majority state-owned energy major said in a statement it intends to farm down ownership “to further enhance value and reduce exposure”. Equinor has taken full ownership of Empire Wind 1 and 2 since last year, in a swap transaction with 50 percent co-venturer BP PLC that allowed the former to exit the Beacon Wind lease, also a 50-50 venture between the two. Equinor has yet to complete a portion of the transaction under which it would also acquire BP’s 50 percent share in the South Brooklyn Marine Terminal lease, according to the latest transaction update on Equinor’s website. The lease involves a terminal conversion project that was intended to serve as an interconnection station for Beacon Wind and Empire Wind, as agreed on by the two companies and the state of New York in 2022.  “The expected total capital investments, including fees for the use of the South Brooklyn Marine Terminal, are approximately $5 billion including the effect of expected future tax credits (ITCs)”, said the statement on Equinor’s website announcing financial close. Equinor did not disclose its backers, only saying, “The final group of lenders includes some of the most experienced lenders in the sector along with many of Equinor’s relationship banks”. “Empire Wind 1 will be the first offshore wind project to connect into the New York City grid”, the statement added. “The redevelopment of the South Brooklyn Marine Terminal and construction of Empire Wind 1 will create more than 1,000 union jobs in the construction phase”, Equinor said. On February 22, 2024, the Bureau of Ocean Energy Management (BOEM) announced

Read More »

USA Crude Oil Stocks Drop Week on Week

U.S. commercial crude oil inventories, excluding those in the Strategic Petroleum Reserve (SPR), decreased by 1.2 million barrels from the week ending December 20 to the week ending December 27, the U.S. Energy Information Administration (EIA) highlighted in its latest weekly petroleum status report, which was released on January 2. Crude oil stocks, excluding the SPR, stood at 415.6 million barrels on December 27, 416.8 million barrels on December 20, and 431.1 million barrels on December 29, 2023, the report revealed. Crude oil in the SPR came in at 393.6 million barrels on December 27, 393.3 million barrels on December 20, and 354.4 million barrels on December 29, 2023, the report showed. Total petroleum stocks – including crude oil, total motor gasoline, fuel ethanol, kerosene type jet fuel, distillate fuel oil, residual fuel oil, propane/propylene, and other oils – stood at 1.623 billion barrels on December 27, the report revealed. This figure was up 9.6 million barrels week on week and up 17.8 million barrels year on year, the report outlined. “At 415.6 million barrels, U.S. crude oil inventories are about five percent below the five year average for this time of year,” the EIA said in its latest report. “Total motor gasoline inventories increased by 7.7 million barrels from last week and are slightly below the five year average for this time of year. Finished gasoline inventories decreased last week while blending components inventories increased last week,” it added. “Distillate fuel inventories increased by 6.4 million barrels last week and are about six percent below the five year average for this time of year. Propane/propylene inventories decreased by 0.6 million barrels from last week and are 10 percent above the five year average for this time of year,” it went on to state. In the report, the EIA noted
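The reported draw is consistent with the stock figures; a quick sketch using the article's numbers:

```python
# EIA crude oil stocks excluding the SPR, in million barrels.
dec_27 = 415.6
dec_20 = 416.8
prior_year = 431.1                              # December 29, 2023

week_on_week = round(dec_27 - dec_20, 1)        # -1.2, the reported weekly draw
year_on_year = round(dec_27 - prior_year, 1)    # -15.5 versus a year earlier
```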

Read More »

More telecom firms were breached by Chinese hackers than previously reported

Broader implications for US infrastructure

The Salt Typhoon revelations follow a broader pattern of state-sponsored cyber operations targeting the US technology ecosystem. The telecom sector, serving as a backbone for industries including finance, energy, and transportation, remains particularly vulnerable to such attacks. While Chinese officials have dismissed the accusations as disinformation, the recurring breaches underscore the pressing need for international collaboration and policy enforcement to deter future attacks. The Salt Typhoon campaign has uncovered alarming gaps in the cybersecurity of US telecommunications firms, with breaches now extending to over a dozen networks. Federal agencies and private firms must act swiftly to mitigate risks as adversaries continue to evolve their attack strategies. Strengthening oversight, fostering industry-wide collaboration, and investing in advanced defense mechanisms are essential steps toward safeguarding national security and public trust.

Read More »

What is vibe coding, exactly?

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here. When OpenAI cofounder Andrej Karpathy excitedly took to X back in February to post about his new hobby, he probably had no idea he was about to coin a phrase that encapsulated an entire movement steadily gaining momentum across the world. “There’s a new kind of coding I call ‘vibe coding’, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists,” he said. “I’m building a project or webapp, but it’s not really coding—I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”  If this all sounds very different from poring over lines of code, that’s because Karpathy was talking about a particular style of coding with AI assistance. His words struck a chord among software developers and enthusiastic amateurs alike. In the months since, his post has sparked think pieces and impassioned debates across the internet. But what exactly is vibe coding? Who does it benefit, and what’s its likely future?
So, what is it? To truly understand vibe coding, it’s important to note that while the term may be new, the coding technology behind it isn’t. For the past few years, general-purpose chatbots like Anthropic’s Claude, OpenAI’s ChatGPT, and Google DeepMind’s Gemini have been getting better at writing code to build software, including games, websites, and apps. But it’s the recent advent of specially created AI coding assistants, including Cursor’s Chat (previously known as Composer) and GitHub Copilot, that really ushered in vibe coding. These assistants can make real-time predictions about what you’re trying to do and offer intuitive suggestions to make it easier than ever to create software, even if you’ve never written code before. “Over the past three or four years, these AI autocomplete tools have become better and better—they started off completing single lines of code and can now rewrite an entire file for you, or create new components,” says Barron Webster, a software designer at the interface company Sandbar. “The remit of what you can take your hands off the wheel and let the machine do is continually growing over time.”  
… and what doesn’t count as vibe coding? But not all AI-assisted coding is vibe coding. To truly vibe-code, you have to be prepared to let the AI fully take control and refrain from checking and directly tweaking the code it generates as you go along—surrendering to the vibes. In Karpathy’s longer post he explained that when he’s vibe coding, he breezily accepts all suggestions that Cursor’s tool gives him and puts his trust in its ability to fix its own mistakes. “When I get error messages I just copy paste them in with no comment, usually that fixes it,” he wrote. “Sometimes the LLMs can’t fix a bug so I just work around it or ask for random changes until it goes away.” Essentially, vibe coding is interacting with a code base through prompts, so that the engineer’s role is simply to converse with the tool and examine its outcome, explains Sergey Tselovalnikov, a software engineer at the design platform Canva who regularly uses AI assistive tools. “Andrej is a bit of an influencer, and he defined that term very intentionally,” he says. “He just posted a joke of sorts, but because he highlighted what was going on in the industry more or less correctly, it just took off.”

Is vibe coding right for my project? The people most likely to benefit from vibe coding fall into two camps, says Tobin South, an AI security researcher at the MIT Media Lab. One is people like Karpathy, who already have a good grasp of coding and can fix any errors if something goes seriously wrong when building anything important; the other is absolute amateurs with little to no coding experience. “I’d define vibe coding as having a vision that you can’t execute, but AI can,” he says. The major appeal of vibe coding lies in how easy and accessible it is. The AI assistive tools make it much quicker to produce code and to whip up small projects like a prototype website, game, or web app than it would be for a human.
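The loop Karpathy describes (generate, run, paste the error back in with no comment) can be sketched in a few lines. The `assistant` below is a deterministic stub standing in for an LLM coding tool, purely for illustration, not a real API:

```python
def assistant(prompt: str) -> str:
    """Stub LLM: returns broken code first, a 'fixed' version once it sees the error."""
    if "NameError" in prompt:
        return "x = 2\nprint(x * 21)"   # second attempt: defines the missing name
    return "print(x * 21)"              # first attempt: x is undefined

def vibe_code(task: str, max_attempts: int = 3) -> str:
    prompt = task
    for _ in range(max_attempts):
        code = assistant(prompt)
        try:
            exec(code, {})              # run whatever came back
            return code                 # it worked; accept the vibes
        except Exception as err:
            # Copy-paste the error message back in, exactly as Karpathy describes.
            prompt = f"{task}\nError: {type(err).__name__}: {err}"
    raise RuntimeError("could not converge on working code")

working = vibe_code("print the answer to everything")
```

The sketch also shows why the approach is risky at scale: the loop only checks that the code runs, not that it is correct or secure.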
But while this hands-off approach may make sense when it comes to creating these kinds of low-stakes, simple digital products, it’s far riskier in bigger, more complex systems where the stakes are much higher. Because AI coding tools are powered by LLMs, the code they generate is just as likely to contain errors as the answers LLM-powered chatbots spit out. That’s a big problem if what you’re trying to code requires access to large databases of information, security measures to protect that data, large numbers of users, or data inputted from users, says Tselovalnikov. “Vibe coding can make a lot of errors and problems, but in the environment of a tiny game or a small app that doesn’t store any data, it’s a lot less relevant,” he says. “I’d personally be a lot more careful with larger projects, because if you don’t know if there are any security vulnerabilities and you didn’t test the code yourself, that’s very dangerous.” This is particularly applicable to non-coders. Leo, a user on X and a champion of vibe coding, found this out the hard way when he posted about having built a SaaS application (software that runs over the internet, instead of being downloaded to a user’s device) solely using Cursor last month. The post immediately caught the attention of mischievous web users, who instantly started poking holes in his service’s security. “Guys, I’m under attack,” he posted two days later. “I’m not technical, so this is taking me longer than usual to figure out. For now, I will stop sharing what I do publicly on X. There are just some weird ppl out there.” Ultimately, while vibe coding can help make a vague idea for a website or a game into a reality, it can’t make it reliable or secure. But there are already plenty of existing tools to do this, helping you with everything from creating databases to adding authentication measures. 
So while you can’t vibe-code real, valuable, secure, robust apps into existence, it can be a useful place to start so long as you’re careful, says South.  He believes that AI-assisted coding assistants are going to keep becoming more capable and that web hosting companies will keep integrating AI into their tools to make them easier to use, meaning the barriers to creating software will keep falling. “It takes the cost of producing software and dramatically reduces it to an exponential degree,” he says. “The world will have to adapt to this new reality. It isn’t going anywhere.”

Read More »

When AI reasoning goes wrong: Microsoft Research shows more tokens can mean more problems

Large language models (LLMs) are increasingly capable of complex reasoning through “inference-time scaling,” a set of techniques that allocate more computational resources during inference to generate answers. However, a new study from Microsoft Research reveals that the effectiveness of these scaling methods isn’t universal. Performance boosts vary significantly across different models, tasks and problem complexities. The core finding is that simply throwing more compute at a problem during inference doesn’t guarantee better or more efficient results. The findings can help enterprises better understand cost volatility and model reliability as they look to integrate advanced AI reasoning into their applications. Putting scaling methods to the test The Microsoft Research team conducted an extensive empirical analysis across nine state-of-the-art foundation models. These included both “conventional” models like GPT-4o, Claude 3.5 Sonnet, Gemini 2.0 Pro and Llama 3.1 405B, as well as models specifically fine-tuned for enhanced reasoning through inference-time scaling: OpenAI’s o1 and o3-mini, Anthropic’s Claude 3.7 Sonnet, Google’s Gemini 2 Flash Thinking, and DeepSeek R1. They evaluated these models using three distinct inference-time scaling approaches: Standard Chain-of-Thought (CoT): The basic method where the model is prompted to answer step-by-step. Parallel Scaling: The model generates multiple independent answers for the same question and uses an aggregator (like majority vote or selecting the best-scoring answer) to arrive at a final result. Sequential Scaling: The model iteratively generates an answer and uses feedback from a critic (potentially from the model itself) to refine the answer in subsequent attempts. 
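The parallel and sequential strategies just described are model-agnostic control flow around any `model(prompt) -> answer` callable. A minimal sketch (the function names and the `critic`-returns-`None`-when-satisfied convention are illustrative assumptions, not from the Microsoft study):

```python
from collections import Counter

def majority_vote(answers):
    """Aggregate independent samples by picking the most common answer."""
    return Counter(answers).most_common(1)[0][0]

def parallel_scaling(model, prompt, n_samples=5):
    """Parallel scaling: sample n independent answers, then aggregate."""
    return majority_vote([model(prompt) for _ in range(n_samples)])

def sequential_scaling(model, critic, prompt, max_rounds=3):
    """Sequential scaling: refine an answer using a critic's feedback."""
    answer = model(prompt)
    for _ in range(max_rounds):
        feedback = critic(prompt, answer)
        if feedback is None:  # critic is satisfied; stop refining
            break
        answer = model(f"{prompt}\nPrevious answer: {answer}\nFeedback: {feedback}")
    return answer
```

Both wrappers spend extra inference-time compute (more samples, or more refinement rounds) on the same underlying model, which is exactly the knob whose payoff the study found to vary across models and tasks.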
These approaches were tested on eight challenging benchmark datasets covering a wide range of tasks that benefit from step-by-step problem-solving: math and STEM reasoning (AIME, Omni-MATH, GPQA), calendar planning (BA-Calendar), NP-hard problems (3SAT, TSP), navigation

Read More »

Sam Altman at TED 2025: Inside the most uncomfortable — and important — AI interview of the year

OpenAI CEO Sam Altman revealed that his company has grown to 800 million weekly active users and is experiencing “unbelievable” growth rates, during a sometimes tense interview at the TED 2025 conference in Vancouver last week. “I have never seen growth in any company, one that I’ve been involved with or not, like this,” Altman told TED head Chris Anderson during their on-stage conversation. “The growth of ChatGPT — it is really fun. I feel deeply honored. But it is crazy to live through, and our teams are exhausted and stressed.” The interview, which closed out the final day of TED 2025: Humanity Reimagined, showcased not just OpenAI’s skyrocketing success but also the increasing scrutiny the company faces as its technology transforms society at a pace that alarms even some of its supporters. ‘Our GPUs are melting’: OpenAI struggles to scale amid unprecedented demand Altman painted a picture of a company struggling to keep up with its own success, noting that OpenAI’s GPUs are “melting” due to the popularity of its new image generation features. “All day long, I call people and beg them to give us their GPUs. We are so incredibly constrained,” he said. This exponential growth comes as OpenAI is reportedly considering launching its own social network to compete with Elon Musk’s X, according to CNBC. Altman neither confirmed nor denied these reports during the TED interview. The company recently closed a $40 billion funding round, valuing it at $300 billion — the largest private tech funding in history — and this influx of capital will likely help address some of these infrastructure challenges. From non-profit to $300 billion giant: Altman responds to ‘Ring of Power’ accusations Throughout

Read More »

A small US city experiments with AI to find out what residents want

Bowling Green, Kentucky, is home to 75,000 residents who recently wrapped up an experiment in using AI for democracy: Can an online polling platform, powered by machine learning, capture what residents want to see happen in their city? When Doug Gorman, elected leader of the county that includes Bowling Green, took office in 2023, it was the fastest-growing city in the state and projected to double in size by 2050, but it lacked a plan for how that growth would unfold. Gorman had a meeting with Sam Ford, a local consultant who had worked with the surveying platform Pol.is, which uses machine learning to gather opinions from large groups of people.  They “needed a vision” for the anticipated growth, Ford says. The two convened a group of volunteers with experience in eight areas: economic development, talent, housing, public health, quality of life, tourism, storytelling, and infrastructure. They built a plan to use Pol.is to help write a 25-year plan for the city. The platform is just one of several new technologies used in Europe and increasingly in the US to help make sure that local governance is informed by public opinion. After a month of advertising, the Pol.is portal launched in February. Residents could go to the website and anonymously submit an idea (in less than 140 characters) for what the 25-year plan should include. They could also vote on whether they agreed or disagreed with other ideas. The tool could be translated into a participant’s preferred language, and human moderators worked to make sure the traffic was coming from the Bowling Green area. 
Over the month that it was live, 7,890 residents participated, and 2,000 people submitted their own ideas. An AI-powered tool from Google Jigsaw then analyzed the data to find what people agreed and disagreed on.  Experts on democracy technologies who were not involved in the project say this level of participation—about 10% of the city’s residents—was impressive.
“That is a lot,” says Archon Fung, director of the Ash Center for Innovation and Democratic Governance at the Harvard Kennedy School. A local election might see a 25% turnout, he says, and that requires nothing more than filling out a ballot.  “Here, it’s a more demanding kind of participation, right? You’re actually voting on or considering some substantive things, and 2,000 people are contributing ideas,” he says. “So I think that’s a lot of people who are engaged.” The plans that received the most attention in the Bowling Green experiment were hyperlocal. The ideas with the broadest support were increasing the number of local health-care specialists so residents wouldn’t have to travel to nearby Nashville for medical care, enticing more restaurants and grocery stores to open on the city’s north side, and preserving historic buildings.  More contentious ideas included approving recreational marijuana, adding sexual orientation and gender identity to the city’s nondiscrimination clause, and providing more options for private education. Out of 3,940 unique ideas, 2,370 received more than 80% agreement, including initiatives like investing in stormwater infrastructure and expanding local opportunities for children and adults with autism.   The volunteers running the experiment were not completely hands-off. Submitted ideas were screened according to a moderation policy, and redundant ideas were not posted. Ford says that 51% of ideas were published, and 31% were deemed redundant. About 6% of ideas were not posted because they were either completely off-topic or contained a personal attack. But some researchers who study the technologies that can make democracy more effective question whether soliciting input in this manner is a reliable way to understand what a community wants. One problem is self-selection—for example, certain kinds of people tend to show up to in-person forums like town halls. 
Research shows that seniors, homeowners, and people with high levels of education are the most likely to attend, Fung says. It’s possible that similar dynamics are at play among the residents of Bowling Green who decided to participate in the project. “Self-selection is not an adequate way to represent the opinions of a public,” says James Fishkin, a political scientist at Stanford who’s known for developing a process he calls deliberative polling, in which a representative sample of a population’s residents are brought together for a weekend, paid about $300 each for their participation, and asked to deliberate in small groups. Other methods, used in some European governments, use jury-style groups of residents to make public policy decisions. 

What’s clear to everyone who studies the effectiveness of these tools is that they promise to move a city in a more democratic direction, but we won’t know if Bowling Green’s experiment worked until residents see what the city does with the ideas that they raised. “You can’t make policy based on a tweet,” says Beth Simone Noveck, who directs a lab that studies democracy and technology at Northeastern University. As she points out, residents were voting on 140-character ideas, and those now need to be formed into real policies.  “What comes next,” she says, “is the conversation between the city and residents to develop a short proposal into something that can actually be implemented.” For residents to trust that their voice actually matters, the city must be clear on why it’s implementing some ideas and not others.  For now, the organizers have made the results public, and they will make recommendations to the Warren County leadership later this year. 

Read More »

Claude just gained superpowers: Anthropic’s AI can now search your entire Google Workspace without you

Anthropic launched major upgrades to its Claude AI assistant today, introducing an autonomous research capability and Google Workspace integration that transform the AI into what the company calls a “true virtual collaborator” for enterprise users. The expansion directly challenges OpenAI and Microsoft in the increasingly competitive market for AI productivity tools. The new Research capability enables Claude to independently conduct multiple searches that build upon each other while determining what to investigate next. Simultaneously, the Google Workspace integration connects Claude to users’ emails, calendars, and documents, eliminating the need for manual uploads. ‘Minutes not hours’: How Claude’s research speed aims to win over busy executives Anthropic positions Claude’s Research functionality as dramatically faster than competing solutions, promising comprehensive answers in minutes rather than the “up to 30 minutes” they claim rival products require. “At Anthropic, we’re laser-focused on enterprise workers and use cases, and our Research tool is reflective of that,” an Anthropic spokesperson told VentureBeat. “Research is a tool to help enterprise workers get well-researched answers to queries in less than a minute. Other solutions on the market take up to 30 minutes to generate responses – that’s not what your average Sales exec or financial services employee needs.” This speed-focused approach represents a calculated bet that enterprise users prioritize quick responses for time-sensitive decisions over more exhaustive but slower research capabilities. Enterprise-grade security promises to keep company data protected while Claude works For technical decision makers considering AI tools, data security remains paramount. 
Anthropic emphasizes its security-first approach, particularly for the Google Drive Catalog feature that uses retrieval augmented generation (RAG) techniques. “Privacy is foundational to our approach. We don’t train our models on user data by

Read More »

Moveworks joins AI agent library craze

AI agent marketplaces have become ubiquitous as enterprises look for ready-made agents they can customize and find agents for most of their use cases. ServiceNow, Google, Writer, Amazon Web Services and Microsoft are just a few of the companies that have launched or recently announced platforms where customers can choose pre-built agents and then deploy them to their organizations. Banking on this popularity, enterprise AI company Moveworks launched an AI Agent Marketplace, where customers can find more than 100 pre-built agents and install them into their systems. Bhavin Shah, founder and CEO of Moveworks, told VentureBeat in an interview that agent marketplaces exist so enterprises can spin up agentic use cases quickly, and can act as idea generators for other use cases. “What we found is that to go from this transformation and business productivity is to identify the kinds of agents that can be used to translate these objectives that people have,” Shah said. “Sometimes you have an idea and ask how do we make it work, so what we’ve done with the AI Agent Marketplace is show real agents that connect to third-party systems.” The AI Agent Marketplace from Moveworks has over 100 agents across HR, sales, finance and IT operations. Some of these agents are for timesheet management, talent recruitment and expense management. Customize agents to fit workflows Shah said enterprises can take one of the agents in the marketplace and configure it to their own needs. The marketplace offers agent templates that connect to third-party platforms and can be integrated into an organization’s tech and data environment. Shah noted that in the previous generation of agents, robotic process automation (RPA), enterprises had to write out workflows

Read More »


Business group plea for Statera Kintore hydrogen plan

Calls to support Statera’s 3GW hydrogen production scheme in the north east of Scotland have been made as councillors prepare to decide its fate. A decision on Kintore Hydrogen is expected to come before an Aberdeenshire Council meeting on Thursday 24 April after the proposal received 83 letters of objection. Council officers have recommended that the project is approved, citing sustainable economic growth and the potential for Aberdeenshire to play a leading role in the development of clean, green energy. The local authority also recently backed a large battery energy storage (BESS) scheme nearby, thought to be led by Chinese firm CR Power. Statera has said the project’s size and scale are on a par with major hydrogen production schemes in Saudi Arabia and the Port of Rotterdam, with associated economic benefits. The private equity-backed company has predicted the project will create over 3,000 jobs in construction and a further 300 or so when operational, generating £400 million in GVA for the region’s economy. Aberdeen Grampian Chamber of Commerce (AGCC) has called on councillors to give the green light to the “major energy transition project in Aberdeenshire”. The proposed scheme would use electricity from offshore wind to produce green hydrogen. AGCC chief executive Russell Borthwick said: “Kintore Hydrogen is vital to north east Scotland’s energy transition ambition – delivering an estimated £1 billion boost to the economy, £400 million of that in our region, supporting almost 3,500 jobs. “The local workforce and supply chain stand to be the biggest winners from this major investment in our region as Aberdeenshire moves towards a clean energy future. “That’s why it’s so important that the whole community – businesses, local residents and government at all levels – unites behind ensuring this vision becomes a reality. “The opportunity for our region to play

Read More »

Lower gas reserves expected this winter as UK’s largest storage facility halts

Centrica has ceased injecting natural gas into the UK’s largest energy storage facility, located in the North Sea, which is likely to mean lower gas reserves this winter. The company is believed to have stopped refilling the Rough gas storage facility off the Yorkshire coast this month; the site accounts for about half of the UK’s gas storage capacity. Centrica warned in December that the Rough facility was making a loss of between £50m and £100m for the Centrica Energy Storage+ business division. The company has indicated that the storage facility, which was reopened in 2022 during the energy crisis to help plug demand, was not financially viable in prevailing market conditions. British Gas owner Centrica has said that it needs a cap-and-floor mechanism to redevelop the facility with £2 billion of its own cash so that it can store hydrogen. The company has opened talks with government over the future operation of the plant and met with Ed Miliband in March to discuss options for keeping the plant open. A spokesperson for the Department of Energy Security and Net Zero (DESNZ) said the government is “open to discussing proposals on gas storage sites, as long as it provides value for money for taxpayers”. Clean power mission boss Chris Stark said at a parliamentary hearing in January that the government was considering a regulatory mechanism to support hydrogen storage from around 2030. Unabated gas is envisaged to comprise up to 5% of the UK’s energy demand by 2030 under a system operator study on the clean power mission. Centrica group chief executive Chris O’Shea said on a webinar with analysts at the release of the company’s annual results in February that it was considering all options for Rough and had not made a decision around its continuation. The impetus

Read More »

Swapping LLMs isn’t plug-and-play: Inside the hidden cost of model migration

Swapping large language models (LLMs) is supposed to be easy, isn’t it? After all, if they all speak “natural language,” switching from GPT-4o to Claude or Gemini should be as simple as changing an API key… right?

In reality, each model interprets and responds to prompts differently, making the transition anything but seamless. Enterprise teams who treat model switching as a “plug-and-play” operation often grapple with unexpected regressions: broken outputs, ballooning token costs or shifts in reasoning quality.

This story explores the hidden complexities of cross-model migration, from tokenizer quirks and formatting preferences to response structures and context window performance. Based on hands-on comparisons and real-world tests, this guide unpacks what happens when you switch from OpenAI to Anthropic or Google’s Gemini and what your team needs to watch for.

Understanding Model Differences

Each AI model family has its own strengths and limitations. Some key aspects to consider include:

Tokenization variations—Different models use different tokenization strategies, which impact the input prompt length and its total associated cost.

Context window differences—Most flagship models allow a context window of 128K tokens; however, Gemini extends this to 1M and 2M tokens.

Instruction following – Reasoning models prefer simpler instructions, while chat-style models require clean and explicit instructions. 

Formatting preferences – Some models prefer markdown while others prefer XML tags for formatting.

Model response structure—Each model has its own style of generating responses, which affects verbosity and factual accuracy. Some models perform better when allowed to “speak freely,” i.e., without adhering to an output structure, while others prefer JSON-like output structures. Interesting research shows the interplay between structured response generation and overall model performance.

Migrating from OpenAI to Anthropic

Imagine a real-world scenario where you’ve just benchmarked GPT-4o, and now your CTO wants to try Claude 3.5. Make sure to refer to the pointers below before making any decision:

Tokenization variations

All model providers pitch extremely competitive per-token costs. For example, this post shows how the tokenization costs for GPT-4 plummeted in just one year between 2023 and 2024. However, from a machine learning (ML) practitioner’s viewpoint, making model choices based on purported per-token costs can often be misleading. 

A practical case study comparing GPT-4o and Sonnet 3.5 exposes the verbosity of Anthropic models’ tokenizers. In other words, the Anthropic tokenizer tends to break down the same text input into more tokens than OpenAI’s tokenizer. 
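A hedged sketch of how such a comparison can be wired up: the helper below accepts any mapping of label to tokenizer callable (for real measurements you would pass each provider's actual tokenizer, e.g. tiktoken's `encoding_for_model("gpt-4o").encode` on the OpenAI side; the helper names themselves are made up for this sketch), because what drives cost is the measured token count, not the advertised per-token rate.

```python
def compare_token_counts(text, tokenizers):
    """Count tokens for the same text under several tokenizers.

    `tokenizers` maps a label to a callable that returns a sequence of
    tokens; plug in each provider's real tokenizer to compare verbosity.
    """
    return {name: len(encode(text)) for name, encode in tokenizers.items()}

def estimated_cost(n_tokens, usd_per_million_tokens):
    """Translate a measured token count into dollars at a quoted rate."""
    return n_tokens / 1_000_000 * usd_per_million_tokens
```

With a naive whitespace split as a placeholder tokenizer, `compare_token_counts("the same input text", {"ws": str.split})` returns `{"ws": 4}`; swapping in two real tokenizers makes the verbosity gap between providers, and therefore the true cost gap, directly measurable on your own prompts.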

Context window differences

Each model provider is pushing the boundaries to allow longer and longer input text prompts. However, different models may handle different prompt lengths differently. For example, Sonnet 3.5 offers a larger context window (up to 200K tokens) than GPT-4’s 128K. Despite this, GPT-4 has been observed to be more performant in handling contexts up to 32K, whereas Sonnet 3.5’s performance declines as prompts grow beyond 8K-16K tokens.

Moreover, there is evidence that even models within the same family handle different context lengths differently, i.e., performing better at short contexts and worse at longer contexts on the same given task. This means that replacing one model with another (either from the same or a different family) might result in unexpected performance deviations.

Formatting preferences

Unfortunately, even current state-of-the-art LLMs are highly sensitive to minor prompt formatting. The presence or absence of formatting in the form of markdown and XML tags can significantly change model performance on a given task.

Empirical results across multiple studies suggest that OpenAI models prefer markdownified prompts, including sectional delimiters, emphasis, lists, etc. In contrast, Anthropic models prefer XML tags for delineating different parts of the input prompt. This nuance is commonly known to data scientists, and there is ample discussion of it in public forums (Has anyone found that using markdown in the prompt makes a difference?, Formatting plain text to markdown, Use XML tags to structure your prompts).

For more insights, check out the official best prompt engineering practices released by OpenAI and Anthropic, respectively.  
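As a concrete illustration (the helper names and section labels are invented for this sketch), keeping prompt assembly behind a function makes the markdown-versus-XML swap a single code change during a migration, rather than a hunt through scattered string literals:

```python
def markdown_prompt(instructions, context, question):
    """Markdown-style sectioning, the convention OpenAI models tend to favor."""
    return (
        f"## Instructions\n{instructions}\n\n"
        f"## Context\n{context}\n\n"
        f"## Question\n{question}"
    )

def xml_prompt(instructions, context, question):
    """XML-tag sectioning, the convention Anthropic recommends for Claude."""
    return (
        f"<instructions>{instructions}</instructions>\n"
        f"<context>{context}</context>\n"
        f"<question>{question}</question>"
    )
```

Both functions carry identical content; only the delimiting convention changes, which is exactly the variable the formatting-sensitivity studies above isolate.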

Model response structure

OpenAI GPT-4o models are generally biased toward generating JSON-structured outputs, whereas Anthropic models tend to adhere equally well to the requested JSON or XML schema specified in the user prompt. That said, imposing or relaxing structure on a model’s outputs is a model-dependent, empirically driven decision based on the underlying task. During a model migration, modifying the expected output structure also entails slight adjustments in the post-processing of the generated responses.
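Because models differ in how strictly they honor a "respond in JSON" instruction (some emit bare JSON, others wrap it in markdown fences or prose), the post-processing layer is often where migration work actually lands. A tolerant extraction sketch, assuming nothing about any particular provider's habits:

```python
import json
import re

def extract_json(response_text):
    """Tolerantly pull a JSON object out of a model response.

    Handles three common shapes: bare JSON, JSON inside a ```json fence,
    and JSON embedded in surrounding prose. Migrating between model
    families usually means revisiting exactly this kind of step.
    """
    # Prefer the contents of a ```json ... ``` fence if one is present.
    fenced = re.search(r"```(?:json)?\s*(.*?)```", response_text, re.DOTALL)
    candidate = fenced.group(1) if fenced else response_text
    # Fall back to the outermost {...} span within the candidate text.
    start, end = candidate.find("{"), candidate.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in response")
    return json.loads(candidate[start : end + 1])
```

Centralizing this logic means a model swap that changes output habits breaks one function, not every call site.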

Cross-model platforms and ecosystems

LLM switching is more complicated than it looks. Recognizing the challenge, major enterprises are increasingly focusing on providing solutions to tackle it. Companies like Google (Vertex AI), Microsoft (Azure AI Studio) and AWS (Bedrock) are actively investing in tools to support flexible model orchestration and robust prompt management.

For example, at Google Cloud Next 2025, Google announced that Vertex AI lets users work with more than 130 models through an expanded Model Garden, unified API access, and the new AutoSxS feature, which enables head-to-head comparisons of different model outputs by providing detailed insights into why one model’s output is better than the other.

Standardizing model and prompt methodologies

Migrating prompts across AI model families requires careful planning, testing and iteration. By understanding the nuances of each model and refining prompts accordingly, developers can ensure a smooth transition while maintaining output quality and efficiency.

ML practitioners must invest in robust evaluation frameworks, maintain documentation of model behaviors and collaborate closely with product teams to ensure the model outputs align with end-user expectations. Ultimately, standardizing and formalizing the model and prompt migration methodologies will equip teams to future-proof their applications, leverage best-in-class models as they emerge, and deliver users more reliable, context-aware, and cost-efficient AI experiences.
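One way to make the "robust evaluation frameworks" above concrete is a small regression harness run before any switch, comparing the candidate model against the incumbent on a fixed prompt suite (the function and parameter names here are illustrative, not from any particular library):

```python
def run_migration_checks(old_model, new_model, test_prompts, check):
    """Compare a candidate model against the incumbent on fixed prompts.

    `check(prompt, old_out, new_out)` returns True when the new output is
    acceptable; what it validates (schema conformance, keyword presence,
    an LLM judge) is a project-specific choice.
    """
    failures = []
    for prompt in test_prompts:
        old_out, new_out = old_model(prompt), new_model(prompt)
        if not check(prompt, old_out, new_out):
            failures.append((prompt, old_out, new_out))
    return failures
```

An empty failure list is the go/no-go signal for the switch; a non-empty one points at exactly which prompts need reworking for the new model family.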

Read More »

Keystone Restarts Oil Pipeline After Leak Prompted Shutdown

The operator of the Keystone oil pipeline brought the conduit back into service, putting an end to a week-long outage caused by an estimated 3,500-barrel spill in rural North Dakota. Most of the oil released has been recovered, and remediation efforts have started, South Bow Corp. said in a statement Wednesday. The line will be able to operate at no more than 80% of the pressure levels in effect at the time of the April 8 spill. At the time of failure, the line was transporting 17,844 barrels per hour, or the equivalent of 428,000 barrels a day. The restart, delayed by inclement weather, comes roughly two days after South Bow met all conditions imposed by the Pipeline and Hazardous Materials Safety Administration. The company will continue to monitor the system as an investigation into the causes of the spill continues. Keystone can transport as much as 620,000 barrels of Canadian crude daily to US Midwest and Gulf Coast markets. 

Read More »

Stay Ahead with the Paperboy Newsletter

Your weekly dose of insights into AI, Bitcoin mining, Datacenters and Energy industry news. Spend 3-5 minutes and catch up on a week of news.

Smarter with ONMINE

Streamline Your Growth with ONMINE