
AlphaGenome: AI for better understanding the genome


Science

Published: 25 June 2025

Authors: Ziga Avsec and Natasha Latysheva


Introducing a new, unifying DNA sequence model that advances regulatory variant-effect prediction and promises to shed new light on genome function — now available via API.

The genome is our cellular instruction manual. It’s the complete set of DNA which guides nearly every part of a living organism, from appearance and function to growth and reproduction. Small variations in a genome’s DNA sequence can alter an organism’s response to its environment or its susceptibility to disease. But deciphering how the genome’s instructions are read at the molecular level — and what happens when a small DNA variation occurs — is still one of biology’s greatest mysteries.

Today, we introduce AlphaGenome, a new artificial intelligence (AI) tool that more comprehensively and accurately predicts how single variants or mutations in human DNA sequences impact a wide range of biological processes regulating genes. This was enabled, among other factors, by technical advances allowing the model to process long DNA sequences and output high-resolution predictions.

To advance scientific research, we’re making AlphaGenome available in preview via our AlphaGenome API for non-commercial research, and planning to release the model in the future.

We believe AlphaGenome can be a valuable resource for the scientific community, helping scientists better understand genome function and disease biology, and ultimately drive new biological discoveries and the development of new treatments.

How AlphaGenome works

Our AlphaGenome model takes a long DNA sequence as input — up to 1 million letters, also known as base-pairs — and predicts thousands of molecular properties characterising its regulatory activity. It can also score the effects of genetic variants or mutations by comparing predictions of mutated sequences with unmutated ones.
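
To make this compare-the-sequences idea concrete, here is a minimal Python sketch. The `predict_fn`, `apply_variant` and `score_variant` names are illustrative placeholders rather than the AlphaGenome API, and the dummy model simply counts G/C content so the example runs end to end.

```python
# Conceptual sketch of variant-effect scoring: predict molecular property tracks
# for the unmutated and mutated sequence, then take their difference.
# `predict_fn` is a hypothetical stand-in for a model that maps a DNA string
# (up to ~1 million letters) to per-base tracks of molecular properties.
import numpy as np

def apply_variant(sequence: str, position: int, alt_base: str) -> str:
    """Return a copy of `sequence` with the base at `position` replaced by `alt_base`."""
    return sequence[:position] + alt_base + sequence[position + 1:]

def score_variant(predict_fn, sequence: str, position: int, alt_base: str) -> np.ndarray:
    """Per-base, per-track effect of a variant: mutated minus unmutated predictions."""
    ref_tracks = predict_fn(sequence)
    alt_tracks = predict_fn(apply_variant(sequence, position, alt_base))
    return alt_tracks - ref_tracks

# Toy usage with a deterministic dummy model (cumulative G/C count, one track).
dummy_model = lambda seq: np.cumsum([base in "GC" for base in seq], dtype=float)[:, None]
effect = score_variant(dummy_model, "ACGT" * 250, position=501, alt_base="A")
print(effect.shape)      # (1000, 1): one effect value per base, per track
print(effect[-1, 0])     # -1.0: one fewer G/C downstream of the edited position
```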

Predicted properties include where genes start and where they end in different cell types and tissues, where they get spliced, the amount of RNA being produced, and also which DNA bases are accessible, close to one another, or bound by certain proteins. Training data was sourced from large public consortia including ENCODE, GTEx, 4D Nucleome and FANTOM5, which have experimentally measured these properties, covering important modalities of gene regulation across hundreds of human and mouse cell types and tissues.

Animation showing AlphaGenome taking one million DNA letters as input and predicting diverse molecular properties across different tissues and cell types.

The AlphaGenome architecture uses convolutional layers to initially detect short patterns in the genome sequence, transformers to communicate information across all positions in the sequence, and a final series of layers to turn the detected patterns into predictions for different modalities. During training, the computation for a single sequence is distributed across multiple interconnected Tensor Processing Units (TPUs).
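
As a rough illustration of that convolution, transformer and output-head layout, here is a schematic PyTorch sketch. The layer counts, channel sizes and modality names are invented for illustration and are not the published AlphaGenome configuration, which the preprint describes in full.

```python
import torch
import torch.nn as nn

class SketchGenomeModel(nn.Module):
    """Toy conv -> transformer -> per-modality-head model over one-hot DNA."""

    def __init__(self, channels: int = 256, n_heads: int = 8, n_layers: int = 4,
                 modalities: dict | None = None):
        super().__init__()
        # Hypothetical output heads: modality name -> number of predicted tracks per base.
        modalities = modalities or {"rna_expression": 1, "accessibility": 1, "splicing": 3}
        # Convolutional layers detect short local patterns in the sequence.
        self.conv = nn.Sequential(
            nn.Conv1d(4, channels, kernel_size=15, padding=7), nn.GELU(),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2), nn.GELU(),
        )
        # Transformer layers communicate information across all positions.
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        # A final set of layers turns the shared representation into per-modality predictions.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(channels, n_tracks) for name, n_tracks in modalities.items()})

    def forward(self, one_hot_dna: torch.Tensor) -> dict:
        # one_hot_dna: (batch, sequence_length, 4) for the four DNA letters A, C, G, T.
        x = self.conv(one_hot_dna.transpose(1, 2)).transpose(1, 2)  # (batch, length, channels)
        x = self.transformer(x)                                     # mix across positions
        return {name: head(x) for name, head in self.heads.items()}

# Toy forward pass on a short random sequence (real inputs are far longer).
model = SketchGenomeModel()
dna = torch.nn.functional.one_hot(torch.randint(0, 4, (1, 2048)), num_classes=4).float()
outputs = model(dna)
print({name: tuple(t.shape) for name, t in outputs.items()})
```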

This model builds on our previous genomics model, Enformer, and is complementary to AlphaMissense, which specializes in categorizing the effects of variants within protein-coding regions. These regions cover 2% of the genome. The remaining 98%, called non-coding regions, are crucial for orchestrating gene activity and contain many variants linked to diseases. AlphaGenome offers a new perspective for interpreting these expansive sequences and the variants within them.

AlphaGenome’s distinctive features

AlphaGenome offers several distinctive features compared to existing DNA sequence models:

Long sequence context at high resolution

Our model analyzes up to 1 million DNA letters and makes predictions at the resolution of individual letters. Long sequence context is important for capturing regulatory regions that influence genes from far away, and base resolution is important for capturing fine-grained biological details.

Previous models had to trade off sequence length and resolution, which limited the range of modalities they could jointly model and accurately predict. Our technical advances address this limitation without significantly increasing the training resources — training a single AlphaGenome model (without distillation) took four hours and required half of the compute budget used to train our original Enformer model.

Comprehensive multimodal prediction

By unlocking high-resolution prediction for long input sequences, AlphaGenome can jointly predict a more diverse range of modalities than previous models. In doing so, AlphaGenome provides scientists with more comprehensive information about the complex steps of gene regulation.

Efficient variant scoring

In addition to predicting a diverse range of molecular properties, AlphaGenome can efficiently score the impact of a genetic variant on all of these properties in a second. It does this by contrasting predictions of mutated sequences with unmutated ones, and summarising that contrast using different approaches tailored to each modality.
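
As a sketch of what modality-tailored summarisation can look like, the example below collapses a per-base effect track into a single score in two illustrative ways: a log fold-change over a gene window for expression-like tracks, and a maximum absolute per-base change for splicing-like tracks. These rules are simplified stand-ins, not the scoring recipes used by AlphaGenome, which are detailed in the preprint.

```python
import numpy as np

def expression_score(ref: np.ndarray, alt: np.ndarray, gene_window: slice) -> float:
    """Log2 fold-change of total predicted signal over a gene window (illustrative)."""
    eps = 1e-6
    return float(np.log2((alt[gene_window].sum() + eps) / (ref[gene_window].sum() + eps)))

def splicing_score(ref: np.ndarray, alt: np.ndarray) -> float:
    """Largest absolute per-base change, highlighting local disruption (illustrative)."""
    return float(np.abs(alt - ref).max())

# Toy example with synthetic reference/alternate tracks of length 1,000.
ref_track = np.ones(1000)
alt_track = ref_track.copy()
alt_track[480:520] *= 3.0  # pretend the variant boosts local signal around position 500

print(expression_score(ref_track, alt_track, gene_window=slice(400, 600)))  # ~0.49
print(splicing_score(ref_track, alt_track))                                 # 2.0
```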

Novel splice-junction modeling

Many rare genetic diseases, such as spinal muscular atrophy and some forms of cystic fibrosis, can be caused by errors in RNA splicing — a process where parts of the RNA molecule are removed, or “spliced out”, and the remaining ends rejoined. For the first time, AlphaGenome can explicitly model the location and expression level of these junctions directly from sequence, offering deeper insights into the consequences of genetic variants on RNA splicing.
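
One simple way to picture this junction output is as a set of records, each pairing a donor site and an acceptor site with a predicted usage level, which can then be compared between the unmutated and mutated sequence. The field names below are hypothetical and serve only to illustrate the idea.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpliceJunction:
    donor: int      # position where the spliced-out segment begins
    acceptor: int   # position where the spliced-out segment ends
    usage: float    # predicted expression level / usage of this junction
    tissue: str     # cell type or tissue the prediction refers to

def junction_effect(ref: SpliceJunction, alt: SpliceJunction) -> float:
    """Change in predicted junction usage caused by a variant (mutated minus unmutated)."""
    assert (ref.donor, ref.acceptor, ref.tissue) == (alt.donor, alt.acceptor, alt.tissue)
    return alt.usage - ref.usage

ref_junction = SpliceJunction(donor=1_000, acceptor=2_200, usage=0.80, tissue="muscle")
alt_junction = SpliceJunction(donor=1_000, acceptor=2_200, usage=0.15, tissue="muscle")
print(junction_effect(ref_junction, alt_junction))  # -0.65: the variant weakens this junction
```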

State-of-the-art performance across benchmarks

AlphaGenome achieves state-of-the-art performance across a wide range of genomic prediction benchmarks, such as predicting which parts of the DNA molecule will be in close proximity, whether a genetic variant will increase or decrease expression of a gene, or whether it will change the gene’s splicing pattern.

Bar graph showing AlphaGenome’s relative improvements on selected DNA sequence and variant effect tasks, compared against results for the current best methods in each category.

When producing predictions for single DNA sequences, AlphaGenome outperformed the best external models on 22 out of 24 evaluations. And when predicting the regulatory effect of a variant, it matched or exceeded the top-performing external models on 24 out of 26 evaluations.

This comparison included models specialized for individual tasks. AlphaGenome was the only model that could jointly predict all of the assessed modalities, highlighting its generality. Read more in our preprint.

The benefits of a unifying model

AlphaGenome’s generality allows scientists to simultaneously explore a variant’s impact on a number of modalities with a single API call. This means that scientists can generate and test hypotheses more rapidly, without having to use multiple models to investigate different modalities.

Moreover, AlphaGenome’s strong performance indicates it has learned a relatively general representation of DNA sequence in the context of gene regulation. This makes it a strong foundation for the wider community to build upon. Once the model is fully released, scientists will be able to adapt and fine-tune it on their own datasets to better tackle their unique research questions.

Finally, this approach provides a flexible and scalable architecture for the future. By expanding the training data, AlphaGenome could be extended to yield better performance, cover more species, or include additional modalities, making the model even more comprehensive.

It’s a milestone for the field. For the first time, we have a single model that unifies long-range context, base-level precision and state-of-the-art performance across a whole spectrum of genomic tasks.

Dr. Caleb Lareau, Memorial Sloan Kettering Cancer Center

A powerful research tool

AlphaGenome’s predictive capabilities could advance several research avenues:

  1. Disease understanding: By more accurately predicting genetic disruptions, AlphaGenome could help researchers pinpoint the potential causes of disease more precisely, and better interpret the functional impact of variants linked to certain traits, potentially uncovering new therapeutic targets. We think the model is especially suitable for studying rare variants with potentially large effects, such as those causing rare Mendelian disorders.
  2. Synthetic biology: Its predictions could be used to guide the design of synthetic DNA with specific regulatory function — for example, only activating a gene in nerve cells but not muscle cells.
  3. Fundamental research: It could accelerate our understanding of the genome by assisting in mapping its crucial functional elements and defining their roles, identifying the most essential DNA instructions for regulating a specific cell type’s function.

For example, we used AlphaGenome to investigate the potential mechanism of a cancer-associated mutation. In an existing study of patients with T-cell acute lymphoblastic leukemia (T-ALL), researchers observed mutations at particular locations in the genome. Using AlphaGenome, we predicted that the mutations would activate a nearby gene called TAL1 by introducing a MYB DNA binding motif, which replicated the known disease mechanism and highlighted AlphaGenome’s ability to link specific non-coding variants to disease genes.

AlphaGenome will be a powerful tool for the field. Determining the relevance of different non-coding variants can be extremely challenging, particularly to do at scale. This tool will provide a crucial piece of the puzzle, allowing us to make better connections to understand diseases like cancer.

Professor Marc Mansour, University College London

Current limitations

AlphaGenome marks a significant step forward, but it’s important to acknowledge its current limitations.

Like other sequence-based models, AlphaGenome still finds it challenging to accurately capture the influence of very distant regulatory elements, such as those more than 100,000 DNA letters away. Another priority for future work is further increasing the model’s ability to capture cell- and tissue-specific patterns.

We haven’t designed or validated AlphaGenome for personal genome prediction, a known challenge for AI models. Instead, we focused on characterising its performance on individual genetic variants. And while AlphaGenome can predict molecular outcomes, it doesn’t give the full picture of how genetic variations lead to complex traits or diseases. These often involve broader biological processes, like developmental and environmental factors, that are beyond the direct scope of our model.

We’re continuing to improve our models and gathering feedback to help us address these gaps.

Enabling the community to unlock AlphaGenome’s potential

AlphaGenome is now available for non-commercial use via our AlphaGenome API. Please note that our model’s predictions are intended only for research use and haven’t been designed or validated for direct clinical purposes.

Researchers worldwide are invited to get in touch with potential use cases for AlphaGenome and to ask questions or share feedback through the community forum.

We hope AlphaGenome will be an important tool for better understanding the genome, and we’re committed to working alongside external experts across academia, industry, and government organizations to ensure AlphaGenome benefits as many people as possible.

Together with the collective efforts of the wider scientific community, we hope it will deepen our understanding of the complex cellular processes encoded in the DNA sequence and the effects of variants, and drive exciting new discoveries in genomics and healthcare.

Learn more about AlphaGenome

Acknowledgements

We would like to thank Juanita Bawagan, Arielle Bier, Stephanie Booth, Irina Andronic, Armin Senoner, Dhavanthi Hariharan, Rob Ashley, Agata Laydon and Kathryn Tunyasuvunakool for their help with the text and figures.

This work was done thanks to the contributions of the AlphaGenome co-authors: Žiga Avsec, Natasha Latysheva, Jun Cheng, Guido Novati, Kyle R. Taylor, Tom Ward, Clare Bycroft, Lauren Nicolaisen, Eirini Arvaniti, Joshua Pan, Raina Thomas, Vincent Dutordoir, Matteo Perino, Soham De, Alexander Karollus, Adam Gayoso, Toby Sargeant, Anne Mottram, Lai Hong Wong, Pavol Drotár, Adam Kosiorek, Andrew Senior, Richard Tanburn, Taylor Applebaum, Souradeep Basu, Demis Hassabis and Pushmeet Kohli.

We would also like to thank Dhavanthi Hariharan, Charlie Taylor, Ottavia Bertolli, Yannis Assael, Alex Botev, Anna Trostanetski, Lucas Tenório, Victoria Johnston, Richard Green, Kathryn Tunyasuvunakool, Molly Beck, Uchechi Okereke, Rachael Tremlett, Sarah Chakera, Ibrahim I. Taskiran, Andreea-Alexandra Muşat, Raiyan Khan, Ren Yi and the greater Google DeepMind team for their support, help and feedback.
