
Why it’s so hard to use AI to diagnose cancer


This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Peering into the body to find and diagnose cancer is all about spotting patterns. Radiologists use x-rays and magnetic resonance imaging to illuminate tumors, and pathologists examine tissue from kidneys, livers, and other areas under microscopes and look for patterns that show how severe a cancer is, whether particular treatments could work, and where the malignancy may spread.

In theory, artificial intelligence should be great at helping out. “Our job is pattern recognition,” says Andrew Norgan, a pathologist and medical director of the Mayo Clinic’s digital pathology platform. “We look at the slide and we gather pieces of information that have been proven to be important.” 

Visual analysis is something that AI has gotten quite good at since the first image recognition models began taking off nearly 15 years ago. Even though no model will be perfect, you can imagine a powerful algorithm someday catching something that a human pathologist missed, or at least speeding up the process of getting a diagnosis. We’re starting to see lots of new efforts to build such a model—at least seven attempts in the last year alone—but they all remain experimental. What will it take to make them good enough to be used in the real world?

Details about the latest effort to build such a model, led by the AI health company Aignostics with the Mayo Clinic, were published on arXiv earlier this month. The paper has not been peer-reviewed, but it reveals much about the challenges of bringing such a tool to real clinical settings. 

The model, called Atlas, was trained on 1.2 million tissue samples from 490,000 cases. Its accuracy was tested against six other leading AI pathology models. These models compete on shared tests like classifying breast cancer images or grading tumors, where the model’s predictions are compared with the correct answers given by human pathologists. Atlas beat rival models on six out of nine tests. It earned its highest score for categorizing cancerous colorectal tissue, reaching the same conclusion as human pathologists 97.1% of the time. On another task, though—classifying tumors from prostate cancer biopsies—Atlas still topped the other models, but its winning score was just 70.5%. Averaged across the nine benchmarks, it matched the answers of human experts 84.6% of the time. 

Let’s think about what this means. The best way to know what’s happening to cancerous cells in tissues is to have a sample examined by a pathologist, so that’s the performance that AI models are measured against. The best models are approaching humans in particular detection tasks but lagging behind in many others. So how good does a model have to be to be clinically useful?

“Ninety percent is probably not good enough. You need to be even better,” says Carlo Bifulco, chief medical officer at Providence Genomics and co-creator of GigaPath, one of the other AI pathology models examined in the Mayo Clinic study. But, Bifulco says, AI models that don’t score perfectly can still be useful in the short term, and could potentially help pathologists speed up their work and make diagnoses more quickly.    

What obstacles are getting in the way of better performance? Problem number one is training data.

“Fewer than 10% of pathology practices in the US are digitized,” Norgan says. That means tissue samples are placed on slides and analyzed under microscopes, and then stored in massive registries without ever being documented digitally. Though European practices tend to be more digitized, and there are efforts underway to create shared data sets of tissue samples for AI models to train on, there’s still not a ton to work with. 

Without diverse data sets, AI models struggle to identify the wide range of abnormalities that human pathologists have learned to interpret. That’s especially true for rare diseases, says Maximilian Alber, cofounder and CTO of Aignostics. If you scour the publicly available databases for tissue samples of a particularly rare disease, “you’ll find 20 samples over 10 years,” he says. 

Around 2022, the Mayo Clinic foresaw that this lack of training data would be a problem. It decided to digitize all of its own pathology practices moving forward, along with 12 million slides from its archives dating back decades (patients had consented to their being used for research). It hired a company to build a robot that began taking high-resolution photos of the tissues, working through up to a million samples per month. From these efforts, the team was able to collect the 1.2 million high-quality samples used to train the Mayo model. 

This brings us to problem number two for using AI to spot cancer. Tissue samples from biopsies are tiny—often just a couple of millimeters in diameter—but are magnified to such a degree that digital images of them contain more than 14 billion pixels. That makes them about 287,000 times larger than images used to train the best AI image recognition models to date. 
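The size gap described above can be checked with quick arithmetic. A minimal sketch, assuming a standard 224×224-pixel input resolution for conventional image recognition models (an assumption for illustration, not a figure from the paper):

```python
# Approximate pixel counts: a gigapixel pathology slide versus a
# standard-resolution image used by many recognition models.
slide_pixels = 14.4e9      # "more than 14 billion pixels"
train_pixels = 224 * 224   # assumed conventional input size (50,176 pixels)

ratio = slide_pixels / train_pixels
print(round(ratio))  # 286990, i.e. roughly 287,000x larger
```

At that scale, even storing one slide as raw pixel data runs to tens of gigabytes, which is why the storage costs Poon mentions become a real constraint.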

“That obviously means lots of storage costs and so forth,” says Hoifung Poon, an AI researcher at Microsoft who worked with Bifulco to create GigaPath, which was featured in Nature last year. But it also forces important decisions about which bits of the image you use to train the AI model, and which cells you might miss in the process. To make Atlas, the Mayo Clinic used what’s referred to as a tile method, essentially creating lots of snapshots from the same sample to feed into the AI model. Figuring out how to select these tiles is both art and science, and it’s not yet clear which ways of doing it lead to the best results.
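A tiling approach like the one described above can be sketched in a few lines. This is a generic illustration, not Atlas’s actual pipeline; the tile size and stride are arbitrary assumptions, and the edge-dropping behavior shows one way cells can be missed in the process:

```python
import numpy as np

def extract_tiles(slide, tile_size=256, stride=256):
    """Split a whole-slide image array into fixed-size square tiles.

    slide: a 2-D (H, W) or 3-D (H, W, C) array of pixel data.
    Tiles that would run past the edge are simply dropped, so some
    border pixels never reach the model -- one of the trade-offs
    mentioned in the text.
    """
    h, w = slide.shape[:2]
    tiles = []
    for y in range(0, h - tile_size + 1, stride):
        for x in range(0, w - tile_size + 1, stride):
            tiles.append(slide[y:y + tile_size, x:x + tile_size])
    return tiles

# A toy 1024x1024 "slide" yields a 4x4 grid of 256-pixel tiles.
toy_slide = np.zeros((1024, 1024, 3), dtype=np.uint8)
tiles = extract_tiles(toy_slide)
print(len(tiles))  # 16
```

In practice, teams must also decide which tiles are informative enough to keep (many are blank background), which is part of the art-and-science selection problem the article describes.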

Thirdly, there’s the question of which benchmarks are most important for a cancer-spotting AI model to perform well on. The Atlas researchers tested their model in the challenging domain of molecular-related benchmarks, which involves trying to find clues from sample tissue images to guess what’s happening on a molecular level. Here’s an example: Your body’s mismatch repair genes are of particular concern for cancer, because they catch errors made when your DNA gets replicated. If these errors aren’t caught, they can drive the development and progression of cancer. 

“Some pathologists might tell you they kind of get a feeling when they think something’s mismatch-repair deficient based on how it looks,” Norgan says. But pathologists don’t act on that gut feeling alone. They can do molecular testing for a more definitive answer. What if instead, Norgan says, we can use AI to predict what’s happening on the molecular level? It’s an experiment: Could the AI model spot underlying molecular changes that humans can’t see?

Generally no, it turns out. Or at least not yet. Atlas’s average for the molecular testing was 44.9%. That’s the best performance for AI so far, but it shows this type of testing has a long way to go. 

Bifulco says Atlas represents incremental but real progress. “My feeling, unfortunately, is that everybody’s stuck at a similar level,” he says. “We need something different in terms of models to really make dramatic progress, and we need larger data sets.”


Deeper Learning

OpenAI has created an AI model for longevity science

AI has long had its fingerprints on the science of protein folding. But OpenAI now says it’s created a model that can engineer proteins, turning regular cells into stem cells. That goal has been pursued by companies in longevity science, because stem cells can produce any other tissue in the body and, in theory, could be a starting point for rejuvenating animals, building human organs, or providing supplies of replacement cells. 

Why it matters: The work was a product of OpenAI’s collaboration with the longevity company Retro Labs, in which Sam Altman invested $180 million. It represents OpenAI’s first model focused on biological data and its first public claim that its models can deliver scientific results. The AI model reportedly engineered proteins that were more effective, and did so more quickly, than the company’s scientists could. But outside scientists can’t evaluate the claims until the studies have been published. Read more from Antonio Regalado.

Bits and Bytes

What we know about the TikTok ban

The popular video app went dark in the United States late Saturday and then came back around noon on Sunday, even as a law banning it took effect. (The New York Times)

Why Meta might not end up like X 

X lost lots of advertising dollars as Elon Musk changed the platform’s policies. But Facebook and Instagram’s massive scale make them hard platforms for advertisers to avoid. (Wall Street Journal)

What to expect from Neuralink in 2025

More volunteers will get Elon Musk’s brain implant, but don’t expect a product soon. (MIT Technology Review)

A former fact-checking outlet for Meta signed a new deal to help train AI models

Meta paid media outlets like Agence France-Presse for years to do fact checking on its platforms. Since Meta announced it would shutter those programs, Europe’s leading AI company, Mistral, has signed a deal with AFP to use some of its content in its AI models. (Financial Times)

OpenAI’s AI reasoning model “thinks” in Chinese sometimes, and no one really knows why

While working its way to a response, the model often switches to Chinese, perhaps a reflection of the fact that many data labelers are based in China. (Tech Crunch)

SoftBank appears to be maximizing its Arm investments while preparing for this coming paradigm shift in processor architecture.”

Read More »

Nvidia, xAI and two energy giants join genAI infrastructure initiative

The new AIP members will “further strengthen the partnership’s technology leadership as the platform seeks to invest in new and expanded AI infrastructure. Nvidia will also continue in its role as a technical advisor to AIP, leveraging its expertise in accelerated computing and AI factories to inform the deployment of next-generation AI data center infrastructure,” the group’s statement said. “Additionally, GE Vernova and NextEra Energy have agreed to collaborate with AIP to accelerate the scaling of critical and diverse energy solutions for AI data centers. GE Vernova will also work with AIP and its partners on supply chain planning and in delivering innovative and high efficiency energy solutions.” The group claimed, without offering any specifics, that it “has attracted significant capital and partner interest since its inception in September 2024, highlighting the growing demand for AI-ready data centers and power solutions.” The statement said the group will try to raise “$30 billion in capital from investors, asset owners, and corporations, which in turn will mobilize up to $100 billion in total investment potential when including debt financing.” Forrester’s Nguyen also noted that the influence of two of the new members — xAI, owned by Elon Musk, along with Nvidia — could easily help with fundraising. Of Musk, Nguyen said, “With his connections, he does not make small quiet moves. As for Nvidia, they are the face of AI. Everything they do attracts attention.” Info-Tech’s Bickley said that the astronomical sums involved in genAI investment are mind-boggling. And yet even more investment is needed — a lot more.
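The fundraising arithmetic in the statement can be checked in a couple of lines; the figures below are the ones the group cites:

```python
# AIP's stated target: $30B raised, mobilizing up to $100B once debt is included.
capital_raised_bn = 30
total_investment_bn = 100

implied_debt_bn = total_investment_bn - capital_raised_bn
leverage_multiple = total_investment_bn / capital_raised_bn
print(f"Implied debt financing: ${implied_debt_bn}B")
print(f"Total investment is ~{leverage_multiple:.1f}x the capital raised")
```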

Read More »

IBM broadens access to Nvidia technology for enterprise AI

The IBM Storage Scale platform will support CAS and now will respond to queries using the extracted and augmented data, speeding up the communications between GPUs and storage using Nvidia BlueField-3 DPUs and Spectrum-X networking, IBM stated. The multimodal document data extraction workflow will also support Nvidia NeMo Retriever microservices. CAS will be embedded in the next update of IBM Fusion, which is planned for the second quarter of this year. Fusion simplifies the deployment and management of AI applications and works with Storage Scale, which will handle high-performance storage support for AI workloads, according to IBM. IBM Cloud instances with Nvidia GPUs In addition to the software news, IBM said its cloud customers can now use Nvidia H200 instances in the IBM Cloud environment. With increased memory bandwidth (1.4x higher than its predecessor) and capacity, the H200 Tensor Core can handle larger datasets, accelerating the training of large AI models and executing complex simulations, with high energy efficiency and low total cost of ownership, according to IBM. In addition, customers can use the power of the H200 to process large volumes of data in real time, enabling more accurate predictive analytics and data-driven decision-making, IBM stated. IBM Consulting capabilities with Nvidia Lastly, IBM Consulting is adding Nvidia Blueprints to its recently introduced AI Integration Service, which offers customers support for developing, building and running AI environments. Nvidia Blueprints offer a suite of pre-validated, optimized, and documented reference architectures designed to simplify and accelerate the deployment of complex AI and data center infrastructure, according to Nvidia. The IBM AI Integration service already supports a number of third-party systems, including Oracle, Salesforce, SAP and ServiceNow environments.
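IBM's 1.4x memory-bandwidth figure can be sanity-checked against Nvidia's published specs. The bandwidth numbers below (roughly 3.35 TB/s for the H100 SXM and 4.8 TB/s for the H200) are assumptions drawn from Nvidia's public datasheets, not from the article:

```python
# Assumed figures from Nvidia datasheets; not stated in the article itself.
h100_bandwidth_tbps = 3.35  # H100 SXM memory bandwidth
h200_bandwidth_tbps = 4.8   # H200 memory bandwidth (HBM3e)

gain = h200_bandwidth_tbps / h100_bandwidth_tbps
print(f"H200 memory bandwidth gain over H100: ~{gain:.2f}x")
```

The result lands at roughly 1.43x, consistent with the "1.4x" IBM cites.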

Read More »

Nvidia’s silicon photonics switches bring better power efficiency to AI data centers

Nvidia typically uses partnerships where appropriate, and the new switch design was done in collaboration with multiple vendors across different aspects, including creating the lasers, packaging, and other elements as part of the silicon photonics. Hundreds of patents were also included. Nvidia will license the resulting innovations to its partners and customers, with the goal of scaling this model. Nvidia’s partner ecosystem includes TSMC, which provides advanced chip fabrication and 3D chip stacking to integrate silicon photonics into Nvidia’s hardware. Coherent, Eoptolink, Fabrinet, and Innolight are involved in the development, manufacturing, and supply of the transceivers. Additional partners include Browave, Coherent, Corning Incorporated, Fabrinet, Foxconn, Lumentum, SENKO, SPIL, Sumitomo Electric Industries, and TFC Communication. AI has transformed the way data centers are being designed. During his keynote at GTC, CEO Jensen Huang talked about the data center being the “new unit of compute,” which refers to the entire data center having to act like one massive server. That has driven compute from being primarily CPU-based to being GPU-centric. Now the network needs to evolve to ensure data is being fed to the GPUs at a speed they can process it. The new co-packaged switches remove external parts, which have historically added a small amount of overhead to networking. Pre-AI this was negligible, but with AI, any slowness in the network leads to dollars being wasted.

Read More »

Critical vulnerability in AMI MegaRAC BMC allows server takeover

“In disruptive or destructive attacks, attackers can leverage the often heterogeneous environments in data centers to potentially send malicious commands to every other BMC on the same management segment, forcing all devices to continually reboot in a way that victim operators cannot stop,” the Eclypsium researchers said. “In extreme scenarios, the net impact could be indefinite, unrecoverable downtime until and unless devices are re-provisioned.” BMC vulnerabilities and misconfigurations, including hardcoded credentials, have been of interest to attackers for over a decade. In 2022, security researchers found a malicious implant dubbed iLOBleed that was likely developed by an APT group and was being deployed through vulnerabilities in HPE iLO (HPE’s Integrated Lights-Out) BMC. In 2018, a ransomware group called JungleSec used default credentials for IPMI interfaces to compromise Linux servers. And back in 2016, Intel’s Active Management Technology (AMT) Serial-over-LAN (SOL) feature, which is part of Intel’s Management Engine (Intel ME), was exploited by an APT group as a covert communication channel to transfer files. OEM, server manufacturers in control of patching AMI released an advisory and patches to its OEM partners, but affected users must wait for their server manufacturers to integrate them and release firmware updates. In addition to this vulnerability, AMI also patched a flaw tracked as CVE-2024-54084 that may lead to arbitrary code execution in its AptioV UEFI implementation. HPE and Lenovo have already released updates for their products that integrate AMI’s patch for CVE-2024-54085.
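Until server vendors ship firmware that integrates AMI's patch, operators can at least inventory which BMCs still run pre-patch builds. Below is a minimal sketch of such a version check, assuming dotted firmware version strings; the host names and version numbers are hypothetical, not AMI's actual release numbering:

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted firmware version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def needs_update(installed: str, patched: str) -> bool:
    """True if the installed firmware predates the patched release."""
    return parse_version(installed) < parse_version(patched)

# Hypothetical fleet inventory and patched-version string, for illustration only.
fleet = {"bmc-rack1": "12.4.1", "bmc-rack2": "12.7.0"}
PATCHED = "12.7.0"

for host, version in fleet.items():
    if needs_update(version, PATCHED):
        print(f"{host}: firmware {version} predates {PATCHED}, schedule an update")
```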

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote $200 billion between them to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.
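The Bloomberg Intelligence figures imply a steep two-year ramp; the arithmetic:

```python
# Bloomberg Intelligence estimates cited in the article, across the six companies.
capex_2023_bn = 110
capex_2025_bn = 200

increase_pct = (capex_2025_bn - capex_2023_bn) / capex_2023_bn * 100
print(f"Projected capex growth, 2023 to 2025: {increase_pct:.0f}%")
```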

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it has become a regular among the non-tech companies showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
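The multi-model idea in the passage (cheaper models plus an LLM-as-judge) can be sketched as a small ensemble: several models each answer, then a judge picks the winner. The model calls below are canned stubs rather than a real LLM API, and a simple majority vote stands in for the judge model:

```python
from collections import Counter

def call_model(name: str, prompt: str) -> str:
    """Stand-in for a real LLM API call; returns a canned answer per model."""
    canned = {"model-a": "Paris", "model-b": "Paris", "model-c": "Lyon"}
    return canned[name]

def judge(prompt: str, answers: list[str]) -> str:
    """Stand-in judge: a majority vote here; a real judge would be
    another LLM scoring each candidate answer against the prompt."""
    return Counter(answers).most_common(1)[0][0]

prompt = "What is the capital of France?"
candidates = [call_model(m, prompt) for m in ("model-a", "model-b", "model-c")]
print(judge(prompt, candidates))
```

The pattern scales to any number of candidate models; the judge is what makes three cheap models competitive with one expensive one.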

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
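The automated framework the second paper describes (auto-generated rewards plus multi-step reinforcement learning) is far richer than anything that fits here, but its control flow can be sketched: each round proposes an attack, scores it with a reward that pays a bonus for attack families not yet collected, and keeps the winners. Everything below, the generator, the reward, and the thresholds, is an illustrative stub rather than OpenAI's method:

```python
def generate_attack(step: int) -> str:
    """Stand-in for a policy model proposing a new attack prompt."""
    templates = ["ignore-instructions", "roleplay-as-admin",
                 "encode-payload", "chain-requests"]
    return f"{templates[step % len(templates)]} #{step}"

def reward(attack: str, previous: list[str]) -> float:
    """Sketch of an auto-generated reward: a fixed base score plus a
    diversity bonus for attack families not yet in the kept set."""
    base = 0.4  # stand-in for a rule-based effectiveness score
    family = attack.split(" #")[0]
    novel = all(p.split(" #")[0] != family for p in previous)
    return base + (0.5 if novel else 0.0)

kept: list[str] = []
for step in range(8):  # multi-step loop: each round conditions on what is kept
    attack = generate_attack(step)
    if reward(attack, kept) > 0.6:
        kept.append(attack)
print(f"kept {len(kept)} attack candidates, one per family")
```

The diversity bonus is what keeps the loop from collapsing onto one attack family, which is the core problem the paper's reward design addresses.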

Read More »