
Motor neuron diseases took their voices. AI is bringing them back.

Jules Rodriguez lost his voice in October of last year. His speech had been deteriorating since a diagnosis of amyotrophic lateral sclerosis (ALS) in 2020, as the muscles in his head and neck progressively weakened along with those in the rest of his body.

By 2024, doctors were worried that he might not be able to breathe on his own for much longer. So Rodriguez opted to have a small tube inserted into his windpipe to help him breathe. The tracheostomy would extend his life, but it also brought an end to his ability to speak.

“A tracheostomy is a scary endeavor for people living with ALS, because it signifies crossing a new stage in life, a stage that is close to the end,” Rodriguez tells me using a communication device. “Before the procedure I still had some independence, and I could still speak somewhat, but now I am permanently connected to a machine that breathes for me.”

Rodriguez and his wife, Maria Fernandez, who live in Miami, thought they would never hear his voice again. Then they re-created it using AI. After feeding old recordings of Rodriguez’s voice into a tool trained on voices from film, television, radio, and podcasts, the couple were able to generate a voice clone—a way for Jules to communicate in his “old voice.”

“Hearing my voice again, after I hadn’t heard it for some time, lifted my spirits,” says Rodriguez, who today communicates by typing sentences using a device that tracks his eye movements, which can then be “spoken” in the cloned voice. The clone has enhanced his ability to interact and connect with other people, he says. He has even used it to perform comedy sets on stage.

Rodriguez is one of over a thousand people with speech difficulties who have used the voice cloning tool since ElevenLabs, the company that developed it, made it available to them for free. Like many new technologies, the AI voice clones aren’t perfect, and some people find them impractical in day-to-day life. But the voices represent a vast improvement on previous communication technologies and are already improving the lives of people with motor neuron diseases, says Richard Cave, a speech and language therapist at the Motor Neuron Disease Association in the UK. “This is genuinely AI for good,” he says.

Cloning a voice

Motor neuron diseases are a group of disorders in which the neurons that control muscles and movement are progressively destroyed. They can be difficult to diagnose, but typically, people with these disorders start to lose the ability to move various muscles. Eventually, they can struggle to breathe, too. There is no cure.

Rodriguez started showing symptoms of ALS in the summer of 2019. “He started losing some strength in his left shoulder,” says Fernandez, who sat next to him during our video call. “We thought it was just an old sports injury.” His arm started to get thinner, too. In November, his right thumb “stopped working” while he was playing video games. It wasn’t until February 2020, when Rodriguez saw a hand specialist, that he was told he might have ALS. He was 35 years old. “It was really, really, shocking to hear from somebody … you see about your hand,” says Fernandez. “That was a really big blow.”

Like others with ALS, Rodriguez was advised to “bank” his voice—to tape recordings of himself saying hundreds of phrases. These recordings can be used to create a “banked voice” to use in communication devices. The result was jerky and robotic.

It’s a common experience, says Cave, who has helped 50 people with motor neuron diseases bank their voices. “When I first started at the MND Association [around seven years ago], people had to read out 1,500 phrases,” he says. It was an arduous task that would take months. 

And there was no way to predict how lifelike the resulting voice would be—often it ended up sounding quite artificial. “It might sound a bit like them, but it certainly couldn’t be confused for them,” he says. Since then, the technology has improved, and for the last year or two the people Cave has worked with have only needed to spend around half an hour recording their voices. But though the process was quicker, he says, the resulting synthetic voice was no more lifelike.

Then came the voice clones. ElevenLabs has been developing AI-generated voices for use in film, television, and podcasts since it was founded three years ago, says Sophia Noel, who oversees partnerships between the company and nonprofits. The company’s original goal was to improve dubbing, making voice-overs in a new language seem more natural and less obvious. But then the technical lead of Bridging Voice, an organization that works to help people with ALS communicate, told ElevenLabs that its voice clones were useful to that group, says Noel. Last August, ElevenLabs launched a program to make the technology freely available to people with speech difficulties.

Suddenly, it became much faster and easier to create a voice clone, says Cave. Instead of having to record phrases, users can upload existing voice recordings, such as past WhatsApp voice messages or wedding videos. “You need a minimum of a minute to make anything, but ideally you want around 30 minutes,” says Noel. “You upload it into ElevenLabs. It takes about a week, and then it comes out with this voice.”

Rodriguez played me a statement using both his banked voice and his voice clone. The difference was stark: The banked voice was distinctly unnatural, but the voice clone sounded like a person. It wasn’t entirely natural—the words came a little fast, and the emotive quality was slightly lacking. But it was a huge improvement. The difference between the two is, as Fernandez puts it, “like night and day.”

The ums and ers

Cave started introducing the technology to people with MND a few months ago. Since then, 130 of them have started using it, “and the feedback has been unremittingly good,” he says. The voice clones sound far more lifelike than the results of voice banking. “They [include] pauses for breath, the ums, the ers, and sometimes there are stammers,” says Cave, who himself has a subtle stammer. “That feels very real to me, because actually I would rather have a synthetic voice representing me that stammered, because that’s just who I am.”

Joyce Esser is one of the 130 people Cave has introduced to voice cloning. Esser, who is 65 years old and lives in Southend-on-Sea in the UK, was diagnosed with bulbar MND in May last year.

Bulbar MND is a form of the disease that first affects muscles in the face, throat, and mouth, which can make speaking and swallowing difficult. Esser can still talk, but slowly and with difficulty. She’s a chatty person, but she says her speech has deteriorated “quite quickly” since January. We communicated via a combination of email, video call, speaking, a writing board, and text-to-speech tools. “To say this diagnosis has been devastating is an understatement,” she tells me. “Losing my voice has been a massive deal for me, because it’s such a big part of who I am.”

Joyce Esser and her husband Paul on holiday in the Maldives. COURTESY OF JOYCE ESSER

Esser has lots of friends all over the country, Paul Esser, her husband of 38 years, tells me. “But when they get together, they have a rule: Don’t talk about it,” he says. Talking about her MND can leave Joyce sobbing uncontrollably. She had prepared a box of tissues for our conversation.

Voice banking wasn’t an option for Esser. By the time her MND was diagnosed, she was already losing her ability to speak. Then Cave introduced her to the ElevenLabs offering. Esser had a four-and-a-half-minute-long recording of her voice from a recent local radio interview and sent it to Cave to create her voice clone. “When he played me my AI voice, I just burst into tears,” she says. “I’D GOT MY VOICE BACK!!!! Yippeeeee!”

“We were just beside ourselves,” adds Paul. “We thought we’d lost [her voice] forever.”

Hearing a “lost” voice can be an incredibly emotional experience for everyone involved. “It was bittersweet,” says Fernandez, recalling the first time she heard Rodriguez’s voice clone. “At the time, I felt sorrow, because [hearing the voice clone] reminds you of who he was and what we’ve lost,” she says. “But overwhelmingly, I was just so thrilled … it was so miraculous.”

Rodriguez says he uses the voice clone as much as he can. “I feel people understand me better compared to my banked voice,” he says. “People are wowed when they first hear it … as I speak to friends and family, I do get a sense of normalcy compared to when I just had my banked voice.”

Cave has heard similar sentiments from other people with motor neuron disease. “Some [of the people with MND I’ve been working with] have told me that once they started using ElevenLabs voices people started to talk to them more, and that people would pop by more and feel more comfortable talking to them,” he says. That’s important, he stresses. Social isolation is common for people with MND, especially for those with advanced cases, he says, and anything that can make social interactions easier stands to improve the well-being of people with these disorders: “This is something that [could] help make lives better in what is the hardest time for them.”

“I don’t think I would speak or interact with others as much as I do without it,” says Rodriguez.

A “very slow game of Ping-Pong”

But the tool is not a perfect speech aid. In order to create text for the voice clone, words must be typed out. There are lots of devices that help people with MND to type using their fingers or eye or tongue movements, for example. The setup works fine for prepared sentences, and Rodriguez has used his voice clone to deliver a comedy routine—something he had started to do before his ALS diagnosis. “As time passed and I began to lose my voice and my ability to walk, I thought that was it,” he says. “But when I heard my voice for the first time, I knew this tool could be used to tell jokes again.” Being on stage was “awesome” and “invigorating,” he adds.

Jules Rodriguez performs his comedy set on stage. DAN MONO FROM DART VISION

But typing isn’t instant, and any conversations will include silent pauses. “Our arguments are very slow paced,” says Fernandez. Conversations are like “a very slow game of Ping-Pong,” she says.

Joyce Esser loves being able to re-create her old voice. But she finds the technology impractical. “It’s good for pre-prepared statements, but not for conversation,” she says. She has her voice clone loaded onto a phone app designed for people with little or no speech, which works with ElevenLabs. But it doesn’t allow her to use “swipe typing,” a form of typing she finds quicker and easier. And the app requires her to type sections of text and then upload them one at a time, she says, adding: “I’d just like a simple device with my voice installed onto it that I can swipe type into and have my words spoken instantly.”

For the time being, her “first choice” communication device is a simple writing board. “It’s quick and the listener can engage by reading as I write, so it’s as instant and inclusive as can be,” she says. 

Esser also finds that when she uses the voice clone, the volume is too low for people to hear, and it speaks too quickly and isn’t expressive enough. She says she’d like to be able to use emojis to signal when she’s excited or angry, for example.

Rodriguez would like that option too. The voice clone can sound a bit emotionally flat, and it can be difficult to convey various sentiments. “The issue I have is that when you write something long, the AI voice almost seems to get tired,” he says.  

“We appear to have the authenticity of voice,” says Cave. “What we need now is the authenticity of delivery.”

Other groups are working on that part of the equation. The Scott-Morgan Foundation, a charity with the goal of making new technologies available to improve the well-being of people with disorders like MND, is working with technology companies to develop custom-made systems for 10 individuals, says executive director LaVonne Roberts.

The charity is investigating pairing ElevenLabs’ voice clones with an additional technology: hyperrealistic avatars for people with motor neuron disease. These “twins” look and sound like a person and can “speak” from a screen. Several companies are working on AI-generated avatars. The Scott-Morgan Foundation is working with D-ID.

Creating the avatar isn’t an easy process. To create hers, Erin Taylor, who was diagnosed with ALS when she was 23, had to speak 500 sentences into a camera and stand for five hours, says Roberts. “We were worried it was going to be impossible,” she says. The result is impressive. “Her mom told me, ‘You’re starting to capture [Erin’s] smile,’” says Roberts. “That really hit me deeper and heavier than anything.”

Taylor showcased her avatar at a technology conference in January with a pre-typed speech. It’s not clear how avatars like these might be useful on a day-to-day basis, says Cave: “The technology is so new that we’re still trying to come up with use cases that work for people with MND. The question is … how do we want to be represented?” Cave says he has seen people advocate for a system where hyperrealistic avatars of a person with MND are displayed on a screen in front of the person’s real face. “I would question that right from the start,” he says.

Both Rodriguez and Esser can see how avatars might help people with MND communicate. “Facial expressions are a massive part of communication, so the idea of an avatar sounds like a good idea,” says Esser. “But not one that covers the user’s face … you still need to be able to look into their eyes and their souls.”

The Scott-Morgan Foundation will continue to work with technology companies to develop more communication tools for people who need them, says Roberts. And ElevenLabs plans to partner with other organizations that work with people with speech difficulties so that more of them can access the technology. “Our goal is to give the power of voice to 1 million people,” says Noel. In the meantime, people like Cave, Esser, and Rodriguez are keen to spread the word on voice clones to others in the MND community.

“It really does change the game for us,” says Fernandez. “It doesn’t take away most of the things we are dealing with, but it really enhances the connection we can have together as a family.”
