Synthesia’s AI clones are more expressive than ever. Soon they’ll be able to talk back.

Earlier this summer, I walked through the glassy lobby of a fancy office in London, into an elevator, and then along a corridor into a clean, carpeted room. Natural light flooded in through its windows, and a large pair of umbrella-like lighting rigs made the room even brighter. I tried not to squint as I took my place in front of a tripod equipped with a large camera and a laptop displaying an autocue. I took a deep breath and started to read out the script.

I’m not a newsreader or an actor auditioning for a movie—I was visiting the AI company Synthesia to give it what it needed to create a hyperrealistic AI-generated avatar of me. The company’s avatars are a decent barometer of just how dizzying progress has been in AI over the past few years, so I was curious just how accurately its latest AI model, introduced last month, could replicate me. 

When Synthesia launched in 2017, its primary purpose was to match AI versions of real human faces—for example, the former footballer David Beckham—with dubbed voices speaking in different languages. A few years later, in 2020, it started giving the companies that signed up for its services the opportunity to make professional-level presentation videos starring either AI versions of staff members or consenting actors. But the technology wasn’t perfect. The avatars’ body movements could be jerky and unnatural, their accents sometimes slipped, and the emotions indicated by their voices didn’t always match their facial expressions.

Now Synthesia’s avatars have been updated with more natural mannerisms and movements, as well as expressive voices that better preserve the speaker’s accent—making them appear more humanlike than ever before. For Synthesia’s corporate clients, these avatars will make for slicker presenters of financial results, internal communications, or staff training videos.

I found the video demonstrating my avatar as unnerving as it is technically impressive. It’s slick enough to pass as a high-definition recording of a chirpy corporate speech, and if you didn’t know me, you’d probably think that’s exactly what it was. This demonstration shows how much harder it’s becoming to distinguish the artificial from the real. And before long, these avatars will even be able to talk back to us. But how much better can they get? And what might interacting with AI clones do to us?  

The creation process

When my former colleague Melissa visited Synthesia’s London studio to create an avatar of herself last year, she had to go through a long process of calibrating the system, reading out a script in different emotional states, and mouthing the sounds needed to help her avatar form vowels and consonants. As I stand in the brightly lit room 15 months later, I’m relieved to hear that the creation process has been significantly streamlined. Josh Baker-Mendoza, Synthesia’s technical supervisor, encourages me to gesture and move my hands as I would during natural conversation, while simultaneously warning me not to move too much. I duly repeat an overly glowing script that’s designed to encourage me to speak emotively and enthusiastically. The result is a bit as if Steve Jobs had been resurrected as a blond British woman with a low, monotonous voice.

It also has the unfortunate effect of making me sound like an employee of Synthesia. “I am so thrilled to be with you today to show off what we’ve been working on. We are on the edge of innovation, and the possibilities are endless,” I parrot eagerly, trying to sound lively rather than manic. “So get ready to be part of something that will make you go, ‘Wow!’ This opportunity isn’t just big—it’s monumental.”

Just an hour later, the team has all the footage it needs. A couple of weeks later I receive two avatars of myself: one powered by the previous Express-1 model and the other made with the latest Express-2 technology. The latter, Synthesia claims, makes its synthetic humans more lifelike and true to the people they’re modeled on, complete with more expressive hand gestures, facial movements, and speech. You can see the results for yourself below. 

[Video: my Express-1 and Express-2 avatars, courtesy of Synthesia]

Last year, Melissa found that her Express-1-powered avatar failed to match her transatlantic accent. Its range of emotions was also limited—when she asked her avatar to read a script angrily, it sounded more whiny than furious. In the months since, Synthesia has improved Express-1, but the version of my avatar made with the same technology blinks furiously and still struggles to synchronize body movements with speech.

By way of contrast, I’m struck by just how much my new Express-2 avatar looks like me: Its facial features mirror my own perfectly. Its voice is spookily accurate too, and although it gesticulates more than I do, its hand movements generally marry up with what I’m saying. 

But the tiny telltale signs of AI generation are still there if you know where to look. The palms of my hands are bright pink and as smooth as putty. Strands of hair hang stiffly around my shoulders instead of moving with me. Its eyes stare glassily ahead, rarely blinking. And although the voice is unmistakably mine, there’s something slightly off about my digital clone’s intonations and speech patterns. “This is great!” my avatar randomly declares, before slipping back into a saner register.

Anna Eiserbeck, a postdoctoral psychology researcher at the Humboldt University of Berlin who has studied how humans react to perceived deepfake faces, says she isn’t sure she’d have been able to identify my avatar as a deepfake at first glance.

But she would eventually have noticed something amiss. It’s not just the small details that give it away—my oddly static earring, the way my body sometimes moves in small, abrupt jerks. It’s something that runs much deeper, she explains.

“Something seemed a bit empty. I know there’s no actual emotion behind it—it’s not a conscious being. It does not feel anything,” she says. Watching the video gave her “this kind of uncanny feeling.”

My digital clone, and Eiserbeck’s reaction to it, make me wonder how realistic these avatars really need to be. 

I realize that part of the reason I feel disconcerted by my avatar is that it behaves in a way I rarely have to. Its oddly upbeat register is completely at odds with how I normally speak; I’m a die-hard cynical Brit who finds it difficult to inject enthusiasm into my voice even when I’m genuinely thrilled or excited. It’s just the way I am. Plus, watching the videos on a loop makes me question if I really do wave my hands about that way, or move my mouth in such a weird manner. If you thought being confronted with your own face on a Zoom call was humbling, wait until you’re staring at a whole avatar of yourself. 

When Facebook was first taking off in the UK almost 20 years ago, my friends and I thought illicitly logging into each other’s accounts and posting the most outrageous or rage-inducing status updates imaginable was the height of comedy. I wonder if the equivalent will soon be getting someone else’s avatar to say something truly embarrassing: expressing support for a disgraced politician or (in my case) admitting to liking Ed Sheeran’s music. 

Express-2 remodels every person it’s presented with into a polished professional speaker with the body language of a hyperactive hype man. And while this makes perfect sense for a company focused on making glossy business videos, watching my avatar doesn’t feel like watching me at all. It feels like something else entirely.

How it works

The real technical challenge these days has less to do with creating avatars that match our appearance than with getting them to replicate our behavior, says Björn Schuller, a professor of artificial intelligence at Imperial College London. “There’s a lot to consider to get right; you have to have the right micro gesture, the right intonation, the sound of voice and the right word,” he says. “I don’t want an AI [avatar] to frown at the wrong moment—that could send an entirely different message.”

To achieve an improved level of realism, Synthesia developed a number of new audio and video AI models. The team created a voice cloning model to preserve the human speaker’s accent, intonation, and expressiveness—unlike other voice models, which can flatten speakers’ distinctive accents into generically American-sounding voices.

When a user uploads a script to Express-1, its system analyzes the words to infer the correct tone to use. That information is then fed into a diffusion model, which renders the avatar’s facial expressions and movements to match the speech. 

Alongside the voice model, Express-2 uses three other models to create and animate the avatars. The first generates the avatar’s gestures to accompany the speech fed into it by the Express-Voice model. A second evaluates how closely each of several candidate versions of that generated motion aligns with the input audio and selects the best one. A final model then renders the avatar with the chosen motion.
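To make that generate-score-select pattern concrete, here is a minimal sketch in Python. It is not Synthesia’s code or API: every function in it (generate_gesture_candidates, alignment_score, render_avatar) is a toy stand-in invented for illustration; it shows only the shape of the flow the company describes, in which several candidate motion tracks are proposed for the speech, each is scored for audio-motion alignment, and the best-scoring one goes to the renderer.

```python
# A toy, hypothetical sketch of the generate-score-select flow described above.
# These functions are NOT Synthesia's API; they are stand-ins that only
# illustrate the idea: propose several candidate motion tracks for the speech,
# score each one for audio-motion alignment, and render the best-scoring one.
import random
from dataclasses import dataclass


@dataclass
class MotionTrack:
    gesture_energy: float  # stand-in for a full sequence of pose and face parameters


def generate_gesture_candidates(audio_seconds: float, n: int = 4) -> list[MotionTrack]:
    # Stand-in for the gesture-generation model (model 1).
    return [MotionTrack(gesture_energy=random.random()) for _ in range(n)]


def alignment_score(audio_seconds: float, motion: MotionTrack) -> float:
    # Stand-in for the alignment model (model 2): higher means a better audio-motion fit.
    return -abs(motion.gesture_energy - 0.5)  # toy heuristic, not a real metric


def render_avatar(motion: MotionTrack, audio_seconds: float) -> str:
    # Stand-in for the large rendering model (model 3).
    return f"rendered avatar video with gesture_energy={motion.gesture_energy:.2f}"


def make_avatar_video(audio_seconds: float) -> str:
    candidates = generate_gesture_candidates(audio_seconds)
    best = max(candidates, key=lambda m: alignment_score(audio_seconds, m))
    return render_avatar(best, audio_seconds)


print(make_avatar_video(audio_seconds=30.0))
```

The point is only the pipeline shape: candidate generation followed by a learned judge, rather than a single pass from script to video.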

This third rendering model is significantly more powerful than its Express-1 predecessor. Whereas the previous model had a few hundred million parameters, Express-2’s rendering model has billions. This means it takes less time to create the avatar, says Youssef Alami Mejjati, Synthesia’s head of research and development:

“With Express-1, it needed to first see someone expressing emotions to be able to render them. Now, because we’ve trained it on much more diverse data and much larger data sets, with much more compute, it just learns these associations automatically without needing to see them.” 

Narrowing the uncanny valley

Although humanlike AI-generated avatars have been around for years, the recent boom in generative AI is making it increasingly easy and affordable to create lifelike synthetic humans—and they’re already being put to work. Synthesia isn’t alone: AI avatar companies like Yuzu Labs, Creatify, Arcads, and Vidyard give businesses the tools to quickly generate and edit videos starring either AI actors or artificial versions of members of staff, promising cost-effective ways to make compelling ads that audiences connect with. Similarly, AI-generated clones of livestreamers have exploded in popularity across China in recent years, partly because they can sell products 24/7 without getting tired or needing to be paid.

For now at least, Synthesia is “laser focused” on the corporate sphere. But it’s not ruling out expanding into new sectors such as entertainment or education, says Peter Hill, the company’s chief technical officer. In an apparent step toward this, Synthesia recently partnered with Google to integrate Google’s powerful new generative video model Veo 3 into its platform, allowing users to directly generate and embed clips into Synthesia’s videos. The partnership suggests that in the future, these hyperrealistic artificial humans could take up starring roles in detailed universes with ever-changeable backdrops.

At present this could, for example, involve using Veo 3 to generate a video of meat-processing machinery, with a Synthesia avatar next to the machines talking about how to use them safely. But future versions of Synthesia’s technology could result in educational videos customizable to an individual’s level of knowledge, says Alex Voica, head of corporate affairs and policy at Synthesia. For example, a video about the evolution of life on Earth could be tweaked for someone with a biology degree or someone with high-school-level knowledge. “It’s going to be such a much more engaging and personalized way of delivering content that I’m really excited about,” he says. 

The next frontier, according to Synthesia, will be avatars that can talk back, “understanding” conversations with users and responding in real time. Think ChatGPT, but with a lifelike digital human attached.

Synthesia has already added an interactive element by letting users click through on-screen questions during quizzes presented by its avatars. But it’s also exploring making them truly interactive: Future users could ask their avatar to pause and expand on a point, or ask it a question. “We really want to make the best learning experience, and that means through video that’s entertaining but also personalized and interactive,” says Alami Mejjati. “This, for me, is the missing part in online learning experiences today. And I know we’re very close to solving that.”
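As a rough illustration of what such a talk-back loop might involve, here is a hypothetical sketch; it is my own, not Synthesia’s design, and chat_reply, synthesize_voice, and drive_avatar are invented stand-ins for a conversational model, a voice-cloning text-to-speech model, and the avatar renderer.

```python
# Hypothetical sketch of an interactive avatar loop; not Synthesia's design.
# User text goes to a conversational model, the reply is synthesized into
# speech in the presenter's cloned voice, and that audio drives the renderer.

def chat_reply(history: list[str], user_message: str) -> str:
    # Stand-in for a conversational language model (the "ChatGPT" part).
    return f"(model reply to: {user_message!r})"


def synthesize_voice(text: str) -> bytes:
    # Stand-in for a voice-cloning TTS model that keeps the speaker's accent.
    return text.encode("utf-8")


def drive_avatar(audio: bytes) -> None:
    # Stand-in for streaming audio into the avatar renderer in real time.
    print(f"[avatar speaks {len(audio)} bytes of audio]")


def interactive_session() -> None:
    history: list[str] = []
    while (user_message := input("You (blank line to quit): ").strip()):
        reply = chat_reply(history, user_message)
        history += [user_message, reply]
        drive_avatar(synthesize_voice(reply))


interactive_session()
```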

We already know that humans can—and do—form deep emotional bonds with AI systems, even with basic text-based chatbots. Combining agentic technology—which is already capable of navigating the web, coding, and playing video games unsupervised—with a realistic human face could usher in a whole new kind of AI addiction, says Pat Pataranutaporn, an assistant professor at the MIT Media Lab.  

“If you make the system too realistic, people might start forming certain kinds of relationships with these characters,” he says. “We’ve seen many cases where AI companions have influenced dangerous behavior even when they are basically texting. If an avatar had a talking head, it would be even more addictive.”

Schuller agrees that avatars in the near future will be perfectly optimized to adjust their projected levels of emotion and charisma so that their human audiences will stay engaged for as long as possible. “It will be very hard [for humans] to compete with charismatic AI of the future; it’s always present, always has an ear for you, and is always understanding,” he says. “AI will change that human-to-human connection.”

As I pause and replay my Express-2 avatar, I imagine holding conversations with it—this uncanny, permanently upbeat, perpetually available product of pixels and algorithms that looks like me and sounds like me, but fundamentally isn’t me. Virtual Rhiannon has never laughed until she’s cried, or fallen in love, or run a marathon, or watched the sun set in another country. 

But, I concede, she could deliver a damned good presentation about why Ed Sheeran is the greatest musician ever to come out of the UK. And only my closest friends and family would know that it’s not the real me.
