
Tuning into the future of collaboration 

In partnership with Shure

When work went remote, the sound of business changed. What began as a scramble to make home offices functional has evolved into a revolution in how people hear and are heard. From education to enterprise, companies across industries have reimagined what clear, reliable communication can mean in a hybrid world. For major audio and communications companies like Shure and Zoom, that transformation has been powered by artificial intelligence, new acoustic technologies, and a shared mission: making connection effortless.

Necessity during the pandemic accelerated years of innovation in months.

“Audio and video just working is a baseline for collaboration,” says Brendan Ittelson, chief ecosystem officer at Zoom. “That expectation has shifted from connecting people to enhancing productivity and creativity across the entire ecosystem.”

Audio is a foundation for trust, understanding, and collaboration. Poor sound quality can distort meaning and fatigue listeners, while crisp audio and intelligent processing can make digital interactions feel nearly as natural as in-person exchanges.

“If you think about the fundamental need here,” adds Sam Sabet, chief technology officer at Shure, “it’s the ability to amplify the audio and the information that’s really needed, and diminish the unwanted sounds and audio so that we can enhance that experience and make it seamless for people to communicate.”

For both Ittelson and Sabet, AI now sits at the center of this progress. For Shure, machine learning powers real-time noise suppression, adaptive beamforming, and spatial audio that tunes itself to a room’s acoustics. For Zoom, AI underpins every layer of its platform, from dynamic noise reduction to automated meeting summaries and intelligent assistants that anticipate user needs. These tools are transforming communication from reactive to proactive, enabling systems that understand intent, context, and emotion.
“Even if you’re not working from home and coming into the office, the types of spaces and environments you try to collaborate in today are constantly changing because our needs are constantly changing,” says Sabet. “Having software and algorithms that adapt seamlessly and self-optimize based on the acoustics of the room, based on the different layouts of the spaces where people collaborate, is instrumental.”

The future, they suggest, is one where technology fades into the background. As audio devices and AI companions learn to self-optimize, users won’t think about microphones or meeting links. Instead, they’ll simply connect. Both companies are now exploring agentic AI systems and advanced wireless solutions that promise to make collaboration seamless across spaces, whether in classrooms, conference rooms, or virtual environments yet to come.

“It’s about helping people focus on strategy and creativity instead of administrative busy work,” says Ittelson.

This episode of Business Lab is produced in partnership with Shure.

Full Transcript

Megan Tatum: From MIT Technology Review, I’m Megan Tatum, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

This episode is produced in partnership with Shure.

Now, as the pandemic ushered in the cultural shift that led to our increasingly virtual world, it also sparked a flurry of innovation in the audio and video industries to keep employees and customers connected and businesses running. Today we’re going to talk about the AI technologies behind those innovations, the impact on audio innovation, and the emerging opportunities for further advances in audio capabilities.

Two words for you: elevated audio.

My guests today are Sam Sabet, chief technology officer at Shure, and Brendan Ittelson, chief ecosystem officer at Zoom.

Welcome, Sam. Welcome, Brendan.

Sam Sabet: Thank you, Megan.
It’s a pleasure to be here, and I’m looking forward to this conversation with both you and Brendan. It should be a very exciting conversation.

Brendan Ittelson: Thank you so much for having me today. I’m looking forward to the conversation and all the topics we have to dive into in this area.

Megan: Fantastic. Lovely to have you both here. And Sam, just to set some context, I wonder if we could start with the pandemic and the innovation that really was born out of necessity. I mean, when it became clear that we were all going to be virtual for the foreseeable future, what was the first technological mission for Shure?

Sam: Yeah, very good question. The pandemic really accelerated a lot of innovation around virtual communications and fundamentally how we perform our everyday jobs remotely. One of our first technological missions when the pandemic happened and everybody ended up going home and performing their functions remotely was to make sure that people could continue to communicate effectively, whether that’s for business meetings, virtual events, or educational purposes. We focused on collaboration and enhancing collaboration tools. And ideally, what we were aiming to do was to improve the ease of use and configuration of audio tool sets.

Because unlike the office environment, where it might be a lot more controlled, people were working from non-traditional areas like home offices or other makeshift solutions. We needed to make sure that people could still get pristine, studio-level audio even in uncontrolled environments that are not really made for that. We expedited development in our software solutions. We created tool sets that allowed for ease of deployment and remote configuration and management, so we could enable people to continue doing the things they needed to do without having to worry about the underlying technology.
Megan: And Brendan, during that time, it seemed everyone became a Zoom user of some sort. What was the first mission at Zoom when virtual connection became this necessity for everyone?

Brendan: Well, our mission fundamentally didn’t change. It’s always been about delivering frictionless communications. What shifted was the urgency and the magnitude of what we were doing. Our focus shifted to how we do this reliably, securely, and at scale, to ensure these millions of new users could connect instantly without friction. We really shifted our thinking from being just a business continuity tool to becoming a lifeline for so many individuals and industries. The stories that we heard across education, healthcare, and just general human connection, the number of those moments that matter to people that we were able to help facilitate, just became so important. We really focused on how we can be there and make it frictionless so folks can focus on that human connection. And that accelerated our thinking in terms of innovation and reinforced the thought that we need to focus on simplicity, accessibility, and trust in communication technology, so that people could focus on that connection and not the technology that makes it possible.

Megan: That’s so true. It really did become an absolute lifeline for people, didn’t it? And before we dive into the technologies behind these emerging capabilities, I wonder if we could first talk about the importance of clear audio. I mean, Sam, as much as we all worry over how we look on Zoom, is how we sound perhaps as impactful, or even more so?

Sam: Yeah, you’re absolutely correct. Clear audio is absolutely critical for effective communications. Video quality is very important, absolutely, but poor audio can really hinder understanding and engagement.
As a matter of fact, there are studies and research from places such as Yale University showing that poor audio can make understanding more challenging and even affect retention of information. Especially in an educational environment, where there’s a lot of background noise and very different types of spaces like auditoriums and lecture halls, it really becomes a high priority that you have great audio quality. And during the pandemic, as you said, and as Brendan rightly said, it became one of our highest priorities to focus on technologies like beamforming mics and ways to focus on the speaker’s voice and minimize that unwanted background noise, so that we could ensure that the communication was efficient and well understood, and that it removed the distraction so people could actually communicate and retain the information that was being shared.

Megan: It is incredible just how impactful audio can be, isn’t it? Brendan, as you said, remote and hybrid collaboration is part of Zoom’s DNA. What observations can you share about how users have grown along with the technological advancements, and maybe how their expectations have grown as well?

Brendan: Definitely. Users now expect seamless and intelligent experiences. Audio and video just working is a baseline for collaboration. That expectation has shifted from connecting people to enhancing productivity and creativity across the entire ecosystem. When we look at it, we’re really looking at these trends in terms of how people want to be better when they’re at home. For example, AI-powered tools like Smart Summaries, translation, and noise suppression help people stay productive and connected no matter where they’re working. But then this also comes into play at the office. We’re starting to see folks dive into our technology like Intelligent Director and Smart Name Tags, which create that meeting equity even when they’re in a conference room.
So the remote experience and the room experience are similar and create that same ability to be seen, heard, and contribute. And we’re now going beyond just meetings. Zoom is really transforming into an AI-first work platform that’s focused on human connection. And so that goes beyond meetings into things like Chat, Zoom Docs, Zoom Events and Webinars, the Zoom Contact Center, and more. And all of this is being brought together using our AI Companion at its core to help connect all of those different points of connection for individuals.

Megan: So Brendan, we know it wasn’t only workplaces that were affected by the pandemic; the education sector had to undergo a huge change as well. I wondered if you could talk a little bit about how Zoom has operated in the higher education sphere.

Brendan: Definitely. Education has always been a focus for Zoom and an area that we’ve believed in, because education and learning is something we value as a company, and so we have invested in that sector. And personally, being the son of academics, it is an area that I find fascinating. We continue to invest in how we make the classroom a stronger space, especially now that the classroom has changed: it can be in person, it can be virtual, it can be a mix. And using Zoom and its tools, we’re able to help bridge all those different scenarios to make learning accessible to students no matter their means.

That’s what truly excites us: being able to have technology that allows people to pursue their desires and their interests, and really up-level their pursuits and inspire more.
We’re constantly investing in how to allow those messages to get out and to integrate into the flow of communication and collaboration that higher education uses, whether that’s being integrated into the classroom or into learning management systems, to make a seamless flow so that students and their educators can just collaborate. And also so that we can support all the infrastructure and administration that helps make that possible.

Megan: Absolutely. Such an important thing. And Sam, could you talk to us a bit about how Shure has worked in that education space as well, from an audio point of view?

Sam: Absolutely. Actually, this is a topic that’s near and dear to my heart, because I’m an adjunct professor in my free time.

Megan: Oh, wow. Very impressive.

Sam: So I know the challenges of trying to deliver this sort of hybrid lecture, if you will. And Shure has been particularly well suited for this environment; we’ve been focused on it and investing in technologies there for decades. If you think about how a lecture hall is structured, it’s a little different than just having a meeting around a conference table. Shure has focused on creating products that combine a presenter scenario with a meeting space plus the far end, where remote users or students can hear intelligibly what’s happening in the lecture hall but can also participate.

Between our products like the Ceiling Mic Arrays and our wireless microphones that are purpose-built for presenters and educators, like our MXW neXt product line, we’ve created technologies that allow those two previously separate worlds to integrate. And then adding that onto integration with Zoom and other products that allow for that collaboration has been very instrumental.
And again, being a user myself and giving those lectures, I can see a night-and-day difference in just how much more effective my lectures are today compared with five or six years ago. And that’s all made possible by technologies that are purpose-built for these scenarios, integrated with powerful tools that make the job so much more seamless.

Megan: Absolutely fascinating that you got to put the technology to use yourself as well, to check that it was all working well. And you mentioned AI there, of course. Sam, what AI technologies have had the most significant impact on recent audio advancements?

Sam: Yeah, absolutely. If you think about the fundamental need here, it’s the ability to amplify the audio and the information that’s really needed and diminish the unwanted sounds and audio, so that we can enhance that experience and make it seamless for people to communicate. With our innovations at Shure, we’ve leveraged cutting-edge technologies both to enhance communication effectiveness and to align seamlessly with evolving features in unified communications, like the ones that Brendan just mentioned in the Zoom platform.

We partner with industry leaders like Zoom to ensure that we’re providing the ability to focus on that needed audio and eliminate all the background distractions. AI has transformed audio technology with machine learning algorithms that enable us to do more real-time audio processing, significantly enhancing things like noise reduction and speech isolation. Just to give you a simple example, our IntelliMix Room audio processing software, which we’ve released as part of a complete room solution, uses AI to optimize sound in different environments.

And really, that’s one of the fundamental changes in this period, whether that’s pandemic or post-pandemic: the key is flexibility and being able to adapt to changing work environments.
Even if you’re not working from home and you’re coming into the office, the types of spaces and environments you try to collaborate in today are constantly changing, because our needs are constantly changing. And so having software and algorithms that adapt seamlessly and are able to self-optimize based on the acoustics of the room and the different layouts of the spaces where people collaborate is instrumental.

And then, last but not least, AI has transformed the way audio and video integrate. For example, we utilize voice recognition systems that integrate with intelligent cameras to enable voice-tracking technology, so that cameras can not only identify who’s speaking, but you have the ability to hear and see people clearly. And that in general just enhances the overall communication experience.

Megan: Wow. That’s just so much innovation in quite a short space of time, really. Brendan, you mentioned AI a little bit beforehand, but I wonder what other AI technologies have had the biggest impact as Zoom builds out its own emerging capabilities?

Brendan: Definitely. And I couldn’t agree more with Sam; AI has made such a big shift, and it’s really across the spectrum. When I think about it, there are almost three tiers when you look at the stack. You start off at the raw audio, where AI is doing things like noise suppression, echo cancellation, and voice enhancement. All of that makes this amazing audio signal that can then go into the next layer, which is speech AI and natural language processing. That starts to open up items such as real-time transcription, translation, and searchable content, making the communication not just what’s heard but more accessible and inclusive by providing that content in the format that is best for each individual.
And then you take those two layers and put generative and agentic AI on top, which can start surfacing insights, summarizing the conversation, and even taking actions on someone’s behalf. It really starts to change the way that people work, how they have access, and how they connect. I think it is a huge shift, and I’m very excited by how those three levels interact to enable people to do more and to connect, thanks to AI.

Megan: Yeah, absolutely. So much rich information can come out of a single call now because of those sorts of tools. And following on from that, Brendan, you mentioned the Zoom AI Companion before. I wondered if you could talk a bit about your top priorities when building that product, to ensure it was truly useful for your customers?

Brendan: Definitely. When we developed AI Companion, we had two priority focus areas from day one: trust and security, and then accuracy and relevance. On the trust side, it was non-negotiable that customer data wouldn’t be used to train our models. People need to know that their conversations and content are private and secure.

Megan: Of course.

Brendan: And then with accuracy, we needed to ensure AI outputs weren’t generic but grounded in the actual context of a meeting, a chat, or a product. But the real story here, when I think about AI Companion, is the customer value that it delivers. AI Companion helps people save time with meeting recaps, task generation, and proactive prep for the next session. It reduces that friction in hybrid work, whether you’re in a meeting room, a Zoom Room, or collaborating across different collaboration tools like Microsoft or Google. And it enables more equitable participation by surfacing the right context for everyone, no matter where and how they’re working.

All this leads to a result that’s practical, trustworthy, and embedded where work happens.
And it’s not just another tool to manage; it’s there in someone’s flow of work to help them along the way.

Megan: Yeah. That trust piece is just so important today, isn’t it? And Sam, as much as AI has impacted audio innovation, audio has also had an impact on AI capabilities. I wondered if you could talk a little bit about audio as a data input and the advancements that technologies like large language models, LLMs, are enabling.

Sam: Absolutely. Audio is a rich data source that’s added a new dimension to AI capabilities. If you think about speech recognition or natural language processing, they’ve had significant advances due to the audio data that’s provided to them. And to Brendan’s point about trust and accuracy, I like to think of the products that Shure enables customers with as essentially the eyes and ears in the room for leading AI companions, just like the Zoom AI Companion. You really need that pristine audio input to be able to trust the accuracy of what the AI generates. These AI companions have been very instrumental in the way we do business every day. Between transcription, speaker attribution, and the ability to add action items within a meeting and track what’s happening in our interactions, all of that really has to rely on accurate and pristine audio input into the AI. I feel that further improves the trust that our end users have in the results of AI and their ability to leverage it more.

If you think about how audio inputs enhance an interactive AI system, they enable more natural and intuitive interactions with AI. And they really allow for that seamless integration and the ability for users to use it without having to worry about: Is the room set up correctly? Is the audio level proper?
And when we talk about agentic AI, we’re working on future developments where systems can self-heal, or detect that there are issues in the environment so that they can autocorrect and adapt across all these different environments, further enabling the AI to do a much more effective job, if you will.

Megan: Sam, you touched on future developments there. I wonder if we could close our conversation today with a bit of a forward look. Brendan, can you share innovations that Zoom is working on now, and what are you most excited to see come to fruition?

Brendan: Well, your timing for this question is absolutely perfect, because we’ve just wrapped up Zoomtopia 2025.

Megan: Oh, wow.

Brendan: And this is where we discussed a lot of the new AI innovations that we have coming to Zoom. Starting off, there’s AI Companion 3.0. We’ve launched this next generation of agentic AI capabilities in Zoom Workplace. And with 3.0, when it releases, it isn’t just about transcribing; it’s turned into a platform that helps you with follow-up tasks, preps you for your next conversation, and even proactively suggests how to free up your time. For example, AI Companion can help you schedule meetings intelligently across time zones, suggest which meetings you can skip and still stay informed, and even prepare you with context and insights before you walk into the conversation. It’s about helping people focus on strategy and creativity instead of administrative busy work. And for hybrid work specifically, we introduced Zoomie Group Assistant, which will be a big leap for hybrid collaboration.

Acting as an assistant for group chat and meetings, you can simply ask, “@Zoomie, what’s the latest update on the project?” or “@Zoomie, what are the team’s action items?” and get instant answers.
Or, because we’re talking about audio here, you can go into a conference room and say, “Hey, Zoomie,” and get help with things like checking into a room, adjusting lights or temperature, or even sharing your screen. And while all of these are built-in features, we’re also expanding the platform to allow custom AI agents through our AI Studio, so organizations can bring their own agents or integrate with third-party ones.

Zoom has always believed in an open platform and philosophy, and that is continuing. Folks using AI Companion 3.0 will be able to use agents across platforms, working with the workflows they have across all the different SaaS vendors in their environment, whether that’s Google, Microsoft, ServiceNow, Cisco, or so many other tools.

Megan: Fantastic. It certainly sounds like a tool I could use in my work, so I look forward to hearing more about that. And Sam, we’ve touched on the many exciting things happening in audio too. What are you working on at Shure, and what are you most excited to see come to fruition?

Sam: At Shure, our engineering teams are working on a range of exciting projects, but in particular we’re developing new collaboration solutions that are integral for IT end users, and these obviously integrate with the leading UC platforms.

We’re integrating audio and video technologies into scalable, reliable solutions. And we want to be able to seamlessly connect these to cloud services so that we can leverage both AI technologies and the tool sets available to optimize every type of workspace: not just meeting rooms, but lecture halls, work-from-home scenarios, et cetera.

The other area we really focus on, in terms of reliability and quality, comes from our DNA in the pro audio world, and that’s all around wireless audio technologies. We’re developing our next-generation wireless systems, and these are going to offer even greater reliability and range.
They really become ideal for everything from large-scale events to personal home use and everything across that spectrum. And I think all of that, in partnership with partners like Zoom, will help facilitate the modern workspace.

Megan: Absolutely. So much exciting innovation clearly going on behind the scenes. Thank you both so much.

That was Sam Sabet, chief technology officer at Shure, and Brendan Ittelson, chief ecosystem officer at Zoom, whom I spoke with from Brighton in England.

That’s it for this episode of Business Lab. I’m your host, Megan Tatum. I’m a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. And if you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review, and this episode was produced by Giro Studios. Thanks for listening.

In partnership withShure

When work went remote, the sound of business changed. What began as a scramble to make home offices functional has evolved into a revolution in how people hear and are heard. From education to enterprises, companies across industries have reimagined what clear, reliable communication can mean in a hybrid world. For major audio and communications enterprises like Shure and Zoom, that transformation has been powered by artificial intelligence, new acoustic technologies, and a shared mission: making connection effortless. 

Necessity during the pandemic accelerated years of innovation in months.  

“Audio and video just working is a baseline for collaboration,” says chief ecosystem officer at Zoom, Brendan Ittelson. “That expectation has shifted from connecting people to enhancing productivity and creativity across the entire ecosystem.”  

Audio is a foundation for trust, understanding, and collaboration. Poor sound quality can distort meaning and fatigue listeners, while crisp audio and intelligent processing can make digital interactions feel nearly as natural as in-person exchanges. 

“If you think about the fundamental need here,” adds chief technology officer at Shure, Sam Sabet, “It’s the ability to amplify the audio and the information that’s really needed, and diminish the unwanted sounds and audio so that we can enhance that experience and make it seamless for people to communicate.”  

For both Ittelson and Sabet, AI now sits at the center of this progress. For Shure, machine learning powers real-time noise suppression, adaptive beamforming, and spatial audio that tunes itself to a room’s acoustics. For Zoom, AI underpins every layer of its platform, from dynamic noise reduction to automated meeting summaries and intelligent assistants that anticipate user needs. These tools are transforming communication from reactive to proactive, enabling systems that understand intent, context, and emotion. 

“Even if you’re not working from home and coming into the office, the types of spaces and environments you try to collaborate in today are constantly changing because our needs are constantly changing,” says Sabet. “Having software and algorithms that adapt seamlessly and self-optimize based on the acoustics of the room, based on the different layouts of the spaces where people collaborate in is instrumental.” 

The future, they suggest, is one where technology fades into the background. As audio devices and AI companions learn to self-optimize, users won’t think about microphones or meeting links. Instead, they’ll simply connect. Both companies are now exploring agentic AI systems and advanced wireless solutions that promise to make collaboration seamless across spaces, whether in classrooms, conference rooms, or virtual environments yet to come. 

“It’s about helping people focus on strategy and creativity instead of administrative busy work,” says Ittelson. 

This episode of Business Lab is produced in partnership with Shure. 

Full Transcript 

Megan Tatum: From MIT Technology Review, I’m Megan Tatum and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.  

This episode is produced in partnership with Shure.  

Now as the pandemic ushered in the cultural shift that led to our increasingly virtual world, it also sparked a flurry of innovation in the audio and video industries to keep employees and customers connected and businesses running. Today we’re going to talk about the AI technologies behind those innovations, the impact on audio innovation, and the continuing emerging opportunities for further advances in audio capabilities.  

Two words for you: elevated audio.  

My guests today are Sam Sabet, chief technology officer at Shure, and Brendan Ittelson, chief ecosystem officer at Zoom.  

Welcome Sam, welcome Brendan. 

Sam Sabet: Thank you, Megan. It’s a pleasure to be here and I’m looking forward to this conversation with both you and Brendan. It should be a very exciting conversation. 

Brendan Ittelson: Thank you so much for having me today. I’m looking forward to the conversation and all the topics we have to dive into on this area. 

Megan: Fantastic. Lovely to have you both here. And Sam, just to set some context, I wonder if we could start with the pandemic and the innovation that really was born out of necessity. I mean, when it became clear that we were all going to be virtual for the foreseeable future, I wonder what was the first technological mission for Shure? 

Sam: Yeah, very good question. The pandemic really accelerated a lot of innovation around virtual communications and fundamentally how we perform our everyday jobs remotely. One of our first technological mission when the pandemic happened and everybody ended up going home and performing their functions remotely was to make sure that people could continue to communicate effectively, whether that’s for business meetings, virtual events, or educational purposes. We focused on collaboration and enhancing collaboration tools. And ideally what we were aiming to do, or we focused on, was to basically improve the ease of use and configuration of audio tool sets. 

Because unlike the office environment where it might be a lot more controlled, people are working from non-traditional areas like home offices or other makeshift solutions, we needed to make sure that people could still get pristine audio and that studio level audio even in uncontrolled environments that are not really made for that. We expedited development in our software solutions. We created tool sets that allowed for ease of deployment and remote configuration and management so we could enable people to continue doing the things they needed to do without having to worry about the underlying technology. 

Megan: And Brendan, during that time, it seemed everyone became a Zoom user of some sort. I mean, what was the first mission at Zoom when virtual connection became this necessity for everyone? 

Brendan: Well, our mission fundamentally didn’t change. It’s always been about delivering frictionless communications. What shifted was the urgency and the magnitude of what we were doing. Our focus shifted on how we do this reliably, securely, and to scale to ensure these millions of new users could connect instantly without friction. We really shifted our thinking of being just a business continuity tool to becoming a lifeline for so many individuals and industries. The stories that we heard across education, healthcare, and just general human connection, the number of those moments that matter to people that we were able to help facilitate just became so important. We really focused on how can we be there and make it frictionless so folks can focus on that human connection. And that accelerated our thinking in terms of innovation and reinforced the thought that we need to focus on the simplicity, accessibility, and trust in communication technology so that people could focus on that connection and not the technology that makes it possible. 

Megan: That’s so true. It did really just become an absolute lifeline for people, didn’t it? And before we dive into the technologies beyond these emerging capabilities, I wonder if we could first talk about just the importance of clear audio. I mean, Sam, as much as we all worry over how we look on Zoom, is how we sound perhaps as or even more impactful? 

Sam: Yeah, you're absolutely correct. I mean, clear audio is absolutely critical for effective communications. Video quality is very important, absolutely, but poor audio can really hinder understanding and engagement. As a matter of fact, there are studies and research from institutions such as Yale University showing that poor audio can make understanding more challenging and even affect retention of information. Especially in an educational environment, where there's a lot of background noise and very different types of spaces like auditoriums and lecture halls, it really becomes a high priority that you have great audio quality. And during the pandemic, as you said, and as Brendan rightly said, it became one of our highest priorities to focus on technologies like beamforming mics and ways to focus on the speaker's voice and minimize that unwanted background noise, so that we could ensure that the communication was efficient and well understood, and that it removed the distraction so people could actually communicate and retain the information that was being shared. 

Megan: It is incredible just how impactful audio can be, isn't it? Brendan, as you said, remote and hybrid collaboration is part of Zoom's DNA. What observations can you share about how users have grown along with the technological advancements, and maybe how their expectations have grown as well? 

Brendan: Definitely. I mean, users now expect seamless and intelligent experiences. Audio and video just working is a baseline for collaboration. That expectation has shifted from connecting people to enhancing productivity and creativity across the entire ecosystem. When we look at it, we're really looking at these trends in terms of how people want to work better when they're at home. For example, AI-powered tools like Smart Summaries, translation, and noise suppression help people stay productive and connected no matter where they're working. But then this also comes into play at the office. We're starting to see folks dive into our technology, like Intelligent Director and Smart Name Tags, that creates meeting equity even when they're in a conference room. 

So, the remote experience and the room experience all are similar and create that same ability to be seen, heard, and contribute. And we’re now diving further into this that it’s beyond just meetings. Zoom is really transforming into an AI-first work platform that’s focused on human connection. And so that goes beyond the meetings into things like Chat, Zoom Docs, Zoom Events and Webinars, the Zoom Contact Center and more. And all of this being brought together using our AI Companion at its core to help connect all of those different points of connection for individuals. 

Megan: I mean, so Brendan, we know it wasn’t only workplaces that were affected by the pandemic, it was also the education sector that had to undergo a huge change. I wondered if you could talk a little bit about how Zoom has operated in that higher education sphere as well. 

Brendan: Definitely. Education has always been a focus for Zoom and an area that we’ve believed in. Because education and learning is something as a company we value and so we have invested in that sector. And personally being the son of academics, it is always an area that I find fascinating. We continue to invest in terms of how do we make the classroom a stronger space? And especially now that the classroom has changed, where it can be in person, it can be virtual, it can be a mix. And using Zoom and its tools, we’re able to help bridge all those different scenarios to make learning accessible to students no matter their means. 

That’s what truly excites us, is being able to have that technology that allows people to pursue their desires, their interests, and really up-level their pursuits and inspire more. We’re constantly investing in how to allow those messages to get out and to integrate in the flow of communication and collaboration that higher education uses, whether that’s being integrated into the classroom, into learning management systems, to make that a seamless flow so that students and their educators can just collaborate seamlessly. And also that we can support all the infrastructure and administration that helps make that possible. 

Megan: Absolutely. Such an important thing. And Sam, Shure as well, could you talk to us a bit about how you worked in that kind of education space as well from an audio point of view? 

Sam: Absolutely. Actually, this is a topic that’s near and dear to my heart because I’m actually an adjunct professor in my free time. 

Megan: Oh, wow. Very impressive. 

Sam: So I know firsthand the challenges of trying to deliver this sort of hybrid lecture, if you will. Shure has been particularly well suited for this environment, and we've been focused on it and investing in technologies there for decades. If you think about how a lecture hall is structured, it's a little different than just having a meeting around the conference table. Shure has focused on creating products that combine a presenter scenario with a meeting space plus the far end, where remote users or students can hear intelligibly what's happening in the lecture hall but can also participate. 

Between our products like the Ceiling Mic Arrays and our wireless microphones that are purpose-built for presenters and educators, like our MXW neXt product line, we've created technologies that allow those two previously separate worlds to integrate. And then layering on integration with Zoom and other products that allow for that collaboration has been very instrumental. And again, being a user and delivering those lectures, I can see a night-and-day difference in how much more effective my lectures are today from where they were five or six years ago. And that's all made possible by technologies that are purpose-built for these scenarios and by integrating with these powerful tools that make the job so much more seamless. 

Megan: Absolutely fascinating that you got to put the technology to use yourself as well to check that it was all working well. And you mentioned AI there, of course. I mean, Sam, what AI technologies have had the most significant impact on recent audio advancements too? 

Sam: Yeah. Absolutely. If you think about the fundamental need here, it's the ability to amplify the audio and the information that's really needed and diminish the unwanted sounds and audio, so that we can enhance that experience and make it seamless for people to communicate. With our innovations at Shure, we've leveraged cutting-edge technologies to both enhance communication effectiveness and align seamlessly with evolving features in unified communications, like the ones that Brendan just mentioned in the Zoom platform. 

We partner with industry leaders like Zoom to ensure that we're providing the ability to focus on that needed audio and eliminate all the background distractions. AI has transformed audio technology with things like machine learning algorithms that enable us to do more real-time audio processing and significantly enhance things like noise reduction and speech isolation. Just to give you a simple example, our IntelliMix Room audio processing software, which we've released as part of a complete room solution, uses AI to optimize sound in different environments. 

And really that’s one of the fundamental changes in this period, whether that’s pandemic or post-pandemic, is that the key is really flexibility and being able to adapt to changing work environments. Even if you’re not working from home and coming into the office, the types of spaces and environments you try to collaborate in today are constantly changing because our needs are constantly changing. And so having software and algorithms that adapt seamlessly and are able to self-optimize based on the acoustics of the room, based on the different layouts of the spaces where people collaborate in is instrumental.  

And then last but not least, AI has transformed the way audio and video integrate. For example, we utilize voice recognition systems that integrate with intelligent cameras to enable voice-tracking technology, so cameras can not only identify who's speaking but also ensure you can hear and see people clearly. And that in general just enhances the overall communication experience. 

Megan: Wow. It’s just so much innovation in quite a short space of time really. I mean, Brendan, you mentioned AI a little bit there beforehand, but I wonder what other AI technologies have had the biggest impact as Zoom builds out its own emerging capabilities? 

Brendan: Definitely. And I couldn't agree more with Sam that AI has made such a big shift, and it's really across the spectrum. When I think about it, there are almost three tiers when you look at the stack. You start off at the raw audio, where AI is doing things like noise suppression, echo cancellation, and voice enhancement. All of that makes this amazing audio signal that can then go into the next layer, which is speech AI and natural language processing. That starts to open up items such as real-time transcription, translation, and searchable content, making the communication not just what's heard but more accessible and inclusive by providing that content in a format that is best for each individual. 

And then you take those two layers and put the generative and agentic AI on top of that, that can start surfacing insights, summarize the conversation, and even take actions on someone’s behalf. It really starts to change the way that people work and how they have access and allows them to connect. I think it is a huge shift and I’m very excited by how those three levels start to interact to really enable people to do more and to connect thanks to AI. 

Megan: Yeah. Absolutely. So much rich information that can come out from a single call now because of those sorts of tools. And following on from that, Brendan, I mean, you mentioned before the Zoom AI Companion. I wondered if you could talk a bit about what were your top priorities when building that product to ensure it was truly useful for your customers? 

Brendan: Definitely. When we developed AI Companion, we had two priority focus areas from day one, trust and security, and then accuracy and relevance. On the trust side, it was a non-negotiable that customer data wouldn’t be used to train our models. People need to know that their conversations and content are private and secure. 

Megan: Of course. 

Brendan: And then with accuracy, we needed to ensure AI outputs weren't generic but grounded in the actual context of a meeting, a chat, or a product. But the real story here, when I think about AI Companion, is the customer value that it delivers. AI Companion helps people save time with meeting recaps, task generation, and proactive prep for the next session. It reduces friction in hybrid work, whether you're in a meeting room, a Zoom Room, or collaborating across tools like Microsoft or Google. And it enables more equitable participation by surfacing the right context for everyone, no matter where and how they're working. 

All this leads to a result that's practical, trustworthy, and embedded where work happens. And it's not just another tool to manage; it's there in someone's flow of work to help them along the way. 

Megan: Yeah. That trust piece is just so important, isn’t it, today? And Sam, as much as AI has impacted audio innovation, audio has also had an impact on AI capabilities. I wondered if you could talk a little bit about audio as a data input and the advancements technologies like large language models, LLMs, are enabling. 

Sam: Absolutely. Audio is really a rich data source that's added a new dimension to AI capabilities. If you think about speech recognition or natural language processing, they've had significant advances due to the audio data that's provided to them. And to Brendan's point about trust and accuracy, I like to think of the products that Shure enables customers with as essentially the eyes and ears in the room for leading AI companions, just like the Zoom AI Companion. You really need that pristine audio input to be able to trust the accuracy of what the AI generates. These AI companions have been very instrumental in the way we do business every day. Between transcription, speaker attribution, and the ability to add action items within a meeting and track what's happening in our interactions, all of that really has to rely on accurate, pristine audio input into the AI. I feel that further improves the trust that our end users have in the results of AI, so they can leverage it more. 

If you think about how audio inputs enhance interactive AI systems, they enable more natural and intuitive interactions with AI. And it really allows for that seamless integration and the ability for users to use it without having to worry about, is the room set up correctly? Is the audio level proper? And when we talk about agentic AI, we're working on future developments where systems can self-heal, or detect issues in the environment so that they can autocorrect and adapt across all these different environments, further enabling the AI to do a much more effective job, if you will. 

Megan: Sam, you touched on future developments there. I wonder if we could close our conversation today with a bit of a future forward look, if we could. Brendan, can you share innovations that Zoom is working on now and what are you most excited to see come to fruition? 

Brendan: Well, your timing for this question is absolutely perfect because we’ve just wrapped up Zoomtopia 2025. 

Megan: Oh, wow. 

Brendan: And this is where we discussed a lot of the new AI innovations that we have coming to Zoom. Starting off, there's AI Companion 3.0. We've launched this next generation of agentic AI capabilities in Zoom Workplace. And with 3.0, when it releases, it isn't just about transcribing; it's really turned into a platform that helps you with follow-up tasks, preps you for your next conversation, and even proactively suggests how to free up your time. For example, AI Companion can help you schedule meetings intelligently across time zones, suggest which meetings you can skip and still stay informed, and even prepare you with context and insights before you walk into the conversation. It's about helping people focus on strategy and creativity instead of administrative busywork. And for hybrid work specifically, we introduced Zoomie Group Assistant, which will be a big leap for hybrid collaboration. 

Acting as an assistant for a group chat and meetings, you can simply ask, “@Zoomie, what’s the latest update on the project?” Or “@Zoomie, what are the team’s action items?” And then get instant answers. Or because we’re talking about audio here, you can go into a conference room and say, “Hey, Zoomie,” and get help with things like checking into a room, adjusting lights, temperature, or even sharing your screen. And while all these are built-in features, we’re also expanding the platform to allow custom AI agents through our AI Studio, so organizations can bring their own agents or integrate with third-party ones.  

Zoom has always believed in an open platform philosophy, and that is continuing. Folks using AI Companion 3.0 will be able to use agents across platforms to work with the workflows they have across all the different SaaS vendors in their environment, whether that's Google, Microsoft, ServiceNow, Cisco, or so many other tools. 

Megan: Fantastic. It certainly sounds like a tool I could use in my work, so I look forward to hearing more about that. And Sam, we’ve touched on there are so many exciting things happening in audio too. What are you working on at Shure? And what are you most excited to see come to fruition? 

Sam: At Shure, our engineering teams are really working on a range of exciting projects, but in particular we're developing new collaboration solutions that are integral for IT teams and end users. And these integrate, obviously, with the leading UC platforms. 

We're integrating audio and video technologies into scalable, reliable solutions. And we want to seamlessly connect these to cloud services so that we can leverage both AI technologies and the tool sets available to optimize essentially every type of workspace. Not just meeting rooms, but lecture halls, work-from-home scenarios, et cetera. 

The other area we really focus on, in terms of reliability and quality, comes from our DNA in the pro audio world, and that's all around wireless audio technologies. We're developing our next-generation wireless systems, and these are going to offer even greater reliability and range. They're ideal for everything from large-scale events to personal home use and everything across that spectrum. And I think all of that, in partnership with partners like Zoom, will help facilitate the modern workspace. 

Megan: Absolutely. So much exciting innovation clearly going on behind the scenes. Thank you both so much.  

That was Sam Sabet, chief technology officer at Shure, and Brendan Ittelson, chief ecosystem officer at Zoom, whom I spoke with from Brighton in England.  

That's it for this episode of Business Lab. I'm your host, Megan Tatum. I'm a contributing editor at Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com. 

This show is available wherever you get your podcasts. And if you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review and this episode was produced by Giro Studios. Thanks for listening. 

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

F5 brings new visibility and AI controls to Big-IP, NGINX

The demand came from a gap that general-purpose observability tools were not filling. Customers running tools like Datadog and New Relic told F5 they needed something different.  F5 Insight pulls from technology acquired through the Threat Stack and Fletch acquisitions and runs on F5’s AI data fabric. It includes an

Read More »

Tech layoffs surpass 45,000 in early 2026

Layoffs spread across tech sectors Beyond Amazon, Meta, and Block, several technology vendors and platform companies have also announced sizable layoffs this year. According to the RationalFX report: Semiconductor and electronics company ams OSRAM has announced 2,000 layoffs. Telecommunications vendor Ericsson has announced 1,900 job cuts. Semiconductor equipment manufacturer ASML

Read More »

Eridu exits stealth with $200M to rebuild AI networking

That gap is not static. Promode Nedungadi, Chief Technology Officer, said the architectural and algorithmic trends driving AI are making the network problem harder, not easier. Techniques like mixture-of-experts models and the disaggregation of inference into separate prefill and decode stages all require more data movement. “Every one of those

Read More »

United States to Release 172 Million Barrels of Oil From the Strategic Petroleum Reserve

WASHINGTON—U.S. Secretary of Energy Chris Wright released the following statement regarding the International Energy Agency (IEA) and the U.S. Strategic Petroleum Reserve (SPR): “Earlier today, 32 member nations of the International Energy Agency unanimously agreed to President Trump’s request to lower energy prices with a coordinated release of 400 million barrels of oil and refined products from their respective reserves.  “As part of this effort, President Trump authorized the Department of Energy to release 172 million barrels from the Strategic Petroleum Reserve, beginning next week. This will take approximately 120 days to deliver based on planned discharge rates.  “President Trump promised to protect America’s energy security by managing the Strategic Petroleum Reserve responsibly and this action demonstrates his commitment to that promise. Unlike the previous administration, which left America’s oil reserves drained and damaged, the United States has arranged to more than replace these strategic reserves with approximately 200 million barrels within the next year—20% more barrels than will be drawn down—and at no cost to the taxpayer.  “For 47 years, Iran and its terrorist proxies have been intent on killing Americans. They have manipulated and threatened the energy security of America and its allies. Under President Trump, those days are coming to an end.  “Rest assured, America’s energy security is as strong as ever.”                                                                                         ###

Read More »

Occidental Petroleum, 1PointFive STRATOS DAC plant nears startup in Texas Permian basin

Occidental Petroleum Corp. and its subsidiary 1PointFive expect Phase 1 of the STRATOS direct air capture (DAC) plant in Texas’ Permian basin to come online in this year’s second quarter. In a post to LinkedIn, 1PointFive said Phase 1 “is in the final stage of startup” and that Phase 2, which incorporates learnings from research and development and Phase 1 construction activities, “will also begin commissioning in Q2, with operational ramp-up continuing through the rest of the year.” Once fully operational, STRATOS is designed to capture up to 500,000 tonnes/year (tpy) of CO2. As part of the US Environmental Protection Agency (EPA) Class VI permitting process and approval, it was reported that STRATOS is expected to include three wells to store about 722,000 tpy of CO2 in saline formations at a depth of about 4,400 ft. The company said a few activities before start-up remain, including ramping up remaining pellet reactors, completing calciner final commissioning in parallel, and beginning CO2 injection. Start-up milestones achieved include: Completed wet commissioning with water circulation. Received Class VI permits to sequester CO2. Ran CO2 compression system at design pressure. Added potassium hydroxide (KOH) to capture CO2 from the atmosphere. Building pellet inventory. Burners tested on calciner.  

Read More »

Brava Energia weighs Phase 3 at Atlanta to extend production plateau

Just 2 months after bringing its flagship Atlanta field onstream with the new FPSO Atlanta, Brazil’s independent operator Brava Energia SA is evaluating a potential third development phase that could add roughly 25 million bbl of reserves and help sustain peak production longer than originally planned. The Phase 3 project, still at an early technical and economic evaluation stage, focuses on the Atlanta Nordeste area; a separate, shallower reservoir discovered in 2006 by Shell’s 9-SHEL-19D-RJS well. According to André Fagundes, vice-president of research (Brazil) at Welligence Energy Analytics, Phase 2 has four wells still to be developed: two expected in 2027 and two in 2029. Phase 3 would involve drilling two additional wells in 2031, bringing total development to 12 producing wells. Until recently, full-field development was understood to comprise 10 wells, but Brava has since updated guidance to reflect a 12-well development concept. Atlanta field upside The primary objective is clear. “We believe its main objective is to extend the production plateau,” Fagundes said. Welligence estimates incremental recovery could reach 25 MMbbl, increasing the field’s overall recovery factor by roughly 1.5%. Lying outside Atlanta’s main Cretaceous reservoir, Atlanta Nordeste represents a genuine upside opportunity, Fagundes explained. The field benefits from strong natural aquifer support, and no water or gas injection is anticipated. Water-handling constraints that affected early production using the Petrojarl I—limited to 11,500 b/d of water treatment—are no longer a bottleneck. FPSO Atlanta can process up to 140,000 b/d of water. Reservoir performance to date has been solid, albeit with difficulties. Recurrent electric submersible pump (ESP) failures and processing limits on the previous FPSO complicated full validation of original reservoir models. With the new 50,000-b/d FPSO in operation since late 2024, reservoir deliverability has become the main constraint. 
Phase 3 wells would also use ESPs and require additional subsea

Read More »

California Resources eyes ‘measured’ capex ramp on way to 12% production growth thanks to Berry buy

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: var(–color-primary-main); } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style { font-size: 0.75rem; opacity: .6; } #onetrust-pc-sdk [id*=btn-handler], #onetrust-pc-sdk [class*=btn-handler] { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-policy a, #onetrust-pc-sdk a, #ot-pc-content a { color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-pc-sdk .ot-active-menu { border-color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-accept-btn-handler, #onetrust-banner-sdk #onetrust-reject-all-handler, #onetrust-consent-sdk #onetrust-pc-btn-handler.cookie-setting-link { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-consent-sdk .onetrust-pc-btn-handler { color: #c19a06 !important; border-color: #c19a06 !important; } The leaders of California Resources Corp., Long Beach, plan to have the company’s total production average 152,000-157,000 boe/d in 2026, with each quarter expected to be in that range. That output would equate to an increase of more than 12% from the operator’s 137,000 boe/d during fourth-quarter 2025, due mostly to the mid-December acquisition of Berry Corp. Fourth-quarter results folded in 14 days of Berry production and included 109,000 b/d of oil, with the company’s assets in the San Joaquin and Los Angeles basins accounting for 99,000 b/d of that total. 
The company dilled 31 new wells during the quarter and 76 in all of 2025—all in the San Joaquin—but that number will grow significantly to about 260 this year as state officials have resumed issuing permits following the passage last fall of a bill focused on Kern County production. Speaking to analysts after CRC reported fourth-quarter net income of $12 million on $924 million in revenues, president and chief executive officer Francisco Leon and chief financial officer Clio Crespy said the goal is to manage 2026 output decline to roughly 0.5% per quarter while operating four rigs and

Read More »

Petro-Victory Energy spuds São João well in Brazil

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: var(–color-primary-main); } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style { font-size: 0.75rem; opacity: .6; } #onetrust-pc-sdk [id*=btn-handler], #onetrust-pc-sdk [class*=btn-handler] { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-policy a, #onetrust-pc-sdk a, #ot-pc-content a { color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-pc-sdk .ot-active-menu { border-color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-accept-btn-handler, #onetrust-banner-sdk #onetrust-reject-all-handler, #onetrust-consent-sdk #onetrust-pc-btn-handler.cookie-setting-link { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-consent-sdk .onetrust-pc-btn-handler { color: #c19a06 !important; border-color: #c19a06 !important; } Petro-Victory Energy Corp. has spudded the SJ‑12 well at São João field in Barreirinhas basin, on the Brazilian equatorial margin, Maranhão.  Drilling and testing SJ‑12 is aimed at proving enough gas can be produced to sell locally. The well forms part of the single non‑associated gas well commitment under a memorandum of understanding signed in 2024 with Enava. São João contains 50.1 bcf (1.4 billion cu m) non‑associated gas resources. Petro‑Victory 100% owns and operates São João field.

Read More »

Opinion Poll: Strait of Hormuz disruptions

@import url(‘https://fonts.googleapis.com/css2?family=Inter:[email protected]&display=swap’); a { color: var(–color-primary-main); } .ebm-page__main h1, .ebm-page__main h2, .ebm-page__main h3, .ebm-page__main h4, .ebm-page__main h5, .ebm-page__main h6 { font-family: Inter; } body { line-height: 150%; letter-spacing: 0.025em; font-family: Inter; } button, .ebm-button-wrapper { font-family: Inter; } .label-style { text-transform: uppercase; color: var(–color-grey); font-weight: 600; font-size: 0.75rem; } .caption-style { font-size: 0.75rem; opacity: .6; } #onetrust-pc-sdk [id*=btn-handler], #onetrust-pc-sdk [class*=btn-handler] { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-policy a, #onetrust-pc-sdk a, #ot-pc-content a { color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-pc-sdk .ot-active-menu { border-color: #c19a06 !important; } #onetrust-consent-sdk #onetrust-accept-btn-handler, #onetrust-banner-sdk #onetrust-reject-all-handler, #onetrust-consent-sdk #onetrust-pc-btn-handler.cookie-setting-link { background-color: #c19a06 !important; border-color: #c19a06 !important; } #onetrust-consent-sdk .onetrust-pc-btn-handler { color: #c19a06 !important; border-color: #c19a06 !important; } 388041610 © Ahmad Efendi | Dreamstime.com US, Israel, and Iran flags <!–> ]–> <!–> –> Oil & Gas Journal wants to hear your thoughts about how the collaborative strike on Iran by the US and Israel and disruptions through the Strait of Hormuz may impact oil prices.  

Read More »

Datalec targets rapid infrastructure deployment with new modular data centers

“We are engineering the data center with a new lens bringing pre-engineered system designs that are flexible and adaptable that enables a tailored solution for clients,” said John Lever, director of modular solutions at Datalec. The systems are flexible enough that these solutions cater for all types of data center, from standard server technology to AI and high-density compute. Datalec also provides “bolt-on” solutions, including a ‘digital wrapper’ including digital twinning and lifecycle and global support, Lever says. Another way Datalec says it differentiates from competing modular designs is a larger share of work is done offsite in a controlled manufacturing environment, which cuts onsite construction time, improves safety and limits disruption to live facilities, Lever says. The company competes with other modular data center vendors including Schneider Electric, Vertiv, Flex many others. DPI’s says its services are aimed at colocation providers, hyperscale and AI infrastructure teams, and large enterprises that need to add capacity quickly, safely and cost effectively across multiple regions.

Read More »

Study finds significant savings from direct current power for AI workloads

The result is a 50% to 80% reduction in copper usage, due to fewer conductors and less parallel cabling, and an 8% to 12% reduction in annual energy-related OpEx through lower conversion and distribution losses. By reducing conductor count, cabling, and redundant power components, 800VDC enables meaningful savings at both build-out and operational stages. AI-first facilities can see $4 million to $8 million in CapEx savings per 10 MW build by reducing upstream AC equipment. “For a one-gigawatt data center, you’re saving a couple million pounds of copper wire,” he said. Burke says an all-DC data center is best done as a whole new facility rather than by retrofitting an old one. “[DC] is going to be in a lot of greenfield data centers that are going to be built, and data centers that are going to go to higher compute power are also going to DC,” he said. He did, however, recommend all-DC retrofits for existing data centers that plan to employ high-power computing with GPUs. Enteligent’s unnamed and as-yet-unreleased product is a converter that steps 800 volts down to 50 volts for the computing servers. The company will provide a new power-supply shelf that converts 800 VDC to 50 VDC much more efficiently than current power supplies. Burke said the company is running NDA-level testing and pilot programs with the product now, and will make a formal announcement within the next few weeks. A number of players in the DC arena are focusing on different parts of the power supply market, including Vertiv, Rutherford, Siemens, Eaton, and many more.
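The savings ranges quoted above can be turned into a rough back-of-the-envelope estimate. This is a minimal sketch, not a figure from the study: the CapEx range ($4M–$8M per 10 MW) and OpEx reduction (8%–12%) come from the article, while the $1M-per-MW annual energy-OpEx baseline is an illustrative assumption.

```python
# Back-of-the-envelope estimate of 800VDC savings. CapEx and OpEx
# percentage ranges are taken from the article; the per-MW annual
# energy-OpEx baseline is an assumed placeholder value.

def estimate_800vdc_savings(it_load_mw, annual_energy_opex_per_mw=1_000_000):
    # CapEx: $4M to $8M saved per 10 MW of build-out
    capex_low = 4_000_000 * (it_load_mw / 10)
    capex_high = 8_000_000 * (it_load_mw / 10)
    # OpEx: 8% to 12% reduction in annual energy-related operating cost
    opex_base = it_load_mw * annual_energy_opex_per_mw
    opex_low = 0.08 * opex_base
    opex_high = 0.12 * opex_base
    return (capex_low, capex_high), (opex_low, opex_high)

capex, opex = estimate_800vdc_savings(100)  # a hypothetical 100 MW AI facility
print(f"CapEx savings: ${capex[0]/1e6:.0f}M-${capex[1]/1e6:.0f}M")
print(f"Annual OpEx savings: ${opex[0]/1e6:.0f}M-${opex[1]/1e6:.0f}M")
```

At that assumed baseline, a 100 MW build would land in the $40M–$80M CapEx and $8M–$12M annual OpEx savings range; real numbers depend heavily on local power costs and facility design.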

Read More »

Cisco blends Splunk analytics, security with core data center management

With the integration, data center teams can gather and act on events, alarms, health scores, and inventory through open APIs, Cisco stated. It also offers pre-built and customizable dashboards for inventory, health, fabric state, anomalies, and advisories, as well as correlated telemetry across fabrics and technology tiers for actionable insights, according to Cisco. “This isn’t just another connector or API call. This is an embedded, architectural integration designed to transform how you monitor, troubleshoot, and secure your data center fabric. By bringing the power of Splunk directly into the Data Center Networking environment, we are enabling teams to solve complex problems faster, maintain strict data sovereignty, and dramatically reduce operational costs,” wrote Usha Andra, senior product marketing leader, and Anant Shah, senior product manager, both with Cisco Data Center Networking, in a blog post about the integration. “Traditionally, network monitoring involves a trade-off. You either send massive amounts of raw logs to a centralized data lake, incurring high ingress and storage costs. Or you rely on sampled data that misses critical microbursts and anomalies,” Andra and Shah wrote. “Native Splunk integration changes the paradigm by running Splunk capabilities directly within the Cisco Nexus Dashboard. This allows for the streaming of high-fidelity telemetry, including anomalies, advisories, and audit logs, directly to Splunk analytics.”

Read More »

Execution, Power, and Public Trust: Rich Miller on 2026’s Data Center Reality and Why He Built Data Center Richness

DCF founder Rich Miller has spent much of his career explaining how the data center industry works. Now, with his latest venture, Data Center Richness, he’s also examining how the industry learns. That thread provided the opening for the latest episode of The DCF Show Podcast, where Miller joined Data Center Frontier Editor in Chief Matt Vincent and Senior Editor David Chernicoff for a wide-ranging discussion that ultimately landed on a simple conclusion: after two years of unprecedented AI-driven announcements, 2026 will be the year reality asserts itself. Projects will either get built, or they won’t. Power will either materialize, or it won’t. Communities will either accept data center expansion – or they’ll stop it. In other words, the industry is entering its execution phase.

Why Data Center Richness Matters Now

Miller launched Data Center Richness as both a podcast and a Substack publication, an effort to experiment with formats and better understand how professionals now consume industry information. Podcasts have become a primary way many practitioners follow the business, while YouTube’s discovery advantages increasingly make video versions essential. At the same time, Miller remains committed to written analysis, using Substack as a venue for deeper dives and format experimentation. One example is his weekly newsletter distilling key industry developments into just a handful of essential links rather than overwhelming readers with volume. The approach reflects a broader recognition: the pace of change has accelerated so much that clarity matters more than quantity. The topic of how people learn about data centers isn’t separate from the industry’s trajectory; it’s becoming part of it. Public perception, regulatory scrutiny, and investor expectations are now shaped by how stories are told as much as by how facilities are built. That context sets the stage for the conversation’s core theme.

Execution Defines 2026

After

Read More »

Nomads at the Frontier: PTC 2026 Signals the Digital Infrastructure Industry’s Moment of Execution

Each January, the Pacific Telecommunications Council conference serves as a barometer for where digital infrastructure is headed next. And according to Nomad Futurist founders Nabeel Mahmood and Phillip Koblence, the message from PTC 2026 was unmistakable: The industry has moved beyond hype. The hard work has begun. In the latest episode of The DCF Show Podcast, part of our ongoing ‘Nomads at the Frontier’ series, Mahmood and Koblence joined Data Center Frontier to unpack the tone shift emerging across the AI and data center ecosystem. Attendance continues to grow year over year. Conversations remain energetic. But the character of those conversations has changed. As Mahmood put it: “The hype that the market started to see is actually resulting a bit more into actions now, and those conversations are resulting into some good progress.” The difference from prior years? Less speculation. More execution.

From Data Center Cowboys to Real Deployments

Koblence offered perhaps the sharpest contrast between PTC conversations in 2024 and those in 2026. Two years ago, many projects felt speculative. Today, developers are arriving with secured power, customers, and construction underway. “If 2024’s PTC was data center cowboys — sites that in someone’s mind could be a data center — this year was: show me the money, show me the power, give me accurate timelines.” In other words, the market is no longer rewarding hypothetical capacity. It is demanding delivered capacity. Operators now speak in terms of deployments already underway, not aspirational campuses still waiting on permits and power commitments. And behind nearly every conversation sits the same gating factor. Power.

Power Has Become the Industry’s Defining Constraint

Whether discussions centered on AI factories, investment capital, or campus expansion, Mahmood and Koblence noted that every conversation eventually returned to energy availability. “All of those questions are power,” Koblence said.

Read More »

Land and Expand: Early 2026 Megaprojects Reflect a Power-First Ethos

Vantage — Lighthouse (Port Washington, Wisconsin)

Although the on-site ceremonial groundbreaking occurred in 2025, Vantage Data Centers’ Lighthouse campus in Port Washington, Wisconsin, remained one of the most closely watched AI infrastructure developments entering 2026, with updated local materials posted February 19 reinforcing the project’s scale and timeline. Announced in October 2025 in partnership with OpenAI and Oracle, Lighthouse is positioned as the Midwest anchor site within the companies’ broader Stargate expansion, which targets up to 4.5 gigawatts of additional AI capacity globally. Current plans call for four hyperscale data centers delivering nearly 902 MW of IT load on a site encompassing roughly 672 acres, with construction expected to run through 2028. From a Land and Expand perspective, the project exemplifies the new generation of AI campuses involving large-scale land banking paired with phased delivery designed to stay ahead of hyperscale demand curves. Just as notable is the project’s power and community framework. Vantage is working with WEC Energy Group’s We Energies on a dedicated rate structure under which the developer will underwrite 100% of the power infrastructure investment, a model explicitly designed to shield existing customers from rate increases. The utility partnership also includes plans to enable nearly 2 gigawatts of new zero-emission energy capacity, with approximately 70% allocated to the Lighthouse campus and the remainder supporting broader grid needs. Water and environmental positioning are also central to the project narrative. Lighthouse is designed around a closed-loop liquid cooling system intended to minimize water consumption, alongside local restoration investments aimed at achieving water positivity. Vantage has also committed to preserving significant portions of the site’s natural landscape while pursuing LEED certification for the campus.
Economically, the development is expected to generate more than 4,000 primarily union construction jobs and over 1,000 long-term operational roles, while Vantage has pledged at

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote $200 billion between them to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction, and commercial landscaping. The Moline, Illinois-based company has been in business for 187 years, yet it has become a regular among non-tech companies showing off technology at the big tech trade show in Las Vegas, and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will arrive this fall and beyond. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation. AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as the frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
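The ensemble idea hinted at above — having several models grade an output and taking the majority verdict — can be sketched in a few lines. This is a minimal illustration, not any vendor's API: the three `judge_*` functions are hypothetical stand-ins for real model calls, each applying a deliberately crude check.

```python
from collections import Counter

# Minimal sketch of the "LLM as a judge" ensemble pattern: several
# cheap judge models each grade a response, and the majority verdict
# wins. The judges here are toy stand-ins for real model calls.

def majority_verdict(response: str, judges) -> str:
    votes = Counter(judge(response) for judge in judges)
    verdict, _ = votes.most_common(1)[0]
    return verdict

# Three toy judges with different (deliberately crude) criteria.
judge_a = lambda r: "pass" if len(r) > 20 else "fail"            # long enough to be substantive
judge_b = lambda r: "pass" if "http" not in r else "fail"        # e.g. flag possibly hallucinated URLs
judge_c = lambda r: "pass" if r.strip().endswith(".") else "fail"  # complete sentence

print(majority_verdict("The capital of France is Paris.",
                       [judge_a, judge_b, judge_c]))  # prints "pass"
```

In practice each judge would be a separate LLM call with a grading prompt; using an odd number of judges avoids ties, and cheaper models make running three or more economically viable.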

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability, and safety of AI models using these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends. It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases, and controls that prompt-based testing couldn’t find.
What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »