OpenAI’s new voice AI model gpt-4o-transcribe lets you add speech to your existing text apps in seconds

OpenAI’s voice AI models have gotten the company into trouble before with actor Scarlett Johansson, but that isn’t stopping it from continuing to advance its offerings in this category.

Today, the ChatGPT maker has unveiled three all-new proprietary voice models called gpt-4o-transcribe, gpt-4o-mini-transcribe and gpt-4o-mini-tts, available initially in its application programming interface (API) for third-party software developers to build their own apps atop, as well as on a custom demo site, OpenAI.fm, that individual users can access for limited testing and fun.

Moreover, the gpt-4o-mini-tts model’s voices can be customized from several presets via text prompt to change their accents, pitch, tone and other vocal qualities, including conveying whatever emotions the user asks for. That should go a long way toward addressing concerns that OpenAI is deliberately imitating any particular person’s voice (the company previously denied that was the case with Johansson, but pulled down the ostensibly imitative voice option anyway). Now it’s up to the user to decide how they want their AI voice to sound when speaking back.

In a demo with VentureBeat delivered over video call, OpenAI technical staff member Jeff Harris showed how using text alone on the demo site, a user could get the same voice to sound like a cackling mad scientist or a zen, calm yoga teacher.
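For developers, that same steering is exposed through the API. Below is a minimal sketch of requesting customized speech with the official OpenAI Python SDK; the preset voice name and the style prompt are illustrative assumptions, not a verbatim recipe from OpenAI:

```python
# Minimal sketch: steer gpt-4o-mini-tts with a plain-text style prompt.
# Assumes the official `openai` Python package and OPENAI_API_KEY set in the env.
from openai import OpenAI

client = OpenAI()

# The `instructions` field carries the vocal-style prompt described above;
# swap it for "a cackling mad scientist" to hear the contrast.
with client.audio.speech.with_streaming_response.create(
    model="gpt-4o-mini-tts",
    voice="coral",  # one of the preset voices
    input="Welcome back. Let's begin today's session with a deep breath.",
    instructions="Speak like a zen, calm yoga teacher.",
) as response:
    response.stream_to_file("yoga_teacher.mp3")  # hypothetical output path
```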

Discovering and refining new capabilities within GPT-4o base

The models are variants of the existing GPT-4o model OpenAI launched back in May 2024, which currently powers the ChatGPT text and voice experience for many users. The company took that base model and post-trained it with additional data to make it excel at transcription and speech. The company didn’t specify when the models might come to ChatGPT.

“ChatGPT has slightly different requirements in terms of cost and performance trade-offs, so while I expect they will move to these models in time, for now, this launch is focused on API users,” Harris said.

The new transcription models are meant to supersede OpenAI’s two-year-old open-source Whisper speech-to-text model, offering lower word error rates across industry benchmarks and improved performance in noisy environments, with diverse accents, and at varying speech speeds, across more than 100 languages.

The company posted a chart on its website showing just how much lower the gpt-4o-transcribe models’ error rates are at identifying words across 33 languages, compared to Whisper — with an impressively low 2.46% in English.

“These models include noise cancellation and a semantic voice activity detector, which helps determine when a speaker has finished a thought, improving transcription accuracy,” said Harris.

Harris told VentureBeat that the new gpt-4o-transcribe model family is not designed to offer “diarization,” the capability to label and differentiate between different speakers. Instead, it is designed primarily to receive one or more voices as a single input channel and respond to all inputs with a single output voice in that interaction, however long it takes.
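In practice, calling the transcription models looks much like calling Whisper through the existing audio endpoint. A minimal sketch, assuming the official OpenAI Python SDK (the file name here is hypothetical):

```python
# Minimal sketch: transcribe a local audio file with gpt-4o-transcribe.
# Assumes the official `openai` Python package and OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

with open("support_call.wav", "rb") as audio_file:  # hypothetical file
    transcript = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",  # or "gpt-4o-mini-transcribe" for lower cost
        file=audio_file,
    )

print(transcript.text)  # the full transcription as a single string
```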

The company is also hosting a competition for the general public to find the most creative examples of using its demo voice site, OpenAI.fm, and share them online by tagging the @openAI account on X. The winner is set to receive a custom Teenage Engineering radio with an OpenAI logo, which OpenAI head of product, platform, Olivier Godement said is one of only three in the world.

An audio applications gold mine

The enhancements make the new models particularly well-suited for applications such as customer call centers, meeting note transcription, and AI-powered assistants.

Impressively, the Agents SDK the company launched last week also allows developers who have already built apps atop its text-based large language models, like the regular GPT-4o, to add fluid voice interactions with only about “nine lines of code,” according to a presenter during an OpenAI YouTube livestream announcing the new models.

For example, an e-commerce app built atop GPT-4o could now respond to turn-based user questions like “tell me about my last orders” in speech, with just seconds of code tweaking to add the new models.

“For the first time, we’re introducing streaming speech-to-text, allowing developers to continuously input audio and receive a real-time text stream, making conversations feel more natural,” Harris said.
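A rough sketch of what that streaming flow looks like with the Python SDK; the event type names follow OpenAI’s documented streaming conventions but should be treated as assumptions here:

```python
# Minimal sketch: stream transcription text as it is produced.
from openai import OpenAI

client = OpenAI()

with open("support_call.wav", "rb") as audio_file:  # hypothetical file
    stream = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",
        file=audio_file,
        stream=True,  # yields incremental events instead of one final object
    )
    for event in stream:
        # Delta events carry newly transcribed text; a final "done" event
        # carries the completed transcript.
        if event.type == "transcript.text.delta":
            print(event.delta, end="", flush=True)
```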

Still, for those devs looking for low-latency, real-time AI voice experiences, OpenAI recommends using its speech-to-speech models in the Realtime API.

Pricing and availability

The new models are available immediately via OpenAI’s API, with pricing as follows:

gpt-4o-transcribe: $6.00 per 1M audio input tokens (~$0.006 per minute)

gpt-4o-mini-transcribe: $3.00 per 1M audio input tokens (~$0.003 per minute)

gpt-4o-mini-tts: $0.60 per 1M text input tokens, $12.00 per 1M audio output tokens (~$0.015 per minute)
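Those per-minute approximations follow from the token rates if a minute of speech works out to roughly 1,000 audio tokens, an assumption used here purely for illustration. A quick back-of-the-envelope estimator:

```python
# Back-of-the-envelope cost estimate from the posted per-token rates.
# The ~per-minute figures imply roughly 1,000 audio tokens per minute of
# speech (an assumption for illustration, not an official ratio).
RATE_PER_M_AUDIO_TOKENS = {
    "gpt-4o-transcribe": 6.00,
    "gpt-4o-mini-transcribe": 3.00,
}

def transcription_cost(model: str, minutes: float,
                       tokens_per_minute: int = 1000) -> float:
    """Estimated dollars to transcribe `minutes` of audio with `model`."""
    tokens = minutes * tokens_per_minute
    return tokens / 1_000_000 * RATE_PER_M_AUDIO_TOKENS[model]

# One hour with gpt-4o-transcribe: 60,000 tokens -> about $0.36
print(f"${transcription_cost('gpt-4o-transcribe', 60):.2f}")
```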

However, they arrive at a time of fiercer-than-ever competition in the AI transcription and speech space. Dedicated speech AI firms such as ElevenLabs offer rivals like its new Scribe model, which supports diarization and boasts a similarly low (though not as low) error rate of 3.3% in English, at $0.40 per hour of input audio (about $0.0067 per minute, roughly equivalent to OpenAI’s rate).

Another startup, Hume AI, offers a new model, Octave TTS, with sentence-level and even word-level customization of pronunciation and emotional inflection, based entirely on the user’s instructions rather than any preset voices. Octave TTS pricing isn’t directly comparable, but there is a free tier offering 10 minutes of audio, and costs increase from there.

Meanwhile, more advanced audio and speech models are also coming to the open source community, including one called Orpheus 3B, which is available under a permissive Apache 2.0 license, meaning developers don’t have to pay licensing costs to run it, provided they have the right hardware or cloud servers.

Industry adoption and early results

Several companies have already integrated OpenAI’s new audio models into their platforms, reporting significant improvements in voice AI performance, according to testimonials shared by OpenAI with VentureBeat.

EliseAI, a company focused on property management automation, found that OpenAI’s text-to-speech model enabled more natural and emotionally rich interactions with tenants.

The enhanced voices made AI-powered leasing, maintenance, and tour scheduling more engaging, leading to higher tenant satisfaction and improved call resolution rates.

Decagon, which builds AI-powered voice experiences, saw a 30% improvement in transcription accuracy using OpenAI’s speech recognition model.

This increase in accuracy has allowed Decagon’s AI agents to perform more reliably in real-world scenarios, even in noisy environments. The integration process was quick, with Decagon incorporating the new model into its system within a day.

Not all reactions to OpenAI’s latest release have been warm. Ben Hylak (@benhylak), co-founder of AI app analytics software Dawn and a former Apple human interface designer, posted on X that while the models seem promising, the announcement “feels like a retreat from real-time voice,” suggesting a shift away from OpenAI’s previous focus on low-latency conversational AI via ChatGPT.

Additionally, the launch was preceded by an early leak on X (formerly Twitter). TestingCatalog News (@testingcatalog) posted details on the new models several minutes before the official announcement, listing the names of gpt-4o-mini-tts, gpt-4o-transcribe, and gpt-4o-mini-transcribe. The leak was credited to @StivenTheDev, and the post quickly gained traction.

But looking ahead, OpenAI plans to continue refining its audio models and is exploring custom voice capabilities while ensuring safety and responsible AI use. Beyond audio, OpenAI is also investing in multimodal AI, including video, to enable more dynamic and interactive agent-based experiences.
