
Accelsius and iM Data Centers Demo Next-Gen Cooling and Sustainability at Miami Data Center


Miami Data Center Developments Update

Miami has recently witnessed several significant developments and investments in its data center sector, underscoring the city’s growing importance as a digital infrastructure hub. Notable projects include:

Project Apollo: A proposed 15-megawatt (MW), two-story, 75,000-square-foot data center in unincorporated Miami-Dade County. With an estimated investment of $150 million, construction is slated to commence between 2026 and 2027. The development team has prior experience with major companies such as Amazon, Meta, and Iron Mountain.

RadiusDC’s Acquisition of Miami I: In August 2024, RadiusDC acquired the Miami I data center in the Sweetwater area. Spanning 170,000 square feet across two stories, the facility currently offers 3.2 MW of capacity, with plans to expand to 9.2 MW by the first half of 2026. The carrier-neutral facility provides connectivity to 11 fiber optic and network service providers.

Iron Mountain’s MIA-1 Data Center: Iron Mountain is developing a 150,000-square-foot, 16 MW data center on a 3.4-acre campus in central Northwest Miami. The facility, known as MIA-1, is scheduled to open in 2026 and aims to serve enterprises, cloud providers, and large-scale users in South Florida. It will feature fiber connections to other Iron Mountain facilities and a robust pipeline of carriers and software-defined networks.

EDGNEX’s Investment Plans: Dubai, UAE-based EDGNEX has announced plans to invest $20 billion in the U.S. data center market, with the potential to double that investment. The plan also includes a boutique condo project in Miami with an estimated $1 billion gross development value, signaling a broader commitment to the region.

All of these developments highlight Miami’s strategic position as a connectivity hub, particularly serving as a gateway to Latin America and the Caribbean. The city’s data center market is characterized by steady growth, with a focus on retail colocation and international connectivity. However, challenges such as limited current supply and power constraints in existing facilities have been noted. 

Additionally, existing facilities like Equinix’s NAP of the Americas play a crucial role in Miami’s data center landscape. This six-story, 750,000-square-foot data center and Internet exchange point is one of the world’s largest and serves as a major hub for network traffic between the United States and Latin America. 

Taken together, these investments and developments underscore Miami’s growing prominence in the data center industry, driven by its strategic location and increasing demand for digital infrastructure.

Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, Bitcoin, and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


HPE bolsters hybrid mesh firewall platform

“Hybrid mesh firewalls provide unified, multiform‑factor firewall security, giving organizations consistent policy, visibility, and enforcement across on‑premises, cloud, and remote environments. With hardware appliances, virtual firewalls, cloud‑native firewalls, and firewall as a service (FWaaS) under a single management plane, teams can apply the same rules everywhere to reduce gaps and…
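
The unified-policy idea in this excerpt lends itself to a small illustration. Below is a minimal sketch of one policy object pushed to every firewall form factor; all class and method names are illustrative assumptions, not HPE’s API.

```python
# Illustrative sketch only: models the "single policy, many form factors"
# idea behind hybrid mesh firewalls. Names are hypothetical, not HPE's API.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Rule:
    name: str
    src: str        # source CIDR or zone
    dst: str        # destination CIDR or zone
    port: int
    action: str     # "allow" or "deny"

@dataclass
class MeshPolicy:
    rules: list[Rule] = field(default_factory=list)

class EnforcementPoint:
    """One form factor: appliance, virtual, cloud-native, or FWaaS."""
    def __init__(self, form_factor: str):
        self.form_factor = form_factor
        self.rules: list[Rule] = []

    def apply(self, policy: MeshPolicy) -> None:
        # Every form factor receives the identical rule set, which is
        # what eliminates per-environment policy drift.
        self.rules = list(policy.rules)

policy = MeshPolicy([Rule("block-telnet", "any", "any", 23, "deny"),
                     Rule("allow-https", "10.0.0.0/8", "any", 443, "allow")])

fleet = [EnforcementPoint(f) for f in
         ("hardware-appliance", "virtual-firewall", "cloud-native", "fwaas")]
for ep in fleet:
    ep.apply(policy)   # same rules everywhere, one management plane
```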

Read More »

Energy Department Announces $50 Million Investment to Advance Affordable, Reliable, and Secure Energy for Tribes

These new investments will support Tribal-led energy project planning and development, strengthening energy reliability and increasing electricity access across Tribal communities.

WASHINGTON—The U.S. Department of Energy’s (DOE) Office of Indian Energy (IE) today announced a $50 million notice of funding opportunity (NOFO) aimed at fostering affordable, reliable, and secure energy solutions in Indian Country. This investment will support Tribal-led community-scale energy project planning and development as well as large-scale energy project planning.

In accordance with President Trump’s Executive Order, Unleashing American Energy, this NOFO highlights the fundamental role of energy in strengthening Tribal economies.

“This investment reflects the Trump Administration’s commitment to ensuring Tribal communities have access to affordable, reliable, and secure energy,” said U.S. Secretary of Energy Chris Wright. “By strengthening local energy infrastructure, we are supporting long-term economic growth, energy independence, and resilience across Indian Country.”

“This $50 million competitive funding opportunity for Tribal entities is directly aligned with the priorities of the U.S. Department of Energy,” said DOE’s Office of Indian Energy Director Eric Mahroum. “This funding will unleash Tribal energy development, supporting energy projects that aim to cut energy costs, expand electricity access, and advance economic opportunities. It’s exciting and like nothing we have offered before.”

Through the Unleashing Tribal Energy Development NOFO, the Office of Indian Energy is soliciting applications from Indian Tribes (including Alaska Native regional corporations and Village corporations), Tribal and intertribal organizations, Tribal Energy Development Organizations, and Tribal Colleges and Universities, or any consortium of these eligible groups, to focus on:

• Construction and installation of Tribal community-scale energy projects to meet the needs of the community
• Predevelopment activities required to identify community-scale energy opportunities and bring projects from concept to implementation-ready
• Planning, assessment, and feasibility activities to de-risk and advance development of large-scale Tribal energy projects that provide opportunities for revenue generation and economic development

DOE works comprehensively from inception through commercialization, helping Tribes develop solutions…

Read More »

Trump Administration Keeps Indiana Coal Plants Open to Ensure Affordable, Reliable and Secure Power in the Midwest

Emergency orders address critical grid reliability issues, lowering risk of blackouts and ensuring affordable electricity access.

WASHINGTON—U.S. Secretary of Energy Chris Wright today issued emergency orders to keep two Indiana coal plants operational to ensure Americans in the Midwest region of the United States have continued access to affordable, reliable, and secure electricity. The orders direct the Northern Indiana Public Service Company (NIPSCO), CenterPoint Energy, and the Midcontinent Independent System Operator, Inc. (MISO) to take all measures necessary to ensure specified generation units at both the R.M. Schahfer and F.B. Culley generating stations in Indiana are available to operate. Certain generation units at the coal plants were scheduled to shut down at the end of 2025. The orders prioritize minimizing electricity costs for the American people and minimizing the risk and costs of blackouts.

“The last administration’s energy subtraction policies had the United States on track to likely experience significantly more blackouts in the coming years—thankfully, President Trump won’t let that happen,” said Energy Secretary Wright. “The Trump Administration will continue taking action to keep America’s coal plants running to ensure we don’t lose critical generation sources. Americans deserve access to affordable, reliable, and secure energy to power their homes all the time, regardless of whether the wind is blowing or the sun is shining.”

The reliable supply of power from these two coal plants was essential in powering the grid during recent extreme winter weather. From January 23–February 1, Schahfer operated at over 285 megawatts (MW) every day and Culley operated at approximately 30 MW almost every day. These operations serve as a reminder that allowing reliable generation to go offline would unnecessarily contribute to grid reliability risks. Since the Department of Energy’s (DOE) original orders were issued on December 23, 2025, the coal plants have proven critical to MISO’s operations, operating during periods of high energy demand and low levels of intermittent…

Read More »

Energy Department Begins Delivering SPR Barrels at Record Speeds

WASHINGTON — The U.S. Department of Energy (DOE) today announced the award of contracts for the initial phase of the Strategic Petroleum Reserve (SPR) Emergency Exchange as directed by President Trump. The first oil shipments began today—just nine days after President Trump and the Department of Energy announced the United States would lead a coordinated release of emergency oil reserves among International Energy Agency (IEA) member nations to address short-term supply disruptions.

Under these initial awards, DOE will deliver 45.2 million barrels of crude oil in exchange and receive 55 million barrels in return, all at no cost to the taxpayer. This represents the first tranche of the United States’ 172-million-barrel release. Companies will receive 10 million barrels from the Bayou Choctaw SPR site, 15.7 million barrels from Bryan Mound, and 19.5 million barrels from West Hackberry.

“Thanks to President Trump, the Energy Department began this first exchange at record speeds to address short-term supply disruptions while also strengthening the Strategic Petroleum Reserve by returning additional barrels at no cost to taxpayers,” said Kyle Haustveit, Assistant Secretary of the Hydrocarbons and Geothermal Energy Office. “This exchange not only maintains reliability in the current market but will generate hundreds of millions of dollars in value in the form of additional barrels for the American people when the barrels are returned.”

This initial action will ultimately add close to 10 million barrels to the SPR’s inventory when the barrels are returned. Taxpayers will benefit from both the short-term support for global supply and long-term growth of the SPR’s inventory. This helps protect U.S. and global energy security. The Trump Administration continues to pursue additional opportunities to strengthen the reserve and restore its long-term readiness as a cornerstone of American energy security. For more information on the Strategic Petroleum Reserve and DOE’s…
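
The exchange arithmetic quoted above is easy to verify. A quick sketch using only the figures from the release:

```python
# Quick check of the SPR exchange figures quoted in the release above.
released = {"Bayou Choctaw": 10.0, "Bryan Mound": 15.7,
            "West Hackberry": 19.5}           # million barrels

total_out = sum(released.values())            # barrels delivered now
total_back = 55.0                             # barrels returned later
net_gain = total_back - total_out             # added to SPR inventory

print(f"Delivered: {total_out:.1f} million bbl")   # 45.2, matching the release
print(f"Returned:  {total_back:.1f} million bbl")
print(f"Net gain:  {net_gain:.1f} million bbl")    # ~9.8, i.e. "close to 10 million"
```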

Read More »

Then & Now: Oil prices, US shale, offshore, and AI—Deborah Byers on what changed since 2017

In this Then & Now episode of the Oil & Gas Journal ReEnterprised podcast, Managing Editor and Content Strategist Mikaila Adams reconnects with Deborah Byers, nonresident fellow at Rice University’s Baker Institute Center for Energy Studies and former EY Americas industry leader, to revisit a set of questions first posed in 2017. In 2017, the industry was emerging from a downturn and recalibrating strategy; today, it faces heightened geopolitical risk, market volatility, and a rapidly evolving technology landscape. The conversation examines how those earlier perspectives have aged—covering oil price bands and the speed of recovery from geopolitical shocks, the role of US shale relative to OPEC in balancing global supply, and the shift from scarcity to economic abundance driven by technology and capital discipline. Adams and Byers also compare the economics and risk profiles of shale and offshore development, including the growing role of Brazil, Guyana, and the Gulf of Mexico, and discuss how infrastructure and regulatory constraints shape market outcomes. The episode further explores where digital transformation—particularly artificial intelligence—is delivering tangible returns across upstream operations, from predictive maintenance and workforce planning to capital project execution. The discussion concludes with insights on consolidation and scale in the Permian basin, the strategic rationale behind recent megamergers, and the industry’s ongoing challenge to attract and retain next‑generation talent through flexibility, technical opportunity, and purpose‑driven work.

Read More »

Eni plans tieback of new gas discoveries offshore Libya

Eni North Africa, a unit of Eni SpA, together with Libya’s National Oil Corp., plans to develop two new gas discoveries offshore Libya as tiebacks to existing infrastructure. The discoveries were made about 85 km off the coast in about 650 ft of water. Bahr Essalam South 2 (BESS 2) and Bahr Essalam South 3 (BESS 3), adjacent geological structures, were drilled with exploration well C1-16/4 and appraisal well B2-16/4 about 16 km south of Bahr Essalam gas field, which lies about 110 km from the Tripoli coast. Gas-bearing intervals were encountered in both wells within the Metlaoui formation, the main productive reservoir of the area. The acquired data indicate the presence of a high-quality reservoir, with productive capacity confirmed by the well test already carried out on the first well. Preliminary volumetric estimates indicate that the BESS 2 and BESS 3 structures jointly contain more than 1 tcf of gas in place. Their proximity to Bahr Essalam field will enable rapid development through tieback, the operator said. The gas produced will be supplied to the Libyan domestic market and exported to Italy. Bahr Essalam produces through the Sabratha platform to the Mellitah onshore treatment plant.

Read More »

Azule Energy launches first non-associated gas production offshore Angola

Azule Energy has started natural gas production from the New Gas Consortium (NGC)’s Quiluma shallow-water field offshore Angola. Start-up of gas delivery from Quiluma field follows the November 2025 introduction of gas into the onshore gas plant, marking the beginning of production operations. Initial gas export will be 150 MMscfd, ramping up to 330 MMscfd by yearend, the operator said in a release Mar. 13. In a separate release Mar. 17, NGC partner TotalEnergies said the startup marks the first development of a non-associated gas field in Angola, noting that the gas produced “will be a stable and important source of gas supply for the Angola LNG plant that is delivering LNG to both the European and Asian markets.” The non-associated gas of NGC Phase 1 will come from the Quiluma and Maboqueiro shallow-water fields, with additional potential from the Blocks 2, 3, and 15/14 areas. An onshore plant sited near Soyo in Zaire province, northern Angola, will process gas from the fields and connect to the Angola LNG plant, providing reliable feedstock supply. The plant has a capacity of 400 MMscfd of gas and 20,000 b/d of condensates. Azule Energy, a 50-50 joint venture between bp and Eni, is operator of the NGC project with 37.4% interest. Partners are TotalEnergies (11.8%), Cabinda Gulf Oil Co., a subsidiary of Chevron (31%), and Sonangol E&P (19.8%).

Read More »

Panasonic says datacenter batteries are selling out and AI is to blame

AI servers are rewriting the power rulebook

The root cause, Panasonic noted in the statement, is the electrical behavior of AI workloads. Unlike conventional server applications, AI inference and training draw large amounts of electricity in short bursts to sustain GPU processing, causing peak power levels to spike rapidly and voltages to fluctuate. “Peak power levels for such servers can rise rapidly, and voltages can often become unstable,” the statement said. “Securing stable, highly reliable power supplies is an absolute necessity for AI datacenters.”

Vertiv warned in its 2025 Data Center Trends predictions that AI racks must handle loads that “can fluctuate from a 10% idle to a 150% overload in a flash,” requiring UPS systems and batteries with significantly higher power densities than current infrastructure provides.

Panasonic said the solution gaining traction among hyperscalers is to place a battery backup unit on each server rack rather than rely on centralized UPS infrastructure upstream, absorbing voltage instability at the source. The company said its systems also carry a peak-shaving function that stores off-peak electricity and deploys it during demand spikes, reducing peak grid draw at a time when AI-driven consumption faces growing regulatory and utility scrutiny.

Several independent research bodies have reached similar conclusions on the severity of the power challenge ahead. Uptime Institute, in its Five Data Center Predictions for 2026, said “developers will not outrun the power shortage,” with research analyst Max Smolaks warning the crisis “is likely to last many years.” The IEA projected global datacenter electricity consumption could exceed 1,000 TWh by 2026, more than double 2022 levels, while Gartner has warned that energy shortages could restrict 40% of AI datacenters by 2027.

Gogia said the shift runs deeper than a hardware swap. “This is not backup in the traditional sense. This is active stabilisation,”…
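
Panasonic’s peak-shaving concept, charging the rack battery during lulls and discharging during spikes so grid draw stays capped, reduces to a simple control loop. The sketch below uses invented numbers and is not Panasonic’s controller logic:

```python
# Minimal peak-shaving sketch: a rack-level battery caps what the rack
# draws from the grid. All numbers and names are illustrative.
GRID_CAP_KW = 30.0          # max draw we allow from the grid
BATTERY_KWH = 5.0           # rack battery capacity

def step(load_kw: float, soc_kwh: float, dt_h: float = 1 / 3600):
    """One control tick: return (grid_draw_kw, new_soc_kwh)."""
    if load_kw > GRID_CAP_KW and soc_kwh > 0:
        # Spike: battery supplies the excess so grid draw stays capped.
        discharge = min(load_kw - GRID_CAP_KW, soc_kwh / dt_h)
        return load_kw - discharge, soc_kwh - discharge * dt_h
    if load_kw < GRID_CAP_KW and soc_kwh < BATTERY_KWH:
        # Lull: recharge using the headroom below the cap.
        charge = min(GRID_CAP_KW - load_kw, (BATTERY_KWH - soc_kwh) / dt_h)
        return load_kw + charge, soc_kwh + charge * dt_h
    return load_kw, soc_kwh

# Bursty AI load: 10% idle to 150% of a nominal 30 kW rack, per the Vertiv quote.
soc = BATTERY_KWH
for load in [3.0, 45.0, 45.0, 3.0, 45.0]:
    grid, soc = step(load, soc)
    print(f"load={load:5.1f} kW  grid={grid:5.1f} kW  soc={soc:.4f} kWh")
```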

Read More »

Why AI rack densities make liquid cooling non-negotiable

Average rack power density has more than doubled in two years, from 8 kW to 17 kW, and is projected to reach 30 kW by 2027, according to an October 2024 McKinsey report, with AI training racks already well ahead of that average. Those limits show up in GPU clock speed: H100 GPUs under inadequate air cooling can throttle to a fraction of their rated clock speed within seconds of a sustained training run. In distributed jobs across thousands of GPUs, one throttled chip can stall the entire run. The DOE estimates cooling accounts for up to 40% of data center energy use.

JLL research establishes three density thresholds:

• Up to ~20 kW per rack: air cooling is adequate
• Up to ~100 kW: rear-door heat exchangers extend viability
• Above ~175 kW: immersion cooling is required

Direct-to-chip cooling fills the middle band, handling densities between ~100 and ~175 kW where rear-door exchangers fall short and immersion is not yet warranted (see the sketch after this excerpt).

Hot water changes the economics

Mechanical chillers are one of the biggest energy draws in any liquid-cooled data center, and until recently they were an unavoidable cost of liquid cooling. Nvidia’s Vera Rubin processor is changing that. At CES in January 2026, Jensen Huang announced that Vera Rubin supports liquid cooling at 45 degrees Celsius, high enough for data centers to reject heat through dry coolers using ambient air rather than mechanical chillers. Nvidia’s CES press release confirmed Rubin is in full production, with customer availability in the second half of 2026. According to Nvidia’s product specifications, the Vera Rubin NVL72 uses warm-water, single-phase direct liquid cooling at a 45°C supply temperature, allowing data centers to reject heat through dry coolers using ambient air rather than energy-intensive chiller systems.
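
The JLL thresholds above amount to a simple decision function over rack power density. A minimal sketch, with the band edges taken from the excerpt and everything else illustrative:

```python
# Map rack power density (kW) to a cooling approach using the JLL
# thresholds quoted above. Band edges are approximate ("~") in the source.
def cooling_for(rack_kw: float) -> str:
    if rack_kw <= 20:
        return "air cooling"
    if rack_kw <= 100:
        return "rear-door heat exchanger"
    if rack_kw <= 175:
        return "direct-to-chip liquid cooling"
    return "immersion cooling"

# 8 and 17 kW are the averages from the McKinsey figures above;
# the larger values are illustrative AI-rack densities.
for kw in (8, 17, 30, 80, 132, 200):
    print(f"{kw:>3} kW/rack -> {cooling_for(kw)}")
```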

Read More »

Executive Roundtable: AI Infrastructure Enters Its Execution Era

Miranda Gardiner, iMasons Climate Accord: Since 2023, the digital infrastructure industry has moved definitively from planning to execution in the AI infrastructure cycle. Industry analysts forecast continued exponential growth, with active capacity at least doubling between now and 2030 and total capacity potentially tripling, quintupling, or more. In practical terms, we’ll see more digital infrastructure capacity come online in the next five years than has been built in the past 30 years, representing a historic industrial transformation requiring trillions of dollars in capital expenditure and a workforce measured in the millions.

Design and organizational flexibility, integrated execution of sustainable solutions, and community-centered workforce development will separate those that thrive from those that struggle. Effective organizations will pivot quickly under these constantly shifting conditions, and the leaders will be those that build fast but build right, as strategic flexibility balances long-term performance, efficiency, and regulatory compliance. We already know the resource intensity required to bring AI resources online and are working diligently to address it in the short term, delivering streamlined and optimized solutions for everything from site selection to cooling and power management while lowering lifecycle emissions.

Additionally, in some regions, grid interconnection timelines and power availability are already the pacing item for data center development. Organizations that align their sustainability targets and energy procurement strategies will have a clearer path to execution. An operational model capable of delivering multiple large-scale facilities simultaneously across regions is another key piece of successful outcomes, built on standardized, repeatable frameworks that reduce engineering time and accelerate permitting. We hear often about collaboration and strong partnerships, and these will be critical with utilities, regulators, and equipment manufacturers to anticipate bottlenecks before they impact schedules.

Execution discipline will increasingly determine competitive advantage as the industry scales. The world and, especially, our host communities, are watching closely. Projects that move forward…

Read More »

Jensen Huang Maps the AI Factory Era at NVIDIA GTC 2026

SAN JOSE, Calif. — If there was a single message that emerged from Jensen Huang’s keynote at Nvidia’s GTC conference this week, it was this: the artificial intelligence revolution is entering its infrastructure phase. For the past several years, the technology industry has been preoccupied with training ever larger models. But in Huang’s telling, that era is already giving way to something far bigger: the industrial-scale deployment of AI systems that run continuously, generating intelligence on demand. “The inference inflection point has arrived,” Huang told the audience gathered at the SAP Center.

That shift carries enormous implications for the data center industry. Instead of episodic bursts of compute used to train models, the next generation of AI systems will require persistent, high-throughput infrastructure designed to serve billions, and eventually trillions, of inference requests every day. And the scale of the buildout Huang envisions is staggering. Throughout the keynote, the Nvidia CEO repeatedly referenced what he believes will become a trillion-dollar global market for AI infrastructure in the coming years, spanning accelerated computing systems, networking fabrics, storage architectures, power systems, and the facilities required to house them.

At that scale, Huang argued, data centers are no longer simply IT facilities. They are truly becoming AI factories: industrial systems designed to convert electricity into tokens. “Tokens are the new commodity,” Huang said. “AI factories are the infrastructure that produces them.”

Across more than two hours on stage, Huang sketched the architecture of that new computing platform, introducing new computing systems, networking technologies, software frameworks, and infrastructure blueprints designed to support what Nvidia believes will be the largest computing buildout in history. Four main themes defined the presentation:

• The arrival of the inference inflection point.
• The emergence of OpenClaw as a foundational operating layer for AI agents.
• New hybrid inference architectures involving…
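
Huang’s proposed figure of merit, useful output per fixed power envelope, is easy to make concrete. A back-of-the-envelope sketch with invented numbers, not Nvidia’s published figures:

```python
# Back-of-the-envelope "AI factory" metric: tokens produced per unit of
# energy for a facility with a fixed power envelope. All numbers invented.
FACILITY_MW = 100.0                  # fixed power envelope
TOKENS_PER_SEC = 2.0e9               # aggregate inference throughput (assumed)

joules_per_token = (FACILITY_MW * 1e6) / TOKENS_PER_SEC
tokens_per_day = TOKENS_PER_SEC * 86_400

print(f"{joules_per_token:.3f} J/token")       # energy cost of one token
print(f"{tokens_per_day:.2e} tokens/day")      # ~1.7e14 per day
# Under this lens, raising throughput at constant power (better chips,
# fewer idle GPUs) directly lowers J/token -- Huang's "performance per watt".
```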

Read More »

Executive Roundtable: The Coordination Imperative

Christopher Gorthy, DPR Construction: Early collaboration of key stakeholders has become the baseline for delivering these complex projects. The teams that succeed in these environments are the ones who combine effective meeting structures with enough in‑person interaction to build real trust. Pairing those relationships with the right tools can help track key decision-making, document reasoning, and keep everyone aligned on “The Why,” creating more predictable outcomes.

Where the industry continues to feel fragmented is around liability, risk, and comfort with sharing design and model data. Achieving the speed these projects demand requires the entire team to understand each partner’s constraints and then work together to solve problems, communicating clearly and documenting decisions as they go. All of our partnerships are solving equations with multiple variables. Our teams must provide early feedback and solutions when faced with impacts or delays outside our control, and even earlier communication of impacts that cannot be mitigated. Open communication channels, whether through shared digital platforms or recurring working sessions, are critical to staying ahead of risk.

As projects get bigger, alignment with financial institutions, insurance entities, and private equity partners has also become essential. The number of trade partners capable of taking on contracts of this size is limited, so making sure we are setting up our partners for success while also working to expand the network of qualified trade partners is a key strategy.

From a tactical standpoint, the most effective projects operate from a single integrated schedule that ties together the owner, vendors, general contractor, trades, commissioning teams, and all other stakeholders. Reinforcing this with consistent two‑ to three‑week look-ahead reviews and onsite schedule coordination meetings, regardless of contractual structure, significantly increases alignment and efficiency at the project level.

Read More »

Jensen Huang After the Keynote: Inside Nvidia’s GTC 2026 Press Briefing

The Data Center as Token Factory

If there was one line of thinking that defined the session, it was Huang’s insistence that the industry must stop thinking about computers as systems for data entry and retrieval. That, he said, is the old paradigm. The new one is a “token manufacturing system.” That phrase landed because it compresses a lot of Nvidia’s strategy into a single mental model. In this view, the modern data center is no longer just a warehouse of servers or a cloud abstraction layer. It is a factory, and the unit of output is increasingly the token.

For Data Center Frontier readers, this is a familiar direction of travel, but Huang pushed it further than most CEOs do. He repeatedly tied Nvidia’s roadmap to token throughput, token economics, and performance per watt. He is clearly trying to establish a new baseline metric for AI infrastructure value: not raw capacity, but how much useful intelligence a facility can produce from a fixed power envelope.

That point also surfaced in his discussion of Grace and Vera CPUs. Huang’s argument was not that Nvidia intends to win every classical CPU market. It was that traditional measures such as cores per dollar are insufficient in AI data centers where the real economic risk is leaving extremely valuable GPUs idle. In other words, the CPU matters because it must move work fast enough to keep the GPU estate productive. In a power-limited, AI-heavy environment, the purpose of the CPU changes. It is no longer optimized for the old hyperscale rental model. It is optimized for keeping the token factory fed. That is a subtle but major shift. It suggests that the next-generation AI data center will be increasingly engineered around the productivity of the overall system rather than around legacy component economics.
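
The “keep the GPU estate fed” argument is ultimately a claim about the cost of idle accelerators. A toy illustration, with all prices and utilization figures assumed:

```python
# Toy illustration of Huang's CPU argument: the cost of idle GPUs dwarfs
# marginal CPU spend. Prices and utilization figures are assumptions.
GPU_COST = 30_000.0        # amortized purchase price per accelerator (assumed)
FLEET = 10_000             # accelerators in the facility
LIFETIME_H = 4 * 365 * 24  # 4-year depreciation horizon, in hours

def idle_cost_per_hour(utilization: float) -> float:
    """Capex burned per hour by the idle fraction of the fleet."""
    hourly_capex = GPU_COST * FLEET / LIFETIME_H
    return hourly_capex * (1.0 - utilization)

for u in (0.60, 0.80, 0.95):
    print(f"utilization {u:.0%}: ${idle_cost_per_hour(u):,.0f}/hour idle")
# A faster CPU or data path that lifts utilization from 80% to 95% is worth
# far more here than its own cost -- the point about "cores per dollar".
```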

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.
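
The figures quoted above cover different windows (Microsoft’s fiscal year ends June 30), but the growth arithmetic is straightforward:

```python
# Growth multiples from the capex figures quoted above (in $ billions).
capex_2020_fy = 17.6          # Microsoft FY2020 capital expenditure
capex_2025_fiscal = 80.0      # Smith's figure, fiscal year to June 30, 2025
capex_2025_calendar = 62.4    # Bloomberg Intelligence estimate, calendar 2025

print(f"fiscal figure:   {capex_2025_fiscal / capex_2020_fy:.1f}x 2020 spend")
print(f"calendar figure: {capex_2025_calendar / capex_2020_fy:.1f}x 2020 spend")
# Either way, AI-era spending runs roughly 3.5x-4.5x what Microsoft
# spent on capex five years earlier.
```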

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction, and commercial landscaping. Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas and is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd).

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do…

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved.

“Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail.

Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to…
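
The LLM-as-judge pattern mentioned above is straightforward to sketch. In the minimal, provider-agnostic version below, call_model is a hypothetical placeholder for whatever client you use, not a real library call:

```python
# Minimal LLM-as-judge sketch. call_model() is a hypothetical placeholder
# for your provider's chat-completion client; nothing here is a real API.
def call_model(model: str, prompt: str) -> str:
    raise NotImplementedError("wire up your provider's client here")

JUDGE_PROMPT = """Rate the following answer to the task on a 1-5 scale
for correctness and completeness. Reply with the number only.
Task: {task}
Answer: {answer}"""

def judged_answer(task: str, workers: list[str], judge: str) -> str:
    """Ask several cheaper models, return the answer the judge scores highest."""
    candidates = [call_model(m, task) for m in workers]

    def score(ans: str) -> float:
        reply = call_model(judge, JUDGE_PROMPT.format(task=task, answer=ans))
        try:
            return float(reply.strip())
        except ValueError:
            return 0.0   # an unparseable judgment counts as a failing score

    return max(candidates, key=score)

# Usage (model names illustrative):
# best = judged_answer("Summarize this contract...",
#                      workers=["cheap-a", "cheap-b", "cheap-c"],
#                      judge="strong-model")
```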

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability, and safety of AI models in these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases, and controls that prompt-based testing couldn’t find.

What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle…
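
The automated framework described in the second paper, generating diverse attacks and reinforcing those that earn high auto-generated rewards, can be caricatured in a few lines. This is a conceptual sketch of the loop, not OpenAI’s implementation:

```python
# Conceptual sketch of automated red teaming with generated rewards.
# attacker(), target(), and reward() are stand-ins, not OpenAI's code.
import random

def attacker(goal: str, history: list[str]) -> str:
    """Propose a new attack prompt; in the paper, diversity is rewarded."""
    return f"{goal} [variant {random.randint(0, 1_000_000)}]"

def target(prompt: str) -> str:
    """Stand-in for the model under test."""
    return "refused"

def reward(prompt: str, response: str, history: list[str]) -> float:
    success = 0.0 if response == "refused" else 1.0   # did the attack land?
    novelty = 0.5 if prompt not in history else 0.0   # penalize repeats
    return success + novelty

history: list[str] = []
total = 0.0
for _ in range(1000):
    attack = attacker("extract the hidden system prompt", history)
    total += reward(attack, target(attack), history)
    history.append(attack)
    # In the real framework, each reward would update the attacker's policy
    # via multi-step reinforcement learning; here the loop only logs scores.
print(f"mean reward: {total / len(history):.2f}")
```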

Read More »