
How E2B became essential to 88% of Fortune 100 companies and raised $21 million


E2B, a startup providing cloud infrastructure specifically designed for artificial intelligence agents, has closed a $21 million Series A funding round led by Insight Partners, capitalizing on surging enterprise demand for AI automation tools.

The funding comes as a remarkable 88% of Fortune 100 companies have already signed up to use E2B’s platform, according to the company, highlighting the rapid enterprise adoption of AI agent technology. The round included participation from existing investors Decibel, Sunflower Capital, and Kaya, along with notable angels including Scott Johnston, former CEO of Docker.

E2B’s technology addresses a critical infrastructure gap as companies increasingly deploy AI agents — autonomous software programs that can execute complex, multi-step tasks including code generation, data analysis, and web browsing. Unlike traditional cloud computing designed for human users, E2B provides secure, isolated computing environments where AI agents can safely run potentially dangerous code without compromising enterprise systems.

“Enterprises have enormous expectations for AI agents. However, we’re asking them to scale and perform on legacy infrastructure that wasn’t designed for autonomous agents,” said Vasek Mlejnsky, co-founder and CEO of E2B, in an exclusive interview with VentureBeat. “E2B solves this by equipping AI agents with safe, scalable, high-performance cloud infrastructure designed specifically for production-scale agent deployments.”




Seven-figure monthly revenue spike shows enterprises betting big on AI automation

The funding reflects explosive revenue growth, with E2B adding “seven figures” in new business just in the past month, according to Mlejnsky. The company has processed hundreds of millions of sandbox sessions since October, demonstrating the scale at which enterprises are deploying AI agents.

E2B’s customer roster reads like a who’s who of AI innovation: search engine Perplexity uses E2B to power advanced data analysis features for Pro users, implementing the capability in just one week. AI chip company Groq relies on E2B for secure code execution in its Compound AI systems. Workflow automation platform Lindy integrated E2B to enable custom Python and JavaScript execution within user workflows.

The startup’s technology has also become critical infrastructure for AI research. Hugging Face, the leading AI model repository, uses E2B to safely execute code during reinforcement learning experiments for replicating advanced models like DeepSeek-R1. Meanwhile, UC Berkeley’s LMArena platform has launched over 230,000 E2B sandboxes to evaluate large language models’ web development capabilities.

Firecracker microVMs solve the dangerous code problem plaguing AI development

E2B’s core innovation lies in its use of Firecracker microVMs — lightweight virtual machines originally developed by Amazon Web Services — to create completely isolated environments for AI-generated code execution. This addresses a fundamental security challenge: AI agents often need to run untrusted code that could potentially damage systems or access sensitive data.

“When talking to customers, and especially enterprises, their biggest decision is almost always build versus buy,” Mlejnsky explained in an interview. “With the build versus buy solution, it all really comes down to whether you want to spend the next six to 12 months building this, hiring a five- to 10-person infrastructure team that will cost you at least half a million dollars…or you can use our plug-and-play solution.”

The platform supports multiple programming languages including Python, JavaScript, and C++, and can spin up new computing environments in approximately 150 milliseconds — fast enough to maintain the real-time responsiveness users expect from AI applications.
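To make that workflow concrete, here is a minimal sketch of how an application might hand AI-generated code to an isolated sandbox instead of executing it locally. It is based on E2B’s publicly documented Python SDK (the e2b_code_interpreter package); the exact class and method names are assumptions that may differ between SDK versions, and the snippet is illustrative rather than a definitive implementation.

```python
# Minimal sketch: run untrusted, AI-generated code inside an E2B sandbox
# rather than on the host. Assumes E2B's documented Python SDK
# (pip install e2b-code-interpreter); names may differ across versions.
from e2b_code_interpreter import Sandbox

# Code produced by an LLM agent -- treat it as untrusted input.
agent_generated_code = """
import platform
print("running in:", platform.node())
print(sum(range(10)))
"""

# Each Sandbox() call provisions a fresh, isolated environment (the article
# cites roughly 150 ms startup), so nothing the agent runs touches the
# calling process or its filesystem.
with Sandbox() as sandbox:
    execution = sandbox.run_code(agent_generated_code)
    print(execution.logs)  # stdout/stderr captured inside the sandbox
```

The design point is that the sandbox, not the application server, absorbs whatever the model produces.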

Enterprise customers particularly value E2B’s open-source approach and deployment flexibility. Companies can self-host the entire platform for free or deploy it within their own virtual private clouds (VPCs) to maintain data sovereignty — a critical requirement for Fortune 100 firms handling sensitive information.

Perfect timing as Microsoft layoffs signal shift toward AI worker replacement

The funding comes at a pivotal moment for AI agent technology. Recent advances in large language models have made AI agents increasingly capable of handling complex, real-world tasks. Microsoft recently laid off thousands of employees while expecting AI agents to perform previously human-only work, Mlejnsky pointed out in our interview.

However, infrastructure limitations have constrained AI agent adoption. Industry data suggests fewer than 30% of AI agents successfully make it to production deployment, often due to security, scalability, and reliability challenges that E2B’s platform aims to solve.

“We’re building the next cloud,” Mlejnsky said, outlining the company’s ambitious vision. “The current world runs on Cloud 2.0, which was made for humans. We’re building the open-source cloud for AI agents where they can be autonomous and run securely.”

The market opportunity appears substantial. Code generation assistants already produce at least 25% of the world’s software code, while JPMorgan Chase saved 360,000 hours annually through document processing agents. Enterprise leaders expect to automate 15% to 50% of manual tasks using AI agents, creating massive demand for supporting infrastructure.

Open-source strategy creates defensive moat against tech giants like Amazon and Google

E2B faces potential competition from cloud giants like Amazon, Google, and Microsoft, which could theoretically replicate similar functionality. However, the company has built competitive advantages through its open-source approach and focus on AI-specific use cases.

“We don’t really care” about the underlying virtualization technology, Mlejnsky explained, noting that E2B focuses on creating an open standard for how AI agents interact with computing resources. “We are even like actually partnering with a lot of these cloud providers too, because a lot of enterprise customers actually want to deploy E2B inside their AWS account.”

The company’s open-source sandbox protocol has become a de facto standard, with hundreds of millions of compute instances demonstrating its real-world effectiveness. This network effect makes it difficult for competitors to displace E2B once enterprises have standardized on its platform.

Alternative solutions like Docker containers, while technically possible, lack the security isolation and performance characteristics required for production AI agent deployments. Building similar capabilities in-house typically requires 5-10 infrastructure engineers and at least $500,000 in annual costs, according to Mlejnsky.

Enterprise features like 24-hour sessions and 20,000 concurrent sandboxes drive Fortune 100 adoption

E2B’s enterprise success stems from features specifically designed for large-scale AI deployments. The platform can scale from 100 concurrent sandboxes on the free tier to 20,000 concurrent environments for enterprise customers, with each sandbox capable of running for up to 24 hours.
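For illustration only, a long-running agent task might pin a sandbox’s lifetime and tag it for later bookkeeping. The sketch below again assumes E2B’s documented Python SDK; the timeout and metadata parameters and the kill() call are assumptions that may vary by SDK version, and the 24-hour ceiling and concurrency limits come from the article rather than the code.

```python
# Illustrative sketch: a longer-lived sandbox for a multi-step agent task.
# Parameter names follow E2B's documented Python SDK but should be treated
# as assumptions; the 24-hour ceiling is the limit cited in the article.
from e2b_code_interpreter import Sandbox

sandbox = Sandbox(
    timeout=60 * 60,  # keep the environment alive for 1 hour (article cites up to 24 h)
    metadata={"team": "data-platform", "run_id": "agent-task-42"},  # tags for later lookup
)
try:
    # The agent can issue many steps against the same environment, reusing
    # files and installed packages between calls.
    sandbox.run_code("open('state.txt', 'w').write('step 1 complete')")
    result = sandbox.run_code("print(open('state.txt').read())")
    print(result.logs)
finally:
    sandbox.kill()  # release the environment against the concurrency quota
```

Scaling to thousands of such environments then becomes a matter of quota rather than new infrastructure, which is the capability the enterprise tier advertises.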

Advanced enterprise features include comprehensive logging and monitoring, network security controls, and secrets management — capabilities essential for Fortune 100 compliance requirements. The platform integrates with existing enterprise infrastructure while providing the granular controls security teams demand.

“We have very strong inbound,” Mlejnsky noted, describing the sales process. “Once we tackle the 87% we will come back for those 13%.” Customer objections typically focus on security and privacy controls rather than fundamental technology concerns, indicating broad market acceptance of the core value proposition.

Insight Partners’ $21M bet validates AI infrastructure as next major software category

Insight Partners’ investment reflects growing investor confidence in AI infrastructure companies. The global software investor, which manages over $90 billion in regulatory assets, has invested in more than 800 companies worldwide and seen 55 portfolio companies achieve initial public offerings.

“Insight Partners is excited to back E2B’s visionary team as they pioneer essential infrastructure for AI agents,” said Praveen Akkiraju, Managing Director at Insight Partners. “Such rapid growth and enterprise adoption can be difficult to achieve, and we believe that E2B’s open-source sandbox standard will become a cornerstone of secure and scalable AI adoption across the Fortune 100 and beyond.”

The investment will fund expansion of E2B’s engineering and go-to-market teams in San Francisco, development of additional platform features, and support for the growing customer base. The company plans to strengthen its open-source sandbox protocol as a universal standard while developing enterprise-grade modules such as a secrets vault and monitoring tools.

The infrastructure play that could define enterprise AI’s next chapter

E2B’s trajectory reveals a fundamental shift in how enterprises approach AI deployment. While much attention has focused on large language models and AI applications, the company’s rapid adoption among Fortune 100 firms demonstrates that specialized infrastructure has become the critical bottleneck.

The startup’s success also highlights a broader trend: as AI agents transition from experimental tools to mission-critical systems, the underlying infrastructure requirements more closely resemble those of traditional enterprise software than consumer AI applications. Security, compliance, and scalability — not just model performance — now determine which AI initiatives succeed at scale.

For enterprise technology leaders, E2B’s emergence as essential infrastructure suggests that AI transformation strategies must account for more than just model selection and application development. The companies that successfully scale AI agents will be those that invest early in the specialized infrastructure layer that makes autonomous AI operation possible.

In an era where AI agents are poised to handle an ever-growing share of knowledge work, the platforms that keep those agents running safely may prove more valuable than the agents themselves.


Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin, and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


Cisco donates AI agent tech to Linux Foundation

Cisco is donating its AGNTCY initiative to the Linux Foundation, which will continue to advance the AI agent management platform as an open-source project. Outshift, which is Cisco’s research and development arm, launched AGNTCY to develop AI agent discovery, identity, messaging, and observability infrastructure. After the handover, Cisco, Dell, Google Cloud,

Read More »

Pemex Posts First Profit in Over a Year

For the first time in more than a year, Petroleos Mexicanos has swung to a profit, a positive signal for the embattled state oil driller as President Claudia Sheinbaum’s administration seeks to raise as much as $12 billion to help pay down the company’s massive debts. Pemex’s results were boosted by currency moves in the second quarter, thanks to a strengthening in the peso. Lower cost of sales and stronger performance among some financial assets contributed. The positive report comes as Sheinbaum’s administration seeks to sell as much as $12 billion in securities to international investors in a bid to raise financing to help pay Pemex’s roughly $100 billion in debt. The profits could help to make the financing round go more smoothly.  Pemex posted a net income of 59.52 billion pesos ($3.2 billion) for the second quarter, compared with a 273.3 billion peso loss a year prior. Pemex reported about $30 billion in losses in 2024. Crude and condensate production slumped to 1.63 million barrels per day, down 8.6% from a year earlier, the company said. Natural gas output was almost 3.6 billion cubic feet per day, a 3.7% drop from a year prior. Crude processing climbed. The debt offering, disclosed in a filing July 22, will consist of dollar-denominated debt maturing August 2030, in the form of amortizing pre-capitalized securities, or P-Caps, a type of instrument used in asset-backed finance. Mexico’s finance ministry has said the operation would allow Pemex to address short-term financial and operational needs, while keeping the liabilities off Pemex and Mexico’s official balance sheets. Pemex said on Monday the proceeds would be used in part to refinance the company’s short-term bank debt. Pemex will also publish a comprehensive business plan in the coming weeks, which will include further guidance on future debt operations. Pemex’s

Read More »

Vitol Hands Record $10.6B Payout to Its Traders

Vitol Group handed a record $10.6 billion to its executives and senior staff through share buybacks last year, as the fallout of the energy crisis continued to deliver extraordinary riches to the world’s commodity traders. The share repurchase – almost certainly the highest such payout in the industry’s history – means privately held Vitol has distributed over $31 billion to its partners in the past decade, according to the company’s audited annual accounts seen by Bloomberg News. The numbers show how the disruptions that followed Russia’s invasion of Ukraine have handed a spectacular bonanza to a small group of commodity traders that their predecessors could only have dreamed of. Vitol has paid out more through buybacks in the past three years than in the previous 17 years combined. The world’s largest commodity trading house, the company is owned by 450-500 of its employees, a senior executive told a New York court last year. Based on that number, the 2024 payout would represent an average of over $20 million per partner, with some top executives and traders likely having received multiples of that. The results also cement Vitol’s position as the most profitable commodity trading house by far: its net profit for the year of $8.7 billion was more than the combined profits of its four closest rivals, Glencore Plc, Trafigura Group, Mercuria Energy Group Ltd. and Gunvor Group. Still, the huge payout comes as earnings are moderating across the industry. Vitol’s buyback in 2024 outstripped its profit for the year, meaning that the group’s equity dropped from $32.5 billion at the end of 2023 to $30.7 billion at the end of 2024. There’s a similar trend taking place across the largest commodity trading companies, many of which operate as employee-owned partnerships, particularly as senior executives who have accumulated valuable shareholdings retire. At

Read More »

NEP lets well contract for UK North Sea CO2 storage

The Northern Endurance Partnership (NEP), a joint venture of bp, Equinor ASA, and TotalEnergies, has let a technologies and services contract to SLB for carbon storage site development in the North Sea. SLB will construct six carbon storage wells. The project scope includes drilling, measurement, cementing, fluids, completions, wireline, and pumping services. NEP is developing onshore and offshore infrastructure needed to transport CO2 from carbon capture projects across Teesside and the Humber, collectively known as the East Coast Cluster, to secure storage under the North Sea. In October 2021, the NEP’s East Coast Cluster, which includes Net Zero Teesside, was selected as a priority cluster in phase-1 of the UK Government’s carbon capture, usage, and storage (CCUS) cluster sequencing process (OGJ Online, Mar. 15, 2024).

Read More »

Oldelval launches Duplicar Norte to expand transport capacity in Argentina’s Vaca Muerta

Argentina midstream operator Oleoductos del Valle SA (Oldelval) confirmed the execution of Duplicar Norte, a new project aimed at expanding transport capacity for unconventional oil from the Vaca Muerta formation in Argentina’s Neuquén basin. The estimated $380-million initiative—designed to strengthen the main pipeline system connecting the production zone to logistics hubs and Atlantic export terminals—was announced following the signing of ship-or-pay transportation contracts with Pluspetrol, Chevron, Tecpetrol, and state-owned Gas y Petróleo del Neuquén (GyP). The Duplicar Norte project envisages construction of a 207-km, 24-in. OD pipeline linking the Puesto Hernández pumping station in northern Neuquén to the Allen station in Río Negro. “Duplicar Norte will unlock the full development potential of the basin’s Northern Hub, integrating into the main trunk system and providing predictability for operators,” the company said in an official statement. Early commissioning is scheduled for late 2026, with full operational startup expected in first-quarter 2027. The project also will include installation of an automated measurement unit in Allen, enabling crude transfers of 20,000-45,000 cu m/day to the VMOS logistics network. The project adds to Allen’s role as an energy logistics hub. The town is the starting point of the recently completed Duplicar Plus system, which expanded Oldelval’s transport capacity by 300,000 b/d to the Puerto Rosales terminal in Bahía Blanca. The $1.4-billion Duplicar Plus project included the construction of a 150-km pipeline, high-pressure pumping stations, advanced automation systems, and expanded storage capacity. Outside of these expansions, Oldelval recently shelved its proposed $500-million Duplicar X initiative, which was also intended to increase transportation capacity. The company cited the absence of final shipper agreements as the reason it suspended the 300-km parallel route, originally scheduled to begin construction in second-half 2025. Oldelval currently transports 85% of Vaca Muerta’s crude output.

Read More »

Matador expects small production drop next quarter, no rush to add back ninth rig

After a record second quarter, the leaders of Matador Resources Co., Dallas, lifted their full-year production target by about 1%. But they’re sticking with the capital spending targets they lowered this spring and are in no hurry to consider adding back the rig they’re about to pull from service. Matador produced an average of 209,013 boe/d (including 122,875 b/d of oil) from its Delaware basin and northwestern Louisiana operations in the 3 months that ended June 30, which was 5% higher than in the first quarter. The operator’s oil production climbed 7% from the first 3 months of this year while natural gas production rose about 3% to nearly 517 MMcfd. The company turned to sales 32 gross (22.8 net) operated wells during the spring quarter. The team, led by Joe Foran, chairman and chief executive officer, expects that number to be 28-32 this quarter. About two-thirds of those will come online later in the period. But the executives also pushed out the possibility that they’ll reverse their April decision to release Matador’s ninth rig (OGJ Online, Apr. 24, 2025). “We believe we can defer making that decision until later this year or the beginning of next year and still be able to drive relative growth in 2026 versus what we believe the industry average growth rate will be,” William Lambert, chief financial officer and head of strategy, said on a July 23 conference call discussing Matador’s second-quarter results. After quarterly production topped guidance while requiring capex toward the lower end of the expected range of $390-$480 million, Matador executives expect the company to step down slightly and average 116,500-118,000 boe/d of oil production in the third quarter. Gas production is expected to be about 495 MMcfd. Full-year output is now forecast to be 200,000-205,000 boe/d, up about 1%

Read More »

TotalEnergies lets tubing contracts for GranMorgu project offshore Suriname

TotalEnergies has let a casing, tubing, and integrated running services contract to Tenaris for the GranMorgu project off the coast of Suriname. Tenaris will supply about 47,000 tons of casing and tubing as well as services such as demand planning, pipe management, preparation of pipe, and handling of surplus tubulars and returns, the company said in a release July 24. In addition, Saipem, which secured an EPCI contract from TotalEnergies, has selected Tenaris to provide the seamless line pipe and thermal insulation coatings package for the project, Tenaris said. The line pipe and coating contract includes the provision of 190 km of coated carbon steel seamless pipes for subsea production flowlines and for water and gas injection lines. These pipelines will be installed at water depths of up to 1,100 m utilizing S-Lay and J-Lay vessels. Tenaris will serve these operations from a yard that it has leased in Suriname. TotalEnergies plans to produce 220,000 b/d of oil in the central area of Block 58 through shallow and deep water wells connected to a floating production, storage, and offloading (FPSO) vessel. GranMorgu will include production from the Krabdagu and Sapakara oil discoveries, where an appraisal drilling campaign completed in 2023 confirmed gross estimated recoverable resources of more than 750 million bbl of oil. First oil is expected in 2028 (OGJ Online, Nov. 14, 2024).

Read More »

AI Deployments are Reshaping Intra-Data Center Fiber and Communications

Artificial Intelligence is fundamentally changing the way data centers are architected, with a particular focus on the demands placed on internal fiber and communications infrastructure. While much attention is paid to the fiber connections between data centers or to end-users, the real transformation is happening inside the data center itself, where AI workloads are driving unprecedented requirements for bandwidth, low latency, and scalable networking. Network Segmentation and Specialization Inside the modern AI data center, the once-uniform network is giving way to a carefully divided architecture that reflects the growing divergence between conventional cloud services and the voracious needs of AI. Where a single, all-purpose network once sufficed, operators now deploy two distinct fabrics, each engineered for its own unique mission. The front-end network remains the familiar backbone for external user interactions and traditional cloud applications. Here, Ethernet still reigns, with server-to-leaf links running at 25 to 50 gigabits per second and spine connections scaling to 100 Gbps. Traffic is primarily north-south, moving data between users and the servers that power web services, storage, and enterprise applications. This is the network most people still imagine when they think of a data center: robust, versatile, and built for the demands of the internet age. But behind this familiar façade, a new, far more specialized network has emerged, dedicated entirely to the demands of GPU-driven AI workloads. In this backend, the rules are rewritten. Port speeds soar to 400 or even 800 gigabits per second per GPU, and latency is measured in sub-microseconds. The traffic pattern shifts decisively east-west, as servers and GPUs communicate in parallel, exchanging vast datasets at blistering speeds to train and run sophisticated AI models. The design of this network is anything but conventional: fat-tree or hypercube topologies ensure that no single link becomes a bottleneck, allowing thousands of

Read More »

ABB and Applied Digital Build a Template for AI-Ready Data Centers

Toward the Future of AI Factories The ABB–Applied Digital partnership signals a shift in the fundamentals of data center development, where electrification strategy, hyperscale design and readiness, and long-term financial structuring are no longer separate tracks but part of a unified build philosophy. As Applied Digital pushes toward REIT status, the Ellendale campus becomes not just a development milestone but a cornerstone asset: a long-term, revenue-generating, AI-optimized property underpinned by industrial-grade power architecture. The 250 MW CoreWeave lease, with the option to expand to 400 MW, establishes a robust revenue base and validates the site’s design as AI-first, not cloud-retrofitted. At the same time, ABB is positioning itself as a leader in AI data center power architecture, setting a new benchmark for scalable, high-density infrastructure. Its HiPerGuard Medium Voltage UPS, backed by deep global manufacturing and engineering capabilities, reimagines power delivery for the AI era, bypassing the limitations of legacy low-voltage systems. More than a component provider, ABB is now architecting full-stack electrification strategies at the campus level, aiming to make this medium-voltage model the global standard for AI factories. What’s unfolding in North Dakota is a preview of what’s coming elsewhere: AI-ready campuses that marry investment-grade real estate with next-generation power infrastructure, built for a future measured in megawatts per rack, not just racks per row. As AI continues to reshape what data centers are and how they’re built, Ellendale may prove to be one of the key locations where the new standard was set.

Read More »

Amazon’s Project Rainier Sets New Standard for AI Supercomputing at Scale

Supersized Infrastructure for the AI Era As AWS deploys Project Rainier, it is scaling AI compute to unprecedented heights, while also laying down a decisive marker in the escalating arms race for hyperscale dominance. With custom Trainium2 silicon, proprietary interconnects, and vertically integrated data center architecture, Amazon joins a trio of tech giants, alongside Microsoft’s Project Stargate and Google’s TPUv5 clusters, who are rapidly redefining the future of AI infrastructure. But Rainier represents more than just another high-performance cluster. It arrives in a moment where the size, speed, and ambition of AI infrastructure projects have entered uncharted territory. Consider the past several weeks alone: On June 24, AWS detailed Project Rainier, calling it “a massive, one-of-its-kind machine” and noting that “the sheer size of the project is unlike anything AWS has ever attempted.” The New York Times reports that the primary Rainier campus in Indiana could include up to 30 data center buildings. Just two days later, Fermi America unveiled plans for the HyperGrid AI campus in Amarillo, Texas on a sprawling 5,769-acre site with potential for 11 gigawatts of power and 18 million square feet of AI data center capacity. And on July 1, Oracle projected $30 billion in annual revenue from a single OpenAI cloud deal, tied to the Project Stargate campus in Abilene, Texas. As Data Center Frontier founder Rich Miller has observed, the dial on data center development has officially been turned to 11. Once an aspirational concept, the gigawatt-scale campus is now materializing—15 months after Miller forecasted its arrival. “It’s hard to imagine data center projects getting any bigger,” he notes. “But there’s probably someone out there wondering if they can adjust the dial so it goes to 12.” Against this backdrop, Project Rainier represents not just financial investment but architectural intent. Like Microsoft’s Stargate buildout in

Read More »

Google and CTC Global Partner to Fast-Track U.S. Power Grid Upgrades

On June 17, 2025, Google and CTC Global announced a joint initiative to accelerate the deployment of high-capacity power transmission lines using CTC’s U.S.-manufactured ACCC® advanced conductors. The collaboration seeks to relieve grid congestion by rapidly upgrading existing infrastructure, enabling greater integration of clean energy, improving system resilience, and unlocking capacity for hyperscale data centers. The effort represents a rare convergence of corporate climate commitments, utility innovation, and infrastructure modernization aligned with the public interest. As part of the initiative, Google and CTC issued a Request for Information (RFI) with responses due by July 14. The RFI invites utilities, state energy authorities, and developers to nominate transmission line segments for potential fast-tracked upgrades. Selected projects will receive support in the form of technical assessments, financial assistance, and workforce development resources. While advanced conductor technologies like ACCC® can significantly improve the efficiency and capacity of existing transmission corridors, technological innovation alone cannot resolve the grid’s structural challenges. Building new or upgraded transmission lines in the U.S. often requires complex permitting from multiple federal, state, and local agencies, and frequently faces legal opposition, especially from communities invoking Not-In-My-Backyard (NIMBY) objections. Today, the average timeline to construct new interstate transmission infrastructure stretches between 10 and 12 years, an untenable lag in an era when grid reliability is under increasing stress. In 2024, the Federal Energy Regulatory Commission (FERC) reported that more than 2,600 gigawatts (GW) of clean energy and storage projects were stalled in the interconnection queue, waiting for sufficient transmission capacity. The consequences affect not only industrial sectors like data centers but also residential areas vulnerable to brownouts and peak load disruptions. What is the New Technology? At the center of the initiative is CTC Global’s ACCC® (Aluminum Conductor Composite Core) advanced conductor, a next-generation overhead transmission technology engineered to boost grid

Read More »

CoreSite’s Denver Power Play: Acquisition of Historic Carrier Hotel Supercharges Interconnection Capabilities

In this episode of the Data Center Frontier Show podcast, we unpack one of the most strategic data center real estate moves of 2025: CoreSite’s acquisition of the historic Denver Gas and Electric Building. With this transaction, CoreSite, an American Tower company, cements its leadership in the Rocky Mountain region’s interconnection landscape, expands its DE1 facility, and streamlines access to Google Cloud and the Any2Denver peering exchange. Podcast guests Yvonne Ng, CoreSite’s General Manager and Vice President for the Central Region, and Adam Post, SVP of Finance and Corporate Development, offer in-depth insights into the motivations behind the deal, the implications for regional cloud and network ecosystems, and what it means for Denver’s future as a cloud interconnection hub. Carrier Hotel to Cloud Hub Located at 910 15th Street in downtown Denver, the Denver Gas and Electric Building is widely known as the most network-dense facility in the region. Long the primary interconnection hub for the Rocky Mountains, the building has now been fully acquired by CoreSite, bringing ownership and operations of the DE1 data center under a single umbrella. “This is a strategic move to consolidate control and expand our capabilities,” said Ng. “By owning the building, we can modernize infrastructure more efficiently, double the space and power footprint of DE1, and deliver an unparalleled interconnection ecosystem.” The acquisition includes the facility’s operating businesses and over 100 customers. CoreSite will add approximately 3 critical megawatts (CMW) of data center capacity, nearly doubling DE1’s footprint. Interconnection in the AI Era As AI, multicloud strategies, and real-time workloads reshape enterprise architecture, interconnection has never been more vital. CoreSite’s move elevates Denver’s role in this transformation. With the deal, CoreSite becomes the only data center provider in the region offering direct connections to major cloud platforms, including the dedicated Google Cloud Platform

Read More »

Texas Senate Bill 6: A Bellwether On How States May Approach Data Center Energy Use

Texas isn’t the first state to begin attempting to regulate energy use statewide. The impact of this legislation could shape how other states, of which there are at least a dozen in process, approach their own programs. What are Other States Doing? There’s a clear shift toward targeted utility regulation for mega-load data centers. States are increasingly requiring cost alignment, with large consumers bearing infrastructure costs rather than relying on residential cross-subsidization, and implementing specialized contract/tariff terms, taking advantage of these huge contracts to uniquely tailor each one. These agreements are also being used to enforce environmental responsibility through reporting mandates and permitting. And those states still focusing on incentivization to draw data center business are coupling incentives with guardrails, balancing investment attraction with equitable distribution. What follows is a brief overview of U.S. states that have enacted or proposed special utility regulations and requirements for data centers. The focus is on tariffs, cost-allocation mechanisms, green mandates, billing structures, and transparency rules. California SB 57 (2025): Introduces a special electricity tariff for large users—including data centers—with embedded zero-carbon procurement targets, aiming to integrate grid reliability with emissions goals. AB 222 (2025): Targets consumption transparency, requiring data centers to report energy usage with a specific focus on AI-driven load. Broader California Public Utilities actions: Proposals for efficiency mandates like airflow containment via Title 24; opening utility rate cases to analyze infrastructure cost recovery from large consumers. Georgia Public Service Commission rule changes (January 2025): Georgia Power can impose minimum billing, longer contract durations, and special terms for customers with loads >100 MW—chiefly data centers. SB 34: Mandates that data centers either assume full infrastructure costs or pay equitably—not distributing these costs to residential users. Ohio AEP Ohio proposed in 2024: For loads >25 MW (data centers, crypto), demand minimum charges, 10-year contracts, and exit penalties before new infrastructure

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one that is ramping up its investments into AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs). In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote $200 billion between them to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are way higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads. Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet as a non-tech company showing off technology it has become a regular at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech. The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually; and the agricultural work force continues to shrink. (This is my hint to the anti-immigration crowd). John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app. While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year. 1. Agents: the next generation of automation AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for companies, and that recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail. Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models in these two techniques and more. The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks. Going all-in on red teaming pays practical, competitive dividends It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S.’s National Institute of Standards and Technology (NIST), which all had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization. OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »