
Runloop lands $7M to power AI coding agents with cloud-based devboxes


Runloop, a San Francisco-based infrastructure startup, has raised $7 million in seed funding to address what its founders call the “production gap” — the critical challenge of deploying AI coding agents beyond experimental prototypes into real-world enterprise environments.

The funding round, led by The General Partnership with participation from Blank Ventures, comes as the artificial intelligence code tools market is projected to reach $30.1 billion by 2032, growing at a compound annual growth rate of 27.1%, according to multiple industry reports. The investment signals growing investor confidence in infrastructure plays that enable AI agents to work at enterprise scale.

Runloop’s platform addresses a fundamental question that has emerged as AI coding tools proliferate: where do AI agents actually run when they need to perform complex, multi-step coding tasks?

“I think long term the dream is that for every employee at every big company, there’s maybe five or 10 different digital employees, or AI agents that are helping those people do their jobs,” explained Jonathan Wall, Runloop’s co-founder and CEO, in an exclusive interview with VentureBeat. Wall previously co-founded Google Wallet and later founded fintech startup Index, which Stripe acquired.




The analogy Wall uses is telling: “If you think about hiring a new employee at your average tech company, your first day on the job, they’re like, ‘Okay, here’s your laptop, here’s your email address, here are your credentials. Here’s how you sign into GitHub.’ You probably spend your first day setting that environment up.”

That same principle applies to AI agents, Wall argues. “If you expect these AI agents to be able to do the kinds of things people are doing, they’re going to need all the same tools. They’re going to need their own work environment.”

Runloop focused initially on the coding vertical based on a strategic insight about the nature of programming languages versus natural language. “Coding languages are far narrower and stricter than something like English,” Wall explained. “They have very strict syntax. They’re very pattern driven. These are things LLMs are really good at.”

More importantly, coding offers what Wall calls “built-in verification functions.” An AI agent writing code can continuously validate its progress by running tests, compiling code, or using linting tools. “Those kind of tools aren’t really available in other environments. If you’re writing an essay, I guess you could do spell check, but evaluating the relative quality of an essay while you’re partway through it — there’s not a compiler.”
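The verification loop Wall describes can be sketched in a few lines. This is an illustrative example only, not Runloop's API: it shows how agent-written code gets objective mid-task feedback from a compile check and a test run, the kind of signal an essay-in-progress lacks.

```python
# Illustrative sketch of "built-in verification functions" for a coding
# agent: compile the candidate source (a compiler-style gate), then run
# tests against it. Function and variable names here are hypothetical.

def verify_candidate(source: str, tests) -> dict:
    """Compile agent-written source, then run each test against it."""
    try:
        code = compile(source, "<agent>", "exec")  # syntax check, like a compiler pass
    except SyntaxError as e:
        return {"compiled": False, "passed": 0, "total": len(tests), "error": str(e)}
    ns = {}
    exec(code, ns)  # load the candidate into a fresh namespace
    passed = sum(1 for t in tests if t(ns))
    return {"compiled": True, "passed": passed, "total": len(tests)}

# A candidate the agent might emit, plus tests that act as the verifier.
candidate = "def add(a, b):\n    return a + b\n"
tests = [
    lambda ns: ns["add"](2, 3) == 5,
    lambda ns: ns["add"](-1, 1) == 0,
]
result = verify_candidate(candidate, tests)
print(result)  # {'compiled': True, 'passed': 2, 'total': 2}
```

An agent can call a check like this after every edit and use the pass count as a progress signal — the "compiler and the tests" Wall contrasts with essay writing.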

This technical advantage has proven prescient. The AI code tools market has indeed emerged as one of the fastest-growing segments in enterprise AI, driven by tools like GitHub Copilot, which Microsoft reports is used by millions of developers, and OpenAI’s recently announced Codex improvements.

Inside Runloop’s cloud-based devboxes: enterprise AI agent infrastructure

Runloop’s core product, called “devboxes,” provides isolated, cloud-based development environments where AI agents can safely execute code with full filesystem and build tool access. These environments are ephemeral — they can be spun up and torn down dynamically based on demand.

“You can stand them up, tear them down. You can spin up 1,000, use 1,000 for an hour, then maybe you’re done with some particular task. You don’t need 1,000 so you can tear them down,” Wall said.
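The lifecycle Wall describes — stand up a fleet, use it, tear it down — can be sketched as a pool with guaranteed cleanup. This is a hypothetical mock of the pattern, not Runloop's SDK; the class and method names are invented for illustration.

```python
# Hypothetical sketch of the ephemeral-devbox lifecycle described above.
# A real devbox is an isolated cloud VM; this in-memory stand-in only
# models the spin-up / use / tear-down pattern.
import contextlib

class Devbox:
    """Stand-in for an isolated, cloud-based development environment."""
    def __init__(self, box_id: int):
        self.box_id = box_id
        self.running = True

    def run(self, command: str) -> str:
        # A real devbox would execute this inside its own filesystem.
        return f"devbox-{self.box_id}$ {command}"

    def teardown(self):
        self.running = False

@contextlib.contextmanager
def devbox_pool(n: int):
    """Spin up n devboxes, yield them, and always tear them down."""
    boxes = [Devbox(i) for i in range(n)]
    try:
        yield boxes
    finally:
        for b in boxes:
            b.teardown()

# "Use 1,000 for an hour, then tear them down."
with devbox_pool(1000) as boxes:
    outputs = [b.run("pytest -q") for b in boxes[:3]]  # a few, for brevity
print(len(boxes), all(not b.running for b in boxes))
```

The context-manager shape mirrors the economics of the product: capacity exists only for the duration of the task, so nothing idle is left running afterward.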

One customer example illustrates the platform’s utility: a company that builds AI agents to automatically write unit tests for improving code coverage. When they detect production issues in their customers’ systems, they deploy thousands of devboxes simultaneously to analyze code repositories and generate comprehensive test suites.

“They’ll onboard a new company and be like, ‘Hey, the first thing we should do is just look at your code coverage everywhere, notice where it’s lacking. Go write a whole ton of tests and then cherry pick the most valuable ones to send to your engineers for code review,’” Wall explained.

Runloop customer success: six-month time savings and 200% revenue growth

Despite only launching billing in March and self-service signup in May, Runloop has achieved significant momentum. The company reports “a few dozen customers,” including Series A companies and major model laboratories, with revenue growth exceeding 200% since March.

“Our customers tend to be of the size and shape of people who are very early on the AI curve, and are pretty sophisticated about using AI,” Wall noted. “That right now, at least, tends to be Series A companies — companies that are trying to build AI as their core competency — or some of the model labs who obviously are the most sophisticated about it.”

The customer impact appears substantial. Dan Robinson, CEO of Detail.dev, a Runloop customer, said in a statement: “Runloop has been killer for our business. We couldn’t have gotten to market so quickly without it. Instead of burning months building infrastructure, we’ve been able to focus on what we’re passionate about: creating agents that crush tech debt… Runloop basically compressed our go-to-market timeline by six months.”

AI code testing and evaluation: moving beyond simple chatbot interactions

Runloop’s second major product, Public Benchmarks, addresses another critical need: standardized testing for AI coding agents. Traditional AI evaluation focuses on single interactions between users and language models. Runloop’s approach is fundamentally different.

“What we’re doing is we’re judging potentially hundreds of tool uses, hundreds of LLM calls, and we’re judging a composite or longitudinal outcome of an agent run,” Wall explained. “It’s far more longitudinal, and very importantly, it’s context rich.”

For example, when evaluating an AI agent’s ability to patch code, “you can’t evaluate the diff or the response from the LLM. You have to put it into the context of the full code base and use something like a compiler and the tests.”
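The context-rich evaluation Wall describes can be sketched as: apply the agent's patch to the code base, compile everything, run the full test suite, and score the composite outcome. This is a minimal illustration, not Runloop's Public Benchmarks product; all names are hypothetical.

```python
# Minimal sketch of context-rich, longitudinal evaluation: the patch is
# judged inside the full code base with a compile gate and tests, not
# from the diff alone. Names and structure are illustrative assumptions.

CODEBASE = {"mathlib.py": "def square(x):\n    return x * x\n"}

def apply_patch(codebase: dict, filename: str, new_source: str) -> dict:
    """Return a copy of the code base with the agent's patch applied."""
    patched = dict(codebase)
    patched[filename] = new_source
    return patched

def evaluate_run(codebase: dict, tests) -> dict:
    """Composite score over the whole run: compile + every test, in context."""
    ns = {}
    for name, src in codebase.items():
        exec(compile(src, name, "exec"), ns)  # compiler-style gate per file
    passed = sum(1 for t in tests if t(ns))
    return {"passed": passed, "total": len(tests), "score": passed / len(tests)}

# The agent's patch adds a cube() helper alongside the existing square().
patch = "def square(x):\n    return x * x\n\ndef cube(x):\n    return x ** 3\n"
patched = apply_patch(CODEBASE, "mathlib.py", patch)
tests = [
    lambda ns: ns["square"](4) == 16,  # existing behavior preserved
    lambda ns: ns["cube"](3) == 27,    # new behavior added
]
result = evaluate_run(patched, tests)
print(result)  # {'passed': 2, 'total': 2, 'score': 1.0}
```

A real harness would judge hundreds of tool uses and LLM calls this way, but the shape is the same: the unit of evaluation is the whole agent run against the whole code base, not a single model response.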

This capability has attracted model laboratories as customers, who use Runloop’s evaluation infrastructure to verify model behavior and support training processes.

The AI coding tools market has attracted massive investment and attention from technology giants. Microsoft’s GitHub Copilot leads in market share, while Google recently announced new AI developer tools, and OpenAI continues advancing its Codex platform.

However, Wall sees this competition as validation rather than threat. “I hope lots of people build AI coding bots,” he said, drawing an analogy to Databricks in the machine learning space. “Spark is open source, it’s something anyone can use… Why do people use Databricks? Well, because actually deploying and running that is pretty difficult.”

Wall anticipates the market will evolve toward domain-specific AI coding agents rather than general-purpose tools. “I think what we’ll start to see is domain specific agents that kind of outperform those things for a specific task,” such as AI agents specialized in security testing, database performance optimization, or specific programming frameworks.

Runloop’s revenue model and growth strategy for enterprise AI infrastructure

Runloop operates on a usage-based pricing model with a modest monthly fee plus charges based on actual compute consumption. For larger enterprise customers, the company is developing annual contracts with guaranteed minimum usage commitments.

The $7 million in funding will primarily support engineering and product development. “The incubation of an infrastructure platform is a little bit longer,” Wall noted. “We’re just now starting to really broadly go to market.”

The company’s team of 12 includes veterans from Vercel, Scale AI, Google, and Stripe — experience that Wall believes is crucial for building enterprise-grade infrastructure. “These are pretty seasoned infrastructure people that are pretty senior. It would be pretty difficult for every single company to go assemble a team like this to solve this problem, and they more or less need to if they didn’t use something like Runloop.”

What’s next for AI coding agents and enterprise deployment platforms

As enterprises increasingly adopt AI coding tools, the infrastructure to support them becomes critical. Industry analysts project continued rapid growth, with the global AI code tools market expanding from $4.86 billion in 2023 to over $25 billion by 2030.

Wall’s vision extends beyond coding to other domains where AI agents will need sophisticated work environments. “Over time, we think we’ll probably take on other verticals,” he said, though coding remains the immediate focus due to its technical advantages for AI deployment.

The fundamental question, as Wall frames it, is practical: “If you’re a CSO or a CIO at one of these companies, and your team wants to use… five agents each, how are you possibly going to onboard that and bring into your environment 25 agents?”

For Runloop, the answer lies in providing the infrastructure layer that makes AI agents as easy to deploy and manage as traditional software applications — turning the vision of digital employees from prototype to production reality.

“Everyone believes you’re going to have this digital employee base. How do you onboard them?” Wall said. “If you have a platform that these things are capable of running on, and you vetted that platform, that becomes the scalable means for people to start broadly using agents.”


Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy, bitcoin, and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.


TechnipFMC Sees Surge in Q2 Profit

TechnipFMC PLC has reported $285.5 million in adjusted net income for the second quarter, up 99.8 percent from the prior three-month period and 51.1 percent against Q2 2024. The adjusted diluted earnings per share of 68 cents beat the Zacks Consensus Estimate of $0.57. TechnipFMC kept its dividend at $0.05


Shell Profit Falls as Traders Grapple with Volatility

Shell Plc reported second-quarter profit that dropped by 32 percent but beat analyst expectations, which had been lowered after a bearish trading update earlier this month. Shell’s shares were up 2.7 percent in London on Thursday morning, after the company reported the earnings beat and maintained its pace of buying back $3.5 billion of shares in the quarter. Analysts noted that Shell’s cash flow from operations of $12.3 billion was well ahead of consensus expectations of $10.1 billion. The drop in profit from a year earlier reflected lower oil and gas prices, as well as weaker performance from the company’s oil traders. Shell Chief Executive Officer Wael Sawan has spent the past two years seeking to cut costs, improve reliability and shed underperforming assets in an effort to close a valuation gap with Shell’s US rivals. The “sprint” has begun to pay off, as the company has outperformed its Big Oil peers so far in 2025. Maintaining the buyback level “should be well-received,” said Jefferies analyst Giacomo Romeo. “It’s been quarter after quarter of just steady delivery,” Sawan said in an interview with Bloomberg Television. “That’s 15 quarters in a row where we are delivering 3 or more billion dollars per quarter buybacks and that’s key for us.” Shell reduced its structural costs by a further $800 million in the first half, bringing the pre-tax total since 2022 to $3.9 billion, it said. Net debt rose to $43.2 billion from $41.5 billion in the first quarter. Analysts had cut their forecasts for earnings after Shell warned July 7 that earnings from its fabled trading division would be “significantly lower” than the prior quarter. Shell’s traders are often one of its biggest profit boosters, and Sawan said in March that its traders haven’t lost money in a single quarter over the past


UK Oil Regulator Fines Chrysaor

UK oil and gas regulator the North Sea Transition Authority (NSTA) announced in a statement posted on its site this week that Chrysaor has been fined GBP 150,000 ($200,591) for “vent breaches”. The NSTA noted in its statement that the company exceeded its consent by more than 145 tons and that the breaches took place at the Armada hub in the central North Sea. The NSTA added in the statement that Chrysaor “failed to identify and then inform the NSTA of the breach for seven months, despite the regulator’s repeated messaging to industry that production needs to become increasingly clean”. In its statement, the NSTA highlighted that Chrysaor, which it pointed out was acquired by Harbour Energy in 2021, blamed the breach on high winds preventing it from relighting the flare on the Armada platform. The Armada processing hub serves the Hawkins, Fleming, Drake, Maria, and Seymour fields and has a capacity of more than 20,000 barrels per day, the NSTA noted. “In January 2022 an unplanned shut-in led to one vent event,” the NSTA said in the statement. “This was followed in August the same year when there was a further event after a start-up from a shutdown; in October high winds caused the flare to extinguish with the operator being unable to relight the flare for three days,” it added. “And in November, the flare was extinguished due to a depressurization and there was a delay in relighting due to the weather conditions. This venting continued for three days,” it continued. “In total, Chrysaor vented 370.046 tons at Armada from 1 January 2022 to 31 December 2022, exceeding its consent by 145.566 tons, almost 65 percent. Venting consent was breached in October 2022 and Chrysaor failed to inform the NSTA until May 2023, which indicated that it was unaware of


Valaris Revenue Slips

Valaris Ltd. on Wednesday reported $615.2 million in revenue for the second quarter, down one percent from the prior three-month period due to fewer operating days and lower amortized revenue for its floater fleet. That was partially offset by more operating days and higher average daily revenue for the jackup fleet. Revenue from floaters was $362.9 million, down 10 percent against Q1. Revenue from jackups was $238 million, up 11 percent. ARO Drilling, Valaris’ 50-50 venture with Saudi Arabian Oil Co., contributed $139.9 million, down four percent. Total revenue exclusive of reimbursable items came in at $572.3 million, compared to $577.8 million for Q1. Reimbursable revenue was $42.9 million. “Since reporting our first quarter results, we have secured new contracts with associated revenue backlog of more than $1 billion, increasing our total backlog to approximately $4.7 billion”, president and chief executive Anton Dibowitz said. “These awards include attractive contracts for three seventh-generation drillships, and we have now secured work for three of our four drillships with near-term availability”. “As expected, the pipeline of floater opportunities we have discussed in recent quarters are [sic] converting into contracts, and we anticipate additional awards across the industry in the coming months”, Dibowitz added. Valaris noted, “Exclusive of reimbursable items, contract drilling expense decreased to $355 million from $374 million in the first quarter 2025 primarily due to a favorable arbitration outcome related to previously disclosed patent license litigation, which led to a $17 million accrual reversal, as well as lower amortized expense for the floater fleet and a reduction in costs associated with three retired semisubmersibles that were sold for recycling during the quarter”. While revenue fell, the Hamilton, Bermuda-based driller rebounded from a net loss of $39.2 million for Q1 to a net profit of $114.2 million for Q2. “Net income included tax expense


Japan’s Inpex Acquires Stakes in Several Norwegian Sea Assets

Tokyo, Japan-based Inpex Corporation said it has acquired stakes in several oil and gas assets in the Norwegian Sea. The company’s subsidiary Inpex Norway Co. Ltd., through its local Norwegian entity, Inpex Idemitsu Norge AS (IIN), has entered into an agreement with Pandion Energy AS to acquire 10 percent participating interest in each of the Valhall and Hod oil and gas fields and 20 percent participating interest in each of the Mistral and Slagugle oil and gas discoveries. The Valhall and Hod fields are currently producing hydrocarbons, while the Mistral and Slagugle oil and gas discoveries have yet to be developed, Inpex said in a news release. IIN holds numerous licenses in the northern North Sea, the northern Norwegian Sea, and the Barents Sea, and has participated in steady production operations from fields in the North Sea, according to the release. With the acquisition of the new stakes, IIN’s oil and gas production volume will increase to about 27,000 barrels per day (bpd) from approximately 23,000 bpd, the company said. Further, the acquisitions are expected to expand Inpex’s business portfolio in the future through the development of the Mistral and Slagugle oil and gas discoveries, as well as the potential for exploration and development in the surrounding areas, the release said.

Bonaparte CCS Project Awarded Major Project Status

Earlier in the month, the Bonaparte CCS Assessment Joint Venture was awarded “Major Project” status by the Australian government. The project, which is operated by company subsidiary Inpex Browse E&P Pty Ltd holding a 53 percent stake, is the first offshore carbon capture and storage (CCS) project to receive the designation in the country, according to an earlier statement. TotalEnergies CCS Australia Pty Ltd holds a 26 percent stake while Woodside Energy Ltd holds 21 percent. The award “highlights the project’s recognized potential to


Uniper, Tourmaline Ink 234 Bcf Gas Deal

Canada’s Tourmaline Oil Corp. has signed an agreement with German power and gas utility Uniper SE for an eight-year supply of natural gas totaling 234 billion cubic feet (Bcf). “Under the LNG Netback Supply Agreement, Tourmaline will deliver gas to the ANR SE trading hub in southeast Louisiana, USA”, said a joint statement Wednesday. “The contract is based on TTF (Dutch Title Transfer Facility) pricing, providing Tourmaline with international price exposure”. Tourmaline president and chief executive Mike Rose said, “This long-term supply agreement with Uniper supports the continued execution of our market diversification strategy. We’re proud to be supplying Canadian natural gas to meet rising demand in international markets and to enhance European energy security”. Uniper chief commercial officer Carsten Poppinga said the deal “further diversifies Uniper’s LNG supply sourcing portfolio, an important aspect of our European security of supply objectives”.

North American Power Exit

In the first quarter Uniper sold its North American power assets but retained its gas portfolio and hydrogen-related activities. The divestment contributes to the fulfillment of fair-competition guardrails imposed by the European Commission in approving Uniper’s bailout by the German government in late 2022. The sale covered “power purchase and sale contracts and energy management agreements in the North American power markets ERCOT (North, South, West and Houston), WEST (WECC and CAISO) and CENTRAL (MISO and SPP) through a number of transactions with several counterparties”, Uniper said in a press release February 5. It did not name its buyers.

Woodside Deal

Uniper’s gas deal with Tourmaline comes three months after Uniper committed to one million metric tons per annum (MMtpa) of liquefied natural gas (LNG) for 13 years from Woodside Energy Group Ltd.’s Louisiana LNG.
The agreement with the Australian company also secures an additional supply of up to one MMtpa from the rest


Spire to Buy Duke Tennessee Gas Business for $2.5B

Spire Inc. agreed to pay $2.5 billion to acquire Duke Energy Corp.’s Piedmont Natural Gas business in Tennessee to expand in the growing Nashville region. The deal will give Spire Tennessee’s largest investor-owned gas utility, with almost 3,800 miles (6,100 kilometers) of distribution and transmission pipelines and a liquefied natural gas facility, serving about 200,000 Nashville-area customers. The price represents a multiple of 1.5 times Piedmont’s estimated 2026 rate base, according to a statement Tuesday. Spire is expanding in the middle Tennessee region, where Nashville is one of the fastest-growing US cities. The deal also reflects a long-term trend of utilities shedding non-core assets, especially gas companies, to focus on more stable, regulated operations. Duke said it would use about $800 million of the proceeds to offset debt at Piedmont to maintain its capital structure, with the balance going to its five-year capital plan. Spire, based in St. Louis, is one of the largest publicly traded natural gas companies in the country, serving Alabama, Mississippi and Missouri. “This acquisition is a natural fit for Spire, allowing us to expand our core utility business and increase our utility customer base to nearly two million homes and businesses,” Scott Doyle, Spire’s chief executive officer, said in the statement.


Micron unveils PCIe Gen6 SSD to power AI data center workloads

Competitive positioning

With the launch of the 9650 PCIe Gen6 SSD, Micron competes with the enterprise SSD offerings of Samsung and SK Hynix, the dominant players in the SSD market. In December last year, SK Hynix announced the development of the PS1012 U.2 Gen5 PCIe SSD for massive high-capacity storage for AI data centers. The PM1743 is Samsung’s PCIe Gen5 offering, with 14,000 MBps sequential read, designed for high-performance enterprise workloads. According to Faruqui, PCIe Gen6 data center SSDs are best suited for AI inference performance enhancement. However, large-scale adoption is still months away, as no current CPU platforms support PCIe 6.0; only Nvidia’s Blackwell-based GPUs have native PCIe 6.0 x16 support, with interoperability tests in progress. He added that PCIe Gen6 SSDs will see very delayed adoption in the PC segment but imminent adoption in the second half of 2025 in AI, data centers, high-performance computing (HPC), and enterprise storage solutions. Micron has also introduced two additional SSDs alongside the 9650. The 6600 ION SSD delivers 122TB in an E3.S form factor and is targeted at hyperscale and enterprise data centers looking to consolidate server infrastructure and build large AI data lakes. A 245TB variant is on the roadmap. The 7600 PCIe Gen5 SSD, meanwhile, is aimed at mixed workloads that require lower latency.


AI Deployments are Reshaping Intra-Data Center Fiber and Communications

Artificial Intelligence is fundamentally changing the way data centers are architected, with a particular focus on the demands placed on internal fiber and communications infrastructure. While much attention is paid to the fiber connections between data centers or to end-users, the real transformation is happening inside the data center itself, where AI workloads are driving unprecedented requirements for bandwidth, low latency, and scalable networking.

Network Segmentation and Specialization

Inside the modern AI data center, the once-uniform network is giving way to a carefully divided architecture that reflects the growing divergence between conventional cloud services and the voracious needs of AI. Where a single, all-purpose network once sufficed, operators now deploy two distinct fabrics, each engineered for its own unique mission. The front-end network remains the familiar backbone for external user interactions and traditional cloud applications. Here, Ethernet still reigns, with server-to-leaf links running at 25 to 50 gigabits per second and spine connections scaling to 100 Gbps. Traffic is primarily north-south, moving data between users and the servers that power web services, storage, and enterprise applications. This is the network most people still imagine when they think of a data center: robust, versatile, and built for the demands of the internet age. But behind this familiar façade, a new, far more specialized network has emerged, dedicated entirely to the demands of GPU-driven AI workloads. In this backend, the rules are rewritten. Port speeds soar to 400 or even 800 gigabits per second per GPU, and latency is measured in sub-microseconds. The traffic pattern shifts decisively east-west, as servers and GPUs communicate in parallel, exchanging vast datasets at blistering speeds to train and run sophisticated AI models.
The design of this network is anything but conventional: fat-tree or hypercube topologies ensure that no single link becomes a bottleneck, allowing thousands of


ABB and Applied Digital Build a Template for AI-Ready Data Centers

Toward the Future of AI Factories

The ABB–Applied Digital partnership signals a shift in the fundamentals of data center development, where electrification strategy, hyperscale design and readiness, and long-term financial structuring are no longer separate tracks but part of a unified build philosophy. As Applied Digital pushes toward REIT status, the Ellendale campus becomes not just a development milestone but a cornerstone asset: a long-term, revenue-generating, AI-optimized property underpinned by industrial-grade power architecture. The 250 MW CoreWeave lease, with the option to expand to 400 MW, establishes a robust revenue base and validates the site’s design as AI-first, not cloud-retrofitted. At the same time, ABB is positioning itself as a leader in AI data center power architecture, setting a new benchmark for scalable, high-density infrastructure. Its HiPerGuard Medium Voltage UPS, backed by deep global manufacturing and engineering capabilities, reimagines power delivery for the AI era, bypassing the limitations of legacy low-voltage systems. More than a component provider, ABB is now architecting full-stack electrification strategies at the campus level, aiming to make this medium-voltage model the global standard for AI factories. What’s unfolding in North Dakota is a preview of what’s coming elsewhere: AI-ready campuses that marry investment-grade real estate with next-generation power infrastructure, built for a future measured in megawatts per rack, not just racks per row. As AI continues to reshape what data centers are and how they’re built, Ellendale may prove to be one of the key locations where the new standard was set.


Amazon’s Project Rainier Sets New Standard for AI Supercomputing at Scale

Supersized Infrastructure for the AI Era

As AWS deploys Project Rainier, it is scaling AI compute to unprecedented heights, while also laying down a decisive marker in the escalating arms race for hyperscale dominance. With custom Trainium2 silicon, proprietary interconnects, and vertically integrated data center architecture, Amazon joins a trio of tech giants, alongside Microsoft’s Project Stargate and Google’s TPUv5 clusters, who are rapidly redefining the future of AI infrastructure. But Rainier represents more than just another high-performance cluster. It arrives in a moment where the size, speed, and ambition of AI infrastructure projects have entered uncharted territory. Consider the past several weeks alone: On June 24, AWS detailed Project Rainier, calling it “a massive, one-of-its-kind machine” and noting that “the sheer size of the project is unlike anything AWS has ever attempted.” The New York Times reports that the primary Rainier campus in Indiana could include up to 30 data center buildings. Just two days later, Fermi America unveiled plans for the HyperGrid AI campus in Amarillo, Texas on a sprawling 5,769-acre site with potential for 11 gigawatts of power and 18 million square feet of AI data center capacity. And on July 1, Oracle projected $30 billion in annual revenue from a single OpenAI cloud deal, tied to the Project Stargate campus in Abilene, Texas. As Data Center Frontier founder Rich Miller has observed, the dial on data center development has officially been turned to 11. Once an aspirational concept, the gigawatt-scale campus is now materializing — 15 months after Miller forecasted its arrival. “It’s hard to imagine data center projects getting any bigger,” he notes. “But there’s probably someone out there wondering if they can adjust the dial so it goes to 12.” Against this backdrop, Project Rainier represents not just financial investment but architectural intent. Like Microsoft’s Stargate buildout in


Google and CTC Global Partner to Fast-Track U.S. Power Grid Upgrades

On June 17, 2025, Google and CTC Global announced a joint initiative to accelerate the deployment of high-capacity power transmission lines using CTC’s U.S.-manufactured ACCC® advanced conductors. The collaboration seeks to relieve grid congestion by rapidly upgrading existing infrastructure, enabling greater integration of clean energy, improving system resilience, and unlocking capacity for hyperscale data centers. The effort represents a rare convergence of corporate climate commitments, utility innovation, and infrastructure modernization aligned with the public interest.

As part of the initiative, Google and CTC issued a Request for Information (RFI) with responses due by July 14. The RFI invites utilities, state energy authorities, and developers to nominate transmission line segments for potential fast-tracked upgrades. Selected projects will receive support in the form of technical assessments, financial assistance, and workforce development resources.

While advanced conductor technologies like ACCC® can significantly improve the efficiency and capacity of existing transmission corridors, technological innovation alone cannot resolve the grid’s structural challenges. Building new or upgraded transmission lines in the U.S. often requires complex permitting from multiple federal, state, and local agencies, and frequently faces legal opposition, especially from communities raising Not-In-My-Backyard (NIMBY) objections. Today, the average timeline to construct new interstate transmission infrastructure stretches between 10 and 12 years, an untenable lag in an era when grid reliability is under increasing stress. In 2024, the Federal Energy Regulatory Commission (FERC) reported that more than 2,600 gigawatts (GW) of clean energy and storage projects were stalled in the interconnection queue, waiting for sufficient transmission capacity. The consequences affect not only industrial sectors like data centers but also residential areas vulnerable to brownouts and peak load disruptions.

What is the New Technology?

At the center of the initiative is CTC Global’s ACCC® (Aluminum Conductor Composite Core) advanced conductor, a next-generation overhead transmission technology engineered to boost grid

Read More »

CoreSite’s Denver Power Play: Acquisition of Historic Carrier Hotel Supercharges Interconnection Capabilities

In this episode of the Data Center Frontier Show podcast, we unpack one of the most strategic data center real estate moves of 2025: CoreSite’s acquisition of the historic Denver Gas and Electric Building. With this transaction, CoreSite, an American Tower company, cements its leadership in the Rocky Mountain region’s interconnection landscape, expands its DE1 facility, and streamlines access to Google Cloud and the Any2Denver peering exchange. Podcast guests Yvonne Ng, CoreSite’s General Manager and Vice President for the Central Region, and Adam Post, SVP of Finance and Corporate Development, offer in-depth insights into the motivations behind the deal, the implications for regional cloud and network ecosystems, and what it means for Denver’s future as a cloud interconnection hub.

Carrier Hotel to Cloud Hub

Located at 910 15th Street in downtown Denver, the Denver Gas and Electric Building is widely known as the most network-dense facility in the region. Long the primary interconnection hub for the Rocky Mountains, the building has now been fully acquired by CoreSite, bringing ownership and operations of the DE1 data center under a single umbrella. “This is a strategic move to consolidate control and expand our capabilities,” said Ng. “By owning the building, we can modernize infrastructure more efficiently, double the space and power footprint of DE1, and deliver an unparalleled interconnection ecosystem.” The acquisition includes the facility’s operating businesses and over 100 customers. CoreSite will add approximately 3 critical megawatts (CMW) of data center capacity, nearly doubling DE1’s footprint.

Interconnection in the AI Era

As AI, multicloud strategies, and real-time workloads reshape enterprise architecture, interconnection has never been more vital. CoreSite’s move elevates Denver’s role in this transformation. With the deal, CoreSite becomes the only data center provider in the region offering direct connections to major cloud platforms, including the dedicated Google Cloud Platform

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).

In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads.

Separately, last October Amazon CEO Andy Jassy said his company planned total capex of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skill labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction, and commercial landscaping.

The Moline, Illinois-based company has been in business for 187 years, yet it has become a regular among non-tech companies showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. That makes it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs.

At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as the frontier large language models themselves improved. “Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for other companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail.

Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to
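The “LLM as a judge” pattern described above can be sketched in a few lines. This is a hedged illustration, not any vendor’s API: the three stub “models” and the length-plus-relevance judge below are invented stand-ins for real model API calls and a rubric-prompted judge model.

```python
# Minimal sketch of the "LLM as a judge" pattern: ask several (cheap)
# models for candidate answers, then let a judge pick the best one.
# Every function here is a stub standing in for a real model call.

def generate_candidates(prompt, models):
    """Ask several models for an answer to the same prompt."""
    return [model(prompt) for model in models]

def judge(prompt, candidate):
    """Score a candidate answer. A real judge would be an LLM prompted
    with a scoring rubric; here we crudely reward answers that mention
    the prompt's first keyword, breaking ties by length."""
    relevant = prompt.split()[0].lower() in candidate.lower()
    return len(candidate) if relevant else 0

def best_answer(prompt, models):
    """Generate candidates from all models and return the top-scored one."""
    candidates = generate_candidates(prompt, models)
    return max(candidates, key=lambda c: judge(prompt, c))

# Three stub "models" with varying answer quality (hypothetical).
models = [
    lambda p: "Paris.",
    lambda p: "The capital of France is Paris.",
    lambda p: "I am not sure.",
]

print(best_answer("capital of France?", models))
```

In practice the judge itself is a model call, so the pattern trades extra inference cost for reliability, which is exactly why falling model prices make it attractive.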

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability, and safety of AI models using these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model, because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization.

OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases, and controls that prompt-based testing couldn’t find. What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle
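The core idea behind auto-generated rewards, scoring candidate attacks on both success and novelty, can be shown with a toy search loop. This is emphatically not OpenAI’s framework: the stub target model, trigger words, and random search below are invented stand-ins for the paper’s learned attacker and multi-step reinforcement learning, kept only to show how a success-plus-diversity reward shapes which attacks get collected.

```python
# Toy illustration of reward-driven attack generation: an "attacker"
# proposes prompts, and a reward combines attack success with a
# novelty bonus so the search keeps finding *different* attacks.
import random

random.seed(0)  # deterministic for the example

def target_model(prompt):
    """Stub target: misbehaves only if the prompt contains both
    trigger words (a made-up vulnerability)."""
    if "please" in prompt and "secret" in prompt:
        return "UNSAFE OUTPUT"
    return "refused"

def reward(prompt, response, seen):
    """Reward successful attacks, with a bonus for prompts not seen
    before (a crude stand-in for a learned diversity objective)."""
    success = 1.0 if response == "UNSAFE OUTPUT" else 0.0
    novelty = 0.5 if prompt not in seen else 0.0
    return success + novelty

WORDS = ["please", "secret", "tell", "me", "the", "now"]

def search_attacks(steps=200):
    """Random search over 3-word prompts; in the real framework this
    loop is replaced by an RL-trained attacker model."""
    seen, found = set(), []
    for _ in range(steps):
        prompt = " ".join(random.sample(WORDS, 3))
        r = reward(prompt, target_model(prompt), seen)
        if r > 1.0:  # both successful and novel
            found.append(prompt)
        seen.add(prompt)
    return found

attacks = search_attacks()
print(f"distinct successful attacks found: {len(attacks)}")
```

The point of the novelty term is visible even in this toy: without it, the search would happily report the same winning prompt over and over, whereas the paper’s goal is a broad spectrum of distinct attacks.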

Read More »