
Skip the AI ‘bake-off’ and build autonomous agents: Lessons from Intuit and Amex




As generative AI matures, enterprises are shifting from experimentation to implementation—moving beyond chatbots and copilots into the realm of intelligent, autonomous agents. In a conversation with VentureBeat’s Matt Marshall at VB Transform, Ashok Srivastava, SVP and Chief Data Officer at Intuit, and Hillary Packer, EVP and CTO at American Express, detailed how their companies are embracing agentic AI to transform customer experiences, internal workflows and core business operations.

>>See all our Transform 2025 coverage here<<

From models to missions: the rise of intelligent agents

At Intuit, agents aren’t just about answering questions—they’re about executing tasks. In TurboTax, for instance, agents help customers complete their taxes 12% faster, with nearly half finishing in under an hour. These intelligent systems draw data from multiple streams—including real-time and batch data—via Intuit’s internal bus and persistent services. Once processed, the agent analyzes the information to make a decision and take action.
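A rough Python sketch of that flow is below; the data fields, action names, and consent check are hypothetical illustrations, not Intuit's actual GenOS interfaces. The agent merges real-time and batch signals into one context, reasons over it, and acts only with the customer's permission.

```python
# Hypothetical sketch of the flow described above: an agent merges
# real-time and batch signals, reasons over them, then acts only with
# the customer's permission. Names are illustrative, not Intuit's API.
from dataclasses import dataclass


@dataclass
class Decision:
    action: str          # e.g. "prefill_w2_income"
    rationale: str       # why the agent chose this action
    requires_consent: bool = True


def gather_context(realtime_events: list[dict], batch_profile: dict) -> dict:
    """Merge streaming events with the customer's batch profile."""
    context = dict(batch_profile)
    for event in realtime_events:
        context.update(event)          # most recent signal wins
    return context


def decide(context: dict) -> Decision:
    """Toy reasoning step; a real agent would call an LLM or rules engine here."""
    if context.get("w2_uploaded") and not context.get("income_confirmed"):
        return Decision("prefill_w2_income", "W-2 present but income not yet confirmed")
    return Decision("ask_for_documents", "Missing documents needed to proceed")


def act(decision: Decision, user_approved: bool) -> str:
    """Execute only if the customer has granted permission."""
    if decision.requires_consent and not user_approved:
        return f"Held action '{decision.action}' pending customer approval"
    return f"Executed '{decision.action}' ({decision.rationale})"


if __name__ == "__main__":
    ctx = gather_context(
        realtime_events=[{"w2_uploaded": True}],
        batch_profile={"income_confirmed": False},
    )
    print(act(decide(ctx), user_approved=True))
```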

“This is the way we’re thinking about agents in the financial domain,” said Srivastava. “We’re trying to make sure that as we build, they’re robust, scalable and actually anchored in reality. The agentic experiences we’re building are designed to get work done for the customer, with their permission. That’s key to building trust.”

These capabilities are made possible by GenOS, Intuit’s custom generative AI operating system. At its heart is GenRuntime, which Srivastava likens to a CPU: it receives the data, reasons over it, and determines an action that’s then executed for the end user. The OS was designed to abstract away technical complexity, so developers don’t need to reinvent risk safeguards or security layers every time they build an agent.
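The CPU analogy can be pictured as a thin runtime that owns the shared safeguards, so agent developers supply only their reasoning logic. The sketch below is a hedged illustration with made-up safeguard checks and thresholds; it is not GenOS or GenRuntime internals.

```python
# Hedged sketch of the "runtime as CPU" idea: a shared runtime receives data,
# runs the agent's reasoning, and applies platform-wide safeguards before any
# action executes. All names here are illustrative, not GenOS internals.
from typing import Callable

# Platform-level safeguards every agent inherits instead of reimplementing.
def redact_pii(data: dict) -> dict:
    return {k: v for k, v in data.items() if k not in {"ssn", "account_number"}}

def within_risk_limits(action: dict) -> bool:
    return action.get("amount", 0) <= 10_000      # illustrative threshold


class AgentRuntime:
    """Receives data, reasons over it, and executes vetted actions."""

    def __init__(self, reason: Callable[[dict], dict]):
        self.reason = reason                       # agent-specific logic only

    def step(self, raw_data: dict) -> dict:
        data = redact_pii(raw_data)                # security layer, applied centrally
        action = self.reason(data)                 # agent decides
        if not within_risk_limits(action):         # risk safeguard, applied centrally
            return {"status": "blocked", "action": action}
        return {"status": "executed", "action": action}


# An agent developer supplies only the reasoning function.
def invoice_agent(data: dict) -> dict:
    return {"type": "send_reminder", "amount": data.get("overdue_amount", 0)}

runtime = AgentRuntime(reason=invoice_agent)
print(runtime.step({"overdue_amount": 250, "ssn": "xxx-xx-xxxx"}))
```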

Across Intuit’s brands—from TurboTax and QuickBooks to Mailchimp and Credit Karma—GenOS helps create consistent, trusted experiences and ensure robustness, scalability and extensibility across use cases. 

Building the agentic stack at Amex: trust, control, and experimentation

For Packer and her team at Amex, the move into agentic AI builds on more than 15 years of experience with traditional AI and a mature, battle-tested big data infrastructure. As GenAI capabilities accelerate, Amex is reshaping its strategy to focus on how intelligent agents can drive internal workflows and power the next generation of customer experiences. For example, the company is focused on developing internal agents that boost employee productivity, like the APR agent that reviews software pull requests and advises engineers on whether code is ready to merge. This project reflects Amex’s broader approach: start with internal use cases, move quickly, and use early wins to refine the underlying infrastructure, tools, and governance standards.

To support fast experimentation, strong security, and policy enforcement, Amex developed an “enablement layer” that allows for rapid development without sacrificing oversight. “And so now as we think about agentic, we’ve got a nice control plane to plug in these additional, additional guardrails that we really do need to have in place,” said Packer.

Within this system is Amex’s concept of modular “brains”—a framework in which agents are required to consult with specific “brains” before taking action. These brains serve as modular governance layers—covering brand values, privacy, security, and legal compliance—that every agent must engage with during decision-making. Each brain represents a domain-specific set of policies, such as brand voice, privacy rules, or legal constraints, and functions as a consultable authority. By routing decisions through this system of constraints, agents remain accountable, aligned with enterprise standards and worthy of user trust.
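One way to picture the pattern is a pipeline of policy modules that a proposed action must clear before it executes, where any single veto blocks the action. The sketch below uses hypothetical PrivacyBrain and BrandBrain checks purely for illustration; Amex's actual brains and policies are not described at this level of detail.

```python
# Illustrative sketch of the "brains" pattern described above: each brain is a
# modular policy check the agent must consult before acting. Class and policy
# names are hypothetical, not Amex's implementation.
from typing import Protocol


class Brain(Protocol):
    name: str
    def review(self, proposed_action: dict) -> tuple[bool, str]: ...


class PrivacyBrain:
    name = "privacy"
    def review(self, proposed_action: dict) -> tuple[bool, str]:
        ok = not proposed_action.get("shares_customer_data", False)
        return ok, "no customer data leaves the platform" if ok else "would share customer data"


class BrandBrain:
    name = "brand"
    def review(self, proposed_action: dict) -> tuple[bool, str]:
        ok = proposed_action.get("tone", "neutral") in {"neutral", "helpful"}
        return ok, "tone within brand guidelines" if ok else "off-brand tone"


def consult_brains(proposed_action: dict, brains: list) -> dict:
    """Route the action through every brain; a single veto blocks it."""
    findings = {}
    for brain in brains:
        approved, reason = brain.review(proposed_action)
        findings[brain.name] = reason
        if not approved:
            return {"approved": False, "blocked_by": brain.name, "findings": findings}
    return {"approved": True, "findings": findings}


booking = {"restaurant_id": "r-123", "tone": "helpful", "shares_customer_data": False}
print(consult_brains(booking, [PrivacyBrain(), BrandBrain()]))
```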

For instance, a dining reservation agent operating through Resy, Amex’s restaurant booking platform, must validate that it’s selecting the right restaurant at the right time, matching the user’s intent while adhering to brand and policy guidelines.

Architecture that enables speed and safety

Both AI leaders agreed that enabling rapid development at scale demands thoughtful architectural design. At Intuit, the creation of GenOS empowers hundreds of developers to build safely and consistently. The platform ensures each team can access shared infrastructure, common safeguards, and model flexibility without duplicating work.

Amex took a similar approach with its enablement layer. Designed around a unified control plane, the layer lets teams rapidly develop AI-driven agents while enforcing centralized policies and guardrails. It ensures consistent implementation of risk and governance frameworks while encouraging speed. Developers can deploy experiments quickly, then evaluate and scale based on feedback and performance, all without compromising brand trust.

Lessons in agentic AI adoption

Both AI leaders stressed the need to move quickly, but with intent. “Don’t wait for a bake-off,” Packer advised. “It’s better to pick a direction, get something into production, and iterate quickly, rather than delaying for the perfect solution that may be outdated by launch time.” They also emphasized that measurement must be embedded from the very beginning. According to Srivastava, instrumentation isn’t something to bolt on later—it has to be an integral part of the stack. Tracking cost, latency, accuracy and user impact is essential for assessing value and maintaining accountability at scale. 

“You have to be able to measure it. That’s where GenOS comes in—there’s a built-in capability that lets us instrument AI applications and track both the cost going in and the return coming out,” said Srivastava. “I review this every quarter with our CFO. We go line by line through every AI use case across the company, assessing exactly how much we’re spending and what value we’re getting in return.”
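A minimal sketch of what such instrumentation might look like in practice is below, assuming a simple per-token pricing rate and made-up use-case names; it records cost and latency per call and rolls them up by use case, which is roughly the shape of data a quarterly line-by-line review would need.

```python
# A minimal sketch of the kind of instrumentation described above: record cost,
# latency, and outcome per AI call, then roll them up by use case for review.
# Pricing and field names are assumptions for illustration, not GenOS internals.
import time
from collections import defaultdict

COST_PER_1K_TOKENS = 0.002          # assumed illustrative rate
_ledger = defaultdict(list)         # use_case -> list of per-call records


def instrumented_call(use_case: str, model_fn, prompt: str) -> str:
    start = time.perf_counter()
    response, tokens_used = model_fn(prompt)
    _ledger[use_case].append({
        "latency_s": time.perf_counter() - start,
        "cost_usd": tokens_used / 1000 * COST_PER_1K_TOKENS,
    })
    return response


def quarterly_rollup() -> dict:
    """Aggregate spend and latency per use case: the line items a cost review needs."""
    return {
        uc: {
            "calls": len(rows),
            "total_cost_usd": round(sum(r["cost_usd"] for r in rows), 4),
            "avg_latency_s": round(sum(r["latency_s"] for r in rows) / len(rows), 3),
        }
        for uc, rows in _ledger.items()
    }


# Example with a stubbed model call returning (text, tokens_used).
fake_model = lambda prompt: (f"answer to: {prompt}", 420)
instrumented_call("turbotax_filing_assist", fake_model, "Summarize my W-2")
print(quarterly_rollup())
```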

Intelligent agents are the next enterprise platform shift

Intuit and American Express are among the leading enterprises adopting agentic AI not just as a technology layer, but as a new operating model. Their approach focuses on building the agentic platform, establishing governance, measuring impact, and moving quickly. As enterprise expectations evolve from simple chatbot functionality to autonomous execution, organizations that treat agentic AI as a first-class discipline—with control planes, observability, and modular governance—will be best positioned to lead the agentic race.

Editor’s note: As a thank-you to our readers, we’ve opened up early bird registration for VB Transform 2026 — just $200. This is where AI ambition meets operational reality, and you’re going to want to be in the room. Reserve your spot now
