Augment Code debuts AI agent with 70% win rate over GitHub Copilot and record-breaking SWE-bench score

Augment Code, an AI coding assistant startup, unveiled its new “Augment Agent” technology today, designed to tackle the complexity of large software engineering projects rather than simple code generation. The company claims its approach represents a significant departure from other AI coding tools by focusing on helping developers navigate and modify large, established codebases that span millions of lines of code across multiple repositories.

The company also announced it has achieved the highest score to date on SWE-bench Verified, an industry benchmark for AI coding capabilities, by combining Anthropic’s Claude 3.7 Sonnet with OpenAI’s o1 reasoning model.
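Neither company has detailed how the two models divide the work. One common pattern for this kind of pairing, shown in the hypothetical Python sketch below, is to have the generation model propose several candidate patches and the reasoning model choose among them; `call_model`, the model labels, and the prompts are placeholders, not Augment’s actual pipeline.

```python
# Hypothetical two-model ensemble for a SWE-bench-style task. One model
# proposes candidate patches; a second, reasoning-oriented model picks the
# most plausible one. `call_model` is a placeholder, not a real API.

def call_model(model: str, prompt: str) -> str:
    """Stand-in for an actual LLM API call (assumption, not a real client)."""
    raise NotImplementedError

def solve_issue(issue: str, code_context: str, n_candidates: int = 4) -> str:
    # Sample several independent candidate patches from the generation model.
    candidates = [
        call_model("generator", f"Fix this issue:\n{issue}\n\nCode:\n{code_context}")
        for _ in range(n_candidates)
    ]
    # Ask the reasoning model to judge which candidate is most likely correct.
    numbered = "\n\n".join(f"[{i}] {patch}" for i, patch in enumerate(candidates))
    verdict = call_model(
        "reasoner",
        f"Issue:\n{issue}\n\nCandidate patches:\n{numbered}\n\n"
        "Reply with only the index of the best patch.",
    )
    return candidates[int(verdict.strip())]
```

Selection-by-reranking like this trades extra inference cost for reliability; whether Augment’s record-setting pipeline resembles it is not public.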

“Most work in the coding AI space, which is clearly a hot sector, has focused on what people call ‘zero to one’ or ‘vibe coding’ – starting with nothing and producing a piece of software by the end of the session,” said Scott Dietzen, CEO of Augment Code, in an exclusive interview with VentureBeat. “What we targeted instead is the software engineering discipline of maintaining big, complex systems — databases, networking stacks, storage — codebases that have evolved over many years with hundreds of developers working on them collaboratively.”

Founded in 2022, Augment Code has raised $270 million in total funding, including a $227 million Series B round announced in April 2024 at a post-money valuation of $977 million. The company’s investors include Sutter Hill Ventures, Index Ventures, Innovation Endeavors (led by former Google CEO Eric Schmidt), Lightspeed Venture Partners, and Meritech Capital.

How Augment’s context engine tackles multi-million line codebases

What sets Augment Agent apart, according to the company, is its ability to understand context across massive codebases. The agent offers a 200,000-token context window, significantly larger than those of most competitors.

“The challenge for any AI system, including Augment, is that when you’re working with large systems containing tens of millions of lines of code – which is typical for meaningful software applications – you simply can’t pass all that as context to today’s large language models,” explained Dietzen. “We’ve trained our AI models to perform sophisticated real-time sampling, identifying precisely the right subset of the codebase that allows the agent to do its job effectively.”
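Dietzen did not describe the sampling algorithm itself, but the underlying problem it solves, packing the most relevant slice of an enormous repository into a fixed token budget, can be sketched as a rank-and-pack loop. In the minimal Python illustration below, the lexical-overlap scorer and the four-characters-per-token estimate are assumptions standing in for whatever learned retrieval Augment actually uses:

```python
# Minimal sketch of budgeted context selection: score every code chunk for
# relevance to the task, then greedily pack the best-scoring chunks into a
# fixed token budget. The lexical-overlap scorer and the ~4-chars-per-token
# estimate are stand-ins for the learned retrieval a real system would use.

def relevance(chunk: str, task: str) -> float:
    task_terms = set(task.lower().split())
    chunk_terms = set(chunk.lower().split())
    if not chunk_terms:
        return 0.0
    # Overlap normalized by chunk size so huge files don't dominate.
    return len(task_terms & chunk_terms) / len(chunk_terms) ** 0.5

def approx_tokens(text: str) -> int:
    return len(text) // 4  # rough heuristic: about four characters per token

def select_context(chunks: list[str], task: str, budget: int = 200_000) -> list[str]:
    ranked = sorted(chunks, key=lambda c: relevance(c, task), reverse=True)
    selected, used = [], 0
    for chunk in ranked:
        cost = approx_tokens(chunk)
        if used + cost <= budget:
            selected.append(chunk)
            used += cost
    return selected
```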

This approach contrasts with competitors that either don’t handle large codebases or require developers to manually assemble the relevant context themselves.

Another differentiator is Augment’s real-time synchronization of code changes across teams. “Most of our competitors work with stale versions of the codebase,” said Dietzen. “If you and I are collaborating in the same code branch and I make a change, you’d naturally want your AI to be aware of that change, just as you would be. That’s why we’ve implemented real-time synchronization of everyone’s view of the code.”
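Augment has not published how that synchronization works. The re-indexing logic it implies (refresh whatever changed so retrieval never reads stale code) can be sketched in a few lines of Python; the polling loop below is purely illustrative, since a production system would react to filesystem or editor events and push updates to every teammate’s view:

```python
# Illustrative freshness check for a code index: re-read any file whose
# modification time changed since the last pass, so retrieval never sees
# stale code. A production system would react to filesystem or editor
# events and broadcast updates to the whole team; this sketch polls instead.

import os
import time

def sync_index(root: str, index: dict, mtimes: dict) -> None:
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtime = os.stat(path).st_mtime
            if mtimes.get(path) != mtime:  # new or modified file
                with open(path, errors="ignore") as f:
                    index[path] = f.read()  # replace the stale entry
                mtimes[path] = mtime

if __name__ == "__main__":
    index, mtimes = {}, {}
    while True:  # keep the index continuously in sync
        sync_index(".", index, mtimes)
        time.sleep(1)
```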

The company reports its approach has led to a 70% win rate against GitHub Copilot when competing for enterprise business.

Why the ‘Memories’ feature helps AI match your personal coding style

Augment Agent includes a “Memories” feature that learns from developer interactions to better align with individual coding styles and preferences over time.

“Part of what we wanted to be able to deliver with our agents is autonomy in the sense that you can give them tasks, but you can also intervene,” Dietzen said. “Memories are a tool for the model to generalize your intent, to capture that when I’m in this situation, I want you to take this path rather than the path that you took.”
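The company has not disclosed how Memories are stored or applied. The general mechanism Dietzen describes (persist preference notes learned from corrections, then surface them in future prompts) can be sketched simply; everything in the Python example below, including the memories.json file, is a hypothetical illustration:

```python
# Hypothetical "memories"-style mechanism: persist short preference notes
# learned from developer corrections and prepend them to future prompts.
# The memories.json file and both helpers are illustrative assumptions.

import json
from pathlib import Path

MEMORY_FILE = Path("memories.json")

def remember(note: str) -> None:
    """Record a preference, e.g. 'use pathlib instead of os.path'."""
    notes = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    if note not in notes:
        notes.append(note)
        MEMORY_FILE.write_text(json.dumps(notes, indent=2))

def build_prompt(task: str) -> str:
    """Inject remembered preferences ahead of the new task."""
    notes = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    prefs = "\n".join(f"- {n}" for n in notes) or "(none yet)"
    return f"Developer preferences:\n{prefs}\n\nTask:\n{task}"
```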

Contrary to the notion that coding is purely mathematical logic without stylistic elements, Dietzen emphasized that many developers care deeply about the aesthetic and structural aspects of their code.

“There is definitely a mathematical aspect to code, but there’s also an art to coding as well,” he noted. “Many of our developers want to stay in the code. Some use our agents to write all of the code, but there’s a whole group of engineers that care about what the ultimate code looks like and have strong opinions about that.”

Enterprise adoption of AI coding tools has been slowed by concerns about intellectual property protection and security. Augment has focused on addressing these issues with a robust security architecture and enterprise-grade integrations.

“Agents need to be trusted. If you’re going to give them this autonomy, you want to make sure that they’re not going to do any harm,” said Dietzen. “We were the first to offer the various levels of SOC compliance and all of the associated penetration testing to harden our solution.”

The company has also built integrations with developer tools like GitHub, Linear, Jira, Notion, Google Search, and Slack. Unlike some competitors that implement these integrations on the client side, Augment handles these connections in the cloud, making them “easily shareable and consistent across a larger team,” according to Dietzen.

Augment Agent is generally available for VS Code users starting today, with early preview access for JetBrains users. The company maintains full compatibility with Microsoft’s ecosystem, unlike competitor Cursor, which forked VS Code.

“At some level, customers that choose Cursor are opting out of the Microsoft ecosystem. They’re not allowed to use all of the standard VS Code plug-ins that Microsoft provides for access to their environment, whereas we’ve preserved 100% compatibility with VS Code and the Microsoft ecosystem,” Dietzen explained.

The evolving partnership between human engineers and AI assistants

Despite the advances in AI coding assistance, Dietzen believes human software engineers will remain essential for the foreseeable future.

“The arguments around whether software engineering is a good discipline for people going forward are very much off the mark today,” he said. “The discipline of software engineering is very, very different in terms of crafting and evolving these large code bases, and human insight is going to be needed for years to come.”

However, he envisions a future where AI can take on more proactive roles in software development: “The real excitement around where we can ultimately get to with AI is AI just going in and assessing quality of a code base and suggesting how that code base could evolve to make it more secure, more capable, easier to use, more robust.”

Augment Code currently counts Webflow, Pigment, Lemonade, ddn, Datastax, and GoFundMe among its customers. Pricing starts at $30 per user per month for professional use and $60 per active user per month for enterprise use, with a free tier also available.

The launch of Augment Agent comes at a time when enterprises are increasingly looking to leverage AI to improve developer productivity while maintaining software quality. As organizations wrestle with maintaining and evolving complex software systems, tools that understand the nuances of large codebases may find a receptive audience among technical leaders seeking to maximize the impact of their engineering teams.

“I’m hopeful that we can actually ramp investment in human software engineering along with machines,” Dietzen concluded, “because we’re going to be able to make much better software, a lot more of it, but much higher quality. And I think we can unleash a tremendous amount of value for humankind by removing all these barriers to great software.”
