OpenAI launches research preview of Codex AI software engineering agent for developers — with parallel tasking

Surprise! Just days after reports emerged suggesting OpenAI was buying white-hot coding startup Windsurf, OpenAI appears to be launching its own competing service as a research preview under its Codex brand name, going head-to-head against Windsurf, Cursor, and the growing list of AI coding tools offered by startups and large tech companies including Microsoft and Amazon.

Unlike OpenAI’s previous Codex code-completion AI model, the new version is a full cloud-based AI software engineering (SWE) agent, built atop a fine-tuned version of OpenAI’s o3 reasoning model, that can execute multiple development tasks in parallel.

Starting today, it is available to ChatGPT Pro, Enterprise, and Team users, with support for Plus and Edu users expected soon.

Codex’s evolution: from model to autonomous AI coding agent

This release marks a significant step forward in Codex’s development. The original Codex debuted in 2021 as a model that translated natural language into code, available through OpenAI’s then-nascent application programming interface (API).

It was the engine behind GitHub Copilot, the popular autocomplete-style coding assistant designed to work within IDEs like Visual Studio Code.

That initial iteration focused on code generation and completion, trained on billions of lines of public source code.

However, the early version came with limitations. It was prone to syntactic errors, insecure code suggestions, and biases embedded in its training data. Codex occasionally proposed superficially correct code that failed functionally, and in some cases, made problematic associations based on prompts.

Despite those flaws, it showed enough promise to establish AI coding tools as a rapidly growing product category. The original model has since been deprecated, and the Codex name now designates a new suite of products, according to an OpenAI spokesperson.

GitHub Copilot officially transitioned off OpenAI’s Codex model in March 2023, adopting GPT-4 as part of its Copilot X upgrade to enable deeper IDE integration, chat capabilities, and more context-aware code suggestions.

Agentic visions

The new Codex goes far beyond its predecessor. Now built to act autonomously over longer durations, Codex can write features, fix bugs, answer codebase-specific questions, run tests, and propose pull requests—each task running in a secure, isolated cloud sandbox.

The design reflects OpenAI’s broader ambition to move beyond quick answers and into collaborative work.

Josh Tobin, who leads the Agents Research Team at OpenAI, said during a recent briefing: “We think of agents as AI systems that can operate on your behalf for a longer period of time to accomplish big chunks of work by interacting with the real world.” Codex fits squarely into this definition. “Our vision is that ChatGPT will become almost like a virtual coworker—not just answering quick questions, but collaborating on substantial work across a range of tasks,” he added.

Figures released by OpenAI show that codex-1, the fine-tuned o3 model powering the new SWE agent, outperforms all of OpenAI’s latest reasoning models on internal SWE tasks.

New capabilities, new interface, new workflows

Codex tasks are initiated through a sidebar interface in ChatGPT, allowing users to prompt the agent with tasks or questions.

The agent processes each request in an air-gapped environment loaded with the user’s repository and configured to mirror the development setup. It logs its actions, cites test outputs, and summarizes changes—making its work traceable and reviewable.

Alexander Embiricos, head of OpenAI’s Desktop & Agents team (and formerly CEO and co-founder of screenshare collaboration startup Multi, which OpenAI acquired for an undisclosed sum last year), said in a briefing with journalists that “the Codex agent is a cloud-based software engineering agent that can work on many tasks in parallel, with its own computer to run safely and independently.”

Internally, he said, engineers already use it “like a morning to-do list—fire off tasks to Codex and return to a batch of draft solutions ready to review or merge.”

Codex also supports configuration through AGENTS.md files—project-level guides that teach the agent how to navigate a codebase, run specific tests, and follow house coding styles.

“We trained our model to read code and infer style—like whether or not to use an Oxford comma—because code style matters as much as correctness,” Embiricos said.
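For a sense of what that looks like in practice, here is a minimal illustrative sketch of an AGENTS.md file. The project layout, commands, and conventions below are invented for the example rather than drawn from OpenAI’s documentation:

```markdown
# AGENTS.md (illustrative example)

## Project layout
- Application code lives in `src/`; tests live in `tests/`.

## Validating changes
- Run the unit tests with `pytest -q` before proposing a pull request.
- Run `ruff check src/` and fix any reported lint errors.

## House style
- Follow PEP 8 naming; keep functions short and documented.
- Use the Oxford comma in comments and docstrings.
```

Because the file lives in the repository like any other document, teams can version and review the agent’s instructions the same way they review code.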

Security and practical use

Codex executes tasks without internet access, drawing only on user-provided code and dependencies. The design is intended to keep operation secure and to minimize the potential for misuse.

“This is more than just a model API,” said Embiricos. “Because it runs in an air-gapped environment with human review, we can give the model a lot more freedom safely.”

OpenAI also reports early external use cases. Cisco is evaluating Codex for accelerating engineering work across its product lines. Temporal uses it to run background tasks like debugging and test writing. Superhuman leverages Codex to improve test coverage and enable non-engineers to suggest lightweight code changes. Kodiak, an autonomous vehicle firm, applies it to improve code reliability and gain insights into unfamiliar stack components.

OpenAI is also rolling out updates to Codex CLI, its lightweight terminal agent for local development. The CLI now uses a smaller model—codex-mini-latest—optimized for low-latency editing and Q&A.

Pricing for codex-mini-latest is set at $1.50 per million input tokens and $6 per million output tokens, with a 75% discount for cached input. Codex itself is currently free to use during the rollout period, with rate limits and on-demand pricing options planned.
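To put those numbers in perspective, here is a minimal back-of-the-envelope sketch in Python. The token counts are invented for illustration, and the assumption that the 75% discount applies per cached input token is ours, not OpenAI’s published billing logic:

```python
# Illustrative cost math for codex-mini-latest at the published rates.
INPUT_PRICE_PER_M = 1.50   # USD per million input tokens
OUTPUT_PRICE_PER_M = 6.00  # USD per million output tokens
CACHE_DISCOUNT = 0.75      # 75% discount, assumed to apply to cached input tokens

def task_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate the USD cost of one request under the assumptions above."""
    uncached = input_tokens - cached_tokens
    cost = uncached / 1e6 * INPUT_PRICE_PER_M
    cost += cached_tokens / 1e6 * INPUT_PRICE_PER_M * (1 - CACHE_DISCOUNT)
    cost += output_tokens / 1e6 * OUTPUT_PRICE_PER_M
    return cost

# A hypothetical editing task: 200k input tokens (half cached), 50k output tokens.
print(f"${task_cost(200_000, 50_000, cached_tokens=100_000):.4f}")  # ≈ $0.4875
```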

Does this mean OpenAI IS NOT buying Windsurf? 🤔

The release of Codex comes amid increased competition in the AI coding tools space—and signals that OpenAI is intent on building, rather than buying, its next phase of products.

According to recent data from SimilarWeb, traffic to developer-focused AI tools has surged by 75% over the past 12 weeks, underscoring the growing demand for coding assistants as essential infrastructure rather than experimental add-ons.

Reports from TechCrunch and Bloomberg suggest OpenAI held acquisition talks with fast-growing AI dev tool startups Cursor and Windsurf. Cursor allegedly walked away from the table; Windsurf reportedly agreed in principle to be acquired by OpenAI for $3 billion, though no deal has been officially confirmed by either company.

Just yesterday, in fact, Windsurf debuted its own family of coding-focused foundation models, SWE-1, purpose-built to support the full software engineering lifecycle, from debugging to long-running project maintenance. The SWE-1 models are reportedly custom built, trained entirely in-house using a new sequential data model tailored to real-world development workflows.

Many things may be happening behind the scenes between the two companies, but to me, the timing is telling: Windsurf launched its own coding foundation models (a departure from its strategy to date of using Llama variants and giving users the option to slot in OpenAI and Anthropic models), and one day later OpenAI released its own Windsurf competitor. That sequence suggests the two are not aligning anytime soon.

But on the other hand, the fact that this new Codex AI SWE agent starts out as a “research preview” may be a way for OpenAI to pressure Windsurf, Cursor, or anyone else to come to the bargaining table and strike a deal. Asked about reports of a potential Windsurf acquisition, an OpenAI spokesperson told VentureBeat they had nothing to share on that front.

In either case, Embiricos frames Codex as far more than a mere code tool or assistant.

“We’re about to undergo a seismic shift in how developers work with agents—not just pairing with them in real time, but fully delegating tasks,” he said. “The first experiments were just reasoning models with terminal access. The experience was magical—they started doing things for us.”

Built for dev teams, not merely solo devs

Codex is designed with professional developers in mind, but Embiricos noted that even product managers have found it helpful for suggesting or validating changes before pulling in human SWEs. This versatility reflects OpenAI’s strategy of building tools that augment productivity across technical teams.

Trini, an engineering lead on the project, summarized the broader ambition behind Codex: “This is a transformative change in how software engineers interface with AI and computers in general. It amplifies each person’s potential.”

OpenAI envisions Codex as the centerpiece of a new development workflow where engineers assign high-level tasks to agents and collaborate with them asynchronously. The company is building toward deeper integrations across GitHub, ChatGPT Desktop, issue trackers, and CI systems. The long-term goal is to blend real-time pairing and long-horizon task delegation into a seamless development experience.

As Josh Tobin put it, “Coding underpins so many useful things across the economy. Accelerating coding is a particularly high-leverage way to distribute the benefits of AI to humanity, including ourselves.”

Whether or not OpenAI closes deals for competitors, the message is clear: Codex is here, and OpenAI is betting on its own agents to lead the next chapter in developer productivity.
