
Anthropic’s stealth enterprise coup: How Claude 3.7 is becoming the coding agent of choice

While consumer attention has focused on the generative AI battles between OpenAI and Google, Anthropic has executed a disciplined enterprise strategy centered on coding — potentially the most valuable enterprise AI use case. The results are becoming increasingly clear: Claude is positioning itself as the LLM that matters most for businesses.

The evidence? Anthropic’s Claude 3.7 Sonnet, released just two weeks ago, set new benchmark records for coding performance. Simultaneously, the company launched Claude Code, a command-line AI agent that helps developers build applications faster. Meanwhile, Cursor — an AI-powered code editor that defaults to Anthropic’s Claude model — has surged to a reported $100 million in annual recurring revenue in just 12 months.

Anthropic’s deliberate focus on coding comes as enterprises increasingly recognize the power of AI coding agents, which enable both seasoned developers and non-coders to build applications with unprecedented speed and efficiency. “Anthropic continues to come out on top,” said Guillermo Rauch, CEO of Vercel, another fast-growing company that lets developers, including non-coders, deploy front-end applications. Last year, Vercel switched its lead coding model from OpenAI’s GPT to Anthropic’s Claude after evaluating the models’ performance on key coding tasks.

Claude 3.7: Setting new benchmarks for AI coding

Released February 24, Claude 3.7 Sonnet leads on nearly all coding benchmarks. It scored an impressive 70.3% on the respected SWE-bench benchmark, which measures an agent’s software development skills, handily outperforming its nearest competitors, OpenAI’s o1 (48.9%) and DeepSeek-R1 (49.2%). It also outperforms competitors on agentic tasks.

Source: Anthropic. SWE-bench measures a model’s ability to solve real-world software issues.

Developer communities have quickly verified these results in real-world testing. Reddit threads comparing Claude 3.7 with Grok 3, the newly released model from Elon Musk’s xAI, consistently favor Anthropic’s model for coding tasks. “Based on what I’ve tested, Claude 3.7 seems to be the best for writing code (at least for me),” said a top commenter.

Alongside the 3.7 Sonnet release, Anthropic launched Claude Code, an AI coding agent that works directly through the command line. This complements the company’s October release of Computer Use, which enables Claude to interact with a user’s computer, including browsing the web, opening applications and inputting text.

Source: Anthropic. TAU-bench is a framework that tests AI agents on complex real-world tasks with user and tool interactions.

Most notable is what Anthropic hasn’t done. Unlike competitors that rush to match each other feature-for-feature, the company hasn’t even bothered to integrate web search functionality into its app — a basic feature most users expect. This calculated omission signals that Anthropic isn’t competing for general consumers but is laser-focused on the enterprise market, where coding capabilities deliver much higher ROI than search.

Hands-on with Claude’s coding capabilities

To test the real-world capabilities of these coding agents, I experimented with building a database to store VentureBeat articles using three different approaches: Claude 3.7 Sonnet through Anthropic’s app; Cursor’s coding agent; and Claude Code.

Using Claude 3.7 directly through Anthropic’s app, I found it provided remarkable guidance for a non-coder like myself. It recommended several options, ranging from robust solutions built on a PostgreSQL database to lighter-weight ones such as Airtable. I chose the lightweight route, and Claude methodically walked me through pulling articles from the VentureBeat API into Airtable, using Make.com for the connections. The process took about two hours, including some authentication challenges, but resulted in a functional system. Rather than writing all of the code for me, it laid out a master plan for how to do it.

Cursor, which defaults to Claude’s models, is a full-fledged code editor and was more eager to automate the process. However, it required permission at every step, creating a somewhat tedious workflow.

Claude Code offered yet another approach, running directly in the terminal and using SQLite to create a local database that pulled articles from our RSS feed. This solution was simpler and more reliable at getting me to my end goal, but definitely less robust and feature-rich than the Airtable implementation. The tradeoffs are becoming clear to me: which coding agent to pick really depends on the specific project.
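For readers who want a concrete picture, below is a minimal sketch of the kind of script Claude Code produced in my test: pull entries from an RSS feed and store them in a local SQLite database. The feed URL, schema and use of the feedparser package are illustrative assumptions; the code Claude Code actually generated differed in its details.

```python
# Minimal sketch: pull articles from an RSS feed into a local SQLite database.
# Assumes the feedparser package (pip install feedparser); the feed URL and
# schema are illustrative, not the exact code Claude Code generated.
import sqlite3

import feedparser

FEED_URL = "https://venturebeat.com/feed/"  # hypothetical feed URL for illustration


def main():
    conn = sqlite3.connect("articles.db")
    conn.execute(
        """CREATE TABLE IF NOT EXISTS articles (
               link TEXT PRIMARY KEY,
               title TEXT,
               published TEXT
           )"""
    )

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        # INSERT OR IGNORE keeps reruns idempotent: already-stored links are skipped.
        conn.execute(
            "INSERT OR IGNORE INTO articles (link, title, published) VALUES (?, ?, ?)",
            (entry.get("link"), entry.get("title"), entry.get("published")),
        )

    conn.commit()
    count = conn.execute("SELECT COUNT(*) FROM articles").fetchone()[0]
    print(f"{count} articles stored")
    conn.close()


if __name__ == "__main__":
    main()
```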

The key insight: Even as a non-developer, I was able to build functional database applications using all three approaches — something that would have been unthinkable just a year ago. And they all relied on Claude under the hood.

For a more detailed review of this so-called “vibe coding,” where you rely on agents to write the code while doing none of it yourself, read this great piece by developer Simon Willison published yesterday. The process can be buggy and frustrating at times, but if you make the right concessions to it, you can go a long way.

The strategy: Why coding is Anthropic’s enterprise play

Anthropic’s singular focus on coding capabilities isn’t accidental. According to projections reportedly leaked to The Information, Anthropic aims to reach $34.5 billion in revenue by 2027 — an 86-fold increase from current levels. Approximately 67% of this projected revenue would come from API business, with enterprise coding applications as the primary driver. While Anthropic hasn’t released exact numbers for its revenue so far, it said its coding revenue surged 1,000% over the last quarter of 2024. Last week, Anthropic announced it had raised $3.5 billion more in funding at a $61.5 billion valuation.

This coding bet is supported by Anthropic’s own Economic Index, which found that 37.2% of queries sent to Claude were in the “computer and mathematical” category, primarily covering software engineering tasks like code modification, debugging and network troubleshooting.

Anthropic appears to be marching to its own beat — at a time when competitors are distracted, rushing to cover both enterprise and consumer markets with feature parity. OpenAI’s lead is reinforced by its early consumer recognition and usage, and it’s stuck trying to serve both consumers and businesses with multiple models and features. Google is chasing both markets too, trying to have one of everything.

Anthropic’s comparatively disciplined strategy extends to its product decisions. Instead of chasing consumer market share, the company has prioritized enterprise features like GitHub integration, audit logs, customizable permissions and domain-specific security controls. Six months ago, it introduced a massive 500,000-token context window for developers, while Google limited its 1-million-token window to private testers. The result is a comprehensive coding-focused offering that enterprises are increasingly adopting.

The company recently introduced features allowing non-coders to publish AI-created applications within their organizations, and just last week upgraded its console with enhanced collaboration capabilities, including shareable prompts and templates. This democratization reflects a sort of Trojan Horse strategy: First enable developers to build powerful foundations, then expand access to the broader enterprise workforce, all the way up to the executive suite.

The coding agent ecosystem: Cursor and beyond

Perhaps the most telling sign of Anthropic’s success is the explosive growth of Cursor, the AI code editor that reportedly has 360,000 users, more than 40,000 of them paying customers, after just 12 months — on top of the reported $100 million in annual recurring revenue noted above, possibly making it the fastest SaaS company to reach that milestone.

Cursor’s success is inextricably linked to Claude. “You’ve got to think their number one customer is Cursor,” noted Sam Witteveen, cofounder of Red Dragon, an independent developer of AI agents. “Most people on [Cursor] were using the Claude Sonnet model — the 3.5 models — already. And now it seems everyone’s just migrating over to 3.7.”

The relationship between Anthropic and its ecosystem extends beyond individual companies like Cursor. In November, Anthropic released its Model Context Protocol (MCP) as an open standard, allowing developers to build tools that interact with Claude models. The standard is being widely adopted by developers.

“By launching this as an open protocol, they’re sort of saying, ‘Hey, everyone, have at it,’” explained Witteveen. “You can develop whatever you want that fits this protocol. We’re going to support this protocol.”

This approach creates a virtuous cycle: Developers build tools for Claude, which makes Claude more valuable to enterprises, which drives more adoption, which attracts more developers.
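To give a sense of what “building tools for Claude” looks like in practice, here is a minimal sketch of an MCP tool server, assuming the official MCP Python SDK and its FastMCP helper; the article-lookup tool and its data are hypothetical, not part of Anthropic’s spec.

```python
# Minimal sketch of an MCP tool server, assuming the official Python SDK
# (pip install mcp). The article-lookup tool and its data are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("article-tools")


@mcp.tool()
def count_articles(keyword: str) -> int:
    """Return how many stored article titles mention the given keyword."""
    # Stand-in for a real lookup (e.g. against the SQLite database built above).
    sample_titles = [
        "Enterprise coding agents hit their stride",
        "How one newsroom built a database with Claude",
    ]
    return sum(keyword.lower() in title.lower() for title in sample_titles)


if __name__ == "__main__":
    # Serves the tool over stdio so an MCP-aware client (such as Claude Desktop
    # or Claude Code) can discover and call it.
    mcp.run()
```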

The competition: Microsoft, OpenAI, Google and open source

While Anthropic has found its focus, competitors are pursuing different strategies with varying results.

Microsoft maintains significant momentum through its GitHub Copilot, which has 1.3 million paid users and has been adopted by more than 77,000 organizations in roughly two years. Companies like Honeywell, State Street, TD Bank Group and Levi’s are among its users. This widespread adoption stems largely from Microsoft’s existing enterprise relationships and its first-mover advantage: It invested early in OpenAI and used that company’s models to power Copilot.

However, even Microsoft has acknowledged Anthropic’s strength. In October, it allowed GitHub Copilot users to choose Anthropic’s models as an alternative to OpenAI’s. And OpenAI’s recent models — o1 and the newer o3, which emphasize reasoning through extended thinking — haven’t demonstrated particular strengths in coding or agentic tasks.

Google has made its own play by recently making its Gemini Code Assist tool free, but this move seems more defensive than strategic.

The open source movement is another significant force in this landscape. Meta’s Llama models have gained substantial enterprise traction, with major companies like AT&T, DoorDash and Goldman Sachs deploying Llama-based models for various applications. The open-source approach offers enterprises greater control, customization options and cost benefits that closed models can’t match, as VentureBeat reported last year.

Rather than seeing this as a direct threat, Anthropic appears to be positioning itself as complementary to open source. Enterprise customers can use Claude alongside open-source models depending on specific needs, a hybrid approach that maximizes the strengths of each.

In fact, most enterprise companies of scale I’ve talked with over the past several months are explicitly multi-model, in that their AI workflows allow them to use whatever model is best for a given case. Intuit was an early example of a company that had bet on OpenAI as the default for its tax return applications, but then last year switched to Claude because it was superior in some cases. The pain of switching led Intuit to create an AI orchestration framework that made moving between models much more seamless, as Nhung Ho, Intuit’s VP of AI, told VentureBeat at the time.

Most other enterprise companies have since followed a similar practice: They use whatever model is best for the specific case, pulling in models with simple API calls. In some cases, an open-source model like Llama might work well, but in others — for example, in calculations where accuracy is important — Claude is the choice, Intuit’s Ho explained at VentureBeat’s VB Transform event last year.
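Intuit hasn’t published the internals of that orchestration framework, but the pattern Ho describes is easy to sketch: put every provider behind one interface and route each task type to whichever model evaluations showed works best. The task names, routing table and stand-in client functions below are illustrative assumptions, not Intuit’s actual implementation.

```python
# Illustrative sketch of the multi-model routing pattern described above.
# The task names, model wrappers and routing table are hypothetical.
from typing import Callable, Dict


def call_claude(prompt: str) -> str:
    # Stand-in for a real call to a closed model such as Claude via its API.
    return f"[claude] {prompt}"


def call_llama(prompt: str) -> str:
    # Stand-in for a call to a self-hosted open-source model such as Llama.
    return f"[llama] {prompt}"


# Route each task type to the model that evaluations showed works best for it.
ROUTES: Dict[str, Callable[[str], str]] = {
    "tax_calculation": call_claude,  # accuracy-critical work goes to the closed model
    "draft_summary": call_llama,     # lower-stakes work goes to the open-source model
}


def run_task(task_type: str, prompt: str) -> str:
    """Dispatch a prompt to whichever model is configured for this task type."""
    handler = ROUTES.get(task_type, call_claude)  # fall back to a sensible default
    return handler(prompt)


if __name__ == "__main__":
    print(run_task("tax_calculation", "Check this deduction calculation."))
    print(run_task("draft_summary", "Summarize this customer email."))
```

The point of the pattern is that swapping models becomes a one-line change to the routing table rather than a rewrite of every workflow.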

Over the past couple of days, I’ve been attending the HumanX conference in Las Vegas, where hundreds of developers gathered to talk about AI. Claude comes up almost every time the topic of agents or coding is raised. Over lunch yesterday, Julianne Averill, managing director at Danforth Advisors, which advises life science companies, said her company had found Claude superior for many such tasks, including building structured analysis tables.

Vercel CEO Guillermo Rauch, another attendee, said his company, which has surpassed $100 million in annual revenue, chose Claude last year as its default model to help developers code, after rigorous evaluations of all the leading models. “3.7 is king,” Rauch told VentureBeat. He agreed it’s important to offer developers a choice of models, since the breakneck pace of advances means there can’t be loyalty to a single model. But while Vercel’s V0 product, which lets users generate web user interfaces (UIs) from natural-language prompts, offers that choice, it has to pick a default model to guide users through their initial ideation and reasoning phase. That model is Claude Sonnet. “You need the architect model that is capable of reasoning and does the lion’s share of code generation,” he said. “A significant chunk of our pipeline is powered by Anthropic Sonnet.” Adobe, Chick-fil-A and Bed Bath & Beyond are among Vercel’s customers.

Still, Rauch cautioned that the LLM race remains fluid and the lead model could change at any time. Vercel experimented with China’s DeepSeek, he said, but found it fell just short of matching Claude Sonnet. Similarly, he said, Alibaba’s Qwen model has gotten very good.

Enterprise implications: Making the shift to coding agents

For enterprise decision-makers, this rapidly evolving landscape presents both opportunities and challenges.

Security remains a top concern, but a recent independent report found Claude 3.7 Sonnet to be the most secure model yet — the only one tested that proved “jailbreak-proof.” This security stance, combined with Anthropic’s backing from both Google and Amazon (and integration into AWS Bedrock), positions it well for enterprise adoption.

The rise of coding agents isn’t just changing how applications are built — it’s democratizing the process. According to GitHub, 92% of U.S.-based developers at enterprise companies were already using AI-powered coding tools at work 18 months ago. That number has likely grown substantially since then.

“The challenge that people are having [because of] not being a coder is really that they don’t know a lot of the terminology. They don’t know best practices,” explained Witteveen. AI coding agents increasingly bridge this gap, allowing technical and non-technical team members to collaborate more effectively.

For enterprise adoption, Witteveen recommends a balanced approach: “It’s the balance between security and experimentation at the moment. Clearly, on the developer side, people are starting to build real-world apps with this stuff.”

For a deeper exploration of these issues, check out my recent YouTube video conversation with Witteveen, where we take a deep dive into the state of coding agents and what they mean for enterprise AI strategy.

Looking ahead: The future of enterprise coding

The rise of AI coding agents signals a fundamental shift in enterprise software development. When used effectively, these tools don’t replace developers but transform their roles, allowing them to focus on architecture and innovation rather than implementation details.

Anthropic’s disciplined approach in focusing specifically on coding capabilities while competitors chase multiple priorities appears to be paying dividends for the company. By the end of 2025, we may look back on this period as the moment when AI coding agents became essential enterprise tools — with Claude leading the way.

For technical decision-makers, the message is clear: Start experimenting with these tools now or risk falling behind competitors who are already using them to accelerate development cycles dramatically. This moment echoes the early days of the iPhone revolution, when companies initially tried to block “unsanctioned” devices from their corporate networks, only to eventually embrace BYOD policies as employee demand became overwhelming. Some companies VentureBeat has talked with, like Honeywell, have recently made similar attempts to shut down “rogue” use of AI coding tools not approved by IT.

Speaking Monday at the HumanX conference, James Reggio, CTO of Brex, a company that provides credit cards and other financial services to small and mid-sized enterprises, said his company also initially tried to enforce a top-down approach to AI model selection in an effort to get it perfectly right. But it faced a revolt among its developer employees, soon realized the effort was futile and decided to let users experiment freely. Smart companies are already setting up secure sandbox environments to allow controlled experimentation. Organizations that create clear guardrails while encouraging innovation will benefit from both employee enthusiasm and insights into how these tools can best serve their unique needs — positioning themselves ahead of competitors who resist change. And Anthropic’s Claude, at least for now, is a big beneficiary of this movement.

Watch my video with developer Sam Witteveen for a full deep dive into the coding agent trend.
