As AI-powered tools spread through enterprise software stacks, the rapid growth of the AI coding platform Windsurf is becoming a case study of what happens when developers adopt agentic tooling at scale.
In a session at today’s VB Transform 2025 conference, CEO and co-founder Varun Mohan discussed how Windsurf’s integrated development environment (IDE) surpassed one million developers within four months of launch. More notably, the platform now writes over half of the code committed by its user base.
The conversation, moderated by VentureBeat CEO Matt Marshall, opened with a brief but pointed disclaimer: Mohan could not comment on OpenAI’s widely reported potential acquisition of Windsurf.
The issue has drawn attention following a Wall Street Journal report detailing a brewing standoff between OpenAI and Microsoft over the terms of that deal and broader tensions within their multi-billion-dollar partnership. According to the WSJ, OpenAI seeks to acquire Windsurf without giving Microsoft access to its intellectual property, an issue that could reshape the enterprise AI coding landscape.
With that context set aside, the session focused on Windsurf’s technology, enterprise traction, and vision for agentic development.
Moving past autocomplete
Windsurf’s IDE is built around what the company calls a “mind-meld” loop—a shared project state between humans and AI that enables full coding flows rather than autocomplete suggestions. With this setup, agents can perform multi-file refactors, write test suites, and even launch UI changes when a pull request is initiated.
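Windsurf has not published what that shared state looks like, but as a rough sketch, purely for illustration, it might amount to a small structure that both the developer’s editor events and the agent’s actions read from and write to:

    # Hypothetical sketch of a shared "project state" that the developer's editor
    # and a coding agent could both read and update; names and fields are
    # illustrative, not Windsurf's actual implementation.
    from dataclasses import dataclass, field

    @dataclass
    class FileEdit:
        path: str    # file touched by the human or the agent
        diff: str    # unified diff of the change
        author: str  # "human" or "agent"

    @dataclass
    class ProjectState:
        open_files: list[str] = field(default_factory=list)     # what the developer is viewing
        recent_edits: list[FileEdit] = field(default_factory=list)
        failing_tests: list[str] = field(default_factory=list)  # signals the agent can react to

        def record_edit(self, edit: FileEdit) -> None:
            # Append an edit so the other party sees it on its next step.
            self.recent_edits.append(edit)

The point is that the agent plans against the same picture of the project the developer sees, rather than against a single prompt.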
Mohan emphasized that coding assistance can’t stop at code generation. “Only about 20 to 30% of a developer’s time is spent writing code. The rest is debugging, reviewing, and testing. To truly assist, an AI system needs access to all those data sources,” he explained.
Windsurf recently embedded a browser inside its IDE, allowing agents to test changes, read logs, and interact with live interfaces directly—much like a human engineer would.
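Windsurf has not detailed how its browser agent is built; the general pattern, though, is simple to sketch. The example below uses Playwright purely for illustration, and the URL and selector are hypothetical:

    # Minimal sketch of the general pattern: an agent opens the app it just changed,
    # exercises the UI, and reads console output to verify the change. Playwright is
    # used here only as an example tool, not as Windsurf's actual stack.
    from playwright.sync_api import sync_playwright

    def verify_change(url: str = "http://localhost:3000") -> bool:
        logs: list[str] = []
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            page.on("console", lambda msg: logs.append(msg.text))  # capture browser logs
            page.goto(url)
            page.click("text=Submit")       # hypothetical interaction with the new UI
            page.wait_for_timeout(1000)     # give the app a moment to respond
            browser.close()
        # A real agent would feed these logs back into its next reasoning step.
        return not any("error" in line.lower() for line in logs)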
Security and control by design
As AI begins to participate actively in enterprise development cycles, Windsurf’s emphasis on built-in security has proven essential. “We use a hybrid model for enterprise deployments—none of the personalized data is stored outside the user’s tenant. Security is core, especially with features like our integrated browser agent,” said Mohan.
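Windsurf has not published its deployment configuration, but the policy Mohan describes roughly separates along the lines of the hypothetical sketch below, where inference can run wherever the enterprise chooses while personalization data stays in the customer’s tenant:

    # Purely hypothetical policy sketch; none of these keys are Windsurf's real
    # configuration. The point is the split: model calls can run in the vendor's
    # cloud or on-prem, while personalization data never leaves the tenant.
    deployment_policy = {
        "inference": {"location": "vendor_cloud_or_on_prem"},
        "personalization_store": {
            "location": "customer_tenant",
            "contents": ["code_index", "embeddings", "usage_history"],
        },
        "browser_agent": {"network_egress": "deny_by_default"},
    }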
These capabilities have already made Windsurf a viable option for regulated industries. Its agents are deployed on massive codebases, including those at JPMorgan Chase and Morgan Stanley.
Mohan added that as AI becomes more accessible across roles, security will become a gating factor for productivity. “If everyone at a company will contribute to technology in some way, the missing piece is security. You don’t want a one-off system built by a non-technical user destroying another service,” he said.
Small teams, rapid testing
Internally, Windsurf organizes into lean squads of three or four engineers, each focused on testing a narrow set of product hypotheses.
“There’s a belief that one-person billion-dollar companies are the future. But in reality, more people allow you to grow faster and build better products. The key is organizing into small, focused teams that test hypotheses in parallel,” Mohan explained.
This model has helped the company iterate quickly in a space where foundational AI models—and user needs—evolve at breakneck speed.
Personalization at scale
Windsurf’s biggest optimization at enterprise scale isn’t faster token generation or smaller models—it’s relevance. “At scale, the biggest optimization is personalization. Deeply understanding the codebase allows the agent to make maintainable, large-scale changes that reflect user intent,” said Mohan.
Rather than relying solely on general-purpose code generation, Windsurf’s system learns the structure, style, and preferences of each customer’s stack.
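Windsurf has not disclosed how that learning works, but a toy version of the idea is to retrieve the existing files most relevant to a task before generating a change, so the output mirrors the conventions already in the codebase:

    # Hedged sketch of the general idea behind codebase personalization: pull the
    # most relevant existing files into context before the agent edits anything.
    # This is a toy keyword-overlap retriever, not Windsurf's indexing system.
    from pathlib import Path

    def relevant_files(repo_root: str, task: str, top_k: int = 3) -> list[Path]:
        task_terms = set(task.lower().split())
        scored = []
        for path in Path(repo_root).rglob("*.py"):               # index source files
            text = path.read_text(errors="ignore").lower()
            score = sum(text.count(term) for term in task_terms)  # crude relevance signal
            if score:
                scored.append((score, path))
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [path for _, path in scored[:top_k]]

    # The retrieved files would then sit in the model's prompt, so its output
    # follows the structure and style it sees there.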
Building for future models
Looking forward, Windsurf is designing its platform to remain adaptable as the capabilities of underlying AI models continue to grow.
“Every step-function improvement in foundation models demands a major product rethink. As agents become more capable, our job is to build the platform to manage and orchestrate many of them effectively,” Mohan said.
The company is working on an open protocol that will allow enterprises to integrate any LLM—including on-premises models—into Windsurf’s agent framework, preserving flexibility and minimizing vendor lock-in.
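That protocol has not been published, but vendor flexibility of this kind usually reduces to a thin abstraction between the agent framework and whichever model backend an enterprise chooses. The sketch below uses hypothetical names:

    # Illustrative only: the open protocol Mohan describes is not public, but
    # provider flexibility generally comes down to an interface like this, with
    # one adapter per backend (hosted API, on-prem inference server, etc.).
    from abc import ABC, abstractmethod

    class LLMProvider(ABC):
        @abstractmethod
        def complete(self, prompt: str) -> str:
            """Return the model's completion for a prompt."""

    class OnPremProvider(LLMProvider):
        def __init__(self, endpoint: str):
            self.endpoint = endpoint  # e.g. an internal inference server's URL

        def complete(self, prompt: str) -> str:
            # A real adapter would call self.endpoint; stubbed out here.
            raise NotImplementedError("wire this to your internal inference API")

    def run_agent_step(provider: LLMProvider, prompt: str) -> str:
        # The agent framework only ever sees the interface, never the vendor.
        return provider.complete(prompt)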
Proving and measuring value
Windsurf reports on its own performance through built-in analytics. “We provide transparency on ROI through metrics—like the percentage of code written by the assistant—which can be tied directly to internal engineering performance,” Mohan said.
This approach allows platform teams to connect agentic productivity with business impact, helping justify further investment.
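As a back-of-the-envelope illustration of the headline metric, the assistant’s share of committed lines could be computed roughly as follows; the attribution field is hypothetical, and real tooling would draw on editor telemetry rather than commit history alone:

    # Toy calculation of "percentage of code written by the assistant".
    # The "author_tool" field is hypothetical, used only to make the math concrete.
    def assistant_share(commits: list[dict]) -> float:
        total = sum(c["lines_added"] for c in commits)
        by_assistant = sum(c["lines_added"] for c in commits if c.get("author_tool") == "assistant")
        return 100.0 * by_assistant / total if total else 0.0

    commits = [
        {"lines_added": 120, "author_tool": "assistant"},
        {"lines_added": 80,  "author_tool": "human"},
    ]
    print(f"{assistant_share(commits):.0f}% of committed lines came from the assistant")  # prints 60%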
Focused execution over flash
Finally, when asked how Windsurf plans to differentiate itself amid competition from OpenAI, Microsoft, Google, and others, Mohan focused on internal velocity. “The challenge isn’t who’s more visible today, but who’s executing the right strategy fast enough. The risk is either moving too slowly or projecting too far out and missing near-term relevance,” he said.
Mohan also dismissed the idea that incumbents are inevitably doomed. “There’s no fundamental reason legacy companies like Salesforce can’t become AI-native. The real limiter is their speed of innovation, not their capability.”
Whether Windsurf becomes part of OpenAI’s future or continues independently, the company’s traction with enterprise customers — and its insistence on grounding AI in secure, measurable workflows — makes it a player worth watching as agentic development enters the mainstream.
