On Thursday, Windsurf, a startup that develops popular AI tools for software engineers, announced the launch of SWE-1, its first family of AI software engineering models. The startup says the family (SWE-1, SWE-1-lite, and SWE-1-mini) is optimized not only for coding but for “the entire software engineering process.”
The launch of Windsurf’s in-house AI models may come as a surprise to some, considering OpenAI has reportedly closed a $3 billion deal to acquire Windsurf. The launch suggests that Windsurf is looking to expand beyond developing applications into developing the models that power them.
According to Windsurf, SWE-1, the largest and most capable model in the family, is competitive with Claude 3.5 Sonnet, GPT-4.1, and Gemini 2.5 Pro on the company’s internal programming benchmarks. However, SWE-1 does not appear to surpass frontier AI models such as Claude 3.7 Sonnet on software engineering tasks.
Windsurf said the SWE-1-lite and SWE-1-mini models are available to all users on its platform, free or paid, while SWE-1 is available only to paid users. Windsurf didn’t immediately announce pricing for SWE-1, but claims it’s cheaper than Claude 3.5 Sonnet.
Windsurf is best known for tools that let software engineers write and edit code through conversations with an AI chatbot, a practice known as “vibe coding.” Other popular vibe coding startups include Cursor, the largest in the space, and Lovable. Most of these startups, including Windsurf, have traditionally relied on AI models from OpenAI, Anthropic, and Google to power their applications.
In a video launching the SWE-1 models, Windsurf’s research director Nicholas Moy highlighted the company’s latest efforts to differentiate its approach. “Today’s frontier models are optimized for coding and have made significant advances over the last few years,” says Moy. “But they’re not enough for us… coding is not software engineering.”
In a blog post, Windsurf said that while other models are good at writing code, they struggle to work across multiple surfaces, such as devices, IDEs, and the internet, as programmers often do. According to the startup, SWE-1 was trained using a new data model and “training recipes that encapsulate incomplete conditions, long-term tasks, and multiple surfaces.”
The startup describes SWE-1 as an “early proof of concept,” suggesting it could release more AI models in the future.