Most enterprise AI projects fail not because companies lack the technology, but because the models they use don’t understand their business. Models are often trained on the internet rather than on decades of internal documentation, workflows, and organizational knowledge.
French AI startup Mistral sees opportunity in this gap. On Tuesday, the company announced Mistral Forge, a platform that lets companies build custom models trained on their own data. Mistral unveiled the platform at Nvidia GTC, Nvidia's annual technology conference, where this year's focus is on AI and agentic models for the enterprise.
It's a logical move for Mistral, which has built its business around enterprise customers while rivals OpenAI and Anthropic have seen rapid consumer adoption. CEO Arthur Mensch said Mistral's enterprise focus is paying off, with the company on track to exceed $1 billion in annual recurring revenue this year.
Mistral said a key part of enhancing enterprise capabilities is giving companies more control over their data and AI systems.
“Forge’s role is to enable businesses and governments to customize AI models for their specific needs,” Elisa Salamanca, head of product at Mistral, told TechCrunch.
While some companies in the enterprise AI space already claim to offer similar capabilities, most focus on fine-tuning existing models or layering a company's own data on top through techniques such as retrieval-augmented generation (RAG). These approaches don't retrain the model itself; instead, they adapt it or query enterprise data at runtime.
Mistral, by contrast, says Forge will allow companies to train models from scratch. In theory, this could address some of the limitations of those more general approaches, such as improved handling of non-English or highly domain-specific data and more control over model behavior. Companies can also use reinforcement learning to train agentic systems, potentially reducing dependence on third-party model providers and avoiding risks such as model changes or deprecation.
Forge customers can build custom models using Mistral's extensive library of open-weight AI models, including smaller models such as the recently introduced Mistral Small 4. According to Mistral co-founder and chief engineer Timothée Lacroix, Forge can help companies extract more value from those existing models.
“The trade-off when building a small model is that it’s not as good as a larger model on every topic, so being able to customize the model allows you to choose what to focus on and what to leave out,” Lacroix said.
Mistral will advise on which model and infrastructure to use, but both decisions will be up to the customer, Lacroix said. And for teams that need more than guidance, Forge comes with a forward-deployed team of Mistral engineers, a model borrowed from the likes of IBM and Palantir, who embed directly with customers to surface the right data and adapt the product to their needs.
“As a product, Forge already comes with all the tools and infrastructure to build synthetic data generation pipelines,” Salamanca said. “But understanding how to build the right evaluations and ensuring they have the right amount of data is something companies typically don’t have the expertise for, and that’s what the FDEs provide.”
Mistral already offers Forge to partners including Ericsson, the European Space Agency, Italian consulting firm Reply, and Singapore’s DSO and HTX. Early adopters also include Dutch chipmaker ASML, which led Mistral’s Series C round last September at a valuation of €11.7 billion (approximately $13.8 billion at the time).
These partnerships represent what Mistral expects to be Forge’s primary customer profiles, according to Marjorie Janiewicz, chief revenue officer at Mistral: governments that need to adapt models to local language and culture; financial institutions with strict compliance requirements; manufacturers with customization needs; and high-tech companies that need to tailor models to their codebases.
