- Mistral’s Forge is positioned as an on-prem or isolated training platform that lets enterprises build custom models on proprietary data without sending data to external APIs.
- The product targets the core weaknesses of the API-based AI delivery model: data sovereignty risk, exposure of proprietary datasets, and per-token costs that balloon at high query volumes.
- Forge shifts control to the customer—model ownership, infrastructure ownership, and the decision of when to retrain—rather than relying on provider-managed updates and ongoing API dependency.
- At low usage levels (e.g., under ~10,000 queries/day) APIs can be cheaper, but at enterprise scale (millions of queries) custom training plus self-hosted inference can drive per-query costs down substantially.
- If enterprises adopt Forge-style “build your own model” workflows, it threatens the long-term revenue model of API providers like OpenAI and Anthropic by removing recurring per-call spend.

Mistral launched Forge at Nvidia's GTC conference on March 17, and the pitch is direct: train custom AI models on your proprietary data, on your infrastructure, under your control. No data leaves your environment. No API calls to external providers. You own the model and the infrastructure it runs on.
CEO Arthur Mensch said Mistral is on track to surpass $1 billion in annual recurring revenue this year. For a company that didn't exist three years ago, that number is a statement about where enterprise AI spending is actually going.
Forge is Mistral's answer to a question that every large enterprise is asking: should we keep renting AI through API calls, or should we build our own?
The API economy problem
Most enterprise AI deployments today work the same way. You send your data to an external API, a model hosted by OpenAI, Anthropic, or Google processes it, and you get results back. You pay per token. The model improves when the provider ships updates. Your data flows through someone else's infrastructure.
This works fine for many use cases. It falls apart for organizations with strict data sovereignty requirements, proprietary datasets that create competitive advantage, or workloads where per-token costs at scale get expensive fast.
A bank processing millions of loan applications through an external API is sending customer financial data to a third party. A pharmaceutical company running drug interaction analysis through Claude is sharing proprietary research data with Anthropic's infrastructure. A defense contractor using GPT for document analysis is routing classified-adjacent information through OpenAI's servers.
These organizations want AI. They don't want the data exposure that comes with the current delivery model.
What Forge offers
Forge is a platform for training custom models from Mistral's base architectures using only your data. The model trains on your infrastructure (or Mistral's isolated cloud instances), and the resulting model belongs to you. No shared infrastructure, no data commingling, no API dependency.
| Dimension | API model (OpenAI, Anthropic) | Forge (Mistral) |
|---|---|---|
| Data handling | Your data sent to provider's infrastructure | Your data stays on your infrastructure |
| Model ownership | Provider owns the model | You own the trained model |
| Cost structure | Per-token, scales with usage | Training cost + inference on your hardware |
| Customization | Prompt engineering, some fine-tuning | Full custom training on proprietary data |
| Dependency | Ongoing API dependency | Self-contained after training |
| Updates | Provider pushes updates | You control when/whether to retrain |
The economics shift with scale. For organizations running fewer than roughly 10,000 AI queries per day, the API model is usually cheaper. For organizations running millions of queries, training a custom model and running inference on owned hardware can cut per-query costs to a fraction of API pricing.
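The crossover can be sketched with a back-of-the-envelope model. Every number below is an illustrative placeholder, not Mistral, OpenAI, or Anthropic pricing; the point is the shape of the curves, not the specific figures.

```python
# Back-of-the-envelope comparison: per-token API billing vs. one-time
# training amortized over owned hardware. All prices are hypothetical.

def api_cost_per_day(queries_per_day, tokens_per_query=1500,
                     price_per_1k_tokens=0.01):
    """Daily spend when every query is billed per token by a provider."""
    return queries_per_day * tokens_per_query / 1000 * price_per_1k_tokens

def self_hosted_cost_per_day(queries_per_day, training_cost=500_000,
                             amortization_days=730,
                             hardware_cost_per_day=800):
    """Daily spend when you train once and serve on owned hardware.
    Training amortizes over a fixed horizon; inference cost is roughly
    flat with respect to volume until hardware capacity is exhausted."""
    return training_cost / amortization_days + hardware_cost_per_day

for q in (1_000, 10_000, 100_000, 1_000_000):
    api, own = api_cost_per_day(q), self_hosted_cost_per_day(q)
    winner = "API" if api < own else "self-hosted"
    print(f"{q:>9,} queries/day: API ${api:>9,.2f} "
          f"vs self-hosted ${own:>9,.2f} -> {winner}")
```

The API line grows linearly with volume while the self-hosted line is nearly flat, which is why the winner flips somewhere between thousands and millions of queries per day; where exactly depends entirely on your token counts, negotiated rates, and hardware costs.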
Mistral launched Forge alongside Mistral Small 4, a new model optimized for enterprise deployment. Small 4 is designed to be the base that enterprises customize through Forge, creating a model trained on your domain knowledge and proprietary data.

Why this threatens the API providers
OpenAI and Anthropic's business models depend on enterprises continuing to rent access to their models. Every API call is revenue. Every enterprise that builds its own model is revenue that disappears permanently.
Forge doesn't compete with OpenAI on model quality. It competes on a different axis entirely: ownership and control. The pitch isn't "our model is better." The pitch is "you should own your model."
For enterprises that have been building AI products on top of API access, Forge introduces a strategic question they've been deferring: at what point does it make more sense to invest in building your own model than to keep paying per-token for someone else's?
The answer depends on your specific situation. How much proprietary data do you have that would make a custom model noticeably better than a general-purpose one? How many queries do you run per day, and when does the per-token cost exceed the amortized cost of training your own? How important is data sovereignty to your business, for regulatory compliance or competitive protection?
For most small and mid-size companies, the API model still wins. For large enterprises with proprietary data, high query volumes, and regulatory constraints, Forge is making the alternative viable.
The $1 billion signal
Mistral reaching $1 billion ARR on an enterprise-first strategy tells you something about where the money actually is in AI. Consumer AI products get the headlines. Enterprise AI deployments write the checks.
Mistral isn't built around a consumer chatbot or a consumer image generator. It sells models and infrastructure to businesses. And it's growing faster than companies with ten times its brand recognition.
The Tesco partnership (three years, full operational deployment), the Forge launch, and the revenue trajectory all point in the same direction: Mistral is building the enterprise AI company that OpenAI and Anthropic are trying to become, without the consumer baggage.
The enterprise strategy shift
If your organization is planning AI investments for the next 12-18 months, Forge changes the option set. Before Forge, the choice was which API provider to use. Now there's another option: build your own model on your own data.
This doesn't mean every company should rush to build custom models. It means the cost-benefit analysis has changed. The question "should we build or rent?" now has a concrete "build" option with an identifiable vendor, a working platform, and transparent pricing.
For organizations sitting on large proprietary datasets, the first step is evaluating whether a custom model trained on that data would outperform a general-purpose model for your specific use cases. In healthcare, financial services, legal, and manufacturing, the answer is probably yes. Proprietary data contains domain knowledge that general-purpose models simply don't have.
The enterprises that figure this out first will have AI that knows their business in a way that no competitor using generic API models can replicate.
If you’re building AI into a regulated or IP-sensitive business, start by mapping which workflows can’t tolerate third-party data handling and estimating your daily query volume to see whether API pricing will compound into a long-term liability. For high-throughput or sovereignty-constrained use cases, evaluate a build-and-host path (like Forge) by pricing out training plus ongoing inference on your own hardware and comparing that total to per-token spend. Either way, treat model ownership and retraining control as a strategic decision, not an implementation detail: it determines how dependent you’ll be on external providers over time.
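One way to frame that evaluation is as a payback-period question: how many days of API spend would it take to recoup a one-time training investment? The sketch below uses purely illustrative figures, not quoted vendor prices.

```python
import math

def break_even_days(queries_per_day, tokens_per_query=1500,
                    price_per_1k_tokens=0.01, training_cost=500_000,
                    hosting_cost_per_day=800):
    """Days until cumulative API spend exceeds training cost plus
    cumulative self-hosting spend. Returns None if the API stays
    cheaper indefinitely at this volume. All defaults are
    hypothetical placeholder prices."""
    daily_api = queries_per_day * tokens_per_query / 1000 * price_per_1k_tokens
    daily_margin = daily_api - hosting_cost_per_day
    if daily_margin <= 0:
        return None  # self-hosting never pays back at this volume
    return math.ceil(training_cost / daily_margin)

print(break_even_days(5_000))      # low volume: API stays cheaper (None)
print(break_even_days(1_000_000))  # high volume: payback in weeks
```

A finite payback period well inside your planning horizon is the quantitative signal that the "build" option deserves serious due diligence; a result of None says keep renting, at least at today's volumes.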