Mistral AI
Open and portable generative AI for devs and businesses
Zac Zuo

Mistral Small 3 — High performance in a 24b open-source model

Mistral Small 3 is Mistral's most efficient and versatile model. Pre-trained and instruct-tuned versions, Apache 2.0, 24B parameters, 81% MMLU, 150 tokens/s. No synthetic data, so it's a great base for any reasoning task.
Zac Zuo
Hunter
Hey everyone! 👋 Check out Mistral Small 3 – it's setting a new benchmark for "small" LLMs (under 70B)! 🚀 This 24B-parameter model from Mistral AI offers performance comparable to much larger models, with a focus on efficiency. Here are the key features:
· Powerful & Efficient: State-of-the-art results with low latency (150 tokens/s).
· Locally Deployable: Runs on a single RTX 4090 or a 32GB-RAM MacBook (once quantized).
· Knowledge-Dense: Packs a lot of knowledge into a compact size.
· Versatile: Great for fast conversational agents, low-latency function calling, creating subject-matter experts (via fine-tuning), and local inference (for privacy).
It's also open-source under the Apache 2.0 License!
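For anyone curious what local inference looks like in practice: tools like Ollama and vLLM can serve the model behind an OpenAI-compatible HTTP endpoint. Here's a minimal sketch of querying such a server from Python — the endpoint URL and the `mistral-small` model tag are assumptions, so adjust them to match your setup:

```python
import json
import urllib.request

# Assumed OpenAI-compatible local endpoint (e.g. Ollama's default port);
# change URL and model tag to match your server configuration.
ENDPOINT = "http://localhost:11434/v1/chat/completions"
MODEL = "mistral-small"  # hypothetical local model tag

def build_chat_request(prompt: str, temperature: float = 0.3) -> dict:
    """Build an OpenAI-compatible chat-completion request payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask(prompt: str) -> str:
    """POST the request to the local server and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# Example usage (requires a running local server):
# print(ask("Summarize Mistral Small 3 in one sentence."))
```

Because the request shape is OpenAI-compatible, the same payload works against most local serving stacks without code changes.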
Odin Urdland
Oh that's really cool, congratulations on the release! Small open models are where I feel a lot of the really cool applications are at! Will check this out. ☺️
Zac Zuo
Hunter
@odinu Mistral is on fire, right? :)
🚀 Pierre-Henry 💡
What does that mean in three words? Mistral, please tell me!
Zac Zuo
Hunter
@phenrysay Summit's Parfait Preparation!