
- 🤖 Run LLMs on your laptop, entirely offline
- 📚 Chat with your local documents
- 👾 Use models through the in-app Chat UI or an OpenAI-compatible local server
Absolutely beautiful user interface. It's super easy to set up and start using. I've used Ollama, but since its UI is a separate project, it's a bit harder to set up. LM Studio also has a very good collection of models available compared to Ollama. My favourite thing about LM Studio is that it shows the model size upfront, so I don't have to dig around to find it. The CUDA runtime is also a great plus.
- 1. Download LM Studio for your operating system from lmstudio.ai.
- 2. Click the 🔎 icon on the sidebar and search for "DeepSeek"
- 3. Pick an option that will fit on your system. For example, with 16GB of RAM you can run the 7B or 8B parameter distilled models; with roughly 192GB of RAM or more, you can run the full 671B parameter model.
- 4. Load the model in the chat, and start asking questions!
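Once a model is loaded, you don't have to stay in the in-app Chat UI: LM Studio also exposes an OpenAI-compatible local server (by default at http://localhost:1234/v1), so existing OpenAI client code can point at the local model instead. Below is a minimal sketch in Python, assuming the local server is running and a DeepSeek distill is loaded; the model identifier is illustrative, so use the name shown in LM Studio for the model you downloaded.

```python
# Minimal sketch: query a model loaded in LM Studio via its
# OpenAI-compatible local server (default: http://localhost:1234/v1).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local server
    api_key="lm-studio",                  # any non-empty string works for a local server
)

response = client.chat.completions.create(
    # Assumed identifier for a 7B DeepSeek distill; replace with the
    # model name shown in LM Studio on your machine.
    model="deepseek-r1-distill-qwen-7b",
    messages=[
        {"role": "user", "content": "Explain what a distilled model is in one paragraph."}
    ],
)

print(response.choices[0].message.content)
```

Because the endpoint mirrors the OpenAI API shape, any tool or script that already speaks to OpenAI can usually be redirected to the local model just by changing the base URL.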
Of course, you can also run other models locally using LM Studio, like Llama 3.2, Mistral, Phi, Gemma, DeepSeek, and Qwen 2.5.