🚨 Local LLMs Are Coming Soon! Built Right Into the App!
We're excited to announce that Local LLM support is rolling out soon, directly inside the app.
What does this mean?
The app will have a new section that:
- Analyzes your system specs (CPU, GPU, RAM)
- Recommends the best local AI model for your hardware (see the sketch after this list)
- Lets you download and run it, with no extra setup needed
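For a rough idea of what the hardware check could look like, here's a minimal Python sketch. The thresholds, model names, and the psutil/nvidia-smi-based detection are illustrative assumptions, not the app's actual logic:

```python
# Hypothetical sketch of a hardware check and model recommendation.
# Thresholds and model names are illustrative only.
import subprocess

import psutil  # pip install psutil


def detect_vram_mb() -> int:
    """Return total VRAM in MB for an NVIDIA GPU, or 0 if none is found."""
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.total",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        return int(out.strip().splitlines()[0])
    except (OSError, subprocess.CalledProcessError, ValueError, IndexError):
        return 0


def recommend_model() -> str:
    """Pick a model tier from available RAM and VRAM (illustrative only)."""
    ram_gb = psutil.virtual_memory().total / 1024**3
    vram_gb = detect_vram_mb() / 1024

    if vram_gb >= 16 or ram_gb >= 32:
        return "llama-3-8b-instruct"   # hypothetical model id
    if vram_gb >= 8 or ram_gb >= 16:
        return "mistral-7b-instruct"   # hypothetical model id
    return "small-quantized-model"     # lightweight fallback


if __name__ == "__main__":
    print(f"Recommended model: {recommend_model()}")
```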
Why use a local LLM?
- Runs fully offline, great for NDA or restricted environments
- No API costs, completely free
- Private and secure: your data never leaves your device
- Full control: no cloud, no limits
Whether you’re working on sensitive projects or just want more control, this is a powerful option.
What models will be supported?
We'll support a range of models depending on your system, from lightweight to more powerful options, including LLaMA, Mistral, and others.
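For the curious, running one of these models locally typically looks something like the sketch below. It assumes the open-source llama-cpp-python runtime and a hypothetical model file; the app may use a different engine, and it will handle download and setup for you:

```python
# Minimal sketch of running a local model with llama-cpp-python
# (pip install llama-cpp-python). The model path is hypothetical;
# the app would download and configure the model automatically.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical file
    n_ctx=2048,  # context window size
)

# Everything runs on your machine: no network calls, no API keys.
output = llm(
    "Summarize the benefits of local LLMs in one sentence.",
    max_tokens=64,
)
print(output["choices"][0]["text"])
```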
Launching soon: we expect it later this week. Stay tuned!