Local-first LLM chat using Groq’s DeepSeek distill Llama 70B for an instant UI and snappy responses. Built on top of Basic’s user-owned data store technology, so all your data lives in your own data store on US servers and stays permanently under your control!
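To give a sense of how the local-first part works, here is a minimal sketch of the write-then-sync pattern: the message is persisted locally first, the UI renders immediately, and syncing to the hosted data store happens in the background. The `LocalStore` interface and its methods below are hypothetical stand-ins, not Basic's actual SDK.

```ts
// Hypothetical sketch of the local-first write path -- these names are
// illustrative stand-ins, not Basic's actual SDK.

type ChatMessage = {
  id: string;
  role: "user" | "assistant";
  content: string;
  createdAt: number;
};

interface LocalStore {
  // Resolves as soon as the write lands in the local store; background
  // sync to the hosted data store happens separately.
  put(collection: string, doc: ChatMessage): Promise<void>;
}

async function sendMessage(store: LocalStore, content: string): Promise<ChatMessage> {
  const msg: ChatMessage = {
    id: crypto.randomUUID(),
    role: "user",
    content,
    createdAt: Date.now(),
  };
  await store.put("messages", msg); // no network round trip before render
  return msg;
}
```

Because the UI only waits on the local write, the message appears instantly regardless of network conditions, which is where the "instant UI" feel comes from.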
Hey, we're Abhi and Rashid, makers of locl.chat! We were frustrated by the slowness of the DeepSeek app and website, so we decided to build a local-first version on top of our Basic database and sync tech. We chose the Groq-hosted DeepSeek distill model because we felt it had the best balance of speed and accuracy.
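For a concrete picture of the inference side, here is a minimal sketch of streaming tokens from Groq's OpenAI-compatible chat completions endpoint. The model id `deepseek-r1-distill-llama-70b` is our assumption of the Groq-hosted distill model's public name, and error handling is trimmed for brevity.

```ts
// Sketch of streaming a completion from Groq's OpenAI-compatible API.
// Assumes the model id `deepseek-r1-distill-llama-70b` on Groq's side.

async function* streamCompletion(apiKey: string, prompt: string): AsyncGenerator<string> {
  const res = await fetch("https://api.groq.com/openai/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "deepseek-r1-distill-llama-70b",
      messages: [{ role: "user", content: prompt }],
      stream: true, // tokens arrive as server-sent events
    }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next chunk

    for (const line of lines) {
      if (!line.startsWith("data: ")) continue;
      const data = line.slice(6).trim();
      if (data === "[DONE]") continue;
      const delta = JSON.parse(data).choices?.[0]?.delta?.content;
      if (delta) yield delta;
    }
  }
}

// Usage: append each token to the message pane as it arrives, e.g.
// for await (const token of streamCompletion(key, "Hello!")) { render(token); }
```

Rendering each delta as it streams in, rather than waiting for the full completion, is what keeps long responses feeling fast.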
This is an experimental open-source project that we threw together as quickly as we could, so please bear with us through any janky UI and bugs you hit (we're happy to fix them as you point them out). That said, we've personally started using this instead of other models and chat interfaces just because of how fast everything is!
We hope this can be a pleasant contribution to your workflow 😋