Unsloth

Open-source finetuning of LLMs

5.0 • 1 review

245 followers

Unsloth fine-tunes LLMs (Llama 3, Mistral, Gemma, Qwen, Phi) 2x faster with up to 80% less memory. Open source, with free Colab notebooks. Now with reasoning capabilities!
Free
Launch tags:
Open Source • Artificial Intelligence • GitHub
Launch Team

Zac Zuo
Hunter
📌
Hi everyone! Sharing Unsloth, an amazing open-source project that makes finetuning large language models (LLMs) significantly faster and more memory-efficient. If you've ever wanted to customize an LLM but were intimidated by the resource requirements, Unsloth is definitely worth a try. What's cool about it:
🚀 2x Speed, Up to 80% Less Memory: Massive performance gains without sacrificing accuracy.
🦙 Wide Model Support: Works with Llama 3 (all versions!), Mistral, Gemma 2, Qwen 2.5, Phi-4, and more.
💻 Free Colab Notebooks: Get started immediately, for free, with their Colab notebooks. No expensive hardware needed.
💡 Reasoning Capabilities Added: Reproduce the DeepSeek-R1 "aha" moment.
🔓 Open Source: Fully open source and actively developed.
Unsloth is all about making LLM finetuning accessible to everyone, not just those with huge GPU budgets.
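For anyone wondering what those Colab notebooks roughly boil down to, here is a minimal sketch of a LoRA finetune with Unsloth's Python API. The model name, LoRA settings, toy dataset, and training hyperparameters below are illustrative placeholders, not a definitive recipe:

# Rough sketch of a LoRA finetune with Unsloth; values below are illustrative.
import torch
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load a 4-bit quantized base model to keep memory usage low.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # any supported model works
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Any dataset with a "text" column works; this tiny one is just a placeholder.
dataset = Dataset.from_dict({"text": ["### Question: What is 2+2?\n### Answer: 4"]})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        output_dir="outputs",
    ),
)
trainer.train()

After training, the adapters can be saved or merged just like any other PEFT model; the official notebooks walk through exporting to GGUF and other formats.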
Shadman Nazim
@zaczuo Loved that name! 😅🔥
Shivam Singh

The combination of speed and memory efficiency is a game-changer, especially for those who are just venturing into this area and might not have access to high-end hardware.
Congrats on the launch! Best wishes and sending lots of wins :)

Daniel Han

@whatshivamdo Thank you so much for the support! :D

Max Comperatore
good. I will consume.