Fine-tune open-source LLMs (including Llama 2, Falcon, etc.) in minutes. With Taylor AI, you can focus on experimentation & building better models, not on digging through Python libraries or keeping up with every open-source LLM. And you get to own your models.
Hey Product Hunt!
This is Ben & Brian, and we're excited to announce Taylor AI!
Taylor AI (YC S23) empowers enterprises to start fine-tuning open-source LLMs in seconds, so that data science and engineering teams can focus on building great products instead of worrying about GPUs, debugging Python libraries, and keeping up with every new LLM.
We started Taylor AI because we saw companies rushing to adopt AI, but struggling to integrate one-size-fits-all chat models like GPT-4 with their proprietary data. Fine-tuning is a great way to customize models, but much of the tooling available for fine-tuning models today is buggy and hard to use. We want to make fine-tuning accessible to every developer and data scientist.
With Taylor, you can:
• Start a training run in seconds
• Fine-tune state-of-the-art open-source LLMs
• Own your model
• Benefit from cutting-edge techniques like QLoRA and sequence packing (sketched below)
• Focus on experimentation, not squashing Python bugs
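For context on what Taylor abstracts away, here's a minimal, generic sketch of a QLoRA fine-tune with sequence packing using common open-source tooling (transformers, peft, bitsandbytes, trl). This is not Taylor's API: the base model, dataset path, and hyperparameters are illustrative placeholders, and exact argument names vary between library versions.

```python
# Illustrative QLoRA + sequence-packing sketch (not Taylor AI's API).
# Assumes transformers, peft, bitsandbytes, trl, and datasets are installed;
# exact argument names vary across library versions.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig, prepare_model_for_kbit_training
from trl import SFTTrainer

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder base model

# Load the base model with 4-bit NF4 quantization -- the "Q" in QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# Small trainable LoRA adapters sit on top of the frozen 4-bit weights
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

# Your proprietary data: a JSONL file with a "text" field (placeholder path)
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=lora_config,
    dataset_text_field="text",
    packing=True,  # sequence packing: concatenate short examples into full-length sequences
    args=TrainingArguments(
        output_dir="llama2-qlora",
        num_train_epochs=1,
        per_device_train_batch_size=4,
        learning_rate=2e-4,
    ),
)
trainer.train()
trainer.save_model("llama2-qlora")  # the adapters are yours to merge or deploy
```

With Taylor, the idea is that you never have to write or debug this boilerplate yourself.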
We're excited for you to try Taylor AI out: every new user can launch up to 3 training jobs for free. Let us know if you have any questions or concerns by emailing us at contact@trytaylor.ai or using the chat on our website (that goes straight to us, not a bot!).
What are you fine-tuning LLMs for? Comment below!