Unify dynamically routes each prompt to the best LLM and provider so you can balance cost, latency, and output quality with ease. Sign up now and get $100 free credits.
Hey everyone!
Super excited to be launching today!
So, why did we build this? As a team of engineers, we faced tons of problems trying to build efficient and effective LLM applications. With new models and providers constantly coming onto the scene, it's overwhelming and very hard to keep up.
So, what does Unify bring to the table?
Impartiality: We treat all models and providers equally; we don't have a horse in the race, so you can trust our quality, cost, and speed benchmarks.
Control: Choose which models and providers you want to route to, then adjust three sliders: quality, cost, and latency. That's it; the performance of your LLM app is now fully in your hands, not the providers'.
Self-improvement: As each new model and provider comes onto the scene, sit back and watch your LLM application automatically improve over time. We quickly add support for the latest and greatest, so your custom cost-quality-speed requirements stay fully optimized.
Focus: Don't stress about updating the model and provider every few weeks. Just specify your performance needs and get back to building great AI products. We'll handle the rest for you!
Observability: Don't want to route? No sweat. Quickly compare all models and providers and see which are truly best for your own needs, on your own prompts, for your own task.
Convenience: The power of all models and providers behind a single endpoint, queryable individually or via the router, all with a single API key. pip install unifyai, and away you go!
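To make the "single endpoint, single API key" idea concrete, here is a minimal sketch of querying such an OpenAI-compatible chat endpoint. The URL, the "model@provider" endpoint string format, and the helper name `build_request` are assumptions for illustration, not confirmed API details:

```python
import json

# Hypothetical endpoint URL, assumed OpenAI-compatible for this sketch.
UNIFY_URL = "https://api.unify.ai/v0/chat/completions"

def build_request(prompt: str, endpoint: str = "llama-3-8b-chat@together-ai") -> dict:
    """Build the JSON body for a single-prompt chat completion.

    `endpoint` is assumed to be either a "model@provider" string or a
    router spec that lets Unify pick the model/provider per prompt.
    """
    return {
        "model": endpoint,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarise this ticket in one sentence.")
print(json.dumps(payload, indent=2))
```

In practice you would POST this body to the endpoint with your single Unify API key as a Bearer token (e.g. via `requests.post(UNIFY_URL, json=payload, headers={"Authorization": f"Bearer {key}"})`); swapping models or providers then means changing only the endpoint string, not your code.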
I'm super excited to hear what you think; I'm all ears for comments, feedback and especially criticisms. Thanks so much for your support, and you're very welcome to follow us on our journey as we strive to help unify AI!
@dan_lenton So this is a 'semi-automated' chat.lmsys.org for setting your ELO for your data, essentially?
Looks nice. Glad to see you have a LangChain integration.
Hey @aris_nakos, yep, that's one way of thinking about it! We use GPT-4-as-a-judge, which is a bit different from the Elo score, but the concept is definitely similar :)
Great question @sheikhirfan10, one difference between us and other routers is that you can train your own custom router, as explained here: https://youtu.be/15wgxK1Cw0E
Crafting the right prompt is a big and important part, especially if you have cost limits. The app sounds like a solution for solopreneurs like me :) Wishing you luck, guys, and I definitely want to test Unify.
@dan_lenton Congratulations on launching Unify! It sounds like a powerful and much-needed tool for managing and optimizing the use of large language models. Could you provide more details on the observability features? What kind of analytics and insights can users expect to receive about the performance of different models?
This is a great question @ena_gluhakovic. I will add a full product walkthrough soon, but for now you can check out this video to get a feel for it :)
https://youtu.be/luDlixvzch8
Congratulations on the launch!
Unify looks like a robust remedy for navigating the intricate territory of LLM applications. The impartial benchmarks, adjustable routing across quality, cost, and latency, and automatic upgrades to newer models are all big wins for developers. Having a single endpoint and API key makes things even simpler. I'm thrilled to see how Unify will simplify and speed up AI development!
Exciting launch with Unify! Impartial routing, control over quality, cost, and latency plus automated improvements make it a game-changer for LLM applications. Kudos on simplifying the complex world of AI models!
Sounds nice! Love how it simplifies managing multiple LLMs and providers with just a few sliders, making sure our AI apps are always up to date and optimized.
Congratulations on the launch of Unify! Your tool's impartiality, control, and self-improvement features are game-changers for LLM application development. Excited to see how Unify simplifies and enhances AI product building.
I think you have a great idea here. As an AI dev, testing which AI works best for a certain use case is very time-consuming. I'm definitely going to check it out.
Good luck today with the launch!
I haven't used it yet, but I'm sure after trying it I'll have suggestions for improvements. It's looking great. Excited to contribute more to Ivy!
The ability to dynamically route each prompt to the best LLM is a fantastic feature. This will save so much time and optimize costs. Congrats on the launch today as well!!
Congrats on the launch! Unify seems like a much needed service in this space with the pace of new LLMs coming out and old ones being replaced. Love this idea and I look forward to trying it out soon!