MIOSN
We needed a better way to choose LLMs.
48 followers
We match your task with the best AI models — based on real inputs, real outputs, and what you actually care about.
MIOSN
It becomes tedious when every task requires you to sample across a plethora of models.
What is the pricing? I'm not seeing it on the site.
MIOSN
@thefullstack Hi, I'm Mark.
You’re absolutely right — testing every model in the pool takes time, money, and, above all, patience.
As for pricing: we haven’t rolled out billing yet. We're focused on working closely with users to refine the experience together. That’s why we’re giving new users free credits to test things out.
If you ever need more credits, just reach out to us on Discord and we'll be more than happy to send more your way!
@chohchmark Our org constantly needs to test models for their coding capabilities. We have our own benchmarks and more or less rely on humans to evaluate the outputs. If this could be automated in some way, that would be very useful.
MIOSN
@thefullstack I agree that coding capability is one of the most important practical benchmarks. We've already implemented batch evaluations on auto (we call each batch an "interview"), so how about we let you know when coding becomes one of our evaluation criteria in the near future? We're on our way and hope to become one of your main supporters soon.
@chohchmark Sounds awesome, looking forward!
This would be really helpful given the market situation
MIOSN
@charvibothra True! We couldn't agree with you more.
With more than 300 LLMs available on a single unified endpoint like OpenRouter, even getting started is overwhelming. We had to build a solution, and we're here to help everyone who faces the same challenges!