OpenAI-compatible endpoint. A single API that routes each request to the cheapest and fastest provider for each model. Works with both closed and open LLMs. Real-time benchmarks (price, latency, load) run continuously in the background. Usable directly now in Roo and Cline forks.
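Because the endpoint is OpenAI-compatible, any existing OpenAI client should work by pointing it at MakeHub's base URL. A minimal sketch of the request shape (the base URL here is a placeholder for illustration, not the documented one):

```python
import json

def build_chat_request(model, messages, api_key,
                       base_url="https://api.example-makehub.test/v1"):
    """Build an OpenAI-style chat completion request.

    The base URL is a placeholder; the official `openai` SDK with
    `base_url` set would send the same request shape.
    """
    url = f"{base_url}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {"model": model, "messages": messages}
    return url, headers, json.dumps(payload)

url, headers, body = build_chat_request(
    "gpt-4o", [{"role": "user", "content": "Hello"}], "sk-...")
```

Swapping providers then means changing nothing on the client side: the routing happens behind this one endpoint.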
The main difference from OR is that we do real-time arbitrage across the many providers we reference, so you always get the absolute best value for your \$ at the exact moment of inference, which we find very cool.
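The arbitrage can be pictured as scoring each provider on live benchmark numbers and picking the winner per request. This is an illustrative toy, not MakeHub's actual algorithm; the field names and weights are invented:

```python
def pick_provider(providers, price_weight=0.7, latency_weight=0.3):
    """Pick the provider with the best price/latency trade-off.

    `providers` is a list of dicts with hypothetical fields:
    price_per_mtok (USD per million tokens), latency_ms (time to
    first token), and load (0..1 fraction of capacity in use).
    Providers near capacity are skipped.
    """
    candidates = [p for p in providers if p["load"] < 0.95]
    if not candidates:
        raise RuntimeError("no provider has spare capacity")

    def score(p):
        # Lower is better: weighted blend of price and latency.
        return (price_weight * p["price_per_mtok"]
                + latency_weight * p["latency_ms"] / 100)

    return min(candidates, key=score)

benchmarks = [
    {"name": "provider_a", "price_per_mtok": 3.0, "latency_ms": 400, "load": 0.50},
    {"name": "provider_b", "price_per_mtok": 2.5, "latency_ms": 900, "load": 0.40},
    {"name": "provider_c", "price_per_mtok": 2.0, "latency_ms": 300, "load": 0.99},
]
best = pick_provider(benchmarks)
```

The point is that the choice is recomputed from fresh benchmarks at inference time, so the winner can change from one request to the next.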
Really exciting! The real-time arbitrage feature sounds like a game-changer for optimizing cost and performance. How does it handle model compatibility across different providers, especially with closed LLMs?
@evgenii_zaitsev1 It handles those well. We spent a lot of time standardizing everything from tool calls to prompt caching, so overall it works as if you had only a single API key. Closed LLMs were particularly difficult because most of them have their own frameworks: they claim to support OpenAI-compatible endpoints, but not across all of their features, so we had to build proxies for that. Hope that answers your question!
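The proxy idea above can be sketched as translating a provider-native response into the OpenAI tool-call shape, so callers only ever see one format. The non-OpenAI provider's field names here are invented purely for illustration:

```python
import json

def normalize_tool_call(provider_response):
    """Translate a hypothetical provider-native tool invocation into
    the OpenAI chat-completions `tool_calls` message shape."""
    if "tool_calls" in provider_response:
        # Already OpenAI-shaped: pass through untouched.
        return provider_response
    # Invented native shape: {"invoke": {"tool": ..., "args": {...}}}
    invoke = provider_response["invoke"]
    return {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_0",
            "type": "function",
            "function": {
                "name": invoke["tool"],
                # OpenAI encodes arguments as a JSON string.
                "arguments": json.dumps(invoke["args"]),
            },
        }],
    }

native = {"invoke": {"tool": "get_weather", "args": {"city": "Paris"}}}
msg = normalize_tool_call(native)
```

With one such adapter per provider, the client-facing API stays uniform even when a provider only partially implements OpenAI compatibility.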
Big congratulations on your product launch — it looks truly impressive and caught our attention! 🚀
I’m Liam from Scrapeless, and we’d love to explore a potential collaboration with you.
We offer a robust Deep SERP API, providing high-quality access to both Google Search and Google Trends data — fast, reliable, and tailored for AI-native products and analytics workflows.
We’d love to offer you free access to our API in exchange for a mention or shoutout on your Twitter or LinkedIn, and we're also happy to cover the promotion costs to help boost your visibility.
If this sounds interesting, I’d love to chat more — feel free to suggest a time or just reply here!