MakeHub.ai - LLM Provider arbitrage to get the best performance for the $
OpenAI-compatible endpoint. A single API that routes each request to the cheapest and fastest provider for the chosen model. Works with closed and open LLMs. Real-time benchmarks (price, latency, load) run in the background. Usable directly now in Roo and Cline forks.
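Because the endpoint is OpenAI-compatible, a client only needs to assemble the standard chat-completions request against MakeHub's base URL. A minimal sketch of that request shape, using only the standard library; the base URL and model id below are assumptions for illustration, not confirmed values:

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, messages: list):
    """Assemble URL, headers, and JSON body for an OpenAI-style
    chat completion call; MakeHub routes it to the best provider."""
    url = f"{base_url}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "messages": messages})
    return url, headers, body

url, headers, body = build_chat_request(
    "https://api.makehub.ai/v1",     # hypothetical base URL
    "YOUR_MAKEHUB_KEY",
    "llama-3.1-70b",                 # hypothetical model id
    [{"role": "user", "content": "Hello"}],
)
print(url)
```

Any OpenAI SDK should work the same way by overriding its base URL to point at MakeHub instead of api.openai.com.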