SelfHostLLM

Calculate the GPU memory you need for LLM inference

Calculate GPU memory requirements and maximum concurrent requests for self-hosted LLM inference. Supports Llama, Qwen, DeepSeek, Mistral, and more. Plan your AI infrastructure efficiently.
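An estimate like this usually combines two costs: the model weights (parameter count times bytes per parameter, depending on quantization) and the per-request KV cache, which grows with context length. The VRAM left over after loading the weights, divided by the per-request KV cache size, bounds the number of concurrent requests. Below is a minimal sketch of that arithmetic in Python, assuming FP16 weights, a full multi-head-attention KV cache, and illustrative Llama-style shapes; the function, parameter names, and constants are hypothetical and not SelfHostLLM's actual formula.

```python
# Hypothetical sketch of a GPU-memory / concurrency estimate.
# Assumes full multi-head attention; GQA models shrink the KV cache
# by a factor of n_heads / n_kv_heads, which is not modeled here.

def estimate_max_concurrent_requests(
    gpu_vram_gb: float,        # total VRAM across GPUs
    num_params_b: float,       # model size in billions of parameters
    bytes_per_param: float,    # 2.0 for FP16; lower for quantized weights
    num_layers: int,
    hidden_size: int,
    context_len: int,          # max tokens per request (prompt + output)
    kv_bytes: float = 2.0,     # FP16 KV cache entries
    overhead_gb: float = 2.0,  # assumed CUDA context / framework overhead
) -> int:
    # Weights: billions of params * bytes per param ~= gigabytes.
    weights_gb = num_params_b * bytes_per_param
    # KV cache per request: 2 (K and V) * layers * tokens * hidden dim * bytes.
    kv_per_request_gb = (2 * num_layers * context_len * hidden_size * kv_bytes) / 1e9
    free_gb = gpu_vram_gb - weights_gb - overhead_gb
    return max(0, int(free_gb // kv_per_request_gb))

# Example: an 8B FP16 model with 4k context on a 24 GB card.
# Weights ~16 GB, KV cache ~2.1 GB per request -> roughly 2 concurrent requests.
print(estimate_max_concurrent_requests(
    gpu_vram_gb=24, num_params_b=8, bytes_per_param=2.0,
    num_layers=32, hidden_size=4096, context_len=4096,
))
```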
