Chris Messina

1yr ago

Groq® - Hyperfast LLM inference running on custom-built chips (LPUs, not GPUs)

Groq's LPU™ Inference Engine (LPU stands for Language Processing Unit™) is a new type of end-to-end processing unit system that delivers exceptionally fast inference, on the order of ~500 tokens/second.
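To put ~500 tokens/second in perspective, here is a back-of-the-envelope sketch; the response length and the comparison throughput figure are illustrative assumptions, not Groq benchmarks:

```python
# Rough latency math: what a steady decode rate means for response time.
# All figures below are illustrative assumptions, not measured benchmarks.

def generation_time(num_tokens: int, tokens_per_second: float) -> float:
    """Seconds needed to stream num_tokens at a steady decode rate."""
    return num_tokens / tokens_per_second

# A ~1,000-token answer at the quoted ~500 tokens/s:
print(f"{generation_time(1000, 500):.1f} s")  # → 2.0 s

# The same answer at a hypothetical ~50 tokens/s deployment:
print(f"{generation_time(1000, 50):.1f} s")   # → 20.0 s
```

So at that rate, a long multi-paragraph answer streams in a couple of seconds rather than tens of seconds.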