We have this deployed in our data centers now, but McLeod also works extremely well, and with some of the newer fifth- and sixth-generation GPUs the efficiency is outstanding.
Recently I was on a long flight, and having Ollama (with llama2) running locally really helped me prototype some quick changes to our product without having to rely on spotty plane Wi-Fi.
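A minimal sketch of what that offline prototyping workflow could look like against Ollama's local HTTP API (the `/api/generate` endpoint on the default port 11434, per Ollama's documentation). The model name and prompt here are illustrative, not from the quote:

```python
import json

def build_generate_request(model: str, prompt: str, stream: bool = False) -> bytes:
    # JSON payload for Ollama's local /api/generate endpoint.
    # With stream=False, the server returns a single JSON response.
    return json.dumps({"model": model, "prompt": prompt, "stream": stream}).encode()

# Illustrative prompt; any local model pulled via `ollama pull` would work.
payload = build_generate_request("llama2", "Suggest a name for a caching helper function")

# To actually send it, a local `ollama serve` must be running:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=payload,
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

Because everything runs on localhost, the loop works with no network at all once the model weights are downloaded.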
Ollama is the best way to run LLMs locally and the easiest way to test, build, and deploy new models. It has opened my eyes to the world of LLMs; a fantastic product.