I use the Ollama model library to pick and pull the exact LLM I want to run locally, like DeepSeek-R1. The workflow is straightforward: copy the model name from the library, pull it in Open WebUI, and you're ready to chat on your own server.
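If you'd rather script the same flow instead of clicking through Open WebUI, here's a minimal sketch using the official `ollama` Python client. It assumes the client is installed (`pip install ollama`), the Ollama server is running locally, and that `deepseek-r1` is the model tag you copied from the library page.

```python
import ollama

# Pull the model from the Ollama library
# (equivalent to `ollama pull deepseek-r1` on the CLI,
# or pulling the model inside Open WebUI).
ollama.pull("deepseek-r1")

# Chat with the locally served model.
response = ollama.chat(
    model="deepseek-r1",
    messages=[{"role": "user", "content": "Summarize what you can do."}],
)

# Print the assistant's reply.
print(response["message"]["content"])
```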