# Install Ollama on Windows
A quick, reliable setup: download the official installer, verify the CLI, pull your first model, and run locally — all in a few minutes.
## Step‑by‑step
Open the official download page and get the Windows .exe.
Launch the .exe and follow the prompts. The installer adds the ollama CLI to your PATH and registers a background service that serves the local API. If a Windows Firewall dialog appears, allow access so the local server can listen.
Open Command Prompt or PowerShell and check that Ollama is available. If the command isn’t found, open a new terminal window so the updated PATH takes effect.
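For example, a quick check from any terminal:

```shell
# Print the installed version; any version string means the CLI is on your PATH
ollama --version
```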
Download a starter model like Llama 3:
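One way to fetch the weights (the `llama3` tag is used here as an example; the download is several gigabytes, so allow time on a slow connection):

```shell
# Download the Llama 3 model weights into the local model store
ollama pull llama3
```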
Start a local chat session with the model:
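Assuming the example `llama3` model from the previous step:

```shell
# Open an interactive chat; type /bye to exit the session
ollama run llama3
```

If the model has not been pulled yet, `ollama run` downloads it first before starting the chat.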
To verify the local API is up, you can also open the tags endpoint in a browser: http://localhost:11434/api/tags. It returns a JSON list of the models installed locally.
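The same check works from a terminal with curl, which is bundled with Windows 10 and later:

```shell
# Query the local Ollama API; prints JSON listing the installed models
curl http://localhost:11434/api/tags
```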
## What’s next?
- For better performance with NVIDIA/DirectML, see GPU Acceleration.
- Explore more models in the Models Hub (Llama 3, Mistral, Qwen 2.5, Gemma 2, Phi‑4).
- If something goes wrong, visit Troubleshooting.
Community‑driven guide. Not affiliated with the official Ollama project.