Quick diagnostics
1) Is the CLI available?
2) Is the local API reachable?
3) Is the port in use?
4) Can you list models?
5) Can you complete a minimal run?
If you use PowerShell, you can also try: Test-NetConnection -ComputerName localhost -Port 11434
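Assuming a default install (CLI named `ollama`, API listening on localhost:11434), the five checks above can be run from a terminal; `llama3.2` is an illustrative model name, not a requirement:

```shell
# 1) CLI on PATH? Prints the version if the install succeeded.
ollama --version

# 2) Local API reachable? A default install serves on port 11434.
curl http://localhost:11434/api/version

# 3) Port in use? (Windows; no output means the port is free.)
netstat -ano | findstr :11434

# 4) Can you list models?
ollama list

# 5) Minimal end-to-end run (downloads the model on first use):
ollama run llama3.2 "Say hello"
```

If step 2 fails but step 1 succeeds, the server process likely isn't running; starting the Ollama app (or `ollama serve`) should bring the API up.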
Common problems and proven fixes
CLI not recognized — Open a new terminal after installation so PATH refreshes. Reboot if needed; reinstall if the issue persists.
Port 11434 busy — Find the PID via netstat, stop the conflicting app, then retry. Allow local connections in Windows Firewall when prompted.
Download stuck/slow — Check your internet connection and free disk space, pause other heavy downloads or scans, and retry later. Ensure antivirus isn’t continuously scanning large model files.
Out of memory (RAM/VRAM) — Try smaller models or lower quantizations; close GPU‑heavy apps; ensure drivers are up to date. CPU mode can work for small models.
Performance is poor — Enable GPU acceleration (see CUDA/DirectML), keep models on SSD, and use smaller quantizations.
Firewall/antivirus blocks — Allow local access for Ollama; add an exception if a corporate policy blocks localhost connections.
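For the port conflict above, a sketch of the Windows commands (the PID value is whatever netstat reports on your machine):

```shell
# Show which process owns port 11434; the PID is the last column.
netstat -ano | findstr :11434

# Stop that process by PID, then retry starting Ollama:
# taskkill /PID <pid> /F   (replace <pid> with the number netstat reported)
```

The taskkill line is left commented so you don't terminate a process until you have confirmed which one holds the port.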
List installed models:
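With the standard Ollama CLI:

```shell
# Shows each installed model with its size and last-modified time.
ollama list
```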
Remove a model to free space, then re‑pull if needed:
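For example ("llama3.2" is an illustrative model name; substitute one shown by `ollama list`):

```shell
ollama rm llama3.2     # delete the model's weights to free disk space
ollama pull llama3.2   # download it again later if needed
```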
Keeping only the models you use helps avoid disk and memory pressure.
Community‑driven guide. Not affiliated with the official Ollama project.