AI Ollama Helper
Local LLM Hub

Troubleshooting

Quick diagnostics and proven fixes to get Ollama working on Windows.

Quick diagnostics

1) Is the CLI available?

ollama --version

2) Is the local API reachable?

Open http://localhost:11434/api/tags in a browser; a JSON list of your models means the server is running.
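
If you prefer the command line, PowerShell's built-in Invoke-RestMethod should return the same JSON list of models (assuming Ollama is listening on its default port):

Invoke-RestMethod http://localhost:11434/api/tags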

3) Is the port in use?

netstat -ano | findstr 11434
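
If that command prints a line, the number in the last column is the process ID (PID) that owns the port. To see which program that is, substitute the PID you found for 12345 below:

tasklist /FI "PID eq 12345"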

4) Can you list models?

ollama list

5) Test a minimal run:

ollama run llama3
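
For a non-interactive smoke test you can also pass a prompt directly; llama3 is just an example tag here, so substitute any model you have already pulled:

ollama run llama3 "Reply with the word OK"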

If you use PowerShell, you can also try: Test-NetConnection -ComputerName localhost -Port 11434

Common issues & fixes

CLI not recognized — Open a new terminal after installation so PATH refreshes. Reboot if needed; reinstall if the issue persists.
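
To confirm whether the current shell can see the executable at all, either of these PowerShell checks will report its location (or an error if PATH has not refreshed yet):

where.exe ollama
Get-Command ollama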

Port 11434 busy — Find the PID via netstat, stop the conflicting app, then retry. Allow local connections in Windows Firewall when prompted.
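
Once netstat has given you the PID, a sketch for stopping the conflicting process (replace 12345 with the actual PID, and be sure it is not something you still need):

taskkill /PID 12345 /F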

Download stuck/slow — Check internet and disk space, pause heavy downloads/scans, retry later. Ensure antivirus isn’t scanning large model files continuously.
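
A quick way to check free space on the drive that holds your models (this assumes the default C: location):

Get-PSDrive C | Select-Object Used,Free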

Out of memory (RAM/VRAM) — Try smaller models or lower quantizations; close GPU‑heavy apps; ensure drivers are up to date. CPU mode can work for small models.
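
If you have an NVIDIA GPU, the nvidia-smi tool that ships with the driver shows how much VRAM is free before you load a model; for system RAM, Task Manager's Performance tab is enough:

nvidia-smi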

Performance is poor — Enable GPU acceleration (see CUDA/DirectML), keep models on SSD, and use smaller quantizations.
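
To check whether a loaded model is actually using the GPU, recent Ollama releases include a ps command whose PROCESSOR column reports the CPU/GPU split:

ollama ps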

Firewall/antivirus blocks — Allow local access for Ollama; add an exception if a corporate policy blocks localhost connections.
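
As a sketch, an inbound allow rule for the Ollama executable can be added from an elevated PowerShell prompt; the path below assumes the default per-user install location and may differ on your machine:

New-NetFirewallRule -DisplayName "Ollama" -Direction Inbound -Program "$env:LOCALAPPDATA\Programs\Ollama\ollama.exe" -Action Allow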

Manage and clean models

List installed models:

ollama list

Remove a model to free space, then re‑pull if needed:

ollama rm MODEL_NAME
ollama pull MODEL_NAME

Keeping only the models you use helps avoid disk and memory pressure.
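
To see how much disk the model store currently uses, this PowerShell one-liner sums the default model directory (models live under %USERPROFILE%\.ollama\models unless you have set OLLAMA_MODELS to point elsewhere); the result is in gigabytes:

(Get-ChildItem "$env:USERPROFILE\.ollama\models" -Recurse -File | Measure-Object Length -Sum).Sum / 1GB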


Community‑driven guide. Not affiliated with the official Ollama project.