Llama 3
Meta's flagship open‑weight model family. Pull and run it locally with Ollama on Windows in seconds.
Quick start
Pull the default Llama 3 model and start an interactive local chat:
ollama pull llama3
ollama run llama3
Tip: Keep models on an SSD for faster loads.
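While a model is running, Ollama also serves a local HTTP API on port 11434. Below is a minimal sketch of a non-streaming request to the /api/generate endpoint using only the Python standard library; the prompt text is illustrative.

```python
import json
import urllib.request

# Ollama listens on localhost:11434 by default while the server is running.
# This sends one non-streaming generate request and prints the reply.
payload = json.dumps({
    "model": "llama3",
    "prompt": "Why is the sky blue?",  # illustrative prompt
    "stream": False,                   # single JSON response, not a token stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=60) as resp:
        print(json.loads(resp.read())["response"])
except OSError:
    # Server not reachable; start it first, e.g. with: ollama run llama3
    print("Could not reach Ollama on localhost:11434")
```

The same request works from any language or from curl, which is handy for wiring a local model into scripts and editors.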
Variants & hardware
8B: runs well on most modern PCs; 16 GB of RAM recommended. 70B: requires high‑end hardware with substantial RAM or VRAM (roughly 40 GB or more even for quantized builds).
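The RAM guidance above can be sanity-checked with a back-of-the-envelope estimate: quantized weights take roughly parameters × bits-per-weight ÷ 8 bytes, ignoring the KV cache and runtime overhead. The function name and the 4-bit figure below are illustrative assumptions, not Ollama specifics.

```python
def approx_weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough size of quantized weights in GB; ignores KV cache and overhead."""
    return params_billion * bits_per_weight / 8

print(approx_weight_gb(8, 4))   # → 4.0  (8B at 4-bit fits in 16 GB with headroom)
print(approx_weight_gb(70, 4))  # → 35.0 (70B at 4-bit needs ~40 GB+ in practice)
```

Actual usage is higher than the weight size alone, since context length and the runtime both add memory on top.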
For acceleration, see GPU Acceleration. For model sizes/speed, check Benchmarks.
Community‑driven guide. Not affiliated with the official Ollama project.