Local by design
• Ollama runs models on your machine. Prompts and outputs stay local by default.
• No cloud calls are needed to generate text; only model downloads use the internet.
• You control what is stored — models can be added/removed anytime.
This guide covers how Ollama keeps your data local, how to run fully offline on Windows, and how our site handles cookies.
• After installing and pulling models, disconnect from the internet — generation works offline.
• Keep essential models cached so you don’t need to re‑download later.
• If needed, block outbound traffic for the Ollama process via Windows Firewall to enforce offline use.
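The steps above can be sketched as a one-time online setup followed by fully offline use. The model name below (llama3.2) is only an example; pull whichever models you need:

```shell
# One-time, while online: install Ollama, then pull the models you want.
# "llama3.2" is an example model name, not a requirement.
ollama pull llama3.2

# After disconnecting from the internet, generation still works,
# because the model weights are already cached on disk.
ollama run llama3.2 "Summarize why local inference is private."
```

Once the pull completes, no further network access is needed for text generation.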
• Models and caches are stored in a .ollama folder in your user profile (on Windows, typically %USERPROFILE%\.ollama).
• You can list installed models and remove them to free space:
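For example, using Ollama's built-in commands (the model name below is a placeholder; substitute one shown by the list command):

```shell
# Show every locally installed model with its size and modification date.
ollama list

# Remove a model you no longer need to free disk space.
# "llama3.2" is a placeholder; use a name from the list above.
ollama rm llama3.2
```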
Exact paths may vary by setup; keep models on an SSD for best performance.
• Allow local connections when prompted by Windows Firewall so apps can reach the Ollama API at localhost:11434 (its default port).
• To restrict outbound connections, create a Windows Firewall rule that blocks internet access for the Ollama executable while still allowing local loopback traffic.
Corporate environments may enforce policies; contact your admin if needed.
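As a sketch, an elevated PowerShell session can create such a rule with New-NetFirewallRule. The rule name and executable path below are assumptions (the path shown is a common default install location); verify them against your setup. Loopback traffic is not filtered by Windows Firewall, so local apps keep working:

```shell
# Run in an elevated PowerShell session.
# Block outbound internet traffic for the Ollama executable.
# The program path is a typical default; confirm where Ollama is installed.
New-NetFirewallRule -DisplayName "Ollama - block internet" `
    -Direction Outbound `
    -Program "$env:LOCALAPPDATA\Programs\Ollama\ollama.exe" `
    -Action Block `
    -RemoteAddress Internet

# Loopback is unaffected, so the local API still responds:
curl.exe http://localhost:11434/api/tags
```

Remove the rule later with Remove-NetFirewallRule -DisplayName "Ollama - block internet" if you want to re-enable downloads.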
• We only load basic analytics if you click “Accept” in the cookie banner.
• Click “Cookie preferences” in the footer to change your choice anytime.
• See the full Privacy Policy for details.
• Avoid pasting sensitive data on shared PCs even when running locally.
• Keep your OS and GPU drivers updated; apply security patches promptly.
• Back up important prompts/configs securely on your machine.
Community‑driven guide. Not affiliated with the official Ollama project.