Privacy and performance are key drivers for self-hosting large language models (LLMs). DeepSeek-R1 has emerged as a powerful alternative to proprietary models. Here is how to run it locally.

Why Self-Host DeepSeek-R1?

DeepSeek-R1 offers reasoning capabilities comparable to top-tier models while being open-weight. Running it on your own hardware ensures 100% data privacy and zero API costs.

Prerequisites

  • Hardware: At least 16GB RAM (32GB+ recommended for larger quantizations).
  • Software: Ollama installed (Linux, macOS, or Windows).
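
If you want to confirm your machine meets these requirements before pulling the model, a quick check like the sketch below works. It assumes Python with the third-party psutil package installed (pip install psutil); psutil and the 16 GB threshold are illustrative choices, not part of Ollama itself.

```python
# Quick environment check before pulling the model.
# Assumes the third-party `psutil` package is installed (pip install psutil).
import shutil

import psutil

MIN_RAM_GB = 16  # minimum from the prerequisites above

total_gb = psutil.virtual_memory().total / (1024 ** 3)
print(f"Total RAM: {total_gb:.1f} GB")
if total_gb < MIN_RAM_GB:
    print("Warning: less than 16 GB of RAM; stick to smaller quantizations.")

# Verify the Ollama CLI is on the PATH.
if shutil.which("ollama") is None:
    print("Ollama not found -- install it from ollama.com first.")
else:
    print("Ollama CLI detected.")
```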

Installation Steps

  1. Install Ollama: Download from ollama.com.
  2. Pull the Model: Run ollama pull deepseek-r1 in your terminal.
  3. Run the Model: Start a chat with ollama run deepseek-r1.
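
Beyond the interactive chat, Ollama exposes an HTTP API on http://localhost:11434 by default, which you can script against. The minimal sketch below uses only Python's standard library to send a single prompt to deepseek-r1; the prompt text itself is just an example.

```python
# Minimal sketch: query the locally running model over Ollama's HTTP API.
# Ollama listens on http://localhost:11434 by default; standard library only.
import json
import urllib.request

payload = {
    "model": "deepseek-r1",
    "prompt": "Explain the difference between RAM and VRAM in two sentences.",
    "stream": False,  # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])
```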

Optimization for Performance

If you have an NVIDIA GPU, ensure CUDA is properly configured so the model can be offloaded to VRAM. For Mac users, Ollama automatically uses Metal acceleration.
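
A quick way to confirm your GPU is visible before loading the model is to query nvidia-smi, as in this sketch. It assumes the NVIDIA driver's nvidia-smi utility is on the PATH; this is a driver tool, not an Ollama feature, and on Macs the check simply falls through to the CPU/Metal message.

```python
# Sketch: confirm an NVIDIA GPU is visible before running the model.
# Relies on the `nvidia-smi` utility that ships with the NVIDIA driver.
import shutil
import subprocess

if shutil.which("nvidia-smi"):
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True,
        text=True,
        check=True,
    )
    print("Detected GPU(s):")
    print(result.stdout.strip())
else:
    print("No nvidia-smi found; Ollama will use the CPU or Apple Metal.")
```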

Automation Use Case

Connect your local Ollama instance to your workflow automation tool via its “Ollama” node to create autonomous, privacy-focused content pipelines.
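
As a minimal illustration of such a pipeline, independent of any particular workflow tool, the sketch below batch-summarizes local text files through the same local API, so no document ever leaves your machine. The notes folder name and the prompt wording are placeholders, not part of Ollama.

```python
# Illustrative sketch of a privacy-focused pipeline: summarize local text
# files with the locally hosted model, so no document leaves the machine.
# The "notes" folder and the prompt wording are placeholders.
import json
import pathlib
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def summarize(text: str) -> str:
    """Send one document to the local deepseek-r1 instance and return the summary."""
    payload = {
        "model": "deepseek-r1",
        "prompt": f"Summarize the following document in three bullet points:\n\n{text}",
        "stream": False,
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

for path in pathlib.Path("notes").glob("*.txt"):  # placeholder input folder
    print(f"--- {path.name} ---")
    print(summarize(path.read_text(encoding="utf-8")))
```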