Build a Local AI Agent with OpenClaw on Raspberry Pi (2026 Guide)

Why rent a VPS when an $80 Raspberry Pi can run your AI agent 24/7 from home? With OpenClaw on a Raspberry Pi 5, you get a private, always-on assistant that keeps your data on your network — no cloud required.

This guide walks you through the complete setup: from choosing hardware to having a working AI agent on Telegram, with optional Home Assistant integration and remote access via Tailscale.

What you’ll have at the end: A local AI agent running on your Raspberry Pi, accessible via Telegram from anywhere, with optional smart home control and fully local LLM inference via Ollama.

Why Raspberry Pi?

| Factor | Raspberry Pi | VPS |
| --- | --- | --- |
| Monthly cost | ~$3 electricity | $6–20/month |
| Data privacy | All local | Third-party server |
| Latency (local) | <10ms | 50–200ms |
| API dependency | Optional (Ollama) | Required |
| Always-on | Yes (low power, ~5W) | Yes |
| Setup difficulty | Medium | Easy |

The Raspberry Pi approach shines when privacy matters and you want to minimize recurring costs. If you need cloud reliability, check our VPS deployment guide instead.

Hardware Requirements

Recommended: Raspberry Pi 5 (8GB)

The Pi 5's faster CPU and NVMe support make it the target platform for this guide.

Minimum: Raspberry Pi 4 (8GB)

The Pi 4 works, but local LLM inference is noticeably slower. The 4GB model is not recommended — Ollama alone needs 4GB+ for useful models.
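
A rule of thumb makes the RAM requirement concrete: a quantized model's weights take roughly parameters × bits-per-weight / 8 bytes, plus around a gigabyte of overhead for context and runtime. A quick sketch of that arithmetic (an approximation, not an exact figure):

```shell
# Rough RAM (in GB) needed for a quantized model's weights:
#   gigabytes ~= params_in_billions * bits_per_weight / 8
est_gb() {
  echo $(( $1 * $2 / 8 ))
}

est_gb 8 4   # an 8B model at 4-bit quantization: ~4 GB
est_gb 7 4   # a 7B model at 4-bit: ~3 GB (integer math rounds down)
```

Add a gigabyte or so for the KV cache and the OS, and it is clear why 4GB boards are out for local inference.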

What Won’t Work

Boards with less than 8GB of RAM for local inference: a Pi 3, Pi Zero, or 2GB/4GB Pi 4 can still run OpenClaw against cloud APIs, but can't fit a useful Ollama model alongside the OS.

Step 1: OS Setup (15 minutes)

Flash Raspberry Pi OS Lite (64-bit) using Raspberry Pi Imager. The Lite (headless) variant saves RAM for AI workloads.

In Imager settings, pre-configure:

  - Hostname: openclaw (so the Pi is reachable at openclaw.local)
  - Username: pi
  - SSH: enabled
  - Wi-Fi credentials, if you aren't using Ethernet

Boot the Pi, SSH in:

ssh pi@openclaw.local

Update the system:

sudo apt update && sudo apt upgrade -y

Step 2: Install Node.js (5 minutes)

OpenClaw requires Node.js 22+:

curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y nodejs
node --version  # Should show v22.x
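
A quick guard against a stale Node.js (Debian's default repos often ship an older version) is to parse the major number out of node --version before going further; a minimal sketch:

```shell
# Parse the major version from `node --version` output, e.g. "v22.11.0",
# and warn if it is older than OpenClaw's required 22.
version="$(node --version 2>/dev/null || echo v0.0.0)"
major="${version#v}"       # drop the leading "v"
major="${major%%.*}"       # keep everything before the first dot
if [ "$major" -ge 22 ]; then
  echo "Node.js $version is new enough"
else
  echo "Node.js $version is too old; OpenClaw needs 22+" >&2
fi
```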

Step 3: Install OpenClaw (10 minutes)

sudo npm install -g openclaw
openclaw --version

Initialize OpenClaw:

openclaw init

This creates ~/.openclaw/ with your configuration files.

Configure Your AI Provider

Edit ~/.openclaw/config.yaml:

Option A: Cloud API (simplest)

providers:
  anthropic:
    apiKey: "sk-ant-..."
  openai:
    apiKey: "sk-..."

Option B: Local Ollama (fully private — see Step 5)

providers:
  ollama:
    baseUrl: "http://localhost:11434"
    defaultModel: "llama3.1:8b"

Option C: Hybrid (recommended)

Use Ollama for routine tasks, cloud APIs for complex reasoning:

providers:
  ollama:
    baseUrl: "http://localhost:11434"
  anthropic:
    apiKey: "sk-ant-..."

models:
  default: "ollama/llama3.1:8b"
  thinking: "anthropic/claude-sonnet-4-6"

Step 4: Connect Telegram (10 minutes)

  1. Message @BotFather on Telegram
  2. /newbot → name it → get your token
  3. Add to config:
channels:
  telegram:
    token: "YOUR_BOT_TOKEN"
    allowedUsers:
      - your_telegram_id

Find your Telegram ID by messaging @userinfobot.

Start the gateway:

openclaw gateway start

Message your bot — you should get a response!

Step 5: Install Ollama for Local LLM (15 minutes)

This is what makes the Pi setup special: fully local AI inference.

curl -fsSL https://ollama.com/install.sh | sh

Pull a model that fits in 8GB RAM:

# Recommended: good balance of quality and speed on Pi 5
ollama pull llama3.1:8b

# Lighter alternative for faster responses
ollama pull phi3:mini

# For coding tasks
ollama pull codellama:7b

Test it:

ollama run llama3.1:8b "What is OpenClaw?"
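
The baseUrl in the provider config points at Ollama's HTTP API, which you can hit directly to confirm the server is answering before wiring it into OpenClaw. A sketch against the standard /api/generate endpoint (swap in whichever model you pulled):

```shell
# JSON request for Ollama's /api/generate endpoint; stream=false asks
# for one complete JSON reply instead of a token-by-token stream.
payload='{"model": "llama3.1:8b", "prompt": "Say hello in five words.", "stream": false}'

# POST to the local Ollama server. A healthy server returns a JSON
# object whose "response" field holds the generated text.
curl -s http://localhost:11434/api/generate -d "$payload" \
  || echo "Ollama is not reachable on port 11434" >&2
```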

Performance expectations on Pi 5 (8GB): expect an 8B model to generate on the order of one to three tokens per second, which is fine for short chat replies but slow for long answers; phi3:mini is noticeably snappier.

Tip: If you have an NVMe SSD, store models there. Model loading from microSD is painfully slow.
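
Ollama decides where model blobs live via the OLLAMA_MODELS environment variable, so redirecting storage to NVMe is a small systemd drop-in. A sketch, assuming the drive is mounted at /mnt/nvme (adjust the path to your mount point):

```shell
# Point the Ollama service's model store at the NVMe drive.
sudo mkdir -p /mnt/nvme/ollama-models /etc/systemd/system/ollama.service.d
sudo chown ollama:ollama /mnt/nvme/ollama-models

sudo tee /etc/systemd/system/ollama.service.d/models.conf << 'EOF'
[Service]
Environment=OLLAMA_MODELS=/mnt/nvme/ollama-models
EOF

sudo systemctl daemon-reload
sudo systemctl restart ollama
```

Models pulled before the change stay in the old location, so re-pull (or move) them afterwards.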

Step 6: Run as a System Service (5 minutes)

Create a systemd service so OpenClaw starts on boot:

sudo tee /etc/systemd/system/openclaw.service << 'EOF'
[Unit]
Description=OpenClaw AI Agent
After=network-online.target ollama.service
Wants=network-online.target

[Service]
Type=simple
User=pi
ExecStart=/usr/bin/openclaw gateway start --foreground
Restart=always
RestartSec=10
Environment=NODE_ENV=production

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl enable --now openclaw
sudo systemctl status openclaw

For more on service management and crash recovery, see our systemd service guide.

Step 7: Remote Access with Tailscale (10 minutes)

Access your Pi from anywhere without port forwarding:

curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

Authenticate via the URL it prints. Now you can SSH into your Pi from any device on your Tailscale network:

ssh pi@openclaw  # Via Tailscale MagicDNS

The Telegram bot already works from anywhere (it connects outbound to Telegram’s servers), so Tailscale is mainly for SSH management and accessing local services.
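
With Tailscale in place you can go further than the basic firewall rule in the security section and expose SSH only on the tailnet. A sketch assuming ufw and Tailscale's default tailscale0 interface (verify SSH works over Tailscale before adding the deny rule, or you can lock yourself out):

```shell
# Allow SSH only over the Tailscale interface...
sudo ufw allow in on tailscale0 to any port 22 proto tcp
# ...then block it everywhere else (LAN included).
sudo ufw deny 22/tcp
sudo ufw enable
sudo ufw status verbose
```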

Step 8: Home Assistant Integration (Optional, 20 minutes)

This is where the Raspberry Pi setup really shines. If you’re running Home Assistant on the same network (or same Pi via Docker), OpenClaw can control your smart home.

Install the Home Assistant Skill

openclaw skill install home-assistant

Configure in config.yaml:

skills:
  home-assistant:
    url: "http://homeassistant.local:8123"
    token: "YOUR_LONG_LIVED_ACCESS_TOKEN"

Generate the token in Home Assistant: Profile → Long-Lived Access Tokens → Create.
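
Before dropping the token into config.yaml, you can verify it directly against Home Assistant's REST API: an authorized GET of /api/ returns {"message": "API running."}. A sketch with placeholder URL and token:

```shell
# Smoke-test a Home Assistant long-lived access token.
HA_URL="http://homeassistant.local:8123"
HA_TOKEN="YOUR_LONG_LIVED_ACCESS_TOKEN"

# A 401 response means a bad token; a connection error means a bad URL.
curl -s -H "Authorization: Bearer $HA_TOKEN" "$HA_URL/api/" \
  || echo "Home Assistant is not reachable at $HA_URL" >&2
```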

What You Can Do

Once connected, tell your agent things like:

  - “Turn off the living room lights”
  - “What’s the temperature in the office?”
  - “Is the front door locked?”

Best practice: Use Home Assistant’s native automations (YAML/UI) for deterministic rules, and OpenClaw for natural-language control and AI-powered decisions. As Markaicode’s integration guide notes: “Hybrid approach works best.”

Security Considerations

Running OpenClaw locally doesn’t mean you can skip security. Recent CVE disclosures (including CVE-2026-25253 and the new Endor Labs batch) affect all deployments.

Essential steps:

  1. Keep OpenClaw updated: sudo npm update -g openclaw — verify you’re on ≥2026.2.21
  2. Run openclaw security --audit: New in v2026.2.21, checks for common misconfigurations
  3. Restrict allowedUsers: Only your Telegram ID, never leave it open
  4. Audit installed Skills: Check our ClawHub skill security guide — 12% of marketplace skills were found to be malicious
  5. Firewall: sudo ufw enable && sudo ufw allow ssh — don’t expose unnecessary ports
  6. Use Docker sandboxing for browser automation if enabled

See our security hardening guide for the full checklist.

Troubleshooting

“Out of memory” or agent crashes

Local models are the usual culprit. Switch to a smaller model (e.g., phi3:mini), keep only one model loaded at a time, and consider adding swap as a safety net (heavy swapping on microSD is very slow).

Slow first response

Ollama loads the model into RAM on the first request, which can take a while, especially from microSD. Responses are much faster while the model stays loaded.

Can’t connect to Telegram

Double-check the bot token in config.yaml, confirm the Pi has outbound internet access, and inspect the gateway logs (journalctl -u openclaw) for errors.

NVMe SSD not detected

On a Pi 5, check that the PCIe ribbon cable is seated, then enable PCIe by adding dtparam=pciex1 to /boot/firmware/config.txt and rebooting.

Cost Comparison (Annual)

| Setup | Hardware | Monthly | Annual Total |
| --- | --- | --- | --- |
| Pi 5 + Ollama only | ~$100 | ~$3 (electricity) | ~$136 |
| Pi 5 + Cloud API | ~$100 | ~$5–15 (API) | ~$160–280 |
| VPS + Cloud API | $0 | ~$12–25 | ~$144–300 |

The Pi pays for itself in 6–12 months vs. a VPS, and you get to keep your data.
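
The payback claim is easy to sanity-check: divide the hardware cost by the monthly saving relative to a VPS. A quick sketch using the table's figures:

```shell
# Months until the Pi's purchase price is recovered:
#   break_even = hardware_cost / (vps_monthly - pi_monthly)
break_even() {
  echo $(( $1 / ($2 - $3) ))
}

break_even 100 12 3   # vs. a $12/mo VPS: 11 months
break_even 100 25 3   # vs. a $25/mo VPS: 4 months
```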
