NullClaw Deploy Guide: The Smallest AI Assistant Infrastructure in Zig

Step-by-step guide to deploying NullClaw on a Linux VPS. Covers Zig binary, 22+ LLM providers, 17 chat channels, hybrid memory, sandboxing, and running on $5 hardware.

NullClaw is the smallest AI assistant I’ve found that still does everything: a single 678 KB static binary written in Zig that uses about 1 MB of RAM at runtime. No runtime dependencies, no VM, no framework overhead. It boots in under 2 milliseconds and runs on anything with a CPU, including $5 ARM boards.

This guide walks through deploying NullClaw on a VPS with your choice of LLM provider, chat channels, hybrid memory, and proper sandboxing.

NullClaw GitHub

What this guide covers

  • Building NullClaw from source or using Docker
  • Configuring 22+ LLM providers via OpenAI-compatible endpoints
  • Setting up Telegram, Discord, Slack, and other channels
  • Hybrid memory system with SQLite FTS5 + vector search
  • Multi-layer sandboxing (Landlock, Firejail, Bubblewrap, Docker)
  • MCP server integration
  • Running on edge hardware and $5 boards

If you’re comparing self-hosted bot options, our OpenClaw alternatives roundup includes NullClaw alongside ZeroClaw, nanobot, NanoClaw, and PicoClaw.

What NullClaw actually is

NullClaw is an AI assistant written entirely in Zig. The tagline is “null overhead, null compromise” — a static binary with zero runtime dependencies that runs on the cheapest hardware you can find.

The architecture uses vtable interfaces for every subsystem. Want to swap your LLM provider? Change one config line. Same for channels, memory backends, tools, tunnels, sandboxes, and peripherals.
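The swap-by-config idea can be sketched in a few lines of Python. This is only an analogy for the vtable pattern; none of the class or registry names below come from NullClaw’s codebase:

```python
from typing import Protocol

class Provider(Protocol):
    """Minimal provider interface: any backend implementing chat() is swappable."""
    def chat(self, prompt: str) -> str: ...

class OpenRouterProvider:
    def chat(self, prompt: str) -> str:
        return f"[openrouter] reply to: {prompt}"

class OllamaProvider:
    def chat(self, prompt: str) -> str:
        return f"[ollama] reply to: {prompt}"

# The registry plays the role of the vtable lookup: the config string
# picks the implementation, and nothing else in the program changes.
PROVIDERS = {
    "openrouter": OpenRouterProvider,
    "ollama": OllamaProvider,
}

def make_provider(name: str) -> Provider:
    return PROVIDERS[name]()

# Changing one config value swaps the whole backend:
assert make_provider("ollama").chat("hi").startswith("[ollama]")
```

The same lookup-by-name shape applies to channels, memory backends, tools, tunnels, sandboxes, and peripherals.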

678 KB binary · <2 ms startup · 3,230+ tests · 22+ providers · 17 channels · Pluggable everything

How it compares

|              | OpenClaw      | nanobot        | ZeroClaw         | NullClaw        |
|--------------|---------------|----------------|------------------|-----------------|
| Language     | TypeScript    | Python         | Rust             | Zig             |
| RAM          | > 1 GB        | > 100 MB       | < 5 MB           | ~1 MB           |
| Startup      | > 500 s       | > 30 s         | < 10 ms          | < 2 ms          |
| Binary       | ~28 MB        | N/A            | 3.4 MB           | 678 KB          |
| Tests        | N/A           | N/A            | 1,017            | 3,230+          |
| Channels     | 4             | 9              | 8+               | 17              |
| Providers    | Several       | 13+            | 22+              | 22+             |
| Min hardware | Mac Mini $599 | Linux SBC ~$50 | Any $10 hardware | Any $5 hardware |

The benchmark numbers are measured on 0.8 GHz edge hardware. NullClaw starts in under 8 milliseconds even on the slowest targets.

Why Zig

Zig is a systems programming language that delivers C-like performance with zero runtime overhead:

| Property               | What it means for NullClaw       |
|------------------------|----------------------------------|
| No garbage collector   | Deterministic memory, no pauses  |
| No hidden allocations  | You control every byte           |
| Static binary          | No dependencies, drop and run    |
| Compile-time execution | Config validation at build time  |
| Cross-compilation      | Build for ARM from x86           |

That 678 KB binary covers roughly the same feature set as alternatives that are orders of magnitude heavier.

Installation

There are two main ways to install NullClaw: build from source, or use Docker.

To build from source, you need Zig 0.15.2 (this exact version is required):

# Install Zig (Linux)
curl -L https://ziglang.org/download/0.15.2/zig-linux-x86_64-0.15.2.tar.xz | tar -xJ
sudo mv zig-linux-x86_64-0.15.2 /usr/local/zig
sudo ln -s /usr/local/zig/zig /usr/local/bin/zig

# Clone and build
git clone https://github.com/nullclaw/nullclaw.git
cd nullclaw

# Release build (678 KB)
zig build -Doptimize=ReleaseSmall

# The binary is at zig-out/bin/nullclaw
ls -lh zig-out/bin/nullclaw

If you prefer containers:

# Build the image
docker build -t nullclaw .

# Run
docker run -d \
  -v ~/.nullclaw:/root/.nullclaw \
  -p 3000:3000 \
  --name nullclaw \
  nullclaw

After building, run the onboard wizard:

# Quick setup (non-interactive)
nullclaw onboard --api-key sk-... --provider openrouter

# Or interactive wizard
nullclaw onboard --interactive

This creates ~/.nullclaw/ with a config.json and workspace folder.

Check that everything’s working:

nullclaw status

Configuring LLM providers

NullClaw supports 22+ providers out of the box. Every provider uses the OpenAI-compatible interface, so switching is just a config change.
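Because every provider speaks the same chat-completions schema, only the base URL and key differ between them. A rough sketch of what such a request looks like (the endpoint path and field names follow the OpenAI chat-completions convention; nothing here is NullClaw-specific):

```python
def build_chat_request(base_url, api_key, model, message, temperature=0.7):
    """Assemble an OpenAI-compatible chat-completions request.

    Only base_url and api_key vary between providers; the payload shape
    stays identical, which is why switching is a one-line config change.
    """
    return {
        "url": f"{base_url.rstrip('/')}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": {
            "model": model,
            "temperature": temperature,
            "messages": [{"role": "user", "content": message}],
        },
    }

req = build_chat_request("https://openrouter.ai/api/v1",
                         "sk-or-example", "anthropic/claude-sonnet-4", "hello")
assert req["url"].endswith("/chat/completions")
```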

Supported providers

| Provider   | Use case                       |
|------------|--------------------------------|
| OpenRouter | Gateway to any model           |
| Anthropic  | Claude models                  |
| OpenAI     | GPT models                     |
| Ollama     | Local models                   |
| Venice     | Privacy-focused                |
| Groq       | Fast inference                 |
| Mistral    | Mistral models                 |
| xAI        | Grok models                    |
| DeepSeek   | DeepSeek models                |
| Together   | Open-source models             |
| Fireworks  | Fast open-source               |
| Perplexity | Search-augmented               |
| Cohere     | Command models                 |
| Bedrock    | AWS-hosted models              |
| Gemini     | Google Gemini                  |
| Custom     | Any OpenAI-compatible endpoint |

Get an API key

For OpenRouter (recommended for flexibility):

  1. Go to openrouter.ai
  2. Create an account and generate an API key
  3. The key starts with sk-or-

MiniMax coding plan — 10% off

If you want to use MiniMax M2.5 or GLM-5 through OpenRouter, check our referral links for discounts on coding plans.

Add to config

Edit ~/.nullclaw/config.json:

{
  "default_provider": "openrouter",
  "default_temperature": 0.7,
  "models": {
    "providers": {
      "openrouter": {
        "api_key": "sk-or-your-key"
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-sonnet-4"
      }
    }
  }
}

Using custom providers

Any OpenAI-compatible endpoint works:

{
  "models": {
    "providers": {
      "minimax": {
        "api_key": "your-minimax-key",
        "base_url": "https://api.minimax.chat/v1"
      }
    }
  }
}

Then set "default_provider": "minimax".

If you’re looking for cost-effective models to pair with NullClaw, two open-source options work well:

MiniMax M2.5 — A 230B MoE model with 10B active parameters. Scores 80.2% on SWE-bench Verified at a fraction of Claude’s cost ($0.15/M input tokens). The Lightning variant runs at 100 tokens/sec. Available through OpenRouter or directly via the MiniMax API.

GLM-5 — A 744B MoE model from Z.AI with 40B active parameters. 95.8% on SWE-bench Verified and near-zero hallucinations. Available through OpenRouter or Z.AI coding plans.

Both are covered in detail in our nanobot setup guide, which walks through API key configuration and provider setup. The same models work with NullClaw through OpenRouter or direct provider endpoints.

Test it

nullclaw agent -m "What's 42 * 17?"

Channel setup

NullClaw supports 17 chat channels. Pick what you use.

Telegram

  1. Create a bot via @BotFather
  2. Copy the bot token
  3. Get your Telegram user ID (message @userinfobot)

Config:

{
  "channels": {
    "telegram": {
      "accounts": {
        "main": {
          "bot_token": "123456789:ABCdefGHIjklMNOpqrSTUvwxYZ",
          "allow_from": ["your_telegram_user_id"],
          "reply_in_private": true
        }
      }
    }
  }
}

Allowlist behavior

Empty allow_from means deny all. Use ["*"] to allow everyone, or add specific user IDs to restrict access.
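The allowlist rules are easy to mirror in a few lines. This Python sketch only illustrates the semantics described above; it is not NullClaw’s implementation:

```python
def is_allowed(sender_id: str, allow_from: list[str]) -> bool:
    """Allowlist semantics: empty list denies everyone, ["*"] allows
    everyone, otherwise only explicitly listed IDs get through."""
    if not allow_from:
        return False          # empty -> deny all (safe default)
    if "*" in allow_from:
        return True           # wildcard -> allow everyone
    return sender_id in allow_from

assert is_allowed("123", []) is False          # deny-all default
assert is_allowed("123", ["*"]) is True        # explicit wildcard
assert is_allowed("999", ["123"]) is False     # not on the list
```

Deny-by-default matters here: a bot with a valid token is reachable by anyone who finds it, so the allowlist is the primary access control.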

Discord

  1. Go to discord.com/developers/applications
  2. Create a bot and copy the token
  3. Enable MESSAGE CONTENT INTENT in the Bot settings

Config:

{
  "channels": {
    "discord": {
      "accounts": {
        "main": {
          "token": "your-discord-bot-token",
          "guild_id": "your-server-id",
          "allow_from": ["your_user_id"],
          "allow_bots": false
        }
      }
    }
  }
}

Slack

Config:

{
  "channels": {
    "slack": {
      "accounts": {
        "main": {
          "bot_token": "xoxb-your-bot-token",
          "app_token": "xapp-your-app-token",
          "allow_from": ["U1234567890"]
        }
      }
    }
  }
}

Other channels

| Channel     | Config key | Notes                  |
|-------------|------------|------------------------|
| iMessage    | imessage   | macOS only             |
| Matrix      | matrix     | Homeserver required    |
| WhatsApp    | whatsapp   | Via Meta webhook       |
| Signal      | signal     | signal-cli required    |
| Line        | line       | LINE Messaging API     |
| Webhook     | webhook    | Custom HTTP endpoint   |
| IRC         | irc        | Libera, OFTC, etc.     |
| Lark/Feishu | lark       | ByteDance workplace    |
| DingTalk    | dingtalk   | Alibaba workplace      |
| QQ          | qq         | Via go-cqhttp          |
| OneBot      | onebot     | Universal bot protocol |
| Email       | email      | IMAP/SMTP              |
| MaixCam     | maixcam    | Sipeed hardware        |

Full config example

Here’s a complete ~/.nullclaw/config.json with OpenRouter, Telegram, hybrid memory, and sandboxing:

{
  "default_provider": "openrouter",
  "default_temperature": 0.7,

  "models": {
    "providers": {
      "openrouter": { "api_key": "sk-or-..." }
    }
  },

  "agents": {
    "defaults": {
      "model": { "primary": "anthropic/claude-sonnet-4" },
      "heartbeat": { "every": "30m" }
    }
  },

  "channels": {
    "telegram": {
      "accounts": {
        "main": {
          "bot_token": "123:ABC",
          "allow_from": ["your_user_id"]
        }
      }
    }
  },

  "memory": {
    "backend": "sqlite",
    "auto_save": true,
    "embedding_provider": "openai",
    "vector_weight": 0.7,
    "keyword_weight": 0.3,
    "hygiene_enabled": true
  },

  "gateway": {
    "port": 3000,
    "require_pairing": true,
    "allow_public_bind": false
  },

  "autonomy": {
    "level": "supervised",
    "workspace_only": true,
    "max_actions_per_hour": 20
  },

  "runtime": {
    "kind": "native",
    "docker": {
      "image": "alpine:3.20",
      "network": "none",
      "memory_limit_mb": 512,
      "read_only_rootfs": true
    }
  },

  "security": {
    "sandbox": { "backend": "auto" },
    "resources": { "max_memory_mb": 512, "max_cpu_percent": 80 },
    "audit": { "enabled": true, "retention_days": 90 }
  },

  "tunnel": { "provider": "none" },
  "secrets": { "encrypt": true },
  "identity": { "format": "openclaw" }
}

Memory system

NullClaw’s memory is built on SQLite with no external dependencies:

| Layer          | Implementation                                        |
|----------------|-------------------------------------------------------|
| Vector DB      | Embeddings stored as BLOBs, cosine similarity search  |
| Keyword search | FTS5 virtual tables with BM25 scoring                 |
| Hybrid merge   | Configurable vector/keyword weights                   |
| Embeddings     | OpenAI, custom URL, or noop                           |
| Hygiene        | Automatic archival + purge of stale memories          |
| Snapshots      | Export/import for migration                           |

Config:

{
  "memory": {
    "backend": "sqlite",
    "auto_save": true,
    "embedding_provider": "openai",
    "vector_weight": 0.7,
    "keyword_weight": 0.3,
    "hygiene_enabled": true
  }
}

Set embedding_provider to noop if you don’t want to pay for embeddings. FTS5 keyword search still works.
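How the two weights combine the retrieval signals can be sketched like this. It is a toy Python illustration only (the real implementation is in Zig), and it assumes the BM25 keyword score has already been normalized to [0, 1]:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(vec_sim, keyword_score, vector_weight=0.7, keyword_weight=0.3):
    """Weighted merge of the vector and keyword signals, as configured above."""
    return vector_weight * vec_sim + keyword_weight * keyword_score

# Identical query/memory embeddings plus a middling keyword hit:
q, mem = [1.0, 0.0], [1.0, 0.0]
assert abs(hybrid_score(cosine(q, mem), 0.5) - 0.85) < 1e-9
```

With `embedding_provider` set to `noop`, the vector term effectively drops out and ranking falls back to the keyword side alone.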

Identity files

NullClaw supports two identity formats:

| Format   | Files                          | Use case             |
|----------|--------------------------------|----------------------|
| openclaw | IDENTITY.md, SOUL.md, USER.md  | Markdown-based       |
| aieos    | Single JSON file               | Portable AI personas |

Edit these in ~/.nullclaw/workspace/ to customize the bot’s personality and knowledge about you.

Security model

NullClaw locks things down at multiple levels, not just application-level allowlists.

Gateway security

| Layer             | Default   | What it does                           |
|-------------------|-----------|----------------------------------------|
| Localhost binding | 127.0.0.1 | Refuses public exposure                |
| Pairing required  | true      | 6-digit code exchange for bearer token |
| Tunnel required   | false     | Refuses 0.0.0.0 without tunnel         |

To expose the gateway, configure a tunnel:

{
  "tunnel": {
    "provider": "cloudflare"
  }
}

Supported tunnels: Cloudflare, Tailscale, ngrok, or custom binary.

Sandbox isolation

NullClaw auto-detects the best sandbox backend:

| Backend    | Platform    | Security level           |
|------------|-------------|--------------------------|
| Landlock   | Linux 5.13+ | Kernel-level             |
| Firejail   | Linux       | Namespaces               |
| Bubblewrap | Linux       | Lightweight containers   |
| Docker     | Any         | Full container isolation |

Config:

{
  "security": {
    "sandbox": { "backend": "auto" }
  }
}

Set "backend": "docker" for maximum isolation.

Encrypted secrets

API keys are encrypted with ChaCha20-Poly1305:

{
  "secrets": { "encrypt": true }
}

The encryption key is stored locally in ~/.nullclaw/.

Workspace scoping

With workspace_only = true, the bot can only access files inside its workspace. Symlink escape attempts are blocked through path canonicalization.
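Path canonicalization is what defeats symlink and `..` escapes: resolve both paths to their real locations first, then check containment. A Python sketch of the idea (not the Zig code):

```python
import os

def inside_workspace(path: str, workspace: str) -> bool:
    """Canonicalize both paths (resolving symlinks and ".." components),
    then check that the target still sits under the workspace root.
    A symlink pointing outside the workspace resolves to its real
    target, so the containment check rejects it."""
    real_path = os.path.realpath(path)
    real_ws = os.path.realpath(workspace)
    return os.path.commonpath([real_path, real_ws]) == real_ws

# A path that merely *looks* inside the workspace is still rejected
# once its ".." components are resolved:
assert inside_workspace("/home/bot/workspace/notes.txt", "/home/bot/workspace")
assert not inside_workspace("/home/bot/workspace/../.ssh/id_rsa", "/home/bot/workspace")
```

Checking the raw string prefix instead (`path.startswith(workspace)`) is the classic mistake this pattern avoids.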

MCP support

NullClaw supports Model Context Protocol servers:

{
  "mcp_servers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/user/documents"]
    }
  }
}

MCP tools are discovered and registered automatically. The bot can use them alongside built-in tools.

Scheduled tasks

NullClaw has a built-in cron scheduler:

# List scheduled tasks
nullclaw cron list

# Add a recurring task
nullclaw cron add --name "morning" --message "What's on my calendar today?" --cron "0 9 * * *"

# Run a task once
nullclaw cron run <task_id>

# Pause/resume
nullclaw cron pause <task_id>
nullclaw cron resume <task_id>

Tasks are persisted to JSON and survive restarts.
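A cron expression like `0 9 * * *` is just five match fields (minute, hour, day, month, weekday). A minimal Python sketch of how such an expression is evaluated, with ranges and steps left out for brevity:

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Match one cron field: '*' or a comma-separated list of numbers."""
    if field == "*":
        return True
    return value in {int(part) for part in field.split(",")}

def cron_matches(expr: str, when: datetime) -> bool:
    """Check a 5-field cron expression against a datetime."""
    minute, hour, day, month, weekday = expr.split()
    return (field_matches(minute, when.minute)
            and field_matches(hour, when.hour)
            and field_matches(day, when.day)
            and field_matches(month, when.month)
            and field_matches(weekday, when.isoweekday() % 7))  # cron: 0 = Sunday

# "0 9 * * *" from the example above fires at 09:00 every day:
assert cron_matches("0 9 * * *", datetime(2025, 1, 6, 9, 0))
assert not cron_matches("0 9 * * *", datetime(2025, 1, 6, 9, 30))
```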

CLI reference

| Command                            | Description               |
|------------------------------------|---------------------------|
| nullclaw onboard --api-key sk-...  | Quick setup               |
| nullclaw onboard --interactive     | Full wizard               |
| nullclaw agent -m "..."            | Single message            |
| nullclaw agent                     | Interactive chat          |
| nullclaw gateway                   | Start webhook server      |
| nullclaw daemon                    | Full autonomous runtime   |
| nullclaw status                    | System status             |
| nullclaw doctor                    | Diagnostics               |
| nullclaw channel doctor            | Channel health            |
| nullclaw service install           | Install as system service |
| nullclaw cron list/add/remove      | Manage scheduled tasks    |
| nullclaw skills list/install       | Manage skill packs        |
| nullclaw hardware scan             | Detect peripherals        |
| nullclaw migrate openclaw          | Import from OpenClaw      |

VPS deployment

For production use, here’s a complete VPS setup.

Server requirements

| Spec    | Minimum | Works with         |
|---------|---------|--------------------|
| CPU     | Any     | 0.8 GHz edge core  |
| RAM     | 256 MB  | 1 GB+ recommended  |
| Storage | 1 GB    | 10 GB for logs     |
| OS      | Linux   | Ubuntu 22.04+      |

NullClaw runs on $5 boards. I tested it on a Raspberry Pi Zero 2 W and it worked fine.

Quick deployment

ssh root@YOUR_SERVER_IP

# Update system
apt update && apt upgrade -y

# Install Zig
curl -L https://ziglang.org/download/0.15.2/zig-linux-x86_64-0.15.2.tar.xz | tar -xJ
sudo mv zig-linux-x86_64-0.15.2 /usr/local/zig
sudo ln -s /usr/local/zig/zig /usr/local/bin/zig

# Clone and build
git clone https://github.com/nullclaw/nullclaw.git
cd nullclaw
zig build -Doptimize=ReleaseSmall

# Install binary
sudo cp zig-out/bin/nullclaw /usr/local/bin/

# Initialize
nullclaw onboard --interactive

# Edit config
nano ~/.nullclaw/config.json

# Start daemon
nullclaw daemon

Systemd service

For automatic startup:

nullclaw service install
nullclaw service start
nullclaw service status

Or create manually:

[Unit]
Description=NullClaw AI Assistant
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/nullclaw daemon
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Save to /etc/systemd/system/nullclaw.service, then:

systemctl daemon-reload
systemctl enable nullclaw
systemctl start nullclaw

Running on edge hardware

NullClaw’s low resource usage makes it a good fit for edge deployment:

| Hardware              | RAM    | Status       |
|-----------------------|--------|--------------|
| Raspberry Pi Zero 2 W | 512 MB | ✅ Works     |
| Raspberry Pi 4        | 2-8 GB | ✅ Works     |
| Orange Pi Zero        | 256 MB | ✅ Works     |
| $5 AliExpress SBC     | 256 MB | ✅ Works     |
| MaixCam (RISC-V)      | 256 MB | ✅ Supported |

Build for ARM from x86:

zig build -Dtarget=aarch64-linux -Doptimize=ReleaseSmall

Gateway API

| Endpoint  | Method | Auth           | Description               |
|-----------|--------|----------------|---------------------------|
| /health   | GET    | None           | Health check              |
| /pair     | POST   | X-Pairing-Code | Exchange code for token   |
| /webhook  | POST   | Bearer token   | Send message              |
| /whatsapp | GET    | Query params   | Meta webhook verification |
| /whatsapp | POST   | Meta signature | WhatsApp incoming         |

Pairing flow:

# Start gateway (shows pairing code)
nullclaw gateway

# Exchange code for token
curl -X POST http://127.0.0.1:3000/pair \
  -H "X-Pairing-Code: 123456"

# Use token for requests
curl -X POST http://127.0.0.1:3000/webhook \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, nullclaw!"}'

NullClaw vs ZeroClaw vs nanobot

I run all three, so here’s a direct comparison:

| Aspect         | NullClaw 🟠       | ZeroClaw 🦀       | nanobot 🐍  |
|----------------|-------------------|-------------------|-------------|
| Language       | Zig               | Rust              | Python      |
| RAM usage      | ~1 MB             | < 5 MB            | ~100 MB     |
| Startup        | < 2 ms            | < 10 ms           | > 2 s       |
| Binary size    | 678 KB            | 3.4 MB            | N/A         |
| Channel count  | 17                | 8+                | 9           |
| Provider count | 22+               | 22+               | 13+         |
| Memory         | SQLite hybrid     | SQLite hybrid     | File-based  |
| Security       | Sandbox + pairing | Pairing + sandbox | Allowlists  |
| Min hardware   | $5 board          | $10 board         | $50 SBC     |
| Setup method   | zig build         | cargo install     | pip install |

NullClaw is the lightest option here. ZeroClaw is easier to set up if you’re already in the Rust ecosystem. nanobot has the broadest channel support.

Frequently asked questions

How much does it cost to run NullClaw?

Hardware: $5 for a cheap ARM board, or $0 if you already have a VPS. API costs depend on your provider — expect $5-50/month for personal use.

Can I run NullClaw without any API costs?

Yes. Configure Ollama as your provider and point it at a local model. You need hardware that can run inference, but there are no API bills.

Does NullClaw work on a Raspberry Pi Zero?

Yes. The 512 MB Pi Zero 2 W runs NullClaw without issues. The original Pi Zero (single-core) might struggle with inference but the assistant itself works.

Can multiple people use one NullClaw instance?

Yes. Add multiple user IDs to allow_from. Each person gets their own conversation context.

What’s the difference between gateway and daemon?

nullclaw gateway starts the webhook server only. nullclaw daemon starts the full autonomous runtime including all channels, heartbeat tasks, and scheduler.

Can I migrate from OpenClaw to NullClaw?

Yes. NullClaw has a built-in migration command:

nullclaw migrate openclaw --dry-run
nullclaw migrate openclaw

How do I add a new channel that’s not supported?

Implement the Channel vtable interface in src/channels/ and submit a PR. The architecture is designed for extensions.

If you’re comparing container-based isolation, see our NanoClaw deploy guide — it uses Docker containers with Claude’s Agent SDK. For a Python-based alternative with MiniMax and GLM-5 setup walkthroughs, check the nanobot setup guide.

If you want to explore other AI coding tools, our AI coding tools comparison covers the current landscape. For MCP basics that work across assistants, check the MCP introduction for beginners.

This article is also available in Spanish: Guía de Despliegue de NullClaw.