CoPaw Setup Guide: Multi-Channel AI Assistant You Can Self-Host

Step-by-step guide to installing CoPaw on a VPS or locally. Covers pip install, Docker, model providers, channels like DingTalk, Discord and Telegram, skills, memory, and scheduled tasks.

CoPaw is an open-source personal AI assistant from the AgentScope team at Alibaba. It runs on your own machine or server and connects to DingTalk, Feishu, QQ, Discord, Telegram, and iMessage. I’ve been setting it up alongside my OpenClaw and nanobot instances. What got my attention was the built-in web console for configuration and the fact that it ships with three local model backends out of the box.

CoPaw GitHub

What this guide covers

  • Installing CoPaw via pip, one-line script, or Docker
  • Configuring cloud LLM providers (DashScope, OpenAI, Azure OpenAI)
  • Running local models with llama.cpp, MLX, and Ollama
  • Setting up channels — DingTalk, Feishu, QQ, Discord, Telegram, iMessage
  • Built-in skills, custom skills, and importing from skill hubs
  • Memory system with hybrid semantic + full-text search
  • Scheduled tasks and heartbeat check-ins

If you’re comparing self-hosted AI assistant options, our OpenClaw alternatives roundup covers several projects including nanobot, NanoClaw, PicoClaw, and more. For a lighter Go-based option, see the PicoClaw setup guide.

What CoPaw actually does

CoPaw stands for “Co Personal Agent Workstation.” The AgentScope team built it on top of AgentScope and ReMe for memory management. The pitch: one assistant that connects to your messaging apps and runs tasks on a schedule without you having to ask.

The architecture:

You (DingTalk / Feishu / QQ / Discord / Telegram / iMessage)
        ↓
CoPaw Server (running on your machine or VPS)
        ↓
LLM Provider (DashScope, OpenAI, Azure, Ollama, llama.cpp, MLX)
        ↓
Skills (cron, PDF, Word/Excel/PPT, news, file reader, browser, custom)

Messages come in from whatever chat app you use, CoPaw routes them to whatever LLM you picked, and the model can use built-in skills to handle files, run scheduled jobs, browse the web, or deal with documents. There’s a web console at http://127.0.0.1:8088/ where you configure everything instead of editing config files by hand.
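None of the names below are CoPaw's actual API, but the routing idea can be sketched in a few lines of Python (channel names come from this guide; the `Message` type, dispatcher, and stub LLM are invented for illustration):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Message:
    channel: str  # e.g. "telegram", "dingtalk"
    text: str

def route(msg: Message, llm: Callable[[str], str]) -> str:
    """Hand the incoming text to whichever LLM backend is configured;
    the real server additionally exposes skills to the model as tools."""
    return llm(msg.text)

# Usage with a stub "LLM" that just echoes in upper case:
reply = route(Message("telegram", "summarize my day"), str.upper)
```

The point is only that the server sits between channels and the model: swap the channel or the LLM and the rest stays the same.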

How it compares to OpenClaw and nanobot

| Feature | CoPaw | OpenClaw | nanobot |
| --- | --- | --- | --- |
| Language | Python | TypeScript | Python |
| Built by | AgentScope (Alibaba) | Community | HKUDS |
| Install | pip / one-liner / Docker | curl one-liner | pip |
| Web console | Yes (built-in at :8088) | No (third-party dashboards) | No |
| Channels | DingTalk, Feishu, QQ, Discord, Telegram, iMessage | Telegram, WhatsApp, Slack, Discord | Telegram, Discord, WhatsApp, Slack, Feishu, DingTalk, Email, QQ |
| Local models | llama.cpp, MLX, Ollama | Ollama | vLLM |
| Memory | File-based + vector/BM25 hybrid search | File-based + semantic search | File-based |
| Skills | Built-in + importable from hubs | Community skills | Built-in |
| Scheduled tasks | Cron + heartbeat | Cron | Cron |
| License | Apache 2.0 | MIT | MIT |

Where CoPaw pulls ahead: the built-in web console means you don’t need third-party dashboards, and the DingTalk/Feishu/QQ support is first-class rather than bolted on. If those are your daily chat apps, CoPaw saves you a lot of config headaches.

Installation

CoPaw gives you five ways to get running. I’ll cover the three most practical ones.

Installing on a Hetzner VPS

If you want CoPaw running 24/7 on a server, a cheap VPS does the job. I use a Hetzner CX22 (2 vCPU, 4GB RAM) for €3.99/month.

Get Started with Hetzner

Get €20 in credits when you sign up through our referral link. That covers about 5 months of running CoPaw.

SSH into your server and run:

ssh root@YOUR_SERVER_IP
apt update && apt upgrade -y
curl -fsSL https://copaw.agentscope.io/install.sh | bash

Open a new shell session, then:

copaw init --defaults
copaw app

To keep it running after you close the SSH session, use a process manager like systemd or run it in a tmux/screen session.
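For the systemd route, a minimal unit file sketch is below. The binary path, user, and working directory are assumptions; verify where the installer placed `copaw` on your system (e.g. with `which copaw`):

```ini
# /etc/systemd/system/copaw.service  (sketch; adjust paths to your install)
[Unit]
Description=CoPaw AI assistant
After=network-online.target

[Service]
ExecStart=/root/.local/bin/copaw app
WorkingDirectory=/root/.copaw
Restart=on-failure
User=root

[Install]
WantedBy=multi-user.target
```

Then enable it with `systemctl daemon-reload && systemctl enable --now copaw`.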

Uninstalling

copaw uninstall          # keeps config and data
copaw uninstall --purge  # removes everything

Model Configuration

Before CoPaw can do anything useful, you need to configure an LLM. Open the Console at http://127.0.0.1:8088/ and go to Settings → Models.

Cloud providers

CoPaw works with DashScope, ModelScope, OpenAI, Azure OpenAI, and Aliyun Coding Plan. For any of them:

  1. Go to Settings → Models in the Console
  2. Find the provider card and click Settings
  3. Enter your API key and click Save
  4. The card status changes to Available
  5. In the LLM Configuration section at the top, select the provider and model, then click Save

You can also set API keys as environment variables. For DashScope:

export DASHSCOPE_API_KEY=your_key_here

Or put it in a .env file in the working directory (default ~/.copaw/).

Using OpenAI or Azure OpenAI

These work through the custom provider system:

  1. Click Add provider on the Models page
  2. Enter a Provider ID (e.g. openai) and display name
  3. Click Settings, enter the Base URL (https://api.openai.com/v1 for OpenAI) and API key
  4. Click Models, add the model ID (e.g. gpt-4o)
  5. Select it in the LLM Configuration dropdown

Local models

CoPaw supports three local model backends. No API keys needed.

| Backend | Best for | Install command |
| --- | --- | --- |
| llama.cpp | Cross-platform (macOS, Linux, Windows) | pip install 'copaw[llamacpp]' |
| MLX | Apple Silicon Macs (M1–M4) | pip install 'copaw[mlx]' |
| Ollama | Anyone already using Ollama | pip install 'copaw[ollama]' |

To download and use a local model from the command line:

copaw models download Qwen/Qwen3-4B-GGUF
copaw models    # select the downloaded model
copaw app       # start the server

You can also download and manage models from the Console UI under Settings → Models. Click Models on the llama.cpp or MLX card, then Download model and enter the Hugging Face repo ID.

For Ollama, make sure the Ollama daemon is running first, then pull models through Ollama as usual:

ollama pull qwen3:4b

CoPaw syncs with whatever models Ollama has available.

If you’re new to running local models, our Ollama Docker install guide covers the basics.

Cost comparison

| Provider | Example Model | Typical Monthly Cost |
| --- | --- | --- |
| DashScope | Qwen 3.5 Plus | $5–20 |
| OpenAI | GPT-4o | $20–70 |
| Azure OpenAI | GPT-4o | $20–70 |
| Local (llama.cpp) | Qwen3 4B | $0 (compute only) |
| Local (Ollama) | Qwen3 4B | $0 (compute only) |

DashScope models run cheaper than OpenAI for comparable quality. If you want to avoid API bills altogether, llama.cpp or Ollama with a local model costs nothing beyond electricity.

Channel Setup

Channels are how you talk to CoPaw from your messaging apps. You can configure them through the Console (Control → Channels) or by editing config.json directly.

Discord

  1. Create a Discord application at discord.com/developers
  2. Go to the Bot tab, click Reset Token, and copy the bot token
  3. Under Privileged Gateway Intents, enable Message Content Intent
  4. Generate an OAuth2 invite URL with bot scope and Send Messages + Read Message History permissions
  5. Invite the bot to your server
  6. In the CoPaw Console, go to Control → Channels, click Discord, enable it, and paste the token

In config.json, it looks like this:

{
  "channels": {
    "discord": {
      "enabled": true,
      "token": "YOUR_DISCORD_BOT_TOKEN"
    }
  }
}
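If you'd rather script the edit than paste JSON by hand, here is a minimal stdlib-only sketch. The ~/.copaw/config.json location and the channels shape are taken from this guide; `enable_channel` is a hypothetical helper, not a CoPaw API:

```python
import json
from pathlib import Path

def enable_channel(config_path: Path, channel: str, token: str) -> dict:
    """Merge a channel block into config.json, creating the file if absent."""
    cfg = json.loads(config_path.read_text()) if config_path.exists() else {}
    cfg.setdefault("channels", {})[channel] = {"enabled": True, "token": token}
    config_path.write_text(json.dumps(cfg, indent=2))
    return cfg

# Usage (default working directory from this guide):
# enable_channel(Path.home() / ".copaw" / "config.json", "discord", "YOUR_TOKEN")
```

Because it merges rather than overwrites, existing channels in the file are preserved.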

Telegram

  1. Open Telegram, search for @BotFather, send /newbot
  2. Pick a name and username, copy the token
  3. In the Console, enable Telegram and paste the token

In config.json:

{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_TELEGRAM_BOT_TOKEN"
    }
  }
}

DingTalk

DingTalk setup involves creating a custom app in the DingTalk developer console. CoPaw has a built-in skill called dingtalk_channel_connect that walks you through credential lookup, Client ID/Secret configuration, and the manual steps. Enable the DingTalk channel in the Console and follow the guided prompts.

Feishu (Lark)

Same idea as DingTalk. Create an app in the Feishu Open Platform, grab the App ID and App Secret, and enter them in the Console. CoPaw also supports SOCKS proxy for Feishu if you’re behind a corporate firewall.

iMessage (macOS only)

If you’re running CoPaw on a Mac, iMessage works as a channel. This is one of the few self-hosted assistants that supports iMessage natively.

Multiple channels at once

CoPaw can connect to several channels simultaneously. Replies go to whichever channel you last talked in, and scheduled messages can target a specific channel.

Skills

Skills are how CoPaw does more than just chat. Several come built-in, and you can write your own or import from community hubs.

Built-in skills

| Skill | What it does |
| --- | --- |
| cron | Scheduled jobs — create, list, pause, resume, delete |
| file_reader | Read and summarize text files (.txt, .md, .json, .csv, .py, etc.) |
| pdf | Read, extract, merge, split, rotate, watermark, OCR PDFs |
| docx | Create, read, and edit Word documents |
| xlsx | Read, edit, and create spreadsheets |
| pptx | Create, read, and edit PowerPoint files |
| news | Fetch and summarize latest news from configured sources |
| browser_visible | Launch a headed browser for demos or CAPTCHA scenarios |

Managing skills in the Console

Go to Agent → Skills in the Console to see all loaded skills, toggle them on or off, create custom skills, or edit existing ones.

Importing skills from hubs

CoPaw can import skills from these sources:

  • https://skills.sh/
  • https://clawhub.ai/
  • https://skillsmp.com/
  • GitHub repositories (any repo with a SKILL.md file)

In the Console, go to Agent → Skills, click Import Skills, paste the URL, and confirm.

Creating custom skills

Drop a folder with a SKILL.md file into ~/.copaw/customized_skills/:

~/.copaw/
  customized_skills/
    my_research_skill/
      SKILL.md

The SKILL.md is plain Markdown that describes what the skill does:

---
name: my_research_skill
description: Research a topic and summarize findings
---

# Research Skill

When asked to research something:
1. Search the web for current information
2. Summarize key findings in bullet points
3. List sources

CoPaw picks up new skills on restart. Custom skills take priority over built-in ones when names collide.
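For reference, parsing that frontmatter takes only a few lines. This sketch mirrors the file layout above but is illustrative, not CoPaw's actual loader, and it handles only flat key: value pairs:

```python
import re
from pathlib import Path

FRONTMATTER = re.compile(r"^---\s*\n(.*?)\n---\s*\n", re.DOTALL)

def load_skill(skill_md: str) -> dict:
    """Parse the simple `key: value` frontmatter of a SKILL.md into a dict."""
    meta = {}
    m = FRONTMATTER.match(skill_md)
    if m:
        for line in m.group(1).splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
    return meta

def discover_skills(root: Path) -> dict:
    """Map skill-folder name -> metadata for every SKILL.md under root."""
    return {p.parent.name: load_skill(p.read_text())
            for p in root.glob("*/SKILL.md")}
```

Running `discover_skills(Path.home() / ".copaw" / "customized_skills")` against the layout above would yield one entry keyed my_research_skill.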

Memory System

CoPaw’s memory system uses ReMe and stores everything in plain Markdown files. There are two parts to it: context management that compresses long conversations before you hit token limits, and long-term memory that writes facts to files and indexes them so CoPaw can find them later.

Memory file structure

~/.copaw/
  MEMORY.md                    # Long-term facts, preferences, decisions
  memory/
    2026-03-05.md              # Daily log for today
    2026-03-04.md              # Yesterday's log
    ...

MEMORY.md holds persistent information — things like “I prefer Python 3.12” or “My team uses Slack for standups.” Daily logs capture what happened in each conversation, and the system auto-summarizes conversations when they get too long.

CoPaw uses vector semantic search and BM25 full-text search together. The fusion weights default to 70% vector, 30% BM25.

| Search type | Good at | Weak at |
| --- | --- | --- |
| Vector semantic | Finding related concepts with different wording | Exact token matching (function names, error codes) |
| BM25 full-text | Exact matches on specific terms | Synonyms and paraphrasing |
| Hybrid (both) | Best overall recall | Requires embedding API config |

To enable vector search, configure the embedding service:

export EMBEDDING_API_KEY=your_key
export EMBEDDING_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
export EMBEDDING_MODEL_NAME=text-embedding-v4

Without an embedding API key, CoPaw falls back to BM25 full-text search only.
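The 70/30 fusion itself is easy to picture. A minimal sketch of weighted score fusion is below; the min-max normalization and tie handling are my assumptions, not necessarily what ReMe does internally:

```python
def minmax(scores: dict) -> dict:
    """Normalize raw scores to [0, 1] so the two scales are comparable."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def fuse(vector_scores: dict, bm25_scores: dict,
         w_vector: float = 0.7, w_bm25: float = 0.3) -> list:
    """Weighted-sum fusion; a document missing from one retriever scores 0 there."""
    v, b = minmax(vector_scores), minmax(bm25_scores)
    fused = {d: w_vector * v.get(d, 0.0) + w_bm25 * b.get(d, 0.0)
             for d in set(v) | set(b)}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Rank memory files that appeared in either retriever's result list:
ranking = fuse({"a.md": 0.9, "b.md": 0.8}, {"b.md": 12.0, "c.md": 9.0})
```

With the default weights, a document that tops the vector list wins even when it never appears in the BM25 list, which is why exact-match-heavy queries benefit from the BM25 component.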

Making things stick

Tell CoPaw directly:

“Remember: I always deploy to staging before production.”

It writes this to MEMORY.md. You can also edit memory files directly:

nano ~/.copaw/MEMORY.md
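Because memory is plain Markdown, it is easy to script as well. A hedged sketch that appends a dated fact follows; the MEMORY.md location comes from this guide, but the dated-bullet format is my own convention, not CoPaw's:

```python
from datetime import date
from pathlib import Path

def remember(fact: str,
             memory_file: Path = Path.home() / ".copaw" / "MEMORY.md") -> str:
    """Append a dated bullet to the long-term memory file."""
    line = f"- ({date.today().isoformat()}) {fact}\n"
    memory_file.parent.mkdir(parents=True, exist_ok=True)
    with memory_file.open("a", encoding="utf-8") as f:
        f.write(line)
    return line
```

Since the hybrid index covers these files, anything you append this way becomes searchable like any other memory.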

Scheduled Tasks and Heartbeat

CoPaw can run things on a schedule in two ways: cron jobs for specific commands and heartbeat for periodic check-ins.

Cron jobs

Create scheduled tasks through the CLI or Console:

# Create a job that runs at 9am every day
copaw cron create --type agent --name "morning-digest" --cron "0 9 * * *" --message "Summarize my pending tasks and calendar for today"

# List all jobs
copaw cron list

# Check a job's state
copaw cron state <job_id>

You can also manage cron jobs from the Console under Control → Cron Jobs.
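To sanity-check a schedule expression before handing it to copaw cron create, here is a minimal five-field matcher. It supports only plain numbers and `*`; real cron syntax also allows ranges, lists, and steps:

```python
from datetime import datetime

def cron_matches(expr: str, when: datetime) -> bool:
    """True if `when` satisfies a five-field cron expression
    (minute, hour, day-of-month, month, day-of-week; 0 = Sunday)."""
    fields = expr.split()
    if len(fields) != 5:
        raise ValueError("expected 5 fields: min hour dom month dow")
    actual = [when.minute, when.hour, when.day, when.month,
              when.isoweekday() % 7]  # map ISO Sunday (7) to cron's 0
    return all(f == "*" or int(f) == v for f, v in zip(fields, actual))

# "0 9 * * *" from the morning-digest example fires at 09:00 every day
```

A scheduler built on this would simply call `cron_matches(expr, now)` once per minute.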

Heartbeat

Heartbeat is CoPaw’s version of a scheduled check-in. You write a block of questions in a Markdown file, and CoPaw runs through them on a timer and sends the answers to your last-used channel. Set it up during copaw init or edit HEARTBEAT.md in the working directory.

Example HEARTBEAT.md:

Check the following and report back:
- Any new emails from clients?
- What meetings do I have today?
- Summarize overnight GitHub notifications

Set the interval and target in config.json:

{
  "heartbeat": {
    "enabled": true,
    "interval": "2h",
    "active_hours": "08:00-22:00"
  }
}
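To make those two settings concrete, here is a sketch of how they can be interpreted: `interval` as a timedelta and `active_hours` as a daily window check. The field formats come from the config above; the parsing logic is my assumption, not CoPaw's code:

```python
from datetime import time, timedelta

UNITS = {"m": 60, "h": 3600, "d": 86400}

def parse_interval(text: str) -> timedelta:
    """'2h' -> timedelta(hours=2); supports m/h/d suffixes."""
    return timedelta(seconds=int(text[:-1]) * UNITS[text[-1]])

def in_active_hours(window: str, now: time) -> bool:
    """'08:00-22:00' -> is `now` inside that daily window?"""
    start_s, end_s = window.split("-")
    return time.fromisoformat(start_s) <= now <= time.fromisoformat(end_s)

# Under this reading of the config above, a check-in that comes due
# at 23:30 would be skipped until the window reopens at 08:00.
```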

CLI Reference

| Command | Description |
| --- | --- |
| copaw init | Interactive setup wizard |
| copaw init --defaults | Quick setup with defaults |
| copaw app | Start the server |
| copaw models | Manage local models |
| copaw models download <repo> | Download a model from Hugging Face |
| copaw cron list | List scheduled jobs |
| copaw cron create | Create a new scheduled job |
| copaw uninstall | Remove CoPaw (keeps data) |
| copaw uninstall --purge | Remove CoPaw and all data |

Troubleshooting

CoPaw not responding

Check that the server is running and the model is configured:

  1. Open http://127.0.0.1:8088/ — if this doesn’t load, the server isn’t running
  2. Go to Settings → Models and verify a provider is Available and a model is selected
  3. Check the terminal output for error messages

API key issues

If you see authentication errors:

  • Double-check the API key in Settings → Models
  • For DashScope, verify the key at dashscope.console.aliyun.com
  • Try setting the key as an environment variable instead: export DASHSCOPE_API_KEY=xxx

Docker can’t reach Ollama

Inside a Docker container, localhost points to the container, not the host. Use --add-host=host.docker.internal:host-gateway and set the Ollama Base URL to http://host.docker.internal:11434/v1.

On Linux, you can also use --network=host:

docker run --network=host -v copaw-data:/app/working agentscope/copaw:latest

Channel not receiving messages

  • Verify the channel is enabled in Control → Channels
  • Check that tokens and credentials are correct
  • For Discord, make sure Message Content Intent is enabled in the developer portal
  • Restart CoPaw after changing channel config

Frequently Asked Questions

Do I need to know Python to use CoPaw?

No. The one-line installer and Docker options don’t require any Python knowledge. The Console UI handles most configuration. You only need Python skills if you want to create custom skills or install from source.

Which cloud provider should I start with?

DashScope is the default and cheapest option. If you already have an OpenAI API key, add it as a custom provider. For running without any API costs, use llama.cpp or Ollama with a local model.

Can I run CoPaw on a Raspberry Pi?

Technically yes if you have a Pi 4/5 with 4GB+ RAM, but performance will be limited. A cheap VPS at €3.99/month gives you a better experience.

Does CoPaw work on Windows?

Yes. Use the PowerShell installer (irm https://copaw.agentscope.io/install.ps1 | iex) or Docker. The pip install also works if you have Python 3.10+.

Can multiple people use one CoPaw instance?

CoPaw replies in the channel where you last talked. Multiple users can interact through group channels in DingTalk, Feishu, or Discord. For separate conversations, each person should use a different channel or session.

How does CoPaw compare to OpenClaw?

OpenClaw has broader Western messaging app support (WhatsApp, Slack). CoPaw has better support for Chinese apps (DingTalk, Feishu, QQ), a built-in web console, and more local model options. Both are open source and self-hosted. See our OpenClaw setup guide for the full comparison.

Is my data private?

All data stays on your machine. The only external calls go to your configured LLM provider. If you run local models, nothing leaves your server at all.

CoPaw is worth trying if you want a self-hosted assistant with a proper web UI instead of editing JSON files blind. The three local model backends mean you can run it without any API costs, and if DingTalk or Feishu is where your team lives, it’s the most polished option I’ve found for those platforms.

For other self-hosted assistant options, check out our OpenClaw alternatives roundup. If you want something that compiles to a single binary and runs on a $10 board, the PicoClaw setup guide covers that. For a self-improving assistant with voice mode and session search, see the Hermes Agent setup guide. And for running local models behind any of these assistants, our Ollama Docker guide has the setup details.