CoPaw Setup Guide: Multi-Channel AI Assistant You Can Self-Host
Step-by-step guide to installing CoPaw on a VPS or locally. Covers pip install, Docker, model providers, channels like DingTalk, Discord and Telegram, skills, memory, and scheduled tasks.
CoPaw is an open-source personal AI assistant from the AgentScope team at Alibaba. It runs on your own machine or server and connects to DingTalk, Feishu, QQ, Discord, Telegram, and iMessage. I’ve been setting it up alongside my OpenClaw and nanobot instances. What got my attention was the built-in web console for configuration and the fact that it ships with three local model backends out of the box.
What this guide covers
- Installing CoPaw via pip, one-line script, or Docker
- Configuring cloud LLM providers (DashScope, OpenAI, Azure OpenAI)
- Running local models with llama.cpp, MLX, and Ollama
- Setting up channels — DingTalk, Feishu, QQ, Discord, Telegram, iMessage
- Built-in skills, custom skills, and importing from skill hubs
- Memory system with hybrid semantic + full-text search
- Scheduled tasks and heartbeat check-ins
If you’re comparing self-hosted AI assistant options, our OpenClaw alternatives roundup covers several projects including nanobot, NanoClaw, PicoClaw, and more. For a lighter Go-based option, see the PicoClaw setup guide.
What CoPaw actually does
CoPaw stands for “Co Personal Agent Workstation.” The AgentScope team built it on top of AgentScope and ReMe for memory management. The pitch: one assistant that connects to your messaging apps and runs tasks on a schedule without you having to ask.
The architecture:
You (DingTalk / Feishu / QQ / Discord / Telegram / iMessage)
↓
CoPaw Server (running on your machine or VPS)
↓
LLM Provider (DashScope, OpenAI, Azure, Ollama, llama.cpp, MLX)
↓
Skills (cron, PDF, Word/Excel/PPT, news, file reader, browser, custom)
Messages come in from whatever chat app you use, CoPaw routes them to whatever LLM you picked, and the model can use built-in skills to handle files, run scheduled jobs, browse the web, or deal with documents. There’s a web console at http://127.0.0.1:8088/ where you configure everything instead of editing config files by hand.
How it compares to OpenClaw and nanobot
| Feature | CoPaw | OpenClaw | nanobot |
|---|---|---|---|
| Language | Python | TypeScript | Python |
| Built by | AgentScope (Alibaba) | Community | HKUDS |
| Install | pip / one-liner / Docker | curl one-liner | pip |
| Web console | Yes (built-in at :8088) | No (third-party dashboards) | No |
| Channels | DingTalk, Feishu, QQ, Discord, Telegram, iMessage | Telegram, WhatsApp, Slack, Discord | Telegram, Discord, WhatsApp, Slack, Feishu, DingTalk, Email, QQ |
| Local models | llama.cpp, MLX, Ollama | Ollama | vLLM |
| Memory | File-based + vector/BM25 hybrid search | File-based + semantic search | File-based |
| Skills | Built-in + importable from hubs | Community skills | Built-in |
| Scheduled tasks | Cron + heartbeat | Cron | Cron |
| License | Apache 2.0 | MIT | MIT |
Where CoPaw pulls ahead: the built-in web console means you don’t need third-party dashboards, and the DingTalk/Feishu/QQ support is first-class rather than bolted on. If those are your daily chat apps, CoPaw saves you a lot of config headaches.
Installation
CoPaw gives you five ways to get running. I’ll cover the three most practical ones.
pip install

The cleanest approach if you already have Python:
pip install copaw
copaw init --defaults
copaw app

Open http://127.0.0.1:8088/ and you'll see the Console. That's it for a basic setup.
For interactive configuration where you pick your LLM provider and channels upfront:
copaw init

This walks you through heartbeat interval, target channel, active hours, and optional skill setup.
One-line install script

No Python needed. The installer handles everything using uv.
macOS / Linux:
curl -fsSL https://copaw.agentscope.io/install.sh | bash

Windows (PowerShell):
irm https://copaw.agentscope.io/install.ps1 | iex

Open a new terminal after install, then:
copaw init --defaults
copaw appTo install with local model support:
# llama.cpp (cross-platform)
curl -fsSL https://copaw.agentscope.io/install.sh | bash -s -- --extras llamacpp
# MLX (Apple Silicon only)
curl -fsSL https://copaw.agentscope.io/install.sh | bash -s -- --extras mlx
# Ollama
curl -fsSL https://copaw.agentscope.io/install.sh | bash -s -- --extras ollama

Docker

Images are on Docker Hub (agentscope/copaw). Tags: latest (stable), pre (pre-release).
docker pull agentscope/copaw:latest
docker run -p 127.0.0.1:8088:8088 -v copaw-data:/app/working agentscope/copaw:latest

Config, memory, and skills are stored in the copaw-data volume. To pass API keys:
docker run -p 127.0.0.1:8088:8088 \
-e DASHSCOPE_API_KEY=your_key_here \
-v copaw-data:/app/working \
agentscope/copaw:latest

If you need CoPaw in Docker to reach Ollama on the host:
docker run -p 127.0.0.1:8088:8088 \
--add-host=host.docker.internal:host-gateway \
-v copaw-data:/app/working agentscope/copaw:latest

Then in CoPaw settings, change the Ollama Base URL to http://host.docker.internal:11434/v1.
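If you prefer Compose over raw docker run commands, the same setup can be expressed as a compose file. This is a sketch based on the flags above, not an official file from the project; adjust the environment variables and drop extra_hosts if you aren't using host-side Ollama:

```yaml
services:
  copaw:
    image: agentscope/copaw:latest
    ports:
      - "127.0.0.1:8088:8088"          # Console only reachable from localhost
    environment:
      - DASHSCOPE_API_KEY=${DASHSCOPE_API_KEY}
    volumes:
      - copaw-data:/app/working        # config, memory, and skills live here
    extra_hosts:
      - "host.docker.internal:host-gateway"  # lets the container reach host Ollama
    restart: unless-stopped

volumes:
  copaw-data:
```

Run it with docker compose up -d; the named volume survives image upgrades.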
Installing on a Hetzner VPS
If you want CoPaw running 24/7 on a server, a cheap VPS does the job. I use a Hetzner CX22 (2 vCPU, 4GB RAM) for €3.99/month.
Get Started with Hetzner
Get €20 in Hetzner credit when you sign up through our referral link. That covers about 5 months of running CoPaw.
SSH into your server and run:
ssh root@YOUR_SERVER_IP
apt update && apt upgrade -y
curl -fsSL https://copaw.agentscope.io/install.sh | bash
Open a new shell session, then:
copaw init --defaults
copaw app
To keep it running after you close the SSH session, use a process manager like systemd or run it in a tmux/screen session.
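For the systemd route, a minimal unit file looks something like the sketch below. The User and ExecStart paths are assumptions; run which copaw after installing to find the actual binary location and adjust accordingly:

```ini
# /etc/systemd/system/copaw.service
[Unit]
Description=CoPaw AI assistant
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=root
# Adjust to the path reported by `which copaw`
ExecStart=/root/.local/bin/copaw app
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl daemon-reload && systemctl enable --now copaw, then check systemctl status copaw to confirm it survived the SSH logout.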
Uninstalling
copaw uninstall # keeps config and data
copaw uninstall --purge # removes everything
Model Configuration
Before CoPaw can do anything useful, you need to configure an LLM. Open the Console at http://127.0.0.1:8088/ and go to Settings → Models.
Cloud providers
CoPaw works with DashScope, ModelScope, OpenAI, Azure OpenAI, and Aliyun Coding Plan. For any of them:
- Go to Settings → Models in the Console
- Find the provider card and click Settings
- Enter your API key and click Save
- The card status changes to Available
- In the LLM Configuration section at the top, select the provider and model, then click Save
You can also set API keys as environment variables. For DashScope:
export DASHSCOPE_API_KEY=your_key_here
Or put it in a .env file in the working directory (default ~/.copaw/).
Using OpenAI or Azure OpenAI
These work through the custom provider system:
- Click Add provider on the Models page
- Enter a Provider ID (e.g. openai) and a display name
- Click Settings, enter the Base URL (https://api.openai.com/v1 for OpenAI) and API key
- Click Models, add the model ID (e.g. gpt-4o)
- Select it in the LLM Configuration dropdown
Local models
CoPaw supports three local model backends. No API keys needed.
| Backend | Best for | Install command |
|---|---|---|
| llama.cpp | Cross-platform (macOS, Linux, Windows) | pip install 'copaw[llamacpp]' |
| MLX | Apple Silicon Macs (M1–M4) | pip install 'copaw[mlx]' |
| Ollama | Anyone already using Ollama | pip install 'copaw[ollama]' |
To download and use a local model from the command line:
copaw models download Qwen/Qwen3-4B-GGUF
copaw models # select the downloaded model
copaw app # start the server
You can also download and manage models from the Console UI under Settings → Models. Click Models on the llama.cpp or MLX card, then Download model and enter the Hugging Face repo ID.
For Ollama, make sure the Ollama daemon is running first, then pull models through Ollama as usual:
ollama pull qwen3:4b
CoPaw syncs with whatever models Ollama has available.
If you’re new to running local models, our Ollama Docker install guide covers the basics.
Cost comparison
| Provider | Example Model | Typical Monthly Cost |
|---|---|---|
| DashScope | Qwen 3.5 Plus | $5–20 |
| OpenAI | GPT-4o | $20–70 |
| Azure OpenAI | GPT-4o | $20–70 |
| Local (llama.cpp) | Qwen3 4B | $0 (compute only) |
| Local (Ollama) | Qwen3 4B | $0 (compute only) |
DashScope models run cheaper than OpenAI for comparable quality. If you want to avoid API bills altogether, llama.cpp or Ollama with a local model costs nothing beyond electricity.
Channel Setup
Channels are how you talk to CoPaw from your messaging apps. You can configure them through the Console (Control → Channels) or by editing config.json directly.
Discord
- Create a Discord application at discord.com/developers
- Go to the Bot tab, click Reset Token, and copy the bot token
- Under Privileged Gateway Intents, enable Message Content Intent
- Generate an OAuth2 invite URL with the bot scope and the Send Messages + Read Message History permissions
- Invite the bot to your server
- In the CoPaw Console, go to Control → Channels, click Discord, enable it, and paste the token
In config.json, it looks like this:
{
"channels": {
"discord": {
"enabled": true,
"token": "YOUR_DISCORD_BOT_TOKEN"
}
}
}
Telegram
- Open Telegram, search for @BotFather, and send /newbot
- Pick a name and username, then copy the token
- In the Console, enable Telegram and paste the token
{
"channels": {
"telegram": {
"enabled": true,
"token": "YOUR_TELEGRAM_BOT_TOKEN"
}
}
}
DingTalk
DingTalk setup involves creating a custom app in the DingTalk developer console. CoPaw has a built-in skill called dingtalk_channel_connect that walks you through credential lookup, Client ID/Secret configuration, and the manual steps. Enable the DingTalk channel in the Console and follow the guided prompts.
Feishu (Lark)
Same idea as DingTalk. Create an app in the Feishu Open Platform, grab the App ID and App Secret, and enter them in the Console. CoPaw also supports SOCKS proxy for Feishu if you’re behind a corporate firewall.
iMessage (macOS only)
If you’re running CoPaw on a Mac, iMessage works as a channel. This is one of the few self-hosted assistants that supports iMessage natively.
Multiple channels at once
CoPaw can connect to several channels simultaneously. Messages go to whichever channel you last talked in, or you can target specific channels for scheduled messages.
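Combining the snippets from the Discord and Telegram sections, a config.json with two channels active at once looks like this (tokens are placeholders):

```json
{
  "channels": {
    "discord": {
      "enabled": true,
      "token": "YOUR_DISCORD_BOT_TOKEN"
    },
    "telegram": {
      "enabled": true,
      "token": "YOUR_TELEGRAM_BOT_TOKEN"
    }
  }
}
```

Restart CoPaw after editing the file so both channels reconnect.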
Skills
Skills are how CoPaw does more than just chat. Several come built-in, and you can write your own or import from community hubs.
Built-in skills
| Skill | What it does |
|---|---|
| cron | Scheduled jobs — create, list, pause, resume, delete |
| file_reader | Read and summarize text files (.txt, .md, .json, .csv, .py, etc.) |
| pdf | Read, extract, merge, split, rotate, watermark, and OCR PDFs |
| docx | Create, read, and edit Word documents |
| xlsx | Read, edit, and create spreadsheets |
| pptx | Create, read, and edit PowerPoint files |
| news | Fetch and summarize latest news from configured sources |
| browser_visible | Launch a headed browser for demos or CAPTCHA scenarios |
Managing skills in the Console
Go to Agent → Skills in the Console to see all loaded skills, toggle them on or off, create custom skills, or edit existing ones.
Importing skills from hubs
CoPaw can import skills from these sources:
- https://skills.sh/
- https://clawhub.ai/
- https://skillsmp.com/
- GitHub repositories (any repo with a SKILL.md file)
In the Console, go to Agent → Skills, click Import Skills, paste the URL, and confirm.
Creating custom skills
Drop a folder with a SKILL.md file into ~/.copaw/customized_skills/:
~/.copaw/
customized_skills/
my_research_skill/
SKILL.md
The SKILL.md is plain Markdown that describes what the skill does:
---
name: my_research_skill
description: Research a topic and summarize findings
---
# Research Skill
When asked to research something:
1. Search the web for current information
2. Summarize key findings in bullet points
3. List sources
CoPaw picks up new skills on restart. Custom skills take priority over built-in ones when names collide.
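The layout above can be scaffolded from the shell in one step. This sketch uses a COPAW_HOME override for testing; with the default working directory it writes under ~/.copaw/:

```shell
# Create the skill folder and its SKILL.md in one go
SKILL_DIR="${COPAW_HOME:-$HOME/.copaw}/customized_skills/my_research_skill"
mkdir -p "$SKILL_DIR"
cat > "$SKILL_DIR/SKILL.md" <<'EOF'
---
name: my_research_skill
description: Research a topic and summarize findings
---

# Research Skill

When asked to research something:

1. Search the web for current information
2. Summarize key findings in bullet points
3. List sources
EOF
```

Restart CoPaw afterwards so the new skill is loaded.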
Memory System
CoPaw’s memory system uses ReMe and stores everything in plain Markdown files. There are two parts to it: context management that compresses long conversations before you hit token limits, and long-term memory that writes facts to files and indexes them so CoPaw can find them later.
Memory file structure
~/.copaw/
MEMORY.md # Long-term facts, preferences, decisions
memory/
2026-03-05.md # Daily log for today
2026-03-04.md # Yesterday's log
...
MEMORY.md holds persistent information — things like “I prefer Python 3.12” or “My team uses Slack for standups.” Daily logs capture what happened in each conversation, and the system auto-summarizes conversations when they get too long.
Hybrid search
CoPaw uses vector semantic search and BM25 full-text search together. The fusion weights default to 70% vector, 30% BM25.
| Search type | Good at | Weak at |
|---|---|---|
| Vector semantic | Finding related concepts with different wording | Exact token matching (function names, error codes) |
| BM25 full-text | Exact matches on specific terms | Synonyms and paraphrasing |
| Hybrid (both) | Best overall recall | Requires embedding API config |
To enable vector search, configure the embedding service:
export EMBEDDING_API_KEY=your_key
export EMBEDDING_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
export EMBEDDING_MODEL_NAME=text-embedding-v4
Without an embedding API key, CoPaw falls back to BM25 full-text search only.
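To make the 70/30 fusion concrete, here is a toy Python sketch (illustrative only, not CoPaw's actual implementation) of how normalized vector and BM25 scores can be blended into a single ranking:

```python
def fuse_scores(vector_scores, bm25_scores, w_vector=0.7, w_bm25=0.3):
    """Blend two {doc_id: score} dicts into one ranked list.

    Each retriever's scores are min-max normalized first, so the
    weights compare like with like; a doc missing from one
    retriever contributes 0 on that side.
    """
    def normalize(scores):
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        if hi == lo:
            return {doc: 1.0 for doc in scores}
        return {doc: (s - lo) / (hi - lo) for doc, s in scores.items()}

    v, b = normalize(vector_scores), normalize(bm25_scores)
    docs = set(v) | set(b)
    fused = {d: w_vector * v.get(d, 0.0) + w_bm25 * b.get(d, 0.0) for d in docs}
    # Highest fused score first
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```

With these weights, a chunk that only matches semantically can still outrank one that only matches on exact keywords, which is the behavior the 70% vector default encodes.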
Making things stick
Tell CoPaw directly:
“Remember: I always deploy to staging before production.”
It writes this to MEMORY.md. You can also edit memory files directly:
nano ~/.copaw/MEMORY.md
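Since MEMORY.md is plain Markdown, entries can be as simple as this (a hypothetical layout using the examples mentioned above; CoPaw does not require any particular heading structure):

```markdown
# Preferences
- I prefer Python 3.12
- Always deploy to staging before production

# Team
- My team uses Slack for standups
```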
Scheduled Tasks and Heartbeat
CoPaw can run things on a schedule in two ways: cron jobs for specific commands and heartbeat for periodic check-ins.
Cron jobs
Create scheduled tasks through the CLI or Console:
# Create a job that runs at 9am every day
copaw cron create --type agent --name "morning-digest" --cron "0 9 * * *" --message "Summarize my pending tasks and calendar for today"
# List all jobs
copaw cron list
# Check a job's state
copaw cron state <job_id>
You can also manage cron jobs from the Console under Control → Cron Jobs.
Heartbeat
Heartbeat is CoPaw’s version of a scheduled check-in. You write a block of questions in a Markdown file, and CoPaw runs through them on a timer and sends the answers to your last-used channel. Set it up during copaw init or edit HEARTBEAT.md in the working directory.
Example HEARTBEAT.md:
Check the following and report back:
- Any new emails from clients?
- What meetings do I have today?
- Summarize overnight GitHub notifications
Set the interval and target in config.json:
{
"heartbeat": {
"enabled": true,
"interval": "2h",
"active_hours": "08:00-22:00"
}
}
CLI Reference
| Command | Description |
|---|---|
| copaw init | Interactive setup wizard |
| copaw init --defaults | Quick setup with defaults |
| copaw app | Start the server |
| copaw models | Manage local models |
| copaw models download <repo> | Download a model from Hugging Face |
| copaw cron list | List scheduled jobs |
| copaw cron create | Create a new scheduled job |
| copaw uninstall | Remove CoPaw (keeps data) |
| copaw uninstall --purge | Remove CoPaw and all data |
Troubleshooting
CoPaw not responding
Check that the server is running and the model is configured:
- Open http://127.0.0.1:8088/; if this doesn't load, the server isn't running
- Go to Settings → Models and verify a provider is Available and a model is selected
- Check the terminal output for error messages
API key issues
If you see authentication errors:
- Double-check the API key in Settings → Models
- For DashScope, verify the key at dashscope.console.aliyun.com
- Try setting the key as an environment variable instead:
export DASHSCOPE_API_KEY=xxx
Docker can’t reach Ollama
Inside a Docker container, localhost points to the container, not the host. Use --add-host=host.docker.internal:host-gateway and set the Ollama Base URL to http://host.docker.internal:11434/v1.
On Linux, you can also use --network=host:
docker run --network=host -v copaw-data:/app/working agentscope/copaw:latest
Channel not receiving messages
- Verify the channel is enabled in Control → Channels
- Check that tokens and credentials are correct
- For Discord, make sure Message Content Intent is enabled in the developer portal
- Restart CoPaw after changing channel config
Frequently Asked Questions
Do I need to know Python to use CoPaw?
No. The one-line installer and Docker options don’t require any Python knowledge. The Console UI handles most configuration. You only need Python skills if you want to create custom skills or install from source.
Which cloud provider should I start with?
DashScope is the default and cheapest option. If you already have an OpenAI API key, add it as a custom provider. For running without any API costs, use llama.cpp or Ollama with a local model.
Can I run CoPaw on a Raspberry Pi?
Technically yes if you have a Pi 4/5 with 4GB+ RAM, but performance will be limited. A cheap VPS at €3.99/month gives you a better experience.
Does CoPaw work on Windows?
Yes. Use the PowerShell installer (irm https://copaw.agentscope.io/install.ps1 | iex) or Docker. The pip install also works if you have Python 3.10+.
Can multiple people use one CoPaw instance?
CoPaw replies in the channel where you last talked. Multiple users can interact through group channels in DingTalk, Feishu, or Discord. For separate conversations, each person should use a different channel or session.
How does CoPaw compare to OpenClaw?
OpenClaw has broader Western messaging app support (WhatsApp, Slack). CoPaw has better support for Chinese apps (DingTalk, Feishu, QQ), a built-in web console, and more local model options. Both are open source and self-hosted. See our OpenClaw setup guide for the full comparison.
Is my data private?
All data stays on your machine. The only external calls go to your configured LLM provider. If you run local models, nothing leaves your server at all.
CoPaw is worth trying if you want a self-hosted assistant with a proper web UI instead of editing JSON files blind. The three local model backends mean you can run it without any API costs, and if DingTalk or Feishu is where your team lives, it’s the most polished option I’ve found for those platforms.
For other self-hosted assistant options, check out our OpenClaw alternatives roundup. If you want something that compiles to a single binary and runs on a $10 board, the PicoClaw setup guide covers that. For a self-improving assistant with voice mode and session search, see the Hermes Agent setup guide. And for running local models behind any of these assistants, our Ollama Docker guide has the setup details.