Repo: https://github.com/sipeed/picoclaw
PicoClaw is an ultra-lightweight AI agent built in Go. It runs in under 10MB of RAM and starts in about a second. It's inspired by NanoBot (which is in turn inspired by OpenClaw), but far more portable: a single binary that works across RISC-V, ARM, and x86.
If you're thinking of running this on your main PC, I'd strongly suggest using a VPS instead. PicoClaw can execute tools and commands on whatever system it's running on, which is risky if you have personal data on it.
The VPS provider I use is Hostinger because it's the simplest to set up and one of the cheapest. The link below gets you an extra 20% off.
👉 https://www.hostinger.com/self-hosted-n8n?REFERRALCODE=HOWTO20
Coupon code: HOWTO20
sudo apt update && sudo apt upgrade -y
curl -fsSL https://ollama.com/install.sh | sh
This is a lighter model that works well on CPU-only setups like a VPS without a GPU.
ollama pull qwen2.5:3b
ollama list
You should see your model listed. If Ollama isn't running, start it with ollama serve.
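If you'd rather check programmatically than eyeball the list, Ollama also exposes a local HTTP API whose /api/tags endpoint lists pulled models. Here's a minimal sketch that checks a tags response for a model name; the sample payload below is illustrative, shaped like what that endpoint returns:

```python
import json

def model_available(tags_json: str, name: str) -> bool:
    """Return True if a model with the given name appears in an
    Ollama /api/tags response body."""
    tags = json.loads(tags_json)
    return any(m.get("name") == name for m in tags.get("models", []))

# Illustrative payload shaped like GET http://localhost:11434/api/tags
sample = '{"models": [{"name": "qwen2.5:3b", "size": 1929912432}]}'
print(model_available(sample, "qwen2.5:3b"))  # True
```

On the VPS itself you would fetch the real payload from http://localhost:11434/api/tags (e.g. with urllib.request) instead of using the sample string.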
PicoClaw is built in Go, so you need it installed to compile from source.
sudo apt install -y golang-go
go version
Building from source gets you the most recent version, which matters because the project is changing rapidly.
git clone https://github.com/sipeed/picoclaw.git
cd picoclaw
make install
This compiles and installs PicoClaw as a single binary. Wait for it to finish.
The binary installs to /root/.local/bin/, which might not be in your PATH by default. Add it for the current session (and append the same line to ~/.bashrc if you want it to persist across logins):
export PATH=$PATH:/root/.local/bin
picoclaw onboard
This creates all the default files and folders PicoClaw needs, including the config file at ~/.picoclaw/config.json.
This writes the config that points PicoClaw to your local Ollama model.
python3 -c "
import json, os

config = {
    'model_list': [
        {
            'model_name': 'qwen2.5:3b',
            'model': 'ollama/qwen2.5:3b',
            'api_base': 'http://localhost:11434/v1',
            'api_key': 'ollama'
        }
    ],
    'agents': {
        'defaults': {
            'model': 'qwen2.5:3b'
        }
    }
}

path = os.path.expanduser('~/.picoclaw/config.json')
with open(path, 'w') as f:
    json.dump(config, f, indent=2)
print('Config written to ' + path)
"
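Before moving on, it's worth confirming the config is well-formed: load it back and check that the default agent's model actually matches an entry in model_list. The sketch below runs against a temporary copy so it's self-contained; on your server, point path at ~/.picoclaw/config.json instead:

```python
import json, os, tempfile

# Write a copy of the config to a temporary file (stand-in for
# ~/.picoclaw/config.json so this check is self-contained).
config = {
    'model_list': [{'model_name': 'qwen2.5:3b', 'model': 'ollama/qwen2.5:3b',
                    'api_base': 'http://localhost:11434/v1', 'api_key': 'ollama'}],
    'agents': {'defaults': {'model': 'qwen2.5:3b'}},
}
path = os.path.join(tempfile.mkdtemp(), 'config.json')
with open(path, 'w') as f:
    json.dump(config, f, indent=2)

# Load it back and verify the default model is actually defined.
with open(path) as f:
    loaded = json.load(f)
default = loaded['agents']['defaults']['model']
names = {m['model_name'] for m in loaded['model_list']}
assert default in names, f'default model {default!r} not in model_list'
print('Config OK:', default)
```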
Open Telegram, search for @BotFather, send /newbot, give your bot a name, give it a username, and copy the token it gives you.
Replace YOUR_TELEGRAM_BOT_TOKEN with the token you just got from BotFather.
python3 -c "
import json, os

path = os.path.expanduser('~/.picoclaw/config.json')
with open(path) as f:
    config = json.load(f)

config['tools'] = {
    'web': {
        'enabled': True,
        'duckduckgo': {
            'enabled': True,
            'max_results': 5
        }
    }
}
config['channels'] = {
    'telegram': {
        'enabled': True,
        'token': 'YOUR_TELEGRAM_BOT_TOKEN'
    }
}

with open(path, 'w') as f:
    json.dump(config, f, indent=2)
print('Telegram + web search added.')
"
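A common failure mode here is leaving the placeholder in place or pasting a truncated token. BotFather tokens follow a numeric-id:secret shape; the regex below is an assumption based on that shape (not something PicoClaw enforces), so treat it as a quick sanity check:

```python
import re

# Rough shape of a BotFather token: "<numeric bot id>:<secret>".
# The exact secret length/alphabet is an assumption; this catches
# obvious mistakes, it does not validate the token.
TOKEN_RE = re.compile(r'^\d+:[A-Za-z0-9_-]{30,}$')

def looks_like_bot_token(token: str) -> bool:
    return bool(TOKEN_RE.match(token))

print(looks_like_bot_token('123456789:AAF' + 'x' * 32))  # plausible shape
print(looks_like_bot_token('YOUR_TELEGRAM_BOT_TOKEN'))   # placeholder left in
```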
picoclaw gateway
You should see it load all the tools, connect the Telegram bot, and start all channels. It's very fast.
Go to Telegram, search for your new bot by the username you gave it, click Start, and send it a message. It's all local, all private, running through your Ollama model.
You can leave the gateway running in the terminal or in a tmux/screen session so it stays up.
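If you'd rather not babysit a tmux session, a systemd unit keeps the gateway running and restarts it on failure. This is a sketch under assumptions: the binary lives at /root/.local/bin/picoclaw and runs as root (adjust paths and add a User= line to taste):

```ini
[Unit]
Description=PicoClaw gateway
After=network-online.target

[Service]
ExecStart=/root/.local/bin/picoclaw gateway
Restart=on-failure
Environment=HOME=/root

[Install]
WantedBy=multi-user.target
```

Save it as /etc/systemd/system/picoclaw.service, then run systemctl daemon-reload followed by systemctl enable --now picoclaw.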
If you don't have a VPS yet, I use Hostinger for all my setups. It takes under a minute to get a clean Ubuntu server running. Use the link below for an extra 20% off.
👉 https://www.hostinger.com/self-hosted-n8n?REFERRALCODE=HOWTO20
Coupon code: HOWTO20