@Pyr0zen
Created March 8, 2026 15:14
PicoClaw + Ollama Setup Guide (Local Models + Telegram)

Repo: https://github.com/sipeed/picoclaw


Before You Start

PicoClaw is an ultra-lightweight AI agent built in Go. It runs in under 10 MB of RAM and starts in about a second. It's inspired by NanoBot (which is inspired by OpenClaw), but far more portable: a single binary that works across RISC-V, ARM, and x86.

If you're thinking of running this on your main PC, I'd strongly suggest using a VPS instead. PicoClaw can execute tools and commands on whatever system it's running on, which is risky if you have personal data on it.

I use Hostinger as my VPS provider because it's the simplest to set up and one of the cheapest. If you use the link below, you'll get an extra 20% off.

👉 https://www.hostinger.com/self-hosted-n8n?REFERRALCODE=HOWTO20

Coupon code: HOWTO20


Step 1: Update Your System

sudo apt update && sudo apt upgrade -y

Step 2: Install Ollama

curl -fsSL https://ollama.ai/install.sh | sh

Step 3: Pull a Model

This is a lighter model that works well on CPU-only setups like a VPS without a GPU.

ollama pull qwen2.5:3b
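Once the pull finishes, you can sanity-check the model over Ollama's OpenAI-compatible API, which is the same endpoint PicoClaw will use later. This is a rough sketch using only the standard library; it assumes Ollama's default port 11434.

```python
import json
import urllib.request

def chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a request against Ollama's OpenAI-compatible chat endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        # Same api_base that the PicoClaw config points at later.
        "http://localhost:11434/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

def ask(model: str, prompt: str) -> str:
    """Send a single prompt and return the model's reply text."""
    with urllib.request.urlopen(chat_request(model, prompt)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# With the daemon running, something like:
#   ask("qwen2.5:3b", "Say hello in one sentence.")
```

If this returns a reply, both the Ollama server and the model are working before you touch PicoClaw at all.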

Step 4: Verify Ollama Is Running

ollama list

You should see your model listed. If Ollama isn't running, start it with ollama serve.
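If you'd rather check programmatically, Ollama exposes installed models at the /api/tags endpoint. A minimal sketch, assuming the default address; `model_names` and `list_local_models` are just helper names for illustration:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default API address

def model_names(tags: dict) -> list:
    """Pull the model names out of a parsed /api/tags response."""
    return [m["name"] for m in tags.get("models", [])]

def list_local_models(base_url: str = OLLAMA_URL) -> list:
    """Ask the Ollama daemon which models are installed locally."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return model_names(json.load(resp))

# With the daemon running, list_local_models() should include 'qwen2.5:3b'.
```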


Step 5: Install Go

PicoClaw is built in Go, so you need it installed to compile from source.

sudo apt install -y golang-go

Step 6: Verify Go Is Installed

go version

Step 7: Clone the PicoClaw Repo

Building from source gets you the latest code, which matters because the project is changing rapidly.

git clone https://github.com/sipeed/picoclaw.git
cd picoclaw

Step 8: Build and Install PicoClaw

make install

This compiles and installs PicoClaw as a single binary. Wait for it to finish.


Step 9: Add PicoClaw to Your PATH

The binary installs to /root/.local/bin/, which might not be in your PATH by default.

export PATH=$PATH:/root/.local/bin

This only lasts for the current shell session; append the same line to ~/.bashrc to make it permanent.

Step 10: Run the Onboarding

picoclaw onboard

This creates all the default files and folders PicoClaw needs, including the config file at ~/.picoclaw/config.json.


Step 11: Connect PicoClaw to Ollama

This writes the config that points PicoClaw to your local Ollama model.

python3 -c "
import json, os
config = {
    'model_list': [
        {
            'model_name': 'qwen2.5:3b',
            'model': 'ollama/qwen2.5:3b',
            'api_base': 'http://localhost:11434/v1',
            'api_key': 'ollama'
        }
    ],
    'agents': {
        'defaults': {
            'model': 'qwen2.5:3b'
        }
    }
}
path = os.path.expanduser('~/.picoclaw/config.json')
with open(path, 'w') as f:
    json.dump(config, f, indent=2)
print('Config written to ' + path)
"
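To confirm the config landed correctly, you can read it back and check that the default agent model is actually defined in model_list. This is a quick sanity-check sketch, assuming the same config layout written above; the helper names here (`default_model`, `has_model`, `check_config`) are illustrative, not part of PicoClaw:

```python
import json
import os

def default_model(config: dict):
    """Return the default agent model name, or None if not set."""
    return config.get("agents", {}).get("defaults", {}).get("model")

def has_model(config: dict, name: str) -> bool:
    """Check that a model name appears in model_list."""
    return any(m.get("model_name") == name
               for m in config.get("model_list", []))

def check_config(path: str = "~/.picoclaw/config.json") -> bool:
    """Read the config back and confirm the default model is defined."""
    with open(os.path.expanduser(path)) as f:
        config = json.load(f)
    model = default_model(config)
    return model is not None and has_model(config, model)
```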

Step 12: Create a Telegram Bot

Open Telegram, search for @BotFather, send /newbot, give your bot a name, give it a username, and copy the token it gives you.


Step 13: Add Telegram + Web Search to the Config

Replace YOUR_TELEGRAM_BOT_TOKEN with the token you just got from BotFather.

python3 -c "
import json, os
path = os.path.expanduser('~/.picoclaw/config.json')
with open(path) as f:
    config = json.load(f)
config['tools'] = {
    'web': {
        'enabled': True,
        'duckduckgo': {
            'enabled': True,
            'max_results': 5
        }
    }
}
config['channels'] = {
    'telegram': {
        'enabled': True,
        'token': 'YOUR_TELEGRAM_BOT_TOKEN'
    }
}
with open(path, 'w') as f:
    json.dump(config, f, indent=2)
print('Telegram + web search added.')
"
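If you'd rather not paste the token into a command (it ends up in your shell history), a variation is to read it from an environment variable instead. A sketch assuming the same config layout as above; the `TELEGRAM_BOT_TOKEN` variable name and the helper functions are my own choices, not PicoClaw conventions:

```python
import json
import os

def set_telegram_token(config: dict, token: str) -> dict:
    """Insert the bot token into the channels section, creating it if needed."""
    telegram = config.setdefault("channels", {}).setdefault("telegram", {})
    telegram["enabled"] = True
    telegram["token"] = token
    return config

def patch_config(path: str = "~/.picoclaw/config.json") -> None:
    """Read the token from the environment and write it into the config file."""
    token = os.environ["TELEGRAM_BOT_TOKEN"]  # export this before running
    full = os.path.expanduser(path)
    with open(full) as f:
        config = json.load(f)
    with open(full, "w") as f:
        json.dump(set_telegram_token(config, token), f, indent=2)
```

Run it with something like `TELEGRAM_BOT_TOKEN=<your token> python3 patch.py` so the token never appears in the script itself.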

Step 14: Start the Gateway

picoclaw gateway

You should see it load all the tools, connect the Telegram bot, and start all channels. It's very fast.


Step 15: Chat with Your Bot

Go to Telegram, search for your new bot by the username you gave it, click Start, and send it a message. The model itself runs locally through Ollama; only the chat transport passes through Telegram.

You can leave the gateway running in the terminal or in a tmux/screen session so it stays up.


Need a VPS?

If you don't have one yet, I use Hostinger for all my setups. Takes under a minute to get a clean Ubuntu server running. Use the link below for an extra 20% off.

👉 https://www.hostinger.com/self-hosted-n8n?REFERRALCODE=HOWTO20

Coupon code: HOWTO20
