curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc
nvm install node
curl -fsSL https://ollama.ai/install.sh | sh
systemctl start ollama
ollama pull qwen2.5:3b
This is a lighter model for CPU-only setups (e.g., a VPS without a GPU). If you have a stronger GPU, replace qwen2.5:3b with a larger model like qwen2.5-coder:32b or deepseek-r1:32b.
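Ollama serves pulled models through an OpenAI-compatible API on port 11434, which is what OpenClaw will be pointed at later in this guide. Before continuing, it can help to see the request shape that API expects; this is a minimal sketch, and the endpoint, model tag, and prompt are simply the values used elsewhere in this guide:

```python
import json

# OpenAI-style chat-completion request body, as a client like OpenClaw
# would send it to the local Ollama daemon.
url = "http://127.0.0.1:11434/v1/chat/completions"
payload = {
    "model": "qwen2.5:3b",  # swap for whichever tag you pulled
    "messages": [{"role": "user", "content": "Reply with one word."}],
}
print(url)
print(json.dumps(payload, indent=2))
```

With Ollama running, sending this payload via curl or any HTTP client is a quick way to confirm the model responds before wiring up OpenClaw.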
npm install -g openclaw@latest
openclaw onboard --install-daemon
Follow the wizard steps:
- Select "Yes, I understand this is inherently risky" → Enter
- Select Quick Start → Enter
- Providers: select Skip for now → Enter
- Select All Providers → then Enter model manually → just press Enter (we'll replace the config later)
- Select Telegram → Enter
- Go to Telegram, find @BotFather, send /newbot, create your bot, and copy the access token
- Paste your bot token → Enter
- Select Yes, configure skills → Enter
- When prompted about installing Homebrew, select npm instead
- Select your skills using arrow keys + spacebar (recommended: Claw Hub for adding custom skills later)
- Press Enter to confirm skill selection
- Add any API keys required for your selected skills
- Install the gateway service when prompted
- Select Do this later for hatching
- You should see "Onboarding complete" — press Ctrl+C to exit back to terminal
Paste this command to replace the default model config with your local Ollama model:
python3 -c "
import json
with open('/root/.openclaw/openclaw.json') as f:
    cfg = json.load(f)
cfg['agents']['defaults']['model']['primary'] = 'ollama/qwen2.5:3b'
cfg['agents']['defaults']['models'] = {'ollama/qwen2.5:3b': {'alias': 'Qwen 2.5 3B'}}
cfg['models'] = {'mode': 'merge', 'providers': {'ollama': {'baseUrl': 'http://127.0.0.1:11434/v1', 'apiKey': 'ollama', 'api': 'openai-responses', 'models': [{'id': 'qwen2.5:3b', 'name': 'Qwen 2.5 3B', 'reasoning': False, 'input': ['text'], 'cost': {'input': 0, 'output': 0, 'cacheRead': 0, 'cacheWrite': 0}, 'contextWindow': 32000, 'maxTokens': 4096}]}}}
with open('/root/.openclaw/openclaw.json', 'w') as f:
    json.dump(cfg, f, indent=2)
print('Done')
"
You should see "Done" if it worked.
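If you want to sanity-check the file before restarting the gateway, a small validator can confirm the keys this guide sets are in place. This is a sketch; the function name is mine, not part of OpenClaw, and it only checks the fields written by the one-liner above:

```python
def check_local_model_config(cfg, model_id="qwen2.5:3b"):
    """Return a list of problems with the local-model wiring (empty = OK)."""
    problems = []
    primary = cfg.get("agents", {}).get("defaults", {}).get("model", {}).get("primary")
    if primary != f"ollama/{model_id}":
        problems.append(f"primary model is {primary!r}, expected 'ollama/{model_id}'")
    provider = cfg.get("models", {}).get("providers", {}).get("ollama")
    if provider is None:
        problems.append("no 'ollama' provider configured")
    else:
        if provider.get("baseUrl") != "http://127.0.0.1:11434/v1":
            problems.append("baseUrl does not point at the local Ollama daemon")
        ids = [m.get("id") for m in provider.get("models", [])]
        if model_id not in ids:
            problems.append(f"model {model_id!r} missing from provider model list")
    return problems

# Example: load the config written above and report any issues.
# import json
# with open('/root/.openclaw/openclaw.json') as f:
#     print(check_local_model_config(json.load(f)))  # [] means it looks good
```

An empty list means the config matches what the rest of this guide expects.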
Swap qwen2.5:3b with whatever model you pulled in Step 2.
systemctl --user restart openclaw-gateway
- Open your Telegram bot chat and send any message
- The bot will reply with a pairing code and a command
- Copy the command, replace the code, and paste it in your terminal:
openclaw pairing approve telegram YOUR_PAIRING_CODE
Make sure to remove any arrows/brackets from the code — just the plain code.
That's it — your OpenClaw bot is now running with your local Ollama model through Telegram!