
@Pyr0zen
Last active February 24, 2026 21:26

OpenClaw + Ollama: Full Setup Guide

Step 1: Install Node.js

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc
nvm install node

Step 2: Install Ollama + Pull a Model

curl -fsSL https://ollama.ai/install.sh | sh
systemctl start ollama
ollama pull qwen2.5:3b

qwen2.5:3b is a lighter model suited to CPU-only setups (e.g., a VPS without a GPU). If you have a capable GPU, replace it with a larger model such as qwen2.5-coder:32b or deepseek-r1:32b.
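Once the model is pulled, Ollama serves an OpenAI-compatible API on port 11434 — the same endpoint OpenClaw will point at in Step 5. A minimal sketch of the request payload that endpoint expects (the prompt text is just an example):

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Payload shape for Ollama's OpenAI-compatible /v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete response instead of a token stream
    }

payload = build_chat_request("qwen2.5:3b", "Say hello in one word.")
print(json.dumps(payload, indent=2))
```

POSTing this JSON to http://127.0.0.1:11434/v1/chat/completions (e.g. with curl) is a quick way to confirm the server answers before wiring up OpenClaw.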


Step 3: Install OpenClaw

npm install -g openclaw@latest

Step 4: Run the Onboarding Wizard

openclaw onboard --install-daemon

Follow the wizard steps:

  1. Select "Yes, I understand this is inherently risky" → Enter
  2. Select Quick Start → Enter
  3. Providers: select Skip for now → Enter
  4. Select All Providers → Enter model manually → just press Enter (we'll replace the config in Step 5)
  5. Select Telegram → Enter
  6. Go to Telegram, find @BotFather, send /newbot, create your bot, and copy the access token
  7. Paste your bot token → Enter
  8. Select Yes, configure skills → Enter
  9. When prompted to install Homebrew, select npm instead
  10. Select your skills using arrow keys + spacebar (recommended: Claw Hub for adding custom skills later)
  11. Press Enter to confirm skill selection
  12. Add any API keys required for your selected skills
  13. Install the gateway service when prompted
  14. Select Do this later for hatching
  15. You should see "Onboarding complete" — press Ctrl+C to exit back to the terminal

Step 5: Connect Ollama to OpenClaw

Paste this command to replace the default model config with your local Ollama model:

python3 -c "
import json
with open('/root/.openclaw/openclaw.json') as f:
    cfg = json.load(f)
cfg['agents']['defaults']['model']['primary'] = 'ollama/qwen2.5:3b'
cfg['agents']['defaults']['models'] = {'ollama/qwen2.5:3b': {'alias': 'Qwen 2.5 3B'}}
cfg['models'] = {
    'mode': 'merge',
    'providers': {
        'ollama': {
            'baseUrl': 'http://127.0.0.1:11434/v1',
            'apiKey': 'ollama',
            'api': 'openai-responses',
            'models': [{
                'id': 'qwen2.5:3b',
                'name': 'Qwen 2.5 3B',
                'reasoning': False,
                'input': ['text'],
                'cost': {'input': 0, 'output': 0, 'cacheRead': 0, 'cacheWrite': 0},
                'contextWindow': 32000,
                'maxTokens': 4096,
            }],
        }
    }
}
with open('/root/.openclaw/openclaw.json', 'w') as f:
    json.dump(cfg, f, indent=2)
print('Done')
"

You should see "Done" if it worked.

Swap qwen2.5:3b for whatever model you pulled in Step 2; it appears in several places in the script, so replace every occurrence.
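After running the script, you can sanity-check that the config actually points at your Ollama model. A minimal sketch (the validation logic is my own, assuming the file layout the script above writes):

```python
def check_model_config(cfg: dict, model_id: str = "qwen2.5:3b") -> list:
    """Return a list of problems with the Ollama wiring in an openclaw.json dict."""
    problems = []
    full_id = f"ollama/{model_id}"
    primary = cfg.get("agents", {}).get("defaults", {}).get("model", {}).get("primary")
    if primary != full_id:
        problems.append(f"primary model is not {full_id}")
    provider = cfg.get("models", {}).get("providers", {}).get("ollama", {})
    if provider.get("baseUrl") != "http://127.0.0.1:11434/v1":
        problems.append("ollama baseUrl is not http://127.0.0.1:11434/v1")
    ids = [m.get("id") for m in provider.get("models", [])]
    if model_id not in ids:
        problems.append(f"{model_id} missing from provider model list")
    return problems

# A minimal config shaped like the one the script writes:
sample = {
    "agents": {"defaults": {"model": {"primary": "ollama/qwen2.5:3b"}}},
    "models": {"providers": {"ollama": {
        "baseUrl": "http://127.0.0.1:11434/v1",
        "models": [{"id": "qwen2.5:3b"}],
    }}},
}
print(check_model_config(sample))  # -> []
```

To check your real file, load /root/.openclaw/openclaw.json with json.load and pass the result in; an empty list means the three key fields line up.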


Step 6: Restart the Gateway

systemctl --user restart openclaw-gateway

Step 7: Pair Telegram

  1. Open your Telegram bot chat and send any message
  2. The bot will reply with a pairing code and a command
  3. Copy the command, replace the code, and paste it in your terminal:
openclaw pairing approve telegram YOUR_PAIRING_CODE

Remove any arrows/brackets surrounding the code before pasting; the command takes just the plain code.
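To make "just the plain code" concrete, here's a tiny hypothetical helper (not part of OpenClaw) showing the cleanup:

```python
def clean_pairing_code(raw: str) -> str:
    """Strip surrounding whitespace and placeholder arrows/brackets from a pairing code."""
    return raw.strip().strip("<>[]")

print(clean_pairing_code(" <ABC123> "))  # -> ABC123
```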


That's it: your OpenClaw bot is now running with your local Ollama model through Telegram!
