If you're thinking of running OpenClaw on your main PC, I'd strongly suggest using a VPS instead. Anything you can do on your PC, OpenClaw can do too, which is extremely risky if you have personal info on it.
A VPS is essentially just a fresh virtual PC. That way you're not risking any of your personal data and the VPS runs 24/7 so you'll have access to the bot whenever you want. It's also much cheaper than spending hundreds of dollars on a dedicated device like a Mac Mini.
The VPS provider I use is Hostinger because it's the simplest to set up and also one of the cheapest. If you use the link below, you'll get an extra 20% off.
👉 https://www.hostinger.com/self-hosted-n8n?REFERRALCODE=HOWTO20
Coupon code: HOWTO20
```
sudo apt update && sudo apt upgrade -y
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y nodejs
```

Verify:

```
node --version
```

Install Ollama:

```
curl -fsSL https://ollama.ai/install.sh | sh
```

Verify:

```
ollama --version
```

Pull the model:

```
ollama pull qwen3.5:0.8b
```

Verify:

```
ollama list
```

This is a ~1GB model that runs on CPU. If you want a smarter model and have the RAM, replace `qwen3.5:0.8b` with `qwen3.5:4b` or `qwen3.5:9b`.
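Once the model is pulled, Ollama also serves an OpenAI-compatible API on port 11434 (the same `http://127.0.0.1:11434/v1` endpoint the OpenClaw config uses later in this guide). As a quick sanity check that the model actually answers, here's a small sketch; the helper function and prompt are just illustrations, not part of any official tooling:

```python
import json
import urllib.request

OLLAMA_URL = "http://127.0.0.1:11434/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat request against the local Ollama endpoint."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Swap the tag if you pulled a different model.
    req = build_chat_request("qwen3.5:0.8b", "Say hello in one word.")
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

If this prints a reply, the model is up and OpenClaw will be able to reach it at the same address.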
```
curl -fsSL https://openclaw.ai/install.sh | bash
openclaw onboard --install-daemon
```

This sets up the gateway and installs the background service. Press Ctrl+C when it's done.
```
openclaw configure --section channels
```

- Gateway location: select Local (this machine)
- Channels: select Configure/link
- Select Telegram (Bot API)
- Go to Telegram, find @BotFather, send `/newbot`, create your bot, copy the token
- Select Enter Telegram bot token, paste it, press Enter
- Select Finished
- DM access policies: select No
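Before pasting the token, you can confirm it's valid with Telegram's standard `getMe` Bot API method, which returns your bot's identity. A minimal sketch (the helper function is my own; the URL format is Telegram's documented one):

```python
import json
import urllib.request

def getme_url(token: str) -> str:
    """URL for Telegram's getMe method, which returns the bot's identity."""
    return f"https://api.telegram.org/bot{token}/getMe"

if __name__ == "__main__":
    token = "123456:ABC-your-token-here"  # placeholder, use your real token
    with urllib.request.urlopen(getme_url(token)) as resp:
        print(json.load(resp))  # a valid token returns "ok": true plus the bot's username
```

If the response shows `"ok": true`, the token you copied from @BotFather is good to paste into OpenClaw.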
Paste this command to replace the default config with your local Ollama model:
If you pulled a different model in Step 4, swap `qwen3.5:0.8b` and `Qwen3.5 0.8B` with your model name.
```
python3 -c "
import json
with open('/root/.openclaw/openclaw.json') as f:
    cfg = json.load(f)
cfg['models'] = {'mode': 'merge', 'providers': {'ollama': {'baseUrl': 'http://127.0.0.1:11434/v1', 'apiKey': 'ollama', 'api': 'ollama', 'models': [{'id': 'qwen3.5:0.8b', 'name': 'Qwen3.5 0.8B', 'reasoning': False, 'input': ['text'], 'cost': {'input': 0, 'output': 0, 'cacheRead': 0, 'cacheWrite': 0}, 'contextWindow': 32768, 'maxTokens': 8192}]}}}
cfg['agents'] = {'defaults': {'model': {'primary': 'ollama/qwen3.5:0.8b'}}}
with open('/root/.openclaw/openclaw.json', 'w') as f:
    json.dump(cfg, f, indent=2)
print('Done')
"
```

You should see "Done" if it worked.
Important: This step must come AFTER Telegram setup, because `openclaw configure` overwrites the config file.
```
openclaw gateway stop
openclaw gateway start
```

Open your bot in Telegram and send it any message. It will reply with a pairing code. Approve it:
Replace CODE with the actual pairing code you received:
```
openclaw pairing approve CODE
```

Remove any arrows or brackets from the code. Just the plain code.
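If you're unsure what to strip, the cleanup amounts to removing bracket characters and whitespace; a throwaway sketch (the function name and pattern are mine, purely illustrative):

```python
import re

def clean_pairing_code(raw: str) -> str:
    """Strip arrows, brackets, and whitespace that chat clients
    sometimes add when you copy the pairing code."""
    return re.sub(r"[<>\[\]()\s→]", "", raw)

print(clean_pairing_code("<ABC123>"))  # → ABC123
```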
That's it: your OpenClaw bot is now running with a free local Ollama model through Telegram.
If you don't have one yet, I use Hostinger for all my setups. Takes under a minute to get a clean Ubuntu server running. Use the link below for an extra 20% off.
👉 https://www.hostinger.com/self-hosted-n8n?REFERRALCODE=HOWTO20
Coupon code: HOWTO20