| name | description |
|---|---|
| replicate | Search, explore, and run ML models on Replicate (image gen, video, audio, text, etc.) |

Run state-of-the-art open-source and proprietary ML models via the Replicate cloud API.
| name | description |
|---|---|
| openclaw-introspect | Explore, understand, and reconfigure your own OpenClaw gateway, agent harness, and system prompt. Use when you need to inspect or change OpenClaw configuration (openclaw.json), understand how the system prompt is built, debug session/channel/model issues, navigate the docs or source code, or tune agent defaults (models, thinking, sandbox, tools, heartbeat, compaction, channels, skills, plugins, cron, hooks). Also use for questions about OpenClaw architecture, the agent loop, the context window, or how any OpenClaw feature works internally. |

Explore and reconfigure your own harness. This skill gives you structured knowledge of OpenClaw's internals so you can inspect, debug, and tune the running gateway.
| name | description |
|---|---|
| replicate-mcp | Configure and validate Replicate MCP connectivity in Codex using REPLICATE_API_TOKEN and the official replicate-mcp server package. Use when setting up Replicate MCP for the first time, reconnecting after auth/config changes, or troubleshooting missing Replicate MCP tools. |

Use this skill to set up and verify Replicate MCP access in Codex with minimal back-and-forth.
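The description names Codex, `REPLICATE_API_TOKEN`, and the replicate-mcp package but shows no concrete wiring. A minimal setup sketch might look like the following — the `~/.codex/config.toml` path, the `[mcp_servers.*]` key shape, and the `npx -y replicate-mcp` invocation are assumptions to verify against your Codex version's documentation, and the token value is a placeholder:

```shell
# Hypothetical wiring sketch -- check the config path and key names against
# your Codex version before applying; the token below is a placeholder.
mkdir -p ~/.codex
cat >> ~/.codex/config.toml <<'EOF'
[mcp_servers.replicate]
command = "npx"
args = ["-y", "replicate-mcp"]
env = { "REPLICATE_API_TOKEN" = "r8_your_token_here" }
EOF
```

After restarting Codex, the Replicate MCP tools should show up in the tool list; if they do not, recheck the token and the server entry.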
```shell
#!/bin/bash
# Update package lists
sudo apt-get update
# Install the CUDA toolkit
sudo apt-get install -y nvidia-cuda-toolkit
# Install cog (Replicate's tool for packaging ML models in containers)
sudo curl -o /usr/local/bin/cog -L "https://github.com/replicate/cog/releases/latest/download/cog_$(uname -s)_$(uname -m)"
sudo chmod +x /usr/local/bin/cog
```
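The download URL builds the release asset name from `uname -s` (OS) and `uname -m` (CPU architecture), so you can preview which asset the command will fetch before running it:

```shell
# Preview the asset name the install command interpolates into the URL;
# on x86-64 Linux this prints cog_Linux_x86_64
echo "cog_$(uname -s)_$(uname -m)"
```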
```
palette = 0=#23272e
palette = 1=#f38020
palette = 2=#a8e6a3
palette = 3=#faae40
palette = 4=#4da6ff
palette = 5=#ff80ab
palette = 6=#66d9ef
palette = 7=#c0c5ce
palette = 8=#4f5b66
palette = 9=#f38020
```
Replace the `/etc/docker/daemon.json` docker config on Brev.dev Crusoe GPUs:

```json
{
  "default-runtime": "nvidia",
  "mtu": 1500,
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  }
}
```
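A malformed `daemon.json` will stop the Docker daemon from starting, so it is worth validating the file before restarting. A minimal check, assuming `python3` is available, shown here against a config like the one above:

```shell
# Fail loudly on malformed JSON before it reaches the Docker daemon
cat <<'EOF' | python3 -m json.tool >/dev/null && echo "valid JSON"
{
  "default-runtime": "nvidia",
  "mtu": 1500,
  "runtimes": {
    "nvidia": { "args": [], "path": "nvidia-container-runtime" }
  }
}
EOF
```

On the machine itself, run `python3 -m json.tool /etc/docker/daemon.json` and then `sudo systemctl restart docker`.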
```shell
#!/bin/bash
# A local CLI command that lets you use Kokoro TTS on a MacBook Pro.
# Requires you to first run the Kokoro docker container:
#   docker run -p 8880:8880 ghcr.io/remsky/kokoro-fastapi-cpu:latest
# Then save this file to /usr/local/bin.
# Finally you can test:
#   kokoro "The quick brown fox jumped over the lazy dog"
# Or even pipe from a stream like:
#   llm "tell me a joke" | kokoro
```
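The comments describe a wrapper to save as `/usr/local/bin/kokoro`, but the wrapper body itself is not shown. A sketch of what it could look like — the `/v1/audio/speech` endpoint path, the `af_sky` voice name, and `afplay` for playback are assumptions, not details from the source:

```shell
#!/bin/bash
# Hypothetical kokoro wrapper sketch: reads text from the first argument or
# from stdin (so it works in a pipe), posts it to the local container, and
# plays the result with afplay (macOS). Endpoint and voice are assumptions.
kokoro() {
  local text="${1:-$(cat)}"   # argument if given, otherwise stdin
  curl -s http://localhost:8880/v1/audio/speech \
    -H "Content-Type: application/json" \
    -d "{\"model\": \"kokoro\", \"input\": \"$text\", \"voice\": \"af_sky\"}" \
    -o /tmp/kokoro.mp3 && afplay /tmp/kokoro.mp3
}
```

In the installed script you would call `kokoro "$@"` after the definition; the argument-or-stdin pattern is what makes both `kokoro "text"` and `llm … | kokoro` work.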
```shell
# Setup:
# conda create -n wan python=3.10
# conda activate wan
# pip3 install torch torchvision torchaudio
# pip install git+https://github.com/huggingface/diffusers.git@3ee899fa0c0a443db371848a87582b2e2295852d
# pip install accelerate==1.4.0
# pip install transformers==4.49.0
# pip install ftfy==6.3.1
```
```yaml
services:
  pihole-unbound:
    image: 'bigbeartechworld/big-bear-pihole-unbound:2024.07.0'
    environment:
      - SERVICE_FQDN_PIHOLE_8080
      - SERVICE_FQDN_PIHOLE_10443
      - 'DNS1=127.0.0.1#5353'
      - DNS2=no
      - TZ=America/Chicago
      - WEBPASSWORD=$SERVICE_PASSWORD_PIHOLE
```
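Pi-hole's upstream DNS entries use `host#port` syntax, so `DNS1=127.0.0.1#5353` points Pi-hole at the bundled Unbound resolver listening locally on port 5353, while `DNS2=no` disables the second upstream slot. Splitting such an entry shows what it targets:

```shell
# Split a Pi-hole upstream entry of the form host#port
upstream="127.0.0.1#5353"
echo "host=${upstream%%#*} port=${upstream##*#}"
# prints: host=127.0.0.1 port=5353
```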
```python
from optimum.quanto import freeze, qfloat8, quantize
from diffusers import FluxPipeline
import torch

seed = 1337
generator = torch.Generator("cuda").manual_seed(seed)
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

# Quantize the transformer weights to fp8 and freeze them to reduce VRAM use
quantize(pipeline.transformer, weights=qfloat8)
freeze(pipeline.transformer)
```