Yue Song

OpenClaw Setup: My First 8 Hours From Zero to Actually Working

Mar 20, 2026

What I actually did to get OpenClaw working from a clean reset, including Slack, Tavily, Lossless Claw, QMD, and the parts that still failed.

Setting up OpenClaw was not a one-command install for me.

It was more like bringing a small system online piece by piece: model, channel, search, context, memory, and then a long debug loop until the parts finally started working together.

This is what I actually did over roughly eight hours:

  • started from scratch with a clean reset
  • completed onboarding for the model, Slack, and search
  • replaced the default search option with Tavily
  • installed a context system with Lossless Claw
  • set up memory with QMD and local GPU acceleration
  • tried, and failed for now, to harden secrets

If you are trying OpenClaw for the first time, this is the version I wish I had read before starting.

1. Start clean

rm -rf ~/.openclaw/

If you have installed OpenClaw before, clearing the directory is worth doing.

In my case, old state was more confusing than helpful. A clean reset made onboarding much more predictable, especially when switching model providers.
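If you want a safety net before wiping state, a timestamped rename works just as well as deletion. This is my own sketch, not part of the OpenClaw docs; it assumes `~/.openclaw` is the only state directory involved.

```shell
# Keep a recoverable backup instead of deleting outright.
# The -d guard makes this safe to re-run when nothing is there.
if [ -d ~/.openclaw ]; then
  mv ~/.openclaw ~/.openclaw.bak."$(date +%Y%m%d%H%M%S)"
fi
```

If onboarding later goes fine, the backup directory can simply be deleted.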

2. Install and onboard

curl -fsSL https://openclaw.ai/install.sh | bash
openclaw onboard

This part was actually smooth.

During onboarding, OpenClaw walks through three core choices:

  • model provider
  • channel
  • search tool

My choices

  • Model: Minimax
  • Channel: Slack
  • Search: skipped Brave and replaced it later

I picked Minimax because the cost-to-performance tradeoff looked good enough for an initial setup.

Why I skipped Brave

OpenClaw suggests Brave Search by default, but I wanted something with a more generous free tier.

Tavily was the more practical choice for me because it offers 1000 free credits per month, which is plenty for early testing.

3. Replace search with Tavily

After onboarding, I went to the dashboard and asked:

setup tavily skill

That was enough to get the configuration flow started. I did not need to wire everything manually.

You do still need to provide your own Tavily API key, but the overall setup was much easier than I expected.
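One habit that kept the key out of my chat history: put it in the environment and let the skill read it from there. `TAVILY_API_KEY` is the variable name Tavily's own tooling conventionally uses, but confirm what your OpenClaw skill config actually expects.

```shell
# Export the key from your shell profile rather than pasting it into chat.
# The value below is a placeholder, not a real key.
export TAVILY_API_KEY="tvly-..."

# Quick sanity check that it is visible to child processes.
[ -n "$TAVILY_API_KEY" ] && echo "Tavily key set"
```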

4. Connect Slack

During onboarding, OpenClaw prompts you to create and configure a Slack app. After that, I used:

openclaw pairing list slack
openclaw pairing approve slack <CODE>

What happened in practice

  • Pairing worked.
  • Replies did not fully work at first.

The main issue I hit was a missing_scope error from the Slack API. In my case, I needed to add the im:write scope before OpenClaw could reliably send messages back in Slack.

So the integration was not broken, but it was not fully done just because pairing succeeded.
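For reference, a missing_scope failure from the Slack Web API names the scope it wants in a `needed` field. The snippet below just echoes a canned payload of that shape rather than calling Slack, which is enough to show what to grep for in the logs:

```shell
# Sample of the error shape Slack returns when the bot token lacks a scope.
# This is a canned payload, not a live API call.
response='{"ok":false,"error":"missing_scope","needed":"im:write","provided":"chat:write"}'
echo "$response" | grep -o '"needed":"[^"]*"'
# prints "needed":"im:write"
```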

5. Install a context system early

Before setting up memory, I installed Lossless Claw.

This felt important early on because the default failure mode for long-running agents is simple: they gradually lose useful context.

Lossless Claw is meant to preserve more of that context and compress it intelligently instead of dropping it outright. If you want OpenClaw to stay useful over longer sessions, this seems foundational.

It also helped that the OpenClaw creator recommended it, so I treated it as part of the baseline rather than an optional add-on.

6. Set up memory with QMD

I installed QMD with:

npm install -g @tobilu/qmd

Minimal config

"memory": {
  "backend": "qmd",
  "citations": "auto",
  "qmd": {
    "update": {
      "interval": "5m",
      "waitForBootSync": false
    },
    "limits": {
      "maxResults": 6
    }
  }
}

I tried to keep this part simple.

There are a lot of ways to over-tune memory settings before the basics are working. This minimal config was enough to get something usable.
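Because the fragment above is only the memory block, it is easy to break the surrounding JSON when pasting it in. A cheap guard I would suggest: wrap it in braces and run it through a JSON parser before restarting the gateway. The file path here is a throwaway, and the wrapping object is an assumption about how the full config is shaped.

```shell
# Validate the memory fragment as JSON before touching the real config.
cat > /tmp/qmd-memory.json <<'EOF'
{
  "memory": {
    "backend": "qmd",
    "citations": "auto",
    "qmd": {
      "update": { "interval": "5m", "waitForBootSync": false },
      "limits": { "maxResults": 6 }
    }
  }
}
EOF
python3 -m json.tool < /tmp/qmd-memory.json > /dev/null && echo "valid JSON"
```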

7. Run QMD locally with GPU acceleration

This was the most environment-specific part of the setup.

My machine:

  • Windows with WSL2
  • RTX 5090
  • CUDA 12.9 worked
  • CUDA 13.0 did not work for me

Clean up failed local builds

When QMD builds went wrong, I removed the cached local build artifacts and tried again:

rm -rf ~/.npm-global/lib/node_modules/@tobilu/qmd/node_modules/node-llama-cpp/build
rm -rf ~/.npm-global/lib/node_modules/@tobilu/qmd/node_modules/node-llama-cpp/llama/localBuilds

CUDA environment

export CUDA_PATH=/usr/local/cuda-12.9
export CUDAToolkit_ROOT=/usr/local/cuda-12.9
export CUDACXX=/usr/local/cuda-12.9/bin/nvcc
export PATH=/usr/local/cuda-12.9/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-12.9/lib64:${LD_LIBRARY_PATH:-}

Compiler alignment

export CC=/usr/bin/gcc-12
export CXX=/usr/bin/g++-12
export CUDAHOSTCXX=/usr/bin/g++-12
export CMAKE_C_COMPILER=/usr/bin/gcc-12
export CMAKE_CXX_COMPILER=/usr/bin/g++-12
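Before kicking off another long rebuild, it is worth confirming those paths actually exist; a typo here just produces another opaque CMake failure much later. `check_tool` is a hypothetical helper of my own, not part of QMD:

```shell
# Hypothetical pre-flight check: confirm each toolchain binary is present
# and executable before rebuilding node-llama-cpp.
check_tool() {
  if [ -x "$1" ]; then
    echo "ok: $1"
  else
    echo "MISSING: $1"
  fi
}

check_tool "${CC:-/usr/bin/gcc-12}"
check_tool "${CXX:-/usr/bin/g++-12}"
check_tool "${CUDACXX:-/usr/local/cuda-12.9/bin/nvcc}"
```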

Quick verification

qmd query "who"

If Hugging Face requests hang, this helped:

HF_ENDPOINT=https://hf-mirror.com qmd query "who"

That was enough for me to confirm the local QMD path was finally alive.

8. Expect a long debug loop

Most of my time was not spent on installation. It was spent here:

openclaw logs --follow
openclaw gateway status --json
openclaw channels status --probe

The actual loop looked like this:

  • edit config
  • restart gateway
  • check logs
  • repeat

That is the part worth expecting upfront. The complexity is rarely in one command failing. It is in several components almost working, but not quite working together yet.
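The loop above condenses to a few commands worth keeping close at hand. Wrapping them in a function is purely my own convenience; it assumes the openclaw CLI from earlier sections is on your PATH:

```shell
# Sketch of one debug cycle: restart, inspect status, then tail logs.
# Ctrl-C out of the log tail, edit config, and run it again.
debug_cycle() {
  openclaw gateway restart
  openclaw gateway status --json
  openclaw logs --follow
}
```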

9. Secrets hardening was the part I could not finish

I left secrets to the end, which turned out to be the right call.

openclaw secrets audit
openclaw secrets configure
openclaw gateway restart

My result

  • env mode: failed
  • file mode: failed

I could not get secrets hardening working in this session, so I temporarily stayed with plaintext config.

That is not where I want to end up, but it was good enough to keep the rest of the system moving while I learned the stack.

10. What I ended up with after eight hours

Working

  • OpenClaw installed and onboarded
  • Slack connected, at least partially
  • Tavily search working
  • Lossless Claw installed
  • QMD memory working with GPU acceleration

Not working yet

  • secrets hardening

Final takeaway

My main lesson is that OpenClaw is not really a tool you install once.

It is a system you gradually make work.

No individual step was impossible. The real difficulty was the number of moving parts and the way they depend on each other: model provider, channel permissions, search, context, memory, local acceleration, and secrets.

If you are stuck on a general technical issue, ChatGPT is still useful for debugging background concepts. But when the problem is specific to OpenClaw itself, asking OpenClaw about its own configuration can actually be the faster move.
