AI Implementation · February 19, 2026 · 16 min read

Stop Paying Opus for Everything: Dual-Gateway OpenClaw with OpenRouter

Run separate gateways for coding and browsing instead of switching models mid-session. An alternative approach to the multi-provider gateway from Part 2 — two gateways, two ports, zero model switching.



The series so far

In Part 1, I migrated 208 OpenClaw agent sessions from Claude Opus 4.6 to Google Vertex AI — cutting daily costs from $250 to $5 using tiered model routing with GPT-OSS and Qwen on Vertex AI MaaS.

In Part 2, I went further and built a multi-provider architecture inside a single OpenClaw gateway: MiniMax M2.5 via OpenRouter as the primary, Vertex AI models as fallbacks, and Claude Opus 4.6 as the last resort. That gave me the best price-to-intelligence ratio I'd found — $7/day instead of $250/day — with in-session model switching using aliases like @minimax25 and @opus.

This post is an alternative approach to Parts 1 and 2.

The single-gateway setup from Part 2 works, but switching models mid-session can be clunky. You have to remember the aliases, you're toggling inside the same conversation context, and the gateway doesn't "know" what kind of task you're about to do — you have to tell it every time. If you forget to switch, you're either overpaying (Opus for a simple refactor) or underperforming (a cheap model for a visual browser task).

The idea here is simpler: run two separate gateways on two ports, each permanently pinned to a different model. One for coding. One for browsing. No switching. You just send the task to the right port.

Why two gateways instead of one?

Both approaches solve the same problem — stop routing everything through the most expensive model. But they solve it differently:

| | Part 2: Single gateway, model switching | Part 3: Dual gateway, task routing |
| --- | --- | --- |
| How you pick a model | Type an alias (@minimax25) in the chat | Send the task to the right port |
| Risk of using the wrong model | Easy to forget to switch | Low — the port is the model |
| Session context | Shared across models | Isolated per gateway |
| Setup complexity | One gateway, one config, multiple providers | Two gateways, two configs, two auth stores |
| Best for | Exploratory work where you want flexibility | Repetitive workflows with clear task boundaries |

Neither is strictly better. If you liked the flexibility of Part 2, keep using it. If you want something where routing is automatic and you never think about model aliases, read on.

The cost math (still dramatic)

On OpenRouter, the pricing gap between Opus and the alternatives is massive:

| Model | OpenRouter ID | OpenClaw model ref | Input / M tokens | Output / M tokens |
| --- | --- | --- | --- | --- |
| Claude Opus 4.6 | anthropic/claude-opus-4.6 | openrouter/anthropic/claude-opus-4.6 | $5.00 | $25.00 |
| MiniMax M2.5 | minimax/minimax-m2.5 | openrouter/minimax/minimax-m2.5 | $0.30 | $1.10 |
| Kimi K2.5 | moonshotai/kimi-k2.5 | openrouter/moonshotai/kimi-k2.5 | $0.23 | $3.00 |

Prices from OpenRouter as of February 19, 2026. Prices shown reflect the default route.

Why two columns for model names? OpenRouter and OpenClaw use different ID formats — confusing them is the most common setup error. More on this below.

For a realistic chunk of work — 1 million input tokens and 300K output tokens:

| Model | Cost |
| --- | --- |
| Opus 4.6 | (1 × $5.00) + (0.3 × $25.00) = $12.50 |
| MiniMax M2.5 | (1 × $0.30) + (0.3 × $1.10) = $0.63 |
| Kimi K2.5 | (1 × $0.23) + (0.3 × $3.00) = $1.13 |

MiniMax is roughly 17× cheaper on input and 23× cheaper on output than Opus. Kimi is roughly 22× cheaper on input and 8× cheaper on output.
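The blended-cost arithmetic above is easy to reproduce in a few lines. A minimal sketch, with prices hard-coded from the table:

```python
# Prices per million tokens from the table above: (input $/M, output $/M)
PRICES = {
    "opus-4.6":     (5.00, 25.00),
    "minimax-m2.5": (0.30, 1.10),
    "kimi-k2.5":    (0.23, 3.00),
}

def job_cost(model: str, m_in: float = 1.0, m_out: float = 0.3) -> float:
    """Dollar cost for m_in million input and m_out million output tokens."""
    p_in, p_out = PRICES[model]
    return round(m_in * p_in + m_out * p_out, 2)
```

With the defaults (1M input, 300K output), `job_cost("opus-4.6")` returns 12.50 versus 0.63 for MiniMax — the same numbers as the table.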

The models and why these two

MiniMax M2.5 is a strong coding model — the same one I made the primary in Part 2. It scores 80.2% on SWE-Bench Verified, is cost-efficient, and solid for generating code, patching files, and iterative refactors.

Kimi K2.5 is new to this series. It's Moonshot AI's native multimodal model with strong visual understanding and agentic tool-calling — exactly what you want for tasks that involve navigating and interpreting browser state. In the Part 2 single-gateway setup, you'd have to alias-switch to a visual model. Here, the research gateway just is the visual model.

For readers unfamiliar with it: OpenClaw is an open-source AI assistant gateway that runs locally on your machine and connects AI models to your chat apps, coding tools, and browser. It's the control plane powering this entire series.

Architecture

One machine. Two gateway processes. Two ports. Two default models. Both routing through OpenRouter.

                 ┌──────────────────────────────┐
                 │          OpenRouter           │
                 │   (one API, many providers)   │
                 └───────────────┬───────────────┘
                                 │
      ┌──────────────────────────┴──────────────────────────┐
      │                                                     │
┌─────▼─────┐                                         ┌─────▼─────┐
│ Gateway A │                                         │ Gateway B │
│ Coding    │                                         │ Research  │
│ :18789    │                                         │ :19001    │
│ openrouter│                                         │ openrouter│
│ /minimax/ │                                         │ /moonshot │
│ minimax-  │                                         │ ai/kimi-  │
│ m2.5      │                                         │ k2.5      │
└───────────┘                                         └───────────┘
      │                                                     │
  IDE / CLI tasks                                   Browser / visual tasks

Compare this to the Part 2 architecture, where everything ran through a single gateway with a fallback chain. Here, each gateway is self-contained — its own config, its own auth store, its own model default.

That isolation is the point, but it has a cost: each profile maintains its own auth store. Your OpenRouter API key doesn't automatically carry over when you create a new profile. This catches almost everyone.

The 20-port rule (and the hard lesson that produced it)

Update (February 20, 2026): The original version of this post used ports 18789 and 18790 — adjacent ports, one apart. That turned out to be a serious mistake that cost several hours of debugging.

What went wrong with adjacent ports

OpenClaw doesn't just use the base gateway port. It derives additional ports from it for internal services — browser control, canvas hosting, CDP (Chrome DevTools Protocol) ranges, and sub-agent communication. When two gateways run on adjacent ports like 18789 and 18790, those derived port ranges overlap and collide.

The symptoms were subtle and maddening:

  1. Sub-agent spawning failures. The coding gateway tried to spawn a sub-agent, couldn't bind to a derived port because the research gateway already held it, and the task silently failed with gateway closed (1008): pairing required.
  2. An agent killed the other gateway. This was the worst one. The coding agent detected what it interpreted as a conflicting process on a nearby port — the research gateway. It decided the "conflict" was an error condition and terminated the research gateway process mid-session. Work was lost. The research gateway's logs simply showed gateway closed (1006 abnormal closure) with no close reason.
  3. Bonjour name collisions. Both gateways advertised themselves via Bonjour on the local network, leading to repeated "name conflict resolved" churn in the logs — harmless but noisy, and it made the real errors harder to spot.

The fix: space your ports by at least 20

The rule is simple: leave a gap of at least 20 between gateway ports so derived port ranges never overlap.

| Gateway | Port (original, broken) | Port (fixed) |
| --- | --- | --- |
| Coding | 18789 | 18789 |
| Research | 18790 ❌ | 19001 ✅ |

Any port ≥ 18809 would work for the research gateway. I chose 19001 for readability.
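To make the rule concrete, here's a toy overlap check. The assumption that derived services land somewhere in the 20 ports starting at the base is an illustration of the rule of thumb, not OpenClaw's actual port derivation:

```python
def derived_ports(base: int, span: int = 20) -> set:
    # Illustration only: assume derived services (browser control, CDP,
    # sub-agent channels) fall somewhere in [base, base + span - 1].
    return set(range(base, base + span))

def gateways_collide(a: int, b: int) -> bool:
    """True if two gateways' assumed derived-port ranges overlap."""
    return not derived_ports(a).isdisjoint(derived_ports(b))
```

`gateways_collide(18789, 18790)` is True — the broken original layout — while `gateways_collide(18789, 19001)` is False, which is exactly why the fix works.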

You can also suppress Bonjour advertising entirely if you don't need local network discovery:

OPENCLAW_DISABLE_BONJOUR=1 openclaw --profile coding gateway --port 18789
OPENCLAW_DISABLE_BONJOUR=1 openclaw --profile research gateway --port 19001

The two traps (read these before setup)

Trap 1: The model ID prefix

This was a gotcha in Part 2 as well. OpenRouter identifies models like moonshotai/kimi-k2.5. But OpenClaw parses model refs by splitting on the first / — so if you put moonshotai/kimi-k2.5 directly into your OpenClaw config, it interprets moonshotai as the provider (which you haven't configured), and fails with:

Unknown model: moonshotai/kimi-k2.5

The fix: prefix the OpenRouter model ID with openrouter/.

| What you see on OpenRouter | What you put in OpenClaw |
| --- | --- |
| minimax/minimax-m2.5 | openrouter/minimax/minimax-m2.5 |
| moonshotai/kimi-k2.5 | openrouter/moonshotai/kimi-k2.5 |
| anthropic/claude-opus-4.6 | openrouter/anthropic/claude-opus-4.6 |
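If you script your configs, a tiny guard function avoids this trap entirely. This is a hypothetical helper named for illustration, not part of OpenClaw:

```python
def to_openclaw_ref(openrouter_id: str) -> str:
    """Map an OpenRouter model ID to the ref OpenClaw expects.

    OpenClaw treats everything before the first "/" as the provider name,
    so the OpenRouter ID must be nested under the "openrouter" provider.
    """
    if openrouter_id.startswith("openrouter/"):
        return openrouter_id  # already a valid OpenClaw ref
    return "openrouter/" + openrouter_id
```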

Trap 2: Each profile needs its own OpenRouter API key

If you read Part 2, you may remember Gotcha #9 — the auth-profiles.json file. The same issue hits harder here because you have two profiles that each need credentials.

OpenClaw stores provider credentials per profile at:

~/.openclaw-<profile>/.openclaw/agents/main/agent/auth-profiles.json

When you create a new profile, that auth store starts empty. You'll see:

No API key found for provider "openrouter".
Auth store: /Users/you/.openclaw-research/.openclaw/agents/main/agent/auth-profiles.json

You need to provision the OpenRouter key into every profile. Three options (pick one per profile):

Option A — Run openclaw configure (cleanest):

OPENCLAW_HOME="$HOME/.openclaw-research" openclaw configure

Choose OpenRouter when prompted and paste your API key; it lands in that profile's local auth store.

Option B — Copy the auth store from a working profile (fastest):

mkdir -p "$HOME/.openclaw-research/.openclaw/agents/main/agent"

cp "$HOME/.openclaw-coding/.openclaw/agents/main/agent/auth-profiles.json" \
   "$HOME/.openclaw-research/.openclaw/agents/main/agent/auth-profiles.json"

chmod 600 "$HOME/.openclaw-research/.openclaw/agents/main/agent/auth-profiles.json"

Option C — Set the key in the profile's config file:

Add an env block to that profile's openclaw.json:

{
  "env": {
    "OPENROUTER_API_KEY": "sk-or-..."
  }
}

Convenient but stores the key in plaintext on disk.

After any of these, verify with a probe:

OPENCLAW_HOME="$HOME/.openclaw-research" openclaw models status --probe

You should see api_key=1 and a probe result of ok. If it says "Missing auth," the key didn't land — re-run openclaw configure for that profile.
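Before starting the gateways, you can also sanity-check each profile's auth store from a script. A sketch under this post's assumptions — the path follows the layout above, and the assumption that "openrouter" appears somewhere in the store's JSON is mine, since the store's exact schema isn't documented here:

```python
import json
from pathlib import Path

def has_openrouter_key(profile_home: Path) -> bool:
    """True if the profile's auth store exists and mentions openrouter."""
    store = (profile_home / ".openclaw" / "agents" / "main"
             / "agent" / "auth-profiles.json")
    if not store.exists():
        return False  # new profiles start with an empty auth store
    try:
        data = json.loads(store.read_text())
    except json.JSONDecodeError:
        return False
    # Assumption: the provider name appears as a key or value in the store
    return "openrouter" in json.dumps(data)
```

Run it against `~/.openclaw-coding` and `~/.openclaw-research` before launch; a False means you skipped the provisioning step for that profile.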

Setup: the clean happy path

1. Create two isolated OpenClaw profiles

~/.openclaw-coding
~/.openclaw-research

If the profiles are fresh, initialize each one with openclaw onboard --accept-risk, setting OPENCLAW_HOME accordingly.

2. Provision your OpenRouter API key into both profiles

Don't skip this. Run openclaw configure for each:

OPENCLAW_HOME="$HOME/.openclaw-coding"   openclaw configure
OPENCLAW_HOME="$HOME/.openclaw-research" openclaw configure

3. Set each gateway to its own port (with proper spacing)

Coding stays on 18789. Research moves to 19001 (not 18790 — see the 20-port rule above).

4. Pin each gateway to a default model

The fastest way — openclaw models set writes the config in the correct schema:

OPENCLAW_HOME="$HOME/.openclaw-coding" openclaw models set openrouter/minimax/minimax-m2.5
OPENCLAW_HOME="$HOME/.openclaw-research" openclaw models set openrouter/moonshotai/kimi-k2.5

If you prefer to edit the config directly, remember that agents.defaults.model is an object, not a string:

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "openrouter/moonshotai/kimi-k2.5"
      }
    }
  }
}

For those who want to script both configs at once:

python3 - <<'PY'
import json, pathlib

def patch(home_dir, port, primary):
    p = pathlib.Path(home_dir).expanduser() / ".openclaw" / "openclaw.json"
    j = json.loads(p.read_text())

    j.setdefault("gateway", {})
    j["gateway"]["mode"] = j["gateway"].get("mode", "local")
    j["gateway"]["bind"] = j["gateway"].get("bind", "loopback")
    j["gateway"]["port"] = port

    j.setdefault("agents", {})
    j["agents"].setdefault("defaults", {})
    j["agents"]["defaults"]["model"] = {"primary": primary}

    p.write_text(json.dumps(j, indent=2) + "\n")
    print(f"patched {p} -> {primary} on {port}")

patch("~/.openclaw-coding",   18789, "openrouter/minimax/minimax-m2.5")
patch("~/.openclaw-research", 19001, "openrouter/moonshotai/kimi-k2.5")
PY

5. Start both gateways

OPENCLAW_HOME="$HOME/.openclaw-coding"   openclaw gateway
OPENCLAW_HOME="$HOME/.openclaw-research" openclaw gateway

Or with --profile:

openclaw --profile coding gateway --port 18789
openclaw --profile research gateway --port 19001

6. Verify everything end-to-end

OPENCLAW_HOME="$HOME/.openclaw-coding"   openclaw models status --plain --probe
OPENCLAW_HOME="$HOME/.openclaw-research" openclaw models status --plain --probe

Both should show the correct openrouter/-prefixed model ref and an auth probe of ok.

Finding the gateway token

When running multiple gateways, one of the most common friction points is finding the right token for the right port. You need the gateway auth token to open the Control UI in your browser. Here are four ways to get it, from easiest to most manual.

Method 1: Let OpenClaw open the dashboard for you (easiest)

OpenClaw has a built-in command that opens the Control UI in your default browser with the correct token already embedded in the URL:

openclaw --profile coding  dashboard
openclaw --profile research dashboard

This is the recommended approach. It constructs the correct /#token=... URL and opens it directly — no copy-pasting required.

Method 2: Print the token from the CLI

If you want to construct the URL yourself (for example, to open it in a specific browser or Incognito window):

openclaw --profile coding  config get gateway.auth.token --plain
openclaw --profile research config get gateway.auth.token --plain

Then open:

http://127.0.0.1:18789/#token=<PASTE_CODING_TOKEN>
http://127.0.0.1:19001/#token=<PASTE_RESEARCH_TOKEN>

Method 3: Extract the token from the config file directly

If the gateway isn't running (so CLI commands fail), you can pull the token straight from the JSON config:

python3 -c "
import json, pathlib, sys
profile = sys.argv[1]
p = pathlib.Path.home() / f'.openclaw-{profile}' / '.openclaw' / 'openclaw.json'
print(json.loads(p.read_text())['gateway']['auth']['token'])
" coding

Replace coding with research for the other profile. Or grab both at once:

for profile in coding research; do
  token=$(python3 -c "
import json, pathlib
p = pathlib.Path.home() / '.openclaw-$profile' / '.openclaw' / 'openclaw.json'
print(json.loads(p.read_text())['gateway']['auth']['token'])
  ")
  echo "$profile: $token"
done

Method 4: Open the config files and look

When all else fails, open the config file in any editor and search for the token field under gateway.auth:

open -a "Visual Studio Code" "$HOME/.openclaw-coding/.openclaw/openclaw.json"
open -a "Visual Studio Code" "$HOME/.openclaw-research/.openclaw/openclaw.json"

Or with TextEdit:

open -a TextEdit "$HOME/.openclaw-coding/.openclaw/openclaw.json"

Or in the terminal:

nano "$HOME/.openclaw-coding/.openclaw/openclaw.json"

Look for:

{
  "gateway": {
    "auth": {
      "token": "your-token-here"
    }
  }
}

Note on code vs open: If you get zsh: command not found: code, the VS Code CLI isn't installed in your PATH. Either use open -a "Visual Studio Code" instead, or install it from VS Code: Cmd+Shift+P → "Shell Command: Install 'code' command in PATH."

Connecting the Control UI (and fixing device token mismatches)

When running multiple gateways, browser auth state gets tangled. There are two different auth layers at play, and confusing them is the source of most Control UI headaches:

  1. Gateway token — the token you put in the URL as /#token=.... This authenticates you to the gateway. You find it using the methods in the previous section.
  2. Device token — a per-browser credential stored in your browser's local storage for that specific origin (http://host:port). This is generated automatically during the pairing process and tied to the gateway's internal device registry.

The device_token_mismatch error means the browser is sending a device token that the gateway doesn't recognize. This typically happens when you restart a gateway, change its config, or (in the original version of this post) swap which profile runs on which port.

The permanent fix for device token mismatches

A. Never reuse the same port for a different profile. Once 18789 = coding and 19001 = research, keep it that way. Most device token mismatches come from changing what lives behind a port.

B. Use different origins for each gateway. The browser keys device tokens by origin (host + port). If both gateways run on 127.0.0.1, each port is already a distinct origin. But you can make the isolation even stronger by using different hostnames:

  • Coding UI: http://127.0.0.1:18789/#token=...
  • Research UI: http://localhost:19001/#token=...

Because 127.0.0.1 and localhost are technically different origins, the browser maintains completely separate storage for each — eliminating any chance of cross-contamination.

C. Use a separate Chrome profile per gateway. This is the most robust isolation: each Chrome profile has its own local storage, cookies, and device tokens. Create one called "OpenClaw Coding" and another called "OpenClaw Research."
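What underlies options (B) and (C) is the browser's definition of an origin: the (scheme, host, port) triple. A quick sketch showing why each port — and each hostname — gets its own storage bucket:

```python
from urllib.parse import urlsplit

def origin(url: str) -> tuple:
    """The (scheme, host, port) triple browsers key local storage by."""
    u = urlsplit(url)
    return (u.scheme, u.hostname, u.port)

# Different ports are different origins; so are localhost and 127.0.0.1,
# even on the same port — each gets fully separate device-token storage.
```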

When it happens anyway (fast recovery)

If you see device_token_mismatch, here's how to fix it quickly:

Option 1 — Open an Incognito window. Incognito has clean storage, so it won't present the stale device token. Navigate to http://127.0.0.1:18789/#token=<YOUR_TOKEN> and you'll get a fresh pairing.

Option 2 — Clear site data for that origin.

  1. Open Chrome DevTools on the Control UI tab (F12)
  2. Go to Application → Storage
  3. Click Clear site data for http://127.0.0.1:18789 (or whichever port)
  4. Reload the page with the /#token=... URL

Option 3 — Nuclear reset from Chrome settings.

  1. Go to chrome://settings/siteData
  2. Search for 127.0.0.1
  3. Delete entries for 18789 and/or 19001
  4. Re-open the gateway URLs with their tokens

Preventing agents from killing each other's gateways

This was the most disruptive issue we encountered, and it deserves its own section.

What happened

With two gateways running on adjacent ports (18789 and 18790), the coding agent tried to spawn sub-agents for a complex task. During the spawning process, it detected what it interpreted as a conflicting gateway process on a nearby port. The agent treated this as an error — a stale or broken process that needed cleanup — and terminated it. The "stale process" was actually the research gateway, which was in the middle of an active session.

The research gateway's logs showed only:

gateway closed (1006 abnormal closure (no close frame)): no close reason

No warning. No negotiation. The agent just killed it.

Why it happens

OpenClaw agents have the ability to manage gateway processes as part of their self-healing behavior. When an agent encounters connection failures during sub-agent spawning, it may inspect nearby ports, find processes it doesn't recognize as "its own," and terminate them. With adjacent ports, the derived service ports overlap, making the other gateway look like a broken instance of the same gateway.

How to prevent it

  1. Space your ports by ≥ 20. This is the primary fix. With 18789 and 19001, the derived port ranges don't overlap, so neither agent will "see" the other gateway's processes as something to clean up.
  2. Always start gateways with explicit --profile and --port flags. This ensures each gateway process is clearly scoped:
openclaw --profile coding  gateway --port 18789
openclaw --profile research gateway --port 19001
  3. Never use --force unless you mean it. The --force flag tells OpenClaw to kill whatever is currently bound to the target port. If you accidentally run openclaw --profile research gateway --port 18789 --force, it will terminate the coding gateway. Always double-check port numbers when using --force.
  4. Disable Bonjour if you don't need local network discovery. Bonjour advertising causes name collisions between gateways, and while the collisions are harmless, they create noise in the logs that makes real errors harder to spot:
OPENCLAW_DISABLE_BONJOUR=1 openclaw --profile coding gateway --port 18789
OPENCLAW_DISABLE_BONJOUR=1 openclaw --profile research gateway --port 19001

Routing rubric: which gateway gets the task?

| Signal in the task | Route to | Why |
| --- | --- | --- |
| Code generation, refactoring, test writing, file patching | Coding gateway (MiniMax M2.5) | Strong coding benchmarks at a fraction of Opus cost |
| Browser navigation, screenshot interpretation, DOM interaction, form filling | Research gateway (Kimi K2.5) | Native multimodal model built for visual + agentic work |
| Ambiguous, high-stakes, or unfamiliar territory | Opus (manual override or separate gateway) | When the cost of being wrong exceeds the cost of the model |

A simple heuristic: if the prompt mentions a URL, a screenshot, or "click" — Kimi. If it mentions a file path, a function, or "refactor" — MiniMax. If you're nervous about the answer — Opus.
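That heuristic is simple enough to script. A toy keyword router — the keyword lists are my own illustration, not anything OpenClaw ships:

```python
def pick_gateway(prompt: str) -> str:
    """Route a task to 'research' (:19001), 'coding' (:18789), or 'opus'."""
    p = prompt.lower()
    visual = ("http", "screenshot", "click", "browser", "form")
    coding = ("refactor", "function", "file", "patch", ".py", ".ts")
    if any(k in p for k in visual):
        return "research"  # Kimi K2.5 handles visual / browser work
    if any(k in p for k in coding):
        return "coding"    # MiniMax M2.5 handles code work
    return "opus"          # manual override for ambiguous, high-stakes work
```

A wrapper like this could sit in front of both gateways and forward each prompt to the matching port, making the routing fully automatic.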

Gotchas (quick reference)

  1. Unknown model error. You used the bare OpenRouter ID instead of the OpenClaw model ref. Always prefix with openrouter/.
  2. No API key found for provider "openrouter". Each profile has its own auth-profiles.json. New profiles start empty. Run openclaw configure for each, or copy the auth store from a working profile.
  3. Port conflict on startup. New profiles default to port 18789. Change gateway.port in the research config to 19001. Do not use adjacent ports — leave a gap of at least 20.
  4. agents.defaults.model expects an object, not a string. Use {"primary": "openrouter/minimax/minimax-m2.5"}. Or use openclaw models set to avoid hand-editing.
  5. Two Control UI auth errors that look the same but aren't. device_token_mismatch is stale browser pairing state (clear site data or use Incognito); gateway token missing means the UI needs /#token=<TOKEN> in the URL.
  6. Always verify with openclaw models status --plain --probe. It confirms model defaults and auth in one command.
  7. Agent killed the other gateway. Adjacent ports cause derived port range collisions. One agent may interpret the other gateway as a broken process and terminate it. Fix: space ports by ≥ 20 (e.g., 18789 and 19001).
  8. gateway closed (1006 abnormal closure) when running CLI commands. The gateway on that port isn't running or crashed during startup. Start it in the foreground first (openclaw --profile coding gateway --port 18789) to see the error output, then check logs at /tmp/openclaw/openclaw-*.log.
  9. pairing required on scope upgrade. The gateway is refusing a device's request to escalate permissions. Approve it via the Control UI's devices screen, or run openclaw --profile <name> devices approve --latest.
  10. zsh: command not found: code. The VS Code CLI isn't in your PATH. Use open -a "Visual Studio Code" <file> instead, or install the CLI from VS Code (Cmd+Shift+P → "Shell Command: Install 'code' command in PATH").
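Several of these gotchas can be caught before startup with one preflight pass over both configs. A sketch under this post's assumptions (file layout and config schema as shown in the setup section; the helper itself is hypothetical):

```python
import json
from pathlib import Path

def preflight(config_paths: dict) -> list:
    """Return a list of problems across profile configs.

    config_paths maps profile name -> Path to that profile's openclaw.json.
    Checks the model-object shape, the openrouter/ prefix, and port spacing.
    """
    problems, ports = [], {}
    for name, path in config_paths.items():
        cfg = json.loads(Path(path).read_text())
        ports[name] = cfg.get("gateway", {}).get("port")
        model = cfg.get("agents", {}).get("defaults", {}).get("model")
        if not isinstance(model, dict) or "primary" not in model:
            problems.append(f"{name}: agents.defaults.model must be an object with 'primary'")
        elif not model["primary"].startswith("openrouter/"):
            problems.append(f"{name}: model ref is missing the openrouter/ prefix")
    names = sorted(ports)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if ports[a] and ports[b] and abs(ports[a] - ports[b]) < 20:
                problems.append(f"{a}/{b}: ports {ports[a]} and {ports[b]} are less than 20 apart")
    return problems
```

An empty return means the configs pass all three checks; run it before starting either gateway.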

When this pattern works (and when it doesn't)

This dual-gateway approach is worth it when your workload naturally splits into coding and browsing, you do enough volume that pricing differences compound, and you want routing that's automatic — no aliases, no switching, just "send it to the right port."

It's probably not worth the overhead if you liked the flexibility of the Part 2 single-gateway setup and don't mind typing @minimax25 or @opus, or if you rarely use the browser tool.

The two approaches aren't mutually exclusive, either. You can run a dual-gateway setup for your main workflows and still have the Part 2 fallback chain inside each gateway for resilience.

Looking ahead, the pattern extends naturally: a third gateway for writing-heavy tasks, automatic routing based on prompt classification, or fallback chains where a cheap model tries first and escalates to Opus only on failure. The core idea across all three parts of this series is the same: route by competency, not by habit.

All pricing sourced directly from OpenRouter model pages on February 19, 2026: Claude Opus 4.6, MiniMax M2.5, Kimi K2.5.

This is Part 3 of my OpenClaw cost optimization series. Read Part 1: Migrating to Vertex AI and Part 2: Multi-Provider Gateway with MiniMax M2.5.

About the Author

Levi Brackman

Levi Brackman is the founder of Be AI First, helping companies become AI-first in 6 weeks. He builds and deploys agentic AI systems daily and advises leadership teams on AI transformation strategy.

Learn more →