How the stealth browser handles CAPTCHAs
What OpenClaw's stealth browser actually does, what it doesn't bypass, and how to think about CAPTCHA failure modes honestly.
Estimated time: 4 minutes
Every OpenClaw deployment ships with a stealth browser — a patched Chromium that hides the surface area bot-detection vendors look at. This page is the honest version of what it does and doesn't do, because the marketing version of CAPTCHA-bypass software is almost always overstated.
What the stealth browser is
It's a Chromium build with a set of patches applied to the bits of the runtime that scream "I am an automation framework":
- The `navigator.webdriver` flag is gone.
- The Chrome DevTools Protocol artifacts that Puppeteer/Playwright leak by default are suppressed.
- Headless rendering tells (font fingerprints, WebGL renderer strings, timing tells in `requestAnimationFrame`) are normalized to look like a real machine.
- The default automation user-agent is replaced.
That's roughly forty patches in total. None of them are exotic — most are well-known in the scraping community. The value is that they all ship together, configured, and updated as detection vendors learn new tells.
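To make one of those patches concrete, here is an illustrative sketch of the `navigator.webdriver` fix. This is not OpenClaw's actual code (OpenClaw patches Chromium itself); the same effect is commonly faked in userland by shadowing the getter on a navigator-like object before any page script runs. The `fakeNavigator` stand-in below is purely for demonstration.

```javascript
// Stand-in for an automated browser's navigator, where the
// webdriver getter on the prototype reports true.
const fakeNavigator = Object.create({
  get webdriver() { return true; },
});

console.log(fakeNavigator.webdriver); // true: detectable

// The patch: define an own property that shadows the prototype
// getter, so detection scripts read undefined, the same value a
// real user's Chrome reports.
Object.defineProperty(fakeNavigator, 'webdriver', {
  get: () => undefined,
  configurable: true,
});

console.log(fakeNavigator.webdriver); // undefined: tell hidden
```

The point of shipping this as a source-level Chromium patch rather than an injected script is that injected scripts are themselves detectable; the getter simply never exists in the patched build.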
What the stealth browser handles well
Most of the web. Specifically:
- Static content sites (blogs, docs, marketplaces, public data dashboards). 99%+ success.
- Lightly protected forms. Sign-up forms with Cloudflare's "Just a moment" challenge usually pass cleanly.
- Authenticated sessions you brought yourself. If you log in once and persist cookies, the stealth browser keeps the session alive and doesn't give itself away on subsequent requests.
- Sites that gate on user-agent strings or simple JS checks. The patches handle these without intervention.
If your agent's main job is "read this URL and summarize it" or "fill in this form on this site I trust," the stealth browser will not be your bottleneck.
What the stealth browser does NOT do
This is the section that matters more than the previous one.
- Solve interactive CAPTCHAs. hCaptcha Enterprise, reCAPTCHA v3 with low scores, Arkose FunCaptcha — when the site demands you click pictures of crosswalks, the stealth browser cannot click them for you. The agent will surface the puzzle back to the operator.
- Defeat fingerprint-correlation services. DataDome, PerimeterX (now HUMAN), Kasada — these correlate hundreds of subtle signals across multiple requests. We can win one or two requests, but a full session against an enterprise vendor will eventually get flagged.
- Bypass residential-IP gating. If the site blocks all data-center IPs, no amount of browser-side patching helps. You'd need to route through a residential-proxy service, which we don't currently provide.
- Solve audio CAPTCHAs. They're trivially solvable with a separate speech-to-text pipeline, but we don't ship that.
How CAPTCHA failures surface
When a CAPTCHA blocks the agent:
- The browser tool returns a structured error to the agent's tool-call loop with `kind: "captcha_blocked"` and a screenshot.
- The agent typically tells the user: "I hit a CAPTCHA on that site — here's a screenshot. You'll need to either solve it for me, or use a different source."
- The operator can manually solve it in a one-shot claim-the-browser flow if they're on a paid tier (the dashboard exposes a live browser handoff).
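The list above can be sketched as a handler in the tool-call loop. The `kind` and screenshot fields follow this page; the function name and return shape are illustrative assumptions, not OpenClaw's actual API.

```javascript
// Consume a browser-tool result and decide what the agent does next.
function handleBrowserResult(result) {
  if (result.kind === 'captcha_blocked') {
    // Surface the puzzle to the operator instead of retrying blindly.
    return {
      action: 'ask_user',
      message: 'I hit a CAPTCHA on that site. You can solve it for me, ' +
               'or point me at a different source.',
      attachment: result.screenshot,
    };
  }
  return { action: 'continue', data: result };
}

const blocked = handleBrowserResult({
  kind: 'captcha_blocked',
  screenshot: 'data:image/png;base64,...',
});
console.log(blocked.action); // ask_user
```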
This is the honest answer. We've found it builds more trust with users than promising a magic CAPTCHA bypass that fails 30% of the time on important sites.
Where it runs
On ShipClaw, the stealth browser runs inside the same isolated container as your agent. There is no shared cookie jar between users, no cross-tenant DOM state, and no chance another user's session bleeds into yours. The architecture detail lives in the pool-node glossary entry.
Tuning
Most users don't need to tune anything — the defaults are good. If you're scraping a specific high-protection site, you can raise the per-request timeout in the agent's openclaw.json to give the browser more time on slow JS challenges. We'd rather you contact us first: if a site keeps failing, we'll often add a targeted patch.
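As a sketch, a raised timeout in openclaw.json might look like the fragment below. The key names here are assumptions for illustration; this page only specifies the file, not its schema, so check your version's reference before copying.

```json
{
  "browser": {
    "requestTimeoutMs": 60000
  }
}
```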