OpenClaw (previously known as Moltbot and before that Clawdbot) is a self-hosted personal AI assistant that lives on your server and communicates through the messaging apps you already use: WhatsApp, Telegram, Discord, Slack, iMessage, and more than 20 others. You send it a message. It executes real actions: reads your email, runs shell commands, manages files, calls APIs, browses the web, and reports back. It uses whatever AI model you point it at: Claude, GPT, local Ollama, or a combination routed by cost and task type.
This guide is the complete reference: from understanding what OpenClaw is, through installation, channels, models, automation, integrations, security, and scaling.
Every section links to a dedicated tutorial when you need the full details.
Jump to what you need
- New to OpenClaw? Start at What OpenClaw is, then Quick start (5 minutes).
- Installing right now? Go straight to Quick start or pick a deployment option.
- Want automation? See Cron jobs, heartbeat and webhooks.
- Deploying for a team? Start at Security, secrets and privacy.
- Evaluating cost? See Choosing AI models and how much OpenClaw costs.
- Going deep? See Multi-agent setups, advanced memory, and CLI and config reference.
In this guide
- What OpenClaw actually is
- What people use OpenClaw for
- Quick start: running in 5 minutes
- System requirements
- Deployment options: VPS, Docker, local
- Connecting messaging channels
- Choosing and configuring AI models
- Memory and context management
- Automation: cron, heartbeat and webhooks
- Skills and integrations
- Multi-agent setups
- Security, secrets and privacy
- Configuration and the CLI
- Monitoring, backups and upgrades
- Frequently asked questions
What OpenClaw actually is
OpenClaw runs as what's called an agent runtime. The term sounds abstract, but the practical difference from a chatbot is real. A chatbot takes a prompt and returns a response; that's the whole loop. An agent runtime does something more involved: it decides which tools to run, executes them, uses the results to think further, and produces output grounded in actual state: your real files, your live calendar, your actual inbox. That's why you can ask it to "summarize my emails from this week and flag anything that needs a reply by Friday" and get something useful back, rather than a description of what that might look like.
The architecture has three components:
- Gateway: a long-running Node.js process that handles message routing, channel connections, scheduling, and tool execution.
- CLI (openclaw): how you configure and operate it from the terminal.
- Workspace: a folder of Markdown files that define the agent's behavior: SOUL.md for personality, AGENTS.md for capabilities and rules, MEMORY.md for curated persistent facts, HEARTBEAT.md for what to check during periodic background runs.
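Under those conventions, a workspace might look like this (an illustrative layout; the daily log filename is an example):

```
workspace/
├── SOUL.md        # personality and tone
├── AGENTS.md      # capabilities and rules
├── MEMORY.md      # curated persistent facts
├── HEARTBEAT.md   # what to check during background runs
└── memory/        # daily logs written during compaction
    └── 2026-02-14.md
```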
The flow when a message arrives:
You (Telegram / WhatsApp / Discord / Slack)
↓
OpenClaw Gateway (your server)
↓
Agent Runtime (builds context from session + workspace)
↓
AI Model (Claude / GPT / Ollama / etc.)
↓
Tool Execution (files / shell / web / APIs)
↓
Reply delivered back through the same channel
Nothing routes through OpenClaw's servers. Your messages, data, and credentials stay on your machine. The project started as Clawdbot, was briefly named Moltbot, and is now OpenClaw. All names refer to the same continuously developed project. If you're searching for Moltbot or Clawdbot guides, you're in the right place.
Further reading: What is OpenClaw and how it works | OpenClaw vs ChatGPT: Key differences | Why the name changed from Clawdbot to Moltbot to OpenClaw
What people use OpenClaw for
Once OpenClaw has been running for a few weeks, most people settle into one of these patterns. Some use it as a personal assistant; others build it into developer tooling or content workflows. Here's what shows up most in production setups:
Personal assistant
Daily email summaries, calendar management, reminders, web research, note-taking from voice, and a persistent memory of your preferences and ongoing projects, all through WhatsApp or Telegram. The key advantage over a cloud assistant: it remembers things across weeks, not just sessions.
Developer automation
PR review summaries triggered by GitHub webhooks, CI monitoring with alerts on failure, automated issue triage, code generation via terminal commands from Discord, system monitoring with alerts sent to your phone. Engineers running OpenClaw on a VPS report replacing several separate automation tools with a single configured agent.
Content and social media
Scheduled social media posts, content research via web scraping, RSS monitoring with daily briefings, draft review and publishing workflows. The cron scheduler handles timing; the agent handles the actual work.
Document and data processing
PDF summarization and extraction, file organization, spreadsheet processing, email thread summarization. OpenClaw runs locally so sensitive documents stay on your server.
Team and DevOps monitoring
Infrastructure monitoring alerts, log analysis, deployment status reports, on-call alerting through messaging apps. Multiple agents can cover different systems in parallel.
For integration-specific guides: Gmail and email automation | Google Calendar | GitHub PR reviews and CI | Web scraping | Social media scheduling | PDF workflows | File management
Quick start: running in 5 minutes
If you have a Linux server and want OpenClaw running immediately, this is the shortest path. For full detail on any step, the dedicated tutorials cover each one.
- Provision a server. Ubuntu 24.04, minimum 2 GB RAM. A LumaDock OpenClaw VPS comes with a pre-installed template (skip steps 2 and 3).
- Install Node.js 22+. OpenClaw requires Node 22 or later. Use your package manager or nvm.
- Install and onboard OpenClaw:
npm install -g openclaw@latest
openclaw onboard --install-daemon
The wizard configures your model provider, links your first channel, and installs the Gateway as a system service.
- Connect Telegram. Create a bot via @BotFather, copy the token, enter it when the wizard asks. Easiest first channel.
- Send your first message. Open Telegram, message your bot. A reply means everything is working end to end.
After that: Verify your first message is working | Full Ubuntu 25.04 install guide | systemd + Discord + free Qwen (zero API cost)
System requirements
OpenClaw is a Node.js application with no database dependency. It runs on Linux, macOS, and Windows, though Linux VPS is the most common production environment.
- Node.js: 22 or later (required). Earlier versions produce cryptic errors.
- RAM: 512 MB minimum to start; 2 GB recommended for a single-user setup with automation; 4 GB+ for multi-agent or heavy cron workloads.
- Storage: 1 GB for the install and config. Sessions, memory logs, and workspace files grow over time; 10-20 GB is comfortable for long-running production setups.
- Network: Outbound HTTPS to your AI provider's API. Inbound only needed if you use webhooks (port 18789, behind a reverse proxy).
- OS: Ubuntu 22.04+ or Debian 11+ recommended for VPS. macOS works for local use. Windows works but is less tested in production.
For local models via Ollama, add approximately 4-8 GB RAM per model depending on parameter count. A 7B model runs on 4-6 GB; a 14B model needs 8-12 GB. Cloud API models have no local RAM requirement beyond running the Gateway itself.
Deployment options: VPS, Docker, local
VPS (recommended for most people)
A VPS gives you 24/7 uptime, a stable IP for webhooks, and the ability to run cron jobs and heartbeats without depending on a laptop staying awake. It's the right choice for anyone using OpenClaw for real automation rather than occasional use.
A 2 GB instance handles most single-user setups. 4 GB is comfortable for multi-agent configurations or heavy automation workloads. LumaDock's OpenClaw VPS hosting ships with a pre-installed Ubuntu 24.04 template, and the Gateway is already running when you first SSH in. For manual setup on any VPS, always bind the Gateway to 127.0.0.1 (loopback) and put nginx or Caddy in front for HTTPS. Exposing port 18789 directly to the internet has caused credential leaks in the community.
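A minimal reverse-proxy sketch of that setup, assuming nginx and a hypothetical domain (the certificate paths and htpasswd file are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name assistant.example.com;  # hypothetical domain

    # TLS material is a placeholder; use certbot or your own CA
    ssl_certificate     /etc/letsencrypt/live/assistant.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/assistant.example.com/privkey.pem;

    location / {
        auth_basic "OpenClaw";                       # require credentials
        auth_basic_user_file /etc/nginx/.htpasswd;   # created with htpasswd
        proxy_pass http://127.0.0.1:18789;           # Gateway bound to loopback
    }
}
```

The important part is the last line: nginx terminates TLS and authentication, and only it can reach the Gateway, because the Gateway never listens on a public interface.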
How to host OpenClaw securely on a VPS | OpenClaw VPS hosting announcement
Docker and Kubernetes
Docker is right for reproducible deployments, containerized isolation, or agent sandboxing (running tool executions inside isolated sub-containers separate from your host). OpenClaw ships a docker-setup.sh script that builds the image, runs the onboarding wizard, and configures Docker Compose automatically. On Kubernetes, the key constraint is storage: you need ReadWriteMany persistent volumes (NFS, CephFS, or Longhorn) because all replicas must share the same config and workspace directories simultaneously. Standard block storage doesn't work for multi-replica setups.
Docker and Kubernetes deployment guide | HA setup and scaling beyond one instance
Local machine
Running on a laptop or home server is the fastest way to try OpenClaw. The trade-off is uptime: if the machine sleeps, the Gateway goes offline and scheduled tasks don't run. Fine for testing and development, not reliable for automation you depend on.
For zero-cost local use with local models, the combination of a home machine plus Ollama covers everything without any cloud API spend.
Run OpenClaw locally for free with Ollama
Connecting messaging channels
OpenClaw works by meeting you on channels you already use. You add a bot to your existing Telegram, Discord, WhatsApp, or Slack and message it there, with no new app to learn. All channels can run simultaneously from a single Gateway instance.
Telegram (easiest to start)
Create a bot via @BotFather, copy the token, add it to your config. The Telegram bot API is stable, supports groups and threads natively, and has fine-grained permission controls. Start here if you're undecided.
Connect to Telegram | BotFather: menus, privacy and groups
WhatsApp
Setup uses QR-scan device pairing (like WhatsApp Web). The linked number becomes the agent's WhatsApp identity. Best for mobile-first personal use. Production deployments need a dedicated number; there are real considerations around session stability and account bans at scale.
Connect to WhatsApp | WhatsApp production setup
Discord
Well-suited for developer workflows and team setups. Different channels and threads can route to different agents, so a #research channel can talk to a research agent while #code talks to a coding agent from the same Gateway.
Connect to Discord | Discord memory and persistent brain setup | Create Linux aliases from Discord
Slack
Connects as a Slack app and responds in channels or DMs. Common in professional and team environments. Requires careful OAuth scope management during app setup.
Integrate OpenClaw with Slack securely
SMS and iMessage
SMS via Twilio, iMessage via BlueBubbles (macOS), or the native iMessage bridge. Useful when you need a plain phone number as the interface.
Running multiple channels simultaneously
All channels run from a single Gateway. A personal WhatsApp bot, a team Discord bot, and a Telegram notification agent can all be live at the same time, routing to the same or different agents depending on your config.
Multi-channel setup across WhatsApp, Telegram, Discord and Slack
Choosing and configuring AI models
OpenClaw is model-agnostic. You configure which provider and model to use in openclaw.json, define fallbacks, and can route different agents or task types to different models. This decision has the largest single impact on quality and monthly cost.
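As a sketch, a primary-plus-fallbacks setup in openclaw.json might look like the following; the key and model names here are illustrative assumptions, not the exact schema — check the config reference for that:

```json5
{
  // Illustrative: primary model with ordered fallbacks
  model: {
    primary: "anthropic/claude-sonnet",
    fallbacks: [
      "openai/gpt-5",          // used if the primary errors or rate-limits
      "ollama/qwen2.5-coder",  // local last resort, zero API cost
    ],
  },
}
```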
Important note on subscription OAuth: Anthropic formally banned using Claude Free, Pro, and Max subscription OAuth tokens in any third-party tool or service, including OpenClaw, effective January 2026 and documented in February 2026. Their official position: "Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service is not permitted and constitutes a violation of the Consumer Terms of Service."
This means the "Login with Claude" flow no longer works. Anthropic clarified that existing accounts will not be cancelled, but the subscription OAuth path is blocked. The only supported way to use Claude in OpenClaw is via a pay-per-token API key from the Anthropic console.
OpenAI went the opposite direction and explicitly permits subscription OAuth in OpenClaw.
OpenAI (GPT and Codex)
GPT-5.x models via API key. If you have ChatGPT Plus or Pro, OpenClaw supports Codex OAuth: authenticate with your ChatGPT account and use your flat monthly subscription instead of per-token billing. OpenAI has publicly confirmed ChatGPT subscriptions can power OpenClaw.
Use OpenAI Codex with a ChatGPT subscription
Local models via Ollama (zero API cost)
Ollama runs open-source models locally. Qwen 2.5 Coder 7B and Llama 3.2 work well on 4-8 GB RAM at zero API cost. The practical strategy most experienced users land on: local models for heartbeats, embeddings, and routine cron checks, and cloud models for complex reasoning and user-facing replies.
Run OpenClaw free with Ollama | All free AI models for OpenClaw
Claude (Anthropic API key only)
Strong at complex multi-step instructions, natural writing, and following nuanced behavioral rules. Claude Sonnet is the practical balance of capability and cost. Haiku is cheap enough for heartbeats and routine checks. You must use an API key from the Anthropic console, not a Claude.ai subscription login. Subscription OAuth is blocked as noted above.
Claude vs OpenAI: which model to choose
OpenRouter and LiteLLM proxy
OpenRouter provides 100+ models via one API key with automatic failover. LiteLLM proxy adds prompt caching (20-50% cost reduction), rate limiting, and routing between providers. Both are valuable when running multiple agents or needing automatic fallback when a provider hits rate limits.
API proxy setup | Reduce API costs by 90%
Memory and context management
Memory is one of OpenClaw's most important differentiators. The agent remembers things across sessions, not just within one conversation.
How the memory system works
MEMORY.md holds curated facts loaded at every session start. memory/YYYY-MM-DD.md files are daily logs the agent writes automatically during compaction. SOUL.md defines personality and behavioral rules. Together they give the agent genuine long-term memory.
When a conversation approaches the model's context window limit, OpenClaw compacts it: the agent first writes important notes to the day's memory log, then summarizes the conversation history and continues with the summary. Context doesn't "fill up and stop"; it degrades gracefully while preserving what matters. Context pruning separately trims old tool results before each LLM call, preventing session bloat from large tool outputs.
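The compact-then-continue idea can be sketched in a few lines of Python. This is a simplified illustration, not OpenClaw's actual implementation; summarize here is a stand-in for a real model call:

```python
# Illustrative sketch of session compaction: when the history grows past a
# limit, the oldest messages are replaced by a single summary message and
# the most recent turns are kept verbatim.

def summarize(messages):
    # Placeholder: a real implementation would call the model here
    # (and, per the text above, first write notes to the daily memory log).
    return "Summary of %d earlier messages" % len(messages)

def compact(session, limit=10, keep_recent=4):
    """Return the session unchanged if it fits, else summary + recent turns."""
    if len(session) <= limit:
        return session
    old, recent = session[:-keep_recent], session[-keep_recent:]
    return [{"role": "system", "content": summarize(old)}] + recent

session = [{"role": "user", "content": f"msg {i}"} for i in range(12)]
compacted = compact(session)
print(len(compacted))  # 5: one summary message plus the 4 most recent turns
```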
How OpenClaw memory works and how to control it
Advanced memory backends
Three additional backends go beyond file-based memory: QMD (hybrid BM25 + vector retrieval with reranking), Cognee (knowledge graphs for entity-relationship queries), and mem0 (automatic fact extraction from conversations). Meaningful improvement for large knowledge bases or multi-agent setups.
Advanced memory: QMD, graphs, mem0
Automation: cron, heartbeat and webhooks
Talking to an agent reactively works fine for one-off tasks. The real value shows up when it's running things on its own schedule, monitoring conditions in the background, and reaching out to you when something actually needs attention. That's what the automation layer is for.
Cron jobs
Define a schedule with a standard cron expression, a message or system prompt, and an optional isolated session. The Gateway runs the job, the agent does the work, and you get a report in your channel. Common production examples: morning email and calendar brief, weekly cost digest, automated PR review summary after each CI run, nightly backup verification.
openclaw cron add \
--name "morning-brief" \
--cron "0 8 * * *" \
--message "Check my emails and calendar. Summarize what needs attention today."
Cron scheduler guide for proactive automations
Heartbeat
The heartbeat runs a periodic background check every 30 minutes by default. The agent reads HEARTBEAT.md, checks for anything that needs attention, and only messages you if it finds something. If nothing needs attention, it replies HEARTBEAT_OK internally and stays silent. This is the pattern for a proactive assistant that doesn't spam you.
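A HEARTBEAT.md can be a short checklist; the contents below are an example, not a required format:

```markdown
# Heartbeat checks
- Any unread email flagged important or urgent?
- Any calendar event starting in the next 2 hours I haven't been reminded about?
- Did last night's backup job report success?

If nothing needs attention, reply HEARTBEAT_OK.
```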
Heartbeat vs cron: Which to use and when
Webhooks
Any external system that can send an HTTP POST can trigger an OpenClaw agent turn: GitHub, Stripe, PagerDuty, your own application. This is how you connect OpenClaw to the rest of your stack without polling.
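If you expose a webhook endpoint, verify that each request really comes from the sender before letting it trigger an agent turn. GitHub-style webhooks do this with an HMAC signature header; a minimal Python sketch of the check (the secret and payload are made up, and OpenClaw's own webhook auth may differ):

```python
import hashlib
import hmac

def verify_signature(secret: bytes, payload: bytes, signature_header: str) -> bool:
    """Check a GitHub-style X-Hub-Signature-256 header against the payload."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature_header)

secret = b"example-webhook-secret"          # example value only
payload = b'{"action": "opened"}'           # example delivery body
header = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
print(verify_signature(secret, payload, header))  # True
```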
OpenClaw webhooks: Complete guide
Concurrency and queue management
When inbound messages, cron jobs, heartbeats, and webhook-triggered runs overlap, OpenClaw serializes work through a lane-based FIFO queue. Each session has its own lane. Separate global lanes for main, cron, and subagent traffic mean background jobs never block interactive replies.
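The lane idea can be modeled as one FIFO queue per lane, with each scheduling pass taking at most one job per lane so a busy lane can't starve the others. This is an illustrative model, not OpenClaw's implementation:

```python
from collections import defaultdict, deque

class LaneQueue:
    """One FIFO queue per lane; fair round-robin draining across lanes."""

    def __init__(self):
        self.lanes = defaultdict(deque)

    def enqueue(self, lane, job):
        self.lanes[lane].append(job)

    def drain_round(self):
        """Take at most one job from each non-empty lane, FIFO within a lane."""
        batch = []
        for lane, q in list(self.lanes.items()):
            if q:
                batch.append((lane, q.popleft()))
        return batch

q = LaneQueue()
q.enqueue("main", "reply-to-user")
q.enqueue("cron", "morning-brief")
q.enqueue("cron", "cost-digest")
print(q.drain_round())  # [('main', 'reply-to-user'), ('cron', 'morning-brief')]
```

Because each pass takes one job per lane, the second cron job waits for the next round, while anything new in the main lane is picked up immediately — the background work never blocks interactive replies.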
Concurrency, retry policies and queue configuration
Skills and integrations
Skills extend OpenClaw's capabilities through structured Markdown files that get injected into the agent's context, giving it access to new tools, APIs, and workflows. Install from the community registry or write your own.
Installing and building skills
Skills guide: Install, build and avoid risky bundles | Build custom OpenClaw skills | Custom API integration guide
Productivity integrations
Gmail and email automation | Google Calendar integration
Developer integrations
GitHub: PR reviews and CI monitoring | Web scraping and data extraction | Social media scheduling
Files and documents
File management automation | PDF summarization and extraction
Voice
Text-to-speech via ElevenLabs and other providers, speech-to-text via Whisper, voice note transcription on Telegram and WhatsApp, and a dedicated mobile Talk Mode app for iOS and Android.
Add voice with TTS, STT and Talk Mode
Multi-agent setups
A single well-configured agent covers most personal use cases. Multi-agent setups earn their complexity in specific scenarios: you need security isolation between agents (a public Discord bot with read-only tools vs a personal assistant with shell access), domain specialization running in parallel (a coding agent and a research agent on the same complex task), or different channels routing to different specialized agents.
OpenClaw's multi-agent model uses a coordinator-specialist pattern. The coordinator owns shared state and task lists. Specialists are stateless; they receive a task, execute it, and return results without accumulating conversation history. The Gateway's lane system keeps their concurrent executions isolated.
The honest caveat: multi-agent setups multiply token costs (roughly 3-4x for a coordinator with two active specialists), add debugging complexity, and introduce failure modes that don't exist in single-agent setups. Start with one agent and move to multiple only when you have a specific reason.
Multi-agent setup guide | Coordination patterns, governance and loop prevention
Security, secrets and privacy
OpenClaw has access to your files, shell, APIs, and messaging accounts. That is a large attack surface and it's worth being deliberate about from day one.
How secure is OpenClaw?
OpenClaw is secure when configured correctly. The core risks come from misconfiguration: exposing the Gateway port publicly without authentication, hardcoding credentials in config files, and giving agents broader tool permissions than they need. The defaults have improved over releases, but you still need to take active steps. The non-negotiable rules: bind to loopback, never a public IP. Use a reverse proxy with TLS and auth for remote access. Use tool allowlists to limit each agent to exactly what it needs. Enable DM pairing to prevent unknown senders from accessing your agent.
OpenClaw security best practices
Secrets management
API keys and bot tokens should never appear in plain text in openclaw.json. OpenClaw's SecretRef system references credentials from Docker secrets, HashiCorp Vault, AWS Secrets Manager, 1Password CLI, or Bitwarden. Values are resolved at runtime and never written into readable config files. At minimum, keep credentials in ~/.openclaw/.env and reference them with ${VAR_NAME} in config.
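The ${VAR_NAME} style of reference amounts to load-time interpolation from the environment; a Python sketch of the idea (illustrative only — the real SecretRef system resolves from multiple backends, not just environment variables):

```python
import os
import re

def resolve_refs(value: str, env=os.environ) -> str:
    """Replace ${VAR_NAME} placeholders with values from the environment."""
    def repl(match):
        name = match.group(1)
        if name not in env:
            raise KeyError(f"missing secret: {name}")
        return env[name]
    return re.sub(r"\$\{([A-Z0-9_]+)\}", repl, value)

os.environ["TELEGRAM_BOT_TOKEN"] = "123456:example-token"  # example value only
print(resolve_refs("token = ${TELEGRAM_BOT_TOKEN}"))
# token = 123456:example-token
```

The useful property is that the resolved value exists only in memory at runtime; the config file on disk never contains the secret itself.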
Keep your API keys safe: secrets management guide
Privacy and compliance
Self-hosting gives you data control but doesn't automatically satisfy legal obligations. For multi-user setups or anything touching personal information, you need session isolation per user (dmScope: "per-channel-peer"), a data retention policy, and clear agreements with your LLM providers. GDPR and HIPAA each require specific configuration; neither works out of the box.
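Per-peer isolation amounts to deriving a distinct session key for each (channel, sender) pair, so one user's conversation and memory never bleed into another's. A sketch, with the key format being an assumption made for illustration:

```python
def session_key(dm_scope: str, channel: str, peer: str) -> str:
    """Derive the session identifier for an inbound DM."""
    if dm_scope == "per-channel-peer":
        # Each sender on each channel gets an isolated session
        return f"{channel}:{peer}"
    # Otherwise all DMs share one session (fine for single-user setups)
    return "shared"

print(session_key("per-channel-peer", "telegram", "alice"))  # telegram:alice
print(session_key("shared", "telegram", "alice"))            # shared
```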
GDPR, HIPAA and compliance guide
Configuration and the CLI
Everything in OpenClaw is configured through ~/.openclaw/openclaw.json, a JSON5 file (comments and trailing commas allowed) with strict schema validation: there is no lenient mode, and unknown keys cause the Gateway to refuse to start. Safe editing workflow: back up first, make your change, then run openclaw doctor --fix to validate the schema and catch issues (note that strict JSON validators like python3 -m json.tool will reject JSON5 comments). The Gateway hot-reloads most changes without a restart; port and bind address changes require one.
The CLI has over 100 subcommands. The ones you'll reach for constantly:
- openclaw doctor --fix: health check and auto-repair. Run after every config change and every upgrade.
- openclaw gateway restart: restart the Gateway after changes that don't hot-reload.
- openclaw config get/set <key>: read and write config values safely without touching the file directly.
- openclaw channels status --probe: check channel health and connectivity.
- openclaw models status: verify which model is active and auth token status.
- openclaw cron list: see all scheduled jobs and their last run status.
Full CLI and config file reference
Monitoring, backups and upgrades
Monitoring
OpenClaw exports Prometheus metrics (queue depth, run duration, retry counts, token usage per agent) and OTEL traces for distributed tracing of multi-agent workflows. Pair this with uptime monitoring on port 18789, log aggregation, and alerts on queue depth and error rates for a complete production observability stack.
Monitoring: uptime, logs, metrics and alerts
Backups
All state lives in ~/.openclaw/: config, credentials, session files, memory logs, cron job definitions. Back it up before every upgrade and on a weekly schedule at minimum. Encrypt archives that contain credentials. The backup guide covers what to include, how to encrypt, how to restore to a new machine, and how to verify a backup actually works.
Backup guide: data, settings and memory
Upgrading
OpenClaw releases frequently and the config schema can change between versions; keys get renamed, new required fields appear. Always back up before upgrading. After upgrading, run openclaw doctor --fix to remove stale keys and add any new required fields. Silent schema drift (settings that appear saved but are silently ignored because they were renamed) is the most common post-upgrade failure mode.
How to upgrade OpenClaw safely | Troubleshooting common errors
Frequently asked questions
Is OpenClaw the same thing as Moltbot or Clawdbot?
Yes. OpenClaw, Moltbot, and Clawdbot are all the same project at different points in its history. It launched as Clawdbot, was rebranded to Moltbot, and is now OpenClaw. The codebase is continuous; only the name changed. If you're following guides that reference Moltbot or Clawdbot, they apply to OpenClaw. Read the full rebrand history.
How much will OpenClaw cost me?
OpenClaw itself is free and open source. The costs come from three places: your AI model provider (Claude API, OpenAI API, or free local models via Ollama), your server (a VPS starting at a few dollars per month, or a local machine you already own), and optionally third-party API keys for integrations like Google Calendar or GitHub. A typical personal setup using Claude Sonnet on a $5-10/month VPS costs $10-30/month total depending on usage. Using free local models via Ollama brings cloud API costs to zero.
See the cost optimization guide for strategies to reduce spend by 80-90%.
Do I need a VPS to run OpenClaw?
No. OpenClaw runs on your laptop, a home server, a VPS, or in Docker. A VPS is recommended for production use because it provides 24/7 uptime; if your machine sleeps, the Gateway goes offline and scheduled tasks don't run. For personal use without automation, a local machine works fine.
Can I run OpenClaw locally without a cloud API?
Yes. Combine OpenClaw with Ollama and run entirely locally at zero API cost. Models like Qwen 2.5 Coder 7B and Llama 3.2 run on 4-8 GB RAM. Quality is lower than frontier cloud models, but for many automation tasks the difference is acceptable.
Run OpenClaw free with Ollama.
Which AI models can I use with OpenClaw?
Any model accessible via an OpenAI-compatible API works, which covers most of the ecosystem. Native integrations exist for Anthropic (Claude), OpenAI (GPT and Codex), Ollama (local models), and OpenRouter (100+ models via one key). You configure the primary model, fallbacks, and per-agent or per-task model routing in openclaw.json.
Model comparison guide | Free model options.
Which channel should I connect to OpenClaw first?
Telegram. It's the simplest setup (one bot token, stable API, no phone number linking) and the most popular in the OpenClaw community. WhatsApp is better if you want a mobile-first experience and don't mind the QR-scan pairing process. Discord is the right choice for team or developer workflows.
How secure is OpenClaw?
It's secure when configured correctly. The main risks are misconfiguration: exposing the Gateway without authentication, hardcoding credentials, or giving agents broader permissions than needed. The mandatory steps are binding to loopback (not a public IP), using a reverse proxy with TLS for remote access, and using tool allowlists.
Detailed guide: Security best practices.
How much RAM do I need for OpenClaw?
512 MB minimum to start. 2 GB recommended for a single-user setup with automation. 4 GB+ for multi-agent or heavy cron workloads. If you're running local models via Ollama, add 4-8 GB per model on top of that. The Gateway itself is lightweight; the RAM requirement scales with how many things you run simultaneously.
How does OpenClaw compare to ChatGPT or Claude.ai?
The biggest difference is where it runs: on your own server, so your data never leaves your infrastructure. On top of that, it talks to you through messaging apps you're already on (WhatsApp, Telegram, Discord) so there's no separate interface to open. And it can take real actions like running code, managing files, and calling APIs rather than just generating text responses.
Full comparison: OpenClaw vs ChatGPT.
Can I use my ChatGPT Plus subscription with OpenClaw?
Yes. OpenClaw supports Codex OAuth, which authenticates against your ChatGPT account and uses your flat monthly subscription rather than per-token API billing. OpenAI has publicly confirmed this use case is permitted.
How do I troubleshoot OpenClaw if it stops responding?
Start with openclaw doctor (catches most config issues), then openclaw channels status --probe (verifies channel connectivity), then openclaw logs --follow (shows live Gateway activity). The most common causes of silent failures are: model provider rate limits, invalid or expired API keys, channel pairing not approved, and config changes that didn't hot-reload.
Full guide: Troubleshooting common errors.

