Hermes ships a built-in dashboard for inspecting sessions, but the Hermes CLI is still the main way you talk to the agent. The community-maintained Hermes WebUI (github.com/nesquena/hermes-webui, MIT-licensed, not affiliated with Nous Research) fills the gap with a browser-based chat interface that talks to your existing Hermes install. Open it on your phone, your tablet, your other laptop; you get a chat panel, a session list and a workspace file browser, all served over HTTPS from your VPS.
This guide covers the install, the security wiring you absolutely should not skip, and the pattern for deploying it alongside an existing Hermes setup without disturbing either side.
What WebUI is and isn't
Hermes WebUI is a separate Python web application that connects to your Hermes install. It mounts the same ~/.hermes/ data directory, calls into Hermes's internal APIs and renders chat sessions, skills and memories in a browser interface.
It's not an official Nous Research project. The README says so explicitly. It's maintained by a community contributor and updated frequently; check the GitHub releases for the version you're running.
What it gives you: a comfortable chat interface accessible from any device with a browser, the ability to run multiple parallel sessions in tabs, a workspace file panel for inspecting what the agent is working on, streaming responses with rendered markdown.
What it doesn't give you: anything the agent can't do via the CLI. WebUI is a different surface on the same Hermes brain. It doesn't add new agent capabilities, just makes existing capabilities easier to use from a phone.
Install with Docker
Easiest path. The maintainer publishes a Docker image at ghcr.io/nesquena/hermes-webui:latest. Run it alongside your existing Hermes install with this Compose snippet:
services:
  hermes-webui:
    image: ghcr.io/nesquena/hermes-webui:latest
    container_name: hermes-webui
    restart: unless-stopped
    volumes:
      - /home/youruser/.hermes:/root/.hermes
      - hermes-webui-state:/var/lib/hermes-webui
    environment:
      HERMES_HOME: /root/.hermes
      HERMES_WEBUI_STATE_DIR: /var/lib/hermes-webui
      HERMES_WEBUI_AUTH_MODE: basic
      HERMES_WEBUI_BIND_HOST: 127.0.0.1
      HERMES_WEBUI_BIND_PORT: 8780
    ports:
      - "127.0.0.1:8780:8780"
    mem_limit: 1g

volumes:
  hermes-webui-state:
Two things worth noting.
The volume mount of ~/.hermes from the host is read-write. WebUI writes session data back to state.db; for that to work, the container needs write access. Pair this with the database locking guide's tuning, since now you have one more potential writer to state.db (the gateway, the CLI, plus WebUI).
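If state.db is a standard SQLite database (which is what the locking guide's tuning assumes), the usual mitigation for multiple writers is WAL journaling plus a busy timeout, so a writer waits for the lock instead of failing immediately. A minimal Python sketch of those two pragmas; the path here is illustrative, not your real state.db:

```python
import os
import sqlite3
import tempfile

# Illustrative path; in practice this would be ~/.hermes/state.db.
db_path = os.path.join(tempfile.mkdtemp(), "state.db")

conn = sqlite3.connect(db_path)
# WAL mode lets one writer proceed while readers keep reading.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
# Wait up to 5 seconds for a competing writer's lock to clear.
conn.execute("PRAGMA busy_timeout=5000")
conn.close()
print(mode)
```

The pragmas are per-connection (busy_timeout) and per-database (journal_mode), so every writer — gateway, CLI, WebUI — benefits once WAL is set.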
The bind to 127.0.0.1:8780 is intentional. WebUI exposes a chat interface; if you bind it to 0.0.0.0, anyone on the internet who finds your IP can chat with your agent. We're going to put it behind HTTPS with auth in the next section. Don't skip that part.
Bring it up:
cd /opt/hermes
docker compose up -d hermes-webui
docker compose logs -f hermes-webui
The logs show the WebUI starting and connecting to your Hermes install. If it can't connect (the Hermes process isn't running, the data dir isn't readable), the WebUI logs the error and exits.
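If you want Compose to surface a hung container rather than just an exited one, you can bolt on a healthcheck. This fragment is a sketch under two assumptions worth verifying against the image you're running: that curl exists inside the container, and that the WebUI answers plain HTTP on its bind port at /:

```yaml
    healthcheck:
      # Assumes the image ships curl and / returns a success status.
      test: ["CMD", "curl", "-fsS", "http://127.0.0.1:8780/"]
      interval: 30s
      timeout: 5s
      retries: 3
```

With this in place, docker ps shows the container as healthy or unhealthy instead of just up.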
HTTPS and authentication
Add the WebUI as a backend to the Nginx config from the HTTPS guide. Put it on a separate hostname so it has its own cert and its own basic-auth file:
upstream hermes_webui {
    server 127.0.0.1:8780;
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name hermes-webui.example.com;

    ssl_certificate /etc/letsencrypt/live/hermes-webui.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/hermes-webui.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;

    auth_basic "Hermes WebUI";
    auth_basic_user_file /etc/nginx/.htpasswd-webui;

    location / {
        proxy_pass http://hermes_webui;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_read_timeout 3600s;
        proxy_send_timeout 3600s;
        proxy_buffering off;
    }
}
Get the cert from Let's Encrypt:
sudo certbot certonly --webroot -w /var/www/letsencrypt -d hermes-webui.example.com
Set up the basic-auth credentials:
sudo htpasswd -c /etc/nginx/.htpasswd-webui you
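Worth remembering why TLS comes first: basic auth sends the credentials on every request as a base64-encoded header, which is encoding, not encryption. A small Python sketch of what actually crosses the wire (the credentials here are made up):

```python
import base64

# Hypothetical credentials; base64 is trivially reversible.
user, password = "you", "secret"
token = base64.b64encode(f"{user}:{password}".encode()).decode()
header = f"Authorization: Basic {token}"
print(header)
```

Anyone who can read the traffic can decode that token, which is why the Nginx config terminates TLS before auth_basic ever sees a request.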
Reload Nginx:
sudo nginx -t && sudo systemctl reload nginx
Visit https://hermes-webui.example.com, get prompted for credentials, log in. You should see the WebUI's chat interface.
Session model
WebUI sessions are stored separately from CLI sessions. When you open WebUI for the first time, you start with an empty chat panel and a fresh session. Past CLI conversations are visible via the session search panel but aren't loaded as the current session.
The agent's memory (SOUL.md, MEMORY.md, USER.md) is the same regardless of which surface you use, so the agent still remembers your projects and preferences. What's different is the per-session conversation history; each WebUI session is its own thread, like opening a new chat in any other UI.
Sessions persist across browser restarts because WebUI stores its session list in the mounted state volume. Closing the tab doesn't lose anything; reopening WebUI shows your session list, and you can resume any session.
Multi-user setups
WebUI's basic-auth model is one credential, one user. For real multi-user setups, the auth needs to be more sophisticated.
Recent versions of WebUI support OIDC and OAuth2. Configure it with:
environment:
  HERMES_WEBUI_AUTH_MODE: oidc
  HERMES_WEBUI_OIDC_ISSUER: https://accounts.google.com
  HERMES_WEBUI_OIDC_CLIENT_ID: your-client-id
  HERMES_WEBUI_OIDC_CLIENT_SECRET: your-client-secret
  HERMES_WEBUI_OIDC_ALLOWED_EMAILS: "[email protected],[email protected]"
Each authenticated user gets their own session list and (if configured) their own per-user MEMORY.md. The shared MEMORY.md still applies to global facts; per-user memories segregate by authenticated identity.
For most personal setups, basic auth is enough. OIDC is for team installs where you want individual identities tied to your existing identity provider.
Keeping WebUI updated
WebUI ships frequently. Pull and restart on a schedule that matches your tolerance for surprise:
cd /opt/hermes
docker compose pull hermes-webui
docker compose up -d hermes-webui
docker compose logs -f hermes-webui
For automatic updates, use Watchtower (covered in the Compose article) or pin to a specific tag (:v0.50.0 instead of :latest) and bump deliberately when you're ready.
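Pinning is a one-line change in the Compose file; the tag below reuses the v0.50.0 example, so substitute whichever release you've actually tested:

```yaml
services:
  hermes-webui:
    # Pin to a tested release; bump this line deliberately.
    image: ghcr.io/nesquena/hermes-webui:v0.50.0
```

After bumping the tag, docker compose up -d hermes-webui pulls and recreates the container in one step.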
Test new versions on a staging instance first if WebUI is something you depend on for daily work. The maintainer is responsive on GitHub but small bugs do land sometimes; staging catches them before they bite.
Resource sizing
WebUI on its own is light. The Python process plus a small in-memory cache sits around 200 to 400 MB resident. Active WebSocket sessions add a small amount of memory per concurrent user.
The 1 GB mem_limit in the Compose snippet is generous for a personal install; tighten to 512 MB if you're tight on RAM. CPU is rarely the constraint; WebUI is mostly waiting on the agent.
Storage in the WebUI state volume is small (a few MB for session metadata) regardless of how heavy your usage gets. The actual conversation content lives in Hermes's state.db, not in the WebUI's own state.
Troubleshooting
Three issues come up repeatedly.
"WebUI shows the page but no sessions, no skills, nothing" usually means the Hermes data dir isn't reachable from inside the container. Check the volume mount and the file permissions; docker compose exec hermes-webui ls -la /root/.hermes tells you. If it's empty or unreadable, fix the mount.
"The chat panel never streams; messages just appear all at once after a delay" is the WebSocket-not-upgraded issue. Check Nginx's WebSocket headers (per the HTTPS guide) and confirm the Upgrade and Connection "upgrade" headers are getting forwarded.
"Browser tool calls don't work in WebUI" is a known limitation in some WebUI versions; the /browser connect command requires CLI features that haven't all migrated to WebUI yet. The workaround is to invoke browser-using skills via the CLI; subsequent sessions in WebUI can read the results.
WebUI vs the built-in dashboard
Hermes ships a built-in dashboard on port 9119 that's a different thing. The dashboard is more of a config and inspection surface, with editors for your config, keys, skills, cron jobs and session logs. The community WebUI is a chat interface that lets you actively talk to the agent. Our Hermes Agent VPS setup guide walks through the built-in dashboard's SSH-tunnel access pattern if you haven't tried it yet.
Run both. They don't conflict (different ports, different volumes). The built-in dashboard is good for "what is the agent doing right now and where do I tweak it"; WebUI is good for "I want to chat with the agent without opening a terminal".
The OpenClaw equivalent
OpenClaw has its own web interface story; the patterns are similar but not directly transferable. The OpenClaw multi-channel guide covers the OpenClaw side; if you've migrated from OpenClaw to Hermes via hermes claw migrate, you'd add the WebUI from this article rather than carry across an OpenClaw web setup.
The 1-click route
The LumaDock Hermes Agent VPS template ships with Docker pre-installed and the Hermes data dir set up, so you can drop in the WebUI Compose snippet and have it running in a minute. Fast enough that you can be chatting with the agent over HTTPS before your coffee gets cold.

