
Docker Deployment

EdgeCrab ships a multi-stage Docker image that runs the HTTP gateway (`edgecrab-gateway`). This is the recommended deployment method for team use, CI/CD pipelines, and server environments.


```sh
docker pull ghcr.io/raphaelmansuy/edgecrab:latest
docker run --rm -it \
  -p 8642:8642 \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  -v ~/.edgecrab:/root/.edgecrab \
  ghcr.io/raphaelmansuy/edgecrab:latest
```

Visit http://localhost:8642/health to verify the gateway is running.


The repository includes a `docker-compose.yml`:

```yaml
version: '3.9'
services:
  edgecrab:
    image: ghcr.io/raphaelmansuy/edgecrab:latest
    restart: unless-stopped
    ports:
      - "8642:8642"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - EDGECRAB_PROVIDER=openai
      - EDGECRAB_MODEL=gpt-4o
    volumes:
      - edgecrab-data:/root/.edgecrab
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8642/health"]
      interval: 30s
      timeout: 10s
      retries: 3
volumes:
  edgecrab-data:
```

```sh
# Start
docker compose up -d

# View logs
docker compose logs -f edgecrab

# Stop
docker compose down
```

```sh
git clone https://github.com/raphaelmansuy/edgecrab
cd edgecrab
docker build -t edgecrab:local .
docker run -p 8642:8642 -e OPENAI_API_KEY="$OPENAI_API_KEY" edgecrab:local
```

The Dockerfile uses a multi-stage build:

  1. Builder stage: `rust:1.85-slim` — compiles all crates
  2. Runtime stage: `debian:bookworm-slim` — adds only the binary and runtime libs (~25 MB final image)
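
The steps above can be sketched as a Dockerfile. This is an illustrative sketch, not the repository's actual Dockerfile: the workdir, build flags, and binary path are assumptions (only the base images and the `edgecrab-gateway` binary name come from this page; `curl` is included because the compose healthcheck uses it).

```dockerfile
# Builder stage: compile all crates with the Rust toolchain
FROM rust:1.85-slim AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# Runtime stage: only the binary and the runtime libraries it needs
FROM debian:bookworm-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates curl \
    && rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/edgecrab-gateway /usr/local/bin/
EXPOSE 8642
CMD ["edgecrab-gateway"]
```

Copying only the compiled binary out of the builder stage is what keeps the final image around 25 MB instead of the multi-gigabyte Rust toolchain image.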

All EdgeCrab configuration can be driven by environment variables inside Docker:

| Variable | Description |
| --- | --- |
| `EDGECRAB_PROVIDER` | Active LLM provider (e.g. `openai`) |
| `EDGECRAB_MODEL` | Model name (e.g. `gpt-4o`) |
| `EDGECRAB_LOG_LEVEL` | Log verbosity: `trace`/`debug`/`info`/`warn`/`error` |
| `OPENAI_API_KEY` | OpenAI API key |
| `ANTHROPIC_API_KEY` | Anthropic API key |
| `GITHUB_TOKEN` | GitHub Copilot token |
| `GOOGLE_API_KEY` | Google Gemini API key |
| `XAI_API_KEY` | xAI Grok API key |
| `DEEPSEEK_API_KEY` | DeepSeek API key |
| `HUGGING_FACE_HUB_TOKEN` | Hugging Face API token |
| `ZAI_API_KEY` | Z.AI API key |

Pass them via `--env-file`:

```sh
docker run --env-file ~/.edgecrab/.env -p 8642:8642 ghcr.io/raphaelmansuy/edgecrab:latest
```
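
A typical `~/.edgecrab/.env` might look like the following. The key values here are placeholders; use whichever variables from the table above apply to your setup.

```sh
# ~/.edgecrab/.env: one KEY=value per line, no quoting or export needed
OPENAI_API_KEY=sk-your-key-here
EDGECRAB_PROVIDER=openai
EDGECRAB_MODEL=gpt-4o
EDGECRAB_LOG_LEVEL=warn
```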

| Container path | Purpose |
| --- | --- |
| `/root/.edgecrab` | All state: config, memories, skills, SQLite DB |

Mount this to a named volume or host path to persist data across container restarts:

```sh
docker run \
  -v /data/edgecrab:/root/.edgecrab \
  -p 8642:8642 \
  ghcr.io/raphaelmansuy/edgecrab:latest
```

The `edgecrab-gateway` exposes an OpenAI-compatible HTTP API:

| Endpoint | Method | Description |
| --- | --- | --- |
| `/health` | GET | Health check |
| `/v1/chat/completions` | POST | OpenAI-compatible chat |
| `/v1/models` | GET | List available models |

This means any tool that supports the OpenAI API can connect to EdgeCrab as a backend.
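
Concretely, a client only needs to target the gateway's base URL with the standard OpenAI wire format. The helper below is an illustrative sketch (the function name and returned dict shape are this example's, not part of EdgeCrab); it makes explicit the request any OpenAI-compatible client would send.

```python
import json

# Base URL of a locally running EdgeCrab gateway (see the docker run above).
EDGECRAB_BASE_URL = "http://localhost:8642"

def build_chat_request(model, messages, base_url=EDGECRAB_BASE_URL):
    """Assemble an OpenAI-compatible chat completion request for the gateway.

    An off-the-shelf OpenAI client achieves the same thing by pointing its
    base URL at the gateway; this helper just spells out the wire format.
    """
    return {
        "url": f"{base_url}/v1/chat/completions",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"model": model, "messages": messages}),
    }

req = build_chat_request("gpt-4o", [{"role": "user", "content": "Hello"}])
print(req["url"])  # http://localhost:8642/v1/chat/completions
```

Since only the base URL differs from a call to OpenAI itself, existing tooling needs no code changes, just reconfiguration.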


For one-off interactive sessions inside a container:

```sh
docker run --rm -it \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  -v ~/.edgecrab:/root/.edgecrab \
  ghcr.io/raphaelmansuy/edgecrab:latest \
  edgecrab
```

  • Mount `/root/.edgecrab` to a persistent volume
  • Pass API keys via `--env-file` (never bake them into the image)
  • Set `restart: unless-stopped` or `restart: always`
  • Expose port 8642 only on `127.0.0.1` if behind a reverse proxy
  • Add a health check (`/health`)
  • Set `EDGECRAB_LOG_LEVEL=warn` to reduce log noise in production
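
For example, the loopback-only port binding from the checklist above looks like this in compose (the binding syntax is standard Docker; the rest of the service definition is as shown earlier):

```yaml
services:
  edgecrab:
    image: ghcr.io/raphaelmansuy/edgecrab:latest
    ports:
      - "127.0.0.1:8642:8642"   # reachable only from the host and a local reverse proxy
```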

Use `--env-file`, never `--env`. Passing secrets via `--env VAR=value` exposes them in your shell history, host process listings, and `docker inspect` output. `--env-file ~/.edgecrab/.env` keeps them off the command line.

Check memory usage. EdgeCrab is lightweight (~15 MB resident), so you can run multiple instances on a single host without resource pressure. Set a memory limit anyway for hygiene:

```yaml
services:
  edgecrab:
    deploy:
      resources:
        limits:
          memory: 256M
```

Run `edgecrab doctor` inside the container after first deploy:

```sh
docker compose exec edgecrab edgecrab doctor
```

This confirms API keys are visible and the provider ping succeeds.


Q: The container starts but the gateway doesn’t receive messages.

Check that:

  1. The platform tokens are set in the env file
  2. Port 8642 is exposed and not blocked by firewall
  3. `edgecrab gateway status` (run inside the container) shows the platform as active

Q: Can I run EdgeCrab in Kubernetes?

Yes. Use a Deployment with one replica, a PersistentVolumeClaim for `/root/.edgecrab`, and a Secret for the API keys. EdgeCrab has no clustering or leader-election requirements — it’s a stateful single-process app.
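
A minimal sketch of such a Deployment follows. The object names (`edgecrab`, `edgecrab-keys`, `edgecrab-data`) are illustrative assumptions, not shipped manifests; the image, port, and mount path come from this page. The Secret and PersistentVolumeClaim must be created separately.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edgecrab
spec:
  replicas: 1                      # single-process app: keep exactly one replica
  selector:
    matchLabels: { app: edgecrab }
  template:
    metadata:
      labels: { app: edgecrab }
    spec:
      containers:
        - name: edgecrab
          image: ghcr.io/raphaelmansuy/edgecrab:latest
          ports:
            - containerPort: 8642
          envFrom:
            - secretRef:
                name: edgecrab-keys    # API keys stored as a Secret
          volumeMounts:
            - name: data
              mountPath: /root/.edgecrab
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: edgecrab-data   # persists state across pod restarts
```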

Q: I updated the Docker image but my data was lost.

Ensure the volume mount is set up correctly before the first run. The container’s internal /root/.edgecrab must be mounted to a persistent location. If no volume is mounted, all data is lost when the container stops.

Q: How do I run the TUI inside a running container?

```sh
docker exec -it edgecrab-container edgecrab
```

The TUI works inside Docker as long as the container has a TTY (`-it`, or `tty: true` in compose).

Q: Can I use Ollama inside Docker with EdgeCrab?

Yes. Run Ollama in a separate container and set:

```yaml
environment:
  EDGECRAB_MODEL: "ollama/llama3.3"
  OLLAMA_HOST: "http://ollama:11434"
```

Add `depends_on: [ollama]` to the `edgecrab` service in your compose file.
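
Putting it together, a two-service compose file might look like the sketch below. The `ollama/ollama:latest` image tag and the `ollama-models` volume name are assumptions; the EdgeCrab environment values are the ones shown above.

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama-models:/root/.ollama   # cache pulled models across restarts
  edgecrab:
    image: ghcr.io/raphaelmansuy/edgecrab:latest
    depends_on: [ollama]
    ports:
      - "8642:8642"
    environment:
      EDGECRAB_MODEL: "ollama/llama3.3"
      OLLAMA_HOST: "http://ollama:11434"   # service name resolves on the compose network
volumes:
  ollama-models:
```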