# Quick Start
EdgeCrab is a Super Powerful Personal Assistant inspired by NousHermes and OpenClaw — a single static binary, no runtime dependencies, no daemon, no Python venv required.
```
Install --> edgecrab setup --> edgecrab doctor --> edgecrab
     |              |                  |              |
npm/pip/cargo  writes config    verifies keys     TUI starts
```

## Prerequisites
- At least one LLM access method: GitHub Copilot subscription, an API key, or a local Ollama instance
- For npm install: Node.js 18+
- For pip install: Python 3.10+
- For cargo install: Rust 1.85+ (optional — only needed for building from source)
## Installation

### Option A: npm (no Rust required)
```sh
npm install -g edgecrab-cli
```

Downloads a pre-built native binary for your platform automatically. This is the fastest path if you already have Node.js.
### Option B: pip (no Rust required)

```sh
pip install edgecrab-cli
```

Downloads a pre-built native binary on first run. Use `pipx install edgecrab-cli` for an isolated install.
### Option C: cargo (compile from source)

```sh
cargo install edgecrab-cli
```

### Option D: Build from source
```sh
git clone https://github.com/raphaelmansuy/edgecrab
cd edgecrab
cargo build --release
# Binary is at ./target/release/edgecrab
cp ./target/release/edgecrab ~/.local/bin/  # add to PATH
```

### Option E: Docker
```sh
docker pull ghcr.io/raphaelmansuy/edgecrab:latest
docker run -it --rm \
  -e GITHUB_TOKEN="$GITHUB_TOKEN" \
  -v "$HOME/.edgecrab:/root/.edgecrab" \
  ghcr.io/raphaelmansuy/edgecrab:latest
```

See the Docker guide for the full production setup.
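For repeat runs, the same flags translate naturally into a Compose file. A hedged sketch, assuming the same image, env var, and volume as above (the service name and file layout are illustrative, not from official EdgeCrab docs):

```yaml
# docker-compose.yaml (illustrative)
services:
  edgecrab:
    image: ghcr.io/raphaelmansuy/edgecrab:latest
    stdin_open: true   # equivalent of -i
    tty: true          # equivalent of -t
    environment:
      - GITHUB_TOKEN=${GITHUB_TOKEN}
    volumes:
      - ${HOME}/.edgecrab:/root/.edgecrab
```

Then `docker compose run --rm edgecrab` gives you the same interactive session without retyping the flags.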
## Step 1 — Guided Setup

```sh
edgecrab setup
```

The wizard detects your API keys from the environment, lets you choose a provider, and writes `~/.edgecrab/config.yaml`:
```
EdgeCrab Setup Wizard
──────────────────────────────────────────────────────────
✓ Detected GitHub Copilot (GITHUB_TOKEN)
✓ Detected OpenAI (OPENAI_API_KEY)

Choose LLM provider:
  [1] copilot   (GitHub Copilot — gpt-4.1-mini)  <- auto-detected
  [2] openai    (OpenAI — GPT-4.1, GPT-5, o3/o4)
  [3] anthropic (Anthropic — Claude 4.5/4.6)
  [4] ollama    (local Ollama — llama3.3)
  ...

Provider [1]: 1

✓ Config written to ~/.edgecrab/config.yaml
✓ Created ~/.edgecrab/memories/
✓ Created ~/.edgecrab/skills/
```

Supported providers: copilot · openai · anthropic · gemini · xai · deepseek · huggingface · zai · openrouter · ollama · lmstudio. See Provider Overview.
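The generated file is plain YAML you can edit by hand. A hypothetical sketch of what it may contain (the field names below are illustrative; the `display.show_cost` key is mentioned under Pro Tips, but check your own generated file for the authoritative layout):

```yaml
# ~/.edgecrab/config.yaml (illustrative field names)
provider: copilot
model: gpt-4.1-mini
display:
  show_cost: true
```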
## Step 2 — Verify Health

```sh
edgecrab doctor
```

```
EdgeCrab Doctor
──────────────────────────────────────────────────────────
✓ Config file       ~/.edgecrab/config.yaml
✓ State directory   ~/.edgecrab/
✓ Memories          ~/.edgecrab/memories/
✓ Skills            ~/.edgecrab/skills/
✓ GitHub Copilot    GITHUB_TOKEN set
✓ OpenAI            OPENAI_API_KEY set
✓ Provider ping     copilot/gpt-4.1-mini --> OK (312 ms)
──────────────────────────────────────────────────────────
All checks passed.
```

If a check fails, see Configuration or Provider Overview.
## Step 3 — Start Chatting

### Interactive TUI (default)

```sh
edgecrab
```

The TUI opens with a full-screen editor. Type your prompt and press Enter. Use Shift+Enter for multi-line input.
### One-shot prompt (headless)

```sh
edgecrab "summarise the git log for today"
edgecrab --quiet "count lines in src/**/*.rs"  # no banner, pipe-safe
```

### Specify model
```sh
edgecrab --model anthropic/claude-sonnet-4-20250514 "review this PR"
edgecrab --model ollama/llama3.3 "run completely offline"
```

### Continue the last session
```sh
edgecrab -C                 # resume last CLI session
edgecrab -C "refactor-api"  # resume named session
edgecrab --session <id>     # resume by exact session ID
```

### Preload skills
```sh
edgecrab -S git-workflow "review this branch"
edgecrab -S security,refactor  # comma-separated
```

See Skills for creating your own.
### Parallel isolation with worktrees

```sh
edgecrab -w "explore performance improvements"
# Opens EdgeCrab in an isolated git worktree so changes don't pollute main
```

See Worktrees for the full workflow.
## TUI Quick Reference

Once inside the TUI, type `/help` to see all commands. Key shortcuts:
| Key | Action |
|---|---|
| Enter | Submit prompt |
| Shift+Enter | Insert newline |
| Ctrl+C | Interrupt running tool |
| Ctrl+L | Clear screen |
| Ctrl+U | Clear input line |
| Alt+Up/Down | Scroll output |
| Ctrl+Home / Ctrl+End | Jump to top / bottom |

| Slash command | Action |
|---|---|
| /model provider/model | Hot-swap LLM mid-session |
| /new | Start a fresh session |
| /help | Full command reference |
| /theme [name] | List or switch skin preset |
| /memory | View loaded memories |
| /tools | List active tools |
Full slash command reference: Slash Commands.
## SDK Quick Start

### Python SDK

```sh
pip install edgecrab-sdk
```

```python
from edgecrab import Agent

agent = Agent(model="anthropic/claude-sonnet-4-20250514")
reply = agent.chat("Explain Rust ownership in 3 sentences")
print(reply)
```

Full docs: Python SDK.
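SDK calls can fail transiently (rate limits, network blips), so scripts often want a retry with backoff around `agent.chat`. A minimal sketch of that pattern — shown with a stand-in `flaky_chat` function rather than a real `Agent`, since the exceptions `edgecrab-sdk` raises are not documented here:

```python
import time


def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(); on failure, retry with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: propagate the last error
            time.sleep(base_delay * (2 ** attempt))


# Stand-in for a flaky agent.chat(...): fails twice, then succeeds.
calls = {"n": 0}

def flaky_chat():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"


print(with_retries(flaky_chat, base_delay=0.01))  # ok
```

In real use you would pass `lambda: agent.chat(prompt)` and catch only the SDK's transport exceptions instead of bare `Exception`.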
### Node.js / TypeScript SDK

```sh
npm install edgecrab-sdk
```

```ts
import { Agent } from "edgecrab-sdk";

const agent = new Agent({ model: "openai/gpt-4o" });
const reply = await agent.chat("Write a TypeScript hello-world");
console.log(reply);
```

Full docs: Node.js SDK.
## Next Steps

```
Quick Start --> Configuration --> Features --> Reference
                      |              |
                  providers/      skills/
                  overview.md     tools.md
```

| Goal | Where to go |
|---|---|
| Configure providers, toolsets, memory | Configuration |
| Switch or add providers | Provider Overview |
| Understand the TUI in depth | TUI Interface |
| Enable messaging platforms | Messaging Gateways |
| Create custom skills | Your First Skill |
| Run on a server | Self-Hosting |
| All CLI flags | CLI Reference |
## Pro Tips

**Write better prompts.** Include context upfront: “In this Rust workspace using tokio 1.x, add a health-check endpoint to the axum server in crates/api/src/main.rs.” The agent reads your AGENTS.md automatically, so put project conventions there once.
**Use one-shot mode for scripts.** `edgecrab --quiet "count total test files"` pipes perfectly into shell scripts — no banner, no TUI, just the answer.
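If you drive one-shot mode from Python rather than the shell, it helps to build the invocation in one place. A small sketch that assembles an `edgecrab --quiet` argv for `subprocess.run` — command construction only; nothing here actually executes the binary:

```python
import shlex


def edgecrab_cmd(prompt, model=None, quiet=True):
    """Build an edgecrab argv list suitable for subprocess.run."""
    argv = ["edgecrab"]
    if quiet:
        argv.append("--quiet")  # no banner, pipe-safe output
    if model:
        argv += ["--model", model]
    argv.append(prompt)
    return argv


cmd = edgecrab_cmd("count total test files", model="ollama/llama3.3")
print(shlex.join(cmd))
# edgecrab --quiet --model ollama/llama3.3 'count total test files'
```

Passing a list (not a joined string) to `subprocess.run` avoids shell-quoting bugs when prompts contain spaces or quotes.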
**Alias common invocations.** Add to your shell config:

```sh
alias ec='edgecrab'
alias ecs='edgecrab --quiet'
alias ecc='edgecrab -C'  # continue last session
```

**Let the agent see your codebase.** Start edgecrab from your project root — it auto-loads AGENTS.md from the current directory and all parent directories. The more context it has, the fewer clarifying questions it asks.
**Control costs.** Add `display: { show_cost: true }` to config.yaml to see token usage after every response. Use copilot or ollama for exploration, claude-opus for final review.
## Frequently Asked Questions

Q: I ran edgecrab setup but edgecrab doctor shows my API key is missing.
Check that you exported the key in the same shell session (or have it in ~/.edgecrab/.env). edgecrab reads ~/.edgecrab/.env automatically at startup — add OPENAI_API_KEY=sk-... there for a persistent solution.
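The `.env` format is simple `KEY=VALUE` lines. A sketch of how such a file is typically parsed — a generic dotenv illustration, not EdgeCrab's actual loader, which may handle more edge cases:

```python
def parse_env(text):
    """Parse KEY=VALUE lines, ignoring blank lines and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        # Drop surrounding quotes, if any.
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env


sample = "# keys for edgecrab\nOPENAI_API_KEY=sk-...\nEDGECRAB_MODEL='ollama/llama3.3'\n"
print(parse_env(sample))
# {'OPENAI_API_KEY': 'sk-...', 'EDGECRAB_MODEL': 'ollama/llama3.3'}
```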
Q: The agent ran a command that changed my files. How do I undo?
All file writes use atomic operations (write temp → rename). Undo is via git: git diff to see changes, git checkout -- . to revert. Use edgecrab -w (worktree mode) for risky operations — changes are isolated to a separate branch.
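The write-temp-then-rename pattern this answer describes can be sketched as follows (a generic illustration of the technique, not EdgeCrab's internal code):

```python
import os
import tempfile


def atomic_write(path, data):
    """Write data to path atomically: temp file in the same dir, then rename."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)  # same filesystem as the target
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.replace(tmp, path)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp)  # never leave a half-written temp file behind
        raise
```

Because the rename is atomic, readers see either the old file or the new one in full — never a partially written file — which is why git-based undo works cleanly.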
Q: How do I stop the agent mid-task?
Press Ctrl+C. This cancels the current tool execution and LLM generation. The session history is preserved — you can continue from where you left off.
Q: Can I use EdgeCrab without an internet connection?
Yes. Point it at a local Ollama instance:
```sh
# Start Ollama with any model
ollama pull llama3.3
edgecrab --model ollama/llama3.3 "explain this code"
```

Set EDGECRAB_MODEL=ollama/llama3.3 in ~/.edgecrab/.env for a permanent default.
Q: How do I run the same prompt on multiple files?
Write a prompt that uses tool calls, or use the --quiet flag in a shell loop:
```sh
for f in src/**/*.rs; do
  edgecrab --quiet "summarise $f in one sentence" >> summaries.txt
done
```

Q: The agent keeps exceeding the context window.
Reduce tools.max_loop_depth or increase session.max_context_tokens. For large codebases, use Worktrees to scope each session or preload only the relevant Skills.
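The dotted names above suggest the following config.yaml nesting (a sketch — the numbers are placeholders, and the nesting is inferred from the dotted paths rather than taken from a documented schema):

```yaml
tools:
  max_loop_depth: 8          # lower this to shorten agent tool loops
session:
  max_context_tokens: 64000  # raise this if your provider's window allows it
```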
Q: Where are my conversation logs stored?
In ~/.edgecrab/state.db (SQLite). Browse them with edgecrab sessions list and search with edgecrab sessions search "auth bug".
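Since the store is plain SQLite, you can also query it programmatically. A sketch using an in-memory database and a hypothetical `sessions` table — the real schema inside state.db is an EdgeCrab internal detail and may differ:

```python
import sqlite3

# Toy stand-in for ~/.edgecrab/state.db with a hypothetical schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (id TEXT, name TEXT, summary TEXT)")
db.executemany(
    "INSERT INTO sessions VALUES (?, ?, ?)",
    [
        ("a1", "refactor-api", "moved auth into middleware"),
        ("b2", "bugfix", "fixed the auth bug in login flow"),
    ],
)

# Rough equivalent of `edgecrab sessions search "auth bug"`:
rows = db.execute(
    "SELECT id, name FROM sessions WHERE summary LIKE ?", ("%auth bug%",)
).fetchall()
print(rows)  # [('b2', 'bugfix')]
```

Prefer the `edgecrab sessions` commands for day-to-day use; direct SQL is only for one-off inspection, and only while EdgeCrab is not writing to the database.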