
Configuration

All EdgeCrab settings live in ~/.edgecrab/. Configuration is layered — later entries override earlier ones, so you can set-and-forget in config.yaml and override per-invocation from the shell.


~/.edgecrab/
├── config.yaml # All settings (model, terminal, compression, memory, etc.)
├── .env # API keys and secrets
├── SOUL.md # Primary agent identity (slot #1 in system prompt)
├── AGENTS.md # Project-agnostic instructions for every session
├── memories/ # Persistent memory files (auto-managed by agent)
├── skills/ # Agent skills (directories with SKILL.md)
├── cron/ # Scheduled job storage
├── checkpoints/ # Shadow git repos for rollback (per working directory)
├── profiles/ # Named profiles with isolated configs
├── skin.yaml # TUI color and kaomoji customization
├── state.db # SQLite session database (WAL mode)
├── plugins/ # Optional plugin binaries
└── logs/ # Error and gateway logs

Override the home directory with EDGECRAB_HOME:

export EDGECRAB_HOME=/opt/edgecrab

Manage configuration from the CLI:
edgecrab config show # print active config as YAML
edgecrab config edit # open config.yaml in $EDITOR
edgecrab config set <key> <value>
edgecrab config path # print path to config.yaml
edgecrab config env-path # print path to .env

The set command routes automatically — non-secret values go to config.yaml, API keys and tokens go to .env.


Settings resolve from lowest to highest priority:

  1. Compiled defaults — AppConfig::default() in config.rs
  2. ~/.edgecrab/config.yaml — your primary config file
  3. EDGECRAB_* environment variables — override specific keys at runtime
  4. CLI flags — --model, --toolset, etc. (highest priority, per-invocation)

~/.edgecrab/config.yaml
model:
  default: "anthropic/claude-sonnet-4-20250514" # Default model
  max_iterations: 90    # Max tool call iterations per conversation
  streaming: true       # Stream tokens to terminal
  prompt_caching: true  # Enable OpenAI/Anthropic prompt caching
  cache_ttl: 300        # Cache TTL in seconds
  max_tokens: ~         # Max response tokens (null = model default)
  temperature: ~        # Sampling temperature (null = model default)
  api_key_env: "OPENROUTER_API_KEY" # Env var name for the API key
  base_url: ~           # Custom OpenAI-compatible base URL
  # Fallback model when primary fails
  fallback:
    model: "copilot/gpt-4.1-mini"
    provider: "copilot" # Provider to use for auth
  # Smart routing: use a cheap model for simple messages
  smart_routing:
    enabled: false
    cheap_model: "copilot/gpt-4.1-mini"

Defaults (from ModelConfig::default()):

Key             Default
default         "anthropic/claude-sonnet-4-20250514"
max_iterations  90
streaming       true
prompt_caching  true
cache_ttl       300

Override model per-invocation:

edgecrab --model copilot/gpt-4.1-mini "quick question"
edgecrab -m ollama/llama3.3 "offline task"

tools:
  enabled_toolsets: ~      # null = all toolsets; or list like ["coding"]
  disabled_toolsets: ~     # toolsets to disable even if in enabled list
  custom_groups:           # define your own toolset aliases
    my-group:
      - read_file
      - write_file
      - terminal
  tool_delay: 1.0          # seconds between consecutive tool calls
  parallel_execution: true # allow concurrent tool calls
  max_parallel_workers: 8  # concurrency limit

Override toolsets per-invocation:

edgecrab --toolset coding "implement the feature"
edgecrab --toolset file,terminal "run tests and fix"
edgecrab --toolset all "maximum capability"

terminal:
  shell: ~            # null = user's login shell
  timeout: 120        # per-command timeout in seconds
  env_passthrough: [] # env var names to forward to subprocesses
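For example, a session that prefers zsh, allows long-running commands, and forwards cloud credentials to subprocesses might look like the sketch below (the shell path and variable names are illustrative):

```yaml
terminal:
  shell: /bin/zsh    # use zsh instead of the login shell
  timeout: 300       # allow long-running test suites
  env_passthrough:   # forward these vars to spawned commands
    - AWS_PROFILE
    - DOCKER_HOST
```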

memory:
  enabled: true    # master switch for persistent memory
  auto_flush: true # auto-write memory after each session

Disable memory for a single session:

edgecrab --skip-memory "no memory this session"

skills:
  enabled: true
  hub_url: ~         # override skills hub URL
  disabled: []       # globally disabled skill names
  platform_disabled: # platform-specific disable
    telegram:
      - heavy-skill
  external_dirs:     # additional skill directories (read-only)
    - ~/.agents/skills
    - ${TEAM_SKILLS_DIR}/skills
  preloaded: []      # skills loaded into every session

tools:
  file:
    allowed_roots: [] # extra roots beyond the active workspace cwd

tools.file.allowed_roots extends the file-tool workspace boundary for read_file, write_file, patch, search_files, apply_patch, local vision image reads, and @file / @folder context refs. Relative paths still resolve from the active workspace. Use absolute paths when targeting an extra allowed root.
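A sketch of widening the boundary to two extra roots (the paths are hypothetical):

```yaml
tools:
  file:
    allowed_roots:
      - /srv/shared-docs # accessible outside the workspace
      - /home/me/notes
```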


security:
  approval_required: []    # command patterns requiring user approval
  blocked_commands: []     # commands that are always blocked
  path_restrictions: []    # deny-list roots overriding workspace + allowed_roots
  injection_scanning: true # scan for prompt injection in tool results
  url_safety: true         # block private IPs and SSRF targets
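A hardened setup might look like the sketch below; the glob-style pattern syntax shown here is an assumption, so verify how your version matches patterns before relying on it:

```yaml
security:
  approval_required:
    - "git push*" # ask before anything that leaves the machine
    - "rm -rf*"
  blocked_commands:
    - "sudo *"    # never run privileged commands
  path_restrictions:
    - /etc        # denied even if inside an allowed root
```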

url_safety blocks: private IPv4 ranges, private IPv6, localhost, 169.254.169.254, metadata.google.internal, and non-HTTP(S) URLs.

Set EDGECRAB_MANAGED=1 to enable managed mode, which blocks config writes (useful for shared deployments).


Control the messaging gateway server:

gateway:
  host: "127.0.0.1"
  port: 8080
  webhook_enabled: true
  session_timeout_minutes: 30
  enabled_platforms: [] # auto-detected from env vars
  telegram:
    enabled: false      # auto-set when TELEGRAM_BOT_TOKEN is present
    token_env: "TELEGRAM_BOT_TOKEN"
    allowed_users: []   # empty = all users
    home_channel: ~
  discord:
    enabled: false
    token_env: "DISCORD_BOT_TOKEN"
    allowed_users: []
    home_channel: ~
  slack:
    enabled: false
    bot_token_env: "SLACK_BOT_TOKEN"
    app_token_env: "SLACK_APP_TOKEN"
    allowed_users: []
    home_channel: ~
  signal:
    enabled: false
    http_url: ~ # URL of signal-cli HTTP daemon
    account: ~  # Phone number registered with signal-cli
    allowed_users: []
  whatsapp:
    enabled: false
    bridge_port: 3000
    bridge_url: ~
    mode: "self-chat"
    allowed_users: []
    install_dependencies: true

Platform env vars auto-enable their section:

Platform  Required Env Var
Telegram  TELEGRAM_BOT_TOKEN
Discord   DISCORD_BOT_TOKEN
Slack     SLACK_BOT_TOKEN + SLACK_APP_TOKEN
Signal    SIGNAL_HTTP_URL + SIGNAL_ACCOUNT
WhatsApp  WHATSAPP_ENABLED=1
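A matching ~/.edgecrab/.env sketch (the values are placeholders, not real credentials):

```sh
# ~/.edgecrab/.env
TELEGRAM_BOT_TOKEN=<your-telegram-bot-token>
SLACK_BOT_TOKEN=<your-slack-bot-token>
SLACK_APP_TOKEN=<your-slack-app-token>
```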

mcp_servers:
  github:
    command: npx
    args: ["-y", "@modelcontextprotocol/server-github"]
    env:
      GITHUB_PERSONAL_ACCESS_TOKEN: "ghp_xxx"
    enabled: true
    timeout: 30         # per-call timeout (seconds)
    connect_timeout: 10 # connection timeout (seconds)
    tools:
      include: []       # if non-empty, only expose listed tools
      exclude: []       # tools to hide
    resources: true     # enable list/read resource wrappers
    prompts: true       # enable list/get prompt wrappers
  filesystem:
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]
  my-http-server:
    url: "http://localhost:9001/mcp"
    bearer_token: "my-static-token"
    headers:
      X-Custom-Header: "value"

Manage without editing YAML:

edgecrab mcp list
edgecrab mcp add github npx -y @modelcontextprotocol/server-github
edgecrab mcp remove github

EdgeCrab automatically compresses long conversations to stay within the model’s context window:

compression:
  enabled: true
  threshold: 0.50    # compress when context exceeds 50% of window
  target_ratio: 0.20 # keep 20% of recent messages uncompressed
  protect_last_n: 20 # always keep the last 20 messages
  summary_model: ~   # null = use main model for summarization
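For instance, to compress later but hand summarization to a cheaper model (a sketch; the model name is just an example):

```yaml
compression:
  threshold: 0.70                       # wait until 70% of the window is used
  summary_model: "copilot/gpt-4.1-mini" # cheap model writes the summaries
```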

Trigger manually:

/compress

Configure how the delegate_task tool spawns subagents:

delegation:
  enabled: true
  model: ~             # null = inherit parent model
  provider: ~          # null = inherit parent provider
  base_url: ~          # direct OpenAI-compatible endpoint
  max_subagents: 3     # max concurrent subagents
  max_iterations: 50   # max tool iterations per subagent
  shared_budget: false # share parent's iteration budget

Example: use a cheap model for subtasks:

delegation:
  model: "copilot/gpt-4.1-mini"
  provider: "copilot"
  max_subagents: 5

display:
  compact: false         # reduce whitespace in output
  personality: "helpful" # default personality preset
  show_reasoning: false  # show model thinking tokens
  streaming: true        # stream response tokens
  show_cost: true        # show cost in status bar
  skin: "default"        # skin name from ~/.edgecrab/skin.yaml

Built-in personalities: helpful, concise, technical, kawaii, pirate, philosopher, hype, shakespeare, noir, catgirl, creative, teacher, surfer, uwu.
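A minimal tweak for small terminals might look like this (the specific values are illustrative, not recommendations):

```yaml
display:
  compact: true          # tighter output for small terminals
  personality: "concise"
  show_cost: false
```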


privacy:
  redact_pii: false # strip PII (phone numbers, user IDs) from LLM context

When enabled, redaction applies to gateway platforms (Telegram, WhatsApp, Signal). Redacted identifiers are replaced with deterministic hashes, so the same user always maps to the same hash.


checkpoints:
  enabled: true     # create shadow git commits before destructive ops
  max_snapshots: 50 # max checkpoints per working directory

See Checkpoints & Rollback for the full guide.


tts:
  provider: "edge-tts"      # "edge-tts" | "openai" | "elevenlabs"
  voice: "en-US-AriaNeural" # provider-specific voice name
  rate: ~                   # edge-tts rate modifier (e.g. "+10%")
  model: ~                  # openai TTS model (e.g. "tts-1-hd")
  auto_play: true           # auto-play in voice mode
  # ElevenLabs options
  elevenlabs_voice_id: ~
  elevenlabs_model_id: ~
  elevenlabs_api_key_env: "ELEVENLABS_API_KEY"
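Switching to OpenAI TTS could look like this sketch ("alloy" is a standard OpenAI voice name; confirm availability for your account):

```yaml
tts:
  provider: "openai"
  model: "tts-1-hd"
  voice: "alloy"
```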

stt:
  provider: "local"         # "local" (whisper) | "groq" | "openai"
  whisper_model: "base"     # local: tiny|base|small|medium|large-v3
  silence_threshold: -40.0  # dB for voice activity detection
  silence_duration_ms: 1500 # ms of silence before auto-stop

voice:
  enabled: false             # enable voice mode components
  push_to_talk_key: "ctrl+b" # push-to-talk key binding
  continuous: false          # continuous listening (no key press)
  hallucination_filter: true # filter STT hallucinations
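A hands-free setup pairing continuous listening with hosted transcription might look like the sketch below (assuming a Groq API key is configured for your install; untested):

```yaml
stt:
  provider: "groq"           # hosted Whisper instead of local
voice:
  enabled: true
  continuous: true           # no push-to-talk key needed
  hallucination_filter: true
```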

Enable voice in the TUI:

/voice on # enable microphone input
/voice tts # toggle spoken replies

honcho:
  enabled: true           # persistent cross-session user modeling
  cloud_sync: false       # sync to Honcho cloud (requires HONCHO_API_KEY)
  api_key_env: "HONCHO_API_KEY"
  api_url: "https://api.honcho.dev/v1"
  max_context_entries: 10 # entries injected into system prompt
  write_frequency: 0      # auto-conclude every N messages (0 = manual)

auxiliary:
  model: ~       # auxiliary model identifier
  provider: ~    # provider for auxiliary tasks
  base_url: ~    # custom OpenAI-compatible endpoint
  api_key_env: ~ # env var for API key

Auxiliary models are used for compression summaries and TTS prompts. Defaults to the main model.
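For example, to offload those auxiliary tasks to a cheaper model (the model name is illustrative):

```yaml
auxiliary:
  model: "copilot/gpt-4.1-mini"
  provider: "copilot"
```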


reasoning_effort: "" # "" | "low" | "medium" | "high" | "xhigh"

Empty string = medium (default). Change mid-session:

/reasoning high # increase reasoning depth
/reasoning off # disable reasoning
/reasoning show # display model thinking tokens

timezone: "" # "" = server-local; or IANA string e.g. "America/New_York"

Affects timestamps in logs, cron scheduling, and the system prompt time injection.


browser:
  command_timeout: 30         # CDP call timeout in seconds
  record_sessions: false      # auto-record sessions as WebM video
  recording_max_age_hours: 72 # auto-delete recordings older than this

Key EDGECRAB_* variables (applied via apply_env_overrides in config.rs):

Variable                     Config key                Description
EDGECRAB_MODEL               model.default             Override default model
EDGECRAB_MAX_ITERATIONS      model.max_iterations      Max agent iterations
EDGECRAB_TIMEZONE            timezone                  IANA timezone string
EDGECRAB_SAVE_TRAJECTORIES   save_trajectories         Enable trajectory logging
EDGECRAB_SKIP_CONTEXT_FILES  skip_context_files        Skip auto-loading context files
EDGECRAB_SKIP_MEMORY         skip_memory               Disable memory for this session
EDGECRAB_GATEWAY_HOST        gateway.host              Gateway bind host
EDGECRAB_GATEWAY_PORT        gateway.port              Gateway bind port
EDGECRAB_TTS_PROVIDER        tts.provider              TTS provider override
EDGECRAB_TTS_VOICE           tts.voice                 TTS voice override
EDGECRAB_REASONING_EFFORT    reasoning_effort          Reasoning effort level
EDGECRAB_HOME                —                         Override ~/.edgecrab home directory
EDGECRAB_MANAGED             security.managed_mode     Block config writes (1 to enable)
TELEGRAM_BOT_TOKEN           gateway.telegram.enabled  Auto-enable Telegram
DISCORD_BOT_TOKEN            gateway.discord.enabled   Auto-enable Discord
SLACK_BOT_TOKEN              gateway.slack.enabled     Auto-enable Slack (with SLACK_APP_TOKEN)
SIGNAL_HTTP_URL              gateway.signal.enabled    Auto-enable Signal (with SIGNAL_ACCOUNT)
HONCHO_API_KEY               honcho.cloud_sync         Enable Honcho cloud sync
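These variables compose naturally in deployment env files; a hypothetical shared-server sketch:

```sh
# /etc/edgecrab.env — loaded via systemd EnvironmentFile or docker --env-file (hypothetical paths)
EDGECRAB_HOME=/opt/edgecrab
EDGECRAB_GATEWAY_HOST=0.0.0.0
EDGECRAB_GATEWAY_PORT=9090
EDGECRAB_MANAGED=1   # block config writes on the shared box
```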

See Environment Variables Reference for the full list.