
Python SDK

The EdgeCrab Python SDK (edgecrab-sdk) provides an async-first Agent class, streaming support, and a built-in CLI. Compatible with Python 3.10+.


pip install edgecrab-sdk

from edgecrab import Agent
agent = Agent(model="anthropic/claude-sonnet-4-20250514")
reply = agent.chat("Explain Rust ownership in 3 sentences")
print(reply)

from edgecrab import Agent
agent = Agent(
    model="openai/gpt-4o",                    # provider/model string
    system_prompt="You are a Rust expert.",   # optional system prompt
    api_key="sk-...",                         # optional; falls back to env var
    max_loop_depth=20,                        # max ReAct iterations (default: 20)
    toolsets=["file", "web"],                 # enable specific toolsets
    session_name="my-project",                # named session (persisted)
    base_url=None,                            # custom API endpoint (OpenAI-compatible)
)

from edgecrab import Agent
agent = Agent(model="openai/gpt-4o")
# Single turn
reply = agent.chat("List all .rs files in the current directory")
print(reply)
# Multi-turn (maintains history)
agent.chat("Explain the main function in src/main.rs")
agent.chat("Now add error handling to it")

import asyncio
from edgecrab import AsyncAgent
async def main():
    agent = AsyncAgent(model="openai/gpt-4o")
    reply = await agent.chat("Run cargo test and summarize failures")
    print(reply)
asyncio.run(main())

from edgecrab import Agent
agent = Agent(model="anthropic/claude-opus-4-5")
for chunk in agent.stream("Write a Rust async HTTP client"):
    print(chunk, end="", flush=True)
print()

Async streaming:

import asyncio
from edgecrab import AsyncAgent
async def main():
    agent = AsyncAgent(model="anthropic/claude-opus-4-5")
    async for chunk in agent.astream("Write a Rust async HTTP client"):
        print(chunk, end="", flush=True)
    print()
asyncio.run(main())

Inspect tool calls and results during execution:

from edgecrab import Agent, ToolCallEvent, ToolResultEvent
agent = Agent(model="openai/gpt-4o")
for event in agent.stream_events("Fix the failing tests in src/"):
    if isinstance(event, ToolCallEvent):
        print(f"Tool call: {event.name}({event.args})")
    elif isinstance(event, ToolResultEvent):
        print(f"Result: {event.result[:100]}")
    else:
        print(event.text, end="", flush=True)

Register your own Python functions as tools:

from edgecrab import Agent, tool
@tool(description="Get the current UTC time")
def get_time() -> str:
    from datetime import datetime, timezone
    return datetime.now(timezone.utc).isoformat()

@tool(description="Read a file from a custom secure location")
def read_restricted_file(path: str) -> str:
    # your security logic here
    allowed = ["/data/project/"]
    if not any(path.startswith(p) for p in allowed):
        raise ValueError(f"Path not allowed: {path}")
    with open(path) as f:
        return f.read()
agent = Agent(model="openai/gpt-4o", extra_tools=[get_time, read_restricted_file])
reply = agent.chat("What time is it?")

from edgecrab import Agent
# Named session — history is persisted across runs
agent = Agent(model="openai/gpt-4o", session_name="my-project")
agent.chat("Explain the architecture")
# Later, in a new script — session continues where it left off
agent = Agent(model="openai/gpt-4o", session_name="my-project")
agent.chat("Now add authentication") # Has context from previous session

The SDK includes a CLI:

# Interactive chat
edgecrab chat
# Single prompt
edgecrab chat "Summarize the last 10 git commits"
# Use a specific model
edgecrab chat --model anthropic/claude-opus-4-5 "Explain this codebase"
# List available models
edgecrab models
# Check health
edgecrab health

from edgecrab import Agent, EdgeCrabError, ProviderError, ToolError
agent = Agent(model="openai/gpt-4o")
try:
    reply = agent.chat("Read /etc/passwd")
except ToolError as e:
    print(f"Tool failed (likely security): {e}")
except ProviderError as e:
    print(f"LLM provider error: {e}")
except EdgeCrabError as e:
    print(f"General error: {e}")

See sdks/python/README.md in the repository for the complete API reference.


  • Use session_name for long-running projects: Named sessions persist their history in ~/.edgecrab/state.db, so you pick up where you left off even after restarting Python.
  • Use stream_events over stream when you need tool visibility: It surfaces ToolCallEvent and ToolResultEvent so you can log or display exactly what the agent is doing.
  • Gate file tools tightly in production: Pass toolsets=['web'] to limit the agent to web-only tools when running in an untrusted pipeline.
  • Set short max_loop_depth for unit tests: max_loop_depth=3 makes tests fast and deterministic by forcing early completion.
  • Errors are typed: Catch ToolError (security rejection, tool failure) and ProviderError (API quota, model error) separately for clean error handling.
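The path allow-listing used in the custom-tool example above can be factored out and unit-tested on its own. A minimal sketch in plain Python (no SDK required; `ALLOWED_PREFIXES` and `is_allowed` are illustrative names), with a `Path.resolve()` step added so `..` components can't escape the allowed directory:

```python
from pathlib import Path

ALLOWED_PREFIXES = ("/data/project/",)  # adjust for your deployment

def is_allowed(path: str) -> bool:
    """Return True only if `path` normalizes to an allowed prefix."""
    # resolve() makes the path absolute and collapses ".." components,
    # so "/data/project/../secret" is checked as "/data/secret".
    resolved = str(Path(path).resolve())
    return any(resolved.startswith(p) for p in ALLOWED_PREFIXES)

print(is_allowed("/data/project/notes.txt"))  # allowed
print(is_allowed("/data/project/../secret"))  # rejected after normalization
print(is_allowed("/etc/passwd"))              # rejected
```

A plain `startswith` check without normalization, as in the shorter example above, is fine for trusted input but can be bypassed with `..` in adversarial pipelines.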

Does the SDK require a running EdgeCrab server? No. edgecrab-sdk calls the LLM provider directly using the same logic as the CLI. No local server is needed.

Can I use the SDK with a self-hosted gateway? Yes. Pass base_url="https://your-gateway.example.com/v1" to Agent() and it will send all requests there.

Does the SDK respect ~/.edgecrab/config.yaml? Yes. The .yaml config is loaded automatically unless overridden by constructor arguments.
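The exact schema is documented in the repository; purely as an illustration, a config mirroring the constructor arguments above might look like the following (the key names are assumptions, not confirmed by the SDK docs):

```yaml
# ~/.edgecrab/config.yaml — illustrative only; see the repo README for the real schema
model: openai/gpt-4o
max_loop_depth: 20
toolsets:
  - file
  - web
```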

Can I use this with Jupyter notebooks? Yes. Use the AsyncAgent with await in a notebook cell. The sync Agent also works but may block the event loop in async contexts.
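The event-loop caveat can be demonstrated without the SDK. In the sketch below, `fake_chat` is a stand-in for `AsyncAgent.chat` and `chat_sync` is an illustrative helper, neither part of the SDK: `asyncio.run()` raises if a loop is already running (as it is inside a Jupyter cell), so the helper detects that case and tells the caller to `await` instead:

```python
import asyncio

async def fake_chat(prompt: str) -> str:
    # Stand-in for AsyncAgent.chat — swap in the real SDK call.
    await asyncio.sleep(0)
    return f"reply to: {prompt}"

def chat_sync(prompt: str) -> str:
    # In a plain script no loop is running, so asyncio.run() is safe.
    # In Jupyter, get_running_loop() succeeds and we refuse rather than deadlock.
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return asyncio.run(fake_chat(prompt))
    raise RuntimeError("Already inside an event loop — use `await fake_chat(...)` instead")

print(chat_sync("hello"))  # → reply to: hello
```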

What Python versions are supported? Python 3.10+. Tested on 3.10, 3.11, 3.12, and 3.13.