ToolWeaver

Secure tool orchestration for AI—parallel agents, caching, and sandboxed execution with built-in guardrails.

Simple Explanation

Plan once with a large model, then execute many small, safe steps. ToolWeaver finds the right tools and runs them in parallel with limits and caching, so you get fast results without runaway cost.

Technical Explanation

The planner outputs a DAG; the orchestrator discovers tools, narrows candidates via hybrid search (BM25 + embeddings), dispatches steps concurrently under semaphores and guardrails, retries or falls back on errors, aggregates outputs, and records metrics. Code runs in a sandbox with restricted builtins and timeouts.
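The dispatch layer described above can be sketched with plain asyncio: a semaphore caps concurrency, a per-step timeout bounds runtime, and failed steps are retried. The names `run_step` and `dispatch` are illustrative, not ToolWeaver's actual API:

```python
import asyncio

async def run_step(step_id: int) -> dict:
    # Stand-in for a single tool invocation.
    await asyncio.sleep(0.01)
    return {"step": step_id, "ok": True}

async def dispatch(step_id: int, sem: asyncio.Semaphore,
                   timeout: float = 1.0, retries: int = 2) -> dict:
    # Guardrails: cap concurrency, bound runtime, retry on failure.
    async with sem:
        for attempt in range(retries + 1):
            try:
                return await asyncio.wait_for(run_step(step_id), timeout)
            except (asyncio.TimeoutError, RuntimeError):
                if attempt == retries:
                    return {"step": step_id, "ok": False}

async def main() -> list:
    sem = asyncio.Semaphore(4)  # at most 4 steps in flight
    return await asyncio.gather(*(dispatch(i, sem) for i in range(10)))

results = asyncio.run(main())
```

All ten steps run, but never more than four at once; the same shape extends naturally to fallbacks (try a cheaper tool when the primary one exhausts its retries).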

The Product Pitch

  • Problem: Orchestrating many tools and models is hard: cost control, concurrency, safety, and consistency all pull against each other.
  • Solution: ToolWeaver provides secure fan-out, discovery, safe execution, and performance primitives.
  • Value: Ship faster, scale safely, stay flexible with decorators/templates/YAML.
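The discovery step's hybrid narrowing can be sketched as blending normalized lexical (BM25) scores with embedding-similarity scores. `hybrid_rank` and the `alpha` blend weight are illustrative assumptions, not ToolWeaver's actual API:

```python
def hybrid_rank(lexical: dict, vector: dict, alpha: float = 0.5) -> list:
    """Blend lexical and vector scores: alpha*lexical + (1-alpha)*vector."""
    def norm(scores):
        # Min-max normalize so the two score scales are comparable.
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {k: (v - lo) / span for k, v in scores.items()}
    nl, nv = norm(lexical), norm(vector)
    blended = {k: alpha * nl[k] + (1 - alpha) * nv.get(k, 0.0) for k in nl}
    return sorted(blended, key=blended.get, reverse=True)

# Candidate tools scored by BM25 and by embedding cosine similarity:
lex = {"search_docs": 2.0, "echo": 1.0, "fetch_url": 0.0}
vec = {"search_docs": 0.1, "echo": 0.9, "fetch_url": 0.5}
ranked = hybrid_rank(lex, vec)
```

Normalizing before blending matters: raw BM25 scores and cosine similarities live on different scales, so an unnormalized sum lets one signal dominate.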

Get Started

10-Minute Quickstart

Your first tool and parallel run

  1. Install: pip install toolweaver (add [openai], [azure], or [anthropic] for LLM providers)
  2. Define a tool:

     ```python
     from orchestrator import mcp_tool

     @mcp_tool(domain="demo", description="Echo a message")
     async def echo(message: str) -> dict:
         """Echo back the provided message."""
         return {"echo": message}
     ```
  3. Run a parallel demo:

     ```bash
     python samples/25-parallel-agents/parallel_deep_dive.py
     ```
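Decorator aside, an `echo`-style tool is just an async function, so you can fan one out over several inputs with plain asyncio (this sketch omits the `@mcp_tool` decorator so it runs standalone; it is not ToolWeaver's runner):

```python
import asyncio

async def echo(message: str) -> dict:
    """Echo back the provided message (decorator omitted for a standalone demo)."""
    return {"echo": message}

async def main() -> list:
    msgs = ["hello", "world", "toolweaver"]
    # Fan out: one concurrent call per message, results in input order.
    return await asyncio.gather(*(echo(m) for m in msgs))

results = asyncio.run(main())
```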

Preview Locally

```bash
pip install mkdocs-material
mkdocs serve
```