dispatchmy.ai · specialist agents

Stop asking one agent to do everything.

Build specialist agents with their own prompts, tools, and context. Compose them into workflows that delegate the way a real team would — visual, auditable, reusable.

Early access in waves · Free while in beta · Bring your own LLM key

One agent hitting a ceiling isn't a prompt problem.

LLMs have a finite context window. The fuller it gets, the worse the model reasons. Ask a single agent to research, browse, code, and report in one context and you get overflow, drift, and a generalist that does none of those jobs well.

Browsing alone is a specialty — web pages are enormous, full of consent popups and dynamic JS, and every site has its quirks. A browsing agent needs site-specific tricks, retry patience, and its own examples. None of that fits in the same context where it's also reasoning about your ambiguous task.

Split the jobs, and each agent gets to be good at one thing.

"I asked one agent to research three competitors and summarize each. It drifted halfway through — started attributing features to the wrong product, and nothing I added to the prompt could fix it." — You, probably

Your manager delegates. Specialists do their one thing well, or delegate further.

Build a manager agent and give it specialist subagents — browsing, research, email, code. Each has its own prompt, model, and tool set. The manager plans, pulls from memory, and delegates with a tight brief. It can continue a subagent's existing session or start a fresh one, and specialists can delegate further. Work completes without any single agent drowning in context.

dispatchmy — manager runs a game between two subagents

A simple demo showing the delegation workflow in action.

Any agent can be another agent's tool.

Once you configure an agent, it shows up to its parent as just another tool to call. No depth limit. No circular references. Any agent you build becomes reusable across every workflow that needs it.

/1

No depth limit

Nest trees as deep as the task demands. Manager → research → browser → parser → tool is a real shape.

/2

Reusable

Write a browsing specialist once, plug it into any other agent, at any depth.

/3

Composable

Build whatever hierarchy suits your workflow, and as many hierarchies as you need.
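The composition model above can be sketched in a few lines. This is an illustrative toy, not dispatchmy's actual API — the `Agent` class and `as_tool()` method are hypothetical names standing in for "a configured agent shows up to its parent as just another tool":

```python
# Hypothetical sketch of agents-as-tools; Agent and as_tool are illustrative
# names, not dispatchmy's real API.

class Agent:
    def __init__(self, name, tools=None):
        self.name = name
        self.tools = tools or {}  # subagents appear here as plain callables

    def as_tool(self):
        """Expose this agent to a parent as an ordinary tool call."""
        return lambda brief: self.run(brief)

    def run(self, brief):
        # A real agent would call its LLM here; this toy just traces
        # the delegation path down the tree.
        if self.tools:
            child = next(iter(self.tools.values()))
            return f"{self.name} -> " + child(brief)
        return f"{self.name} handled: {brief}"

# Manager -> research -> browser: nesting has no fixed depth limit.
browser = Agent("browser")
research = Agent("research", tools={"browser": browser.as_tool()})
manager = Agent("manager", tools={"research": research.as_tool()})

print(manager.run("find pricing for competitor X"))
# -> "manager -> research -> browser handled: find pricing for competitor X"
```

The same `browser` agent could be plugged into any other parent at any depth — that is the reusability claim in miniature.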

Every workflow runs in its own container.

Each top-level agent gets a container with exactly the dependencies it and its subagent tree needs — nothing more. Workflows stay reproducible, safe to share, and easy to reset.

No more "my agent's environment is broken" headaches. You control the environment and can rebuild it from scratch using the configuration.

01

Per-workflow isolation

Each tree has its own dependencies. A browsing agent's Playwright install doesn't leak into your email agent's environment.

02

Reproducible environments

Same config runs the same everywhere. No drift between your machine and a teammate's.

03

Safe sharing

Export a tree, someone imports it, it just runs. No hidden state, no "works on my machine".

04

Clean uninstall

Delete the agent — its container and files go with it. Nothing lingers on your system.

Everything else.

Do I need my own LLM API key?

Yes — bring your own Anthropic key, OpenRouter key, or any OpenAI-compatible endpoint (Ollama, LM Studio, vLLM, custom hosted proxy, whatever you run). Your traffic, your budget, your control. We don't proxy or touch your model usage.
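"OpenAI-compatible" means every listed backend accepts the same chat-completions request shape, so pointing an agent at Ollama, vLLM, or a hosted proxy is just a base-URL swap. A minimal stdlib-only sketch — the URL, key, and model name below are placeholders for whatever you run:

```python
# Sketch: building a chat-completions request for any OpenAI-compatible
# endpoint. Base URL, key, and model are illustrative placeholders.
import json
import urllib.request

BASE_URL = "http://localhost:11434/v1"  # e.g. Ollama's OpenAI-compatible API
API_KEY = "sk-your-key"                 # most local servers ignore the key

payload = {
    "model": "llama3",
    "messages": [{"role": "user", "content": "Say hello in five words."}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
)
# resp = urllib.request.urlopen(req)  # uncomment to actually send the request
```

Because the request shape is identical everywhere, switching providers changes `BASE_URL` and `API_KEY` and nothing else.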

What operating systems do you support?

macOS and Linux (both amd64 and arm64) as Tier 1. Windows via Docker Desktop + WSL2 is in beta. You install Docker, we do the rest.

Can I use this at work?

Yes — personal or work use is fine.

What happens to my data?

Everything runs locally on your machine. Your agent configs, session histories, and any files they touch never leave your environment. We only know your account details.

Stop asking one agent to do everything.

Runs locally, uses your own LLM keys, your data never leaves your machine.

Join waitlist →