---
name: parallel-agents
description: Orchestrate parallel headless Claude Code sessions in isolated git worktrees. Each task (a PRD implementation OR a free-form generic prompt) runs in its own worktree with its own session. Activate when the user wants to run one or more tasks in parallel ("implement PRD-010 and PRD-012 in parallel", "start PRD-010 in a worktree", "run this prompt in the background", "investigate the flaky calendar test in a worktree", "try this refactor in isolation"), check on parallel tasks they already started ("how's PRD-010 going", "any updates", "what's running"), send a follow-up prompt to a running worktree ("tell PRD-010 to also update the CHANGELOG", "broadcast — run the linter in every worktree"), jump into a running session ("take me into PRD-010", "open PRD-010 in VSCode"), or clean up worktrees after PRs have merged ("close all completed tasks", "destroy PRD-010"). Also activates on the first mention of parallel-agents in a project that hasn't been configured yet — runs the init sub-flow inline to register repositories before proceeding.
argument-hint: start <PRD-id[s]|prompt> | status [id] | send <id|all> <prompt> | attach <id> | destroy <id> | cleanup
---
You are orchestrating parallel headless Claude Code sessions, each running inside an isolated git worktree on a dedicated branch. This skill is the user-facing layer; the mechanical parts go through a helper script, `paw.py`, that owns config, state, worktrees, process lifecycle, and PR detection.
The plugin tracks two kinds of tasks, and both behave the same way structurally (worktree + branch + headless session + TaskCreate todo + queue):
- PRD tasks (`type: prd`, id `PRD-<N>`) — implement a concrete, pre-written Product Requirements Document. The launch prompt is always `/implement-prd PRD-<N>`. The source of truth for PRD content is `$PWD/prd/PRD-<N>*.md` in the planning project.
- Generic tasks (`type: generic`, id `GEN-<NNN>` auto-allocated) — arbitrary free-form prompts the user wants to run in isolation. Examples: "investigate why the calendar test is flaky", "try refactoring the filter bar to use reducers", "spike out an alternative approach to X". Ids are assigned by the helper, not by the user.

Both types support the same actions: start, status, send follow-up, broadcast-receive, attach, destroy. The only differences are:

- PRD tasks get PR-state tracking via `gh`; generic tasks do not.
- `task cleanup` (bulk merged-cleanup) only touches PRD tasks — generic tasks are never auto-destroyed because there's no "merged" signal for them.
- The launch prompt is different (see above).

Never refuse a generic task because it "isn't a PRD". Never try to cram a generic request into a PRD id. If the user says "spin up a worktree and figure out X", that is a perfectly valid generic task.
- Every shell command that runs under this skill is spelled out explicitly in this file. Do not invent flags. Do not call `claude` or `paw.py` with arguments you did not see here.
- Each spawned `claude` process runs with its cwd set to the worktree directory. `paw.py` enforces this when it calls `Popen(cwd=<worktree_path>)`. Never suggest shell commands that `cd` elsewhere before invoking `claude` for an existing task.
- One live `claude` process per worktree. If a target task is currently `running`, follow-up prompts are queued, not launched in parallel inside the same worktree. This is how we preserve edit-consistency.
- You mirror parallel-agents state into Claude Code's native task list via the `TaskCreate`/`TaskUpdate`/`TaskList` tools. The user sees the task list — that is your primary UI. Keep it flat (one todo per parallel session); use the content field to note dependencies with "blocked by ".
- Do not auto-destroy worktrees. When you detect a merged PR, mark the corresponding todo as `completed`. The user will later ask you to "close all completed tasks" (or similar); that is when you run the cleanup command.
- Run a status-refresh pass whenever you are about to spawn or destroy tasks. Do not run one on every turn — only around mutations.
- Honour the monitoring config. When `monitoring.mode == "live"`, every successful `task start` must be followed by launching a live watcher (see below). When `monitoring.mode == "wait"`, do not launch a watcher — rely on status refreshes.
The config file has a monitoring block that controls whether the skill watches spawned tasks live or polls on demand. Read it with:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" repo list
```

(The full config comes back in other calls too — use whichever is convenient; the fields you care about are `config.monitoring.mode` and `config.monitoring.event_granularity`.)
`monitoring.mode`

- `live` (current default) — after every `task start`, the skill runs `paw.py task wait` in a Bash background and attaches the `Monitor` tool. The assistant gets woken up when events arrive.
- `wait` — no live watcher is launched. The skill only learns about completion when the user asks for status or when a subsequent mutation triggers a refresh pass.
`monitoring.event_granularity` (only used when `mode == "live"`)

- `exit` (default) — `task wait` emits one `WATCHING` line on attach and one `EXITED` line when the process dies. Quietest; one wake-up per task. Best for "notify me when it's done" workflows.
- `tool` — additionally emits `TOOL task=<id> tool=<name>` for each parsed tool-use event in the log stream. Narrates progress at the cost of more wake-ups (one per Edit/Write/Bash/etc. inside the child).
- `all` — streams every `--include-hook-events` log line as a `LOG task=<id>: …` event. Very noisy; use only when debugging what a headless session is actually doing.
The plugin auto-migrates older config files on first load: if `monitoring` is missing, defaults are filled in in-memory on read and persisted on the next init.
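The migration described above is simple enough to sketch. This is a hypothetical reconstruction, not `paw.py`'s actual code — the default values (`live`, `exit`) come from the mode descriptions above, and everything else is an assumption:

```python
# Hypothetical sketch of the in-memory monitoring-defaults migration.
DEFAULT_MONITORING = {"mode": "live", "event_granularity": "exit"}

def migrate_config(config: dict) -> dict:
    """Fill in missing monitoring fields without mutating the caller's dict."""
    migrated = dict(config)
    monitoring = dict(DEFAULT_MONITORING)
    monitoring.update(migrated.get("monitoring", {}))  # user values win
    migrated["monitoring"] = monitoring
    return migrated
```

The key property is that explicit user settings survive while missing fields are back-filled, so an old config keeps working unchanged.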
Config + state live at $PWD/.claude/parallel-agents/. Every action under this skill must start with this check, and if the check fails you run the init sub-flow inline — do not tell the user to run a separate command. This is a skill responsibility, not a slash command.
Check:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" repo list
```

Three possible states:

1. Helper fails / files missing — config hasn't been created yet. Run the full init sub-flow below.
2. `repositories` array is empty — files exist but no repos registered. Run the init sub-flow from Step 3 onward.
3. `repositories` array has entries — ready to use. Proceed with the requested action.
Use this whenever you detect state 1 or 2 above. Tell the user once: "I need to set up parallel-agents before I can do that — this only happens once." Then:
Step 1 — Create config + state files and scan for sibling repos:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
  init --discover-from "$PWD"
```

Parse the JSON. Note `config_created`, `state_created`, and the `siblings` array (sibling directories of the current project that contain a git repo).
Step 2 — Offer sibling repos for registration via AskUserQuestion (multiSelect). Options: one entry per sibling path, plus "Custom path…" (user types an absolute path) and "None — I'll add later".
If the user picks "None", tell them they can run the setup again later by asking for any parallel-agents action. Stop the init sub-flow without registering anything (but the original request they made still needs a registered repo, so you'll re-prompt next time).
Step 3 — Register each selected repo. For each chosen path:
- Propose a short name (default: basename of the path). Only ask via `AskUserQuestion` if the name collides with an existing registered repo.
- Detect the default branch:

  ```bash
  git -C "<path>" symbolic-ref --short refs/remotes/origin/HEAD 2>/dev/null | sed 's|origin/||'
  ```

  If empty, ask the user via `AskUserQuestion` (singleSelect with the output of `git -C <path> branch --format '%(refname:short)'`).
- Register:

  ```bash
  python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
    repo add \
    --name "<name>" \
    --path "<path>" \
    --default-branch "<branch>"
  ```

Registration also appends `.claude/worktrees/` to `<repo>/.git/info/exclude` so future worktrees don't show as untracked in the parent repo.
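What that exclude-file append could look like, as a hedged sketch — the real registration code lives in `paw.py` and is not shown in this file; the idempotency check is an assumption about sensible behavior:

```python
# Hypothetical: append ".claude/worktrees/" to <repo>/.git/info/exclude once.
from pathlib import Path

def ensure_worktrees_excluded(repo_path: str) -> bool:
    """Append the exclude entry idempotently; return True if the file changed."""
    exclude = Path(repo_path) / ".git" / "info" / "exclude"
    exclude.parent.mkdir(parents=True, exist_ok=True)
    entry = ".claude/worktrees/"
    lines = exclude.read_text().splitlines() if exclude.exists() else []
    if entry in lines:
        return False  # already registered — safe to call on every repo add
    with exclude.open("a") as f:
        f.write(entry + "\n")
    return True
```

Using `info/exclude` rather than `.gitignore` keeps the entry local to the clone, so the plugin never dirties the repo's tracked files.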
Step 4 — Confirm final state:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" repo list
```

Show the user the registered repos.
Step 5 — Resume the user's original request. Go back to whatever action triggered the init sub-flow (start / status / send / destroy / cleanup) and execute it. The user should not have to repeat themselves.
This is the exhaustive list of `paw.py` calls you are allowed to make. All of them emit JSON on stdout.

Data root argument (always the same): `--root "$PWD/.claude/parallel-agents"`
Repo discovery / listing:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" repo list
```

Task creation — new worktree, new branch, state entry (no session yet):
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
  task create \
  --type prd \
  --repo "<repo-name>" \
  --task-id "PRD-<N>" \
  --description "<short label>"
```

Generic variant (auto-allocates `GEN-<NNN>`):
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
  task create \
  --type generic \
  --repo "<repo-name>" \
  --description "<short label>"
```

With dependency annotation:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
  task create \
  --type prd \
  --repo "<repo-name>" \
  --task-id "PRD-<N>" \
  --description "<short label>" \
  --blocked-by "PRD-<M>,PRD-<K>"
```

`--blocked-by` is informational only — it populates `task.blocked_by` so you can render "blocked by PRD-" in the todo content. No enforcement; you are still free to start the task.
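Rendering that annotation from a task record might look like the following sketch. The field names `task_id`, `description`, and `blocked_by` are assumptions about the `task create` JSON response, not a documented schema:

```python
# Hypothetical todo-content renderer for the "blocked by" annotation above.
def todo_content(task: dict) -> str:
    content = f"{task['task_id']}: {task['description']}"
    blocked_by = task.get("blocked_by") or []
    if blocked_by:
        content += f" (blocked by {', '.join(blocked_by)})"
    return content
```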
Start (first session launch) — spawn headless claude in the worktree:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
  task start \
  --task-id "<id>" \
  --prompt "<prompt>"
```

The prompt for a PRD task is literally `/implement-prd PRD-<N>`. For a generic task it's the raw user text. `paw.py` assembles the full `claude` argv:

```bash
claude -p "<prompt>" --session-id <new-uuid> --name "<id>: <description>" --dangerously-skip-permissions --include-hook-events
```

- cwd set to the worktree path.
- stdout/stderr redirected to `<root>/logs/<task-id>.jsonl`.
Send follow-up prompt (queue-aware):
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
  task send \
  --task-id "<id>" \
  --prompt "<prompt>"
```

- If the target task is `running`, the prompt is queued.
- If the target task is `exited` or `created`, `paw.py` spawns immediately via `claude --resume <session-id> -p "<prompt>"` (keeps the same session history).
- Response JSON has `dispatched: bool` and `queued_count: int`.
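Turning that response into a user-facing report could be as small as this sketch. The response shape (`dispatched`, `queued_count`) is the one stated above; the report wording is an assumption:

```python
# Hypothetical: interpret a `task send` JSON response for the user summary.
def describe_send(task_id: str, response: dict) -> str:
    if response["dispatched"]:
        # A fresh `claude --resume` process was spawned — watcher needed.
        return f"{task_id}: dispatched (resume) — attach a new live watcher"
    # Queued behind the currently running process.
    return f"{task_id}: queued (position {response['queued_count']})"
```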
Broadcast to all active tasks:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
  task broadcast \
  --prompt "<prompt>"
```

Destroyed tasks are ignored. Running tasks receive the prompt in their queue; exited ones dispatch immediately.
Status of one task — reconciles running→exited transitions and drains the queue if the task is now idle:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
  task status \
  --task-id "<id>" \
  --tail 20
```

List all active tasks (used by the status-refresh pass):
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
  task list
```

Add `--include-history` to see destroyed ones too.
PR lookup for a task (PRD tasks only — uses `gh pr list --head <branch>`):

```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
  task pr-check \
  --task-id "<id>"
```

Attach info — returns the shell command for interactive resume:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
  task attach \
  --task-id "<id>"
```

Destroy one task — removes the worktree, kills any live process, marks it destroyed:
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
  task destroy \
  --task-id "<id>" \
  [--delete-branch] \
  [--force]
```

- `--delete-branch` force-deletes the local branch; only use it when the PR is merged or for generic tasks the user wants fully purged.
- `--force` is needed if the worktree has uncommitted changes.
Bulk cleanup of merged PRD tasks — refreshes PR state for every active PRD task and destroys those whose PR is `MERGED`, with `--delete-branch`:

```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
  task cleanup
```

Add `--dry-run` to see what would be destroyed without touching anything.
Live watcher — blocks until the task exits, emits event lines to stdout. Designed to be launched via `Bash(run_in_background=true)` and consumed via `Monitor`:

```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
  task wait \
  --task-id "<id>" \
  --granularity <exit|tool|all>
```

Output line formats:

- `WATCHING task=<id> pid=<N> granularity=<mode>` — always emitted once on attach
- `LOG task=<id>: <raw log line>` — only when `granularity=all`
- `TOOL task=<id> tool=<name>` — only when `granularity=tool` and a tool-use event was parsed
- `EXITED task=<id>` — always emitted when the process dies (exit line)
- `TIMEOUT task=<id>` — if a `--timeout` was set and it elapsed before the process died
- `ERROR task=<id> reason=<reason>` — misconfiguration (no session, invalid granularity, etc.)

Exit codes: 0 normal exit, 2 invalid granularity, 3 task not found, 4 no process to watch, 5 timeout hit.
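A minimal parser for the event lines listed above, as a sketch. The line formats are the ones stated in this file; treating the event name as the first token followed by `key=value` pairs is an assumption that holds for every format shown except `LOG`, which carries a free-form payload:

```python
# Hypothetical parser for task-wait event lines (WATCHING/TOOL/EXITED/etc.).
def parse_event(line: str) -> dict:
    if line.startswith("LOG task="):
        # LOG lines carry a raw payload after "LOG task=<id>: ".
        head, _, payload = line.partition(": ")
        return {"event": "LOG", "task": head[len("LOG task="):], "payload": payload}
    event, *pairs = line.split()
    fields = dict(pair.split("=", 1) for pair in pairs)
    return {"event": event, **fields}
```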
Triggered when the user asks to implement/start/run a PRD, a batch of PRDs, or a generic prompt in a worktree.
Run task list to see what's already in flight, so you can:
- detect tasks whose sessions have exited (the status call will reap them)
- show the user an up-to-date picture before adding more
- avoid duplicating a task id that already exists
```bash
python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" task list
```

For each task in the response with `status == "running"`, also call `task status --task-id <id>` to trigger per-task reconciliation + queue drain. This also refreshes merged-PR detection if the task has a PR.
Sync the TaskCreate todo list with what you just learned:
- For tasks you didn't have a todo for yet, create one via `TaskCreate` with `status: in_progress` and content describing it (id, repo, branch, worktree path, session id, initial prompt).
- For tasks whose state changed since the last todo snapshot (exited, PR merged), call `TaskUpdate` accordingly:
  - session running → `in_progress`
  - session exited, no PR yet → `in_progress` with a content note "session ended — awaiting PR"
  - PR merged → `completed` (but do not destroy)
  - PR open / closed (not merged) → `in_progress` with a note about PR state
Apply to `$ARGUMENTS` or to the user's natural-language message:

1. Pure PRD implementation — matches `^\s*(PRD-\d+[,\s]*)+$` OR contains PRD ids with explicit implementation verbs (implement, build, start, ship, run, do). Go to Step 3a with the extracted list of PRD ids.
2. PRD destructive/status intent — "close PRD-010", "destroy PRD-010", "finish PRD-010", "mark PRD-010 done". Jump to the Destroy one task action instead.
3. PRD with ambiguous intent — PRD ids present but the verb is unclear ("look into PRD-010", "check PRD-010", "what's in PRD-010"). Use `AskUserQuestion` (singleSelect, per PRD): "Implement it", "Research it (generic task)", "Cancel".
4. Generic prompt — no PRD ids, or PRDs are only incidental references. Go to Step 3b with the user's prompt as the task description.
For each PRD id:

1. Pick the target repo. If exactly one is registered, use it. Otherwise use `AskUserQuestion` (singleSelect, options = repo names) to ask which repo this PRD goes into. Ask per-PRD since different ones might go different places.

2. Derive a short description (≤60 chars). Try to read the PRD title from `$PWD/prd/PRD-<N>*.md` (first `#` heading or frontmatter `title`). Fall back to the literal id.

3. Detect dependencies. If the PRD file has a `Depends on:` line in its frontmatter and any of those dependency ids are currently active tasks, pass them via `--blocked-by`. Also add a note in the TaskCreate content: "blocked by: ".

4. Create the worktree:

   ```bash
   python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
     task create \
     --type prd \
     --repo "<repo>" \
     --task-id "PRD-<N>" \
     --description "<short>" \
     [--blocked-by "PRD-<M>,…"]
   ```

5. Launch the session with `/implement-prd`:

   ```bash
   python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
     task start \
     --task-id "PRD-<N>" \
     --prompt "/implement-prd PRD-<N>"
   ```

6. Attach the live watcher if `monitoring.mode == "live"`. See the "Live watcher launch" procedure below. Do this once per task, right after `task start` succeeds and before moving to the next task.

7. Create the todo via `TaskCreate`:
   - content: `PRD-<N>: <short description>` (+ "blocked by …" if relevant)
   - status: `in_progress`
   - activeForm: `Running PRD-<N>` or similar

8. If any step fails, surface the stderr message and move on to the next PRD — don't abort the whole batch. Reflect failed ones in todos too (`in_progress` with a note about the failure, or skip creating the todo entirely).
- Ask for a short label (≤60 chars) via `AskUserQuestion` if not obvious from the prompt.
- Ask for the repo if >1 registered.
- Create + start using the same two commands, with `--type generic` and the raw user text as `--prompt`:

  ```bash
  python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
    task create \
    --type generic \
    --repo "<repo>" \
    --description "<label>"
  ```

  Note the allocated `GEN-<NNN>` id from the JSON response.

  ```bash
  python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
    task start \
    --task-id "GEN-<N>" \
    --prompt "<raw user text>"
  ```

- Attach the live watcher if `monitoring.mode == "live"` (see procedure below).
- Create a `TaskCreate` todo entry for it.
Run this any time a new headless process was just spawned and `monitoring.mode == "live"`. The trigger points are:

- After every successful `task start` (always a fresh spawn).
- After `task send` or `task broadcast` on a task where the response shows `dispatched: true` (means the helper spawned a new `claude --resume` process). Do not launch a watcher when `dispatched: false` (that means the prompt was queued behind a currently-running process — the existing watcher, if any, already covers that pid; and if there's no existing watcher, the next `dispatched: true` will be your hook).

Skip this procedure entirely when `monitoring.mode == "wait"`.
1. Ensure `Monitor` is loaded. `Monitor` is a deferred Claude Code tool — it may not be in the current tool set. If you don't already see it as available, load it once via `ToolSearch`:

   ```
   ToolSearch { query: "select:Monitor", max_results: 1 }
   ```

   Idempotent: loading an already-loaded tool is a no-op.

2. Launch the watcher directly via `Monitor`. `Monitor` takes the shell command as its own parameter — do not wrap it in a `Bash(run_in_background=true)` call first. One `Monitor` tool call per task:

   ```
   Monitor {
     command: 'python3 /absolute/path/to/.claude/plugins/parallel-agents/scripts/paw.py --root /absolute/path/to/.claude/parallel-agents task wait --task-id <task-id> --granularity <granularity> --poll-interval 1.0',
     description: "<task-id> live watcher (<granularity>)",
     timeout_ms: 900000,
     persistent: false
   }
   ```

   Notes on the arguments:

   - Use absolute paths for both `paw.py` and `--root`. `${CLAUDE_PLUGIN_ROOT}` and `$PWD` get expanded by shells but not always reliably inside `Monitor` — safest to hardcode the resolved paths. You can get them once via `echo "$PWD"` at the start of the skill invocation.
   - `<granularity>` = the value of `config.monitoring.event_granularity` (`exit`, `tool`, or `all`).
   - `--poll-interval 1.0` gives `cmd_task_wait` a 1s heartbeat — fast enough to catch exits within a second, slow enough to not burn CPU.
   - `timeout_ms` — 900000 (15 min) is a reasonable ceiling for most PRD-style runs. Raise to 3600000 (1 h, max) for long refactors. Use `persistent: true` if you truly don't know how long the task will take; the watcher will run for the life of the session.
   - `description` shows in every event notification; make it specific (include the task id and granularity).

3. Each stdout line from `task wait` becomes an in-session notification. The notification body contains the exact line `task wait` printed. Expected event types:

   | Event line | When | What to do |
   |---|---|---|
   | `WATCHING task=<id> pid=<N> granularity=<mode>` | Fires once on attach | Acknowledge in summary; no action needed |
   | `TOOL task=<id> tool=<name>` | Only if granularity=`tool`, per parsed tool-use event in the child's log | Optionally narrate ("GEN-001 is editing now") |
   | `LOG task=<id>: <raw line>` | Only if granularity=`all`, per raw log line | Debug-only; don't surface to user unless asked |
   | `EXITED task=<id>` | Child process died | Run `task status --task-id <id>`; for PRD tasks run `task pr-check`; update the TaskCreate todo |
   | `TIMEOUT task=<id>` | `--timeout` deadline hit (rare — we don't set one) | Treat as a tool-side error; run manual status |
   | `ERROR task=<id> reason=<…>` | Watcher startup failure | Log it; fall back to manual `task status` checks for this task |

   You will also see a stream-ended lifecycle notification once the Monitor command exits. That is Monitor's own bookkeeping, not a `task wait` event — ignore it.

4. One watcher per (task, pid) generation, not per task. Every time the helper spawns a fresh process for a task (initial start, or a `dispatched: true` send/broadcast), that's a new pid, and you should attach a new `Monitor` — the old watcher has already exited with `EXITED` when its pid died. Do NOT attach two monitors to the same live pid.

5. Destroy cleanup is automatic. When the user destroys a task, `paw.py task destroy` sends SIGTERM to the live pid; the watcher sees the process disappear and emits `EXITED` within the poll interval (≤1s). The watcher process exits cleanly on its own — you don't need to `TaskStop` it.
Print a compact table of what was just created in this invocation:
| id | type | repo | branch | worktree | session |
|---|---|---|---|---|---|
Remind the user they can say "how's it going" for an update, "take me into " to jump in, or "close all completed" to clean up merged work.
Triggered by things like "how's it going", "any updates", "status", "what's running", "check on PRD-010".
- Run `task list` (see the Helper command reference). For each task with `status == "running"`, also run `task status --task-id <id>` to reap + drain.
- For each PRD task whose session is `exited` and whose `pr` field hasn't been refreshed recently, run `task pr-check --task-id <id>`.
- Update the TaskCreate todos to reflect what you learned (see Step 1 rules in the "start" action).
- Summarize to the user:
  - Tasks that changed state since last time
  - Tasks whose PRs just went `MERGED` — flag these loudly with "ready to close"
  - Tasks still running
  - Any queued prompts waiting to dispatch

Do not destroy anything in this action, no matter what you find.
Triggered by "tell PRD-010 to also X", "ask GEN-001 to Y", "broadcast Z to every worktree".
1. Parse the target and the prompt. Target can be a specific id or `all` (broadcast).

2. If the prompt is ambiguous, clarify with `AskUserQuestion`. If the target is ambiguous and there are multiple tasks, list them via `task list` and ask which.

3. For a single target:

   ```bash
   python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
     task send \
     --task-id "<id>" \
     --prompt "<prompt>"
   ```

4. For broadcast:

   ```bash
   python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
     task broadcast \
     --prompt "<prompt>"
   ```

5. Inspect the dispatch result per task. For `task send`, read the top-level `dispatched` field. For `task broadcast`, read `results[].dispatched` for every target.

6. Launch a fresh live watcher for each task where `dispatched: true` — only when `monitoring.mode == "live"`. A `dispatched: true` response means the helper just spawned a brand-new `claude --resume` process inside the worktree; its pid is different from any previous generation and any prior watcher has already exited. Follow the Live watcher launch procedure once per freshly dispatched task.

   Do not launch a watcher when `dispatched: false` — that means the prompt was appended to the queue of a process that is already running, and any existing watcher for that pid is still covering it. The queued prompt will fire on next idle, at which point calling `task status` (or the next `task send`) will drain the queue and dispatch with a new pid — attach a watcher then, not now.

7. Report per-task:
   - `dispatched: true` → "PRD-010: dispatched (resume, pid ), watcher attached"
   - `dispatched: false` → "GEN-001: queued (position N, behind running pid )"
   - broadcast: show each target in the list with the same format.

8. Optionally update the corresponding TaskCreate todos with a content note about the queued or dispatched follow-up.
Queued prompts fire the next time the task goes idle and something calls `task status`, `task send`, or `task broadcast` for it — this is the "status refresh around mutations" pattern; there's no background drainer. The live watcher has no effect on queue semantics — it only tells the assistant when the current pid has died. To actually drain the queue, something must call into `paw.py` again after the watcher fires `EXITED`.
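The semantics above can be modeled with a toy class — purely illustrative, not `paw.py`'s real implementation. The point it demonstrates: nothing dispatches a queued prompt until some later call re-enters the helper and finds the task idle:

```python
# Toy model of the queue-drain semantics: send() queues when busy,
# and only a later status() call reaps the dead pid and drains the queue.
class Task:
    def __init__(self):
        self.running = False
        self.queue = []
        self.dispatched = []

    def send(self, prompt: str) -> bool:
        """Return True if dispatched immediately, False if queued."""
        if self.running:
            self.queue.append(prompt)
            return False
        self.dispatched.append(prompt)
        self.running = True
        return True

    def status(self) -> None:
        """A status call reaps the exited process and drains one queued prompt."""
        self.running = False  # pretend the watcher already reported EXITED
        if self.queue:
            self.send(self.queue.pop(0))
```

There is no timer or background thread in the model, mirroring the "no background drainer" rule: the watcher's `EXITED` event only wakes the assistant, and the assistant's follow-up `task status` call is what moves the queue.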
Triggered by "take me into PRD-010", "open PRD-010 in VSCode", "attach to PRD-010", "how do I resume PRD-010 interactively".
1. Run `task attach` for the id:

   ```bash
   python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
     task attach \
     --task-id "<id>"
   ```

   This returns `worktree_path`, `session_id`, `session_name`, and `resume_command`.

2. Decide execute vs. print based on the user's verb:

   - "Open [in VSCode]" — the user wants the editor opened now. Execute `code "<worktree_path>"` via Bash (or `cursor "<worktree_path>"` if that's what's installed). Do not ask for confirmation first — opening an editor is local and reversible. After running it, report that the worktree was opened and then also print the terminal resume command (see below) so the user can jump into the session too if they want.
   - "How do I attach / resume / jump into" — the user wants instructions. Print both commands without executing. In this mode never run either.

3. Resume in terminal is always printed (not executed) because `claude --resume` is an interactive TUI that can't run under Claude Code's Bash tool:

   ```bash
   cd "<worktree_path>" && claude --resume <session_id>
   ```

   (This is the `resume_command` field from the JSON; use it verbatim.)

4. Open in VSCode command shape:

   ```bash
   code "<worktree_path>"
   ```

   VSCode treats each git worktree as its own workspace — the parent repo's main window is unaffected.

If `task attach` fails with "has no session yet", run the start action for this task instead.
Triggered by "close PRD-010", "destroy PRD-010", "remove the PRD-010 worktree".
1. Run `task list` to confirm the task exists and is active.

2. If the task type is `prd`, run `task pr-check`:

   ```bash
   python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
     task pr-check \
     --task-id "<id>"
   ```

   Interpret the `pr` field:
   - merged → destroy + delete branch recommended
   - open or closed (not merged) → destroy the worktree but keep the branch so no work is lost
   - no PR → warn that the branch hasn't been pushed; destroying now abandons any uncommitted work

3. Confirm with the user via `AskUserQuestion` (multiSelect): "Also delete local branch" (preselected iff PR is merged), "Force" (off by default, needed for a dirty worktree).

4. Run destroy:

   ```bash
   python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
     task destroy \
     --task-id "<id>" \
     [--delete-branch] \
     [--force]
   ```

5. If the response's `branch_delete_error` is non-null, surface it verbatim (usually "branch not fully merged" — it tells the user they'd be losing unmerged commits).

6. Update the corresponding TaskCreate todo to `completed`.
Triggered by "close all completed tasks", "clean up merged worktrees", "destroy anything that's done".
1. Run the bulk cleanup, which internally refreshes PR state for every active PRD task and destroys the merged ones:

   ```bash
   python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
     task cleanup
   ```

   Consider running with `--dry-run` first and showing the user what will happen:

   ```bash
   python3 ${CLAUDE_PLUGIN_ROOT}/scripts/paw.py --root "$PWD/.claude/parallel-agents" \
     task cleanup --dry-run
   ```

   Then confirm, then re-run without `--dry-run`.

2. Each `results[]` entry has an `action`:
   - `destroyed` — worktree removed, local branch deleted, task marked destroyed.
   - `would_destroy` — dry-run only.
   - `skipped` — reason in the `reason` field (usually `pr_state=OPEN` or `pr_state=none`).
   - `error` — show the error and the `pr` payload if present.

3. For every destroyed task, set its TaskCreate todo to `completed` (it should already be, but confirm).

4. Generic tasks are never auto-cleaned. If the user wants to remove generic tasks, use the single-destroy action on each.
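Summarizing the cleanup `results[]` payload for the user could look like this sketch. Only the `action` values are documented above; reading them off each entry's `action` key is an assumption about the JSON shape:

```python
# Hypothetical one-line summarizer for the cleanup results[] payload.
from collections import Counter

def summarize_cleanup(results: list) -> str:
    counts = Counter(r["action"] for r in results)
    parts = [f"{n} {action}" for action, n in sorted(counts.items())]
    return ", ".join(parts) if parts else "nothing to clean up"
```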
- One todo per parallel task. Flat, no nesting.
- Todo content includes: id, repo, branch, worktree path, session id, initial prompt (truncated), and any "blocked by " annotation.
- Status mapping:
  - `created` or `running` in paw state → `in_progress`
  - `exited` → `in_progress` with a content note "session ended"
  - PR state `MERGED` → `completed` (but the worktree still exists until cleanup)
  - `destroyed` in paw state → `completed` (worktree is gone)
- Updates happen in status-refresh passes, not on every user turn. The triggers are: before spawning, before destroying, when the user asks for status, before a cleanup pass.
- Never fabricate todos for tasks that don't exist in `paw.py`'s state. The source of truth is `task list` JSON.
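The status mapping above as a function — a sketch, where the input shape (a paw status string plus an optional PR state) is an assumption about the task-list JSON rather than a documented schema:

```python
# Hypothetical mapper: paw task state -> (todo status, optional content note).
def todo_status(paw_status, pr_state=None):
    if paw_status == "destroyed" or pr_state == "MERGED":
        return ("completed", None)
    if paw_status == "exited":
        return ("in_progress", "session ended")
    return ("in_progress", None)  # created / running
```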
These exist for quick inspection or on-demand docs without invoking the skill:
- `/pa-list` — raw table dump of active tasks (and `--all` for history)
- `/pa-status <id>` — one-task detail view with log tail
- `/pa-attach <id>` — prints the resume command, opens VSCode
- `/pa-help` — static reference card: task types, actions, config, modes, file locations
- `/pa-demo` — interactive guided demo that runs the full lifecycle against a real repo (creates 3 tasks, opens real PRs, lets the user play, then cleans up)
Use them when the user asks for them explicitly or when a quick raw view is simpler than the conversational flow. Init is not a slash command — it runs inline via the preconditions check at the top of this file whenever it's needed. Everything else (start / send / broadcast / destroy / cleanup / jump / init) goes through this skill.