---
description: Two-model peer-review with cross-validation and confirmed-issues synthesis
argument-hint: <model1> <model2> [scope]
---
You are the orchestrator of a two-phase peer-review workflow. Execute the steps below in order.
## Step 1: Build the review target

User-supplied scope argument: $3

If that argument is empty, build the diff yourself:

- Run `git rev-parse --abbrev-ref HEAD` to get the current branch name.
- Run `git diff $(git merge-base origin/<branch> HEAD)` to capture every change since the last push to origin. Diffing the working tree against the merge base picks up staged, unstaged, and committed-but-not-yet-pushed changes, not just working-tree modifications (the three-dot form `git diff origin/<branch>...HEAD` would miss uncommitted work).
- If the branch has no remote tracking branch, fall back to `git diff HEAD~1..HEAD`.
Capture the resulting diff as the review target. Paste it inline into each reviewer's task prompt.
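A minimal shell sketch of this logic, assuming a POSIX shell; the `review-target.diff` output path is illustrative, not part of the workflow:

```sh
# Build the review target: everything since the last push, or the last commit as fallback.
branch=$(git rev-parse --abbrev-ref HEAD)
if git rev-parse --verify -q "origin/$branch" >/dev/null; then
  base=$(git merge-base "origin/$branch" HEAD)
  git diff "$base" > review-target.diff      # committed + staged + unstaged since divergence
else
  git diff HEAD~1..HEAD > review-target.diff # no remote tracking branch: last commit only
fi
```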
## Step 2: Locate the plan file

Search for a plan file in this order: plan.md, PLAN.md, docs/plan.md, docs/PLAN.md, planning/*.md, docs/superpowers/plans/*.md.

Also check for files matching the user-supplied scope (e.g. if the argument says to verify phase0, look for a doc named phase0-something.md in any of the expected locations).

Read the first one found, confirm it is the plan in question, and note its path. If you have any doubt, stop and present your findings to the user; they should either clarify which plan to follow or confirm that none applies.
Always tell the user which plan file you are using (or that none was found) before proceeding to Step 3. This lets them catch a wrong match early.
If a plan file exists, both reviewers must evaluate whether the implementation faithfully follows the plan — missed steps, deviations, and scope creep must all be flagged as issues.
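A minimal sketch of the ordered search, assuming a POSIX shell (a glob that matches nothing stays literal and fails the `-f` test, so it is skipped):

```sh
# Print the first plan file found, in priority order.
for f in plan.md PLAN.md docs/plan.md docs/PLAN.md planning/*.md docs/superpowers/plans/*.md; do
  [ -f "$f" ] && { echo "$f"; break; }
done
```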
## Step 3: Launch the reviewers

Launch both reviewer subagents simultaneously with async: true and context: "fresh", using the model overrides below. Save both run IDs.

- Reviewer A → model `$1`
- Reviewer B → model `$2`
Each reviewer's task prompt must contain:

- The full review target (diff or scope from Step 1).
- The plan file path and full contents (if found), with this explicit instruction: "Verify the implementation follows this plan step by step. Any deviation, missing step, or out-of-scope change is a finding."
- This instruction: "You are review-only. Do NOT edit any files. Return your findings exclusively as a single Markdown table with columns `| Severity | File/Area | Issue | Recommendation |`. Severity values: Critical / High / Medium / Low / Info. If you find nothing, return an empty table with a short note."
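For illustration only, with hypothetical file and issue content, a reviewer's findings table might look like:

```markdown
| Severity | File/Area | Issue | Recommendation |
|---|---|---|---|
| High | src/auth.ts | Token expiry is never checked before reuse | Validate the expiry before reusing the cached token |
| Info | README.md | New flag is undocumented | Add a short usage note |
```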
After launching, tell the user that both reviews are running in the background and the conversation is free: you can continue doing other useful local work (or end your turn), and you will poll for results before proceeding to the cross-review phase.

## Step 4: Collect and display findings

Poll `subagent({ action: "status", id: "..." })` for each run until both are complete. Once both finish, extract the findings tables and display them under clearly labelled headings:
Review A — $1 (table)
Review B — $2 (table)
## Step 5: Cross-review

Each model now peer-reviews the other model's findings. Launch both with async: true and context: "fresh":

- Cross-Reviewer 1 → model `$1`, receives Reviewer B's findings table.
- Cross-Reviewer 2 → model `$2`, receives Reviewer A's findings table.
Task prompt for each cross-reviewer:
"You are given a code-review findings table produced by another model. For each row: (1) verify the issue is real and present in the diff, (2) identify false positives or misreadings, (3) note any significant issue the reviewer clearly missed that you can see from the diff. Return a single Markdown table with columns | # | Original Issue (brief) | Verdict | Notes |. Verdict values: Confirmed / False Positive / Needs Clarification."
Include the original diff (from Step 1) in each cross-reviewer's task so they can validate findings against the actual code.
Save both run IDs.
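For illustration only, again with hypothetical content, a cross-review verdict table might look like:

```markdown
| # | Original Issue (brief) | Verdict | Notes |
|---|---|---|---|
| 1 | Token expiry never checked | Confirmed | Reproducible from the diff |
| 2 | Unused import | False Positive | The import is used in a test helper |
```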
## Step 6: Synthesis

Poll until both cross-reviews are complete. Then apply these rules to build the final output:
- Confirmed — issue raised by a reviewer and upheld (or independently raised) by the cross-reviewer. → Include in final table.
- False Positive — issue flagged as "False Positive" by the cross-reviewer. → Move to Discarded section.
- Needs Clarification — cross-reviewer is uncertain. → Surface separately.
Present the synthesis in three clearly labelled sections:

**Confirmed Issues**, as a table (Source values: Both models / $1 → upheld by $2 / $2 → upheld by $1):

| Severity | File/Area | Issue | Source | Recommendation |
|---|---|---|---|---|

**Needs Clarification** (issues where the cross-reviewer could not reach a verdict; include the original issue and the cross-reviewer's question)

**Discarded** (brief list: original issue plus the reason the cross-reviewer rejected it)
Close with a one-line summary: N confirmed · M needs clarification · K discarded