Use this one-page guide when working with Copilot agents.
| Agent | Icon | Purpose |
|---|---|---|
| Product Manager | 📋 | Defines features, requirements, acceptance criteria |
| Orchestrator | 🎯 | Plans work, tracks dependencies, sequences tasks |
| Developer | 💻 | Writes code, implements features, fixes bugs |
| Tester | ✅ | Designs tests, validates code, writes test files |
| Reviewer | 🔍 | Reviews code, validates architecture, catches issues |
```
@product-manager
Create acceptance criteria for [feature]

Consider:
- Shop-floor reality (WiFi, device crashes)
- Interview talking points
- Cross-practice impact
```

```
@orchestrator
Break down this work into tasks:
[Feature description]

Provide:
- Task breakdown
- Dependencies
- Recommended sequence
```

```
@developer
Implement [feature name]

Context: [background]
Requirements: [list]
Files: [paths]
```

```
@tester
Write tests for [code/feature]

Must test:
- Happy path
- Error cases
- Edge cases
```

```
@reviewer
Review this code:
[Files changed]

Focus on:
- Type safety
- Error handling
- Performance
```
```
📋 Product Manager → Define feature
        ↓
🎯 Orchestrator → Plan tasks & dependencies
        ↓
💻 Developer → Implement feature
        ↓
✅ Tester → Write tests & validate
        ↓
🔍 Reviewer → Code review & approval
        ↓
🎯 Orchestrator → Mark complete
```
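If you script around this pipeline (for example, a checklist that tells you who picks up the work next), the hand-off order can be captured as plain data. This is an illustrative sketch; `handOffOrder` and `nextAgent` are invented names, not part of any Copilot API:

```typescript
// The hand-off order from the diagram above, as data.
// All names here are invented for this sketch.
const handOffOrder: ReadonlyArray<string> = [
  "product-manager", // define feature
  "orchestrator",    // plan tasks & dependencies
  "developer",       // implement feature
  "tester",          // write tests & validate
  "reviewer",        // code review & approval
  "orchestrator",    // mark complete
];

// Returns the agent that follows `current` in the pipeline
// (first occurrence), or undefined for unknown agents.
function nextAgent(current: string): string | undefined {
  const i = handOffOrder.indexOf(current);
  return i >= 0 ? handOffOrder[i + 1] : undefined;
}
```

For example, `nextAgent("developer")` returns `"tester"`, matching the diagram.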
Use this structure for better results:
```
@[agent-name]
[What you want]

Context:
- [Relevant background]
- [Related files]
- [Constraints]

Requirements:
- [Req 1]
- [Req 2]
- [Req 3]

Output:
- [What format do you want?]
```
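If you find yourself issuing the same structured prompt repeatedly, the template above is easy to generate programmatically. This is a minimal sketch; `PromptSpec` and `buildAgentPrompt` are hypothetical names invented here, not a Copilot API:

```typescript
// Hypothetical helper that renders the structured prompt shown above.
// PromptSpec and buildAgentPrompt are illustrative names only.
interface PromptSpec {
  agent: string;          // e.g. "developer" (without the leading @)
  goal: string;           // what you want
  context: string[];      // relevant background, related files, constraints
  requirements: string[]; // concrete, testable requirements
  output: string;         // the format you want back
}

function buildAgentPrompt(spec: PromptSpec): string {
  const bullets = (items: string[]): string =>
    items.map((item) => `- ${item}`).join("\n");
  return [
    `@${spec.agent}`,
    spec.goal,
    "Context:",
    bullets(spec.context),
    "Requirements:",
    bullets(spec.requirements),
    "Output:",
    `- ${spec.output}`,
  ].join("\n");
}
```

Calling `buildAgentPrompt({ agent: "developer", goal: "Implement X", ... })` yields a prompt that starts with `@developer` and follows the structure above.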
✅ Be Specific

❌ "Help me implement inventory"
✅ "Implement Apollo Client subscription for inventory updates in `practice-3-nextjs-graphql/lib/hooks/useInventorySubscription.ts`"

✅ Provide Context

❌ "Write a test"
✅ "Write Jest test for useInventorySubscription hook. Must test: subscription lifecycle, cache updates, error handling"

✅ Use Project Docs

- `.github/copilot-instructions.md` → Commands & conventions
- `DESIGN.md` → Architecture patterns
- `CLAUDE.md` → Technology details
- `.copilot/agents/` → Agent responsibilities
✅ Chain Agents (don't ask one to do everything)

- @developer → write code
- @tester → write tests
- @reviewer → review code
✅ Reference Previous Steps

Based on this implementation from Developer:
[paste the code]
Now @tester, write tests for it...
- @product-manager → Define what it does
- @orchestrator → Plan impact & blockers
- @developer → Implement activity
- @tester → Write unit & integration tests
- @reviewer → Verify idempotency & error handling

- @product-manager → Define data model
- @orchestrator → Check schema impact
- @developer → Create migration + metadata
- @tester → Write query/subscription tests
- @reviewer → Check relationships & constraints

- @orchestrator → Diagnose & plan fix
- @developer → Implement fix
- @tester → Write regression test
- @reviewer → Validate fix completeness
- Ask one agent to design + implement + test + review (breaks specialization)
- Ask one agent to work on 2+ unrelated tasks (loses focus)
- Make architectural decisions without context from Orchestrator (miss constraints)
- Ignore escalation criteria (blockers compound)
- Use default model for complex multi-practice tasks (insufficient reasoning)
Instead: Chain agents by specialty and follow escalation thresholds
```
@orchestrator
The inventory subscription is slow. How do we approach this?
↓ [get diagnosis & tasks]

@developer
Implement the optimizations suggested
↓ [get optimized code]

@tester
Write performance tests ensuring <500ms
↓ [get performance tests]

@reviewer
Verify the approach is correct
↓ [get approval]
```
| ❌ Anti-Pattern | ✅ Better Approach |
|---|---|
| "Help me" (too vague) | "Implement X with these requirements" |
| No context | Include files, constraints, background |
| One agent for everything | Chain agents by specialty |
| "Is this right?" (no specifics) | "@reviewer Check for [specific issues]" |
| Too much rambling | Concise, structured requirements |
Use these GitHub Copilot CLI commands to enhance your workflow:
| Command | Purpose | Common Use |
|---|---|---|
| `/plan` | Create implementation plan | Start complex feature, define approach |
| `/diff` | Review changes before committing | Validate changes before pushing |
| `/review` | Automated code review | Find bugs and issues before merge |
| `/ask` | Ask clarifying questions | Unblock without changing context |
| `/delegate` | Send work to GitHub (auto-create PR) | Escalate multi-practice or blocker issues |
| `/lsp` | Language server for code intelligence | Navigate, find definitions, refactor |
| `/tasks` | View and manage background tasks | Monitor long-running operations |
| `/fleet` | Enable parallel subagent execution | Run multiple agents in parallel |
Example Usage:
- `/plan` → Create feature implementation plan
- `/diff` → Review your code changes before git push
- `/ask` → "What's the best way to structure this?" (without losing context)
- `/review` → Get automated code quality check
- `/delegate` → "This impacts 3 practices, escalate to GitHub PR"
Default Model: Claude Haiku 4.5 (cost-efficient, fast)
When to Use Premium Models (requires explicit /model command):
- `gpt-5.4` → Complex multi-practice architectural decisions
- `claude-sonnet-4.6` → Large codebase analysis, refactoring
- `claude-opus-4.6` → Emergency high-complexity debugging
How to Override:
```
/model gpt-5.4

@developer
[Your task that needs premium reasoning]

Justification: Multi-practice impact requires complex tradeoff analysis
```
Cost Control: Premium model requests are logged. Use sparingly for genuinely complex work.
- Handle: 0–1 concurrent blockers
- Escalate: 2+ concurrent blockers OR work blocked >2 hours
- Red Flags: Multi-practice dependencies without clear sequence

- Approve: 0–10% scope creep (feature aligned with original goal)
- Review: 10–30% scope creep (borderline, needs refinement)
- Restart: >30% scope creep (restart requirement gathering)
- Block PR: Code coverage <80% (non-negotiable)
- Report Flaky: Tests pass <95% consistently (investigate)
- Escalate: Any test takes >5 seconds to run (performance issue)
- Block PR: Critical bugs or security issues (blocker red flags)
- Request Changes: Type safety issues, missing error handling
- Approve: Minor code style issues only (non-blocking)
- Escalate to Orchestrator: Multi-practice impact unclear OR depends on unfinished task
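The numeric thresholds above are mechanical enough to sketch as code, for example in a CI or triage script. This is an illustrative encoding of the matrix, not a real tool; all names are invented:

```typescript
// Illustrative encoding of the escalation thresholds above.
// All names are invented for this sketch; none is a real API.
type OrchestratorAction = "handle" | "escalate";
type ScopeAction = "approve" | "review" | "restart";
type TesterAction = "approve" | "block-pr";

function orchestratorAction(
  concurrentBlockers: number,
  hoursBlocked: number,
): OrchestratorAction {
  // Escalate on 2+ concurrent blockers OR work blocked for more than 2 hours.
  return concurrentBlockers >= 2 || hoursBlocked > 2 ? "escalate" : "handle";
}

function scopeCreepAction(creepPercent: number): ScopeAction {
  if (creepPercent <= 10) return "approve"; // feature aligned with original goal
  if (creepPercent <= 30) return "review";  // borderline, needs refinement
  return "restart";                         // restart requirement gathering
}

function coverageGate(coveragePercent: number): TesterAction {
  // Coverage below 80% blocks the PR (non-negotiable).
  return coveragePercent < 80 ? "block-pr" : "approve";
}
```

For instance, `scopeCreepAction(20)` returns `"review"` and `coverageGate(75)` returns `"block-pr"`, matching the thresholds above.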
How Copilot agents use CLI tools:
| Agent | Primary Tools | Secondary Tools | Rarely Used |
|---|---|---|---|
| Orchestrator | `/plan`, `/ask`, `/delegate` | `/diff`, `/tasks` | `/lsp` |
| Product Manager | `/ask`, `/plan` | `/review` | `/delegate` |
| Developer | `/lsp`, `/diff`, `/plan` | `/ask`, `/review` | `/fleet` |
| Tester | `/plan`, `/ask`, `/diff` | `/review`, `/tasks` | `/delegate` |
| Reviewer | `/review`, `/diff`, `/lsp` | `/ask`, `/tasks` | `/plan` |
When to Use Each:
- `/ask` → Clarify without escalating (Developer → Orchestrator)
- `/delegate` → Escalate blocker to GitHub (all agents)
- `/diff` → Validate before commit (all agents)
- `/fleet` → Run parallel tasks (Orchestrator coordinating agents)
Instead of generic "Red Flags", use specific escalation criteria:
| Scenario | What to Do |
|---|---|
| 1 blocker, <2 hours | Orchestrator handles directly |
| 2+ blockers OR >2 hours blocked | Orchestrator escalates via /delegate |
| Scope creep 5% | Product Manager approves |
| Scope creep 20% | Product Manager refines with stakeholder |
| Scope creep 35% | Product Manager escalates via /delegate to restart requirement gathering |
| Test coverage 85% | Tester approves |
| Test coverage 75% | Tester blocks PR, requests additional tests |
| Code has type errors | Reviewer blocks PR |
| Code has style issues | Reviewer requests minor changes (non-blocking) |
| Task affects 2+ practices, impact unclear | Developer escalates via /ask to Orchestrator |
For advanced workflows, see: .copilot/agents/README.md
This guide includes:
- Communication flow diagram showing how agents interact
- CLI commands matrix (which agent uses which commands)
- 3 real-world multi-agent scenarios with exact command sequences
- Model override coordination policy
- Complete escalation matrix with decision trees
- `docs/agent-prompt-flows.md` → Full onboarding guide (you're reading an excerpt)
- `.copilot/agents/` → Agent documentation (read for details)
- `.copilot/agents/README.md` → Meta-Agent Collaboration guide (advanced)
- `.github/copilot-instructions.md` → Build/test commands & conventions
- `DESIGN.md` → Architecture patterns
- For workflow? → Ask @orchestrator
- For implementation? → Ask @developer
- For code review? → Ask @reviewer
- For testing? → Ask @tester
- For requirements? → Ask @product-manager
Pro Tip: Save this file and reference it during development. The full guide is in `docs/agent-prompt-flows.md`.