@pluto-atom-4 · Last active April 18, 2026
Copilot Agent Utilization

Agent Quick Reference Card

Use this one-page guide when working with Copilot agents.

⚠️ First Time? Read the Code Quality Standards to understand ESLint, Prettier, and npm audit requirements.


The Agent Team

Agent            Icon  Purpose
Product Manager  📋    Defines features, requirements, acceptance criteria
Orchestrator     🎯    Plans work, tracks dependencies, sequences tasks
Developer        💻    Writes code, implements features, fixes bugs
Tester           ✅    Designs tests, validates code, writes test files
Reviewer         👀    Reviews code, validates architecture, catches issues

When to Use Each Agent

📋 Product Manager

@product-manager

Create acceptance criteria for [feature]

Consider:
- Shop-floor reality (WiFi, device crashes)
- Interview talking points
- Cross-practice impact

🎯 Orchestrator

@orchestrator

Break down this work into tasks:
[Feature description]

Provide:
- Task breakdown
- Dependencies
- Recommended sequence

💻 Developer

@developer

Implement [feature name]

Context: [background]
Requirements: [list]
Files: [paths]

✅ Tester

@tester

Write tests for [code/feature]

Must test:
- Happy path
- Error cases
- Edge cases
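
The three categories above can be illustrated with a small hypothetical helper. Note that `parseQuantity` and its validation rules are invented for this sketch; they are not part of the project:

```typescript
// Hypothetical example: the three test categories applied to a small
// inventory helper. parseQuantity is invented for illustration only.
function parseQuantity(input: string): number {
  const n = Number(input);
  if (!Number.isInteger(n) || n < 0) {
    throw new Error(`invalid quantity: ${input}`);
  }
  return n;
}

// Happy path: well-formed input returns the expected value.
console.assert(parseQuantity("42") === 42);

// Error case: malformed input raises a descriptive error.
let threw = false;
try { parseQuantity("-3"); } catch { threw = true; }
console.assert(threw);

// Edge case: a boundary value (zero) is still valid.
console.assert(parseQuantity("0") === 0);
```

A prompt that names the function, the file, and these three case types gives the Tester everything it needs in one message.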

👀 Reviewer

@reviewer

Review this code:
[Files changed]

Focus on:
- Type safety
- Error handling
- Performance

Feature Development Flow

📋 Product Manager → Define feature
        ↓
🎯 Orchestrator → Plan tasks & dependencies
        ↓
💻 Developer → Implement feature
        ↓
✅ Tester → Write tests & validate
        ↓
👀 Reviewer → Code review & approval
        ↓
🎯 Orchestrator → Mark complete

Prompt Template

Use this structure for better results:

@[agent-name]

[What you want]

Context:
- [Relevant background]
- [Related files]
- [Constraints]

Requirements:
- [Req 1]
- [Req 2]
- [Req 3]

Output:
- [What format do you want?]

Quick Tips

✅ Be Specific

❌ "Help me implement inventory"
✅ "Implement Apollo Client subscription for inventory updates
    in practice-3-nextjs-graphql/lib/hooks/useInventorySubscription.ts"

✅ Provide Context

❌ "Write a test"
✅ "Write a Jest test for the useInventorySubscription hook.
    Must test: subscription lifecycle, cache updates, error handling"

✅ Use Project Docs

  • .github/copilot-instructions.md — Commands & conventions
  • DESIGN.md — Architecture patterns
  • CLAUDE.md — Technology details
  • .copilot/agents/ — Agent responsibilities

✅ Chain Agents (don't ask one to do everything)

@developer → write code
@tester → write tests
@reviewer → review code

✅ Reference Previous Steps

Based on this implementation from Developer:
[paste the code]

Now @tester, write tests for it...

Common Scenarios

Adding a New Temporal Activity

  1. @product-manager → Define what it does
  2. @orchestrator → Plan impact & blockers
  3. @developer → Implement activity
  4. @tester → Write unit & integration tests
  5. @reviewer → Verify idempotency & error handling
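
A minimal sketch of the idempotency property the Reviewer verifies in step 5. This does NOT use the real Temporal SDK; `adjustStock` and the in-memory key store are invented here to show the pattern: a retried call with the same key must not apply the effect twice.

```typescript
// Idempotency pattern sketch (illustrative only, no Temporal SDK):
// record results by idempotency key so retries return the recorded
// result instead of re-applying the side effect.
const applied = new Map<string, number>(); // idempotency key -> recorded result

function adjustStock(key: string, delta: number, stock: { qty: number }): number {
  const prior = applied.get(key);
  if (prior !== undefined) return prior;   // retry: return recorded result, no re-apply
  stock.qty += delta;                      // first execution: apply the effect
  applied.set(key, stock.qty);
  return stock.qty;
}

const stock = { qty: 10 };
adjustStock("order-1", -2, stock); // first call applies the change
adjustStock("order-1", -2, stock); // retried call is a no-op
console.assert(stock.qty === 8);   // not 6: the retry did not double-apply
```

Real Temporal activities get retried automatically, which is why the Reviewer treats this property as a hard requirement rather than a style preference.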

Adding a New GraphQL Type

  1. @product-manager → Define data model
  2. @orchestrator → Check schema impact
  3. @developer → Create migration + metadata
  4. @tester → Write query/subscription tests
  5. @reviewer → Check relationships & constraints

Fixing a Bug

  1. @orchestrator → Diagnose & plan fix
  2. @developer → Implement fix
  3. @tester → Write regression test
  4. @reviewer → Validate fix completeness

Anti-Patterns: Don't Do This ❌

  • Ask one agent to design + implement + test + review (breaks specialization)
  • Ask one agent to work on 2+ unrelated tasks (loses focus)
  • Make architectural decisions without context from Orchestrator (miss constraints)
  • Ignore escalation criteria (blockers compound)
  • Use default model for complex multi-practice tasks (insufficient reasoning)

Instead: Chain agents by specialty and follow escalation thresholds


Multi-Agent Conversation Example

@orchestrator
The inventory subscription is slow. How do we approach this?

→ [get diagnosis & tasks]

@developer
Implement the optimizations suggested

→ [get optimized code]

@tester
Write performance tests ensuring <500ms

→ [get performance tests]

@reviewer
Verify the approach is correct

→ [get approval]

Prompt Anti-Patterns

❌ Anti-Pattern                    ✅ Better Approach
"Help me" (too vague)             "Implement X with these requirements"
No context                        Include files, constraints, background
One agent for everything          Chain agents by specialty
"Is this right?" (no specifics)   "@reviewer Check for [specific issues]"
Too much rambling                 Concise, structured requirements

Copilot CLI Commands Quick Reference

Use these GitHub Copilot CLI commands to enhance your workflow:

Command    Purpose                                Common Use
/plan      Create implementation plan             Start complex feature, define approach
/diff      Review changes before committing       Validate changes before pushing
/review    Automated code review                  Find bugs and issues before merge
/ask       Ask clarifying questions               Unblock without changing context
/delegate  Send work to GitHub (auto-create PR)   Escalate multi-practice or blocker issues
/lsp       Language server for code intelligence  Navigate, find definitions, refactor
/tasks     View and manage background tasks       Monitor long-running operations
/fleet     Enable parallel subagent execution     Run multiple agents in parallel

Example Usage:

/plan     → Create feature implementation plan
/diff     → Review your code changes before git push
/ask      → "What's the best way to structure this?" (without losing context)
/review   → Get automated code quality check
/delegate → "This impacts 3 practices, escalate to GitHub PR"

Model Override Policy

Default Model: Claude Haiku 4.5 (cost-efficient, fast)

When to Use Premium Models (requires explicit /model command):

  • gpt-5.4 — Complex multi-practice architectural decisions
  • claude-sonnet-4.6 — Large codebase analysis, refactoring
  • claude-opus-4.6 — Emergency high-complexity debugging

How to Override:

/model gpt-5.4

@developer
[Your task that needs premium reasoning]

Justification: Multi-practice impact requires complex tradeoff analysis

Cost Control: Premium model requests are logged. Use sparingly for genuinely complex work.


Escalation Criteria (Specific Thresholds)

🎯 Orchestrator

  • Handle: 0–1 concurrent blockers
  • Escalate: 2+ concurrent blockers OR work blocked >2 hours
  • Red Flags: Multi-practice dependencies without clear sequence

📋 Product Manager

  • Approve: 0–10% scope creep (feature aligned with original goal)
  • Review: 10–30% scope creep (borderline, needs refinement)
  • Restart: >30% scope creep (restart requirement gathering)

✅ Tester

  • Block PR: Code coverage <80% (non-negotiable)
  • Report Flaky: Tests pass <95% consistently (investigate)
  • Escalate: Any test takes >5 seconds to run (performance issue)

👀 Reviewer

  • Block PR: Critical bugs or security issues (blocker red flags)
  • Request Changes: Type safety issues, missing error handling
  • Approve: Minor code style issues only (non-blocking)

💻 Developer

  • Escalate to Orchestrator: Multi-practice impact unclear OR depends on unfinished task

Tool Interactions Reference

How Copilot agents use CLI tools:

Agent            Primary Tools          Secondary Tools   Rarely Used
Orchestrator     /plan, /ask, /delegate /diff, /tasks     /lsp
Product Manager  /ask, /plan            /review           /delegate
Developer        /lsp, /diff, /plan     /ask, /review     /fleet
Tester           /plan, /ask, /diff     /review, /tasks   /delegate
Reviewer         /review, /diff, /lsp   /ask, /tasks      /plan

When to Use Each:

  • /ask → Clarify without escalating (Developer ↔ Orchestrator)
  • /delegate → Escalate blocker to GitHub (all agents)
  • /diff → Validate before commit (all agents)
  • /fleet → Run parallel tasks (Orchestrator coordinating agents)

Red Flags 🚩 vs. Escalation Criteria ✅

Instead of generic "Red Flags", use specific escalation criteria:

Scenario                             What to Do
1 blocker, <2 hours                  Orchestrator handles directly
2+ blockers OR >2 hours blocked      Orchestrator escalates via /delegate
Scope creep 5%                       Product Manager approves
Scope creep 20%                      Product Manager refines with stakeholder
Scope creep 35%                      Product Manager escalates /delegate to restart
Test coverage 85%                    Tester approves
Test coverage 75%                    Tester blocks PR, requests additional tests
Code has type errors                 Reviewer blocks PR
Code has style issues                Reviewer requests minor changes (non-blocking)
Task affects 2+ practices, unclear   Developer escalates /ask to Orchestrator
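
The thresholds above are mechanical enough to express as code. The sketch below is illustrative only; the function names, `Action` labels, and types are invented for this card and are not part of the project:

```typescript
// Escalation thresholds from the table, as plain decision functions.
type Action =
  | "orchestrator-handles" | "orchestrator-delegates"
  | "pm-approves" | "pm-refines" | "pm-restarts"
  | "tester-approves" | "tester-blocks";

function blockerAction(blockers: number, hoursBlocked: number): Action {
  // 2+ concurrent blockers OR blocked >2 hours -> escalate via /delegate
  return blockers >= 2 || hoursBlocked > 2
    ? "orchestrator-delegates"
    : "orchestrator-handles";
}

function scopeCreepAction(creepPct: number): Action {
  if (creepPct <= 10) return "pm-approves"; // 0-10%: aligned with original goal
  if (creepPct <= 30) return "pm-refines";  // 10-30%: borderline, needs refinement
  return "pm-restarts";                     // >30%: restart requirement gathering
}

function coverageAction(coveragePct: number): Action {
  return coveragePct >= 80 ? "tester-approves" : "tester-blocks"; // <80% blocks the PR
}

console.assert(blockerAction(1, 1) === "orchestrator-handles");
console.assert(scopeCreepAction(35) === "pm-restarts");
console.assert(coverageAction(75) === "tester-blocks");
```

Encoding thresholds this way keeps escalation decisions consistent: the same inputs always produce the same action, with no judgment call about what counts as "too blocked".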

Meta-Agent Collaboration Guide

For advanced workflows, see: .copilot/agents/README.md

This guide includes:

  • Communication flow diagram showing how agents interact
  • CLI commands matrix (which agent uses which commands)
  • 3 real-world multi-agent scenarios with exact command sequences
  • Model override coordination policy
  • Complete escalation matrix with decision trees

Key Files

  • docs/agent-prompt-flows.md — Full onboarding guide (you're reading an excerpt)
  • .copilot/agents/ — Agent documentation (read for details)
  • .copilot/agents/README.md — Meta-Agent Collaboration guide (advanced)
  • .github/copilot-instructions.md — Build/test commands & conventions
  • DESIGN.md — Architecture patterns

Getting Help

  1. Workflow questions? → Ask @orchestrator
  2. Implementation questions? → Ask @developer
  3. Code review? → Ask @reviewer
  4. Testing? → Ask @tester
  5. Requirements? → Ask @product-manager

Pro Tip: Save this file and reference it during development. The full guide is in docs/agent-prompt-flows.md.
