Linear-Driven Development Workflow for Claude Code
A complete skill-based workflow for Claude Code that integrates with Linear for issue tracking, planning, implementation, and review. Includes session-scoped edit locking during planning mode.
Skills

| Skill | Purpose | Status Transition |
| --- | --- | --- |
| /plan | Explore codebase, write plan document to Linear | — (creates document) |
| /approve | Finalize artifacts, create parent issue, attach plan | → Scheduling |
| /release | Release planned work for implementation | Scheduling → Queueing |
| /implement | Break down into subtasks/checklists, start coding | Queueing → Working |
| /handoff | End-of-session: enrich artifacts, save progress | — (stays Working) |
| /remediate | Post organized review feedback, route back for fixes | → Queueing |
The workflow assumes a team named "Technologentsia" — update the skill files to match your team name.
Key Design Decisions
Planning and implementation are separate sessions by default. /approve finalizes artifacts but does NOT start coding. /implement starts a fresh session with full context window.
TodoWrite and Linear checklists are the same information — composed once, posted to both. Near-zero marginal cost.
Use haiku subagents for all Linear write operations to save tokens. Opus reads and composes, haiku executes CRUD.
Agent never moves issues to Running — human in the loop required.
Remediation items are prioritized over fresh work when agent picks from the board.
Cross-session continuity via Linear: /implement TEC-xxx in a new session reads plan + checklists to reconstruct state.
Approve a plan and finalize Linear artifacts. Unlocks editing tools, creates parent issue if needed, attaches plan document, and moves to Scheduling status. Does NOT start implementation — use /implement for that.
Approve Plan
The user has reviewed and approved the plan. Finalize the artifacts and unlock editing tools — but do NOT begin implementation. The user will invoke /implement when ready to start building.
For feature explorations that are NOT yet actionable:
If the plan is purely exploratory and not ready for implementation, skip issue creation. Just confirm:
"Exploration plan is finalized on the project. When you're ready to move this toward implementation, we can create an actionable plan and feature issue."
Step 4: Promote Parent Issue if Needed
If the issue being approved is a subtask (has a parent issue), check the parent's status. If the parent is in Planning or Queueing, move it to Working:
get_issue(id: "<issue-id>") → check for parentId
# If parentId exists:
get_issue(id: "<parent-id>") → check state
# If state is Planning or Queueing:
save_issue(id: "<parent-id>", state: "Working")
Use a haiku subagent for these status updates.
Step 5: Confirm to User
Summarize what was done:
What artifacts were created or updated
Current status of the issue(s)
Where the plan document lives
Close with: "Plan is approved and artifacts are finalized. Use /implement (or /implement TEC-xxx) when you're ready to start building."
Important Rules
Do NOT begin implementation. This skill finalizes artifacts only. /implement starts the work.
Do NOT create subtasks or checklists. Those are created at implementation time by /implement.
Use haiku subagents for Linear write operations (status updates, label changes, link additions).
Use the correct status names: Scheduling (planned/approved), Queueing (released for work), Working (in progress), Reviewing (awaiting review), Running (complete).
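Read end to end, the status pipeline is:

```
Planning → Scheduling → Queueing → Working → Reviewing → Running
```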
End-of-session review. Enriches Linear artifacts with insights, updates checklist progress, and saves durable learnings to memory. Use when wrapping up a session before context gets stale.
Handoff
The session is winding down. Review what happened and make sure everything valuable is captured in the right places before the conversation ends.
Step 1: Review the Session
Scan the conversation for:
Decisions made — approach choices, design decisions, constraints established
Work completed — code written, tests added, files changed
Work in progress — tasks started but not finished
Insights discovered — things learned about the codebase, patterns, gotchas
Open questions — unresolved items that need attention in the next session
Step 2: Update Linear Artifacts
Use a haiku subagent for all Linear writes.
If implementation was in flight:
Update checklist progress (check off completed items)
If a subtask is partially complete, add a comment noting where work stopped and what remains
Do NOT move issues to Reviewing unless the work is actually complete
For all sessions:
Post a session summary comment on the primary issue being worked on. Format:
## Session Summary — [date]

### Completed
- [bullet list of what was done]

### In Progress
- [what was started but not finished, and where it stands]

### Decisions
- [any decisions made during this session]

### Next Steps
- [what the next session should pick up]
Enrich related issues:
If insights or decisions apply to other issues (parent, sibling subtasks, related issues), post brief comments on those too.
Step 3: Save Durable Learnings to Memory
If the session revealed anything that would be valuable in future conversations:
Codebase insights that aren't obvious from the code → project memory
User preferences or workflow corrections → feedback memory
External references discovered → reference memory
Only save what's genuinely durable. Don't save ephemeral task state — that lives in Linear.
Step 4: Confirm to User
Summarize what was captured and where:
Which Linear issues were updated
What was saved to memory (if anything)
What the next session should start with
"Session handoff complete. [Issue TEC-xxx] has been updated with progress and a session summary. Next session can pick up with /implement TEC-xxx."
Start implementation of a planned issue. Creates subtasks (features) or checklists (tasks), builds a TodoWrite list, and begins coding. Invoke with /implement TEC-123 or just /implement if the target is clear from context.
Implement
Begin implementation of a planned and approved issue. This skill reads the plan, creates the execution scaffolding (subtasks/checklists + TodoWrite), and starts coding.
Step 1: Identify the Target Issue
With an argument (/implement TEC-123):
Fetch the issue with get_issue(id: "TEC-123").
Without an argument (/implement):
Infer the target from conversation context — typically the issue just created or discussed via /plan and /approve. If the target is ambiguous, ask: "Which issue should I implement? I see we've been discussing [X] and [Y]."
Validate readiness:
The issue should be in Scheduling or Queueing status. If it's in Planning, ask: "This issue is still in Planning. Should I proceed, or did you want to plan it first with /plan?"
The issue should have a plan document attached. If it doesn't, ask: "I don't see a plan document on this issue. Should I proceed without one, or create a plan first?"
Step 2: Move to Working
Move the target issue to Working:
save_issue(id: "<issue-id>", state: "Working")
Promote parent issue if needed:
If the issue is a subtask (has a parent issue), check the parent's status. If the parent is in Planning or Queueing, move it to Working as well.
Use a haiku subagent for these status updates.
Step 3: Read the Plan
Read the plan document attached to the issue. Understand:
The objective and approach
The implementation steps (and phases, if any)
The test strategy
Any open questions (raise these with the user before proceeding)
If there's a referenced exploratory document (linked from the issue), read that too for broader context.
Step 4: Build the Execution Scaffolding
Re-read the plan's Implementation Steps. Determine the issue type from its labels.
For issues labeled Task:
Tasks get checklists directly on the issue. No subtasks.
Compose a checklist from the implementation steps. Each item should be concrete and independently verifiable.
Use a haiku subagent to post the checklist as a comment on the issue in this format:
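The exact checklist format isn't reproduced here; a plausible shape, using standard markdown task-list syntax (the item text below is purely illustrative), would be:

```markdown
## Implementation Checklist
- [ ] Step 1 from the plan, phrased as a verifiable action
- [ ] Step 2 from the plan
- [ ] Step 3 from the plan
```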
For issues labeled Feature:
Features get subtasks. For the first subtask you're about to work on, read the relevant code and compose a checklist of concrete implementation steps. Post it as a comment on that subtask.
Move the first subtask to Working.
Use a haiku subagent for creating subtasks and posting checklists.
Don't create checklists for future subtasks yet — create them when you pick each one up. This keeps them grounded in code you've actually read.
Create the TodoWrite list:
Build a TodoWrite list from the checklist you just created. This is your session-scoped execution tracker. Include:
Every step from the checklist, in execution order
A final task: "Verify implementation against plan"
Mark the first task as in_progress.
Step 5: Implement
Begin coding, following the TodoWrite list. As you work:
Progress tracking:
Mark TodoWrite tasks complete immediately as you finish each one (don't batch)
Periodically dispatch a haiku subagent to check off completed checklist items in Linear (every 2-3 completed items, not after every single one)
If you discover the task list needs to change: pause coding, update both the TodoWrite list and the Linear checklist, then continue
Scope discipline:
The plan is scope-locked. You may perform localized code hygiene (formatting, fixing an adjacent typo) but nothing beyond that. If you feel tempted to introduce refactoring that isn't in the plan:
Stop coding
Describe the refactoring, its justification, and its implications
Wait for user approval before continuing
When uncertain:
If you encounter ambiguity or a decision the plan doesn't cover:
Stop coding
Describe the uncertainty concisely
Wait for clarification before continuing
When stuck:
If you hit a blocker:
Use the counselors CLI to explore options that align with the plan
If counselors yields a path forward — take it, but note what happened
If no path aligns with the plan — stop coding, describe the difficulty, and wait for guidance
When deviating:
If you discover the task list must change to successfully implement the plan:
Stop coding
Update your TodoWrite list and the Linear checklist
You may then continue without waiting for confirmation — the updated list is your authorization
Tool selection:
Before each task, briefly consider which tools are highest leverage:
Serena LSP for structural navigation and symbol-level edits
ColGrep for behavioral/intent queries
ast-grep for structural pattern matching
Grep for exact text matches
Step 6: Completion
When you believe implementation is complete:
6a. Plan-diff check:
Re-read the original plan document. Compare it against what was implemented. Look for:
Gaps — planned steps that weren't executed
Deviations — implementation that diverged from the plan
Scope creep — work that wasn't in the plan
Remediate any issues before declaring completion.
6b. Finalize Linear artifacts:
Use a haiku subagent to:
Check off any remaining checklist items
Move the issue (or current subtask) to Reviewing
If all subtasks of a feature are complete, move the parent to Reviewing
6c. Post completion summary:
Compose a summary and present it in the conversation AND post it as a comment on the Linear issue (via haiku subagent). The summary should include:
What was done — concrete list of changes, files touched
Assumptions made — decisions you made without asking
Challenges and resolutions — anything that didn't go smoothly, including any use of counselors CLI
Insights — anything learned that would be valuable for future work on this codebase, especially guidance that would help other agents working on the project
6d. Confirm to user:
"Implementation is complete and moved to Reviewing. Summary posted above and on the Linear issue. Please review when ready — use /complete TEC-xxx to mark it Running, or /remediate TEC-xxx if there are issues to address."
Enter custom planning mode. Locks editing tools, guides thorough codebase exploration, and writes the plan to a Linear document. Invoke with /plan TEC-12, /plan hubble, or just /plan. Supports an optional prompt after the target.
Custom Planning Mode
You are entering planning mode. Your job is to explore thoroughly and think deeply before proposing any implementation. You are NOT allowed to write code until the user explicitly approves your plan.
Step 1: Lock Editing Tools
Run this command immediately to activate the planning gate:
~/.claude/hooks/planning-mode-toggle.sh activate
After running this, confirm to the user: "Planning mode is active. I can explore and read but cannot edit code until you approve the plan."
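The toggle script ships with the workflow, but as a hypothetical sketch of how such a gate might work: a flag file is created or removed, and a PreToolUse hook blocks editing tools while the flag exists. The function name and flag path below are illustrative, not the real implementation.

```shell
#!/usr/bin/env bash
# Illustrative flag-file planning gate. The real script lives at
# ~/.claude/hooks/planning-mode-toggle.sh; names and paths here are assumptions.
FLAG="${TMPDIR:-/tmp}/planning-mode.flag"

planning_mode() {
  case "$1" in
    activate)   touch "$FLAG" ;;   # lock editing tools
    deactivate) rm -f "$FLAG" ;;   # unlock after plan approval
    active)     [ -f "$FLAG" ] ;;  # exit 0 while planning mode is on
  esac
}

planning_mode activate
planning_mode active && echo "planning mode: ON"
planning_mode deactivate
planning_mode active || echo "planning mode: OFF"
```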
Step 2: Parse Arguments
Parsing is positional. Use these rules in order — stop at the first match.
Parsing rules:
No arguments (/plan) → ask what to plan.
First token matches issue identifier (XXX-123 — letters, dash, numbers) → issue mode. Fetch the issue with get_issue. Everything after the identifier is the prompt.
First token matches a Linear project name → call list_projects to confirm the match. Everything after the project name is the prompt.
First token matches neither an issue ID nor a known project → treat the entire argument string as a prompt. Call list_projects to check for close matches (typos, slugs). Then ask the user: "Which project does this belong to?"
Examples:
| Input | Target | Prompt |
| --- | --- | --- |
| /plan TEC-12 refactor the buffer to use Redis | Issue TEC-12 | "refactor the buffer to use Redis" |
| /plan hubble add rate limiting | Project hubble | "add rate limiting" |
| /plan hubble | Project hubble | (none — explore and ask) |
| /plan add rate limiting to the buffer | ask project | "add rate limiting to the buffer" |
| /plan | (none) | (none — ask everything) |
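The positional rules above can be sketched in shell. The function name is illustrative, and `KNOWN_PROJECTS` stands in for a live `list_projects` call — this is a sketch of the classification logic, not part of the workflow:

```shell
#!/usr/bin/env bash
# Sketch of /plan argument classification. KNOWN_PROJECTS is a stand-in for
# the result of list_projects; classify_plan_args is a hypothetical name.
KNOWN_PROJECTS="hubble"

classify_plan_args() {
  local args="$1" first rest=""
  first="${args%% *}"
  [ "$first" != "$args" ] && rest="${args#* }"
  if [ -z "$args" ]; then
    echo "mode=ask"                                        # rule 1: no arguments
  elif printf '%s\n' "$first" | grep -qE '^[A-Za-z]+-[0-9]+$'; then
    echo "mode=issue target=$first prompt=$rest"           # rule 2: issue identifier
  elif printf '%s\n' "$KNOWN_PROJECTS" | grep -qix "$first"; then
    echo "mode=project target=$first prompt=$rest"         # rule 3: known project
  else
    echo "mode=ask-project prompt=$args"                   # rule 4: whole string is the prompt
  fi
}

classify_plan_args "TEC-12 refactor the buffer to use Redis"
classify_plan_args "hubble add rate limiting"
classify_plan_args "add rate limiting to the buffer"
```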
Determining plan placement:
The user tells you where the plan goes and what kind of work this is. Don't assume. If context makes it obvious (e.g., /plan TEC-12 is clearly an issue-level plan), proceed. Otherwise, ask:
"Should this be an exploratory document on the project, or a tactical implementation plan on an issue?"
Exploratory plans (project-level documents): Broader explorations, architecture decisions, feature-level thinking that may spawn tasks later. These attach to the Linear project.
Implementation plans (issue-level documents): Tactical plans for specific work. These attach to a Linear issue. If no issue exists yet, you'll create one.
What to do with the prompt:
The prompt is the user's initial direction. It replaces the "ask what to plan" step — you already know what they want. Use it as your starting context alongside any relevant conversation history. You still explore thoroughly; the prompt just gives you a head start.
You are still responsible for the title. The prompt informs your understanding but isn't used verbatim as a title. The title emerges from conversation and exploration.
Resolving projects:
Always call list_projects on Linear to confirm project names. Do not guess or infer from the working directory. Project matching is case-insensitive.
For existing issues (TEC-12):
Fetch the issue with get_issue to understand the task. The issue description plus any prompt is your starting context.
For new implementation plans (create issue + plan):
Use the prompt and conversation to understand what's being planned. Explore the codebase. When writing the plan, also create the issue with save_issue using a title that captures what you learned. Attach the plan doc to that issue.
For exploratory plans (project-level):
Use the prompt and conversation to understand the scope. Explore the codebase. Before creating the document, check for existing documents (see Step 5). Create a plan doc attached to the project.
Step 3: Gather Context
Before exploring the codebase, assess what context you already have. This step is critical — the user may invoke /plan after a long conversation, and relevant discussion should carry forward into the plan.
Context assessment:
Scan the conversation history for content relevant to the plan target. Relevance is determined by the plan target, not by recency — something from an hour ago may be critical, something from 2 minutes ago may be irrelevant.
Look for decisions already made — statements like "let's use Redis for this", "I don't want a new migration", "keep it simple" are constraints that must carry into the plan. Don't re-ask questions the user already answered.
Look for exploration already done — if you already read through files, discussed architecture, or identified patterns during the conversation, reference what you learned rather than re-reading everything.
Look for rejected approaches — if the user said "I don't want to do it that way", don't propose it in the plan.
Ignore unrelated work — if the first half of the conversation was fixing CSS and now the user is planning a new recorder, the CSS discussion isn't context.
When uncertain, briefly state what you're carrying forward — "I'm incorporating our earlier discussion about X and Y. Anything else I should factor in, or anything I should set aside?" Keep this short — don't recite the entire conversation back.
Cold start (no prior conversation context):
If /plan is invoked with minimal context (fresh session, or the conversation so far was about unrelated things):
Issue with good description → the issue description is your context, explore from there
Plan with no prior discussion → ask 2-3 focused clarifying questions before exploring: What's the scope? What triggered this? Any known constraints?
No arguments at all → ask what they want to plan
Step 4: Explore Thoroughly
This is the most important step. Do not rush to write a plan. Your goal is to deeply understand the codebase before proposing anything. However, don't re-explore things you already understand from the conversation context.
Exploration checklist:
Understand the request — What exactly needs to happen? What are the acceptance criteria?
Find existing patterns — How does the codebase handle similar things today? Use Serena LSP (find_symbol, get_symbols_overview, find_referencing_symbols) for structural navigation. Use ColGrep for behavioral/intent queries when you don't know what to search for.
Identify all touchpoints — Which files, classes, methods, configs, tests, and migrations will be affected?
Check for constraints — Are there architectural rules (check ArchTest.php)? Database limitations (SQLite in tests)? Convention requirements?
Look for prior art — Has something similar been attempted before? Are there related TODOs, comments, or partial implementations?
Consider the test strategy — How will this be tested? What fixtures or factories are needed?
Editing tools remain locked during this step. If you try to use them, you'll get a block message. This is intentional.
Step 5: Write the Plan to Linear
Once you've explored enough to have a clear picture, create or update a Linear document with the plan.
Check for existing documents first:
For exploratory plans (project-attached):
Before creating a new document, call list_documents(project: "<project>") and scan the titles for anything relevant to what you're planning. If a document looks like it covers the same topic:
Ask: "I found [title] on this project. Should I update that document, or create a new one?"
If the user says update → use update_document with the existing document's ID
If the user says create new → proceed with create_document
If nothing matches → create a new document without asking
For implementation plans (issue-attached):
Check if the issue already has a plan document. If it does, update it with update_document rather than creating a duplicate. If it doesn't, create a new one.
## Objective
What we're trying to accomplish and why.
## Background
What exists today that's relevant. Key findings from exploration.
Include relevant context from the conversation if applicable.
## Approach
The proposed implementation strategy. Be specific about:
- Which files to create or modify
- Which patterns to follow (reference existing code)
- Key design decisions and why
## Implementation Steps
Numbered steps in the order they should be executed.
Each step should be concrete enough to act on.
Group steps into phases if the work is complex enough to warrant it.
## Test Strategy
How this will be tested. Which test files, what scenarios.
## Open Questions
Anything unresolved that needs input before starting.
After creating or updating the document, share the URL with the user and say:
"Plan is ready for review: [link]. Let me know when you'd like to approve it, suggest changes, or discuss any part of it. Use /approve when you're ready to finalize the artifacts."
Important Behavioral Rules
Do not ask to exit planning mode yourself. Only the user can approve.
Do not try to work around the edit block. The constraint exists to keep you in exploration mode.
Spend more time exploring than you think you need. The whole point of planning mode is to prevent premature coding.
If the user gives feedback on the plan, update the Linear document — you can do this because update_document is not blocked, only code editing tools are.
If you realize the plan is wrong while writing it, go back and explore more. Don't commit to a bad plan just because you started writing.
Honor decisions from the conversation. If the user already expressed preferences, constraints, or rejected approaches during the conversation, respect those in the plan without re-litigating them.
You own the titles. The user doesn't provide issue or document titles — you craft them based on what you learned from the conversation and exploration. Make them clear and specific.
Always resolve projects live. Call list_projects to confirm project names — never guess or infer from the working directory.
Send review feedback to a Linear issue. Organizes your brain dump into a structured comment, adds the Remediation label, and moves the issue back to Queueing. Usage: /remediate TEC-123 followed by your feedback.
Remediate
The user has review feedback for an issue. Organize it, post it, and route the issue back for work.
Step 1: Parse Arguments
The first token should be an issue identifier (TEC-123). Everything after it is the user's feedback — a brain dump that may be informal, unstructured, or stream-of-consciousness.
If no issue identifier is provided, check conversation context for the most recently discussed issue. If still ambiguous, ask.
Step 2: Organize the Feedback
Take the user's raw feedback and structure it into a clear, actionable comment. Don't change the meaning — just organize it.
Format:
## Remediation Feedback

### Issues Found
- [each distinct issue as a bullet, with enough context to act on]

### Expected Behavior
- [what the user expects, if stated or implied]

### Additional Context
- [any other details from the feedback that don't fit above]
Omit sections that don't apply (e.g., if there's no "additional context," skip that section).
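As a purely hypothetical illustration, a brain dump like "save button still flickers when I double-click, should stay disabled after the first click, this only happens on the settings page" might be organized as:

```markdown
## Remediation Feedback

### Issues Found
- The save button flickers when double-clicked on the settings page.

### Expected Behavior
- The button should stay disabled after the first click.

### Additional Context
- The flicker has only been observed on the settings page.
```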
Step 3: Post and Route
Use a haiku subagent to do all three in one dispatch:
Post the organized comment on the issue
Add the Remediation label to the issue
Move the issue to Queueing status
Step 4: Confirm
Show the user the organized comment you posted (so they can verify it captures their intent), and confirm:
"Feedback posted to TEC-123 and moved to Queueing with Remediation label. This will be prioritized in the next implementation session."