@weisisheng
Created February 17, 2026 07:36
ROI Scorer for Feature Proposals
---
name: roi-score
description: Evaluate feature ideas with a structured ROI scorecard — estimates effort, revenue impact, extendability, and north star alignment before you build. Helps a solo founder stay on the highest-leverage path.
---
This skill evaluates a proposed feature, enhancement, or piece of work against a structured ROI framework. It produces a scorecard that helps prioritize what to build next by estimating effort, revenue impact, extendability, and alignment with the business north star.
The user provides a feature description — either inline text, a reference to a doc/ticket, or a list of ideas to compare.
## Before Scoring: Research the Codebase
**Do not guess.** Before producing the scorecard, you MUST:
1. **Search for existing implementations** — Use Glob and Grep to find code related to the proposed feature. Many features are partially or fully built already. Check:
   - `app/` for existing routes and pages
   - `components/` for existing UI
   - `lib/` for existing utilities, hooks, and business logic
   - `app/api/` for existing API routes
   - `lib/db/schema.ts` for existing database tables
   - `scripts/` for existing automation
2. **Identify reusable patterns** — Note what already exists that the feature could build on (existing components, utilities, schema, API patterns).
3. **Check constraints** — Cross-reference against CLAUDE.md rules:
   - "NO NEW FEATURES UNTIL LAUNCH" — Is this polish/messaging or a genuinely new feature?
   - "NO Redis/Upstash" — Does this need infrastructure we don't use?
   - "NO Vercel Crons" — Does this need scheduling?
   - Tier gating — Should this be free, Pro, or Executive?
4. **Check existing plans** — Look in `docs/findings/` for relevant PRDs, roadmaps, or analysis that already covers this idea (especially `JTBD_PRD_ROADMAP_2026-02-14.md`).
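The search step above can be sketched as a small script. The directory names come from the checklist (searching `app/` recursively also covers `app/api/`, and `lib/` covers `lib/db/schema.ts`); `find_related` is a hypothetical helper for illustration — the skill itself uses the Glob and Grep tools, not Python:

```python
from pathlib import Path

# Directories to scan, taken from the checklist above. Recursing into
# `app/` and `lib/` also covers `app/api/` and `lib/db/schema.ts`.
SEARCH_ROOTS = ["app", "components", "lib", "scripts"]

def find_related(keyword: str, roots=SEARCH_ROOTS, base="."):
    """Return TypeScript source files whose contents mention the keyword."""
    hits = []
    for root in roots:
        root_path = Path(base) / root
        if not root_path.is_dir():
            continue  # skip roots that don't exist in this repo
        for path in root_path.rglob("*.ts*"):  # .ts and .tsx files
            try:
                text = path.read_text(encoding="utf-8", errors="ignore")
            except OSError:
                continue
            if keyword.lower() in text.lower():
                hits.append(str(path))
    return sorted(hits)
```

For example, `find_related("whale")` before scoring a charting feature would surface any existing whale-curve code, flagging the "already built" case early.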
## Business Context
Use this context when evaluating north star alignment:
- **Stage**: Pre-launch beta, waitlist-driven. Zero paying customers yet.
- **North star**: Convert waitlist signups into paying Pro/Executive subscribers.
- **Pricing**: Free ($0, 250 customers, 1 analysis) / Pro ($149/mo, 2500 customers, 20 analyses) / Executive ($749/mo, 25K customers, 500 analyses)
- **Target user**: CFOs, finance teams, business analysts at B2B companies
- **Core value prop**: Upload customer data, get whale curve analysis + AI-powered profitability insights
- **Key conversion moments**: First analysis completion, first AI insight, first export/share
- **Stack**: Next.js 16, Supabase, Stripe, Claude Opus 4.6, Vercel, Cloudflare Workers
## Scorecard Output Format
Produce this exact structure:
```markdown
## ROI Scorecard: [Feature Name]
### Effort Estimate: [S / M / L / XL]
- Files touched: ~N (list key ones)
- New vs existing patterns: [reuses X / requires new Y]
- Infrastructure changes: [none / schema / API / cron / external service]
- Reasoning: [1-2 sentences on why this size]
### Revenue Impact: [Direct / Indirect / None]
- Conversion effect: [which transition it affects: waitlist->trial, trial->paid, retention, upsell, expansion]
- Estimated affected metric: [e.g., "+5-15% trial conversion" — be honest about uncertainty]
- Time to impact: [immediate on deploy / weeks to measure / months to compound]
### Extendability Score: [High / Medium / Low]
- Opens doors to: [future features this enables]
- Dead-end risk: [is this a one-off or a building block?]
- Tech debt likelihood: [low / medium / high — will this need rework later?]
### North Star Alignment: [Strong / Moderate / Weak]
Evaluated against:
- Does this help convert waitlist -> paid? [yes/no + why]
- Does this make the demo more compelling? [yes/no + why]
- Does this increase retention/engagement? [yes/no + why]
- Does this reduce churn risk? [yes/no + why]
### Verdict: [BUILD NOW / QUEUE / PARK / SKIP]
- [1-2 sentence recommendation with clear reasoning]
- Opportunity cost: [what you're NOT doing if you spend time on this]
```
## Sizing Guide
Use these rough benchmarks for effort sizing:
| Size | Description | Files | Example |
| ------------ | ------------------------------------------------------------ | ----- | ------------------------------------------------------------ |
| **S** | Copy change, config tweak, reuses existing patterns entirely | 1-3 | Update CTA copy, add env var, toggle feature flag |
| **M** | New component or enhancement using existing infrastructure | 3-8 | Add new insight type, new PDF page, new admin panel tab |
| **L** | New capability requiring schema + API + UI changes | 8-15 | New integration, new user-facing workflow, new export format |
| **XL** | Architectural change or new system | 15+ | New auth provider, real-time features, new external service |
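The file-count bands above can be read as a rough heuristic. The boundary counts (3, 8, 15) appear in two rows each; following the honesty rule below ("if you're unsure, size up, not down"), a minimal sketch resolves them upward. Real sizing also weighs infrastructure changes and new-vs-existing patterns, not file count alone:

```python
def effort_size(files_touched: int) -> str:
    """Map an estimated file count to the S/M/L/XL bands above.

    Boundary counts size up: 3 -> M, 8 -> L, 15 -> XL.
    """
    if files_touched >= 15:
        return "XL"
    if files_touched >= 8:
        return "L"
    if files_touched >= 3:
        return "M"
    return "S"
```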
## Verdict Decision Tree
- **BUILD NOW**: Strong north star alignment + S/M effort + direct revenue impact
- **QUEUE**: Strong alignment but L/XL effort, or moderate alignment + S/M effort — schedule for after current priorities
- **PARK**: Interesting but indirect impact or uncertain ROI — revisit after launch data exists
- **SKIP**: Weak alignment, high effort, or violates current constraints (e.g., "NO NEW FEATURES UNTIL LAUNCH" for genuinely new features)
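The tree's explicit branches can be encoded directly; this is one reading of it (PARK is treated as the fallback, and constraint violations or weak alignment short-circuit to SKIP), not a replacement for judgment on borderline cases:

```python
def verdict(alignment: str, effort: str, revenue: str,
            violates_constraints: bool = False) -> str:
    """Map scorecard fields to a verdict per the decision tree above."""
    small = effort in ("S", "M")
    if violates_constraints or alignment == "Weak":
        return "SKIP"
    if alignment == "Strong" and small and revenue == "Direct":
        return "BUILD NOW"
    if (alignment == "Strong" and not small) or (alignment == "Moderate" and small):
        return "QUEUE"
    # Indirect impact or uncertain ROI — revisit after launch data exists.
    return "PARK"
```

So a small, directly monetizable, on-strategy feature scores BUILD NOW, while the same feature behind a "NO NEW FEATURES UNTIL LAUNCH" violation scores SKIP regardless of its other fields.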
## Comparing Multiple Features
If the user provides multiple features, produce a scorecard for each, then add a comparison summary:
```markdown
## Comparison Summary
| Feature | Effort | Revenue | Extendability | North Star | Verdict |
|---------|--------|---------|---------------|------------|---------|
| Feature A | S | Direct | High | Strong | BUILD NOW |
| Feature B | L | Indirect | Medium | Moderate | QUEUE |
| Feature C | M | None | Low | Weak | SKIP |
**Recommended build order**: Feature A first (highest leverage), then Feature B after launch.
```
## Honesty Rules
- **Never inflate impact** to make a feature sound better. If the revenue impact is speculative, say so.
- **Never underestimate effort** to make something seem quick. If you're unsure, size up, not down.
- **Call out "shiny object" risk** — if a feature is exciting but doesn't move the north star, say it plainly.
- **Flag if already built** — If codebase research reveals the feature (or 80% of it) already exists, say so and recommend marketing/positioning work instead of engineering.
- **Acknowledge uncertainty** — Use ranges ("M-L effort") when genuinely unclear. Say "low confidence" on estimates where the feature touches unfamiliar territory.