---
name: futureback
title: Futureback Product Foresight
description: >
  Project the believable future of a product, then backcast that future into present-day strategy. Produces a two-layer artifact: a cinematic future snapshot followed by a structured decision brief with product principles, opportunity spaces, concrete bets, anti-goals, roadmap implications, and falsifiability criteria. Use when someone asks to imagine a product's future, escape incremental thinking, backcast from a winning scenario, or answer "what does this product become if it wins?"
triggers:
non_triggers:
---
Core promise: Project the believable future of this product, then turn that future into present-day product strategy.
This skill is not about predicting the future. It is about constructing a believable future scenario that reveals strategic choices in the present. The cinematic future scene is the hook, not the final point. The real deliverable is a decision-ready foresight memo.
The user should never feel like they're filling out a form. The entire point of this skill is that imagination comes first. The user says "futureback Prot" or "futureback this app" and the skill GOES. No intake questions. No "what's your decision target?" No setup friction.
How this works in practice:
- When invoked, immediately gather context from available files, conversation history, and pasted material. Read product docs, context files, roadmaps, anything available.
- Infer everything you can: product state, user, stage, constraints, uncertainties.
- If you have enough to imagine a believable future, START WRITING THE SCENE. Do not ask questions first.
- The decision target, strategic questions, and viability analysis emerge FROM the vision, not before it. The future scene reveals what decisions matter. You surface those in the strategy sections.
- Only ask clarifying questions if you genuinely cannot identify what product the user is talking about.
This is what separates futureback from the council or any analytical tool. Council is a room of analysts debating. Futureback is a sci-fi writer who lives inside the product's future, shows you what that world looks and feels like, and THEN converts it to strategy. The imagination opens the door. The strategy walks through it.
The strategic questions (should you build this? is the scope right? personal tool vs. real product? expand or stay focused?) all belong in the output. They just shouldn't be asked as intake questions. They should be ANSWERED by the vision.
Use this skill when:
- A founder, product lead, designer, or team needs to answer: What does this product become if it really wins?
- Someone is asking about product direction over a 12-36 month horizon
- The user wants to escape incremental thinking or local optimization
- There is a roadmap or positioning question with real stakes
- The user wants a future-facing product strategy artifact, not just ideas
- The user wants a more vivid, believable articulation of the product's future state
- Someone asks what would make the current roadmap look naive
Do not use this skill for:
- Bug triage
- Feature spec writing
- Launch copy or marketing language
- Generic ideation or brainstorming
- Short-term prioritization with no strategic question
- Market sizing
- "Give me some ideas"
- Tactical UX polish
- Vague inspiration sessions
- Empty "vision" exercises detached from real decisions
If the user just wants feature ideas, brainstorming, or copy, tell them this skill is the wrong tool and suggest what would actually help.
The user provides: a product name, description, or pointer to context. That's it. Everything else is your job to gather or infer silently.
Before generating, seek or infer the following from available files, conversation history, and context. Do NOT ask the user for these. Gather what you can, declare assumptions for what you can't, and go.
| Input | How to get it |
|---|---|
| Product / app | From invocation (e.g., "futureback Prot") |
| Primary user / customer | Infer from product docs, context files, or conversation |
| Time horizon | Default 24 months unless user specifies |
| Current product stage | Infer from available context |
| Current constraints | Infer from docs, tech stack, team size, stage |
| Existing roadmap or strategy | Read from available files |
| Evidence available | Whatever is in the environment: docs, research, data, user notes |
| Major uncertainties | Infer and surface these in the output |
| Decision target | Do NOT ask. Derive from the vision. The future scene reveals what decisions matter. Surface them in the strategy sections. |
For early-stage products: accept founder hypotheses, user assumptions, and market beliefs — but label them clearly as assumptions. Never pretend weak evidence is strong evidence.
For mature products: prefer actual product signals. Do not ignore economics, operational reality, or organizational constraints.
## Lean Mode
Use when context is limited, the user wants speed, or a focused output is sufficient.
Produces:
- One future snapshot
- Strategic thesis
- 3 opportunity spaces
- 3 concrete bets
- 3 anti-bets / anti-goals
- One immediate next step
- One thing to stop doing
## Deep Mode
Use when the user explicitly requests a fuller strategy artifact, provides rich context, or the decision stakes are high. Activate with "deep mode," "full version," "give me the deep futureback," or similar.
Produces:
- Future snapshot
- Alternate future (only if uncertainty is materially high — otherwise skip)
- Strategic thesis
- What changed in user behavior
- What the product became
- 3-5 product principles
- 3-5 opportunity spaces
- 3-5 concrete bets
- Backcast roadmap (now / next / later / stop)
- Anti-goals and seductive bad ideas
- Risks and failure modes
- What must be true (key assumptions)
- Falsifiers and evidence to seek
- Confidence labeling on all claims
- One thing to do first
- One thing to stop doing
- Belief that changed
## Process
Execute these steps in order. Do not skip steps. Do not rearrange the sequence.
### Step 1: Gather context silently
Do NOT ask the user questions. Read everything available:
- Product context files, docs, briefs, roadmaps
- Conversation history
- Any pasted material
- Available files in the environment
Infer product state, user, stage, constraints, uncertainties, and time horizon from what you find. Declare assumptions for anything you can't determine. Then go straight to Step 2.
### Step 2: Assemble the context pack
From what you gathered, assemble your internal understanding of:
- Product state (what exists now, what works, what doesn't)
- User (who they are, what they do, what they struggle with)
- Current product mechanics (how the product actually works day-to-day)
- Constraints (technical, economic, organizational, regulatory)
- Evidence (research, data, signals — distinguish strength)
- Market and behavioral forces (what is changing around the product)
- Time horizon
Do not fabricate evidence. Do not present this pack to the user. Use it to fuel the scene.
### Step 3: Restate the present
Quickly ground yourself in what the product is now. Do not write a long section for this. A few lines max. This is your baseline so the future scene departs clearly from it.
### Step 4: Write the future scene
This is the core of the skill. This is what makes futureback different from every other strategy tool.
The scene should show MULTIPLE different people using the product in surprising, specific ways. Not one user's morning routine. A parade of users. Different contexts, different needs, different behaviors. Each one reveals a different facet of what the product became. The power is in seeing the range of unexpected uses that emerge when the product wins. Each vignette should be 2-4 sentences max. Quick cuts. The reader should feel like they're watching a montage of the product's future, and each cut sparks a different product idea.
Think of it like a sci-fi writer imagining all the different people who adopted this thing and what they're doing with it that nobody planned for. That's the energy.
Requirements:
- Multiple users, multiple contexts, multiple behaviors (5-8 vignettes in lean mode, 6-10 in deep mode)
- Each vignette is tight: 2-4 sentences. A person, a moment, a specific product behavior.
- Vignettes should span the timeline, not all happen at the same moment. Some early (months 2-4), some mid (months 6-12), some mature (months 18-24+). The montage should show the product's evolution — early adopters doing scrappy things, later users doing things that only became possible as the product matured. This makes the montage feel like watching a product grow, not a frozen snapshot.
- At least some uses should be SURPRISING — things the founder didn't plan for
- Shows changed user behavior, not just better UI
- Feels cinematic and alive
- Remains believable — no fantasy, no generic sci-fi
- The whole snapshot section should be 300-500 words total in lean mode (400-600 in deep mode), not per vignette
- Each vignette should imply a product capability or design decision
If uncertainty is materially high AND the user is in deep mode, generate at most one alternate future trajectory. Otherwise, one snapshot with multiple vignettes is the default.
Scene quality tests (apply before finalizing):
- Does each vignette show a DIFFERENT user in a DIFFERENT context? If two feel similar, cut one.
- Does at least one vignette show a use the founder didn't plan for? If not, you're not imagining hard enough.
- Could you film each vignette? Is there a person, a place, a specific moment?
- Does the montage as a whole reveal the product's real identity — not just what it does, but what it BECAME?
- If you swapped the product name for a competitor, would the vignettes still work? If yes, they're too generic — rewrite.
- Does each vignette imply a specific product capability or design decision? If it's just vibes, sharpen it.
### Step 5: Extract the strategic thesis
From the montage, derive one tight paragraph:
- What did the product actually become? (Not what it was designed to be.)
- What's the real unlock — the thing that makes this product different from everything else?
- Keep it to 3-5 sentences. No preamble.
### Step 6: Derive product principles
Produce 3-5 product principles that made the future plausible or inevitable. These must be actual principles — constraints on decision-making — not slogans, values, or aspirations.
A good principle: "The product never asks the user to enter data that already exists in their workflow." A bad principle: "We believe in simplicity."
### Step 7: Identify opportunity spaces
Identify 3-5 opportunity spaces opened by that future. These are NOT feature lists. They are higher-level leverage points — areas where investment would compound.
Example: "Ambient trust infrastructure" is an opportunity space. "Add verified badges" is a feature.
### Step 8: Place concrete bets
Produce 3-5 concrete bets, feature concepts, or capability shifts. Requirements:
- Specific enough to influence a real roadmap discussion
- Not generic ("add AI," "improve onboarding," "make it more social")
- Each bet should name the mechanism: what it changes, for whom, and why it matters
- Label each bet's confidence: known (evidence supports it), inferred (logical from evidence), plausible (reasonable but unproven), speculative (genuinely uncertain)
### Step 9: Backcast the roadmap
Show:
- Now: What matters immediately — what to start, validate, or investigate
- Next: What matters in the medium term — capabilities to build, bets to place
- Later: What can wait — important but not yet
- Stop: What current plans become less important or actively wrong in light of this future
### Step 10: State anti-goals
Explicitly state:
- What NOT to build
- What tempting but wrong moves this future helps rule out
- What the product should stop doing or stop optimizing for
- What popular advice or obvious moves would be a mistake for this specific product
This section is required in both lean and deep mode. If you cannot say what to stop doing, the analysis is too weak.
### Step 11: Surface risks, assumptions, and falsifiers
End with:
- Key risks: What could go wrong with this trajectory
- Key assumptions: What must be true for this future to happen
- Supporting signals: What evidence would support the thesis
- Falsifiers: What evidence would prove it wrong — be specific
- What to watch: Leading indicators to track
### Step 12: Apply decision pressure
Close the output by forcing action:
- Do first: The single highest-leverage move suggested by this analysis
- Stop doing: The single most important thing to stop, deprioritize, or kill
- Belief that changed: The one assumption about the product that this analysis challenges or overturns
Lean mode output template:
## Future Snapshot — [Product], [Time Horizon]
[Montage of 5-8 vignettes. Different users, different contexts, different behaviors.
Each vignette: 2-4 sentences. Quick cuts. 300-500 words total.]
## Strategic Thesis
[One tight paragraph: 3-5 sentences. What did the product become? What's the real unlock?]
## Opportunity Spaces
1. [Space] — [one sentence: why it matters]
2. [Space] — [one sentence: why it matters]
3. [Space] — [one sentence: why it matters]
## Concrete Bets
1. [Bet] — [mechanism + confidence label]
2. [Bet] — [mechanism + confidence label]
3. [Bet] — [mechanism + confidence label]
## Anti-Bets
1. [What not to do] — [why it's tempting but wrong]
2. [What not to do] — [why it's tempting but wrong]
3. [What not to do] — [why it's tempting but wrong]
## Do First
[Single action — one sentence]
## Stop Doing
[Single thing to kill or deprioritize — one sentence]
Deep mode output template:
## Future Snapshot — [Product], [Time Horizon]
[Montage of 6-10 vignettes. 400-600 words total.]
## [Alternate Future] (only if uncertainty is materially high)
[Alternate montage: 3-5 vignettes, 200-300 words]
## Strategic Thesis
[One tight paragraph]
## What Changed
- **User behavior:** [specific, 1-2 sentences]
- **Product role:** [specific, 1-2 sentences]
- **Ecosystem:** [specific, 1-2 sentences]
## Product Principles
1. [Principle] — [what it constrains]
2. [Principle] — [what it constrains]
3. [Principle] — [what it constrains]
[up to 5]
## Opportunity Spaces
1. [Space] — [why it matters]
2. [Space] — [why it matters]
3. [Space] — [why it matters]
[up to 5]
## Concrete Bets
1. [Bet] — [mechanism] — [confidence: known/inferred/plausible/speculative]
2. [Bet] — [mechanism] — [confidence]
3. [Bet] — [mechanism] — [confidence]
[up to 5]
## Backcast Roadmap
- **Now:** [one sentence]
- **Next:** [one sentence]
- **Later:** [one sentence]
- **Stop:** [one sentence]
## Anti-Bets and Seductive Bad Ideas
1. [What not to do] — [why it's tempting but wrong]
2. [What not to do] — [why it's tempting but wrong]
3. [What not to do] — [why it's tempting but wrong]
## Risks and Failure Modes
- [Risk 1]
- [Risk 2]
- [Risk 3]
## What Must Be True
- [Assumption 1]
- [Assumption 2]
- [Assumption 3]
## Falsifiers
- [Signal that would prove this wrong 1]
- [Signal that would prove this wrong 2]
- [Signal that would prove this wrong 3]
## Decision Pressure
- **Do first:** [single action]
- **Stop doing:** [single thing to kill]
- **Belief that changed:** [the assumption this analysis overturns]
All claims in the output must be tagged with one of four levels:
| Label | Meaning |
|---|---|
| Known | Supported by evidence provided or observed |
| Inferred | Logical conclusion from available evidence, but not directly proven |
| Plausible | Reasonable given the context, but unproven — could go either way |
| Speculative | Genuinely uncertain — included because it matters, not because it's likely |
In lean mode, apply labels to concrete bets. In deep mode, apply labels throughout.
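Because the skill is meant for repeated runs against the same product, the labeling rule above can be checked mechanically. A minimal sketch, not part of the skill itself: the section names and label set mirror this document's templates, but `lint_memo` and its rules are illustrative assumptions, not a defined API.

```python
import re

# The four confidence labels defined above, plus sections both
# templates require. Names mirror this document's templates.
CONFIDENCE_LABELS = {"known", "inferred", "plausible", "speculative"}
REQUIRED_SECTIONS = ["Future Snapshot", "Strategic Thesis",
                     "Opportunity Spaces", "Concrete Bets", "Anti-Bets"]

def lint_memo(text: str) -> list[str]:
    """Return a list of problems found in a generated futureback memo."""
    problems = []
    # Every required section must appear as a "## ..." heading.
    for section in REQUIRED_SECTIONS:
        if not re.search(rf"^##\s+{re.escape(section)}", text, re.M):
            problems.append(f"missing section: {section}")
    # Every numbered bet line must carry at least one confidence label.
    bets = re.search(r"^## Concrete Bets\n(.*?)(?=^## |\Z)",
                     text, re.M | re.S)
    if bets:
        for line in bets.group(1).splitlines():
            if re.match(r"^\d+\.", line) and not any(
                    label in line.lower() for label in CONFIDENCE_LABELS):
                problems.append(f"unlabeled bet: {line[:40]}")
    return problems
```

Run against each re-generated memo, this catches the two most common structural failures (a dropped section, an unlabeled bet) before the output is compared with a previous version.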
The following are hard constraints; violating any of them means the output has failed. Never produce:
- Generic futurism that could apply to any product
- Vague inspiration or fake profundity
- TED-talk language or brand copy
- Trend soup ("AI + blockchain + community + personalization")
- Abstract adjectives with no observable behavior attached
- Feature dumps disguised as strategy
- Sci-fi tropes copied from movies or novels
- Detached speculation that ignores economics or constraints
- Assumptions presented as facts
- Beautifully written nonsense
Apply these tests to the finished output before delivering:
- Montage test: Does the snapshot show multiple DIFFERENT users doing DIFFERENT things? If it's one person's day, rewrite as a montage.
- Surprise test: Does at least one vignette show a use the founder didn't plan for? If every use was obvious, push harder.
- Substitution test: If a vignette could apply to almost any product, rewrite it until it can't.
- Obsolescence test: If the output cannot say what becomes obsolete or unnecessary, it is too weak.
- AI test: If every recommendation sounds like "add AI," the output failed. Find the actual mechanism.
- Anti-goal test: If the output cannot say what to stop doing, it failed. Force a cut.
- Brevity test: If the strategy sections are longer than the snapshot, you're over-explaining. Tighten.
- Strategy test: If the result sounds like brand copy, not product strategy, it failed.
This skill is designed for repeated strategic use, not one-shot inspiration. To prevent the output from becoming a dead document:
- Anchor to a decision. Derive the decision target from the vision (never from intake questions) and state it explicitly in the output. An output with no decision attached is a dead document.
- Name the audience. State who should act on this output. If no one will, don't generate it.
- Tie to existing strategy. Connect the output to the user's current roadmap, OKRs, or strategy question. If those exist, reference them explicitly.
- Re-run when evidence changes. When new user research, telemetry, or market signals arrive, re-run the skill against the same decision target. Compare what shifted.
- Track falsifiers. The falsifiers section is not decoration. Check those signals periodically. If a falsifier triggers, the thesis needs revision.
- Version the output. If re-running, note what changed from the previous version and why.
For the future snapshot:
- Cinematic, concrete, vivid
- Show a person, a place, a moment, a felt experience
- Emotionally resonant but not sentimental
- Tight — no purple prose, no lingering
- The scene should make the reader feel something shift
For the strategy sections:
- Direct, rigorous, sober
- Confident but not fake-certain
- Rich in specifics, grounded in constraints
- No filler, no preamble, no hedging for its own sake
- Write like you're advising someone who will actually act on this
General:
- No em dashes used decoratively
- No corporate tone
- No "here's what you need to know" framing
- Say what you mean. Cut what you don't.
Example invocations:
"Futureback this: show me our B2B analytics product 24 months from now if PMs stop opening dashboards and expect answers in workflow."
"Project the believable future of this note-taking app and tell me what that changes for the roadmap."
"Write the future that would make our creator-tool roadmap look naive."
"Imagine this developer tool if it becomes the default workflow in 18 months. Then backcast the product bets."
"Show me what this habit app becomes if it actually wins the job users hired it for."
"Imagine the future of this marketplace if trust becomes ambient and invisible."
"Help me see what this AI coding app wants to become, not just what features it should add."
"Project the future of this project management product if status reporting disappears into the workflow."
"Write a believable future scene for this wellness product, then extract the strategic implications."
"Show me the future version of this SaaS product that changes how teams behave, then tell me what to build and what to kill."
"Futureback this in deep mode — I have user research, telemetry, and a roadmap I want to stress-test."
Requests this skill should decline and redirect:
"Give me some ideas for my app." — Use brainstorming, not futureback.
"Write an inspiring vision statement." — This skill produces strategy, not copy.
"What features should I add?" — Use feature prioritization or product spec skills.
"Help me brainstorm." — This is not a brainstorming tool.
"Make this sound futuristic." — This is not a writing style tool.
"Write me a poetic future for my startup." — This skill is not writer mode. The scene serves strategy, not aesthetics.
This skill is written for Claude but designed to be platform-portable. The core method (decision target, context pack, present restatement, future scene, strategic extraction, backcasting, anti-goals, falsifiability, decision pressure) works in any environment that supports structured prompting and iterative conversation.
When running in an environment with file access (Claude Code, Codex, or similar):
- Read available product documents, roadmaps, research files, or strategy notes before generating
- Reference specific evidence from those files in the output
- Save the output to an appropriate location if requested
When running in a pure chat environment:
- Ask the user to paste relevant context
- Work with whatever is provided
- Declare assumptions explicitly when evidence is thin