@danielrose7
Created April 16, 2026 23:04

Claude Prompt Patterns

Best practices for writing effective Claude prompts, based on Anthropic guidance as of spring 2026.

Structure: Use XML tags

XML tags are the strongest signal for separating prompt sections. Claude was trained to treat them as semantic boundaries.

<context>
...background...
</context>

<task>
...what to do...
</task>

<rules>
- ...
</rules>

XML tags are worth adopting especially when a prompt has 3+ distinct sections (context, task, rules, schema, examples). Smaller prompts don't need them.
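As a minimal sketch of assembling such a prompt programmatically (the `tag` helper and all section content are my own illustration, not from Anthropic docs):

```typescript
// Hypothetical helper: wrap a prompt section in a named XML tag.
const tag = (name: string, body: string): string =>
  `<${name}>\n${body}\n</${name}>`;

// Assemble a three-section prompt; content here is illustrative only.
const prompt = [
  tag("context", "The user is comparing standing desks under $400."),
  tag("task", "Recommend the three best options with one-line reasons."),
  tag("rules", "- Base every claim on the provided context."),
].join("\n\n");
```

Keeping the tag names stable across calls means downstream prompts stay greppable and easy to diff.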

JSON output: include the schema, not just a description

When asking for JSON, provide the exact schema shape — not just "return JSON." The more specific the contract, the fewer surprises.

Serializing a Zod schema to JSON Schema and injecting the result directly into the prompt (`z.toJSONSchema(...)`) is a clean pattern for TypeScript projects.
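A hedged sketch of the schema-in-prompt pattern. To stay dependency-free, the schema below is hand-written JSON Schema standing in for what `z.toJSONSchema(...)` would emit in a Zod project, and `jsonPrompt` is an illustrative helper, not a library function:

```typescript
// Hand-written JSON Schema, standing in for z.toJSONSchema(Product) output.
const productSchema = {
  type: "object",
  properties: {
    name: { type: "string" },
    price: { type: "number" },
    tags: { type: "array", items: { type: "string" } },
  },
  required: ["name", "price", "tags"],
  additionalProperties: false,
};

// Inject the exact schema shape into the prompt, not just "return JSON".
function jsonPrompt(task: string, schema: object): string {
  return (
    `<task>\n${task}\n</task>\n\n` +
    `<output_format>\nReturn ONLY JSON matching this schema:\n` +
    `${JSON.stringify(schema, null, 2)}\n</output_format>`
  );
}
```

Because the schema in the prompt is generated from the same object your code validates against, the contract can't silently drift between the two.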

If a JSON parse failure is catastrophic, consider Anthropic's Structured Outputs API, which enforces the schema at the API layer. For most use cases, a simple retry-and-parse loop is enough and API-level enforcement is overkill.

Positive framing over "do not"

Positive instructions are more reliable than negative ones: a "do not" rule asks the model to suppress a natural behavior, while a positive instruction gives it a clear directive to follow.

# Less effective
Do not use site: filters in search queries.

# More effective
Form queries around product attributes and use case — let search results surface the best current sources.

Targeted negatives for specific known failure modes are fine (e.g. "ONLY use URLs present in the data above — never invent them" targets URL hallucination specifically). Broad negatives ("don't do X") are less reliable.

Explain why constraints exist

Claude generalizes from the reason behind a rule better than from the rule alone. Where a constraint isn't obvious, include the motivation.

# Without motivation — Claude follows it literally
Do not use site: filters.

# With motivation — Claude understands the principle
Do not use site: filters. Attribute-based queries let search results surface who makes the best thing right now, rather than anchoring on brands from training data.

This matters most when the model's defaults actively work against your goals.

System prompt vs. user prompt

  • System prompt: persona, persistent constraints, output format/schema. Keep it concise — Claude 4.6+ responds better to focused system prompts.
  • User prompt: the task, the data, the context for this specific call.

Watch for: putting dynamic context (specific topic, inputs, extracted data) in the system prompt — it belongs in the user turn.
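A sketch of the split using the request shape of the Anthropic Messages API. The model id and all content strings are placeholders, and the request object is shown as a plain literal rather than an SDK call:

```typescript
// Static, reusable instructions live in `system`; per-call data lives in the user turn.
const extractedData = "rows extracted upstream (placeholder)"; // dynamic context

const requestBody = {
  model: "claude-sonnet-4-5", // placeholder model id
  max_tokens: 1024,
  system:
    "You are a concise research assistant. " +
    "Always answer as JSON matching the schema in the user message.",
  messages: [
    {
      role: "user",
      content: `<data>\n${extractedData}\n</data>\n\n<task>\nSummarize the data.\n</task>`,
    },
  ],
};
```

The point of the shape: `system` contains nothing that changes between calls, so it can be cached and reviewed once, while everything call-specific flows through `messages`.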

Search-specific patterns

  1. Attribute-first queries — form queries around use case and quality signals, not brands or sites. Let real search results surface who makes the best thing today.
  2. No site: filters unless the brief explicitly names a source.
  3. One search at a time with evaluation before the next.
  4. Stop early — "stop as soon as you have N results" prevents unnecessary searches.
  5. URL source discipline — collect URLs from search result snippets only; do not fetch pages.
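Rules 3-5 above can be sketched as a loop. The `search` function and `Result` shape are assumptions for illustration; a real implementation would call whatever search tool the agent has:

```typescript
type Result = { title: string; url: string; snippet: string };

// Sequential search with early stopping; URLs come from result snippets only.
function collectResults(
  queries: string[],
  search: (q: string) => Result[], // assumed search tool, called one query at a time
  n: number,
): Result[] {
  const found: Result[] = [];
  for (const q of queries) {
    if (found.length >= n) break; // rule 4: stop as soon as we have N results
    for (const r of search(q)) {
      if (found.length < n) found.push(r); // rule 5: keep snippet URLs, never fetch pages
    }
  }
  return found;
}
```

The early `break` is what makes rule 4 concrete: later queries are never issued once the quota is met, which is exactly the behavior the prompt wording should elicit from the agent.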

Few-shot examples

Including examples is the single highest-leverage technique for format compliance. When a prompt produces inconsistent output, add 1–2 concrete examples before adding more rules.

Good candidates:

  • Any prompt asking for structured notes or descriptions — show a good example vs. a vague one
  • Any prompt generating search queries — show an attribute-first query vs. a brand-anchored one
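A sketch of a contrastive pair for the search-query case; the example queries are mine and purely illustrative:

```typescript
// One good/bad pair usually beats several extra rules for format compliance.
const fewShotBlock = `
<examples>
<good_query>quiet low-profile mechanical keyboard hot-swappable under $150</good_query>
<bad_query>best Logitech keyboard site:reddit.com</bad_query>
</examples>`.trim();
```

Placing this block before the task gives the model a concrete contrast to imitate instead of an abstract rule to interpret.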

Ordering matters for long prompts

For prompts with large data payloads, put the data first and the task/rules after. This can meaningfully improve how well Claude attends to the data.
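The data-first ordering can be sketched as a small assembler (helper name assumed):

```typescript
// Long-context ordering: large data payload first, task and rules last,
// so the instructions sit closest to the point of generation.
function longContextPrompt(data: string, task: string, rules: string[]): string {
  return [
    `<data>\n${data}\n</data>`,
    `<task>\n${task}\n</task>`,
    `<rules>\n${rules.map((r) => `- ${r}`).join("\n")}\n</rules>`,
  ].join("\n\n");
}
```

Centralizing the ordering in one function keeps it from regressing when individual prompts are edited later.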

What we don't use (and why)

  • Extended thinking / budget_tokens: Skip for structured output tasks. Thinking increases latency and cost; if you just want compliant JSON, you don't need it.
  • Prefilled assistant messages: Deprecated on Claude 4.6. Don't use.
also

  • Prune anything the agent already does without being told
  • Each line should be actionable, not aspirational
  • Prefer positive guidance over negation
