Best practices for writing effective Claude prompts, based on Anthropic guidance as of spring 2026.
XML tags are the strongest signal for separating prompt sections. Claude was trained to treat them as semantic boundaries.
<context>
...background...
</context>
<task>
...what to do...
</task>
<rules>
- ...
</rules>
XML tags are especially worth adopting when a prompt has 3+ distinct sections (context, task, rules, schema, examples); smaller prompts don't need them.
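As a sketch, a sectioned prompt like the one above can be assembled programmatically; `buildPrompt` here is a hypothetical helper, not part of any SDK:

```typescript
// Hypothetical helper: wraps named prompt sections in XML tags so
// Claude sees clear semantic boundaries between them.
function buildPrompt(sections: Record<string, string>): string {
  return Object.entries(sections)
    .map(([tag, body]) => `<${tag}>\n${body}\n</${tag}>`)
    .join("\n\n");
}

const prompt = buildPrompt({
  context: "Acme sells refurbished laptops.",
  task: "Draft a product description for the X200.",
  rules: "- Keep it under 100 words.",
});
```

Keeping section names as object keys makes it easy to add or drop a section without touching string formatting.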
When asking for JSON, provide the exact schema shape — not just "return JSON." The more specific the contract, the fewer surprises.
Serializing a Zod schema to JSON and injecting it directly into the prompt (z.toJSONSchema(...)) is a clean pattern for TypeScript projects.
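A minimal sketch of the injection pattern; the hand-written schema object below stands in for what `z.toJSONSchema(...)` would produce in a real Zod 4 project, to keep the example dependency-free:

```typescript
// Stand-in for z.toJSONSchema(myZodSchema) — written out by hand here
// so the sketch runs without Zod installed.
const reviewSchema = {
  type: "object",
  properties: {
    rating: { type: "integer", minimum: 1, maximum: 5 },
    summary: { type: "string" },
  },
  required: ["rating", "summary"],
};

// The serialized schema becomes part of the prompt contract.
const jsonPrompt = [
  "Summarize the review below.",
  "Return ONLY JSON matching this schema:",
  JSON.stringify(reviewSchema, null, 2),
].join("\n\n");
```

Because the prompt is generated from the same schema the code validates against, the contract can't silently drift out of sync.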
If a JSON parse failure is catastrophic, consider Anthropic's Structured Outputs API, which enforces the schema at the API layer. For most workloads, a retry-plus-parse loop is enough, and API-level enforcement is overkill.
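A minimal retry-plus-parse sketch, where `callModel` is a hypothetical stand-in for the actual API call:

```typescript
// Retry + parse: call the model, try to parse its text output as JSON,
// and retry on malformed output up to maxAttempts times.
async function getJson<T>(
  callModel: () => Promise<string>,
  maxAttempts = 3,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const text = await callModel();
    try {
      return JSON.parse(text) as T; // valid JSON: done
    } catch (err) {
      lastError = err; // malformed JSON: try again
    }
  }
  throw new Error(`No valid JSON after ${maxAttempts} attempts: ${lastError}`);
}
```

In practice the retry could also feed the parse error back into the next call, but the bare loop is usually sufficient.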
Positive instructions are more reliable than negative ones: a negative asks the model to suppress a natural behavior rather than follow a clear directive.
# Less effective
Do not use site: filters in search queries.
# More effective
Form queries around product attributes and use case — let search results surface the best current sources.
Targeted negatives for specific known failure modes are fine (e.g. "ONLY use URLs present in the data above — never invent them" targets URL hallucination specifically). Broad negatives ("don't do X") are less reliable.
Claude generalizes from the reason behind a rule better than from the rule alone. Where a constraint isn't obvious, include the motivation.
# Without motivation — Claude follows it literally
Do not use site: filters.
# With motivation — Claude understands the principle
Do not use site: filters. Attribute-based queries let search results surface who makes the best thing right now, rather than anchoring on brands from training data.
This matters most when the model's defaults actively work against your goals.
Split responsibilities between the two prompt roles:
- System prompt: persona, persistent constraints, output format/schema. Keep it concise; Claude 4.6+ responds better to focused system prompts.
- User prompt: the task, the data, the context for this specific call.
Watch for: putting dynamic context (specific topic, inputs, extracted data) in the system prompt — it belongs in the user turn.
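A sketch of the split, shaped like an Anthropic Messages API request body; the model name and the `buildRequest` helper are illustrative assumptions:

```typescript
// Persona and output schema live in `system` (stable across calls);
// the per-call topic and data go in the user turn.
function buildRequest(topic: string, data: string) {
  return {
    model: "claude-sonnet-4-5", // assumed model name, for illustration
    max_tokens: 1024,
    system:
      "You are a market analyst. Always answer as JSON: " +
      '{"summary": string, "risks": string[]}',
    messages: [
      {
        role: "user" as const,
        content: `Analyze ${topic}.\n\n<data>\n${data}\n</data>`,
      },
    ],
  };
}
```

Note that nothing call-specific appears in `system`, so the same system prompt can be cached and reused across requests.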
For search-oriented prompts:
- Attribute-first queries — form queries around use case and quality signals, not brands or sites. Let real search results surface who makes the best thing today.
- No site: filters unless the brief explicitly names a source.
- One search at a time, with evaluation before the next.
- Stop early — "stop as soon as you have N results" prevents unnecessary searches.
- URL source discipline — collect URLs from search result snippets only; do not fetch pages.
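The rules above can be sketched as a collection loop; `search` is a hypothetical snippet-returning function, not a real API:

```typescript
interface SearchResult {
  url: string;
  snippet: string;
}

// One query at a time, URLs taken from result snippets only,
// stop as soon as `needed` results are collected.
async function collectUrls(
  queries: string[],
  search: (q: string) => Promise<SearchResult[]>,
  needed: number,
): Promise<string[]> {
  const urls: string[] = [];
  for (const query of queries) {
    if (urls.length >= needed) break; // stop early
    const results = await search(query); // one search at a time
    for (const r of results) {
      if (urls.length < needed && !urls.includes(r.url)) {
        urls.push(r.url); // only URLs present in results, never invented
      }
    }
  }
  return urls;
}
```

The early `break` is what the "stop as soon as you have N results" instruction buys you: later queries are never issued.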
Including examples is the single highest-leverage technique for format compliance. When a prompt produces inconsistent output, add 1–2 concrete examples before adding more rules.
Good candidates:
- Any prompt asking for structured notes or descriptions — show a good example vs. a vague one
- Any prompt generating search queries — show an attribute-first query vs. a brand-anchored one
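A sketch of embedding a contrastive pair directly in a query-generation prompt; the example queries below are invented for illustration:

```typescript
// One good example and one bad example, shown before the task,
// so the model anchors on the attribute-first style.
const queryPrompt = [
  "Generate one web search query for the brief below.",
  "<examples>",
  'Good (attribute-first): "quiet low-profile mechanical keyboard office"',
  'Bad (brand-anchored): "Logitech keyboard site:logitech.com"',
  "</examples>",
  "<brief>\nA keyboard for a shared office.\n</brief>",
].join("\n");
```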
For prompts with large data payloads, put the data first and the task/rules after. This can meaningfully improve how well Claude attends to the data.
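A minimal sketch of data-first assembly, using the same XML-tag convention as above:

```typescript
// The large payload goes first; task and rules come after it,
// closest to where the model begins its answer.
function dataFirstPrompt(data: string, task: string, rules: string): string {
  return [
    `<data>\n${data}\n</data>`,
    `<task>\n${task}\n</task>`,
    `<rules>\n${rules}\n</rules>`,
  ].join("\n\n");
}
```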
- Extended thinking / budget_tokens: skip for structured output tasks. Thinking increases latency and cost; if you just want compliant JSON, you don't need it.
- Prefilled assistant messages: deprecated on Claude 4.6. Don't use.