This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
This repository contains a technical and architectural audit of a system. It is an evolving documentation vault, not a codebase.
All generated document content must be in Russian by default.
- Preserve original English terminology (field names, UI labels, status names) when documenting facts. Optionally add Russian translation in parentheses where meaning is non-obvious.
- Exception: files in `20-Sources/24-code-observations/` are written in English.
- All files must be Obsidian-compatible markdown
- Use Obsidian linking syntax: `[[filename]]` for internal links
- Images go in `_attachments/` folders and are embedded with `![[image.png]]`
- Keep files readable in Obsidian — avoid raw HTML, prefer standard markdown
- Do not create empty directories — directories appear naturally when files are added
- Structure below is a guideline, not a strict rule. Notes can be organized freely
- Naming convention: all folders and files use number prefixes for ordering (`10-Framing/`, `11-Goals.md`, `21-interviews/`, `33-Subdomains.md`, etc.). Files inside source subfolders (e.g. individual interview notes) are named freely
- `00-Overview.md`: keep this index up to date after creating, moving, or deleting files. Only list files that actually exist
- No emoji in generated content
- Use `<br/>` for line breaks inside Mermaid nodes and table cells
- Diagrams: Mermaid only (never ASCII art), plain colors, no custom themes or styles
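A minimal sketch of these conventions as they would appear inside a note (filenames here are illustrative, not real vault files):

```markdown
Подробнее см. [[33-Subdomains]] и [[35-Topology]].

![[_attachments/topology-sketch.png]]

| Система | Назначение |
|---|---|
| site-soap | Legacy SOAP API<br/>поддерживается частично |
```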
- 00-Overview.md — index with links to all files
- 10-Framing/
- 11-Goals (audit goals, stakeholder questions)
- 12-Scope (boundaries: what's in, what's out)
- 13-Principles (evaluation criteria, quality attributes)
- 20-Sources/ — raw inputs
- 21-interviews/ (include date and interviewee name)
- 22-documents/ (RFCs, ADRs, runbooks, specs)
- 23-ui-observations/ (UI/UX analysis, site audit)
- 24-code-observations/ (code, infra, metrics — see "Code Observation Structure" below)
- 25-product/ (CJMs, user stories, personas as-is)
- 26-team-sessions/ (audit team session notes)
- 30-Discovery/ — AS-IS: what exists (current state, no judgments, no recommendations)
- 31-Facts, 33-Subdomains, 35-Topology, 38-Process-Maps, 37-Flow-Map
- 40-Analysis/ — strategic DDD: what's wrong and what should be better
- 41-Insights, 42-Risk-Register, 43-Context-Map, 44-Target-Topology
- 50-Action-Points/ — how to get from AS-IS to TO-BE
- 51-Executive-Summary, 52-Key-Questions, 53-Roadmap, 54-Key-Decisions
- 99-Drafts.md — scratch notes
Folder layout inside `20-Sources/24-code-observations/`:
- `00-Overview.md` — living index of all codebase folders with links
- `01`–`09` range — reserved for cross-codebase analysis folders (e.g. `01-cross-overall/`, `02-somesite-cross/`, `03-sellerssite-cross/`)
- `10`+ — one folder per codebase (e.g. `10-site-soap/`, `11-old-site-ru/`)
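Put together, a vault following this layout might look like (codebase names are examples only):

```
20-Sources/24-code-observations/
├── 00-Overview.md
├── 01-cross-overall/
├── 10-site-soap/
│   ├── 01-service-overview.md
│   └── 07-configurations.md
└── 11-old-site-ru/
    └── 01-service-overview.md
```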
Rules:
- Every new research task = new numbered file inside the codebase folder
- After per-codebase research on a topic, update corresponding Discovery artifacts (`30-Discovery/`) in a separate session
- File naming: `NN-<topic-slug>.md` (e.g. `07-configurations.md`)
- Keep `00-Overview.md` up to date when adding new codebase folders
- Codebases live at `../../code/<codebase-name>/`
Broad topics (combine related concerns into one file):
| # | Slug | Scope |
|---|---|---|
| 01 | service-overview | Purpose, deployable units, stack, runtime, framework versions, key packages, project structure |
| 02 | code-quality | Testing (coverage, types), linting, formatting, CI/CD pipeline, Dockerfiles, deprecated/vulnerable packages |
| 03 | error-handling-and-observability | Error handling patterns, logging, APM, tracing, metrics, analytics SDKs |
| 04 | security | Auth, authz, CORS, input validation, exposed dashboards, credential handling |
Precise topics (detailed, every item with file:line references):
| # | Slug | Scope |
|---|---|---|
| 05 | api-endpoints | All REST/SOAP/GraphQL endpoints, routing, controllers, versioning — full inventory |
| 06 | external-dependencies | External APIs, third-party services, integrations — each with purpose and usage |
| 07 | configurations | All config files, env vars, connection strings, API keys, broker configs, secrets — with file:line references. What infrastructure is consumed (DBs, brokers, caches, storage) |
| 08 | data-access-and-flow | DB engines, schemas, ORM/raw SQL patterns, Kafka/RabbitMQ topics, event flows, data pipelines |
| 09 | background-tasks | Hangfire jobs, Kafka consumers, cron, workers — full inventory with schedules and purpose. For each task: DB entity mutations (which tables are written/updated/deleted) and external system calls (APIs, WCF, SMTP, etc.) |
Not every topic applies to every codebase. Skip inapplicable topics; do not create empty files.
Each section in observation files must contain:
- Status: Critical / Needs Attention / OK / Not Analyzed
- Description of current state (facts, discovered issues, metrics)
- Subsections (##) as needed for grouping
- Recommendation: at the end of each section — specific improvements
Rules for code observation sessions (20-Sources/24-code-observations/):
- File references: every code fact must include repo filename and line number, paths relative to codebase root (e.g. `site.soap/src/file.py:42`)
- Observation format:
- H1 as first line (observation name)
- H2-H4 for sections
- NO bold, italic, or emojis in observation files
- Mermaid for diagrams, neutral colors
- Plain text, simple lists and tables
- Credentials: never copy full secrets; reference file+line; redacted placeholders OK
- Unverified claims: mark clearly, separate section or file
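A skeletal observation file following these rules — every name, path, and finding below is a placeholder, not a real fact about the system:

```markdown
# Configurations: site-soap

## Connection strings

Status: Needs Attention

Connection string to the orders DB is hardcoded in site.soap/src/config.py:42.
Credential redacted: Password=<REDACTED>.

Recommendation: move the connection string to environment variables.

## Unverified claims

Team mentioned a second config source; not yet found in code.
```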
Discovery documents the current state of the system as observed. No judgments, no recommendations, no target states. Pure facts derived from sources.
| File | Content |
|---|---|
| 31-Facts.md | Terminology, products, services, infrastructure, security facts, pricing formulas, authentication mechanisms, linked servers — atomic verified facts |
| 33-Subdomains.md | Business subdomains identified from the problem space: what tasks the business performs, which systems serve them, who owns each area. Current classification only (Generic/Supporting/Core). No target state, no transition plans |
| 35-Topology.md | BIG PICTURE: services, databases, servers, protocols, links between subsystems. Mermaid diagrams. Derived from code observations (shared-databases, shared-infrastructure, runtime-versions). Replaces old Service-Map |
| 38-Process-Maps.md | Business processes mapped to technical components: procedures, triggers, databases, cross-DB calls. Keep as-is |
| 37-Flow-Map.md | Data flows between systems: Kafka topics, outbox mechanisms, cross-DB writes, linked server calls, HTTP integrations. Links to source observations for detail. Processes-to-systems mapping |
Rules:
- Discovery is the synthesis phase for per-codebase observations — after research, Discovery artifacts are updated from accumulated observations
- No recommendations, no "should be", no target states in Discovery files
- Every claim traceable to a source observation file
Analysis synthesizes Discovery into strategic insights. Strictly grounded in discovered subdomains and business flows.
| File | Content |
|---|---|
| 41-Insights.md | Architectural observations, cross-cutting conclusions, open questions, risk-to-goals mapping. No risk descriptions (those live in 42-Risk-Register) |
| 42-Risk-Register.md | Individual technical risks: bugs, vulnerabilities, race conditions, performance issues. Actionable findings with description, consequences, recommendation, sources |
| 43-Context-Map.md | TO-BE: bounded contexts that would be better for the system, with explanation WHY and HOW they map to discovered subdomains. Strategic DDD relationships (Shared Kernel, Customer-Supplier, ACL, etc.). Mermaid diagrams comparing AS-IS vs TO-BE |
| 44-Target-Topology.md | Target service/database architecture: how contexts map to services, what should be split/merged, database ownership boundaries. References risks that motivate changes |
Rules:
- Every analysis artifact must reference discovered subdomains (33-Subdomains.md) and business flows (38-Process-Maps.md, 37-Flow-Map.md)
- Context Map (43) defines TO-BE bounded contexts derived from AS-IS subdomains
- Target Topology (44) shows how services/DBs should evolve to match contexts
- Risks (42) and Insights (41) inform Context Map and Target Topology but don't duplicate them
- 42-Risk-Register.md — individual technical risks: bugs, vulnerabilities, race conditions, data loss, hardcoded values, dead code, performance issues. Each risk is a specific, actionable finding with description, consequences, recommendation, and sources. Risk entries should NOT contain analytical conclusions or prioritization.
- 41-Insights.md — architectural observations, cross-cutting conclusions, open questions, and synthesis that emerges from combining multiple observations. Insights reference risks by ID (e.g. "связанный риск: R005") but do NOT restate risk descriptions. The risk prioritization matrix (mapping risks to business goals) belongs here as synthesis, using risk IDs and short labels only.
Rule: if a finding is a specific technical issue with a fix — it goes in Risk Register. If it is a systemic observation, architectural pattern, open question, or cross-risk synthesis — it goes in Insights. Never duplicate risk descriptions between the two files.
ID stability rule: never delete, renumber, or rewrite existing risk (R001, R002...) or insight (I001, I002...) IDs. To retire an entry, add "Статус: Закрыт" or "Статус: Митигирован" with explanation — do not remove the section. New entries always get the next sequential number. This preserves cross-references across all documents.
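For example, retiring a risk keeps its section and ID intact (the risk itself is illustrative):

```markdown
## R005. Race condition при обновлении цен

Статус: Закрыт — исправлено командой, подтверждено повторным ревью.

Описание: ...
```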
| File | Content |
|---|---|
| 51-Executive-Summary.md | 1-2 page overview for stakeholders: key findings, top risks, strategic direction |
| 52-Key-Questions.md | Questions that need answers from the team before proceeding |
| 53-Roadmap.md | Phased plan: what to do first, second, third. References risks and context map |
| 54-Key-Decisions.md | Decisions that need to be made, with options and trade-offs |
Before writing or updating any analysis artifacts (30-Discovery/, 40-Analysis/, 50-Action-Points/), ALWAYS read 10-Framing/11-Goals.md first. Every analysis must be aligned with the agreed deliverables and business goals defined there.
Never modify both source observations (20-Sources/) and downstream artifacts (30-Discovery/, 40-Analysis/, 50-Action-Points/) in the same session. Write observations first, then in a separate session update discovery/analysis/action-points based on accumulated observations.
File: 30-Discovery/33-Subdomains.md
General rules:
- Subdomain names come from the problem space (business terms, not technical)
- This is a Discovery artifact — current state only, no "should be"
Each subdomain entry must contain:
- Name — business subdomain name
- Current classification — Generic / Supporting / Core (with rationale)
- Business tasks — what the business does within this subdomain
- Systems involved — which codebases/services/databases serve this subdomain
- Ownership — who owns this area (team, person, or "unclear")
- Key observations — notable facts, NOT risks or recommendations
No target classification, no transition plans, no "should be" — those go in 43-Context-Map.md (Analysis).
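A subdomain entry under these rules might look like this (subdomain, systems, and observations are placeholders):

```markdown
## Ценообразование (Pricing)

- Current classification: Core — прямое влияние на выручку
- Business tasks: расчёт цен, скидки, промо-акции
- Systems involved: site-soap, pricing DB
- Ownership: unclear
- Key observations: формулы расчёта цен дублируются в двух кодовых базах
```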
Reports answering client questions about the system. Written for a non-technical or semi-technical audience.
Location: 40-Analysis/45-Client-QA/, one file per topic (01-Search.md, 02-Pricing.md, etc.).
Style rules:
- Language: Russian (preserve English technical terms as in all analysis files)
- Target audience: client stakeholders — simple, clear, no jargon without explanation
- Structure per question: short answer first, then detailed explanation
- Use tables for structured comparisons and parameter lists
- Use Mermaid diagrams (flowchart TD) for processes and system interactions — keep diagrams A4-portrait-friendly: top-down direction, short node labels, max 7-8 nodes per subgraph
- Every claim must be traceable to source observations or discovery artifacts
- Include a "Ключевые риски" section at the end if relevant findings exist
- No code snippets or file:line references — those belong in observations, not client-facing reports
- Group related questions into one file by topic (e.g. search, pricing, logistics)
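An A4-portrait-friendly diagram under these constraints could be sketched as follows (system names are placeholders):

```mermaid
flowchart TD
    U[Клиент] --> S[Сайт]
    S --> P[Сервис цен]
    P --> D[(БД цен)]
```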
- Main goal: make text easily readable
- Prefer light editing over heavy rewrites
- Don't delete lines if unsure what to do — ask user
- Append "Need meetings" and "Questions" sections at the end
- Use bullets (-) not checkboxes
- Extract #note tags into appropriate sections