
@owainlewis
Last active April 23, 2026 03:07
AI Coding Principles

AI Agent Engineering Workflow

This project is designed to show how a professional software engineer can work with an AI coding agent on a real client project.

The goal is not to ask an agent to "just build the app." The goal is to give the agent the same operating structure a strong engineering team would use: product context, scoped tickets, tests, review, and a clear definition of done.

Core Principle

Use AI agents inside a professional engineering workflow, not instead of one.

The agent is most useful when the project has:

  • clear product intent
  • explicit phase boundaries
  • well-scoped tickets
  • small implementation loops
  • meaningful tests
  • code review
  • a source of truth for project status

For this project, the source of truth for current work is Linear.

Principles for Agentic Engineering

These principles are useful beyond this project.

Think Before You Code

Do not rush from prompt to patch.

First understand the goal, the current code, the constraints, and the definition of done. A few minutes of orientation prevents the agent from confidently solving the wrong problem.

Use the Right Amount of Process

Small tasks should stay lightweight. If the change is obvious, make it, test it, and move on.

Complex work needs more structure: a spec, a ticket, acceptance criteria, and a verification plan. Ceremony is wasteful when it does not reduce ambiguity, but invaluable when it does.

Make Work Visible

For real client projects, use a proper task management system such as Linear.

Visible work helps humans think clearly and helps agents operate effectively. A ticket gives the agent scope, context, and a finish line. It also gives the team a shared view of what is in progress, done, deferred, or deliberately out of scope.

Treat Tickets as Working Specs

The best tickets describe outcomes, not keystrokes.

Tell the agent what should be true when the work is complete, why it matters, and how to verify it. Avoid prescribing implementation details unless they are genuine constraints.

Keep Durable Context Separate From Current Work

Product specs and phase docs should describe stable intent and direction.

Linear tickets should describe the current unit of implementation.

AGENTS.md should contain only persistent operating rules that help agents work well in this repo.

Mixing these layers creates stale instructions and confused agents.

Optimize for Learning Early

In early-stage projects, the biggest risk is often not code quality. It is building the wrong thing.

Choose implementation steps that produce evidence quickly. For this project, a spreadsheet is better than a database until we know whether the scraped jobs produce useful company leads.

Defer Architecture Until the Data Earns It

Do not build a database, API, UI, auth system, or deployment pipeline just because they will probably exist later.

Build them when the current evidence says they are needed. This keeps the project adaptable and prevents imagined requirements from hardening into expensive structure.

Keep Agents Grounded

Agents work best with concrete context:

  • the product spec
  • the current phase
  • the Linear ticket
  • the existing code
  • the verification command

Without grounding, agents tend to invent architecture, expand scope, or optimize for a future that may never arrive.

Verify What Matters

Tests should protect the behavior most likely to break or cause bad decisions.

For scraper work, parsing, export shape, dedupe, and rate-limit behavior matter. Framework trivia does not.
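As a sketch, a dedupe check can protect one of those behaviors directly. The `dedupe_jobs` function and the dict shape below are illustrative assumptions, not the project's actual code:

```python
# Hypothetical dedupe helper: jobs are dicts, and identity is
# (job_title, company_name). The real pipeline may differ.

def dedupe_jobs(jobs):
    """Drop duplicate postings, keeping the first occurrence."""
    seen = set()
    unique = []
    for job in jobs:
        key = (job["job_title"], job["company_name"])
        if key not in seen:
            seen.add(key)
            unique.append(job)
    return unique

def test_dedupe_keeps_first_occurrence():
    jobs = [
        {"job_title": "Data Engineer", "company_name": "Acme", "source": "board_a"},
        {"job_title": "Data Engineer", "company_name": "Acme", "source": "board_b"},
        {"job_title": "Recruiter", "company_name": "Acme", "source": "board_a"},
    ]
    result = dedupe_jobs(jobs)
    assert len(result) == 2
    assert result[0]["source"] == "board_a"  # first occurrence wins
```

A test like this documents a decision (which source wins a tie) as well as guarding the behavior.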

Review Before Done

Do not confuse "the agent produced code" with "the work is done."

Done means the change matches the ticket, stays in scope, passes verification, and has been reviewed for real risks.

Preserve Optionality

Good engineering keeps future paths open without building all of them now.

Use small interfaces where they reduce future mess, but avoid building abstractions before the shape of the problem is visible. The source interface in this project is useful because more job sources are plausible; a full backend platform would be premature.
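A minimal sketch of what such a source interface might look like in Python. The `JobSource` name and `fetch_jobs` signature are assumptions for illustration, not the project's actual interface:

```python
from typing import Iterable, Protocol

class JobSource(Protocol):
    """Hypothetical interface each job-board scraper would implement."""

    name: str

    def fetch_jobs(self) -> Iterable[dict]:
        """Return raw job postings as dicts."""
        ...

class ExampleBoardSource:
    """Stub source; a real implementation would scrape a job board."""

    name = "example_board"

    def fetch_jobs(self) -> Iterable[dict]:
        return [{"job_title": "Engineer", "company_name": "Acme"}]

# Downstream code depends only on the protocol, so adding a second
# job board means adding a class, not reworking the pipeline.
source: JobSource = ExampleBoardSource()
jobs = list(source.fetch_jobs())
```

Note how small the interface is: one attribute and one method, just enough to make a second source cheap.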

Project Planning Layers

Product Spec

docs/product-spec.md is the durable product context.

It should answer:

  • what are we building?
  • who is it for?
  • what problem does it solve?
  • what is in scope now?
  • what is explicitly not in scope yet?

Agents should read this when a task touches product direction, scope, phases, or user workflow.

Phase Docs

docs/phases/ contains the roadmap.

Phase docs should be high level. Their job is to keep the team honest about sequence and scope, not to over-plan every detail months in advance.

Current phases:

  1. Scrapers and Spreadsheets
  2. Company Research
  3. Database and Backend APIs
  4. Minimal Review UI
  5. Product Polish and Workflow
  6. Deployment and Operations

Only the current phase should be planned in detail.

Linear Tickets

Linear tickets are the working specs for implementation.

A good ticket describes what outcome should exist and why it matters. It should avoid prescribing implementation details unless those details are true constraints.

Good tickets give the agent enough freedom to inspect the codebase and choose the right implementation.

Linear Ticket Template

Use this structure for implementation tickets:

## Goal

What outcome should exist when this ticket is done?

## Context

Why this matters, what phase this belongs to, and any relevant constraints.

## Acceptance Criteria

- Observable behavior or result
- Edge cases that must be handled
- What should not happen

## Out of Scope

- Tempting work that should not be done in this ticket

## Verification

Commands, checks, or manual review steps that prove this is done.

Prefer outcome-oriented acceptance criteria.

Good:

- The CLI writes a CSV file.
- The visible columns are `job_title`, `company_name`, `source`, and `created_at`.
- The command can be run locally from `backend/`.

Too prescriptive:

- Create a function called `export_jobs_csv` in `pipeline/export.py` using `csv.DictWriter`.

Implementation details are fine when they are real constraints. Otherwise, describe the behavior and let the agent work with the existing code.
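For illustration, the outcome-oriented criteria above could be satisfied by something this small. The `export_jobs_csv` name and `COLUMNS` list are assumptions; the ticket deliberately leaves the implementation open:

```python
import csv

# Assumed column order from the acceptance criteria.
COLUMNS = ["job_title", "company_name", "source", "created_at"]

def export_jobs_csv(jobs, path):
    """Write jobs to CSV in the agreed column order; extra keys are dropped."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(jobs)

jobs = [
    {
        "job_title": "Engineer",
        "company_name": "Acme",
        "source": "example_board",
        "created_at": "2025-01-01",
        "raw_html": "<html>...</html>",  # dropped by extrasaction="ignore"
    }
]
export_jobs_csv(jobs, "jobs.csv")
```

The agent could equally satisfy the criteria with pandas or a hand-rolled writer; that freedom is the point of the outcome-oriented ticket.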

Agent Workflow

For each piece of work:

  1. Pick a Linear ticket.
  2. Move it to In Progress.
  3. Read the ticket and any relevant docs.
  4. Inspect the existing code before changing it.
  5. Implement the smallest useful change that satisfies the ticket.
  6. Add or update tests where they protect meaningful behavior.
  7. Run verification commands.
  8. Review the diff.
  9. Move the ticket to Done when implemented and verified.

This loop keeps the agent grounded. It also keeps the human reviewer oriented.

Spec-Driven Development With Agents

Spec-driven development does not mean writing huge documents before coding.

It means making intent explicit before implementation begins.

Use the right level of spec:

  • Product direction: update docs/product-spec.md
  • Phase boundaries: update docs/phases/
  • Implementation scope: write the Linear ticket clearly
  • Tiny obvious fix: go straight to code, then verify

The more ambiguous or architectural a task is, the more it benefits from a written spec.

Testing Practices

Tests should match the risk.

For early scraper work, useful tests cover:

  • parser behavior
  • spreadsheet export shape
  • dedupe behavior
  • rate-limit behavior
  • source interface behavior

Avoid testing framework behavior or speculative future architecture.

Do not add database, API, frontend, or deployment tests before those phases exist.
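One way to make rate-limit behavior testable without real sleeping is to inject the clock and sleep functions. This `RateLimiter` helper is a hypothetical sketch, not existing project code:

```python
import time

class RateLimiter:
    """Hypothetical helper enforcing a minimum gap between scraper requests."""

    def __init__(self, min_interval, clock=time.monotonic, sleep=time.sleep):
        self.min_interval = min_interval
        self.clock = clock
        self.sleep = sleep
        self.last_request = None

    def wait(self):
        now = self.clock()
        if self.last_request is not None:
            gap = now - self.last_request
            if gap < self.min_interval:
                self.sleep(self.min_interval - gap)
        self.last_request = self.clock()

def test_rate_limiter_sleeps_when_called_too_fast():
    fake_now = [0.0]
    slept = []

    def clock():
        return fake_now[0]

    def sleep(seconds):
        slept.append(seconds)
        fake_now[0] += seconds  # sleeping advances the fake clock

    limiter = RateLimiter(min_interval=1.0, clock=clock, sleep=sleep)
    limiter.wait()          # first request: no sleep needed
    fake_now[0] += 0.25     # only 0.25s pass before the next call
    limiter.wait()          # must sleep the remaining 0.75s
    assert slept == [0.75]
```

Because the clock and sleep are injected, the test runs instantly and asserts exact timing behavior.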

Code Review Practices

Before marking work done, review for:

  • Does this match the Linear ticket?
  • Did the implementation expand scope?
  • Are the tests meaningful?
  • Is the code simpler than the problem requires?
  • Did we preserve the current phase boundary?
  • Are future-phase decisions deferred unless required?

The review should catch real risks, not nitpick style.

Working With Changing Requirements

Early client projects change quickly. That is normal.

The workflow should make change cheap:

  • keep tickets high level
  • avoid premature architecture
  • validate data before building UI
  • plan future phases lightly
  • only detail the current phase
  • update specs when direction changes

For this project, that means proving scraper data quality before committing to database design, backend APIs, or UI workflows.

AGENTS.md

AGENTS.md is the lightweight operating manual for AI agents in this repo.

It should stay short and practical. Put persistent project rules there, such as:

  • response style
  • repo structure
  • current phase
  • Linear workflow
  • commands
  • important constraints

Avoid putting long product strategy in AGENTS.md; link to the product spec instead.

This project helps recruitment agencies find leads by identifying local businesses that are actively hiring and worth contacting.

Read `docs/product-spec.md` when a task touches product direction, scope, phases, or user workflow.

## Communication
- Keep responses concise and practical.
- Prefer a short summary plus commands/files changed over long explanations.
- Ask questions instead of guessing.
  
## Repository Shape
- `backend/` contains all Python code.
- `frontend/` contains the Bun/TypeScript UI.
- `docs/` contains product and phase planning docs.

## Linear Workflow

Linear project: `SR: Recruitment Intelligence`.

Agents have access to Linear MCP for viewing project status and tickets.

Use Linear as the source of truth for current work:
- Pick up tickets from Linear.
- Move a ticket to `In Progress` when starting work.
- Keep implementation scoped to that ticket unless the user asks otherwise.
- Move the ticket to `Done` once the work is implemented and verified.
- Prefer high-level tickets over many tiny tasks while the product direction is still evolving.

Why This Matters

AI coding agents are powerful, but they are most effective when they operate inside good engineering discipline.

The professional workflow is:

  • write enough spec to remove ambiguity
  • track work in Linear
  • keep implementation scoped
  • verify with tests
  • review before done
  • update the plan as reality changes

That is the difference between "AI generated some code" and "an engineer used an AI agent to ship a controlled, reviewable change."
