@KazW
Last active April 9, 2026 22:11
Modern Go Architecture

Go Service Patterns

Reference guide for building Go microservices using hexagonal architecture with dual database support, REST APIs, and MCP integration.


Project Layout

cmd/
  daemon/         -- HTTP server, systemd service
  mcp-bridge/     -- stdio-to-HTTP MCP bridge
  admin/          -- CLI admin commands (seed, migrate, export/import)
internal/
  config/         -- XDG paths, koanf config loading
  domain/         -- Core types, port interfaces, error sentinels
  infra/
    sqlite/       -- SQLite store implementation + Goose migrations
    postgres/     -- PostgreSQL store implementation + Goose migrations
    httpapi/      -- REST API handlers
    mcp/          -- MCP tool handlers
main.go           -- entry point
mise.toml         -- Go toolchain pin + build tasks

The domain/ package has zero infrastructure dependencies — stdlib only (time, encoding/json, errors, context, io). All infrastructure concerns live in infra/. This separation is the core architectural invariant.

The pkg/ directory pattern is no longer recommended by the Go community. Prefer internal/ for private packages or flat layout for small services.


Library Stack

Concern            Library                  License
CLI framework      spf13/cobra              Apache-2.0
Config             knadh/koanf              MIT
Logging            log/slog                 stdlib
HTTP mux           net/http                 stdlib
REST framework     danielgtaylor/huma/v2    MIT
SQLite             modernc.org/sqlite       BSD-3-Clause
PostgreSQL         jackc/pgx/v5             MIT
Migrations         pressly/goose/v3         MIT
Background queue   maragu.dev/goqite        MIT
MCP                mark3labs/mcp-go         MIT
UUID               google/uuid              BSD-3-Clause
CEL expressions    google/cel-go            Apache-2.0
Linter             golangci-lint            GPL-3.0 (dev tool)
Build tooling      mise                     MIT

All linked libraries are permissively licensed: no GPL, LGPL, or AGPL code is compiled into the binary. golangci-lint is GPL-3.0, but it runs only at development time and is never linked in.


Domain Layer

Types

internal/domain/types.go — entity structs, enums, and constants. No external imports beyond stdlib.

package domain

import "time"

type Status string

const (
    StatusActive   Status = "active"
    StatusInactive Status = "inactive"
)

type MyEntity struct {
    ID        string    `json:"id"`
    Name      string    `json:"name"`
    Status    Status    `json:"status"`
    CreatedAt time.Time `json:"created_at"`
    UpdatedAt time.Time `json:"updated_at"`
}

Error Sentinels

internal/domain/errors.go — domain errors as package-level variables:

var (
    ErrNotFound         = errors.New("not found")
    ErrAlreadyExists    = errors.New("already exists")
    ErrHasDependencies  = errors.New("has dependencies")
    ErrPermissionDenied = errors.New("permission denied")
)

Callers check with errors.Is(err, domain.ErrNotFound). Infra layers wrap with context: fmt.Errorf("get entity %s: %w", id, err).

Go 1.26 introduced errors.AsType[T](err) as a type-safe alternative to errors.As for extracting structured error types:

if myErr, ok := errors.AsType[*MyError](err); ok {
    // use myErr
}

Port Interfaces

As the service grows, split port definitions across multiple files by concern — Go doesn't enforce one-file-per-interface, but grouping by domain keeps things navigable:

internal/domain/
  types.go           -- entity structs, enums
  errors.go          -- error sentinels
  store.go           -- EntityStore, HealthChecker, Store interfaces
  identity.go        -- IdentityProvider interface
  chat.go            -- ChatProvider interface
  gitforge.go        -- GitForgeProvider interface

The package stays domain — consumers import it the same way regardless of which file a type lives in. Do not split domain/ into sub-packages (domain/store/, domain/identity/) as this creates circular import risks when entity types and port interfaces reference each other.

Example port interfaces:

// Small, focused interfaces — consumers accept only what they need.
type EntityStore interface {
    CreateEntity(ctx context.Context, e *MyEntity) error
    GetEntity(ctx context.Context, id string) (*MyEntity, error)
    ListEntities(ctx context.Context) ([]*MyEntity, error)
    UpdateEntity(ctx context.Context, e *MyEntity) error
    DeleteEntity(ctx context.Context, id string) error
}

type HealthChecker interface {
    Ping(ctx context.Context) error
}

// Store combines all storage interfaces for use at the composition root.
// Consumers (handlers, services) accept individual interfaces, not Store.
type Store interface {
    EntityStore
    HealthChecker
    io.Closer
}

Prefer small, role-based interfaces. Handlers that only check health accept HealthChecker, not Store. This follows Go's interface segregation principle and makes testing easier — stubs only implement the methods under test.

Port interfaces live in domain/. Implementations live in infra/.


Service Layer

For simple CRUD, handlers may call domain ports directly:

handler → EntityStore

When business logic emerges (validation, authorization, cross-entity coordination), introduce a service type that mediates between adapters and ports:

handler → Service → EntityStore

type EntityService struct {
    store  EntityStore
    auth   AuthProvider  // another port
}

func (s *EntityService) Create(ctx context.Context, e *MyEntity) error {
    // business logic here — validation, authorization, side effects
    return s.store.CreateEntity(ctx, e)
}

The service lives in internal/service/ or internal/domain/ (either is acceptable). It depends only on domain ports, never on infrastructure. Handlers call the service; the service calls ports. This keeps business logic testable without infrastructure.


Context Propagation

Pass context.Context as the first parameter through the entire call chain — from HTTP handler to service logic to store. This enables:

  • Request-scoped cancellation (client disconnects, timeouts)
  • Trace propagation (OpenTelemetry spans)
  • Request-scoped values (auth identity, request ID)

Every port interface method takes ctx context.Context as its first parameter. Handlers extract the context from the request:

ctx := r.Context()

Never store contexts in structs. Never use context.Background() in request-handling code — always propagate the request context.


CLI and Config

Root Command

var Version = "dev" // set via -ldflags at build time

func newRootCmd() *cobra.Command {
    var cfgFile string
    cmd := &cobra.Command{
        Use:     "myservice",
        Version: Version,
        PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
            config.InitDirs()
            return config.LoadConfig(cfgFile)
        },
    }
    cmd.PersistentFlags().StringVar(&cfgFile, "config", "", "Path to config file (default: XDG config dir)")
    cmd.AddCommand(daemon.NewCmd(), mcpbridge.NewCmd(), admin.NewCmd())
    return cmd
}

func Execute() {
    if err := newRootCmd().Execute(); err != nil {
        os.Exit(1)
    }
}

Config

Koanf loads from: config file, then env vars (later sources win).

import (
    "fmt"
    "strings"

    "github.com/knadh/koanf/v2"
    "github.com/knadh/koanf/parsers/yaml"
    "github.com/knadh/koanf/providers/env"
    "github.com/knadh/koanf/providers/file"
)

var k = koanf.New(".")

func LoadConfig(configPath string) error {
    // Load from YAML file (a missing file is ok)
    _ = k.Load(file.Provider(configPath), yaml.Parser())
    // Load from environment — strip prefix, lowercase, replace _ with .
    if err := k.Load(env.Provider("MYSERVICE_", ".", func(s string) string {
        return strings.ReplaceAll(
            strings.ToLower(strings.TrimPrefix(s, "MYSERVICE_")),
            "_", ".",
        )
    }), nil); err != nil {
        return fmt.Errorf("load env config: %w", err)
    }
    return nil
}

Access values via k.String("bind"), k.Int("port"), etc. With the transform above, the env var MYSERVICE_SERVER_PORT maps to the key server.port.

Note: koanf instances are not goroutine-safe for concurrent Load/Get. The package-level instance is safe when loaded once during startup and read during request handling (single-writer, multiple-reader). If hot reload is needed, wrap access with sync.RWMutex.
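The RWMutex wrapper can be sketched with a plain map standing in for the koanf instance (illustrative only, not the koanf API):

```go
package main

import (
    "fmt"
    "sync"
)

// hotConfig guards reads and reloads with an RWMutex. In the real service
// the values field would hold a *koanf.Koanf instead of a map.
type hotConfig struct {
    mu     sync.RWMutex
    values map[string]string
}

func (c *hotConfig) Get(key string) string {
    c.mu.RLock()
    defer c.mu.RUnlock()
    return c.values[key]
}

// Reload swaps the whole value set atomically under the write lock.
func (c *hotConfig) Reload(next map[string]string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.values = next
}

func main() {
    cfg := &hotConfig{values: map[string]string{"port": "8080"}}
    fmt.Println(cfg.Get("port")) // 8080
    cfg.Reload(map[string]string{"port": "9090"})
    fmt.Println(cfg.Get("port")) // 9090
}
```

Swapping the whole instance under the write lock keeps each read consistent; readers never see a half-applied reload.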

XDG-compliant paths: $XDG_CONFIG_HOME/<service>/config.yaml for config, $XDG_STATE_HOME/<service>/ for runtime state (logs, DB).


Database Layer

SQLite Store

//go:embed migrations/*.sql
var migrations embed.FS

func Open(dsn string) (*Store, error) {
    db, err := sql.Open("sqlite", dsn)
    if err != nil {
        return nil, err
    }
    db.SetMaxOpenConns(1) // SQLite requires single-writer serialization
    if _, err := db.Exec("PRAGMA journal_mode=WAL"); err != nil {
        return nil, fmt.Errorf("set WAL mode: %w", err)
    }
    if _, err := db.Exec("PRAGMA foreign_keys=ON"); err != nil {
        return nil, fmt.Errorf("enable foreign keys: %w", err)
    }
    if _, err := db.Exec("PRAGMA busy_timeout=5000"); err != nil {
        return nil, fmt.Errorf("set busy timeout: %w", err)
    }
    goose.SetBaseFS(migrations) // SetBaseFS has no error return
    if err := goose.SetDialect("sqlite3"); err != nil {
        return nil, fmt.Errorf("set dialect: %w", err)
    }
    if err := goose.Up(db, "migrations"); err != nil {
        return nil, fmt.Errorf("run migrations: %w", err)
    }
    // Note: in multi-instance deployments, run migrations as a separate init job to avoid races.
    return &Store{db: db}, nil
}
  • Throwaway store for tests: Open("") — SQLite treats an empty filename as a private temporary database; use ":memory:" for a purely in-memory one
  • Migrations are numbered SQL files: 001_init.sql, 002_feature.sql
  • -- +goose Up / -- +goose Down markers
  • Parameterized ? placeholders — never string concatenation

PostgreSQL Store

Same port interface, different implementation. Uses a pgx/v5 connection pool, or pgx through the database/sql adapter (github.com/jackc/pgx/v5/stdlib) when goose and goqite need a *sql.DB. Migrations use PostgreSQL syntax where needed.

Configure the connection pool for production use (database/sql knobs shown; pgxpool exposes equivalent settings on its config):

db.SetMaxOpenConns(25)
db.SetMaxIdleConns(5)
db.SetConnMaxLifetime(5 * time.Minute)

Dual Backend

A top-level dispatcher selects based on DSN:

func Open(dsn string) (domain.Store, error) {
    if strings.HasPrefix(dsn, "postgres") {
        return postgres.Open(dsn)
    }
    return sqlite.Open(dsn)
}

Background Queue

Use maragu.dev/goqite for persistent, database-backed background task queues. It supports both SQLite and PostgreSQL via a SQLFlavor parameter, has no non-test dependencies, and shares the same *sql.DB as the rest of the service — so the queue automatically follows whichever backend the service is deployed with.

Why goqite over alternatives

  • Dual-backend: SQLFlavor switches between SQLite and PostgreSQL with one line; the queue schema ships as two SQL files that slot directly into goose migrations.
  • No non-test dependencies: bring your own *sql.DB — no CGO, no second driver to manage.
  • Transactional enqueue: pass a *sql.Tx to q.SendTx to enqueue a task in the same transaction as a database write. Either both commit or neither does — the correct primitive for the outbox pattern.
  • Visibility timeout retry: failed messages (not deleted after processing) are automatically redelivered after a configurable timeout, up to a MaxReceives limit. No explicit retry loop needed in application code.
  • Graceful shutdown: stop the job runner by cancelling the context passed to r.Start(ctx).

Setup

Install the schema via a goose migration:

-- +goose Up
-- contents of goqite's schema_sqlite.sql or schema_postgres.sql
-- +goose Down
DROP TABLE IF EXISTS goqite;

Initialise the queue at startup and pass it the same *sql.DB used by the store:

q := goqite.New(goqite.NewOpts{
    DB:         db,
    Name:       "callbacks",
    MaxReceive: 3,
    Timeout:    10 * time.Second, // redelivery interval on failure
    // SQLFlavor: goqite.SQLFlavorPostgreSQL, // uncomment for Postgres
})

Job runner

Use the jobs sub-package for named, typed background jobs:

r := jobs.NewRunner(jobs.NewRunnerOpts{
    Limit:        5,
    Log:          slog.Default(),
    PollInterval: 500 * time.Millisecond,
    Queue:        q,
})

r.Register("servicenow_callback", func(ctx context.Context, payload []byte) error {
    // unmarshal payload and perform the callback
    return nil
})

// Enqueue — optionally inside a transaction
if err := jobs.Create(ctx, q, "servicenow_callback", payload); err != nil {
    return fmt.Errorf("enqueue callback: %w", err)
}

// Start the runner; cancel ctx to stop it
r.Start(ctx)

Graceful shutdown with the daemon

Stop the job runner before closing the store. The runner drains in-flight jobs before returning:

shutdownCtx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
defer cancel()
// Shutdown HTTP server first, then stop the queue runner
srv.Shutdown(shutdownCtx)
runnerCancel() // cancels the context passed to r.Start — runner finishes in-flight jobs
store.Close()

HTTP Server

Server Setup

type ServerConfig struct {
    Bind  string
    Port  int
    Store domain.Store
}

func NewServer(cfg ServerConfig) *http.Server {
    mux := http.NewServeMux()
    mux.HandleFunc("GET /v1/health", handleHealth(cfg.Store))

    api := humago.New(mux, huma.DefaultConfig("My Service", Version))
    registerEntityRoutes(api, cfg.Store)

    mcpHandler := newMCPHandler(cfg.Store)
    mux.Handle("/mcp", mcpHandler)

    handler := withMiddleware(mux)

    return &http.Server{
        Addr:              fmt.Sprintf("%s:%d", cfg.Bind, cfg.Port),
        Handler:           handler,
        ReadTimeout:       15 * time.Second,
        WriteTimeout:      15 * time.Second,
        IdleTimeout:       60 * time.Second,
        ReadHeaderTimeout: 5 * time.Second,
    }
}

For endpoints that accept request bodies, wrap with http.MaxBytesReader to prevent memory exhaustion from unbounded POST bodies:

r.Body = http.MaxBytesReader(w, r.Body, 1<<20) // 1 MB limit

REST Route Registration

One register*Routes function per resource type:

func registerEntityRoutes(api huma.API, store domain.Store) {
    huma.Register(api, huma.Operation{
        Method:      http.MethodGet,
        Path:        "/v1/entities",
        OperationID: "list-entities",
    }, func(ctx context.Context, input *struct{}) (*EntityListOutput, error) {
        list, err := store.ListEntities(ctx)
        if err != nil {
            return nil, mapDomainErr(err)
        }
        return &EntityListOutput{Body: list}, nil
    })
}

Error Mapping

Domain errors to HTTP status codes:

func mapDomainErr(err error) error {
    switch {
    case errors.Is(err, domain.ErrNotFound):
        return huma.Error404NotFound("not found")
    case errors.Is(err, domain.ErrAlreadyExists):
        return huma.Error409Conflict("already exists")
    case errors.Is(err, domain.ErrPermissionDenied):
        return huma.Error403Forbidden("forbidden")
    default:
        slog.Error("unhandled domain error", "error", err)
        return huma.Error500InternalServerError("internal error")
    }
}

Middleware

Wrap the mux before passing it to http.Server:

func withMiddleware(next http.Handler) http.Handler {
    return withRequestLogging(withPanicRecovery(next))
}

type statusRecorder struct {
    http.ResponseWriter
    status  int
    written bool
}

func (r *statusRecorder) WriteHeader(code int) {
    r.status = code
    r.written = true
    r.ResponseWriter.WriteHeader(code)
}

func (r *statusRecorder) Write(b []byte) (int, error) {
    r.written = true
    return r.ResponseWriter.Write(b)
}

func (r *statusRecorder) Unwrap() http.ResponseWriter {
    return r.ResponseWriter
}

func withRequestLogging(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()
        rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
        next.ServeHTTP(rec, r)
        slog.Info("request",
            "method", r.Method,
            "path", r.URL.Path,
            "status", rec.status,
            "duration", time.Since(start),
        )
    })
}

func withPanicRecovery(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        defer func() {
            if rec := recover(); rec != nil {
                slog.Error("panic recovered", "error", rec)
                // Only write error if response hasn't started
                if rw, ok := w.(*statusRecorder); ok && !rw.written {
                    http.Error(w, "internal server error", http.StatusInternalServerError)
                }
            }
        }()
        next.ServeHTTP(w, r)
    })
}

MCP Integration

Server Side

Mount a StreamableHTTPServer at /mcp:

func newMCPHandler(store domain.Store) http.Handler {
    s := server.NewMCPServer("myservice", Version)
    s.AddTool(mcp.NewTool("list_entities",
        mcp.WithDescription("List all entities"),
    ), handleListEntities(store))
    return server.NewStreamableHTTPServer(s)
}

Note: StreamableHTTPServer handles its own internal path routing. When mounting on an external mux, use http.StripPrefix to align paths:

mux.Handle("/mcp/", http.StripPrefix("/mcp", mcpHandler))

Alternatively, give the MCP server its own http.Server on a separate port. Mounting directly at /mcp without StripPrefix can cause conflicts with SSE stream handling on GET requests.

Bridge Subcommand

cmd/mcpbridge/ — reads JSON-RPC from stdin, forwards to http://<host>:<port>/mcp, relays responses to stdout. Passes auth token on every request.


Health Endpoint

func handleHealth(store domain.HealthChecker) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        err := store.Ping(r.Context())
        status := "healthy"
        httpCode := 200
        if err != nil {
            status = "unhealthy"
            httpCode = 503
        }
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(httpCode)
        //nolint:errcheck // response write errors are unactionable after WriteHeader
        json.NewEncoder(w).Encode(map[string]string{"status": status})
    }
}

Observability

Add these as the service matures:

  • Structured logging: log/slog (stdlib) — JSON output via slog.NewJSONHandler; attach request IDs via context.
  • Metrics: prometheus/client_golang — expose /metrics for Prometheus scraping.
  • Tracing: go.opentelemetry.io/otel — instrument handlers and store calls with spans.

Request logging is covered by the middleware layer above. Metrics and tracing are recommended additions for production services.


Anti-Patterns

Common mistakes that violate the hexagonal model:

  • Importing infra from domain. The domain package must never import from internal/infra/. Dependency direction is always inward.
  • Passing *sql.DB through the domain. Use port interfaces instead. The domain should not know what database is behind it.
  • Business logic in handlers. If a handler has an if statement that isn't about HTTP concerns (parsing, status codes, response format), the logic belongs in a service or domain function.
  • Business logic in the store. The store implements CRUD. Validation, authorization, and coordination belong in the service layer.
  • Growing a port interface. When a port exceeds 5-7 methods, split it into focused interfaces. A port with 15 methods is a code smell.

Cross-Cutting Concerns

Auth, logging, tracing, and rate limiting are infrastructure concerns that must not leak into domain or service code. The strategy:

  • Middleware handles HTTP-level concerns (request logging, panic recovery, auth token extraction, rate limiting). These wrap the mux.
  • Context propagation carries request-scoped values (auth identity, trace span, request ID) from middleware into handlers and services.
  • Domain ports remain pure — they accept context.Context but never inspect it for auth or tracing. That inspection happens in middleware or the handler before calling the service.

This keeps the domain testable without HTTP infrastructure.


Testing

  • Standard library testing — no testify, no mock frameworks
  • Table-driven tests with t.Run subtests
  • Fresh throwaway SQLite database for store tests: sqlite.Open("")
  • net/http/httptest for handler tests
  • Stub implementations of port interfaces for service logic tests
  • Test files alongside source (_test.go suffix)
  • E2E tests in a separate tests/ directory with its own go.mod

ID Generation

IDs are generated at the infra layer, never in domain. Domain uses plain string for ID fields.

func NewID(prefix string) string {
    return fmt.Sprintf("%s_%s", prefix, uuid.New().String())
}

Daemon Subcommand

func NewCmd() *cobra.Command {
    cmd := &cobra.Command{
        Use:   "daemon",
        Short: "Start the HTTP server",
        RunE:  run,
    }
    cmd.Flags().String("bind", "127.0.0.1", "Bind address")
    cmd.Flags().Int("port", 8080, "Listen port")
    cmd.Flags().String("db", "", "Database path")
    return cmd
}

func run(cmd *cobra.Command, args []string) error {
    store, err := infra.Open(k.String("db"))
    if err != nil {
        return err
    }
    defer store.Close()

    srv := daemon.NewServer(daemon.ServerConfig{
        Bind:  k.String("bind"),
        Port:  k.Int("port"),
        Store: store,
    })

    // graceful shutdown
    ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
    defer stop()
    errCh := make(chan error, 1)
    go func() {
        if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
            errCh <- err
        }
    }()
    select {
    case <-ctx.Done():
    case err := <-errCh:
        return err
    }
    // Note: if ListenAndServe returns a non-ErrServerClosed error after
    // ctx.Done() has already been selected, that error is silently dropped.
    // The buffered channel prevents goroutine blocking.
    shutdownCtx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
    defer cancel()
    return srv.Shutdown(shutdownCtx)
}

If the service uses MCP with SSE streaming, the StreamableHTTPServer must be shut down separately — http.Server.Shutdown drains HTTP/1.1 requests but does not signal long-lived SSE connections to close:

mcpHandler.Shutdown(shutdownCtx) // signal SSE clients to reconnect

Build Tooling

[tools]
go = "1.26.0"

[tasks.build]
description = "Build the binary"
run = "go build -o build/myservice ."

[tasks.test]
description = "Run unit tests"
run = "go test ./..."

[tasks.lint]
description = "Run linter"
run = "golangci-lint run ./..."

[tasks.clean]
description = "Remove build artifacts"
run = "rm -rf build/"

Dockerfile

Multi-stage build using the official Go image and a distroless runtime:

# Keep Go version in sync with mise.toml `go` pin.
FROM golang:1.26-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -trimpath -o build/myservice .

FROM gcr.io/distroless/static-debian12
COPY --from=build /src/build/myservice /usr/local/bin/myservice
# Run as non-root for least-privilege. Distroless images include the
# nonroot user at UID 65532.
USER nonroot:nonroot
ENTRYPOINT ["/usr/local/bin/myservice"]
CMD ["daemon"]

modernc.org/sqlite is CGO-free pure Go, so distroless/static works without a libc. If a CGO dependency is required, switch to gcr.io/distroless/base-debian12.


Emerging Patterns

Patterns and features gaining traction in the Go ecosystem (2025-2026):

json/v2 (experimental since Go 1.25)
Case-sensitive by default, dramatically faster unmarshaling, cleaner semantics. Enable with GOEXPERIMENT=jsonv2. May become stable in a future release. Worth evaluating for new API layers.

eBPF-based OpenTelemetry auto-instrumentation (beta)
Zero-code-change tracing and metrics via eBPF hooks. Eliminates manual otelhttp wrapping. Still beta but rapidly maturing — monitor for GA.

Bounded context nesting
For services with multiple domains, the community pattern nests per context: internal/chat/domain/, internal/flow/domain/. Each context gets its own domain, ports, and adapters. Not needed for single-context services but worth planning for if the service grows.

Vertical slice organization
Organizing by feature rather than by layer (all files for "create order" in one directory). Complements hexagonal — use vertical slices for early stages, transition to full hexagonal as complexity grows. More common in .NET but gaining Go mindshare.

errors.AsType[T] (Go 1.26)
Type-safe generic alternative to errors.As. Documented in the Error Sentinels section above. Reduces boilerplate for structured error types.

