Article: https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/
This is good. Sweary, yes, but not empty-sweary. He’s actually making a real argument, not just doing anti-AI theater.
My take:
He’s basically saying:
- agents are fast enough to remove the natural human bottleneck
- that bottleneck used to limit how much bad structure could accumulate per day
- agents don’t really learn from mistakes unless you build heavy scaffolding around them
- so small mistakes compound faster than teams notice
- by the time pain is obvious, both the code and the tests may be untrustworthy
That’s a strong argument. And honestly, it matches a lot of what we’ve been seeing in the wild.
That’s the bit I’d keep.
Humans were never perfect, but they were rate-limiters. Agents can dump an irresponsible amount of “almost fine” code into a codebase before anyone really metabolizes what changed. That’s not just velocity — it’s a change in failure mode.
Also true: agents work with partial context and weak recall. Even when they’re competent locally, they tend to:
- duplicate existing patterns
- invent parallel abstractions
- miss prior decisions
- optimize for task completion over system coherence
That is a great recipe for a haunted codebase.
Especially when agents are writing both:
- the implementation
- and the test that “proves” it works
You can absolutely end up with a self-consistent lie.
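To make that concrete, here’s a toy illustration (hypothetical code, not from the article): the implementation truncates where it should round, and the agent-written test “proves” it by asserting the buggy output back at it:

```python
# Hypothetical "self-consistent lie": the implementation is wrong, and the
# test was written by running the buggy code and asserting its own output.

def price_with_tax(price_cents: int, tax_rate: float) -> int:
    # BUG: int() truncates instead of rounding, so 999 * 1.2 -> 1198, not 1199
    return int(price_cents * (1 + tax_rate))

def test_price_with_tax():
    # Green test, wrong math: the expected value was copied from the buggy
    # implementation's output, so this test can never catch the bug.
    assert price_with_tax(999, 0.2) == 1198
```

CI stays green; only a human who knows the rounding rule actually catches it.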
Where I’d push back:
1. The opener is good, but too broad. That’s vibes, not proof. He admits it’s anecdotal, which helps, but it still leans hard on industry mood.
2. He puts “agent swarm dark factory” and “normal agent-assisted development” a little too close together
Those are not the same thing.
There’s a huge gap between:
- “I let an agent build random architecture in parallel while I cosplay CTO” and
- “I use agents aggressively inside a disciplined human-controlled loop”
He does get to that distinction later, but the early framing is a bit of an all-gas-no-brakes polemic.
3. On agents not learning from mistakes: he’s right operationally. In actual workflows they don’t accumulate durable judgment the way humans do unless you add memory, policy, checks, conventions, and review structure. But the essay treats that as nearly binary, when really it’s a question of how much scaffolding you bolt on.
This paragraph cluster is the heart of it:
- slow down
- keep humans in architecture/API/design
- use agents for scoped, evaluable, non-critical work
- set limits on how much generated code you can responsibly review (sketched below)
- keep the human as final quality gate
That’s the useful doctrine. The rest is mostly colorful scaffolding around that.
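As one concrete version of the “limits” bullet, here’s a minimal sketch (my construction, not the article’s) of a CI gate that fails a change once its diff exceeds a reviewable budget. The 400-line budget, the merge-base range, and the script itself are all assumptions to tune, not a prescription:

```python
#!/usr/bin/env python3
"""Sketch of a review-budget gate: fail CI when a change is bigger than
a human can responsibly review. The budget below is a made-up number."""
import subprocess
import sys

MAX_CHANGED_LINES = 400  # assumed budget; tune per team and per risk level

def changed_lines(base: str = "origin/main") -> int:
    """Sum added + deleted lines against the merge base with `base`."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":  # binary files show "-" for both counts; skip them
            total += int(added) + int(deleted)
    return total

if __name__ == "__main__":
    n = changed_lines()
    if n > MAX_CHANGED_LINES:
        sys.exit(f"Diff is {n} lines; review budget is {MAX_CHANGED_LINES}. Split it up.")
    print(f"Diff is {n} lines; within budget.")
```

The point isn’t the number; it’s that the limit is enforced by a machine instead of the reviewer’s optimism.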
It also maps onto our own work very directly, honestly.
A lot of what we’ve been building lately is basically a practical answer to his critique:
- explicit state machines
- issue/PR linkage
- step-level idempotence
- bounded transitions
- actual CI gates
- human merge as the final authority
- avoiding marker-comment fantasies as proof of real state
That’s not anti-agent. It’s anti-slop.
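For a sense of scale, the “explicit state machines / bounded transitions / step-level idempotence” bullets can be as small as this sketch (invented state names, not our actual pipeline):

```python
# Sketch of a bounded, idempotent task state machine. The states and the
# transition map are illustrative inventions, not our real pipeline's.
from enum import Enum

class State(Enum):
    QUEUED = "queued"
    IN_REVIEW = "in_review"
    MERGED = "merged"
    REJECTED = "rejected"

# Every legal move is enumerated; anything off this map raises, loudly.
ALLOWED = {
    (State.QUEUED, State.IN_REVIEW),
    (State.IN_REVIEW, State.MERGED),    # human merge is the final authority
    (State.IN_REVIEW, State.REJECTED),
    (State.REJECTED, State.QUEUED),     # rework re-enters the queue
}

def transition(current: State, target: State) -> State:
    if current == target:
        return current  # idempotent: replaying a step is a no-op
    if (current, target) not in ALLOWED:
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target
```

The value of enumerating ALLOWED is that an agent can’t invent a transition; it can only request one, and everything off the map fails fast instead of silently becoming state.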
So I’d summarize his view as:
Agents are great servants, dangerous architects, and terrible rulers.
Which feels about right.
Worth reading. Worth agreeing with about 75–85%.
It’s strongest when it talks about:
- compounding small mistakes
- local search / low recall
- the danger of removing human bottlenecks
It’s weaker when it drifts into generalized industry doom-mood and treats all agent use as living on the same slope.
If you want, I can also do one of these:
- give you a tight summary
- pull out the best quotes / strongest claims
- argue the countercase and where he’s too pessimistic
- relate it specifically to Koan / our pipeline / Bracket