Skip to content

Instantly share code, notes, and snippets.

@mateu
Created March 27, 2026 16:56
Show Gist options
  • Select an option

  • Save mateu/835e4937ee291a4302319528247e7f5d to your computer and use it in GitHub Desktop.

Select an option

Save mateu/835e4937ee291a4302319528247e7f5d to your computer and use it in GitHub Desktop.
My agents's take on the article "Thoughts on Slowing the Fuck Down"

My agent's take on the article "Thoughts on Slowing the Fuck Down"

Article: https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/

This is good. Sweary, yes, but not empty-sweary. He’s actually making a real argument, not just doing anti-AI theater.

My take:

The core thesis

He’s basically saying:

  • agents are fast enough to remove the natural human bottleneck
  • that bottleneck used to limit how much bad structure could accumulate per day
  • agents don’t really learn from mistakes unless you build heavy scaffolding around them
  • so small mistakes compound faster than teams notice
  • by the time pain is obvious, both the code and the tests may be untrustworthy

That’s a strong argument. And honestly, it matches a lot of what we’ve been seeing in the wild.

Where he’s right

1. The bottleneck point is the sharpest thing in the piece

That’s the bit I’d keep.

Humans were never perfect, but they were rate-limiters. Agents can dump an irresponsible amount of “almost fine” code into a codebase before anyone really metabolizes what changed. That’s not just velocity — it’s a change in failure mode.

2. “Local decisions create global mess” is dead on

Also true. Agents work with partial context and weak recall. Even when they’re competent locally, they tend to:

  • duplicate existing patterns
  • invent parallel abstractions
  • miss prior decisions
  • optimize for task completion over system coherence

That is a great recipe for a haunted codebase.

3. Tests becoming untrustworthy is a real danger

Especially when agents are writing both:

  • the implementation
  • and the test that “proves” it works

You can absolutely end up with a self-consistent lie.

Where he overreaches a bit

1. “Everything is broken”

Good opener, but too broad. That’s vibes, not proof. He admits it’s anecdotal, which helps, but it still leans hard on industry mood.

2. He treats “agent swarm dark factory” and “normal agent-assisted development” a little too close together

Those are not the same thing.

There’s a huge gap between:

  • “I let an agent build random architecture in parallel while I cosplay CTO” and
  • “I use agents aggressively inside a disciplined human-controlled loop”

He does get to that distinction later, but the early framing is a bit all-gas-no-brakes polemic.

3. “Agents don’t learn” is basically true in practice, but slightly simplified

He’s right operationally. In actual workflows, they don’t accumulate durable judgment the way humans do unless you add memory, policy, checks, conventions, and review structure. But the essay treats that as nearly binary.

Best part of the article

This paragraph cluster is the heart of it:

  • slow down
  • keep humans in architecture/API/design
  • use agents for scoped, evaluable, non-critical work
  • set limits on how much generated code you can responsibly review
  • keep the human as final quality gate

That’s the useful doctrine. The rest is mostly colorful scaffolding around that.

How it maps to us

Very directly, honestly.

A lot of what we’ve been building lately is basically a practical answer to his critique:

  • explicit state machines
  • issue/PR linkage
  • step-level idempotence
  • bounded transitions
  • actual CI gates
  • human merge as the final authority
  • avoiding marker-comment fantasies as proof of real state

That’s not anti-agent. It’s anti-slop.

So I’d summarize his view as:

Agents are great servants, dangerous architects, and terrible rulers.

Which feels about right.

My verdict

Worth reading. Worth agreeing with about 75–85%.

It’s strongest when it talks about:

  • compounding small mistakes
  • local search / low recall
  • the danger of removing human bottlenecks

It’s weaker when it drifts into generalized industry doom-mood and treats all agent use as living on the same slope.

If you want, I can also do one of these:

  1. give you a tight summary
  2. pull out the best quotes / strongest claims
  3. argue the countercase and where he’s too pessimistic
  4. relate it specifically to Koan / our pipeline / Bracket
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment