Conditional up front: this essay is about Go vs Rust vs Python for AI-assisted development with high machine-write percentage. It is not about Rust-for-humans, where Rust may well be the better answer. It is not about Python-with-strict-pyright-in-CI, where Python’s effective layers approach Go’s. The argument is specific to one collaboration mode and the verdict applies to that mode.
I built a website last night. A full blog with three-domain routing, animated video covers, an audio player with playlists, dark mode, RSS feeds, social cards, a sticky sidebar with a lightbox. Seven commits. Zero test failures. One binary. The site you’re reading this on.
The AI wrote most of the code. I wrote about twenty words.
Twenty words shipped a website because between the AI’s output and production were five filters, each catching what the previous one missed. That framework is the subject of The Five Layers and I’m going to assume it from here. The short version: Layer 1 is the compiler, Layer 2 is the type system, Layer 3 is explicit errors, Layer 4 is enforced simplicity, Layer 5 is the human. Cheap layers should catch most things. The expensive layer should only catch what cheap layers couldn’t.
This essay is about which language stack carries those layers honestly when the AI writes most of the code, and why that matters to a question most language debates skip.
The Velocity Argument
Before the language comparison, the thing that actually matters.
Gall’s Law Got Cheap notes that the customer is the working-ness oracle — you cannot tell from inside whether a system works; you have to put it in front of someone who has to use it. AI-assisted engineering changed the math: building the simple version is now a one-week spike. The expensive thing is no longer building. The expensive thing is the loop between “the customer touched it” and “the next version exists.”
Anything that delays that loop is paid for by the customer’s working-ness oracle being silent for longer. Velocity is not vanity here. Velocity is the customer’s chance to teach you what the system actually is.
A language stack that catches problems quickly preserves the loop. A language stack that catches problems slowly — or asks the human to satisfy a lot of language-specific demands before the system can be reasoned about — collapses the loop. That is the actual frame for this comparison: not “what is the best language” but “what filter stack preserves the customer-feedback loop when the writer is a machine.”
Python: The Median Is The Ceiling In Practice
Python can have all five layers. Strict pyright in CI, ruff with everything enabled, exhaustive runtime contracts on boundaries — and Layers 1 and 2 effectively exist.
This is not the median Python codebase.
The median Python codebase has type hints on twenty percent of new code, Any propagating through the rest, optional mypy that nobody runs because the CI is slow, and exception handling by convention. The AI writes the median because the AI was trained on the median. When the AI ships four thousand lines of median Python, every line runs. Whether every line is correct is a question that gets answered later, by the customer, in production, somewhere between dictionary-key typo and None propagating through three function calls.
The honest claim is: Python’s ceiling is fine. Python’s default is bad for AI. AI ships at the default. The discipline required to keep strict-typed Python honest is exactly the discipline a vibing developer is trying to delegate to the toolchain. Python doesn’t delegate — it offers. AI doesn’t take the offer.
If your team runs strict pyright on every file in CI and rejects anything that doesn’t pass, this section doesn’t apply to you. Acknowledged. You are also probably the only one in your organisation who does, and you know it.
Rust: Layers That Sometimes Catch The Wrong Thing
Rust has all five layers — more than five, depending on how you count. The borrow checker, the lifetime system, the Send/Sync trait bounds, the async runtime contract. Each layer is rigorous. Each catches real bugs. The compiler says no, and when it says no, it is right.
This is not a complaint about Rust catching things. It is a question about what it catches.
When the AI writes Rust, the borrow checker frequently fights the generated code. The AI didn’t intend to clone, the borrow checker says clone, the human spends Layer 5 budget — taste, judgement, architecture — explaining lifetime semantics rather than reasoning about the system. The catch is correct. The cost is that the human’s expensive layer is being spent on language work — satisfying constraints the language imposes for very good reasons of its own — instead of on system work.
The strongest sub-argument is async.
The Async Tax
Go has goroutines. You write go func() and it works. There is one kind. They compose with everything in the standard library. Last night my single binary ran HTTP handlers, NATS consumers, filesystem watchers, periodic git push, and SSE streams — all goroutines, all unceremonious, all interoperating without thought.
Rust has coloured functions. You pick a runtime — tokio is the de facto default, but async-std exists, and smol exists, and they are not interchangeable. Then every async crate you depend on must be runtime-compatible. If a crate underneath you uses a different runtime internally, you get a compile-time mismatch or a runtime hang depending on the day. The AI does not know which runtime your codebase uses. The AI does not know which runtime each crate uses. The AI cannot disambiguate without you.
This is a Layer 5 cost the language requires. The human spends taste budget on runtime selection rather than on architecture. The customer’s working-ness oracle waits.
This is the irreducible Rust cost for this collaboration mode. It is not a skill issue. It is not solved by better AI. It is in the language.
What I Will Not Claim
The earlier version of this essay claimed cargo update generally breaks half your transitive dependencies. The ecosystem has been more stable than that — crates largely respect semver, and edition transitions have been smooth — and people who ship Rust regularly know this. I retract the generalisation.
The narrower and more honest claim: the Rust async ecosystem in particular has historically required more attention from the human than Go’s standard library has, because the runtime split forces choices that the language doesn’t make for you. That is a softer claim and the one that actually holds.
Go: The Right Filter Stack For This Mode
Last night I added a field to a struct. SiteLinks got a Riclib field and a FeedRiclib field. Ran go build. The compiler showed me every call site that needed updating. Not some — every one. I fixed them. Built again. Zero errors.
In Python, the struct is a dictionary. Adding a field changes nothing. The missing key surfaces at runtime, on the code path the tests didn’t cover, somewhere I was not.
In Rust, the compiler catches the call sites and the lifetime of the new string reference, and the borrow across a thread boundary, and the Send bound the new field violates. Four things to attend to. One was mine. Three were language work — satisfying constraints the language imposes for reasons of its own.
In Go, the compiler caught my problem and only my problem and got out of the way. That is what “catches what matters, not what it invented” looks like for one trivial change.
The deploy story is the same shape. Three lines of bash:
```sh
GOOS=linux GOARCH=amd64 go build -o lg-linux .
scp lg-linux server:/home/lifelog/bin/lg
ssh server "systemctl restart lifelog"
```
One binary. No runtime to install. No virtualenv to activate. No container required. Three domains served from one process. The customer-feedback loop runs on the order of minutes, not the order of build pipelines. Layer 4 — one way to do it — extends past the language into the deployment.
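For completeness, the systemctl restart lifelog line implies a unit file the essay doesn’t show. A minimal sketch — the unit name, user, and binary path are assumptions, inferred only from the scp destination above:

```ini
# /etc/systemd/system/lifelog.service — hypothetical unit, all paths assumed
[Unit]
Description=lifelog site (single Go binary)
After=network.target

[Service]
User=lifelog
ExecStart=/home/lifelog/bin/lg
Restart=on-failure

[Install]
WantedBy=multi-user.target
```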
What Changes If The Collaboration Mode Changes
If a human is writing most of the code, Rust’s extra layers may be exactly right. The taste budget can absorb lifetime annotations because the human chose to write them. The borrow checker becomes a tutor, not an obstacle. Rust-for-humans is a different essay, and people who write it have my respect — I am not in that mode and I’m not arguing against the mode I’m not in.
If a small team is writing strict-typed Python with disciplined CI, Python’s effective layers approach Go’s. The floor exists, just maintained by humans rather than enforced by the toolchain. That stack works in practice for the people who keep it strict. It does not work in practice for the median team — and AI generates for the median.
The conditional matters. The verdict here applies to the collaboration mode I am in.
The Closing
Go is not the best language. Go is the best filter stack for AI-assisted development with a tight customer-feedback loop. The argument is specific. The conclusion is conditional. The five layers exist, the cheap ones do their work, and what reaches Layer 5 is mine.
Last night, what reached Layer 5 was twenty words. The compiler caught the structural errors. The type system caught the interface mismatches. The explicit error handling ensured nothing failed silently. The enforced simplicity meant I could read the AI’s code at a glance because Go code only looks one way. The site shipped at 2 AM. It is still running.
The compiler is the floor. The human is the taste. The binary is the proof.
There is an encyclopedia where software principles get the roast they deserve, and a mythology where these ideas have been arguing with each other for a hundred and fifty episodes. The five-layer framework is the subject of its own essay, The Five Layers — the language-agnostic version. This essay is the framework applied to one specific stack and one specific collaboration mode.
See also:
- The Five Layers — The framework, language-agnostic. The home of the layered-filter argument.
- Gall’s Law Got Cheap — The velocity argument: why the customer-feedback loop is the scarce resource
- The Stones in the River — The collaboration mode: human places stones, machine leaps between them
- Vibe Engineering — Steering the machine with five-word redirections, which is what Layer 5 actually does
- Why Every AI-Assisted Codebase Will Collapse (Unless Someone Loves It) — The other side of the equation: pruning is the human’s other expensive job
- Boring Technology — Layer 4 generalised: enforced simplicity as a civic virtue
- Vibe In Go — The original Yagnipedia entry on this argument, predating the framework