Many organizations start with a simple rule to protect quality: every change must be approved by someone else before it goes live – a rule sometimes enforced by the version control system itself.
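For example, on GitHub that rule often lives in branch protection. A minimal sketch of turning it on through GitHub’s REST branch-protection endpoint might look like this (the repo, branch, and token are placeholders):

```python
# Sketch: require one approving review before merges to main.
# Repo name and token are placeholders; endpoint and fields follow
# GitHub's documented branch-protection API.
import requests

resp = requests.put(
    "https://api.github.com/repos/acme/widgets/branches/main/protection",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": "Bearer <token>",
    },
    json={
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "required_status_checks": None,  # not relevant to this example
        "enforce_admins": False,
        "restrictions": None,
    },
)
resp.raise_for_status()
```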
It’s a sensible idea – until the process itself starts creating more friction than it prevents.
That was the situation I found myself in while leading a globally distributed engineering team. We already worked well together, but because we were spread across different time zones, even small, low-risk changes could get stuck waiting for reviews. Momentum slowed not because the work was complex or risky, but because the process demanded it.
This led to an experiment: what if code reviews weren’t mandatory? What if we trusted the engineers closest to the work to decide when a review was needed, based on the nature of the change?
Different Types of Reviews
In theory, a code review should catch bugs, teach better patterns, and raise the overall quality of what we ship. In practice, many reviews amount to a quick skim and a few nitpicks about syntax or formatting – plus several hours, or days, of delay.
Not all pull requests are created equal, and if you’re often saying “LGTM 👍” without any other feedback, odds are you’ve seen your fair share of shallow reviews (by which I mean pull requests that don’t require much from you beyond mundane surface checks).
Reducing the clutter of low-risk, low-value changes allows reviewers to spend their energy on foundational components, architectural decisions, user experience, and risks – the kind of conversations that genuinely improve system quality and create valuable knowledge sharing.
To put it in a business context, this follows an idea from transaction cost economics: every coordination step – like a mandatory review – carries a hidden cost. If that cost exceeds the value added, the system slows down unnecessarily.
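As a toy back-of-the-envelope version of that argument – every number below is invented for illustration:

```python
# Hypothetical numbers: does a mandatory review on a small change pay for itself?
p_defect       = 0.03   # chance a small change ships a defect
cost_of_defect = 2.0    # engineer-hours to detect, fix, and re-ship it
catch_rate     = 0.5    # fraction of those defects a review actually catches

value_added = p_defect * catch_rate * cost_of_defect  # expected hours saved
review_cost = 0.25 + 4.0  # reviewer time + average cross-time-zone wait

print(f"value added: {value_added:.2f}h, cost: {review_cost:.2f}h")
# value added: 0.03h, cost: 4.25h -> the mandatory step destroys value here
```

Flip the numbers for a risky change to a critical flow and the review pays for itself many times over – which is exactly why the decision should depend on the change.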
Guiding Good Decisions
Making code reviews optional means moving the decision to the people closest to the work and trusting them. Even so, I believe every experiment, transition, or change offers an initial opportunity to create shared investment among everyone involved.
So! In our case, we sat down and created a set of “rules of engagement” to help gauge what constitutes a change worthy of a review:
Rules of Engagement (example)
We trust your judgment to determine when a review is necessary, but here are a few questions to help you assess whether to proceed solo or ask for a review (a toy version of this checklist in code follows the list):
- Size: Small one-liners? Ship it. Large refactors? Get a review.
- Business impact: Changes to critical flows always get reviewed.
- Feature flags: Well-isolated under flags? Likely safe to merge.
- Complexity: The more complex the code, the more valuable a second opinion.
- Testing: No automated tests? Ask for manual review.
- Foundation: Code intended for reuse deserves extra care.
- Knowledge sharing: If others will benefit from visibility, consider a review.
- Familiarity: New to this code? Lean on a teammate.
- Dates and time zones: Always, always get an extra pair of eyes on it 😂
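And here is that checklist as a toy Python sketch. The field names and size threshold are invented, and the point of the experiment is that the author, not a script, makes the call – but it shows how mechanical most of the questions are:

```python
from dataclasses import dataclass

@dataclass
class Change:
    lines_changed: int
    touches_critical_flow: bool
    behind_feature_flag: bool
    has_automated_tests: bool
    is_shared_foundation: bool
    author_knows_codebase: bool
    touches_dates_or_timezones: bool

def should_request_review(c: Change) -> bool:
    """Rough translation of the rules of engagement; errs toward review.
    Softer rules (complexity, knowledge sharing) stay human judgment calls."""
    if c.touches_dates_or_timezones:  # always, always 😂
        return True
    if c.touches_critical_flow or c.is_shared_foundation:
        return True
    if not c.has_automated_tests or not c.author_knows_codebase:
        return True
    if c.lines_changed > 100 and not c.behind_feature_flag:
        return True
    return False  # small, tested, familiar: ship it
```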
One last thing – Pull request descriptions: Even if a review isn’t needed, always write a clear description. Explain what’s changing, how you tested it, and anything important for future context. It helps your future self, helps the team, and often helps you catch mistakes before you hit “merge” – almost like a form of rubber-ducking.
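One possible shape for such a description – a minimal template sketch, not our literal one:

```markdown
## What changed
One or two sentences on the behavior before and after.

## How it was tested
Automated tests added or updated, plus any manual verification.

## Context for later
Flags, follow-ups, rollout notes – anything your future self will want.
```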
What Changed When We Let Go
The results were noticeable almost immediately.
Pull requests merged faster. Developers weren’t stuck waiting overnight for approvals on minor changes. But the deeper impact wasn’t just faster merges – it was a stronger sense of ownership. As developers took direct responsibility for assessing their changes, the mental model shifted from “someone else will catch this” to “this is mine to get right”.
What’s the risk here? Who needs to know about this? What’s the fastest safe path forward?
Smaller, more focused pull requests became the norm. PR descriptions improved because the work needed to stand on its own. Risky or foundational work was proactively flagged for review when it mattered, not because process demanded it.
Beyond reviews, our automated test coverage increased. Without reviews as a safety net, engineers leaned harder into automation – organically strengthening the foundations of our systems.
A few minor bugs made it through, but their small scope meant recovery was fast, and none caused serious problems. The net gain in speed, ownership, and team morale easily outweighed the occasional small fix.
Why It Worked: Lessons from Practice and Theory
This experiment didn’t start with theories backing it up – it was just an idea to solve a real-world point of friction. But over time, the feedback from the team – about feeling more empowered, more accountable, and more focused – revealed the theoretical backbone of what we’d done. The shift wasn’t just practical; it had quietly aligned with well-established principles all along:
Transaction Cost Economics teaches that every coordination step – every approval, every handoff – carries a hidden cost. By eliminating low-value mandatory reviews, we removed unnecessary friction without compromising real safety.
Local Decision-Making Theory suggests that the people closest to the work are best equipped to judge its risks and complexities. Empowering engineers to assess their own changes strengthened the quality of decision-making, rather than weakening it.
Research on psychological safety in teams shows that a culture of trust creates the foundation for resilience and high performance. Trusting developers to move independently, and creating space to recover quickly from mistakes, encourages sharper thinking, faster learning, and a stronger commitment to quality.
Trust breeds ownership. Ownership builds quality. And the fastest teams are the ones that trust themselves enough to move.