After my previous post went live, I got asked what I think the right balance is between the Wild West of no standardization on the one hand, and a legible, state-like system of over-standardization on the other. What is the right amount of standardization?
At first, I thought the answer was simple: just replace architects with hands-on staff+ roles to bring standardization closer to the product teams and domains. Turns out, that barely scratches the surface. Instead, I started to ponder what a healthy tech org looks like, and how it behaves.
Standardization is usually a byproduct of risk aversion, misaligned incentives and unintended consequences rather than a deliberate optimization. Too little can create unnecessary friction, but too much stifles adaptability. However, I'll argue that the real challenge isn’t balance, but rather knowing when standardization actually solves a problem and when it’s just getting in the way.
Before diving into that, though, let’s tell some lies.
I. The Lies That We Tell Ourselves
As Software Engineers, we're good at telling ourselves lies. In fact, we're probably better at that than we are at our actual jobs, whatever that is again (I mean as a group, of course not you, dear reader).
With time, the lies have turned into myths, enshrined in our collective hivemind and echoed to us by a wind gust whenever we misstep.
"Thou shalt not repeat yourself".
"Why are the trees speaking?", you ask yourself right before shame engulfs your entire being and you're demoted back to junior engineer.
We talk about systems in almost Platonic terms—scalability, flexibility, DRYness—as if these are universal truths or ideal end states. But just as with Plato's ideals, the natural world is anything but ideal. How about a game of "Spot the Lie"?
"An interface to a third-party API needs to be built as a facade in a separate microservice. That way we can switch the third-party system without downstream effects."
Most of the time, when you switch providers, you also need to re-assess the assumptions you made when first building the integration. So now you've prematurely created an abstraction that's bound to leak, because you didn't have all the information when you built it. That's the thing with unknown unknowns, and it's why almost all abstractions are leaky. It's a great way to increase the cognitive complexity of the system (we'll get to that).
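To make the leak concrete, here's a minimal sketch of how this tends to play out. Everything in it (the provider, the client class, the facade) is invented for illustration, not a real SDK:

```python
# A facade built around the first provider's behavior. All names here
# (StripeLikeClient, PaymentsFacade, create_charge) are hypothetical.

class StripeLikeClient:
    """First provider: synchronous charges, amounts in cents."""

    def create_charge(self, amount_cents: int, currency: str) -> dict:
        # Pretend network call; this provider answers with a final status.
        return {"id": "ch_123", "status": "succeeded"}


class PaymentsFacade:
    """The 'provider-agnostic' interface the microservice exposes."""

    def __init__(self, client: StripeLikeClient) -> None:
        self.client = client

    def charge(self, amount_cents: int, currency: str) -> str:
        # Returning a final status is an assumption inherited from
        # provider #1, not a universal truth about payments.
        return self.client.create_charge(amount_cents, currency)["status"]


# Provider #2 settles asynchronously via webhooks: at call time the only
# honest status is "pending". Every caller that branched on "succeeded"
# now has to change anyway -- the facade leaked.
```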
Here's another classic.
"Let's re-use this service that another team built instead of building our own for our use case."
DRY as a Dry Gin. Who cares that we've implicitly created a dependency between two teams? A dependency is only real if it's imported in my code, right? What happens in reality is that you now need to coordinate development between teams. Or worse, you try to create abstractions that cater to both teams' use cases. Or, a little less bad, you split the code down team boundaries. Remind me again of the point of not repeating ourselves?
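A minimal sketch of where the "cater to both teams" path tends to end up, with invented teams and fields:

```python
# One "shared" discount service, two masters. Each team's use case shows
# up as a flag, and every change becomes a cross-team negotiation.

def calculate_discount(order: dict, *, team: str) -> float:
    if team == "checkout":
        # Checkout applies discounts to the pre-tax subtotal.
        base = order["subtotal"]
    elif team == "subscriptions":
        # Subscriptions discounts only the recurring amount.
        base = order["recurring_amount"]
    else:
        raise ValueError(f"unknown team: {team}")

    return base * order.get("discount_rate", 0.0)


# Both calls hit the same code path, so a tweak for one team risks
# breaking the other -- coordination overhead disguised as reuse.
print(calculate_discount({"subtotal": 100.0, "discount_rate": 0.1}, team="checkout"))
print(calculate_discount({"recurring_amount": 20.0, "discount_rate": 0.1}, team="subscriptions"))
```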
The lie that we tell ourselves is that we do these things for good reasons, and that those good reasons outweigh the drawbacks. Some of the intended outcomes are desirable properties of a system. Who doesn't want their system to be flexible and amenable to change? The problem is that these rules regularly fail to achieve said outcomes, and so they end up not being very useful in practice.
II. The Identity Trap
I think that part of the reason why we keep telling ourselves these lies can be found in the combination of inefficient (large) companies and our identities as Software Engineers.
Identity can foster a sense of belonging in a group or a community. It can give us guidance and meaning. It's also the single hardest thing to change about ourselves. The beliefs that form our identity influence our decision-making. As Annie Duke frames it in her book "Thinking in Bets", this can lead to problematic outcomes if that identity isn't centered around "truth-seeking". The Software Engineering identity is a form of tribal identity centered on solving problems with code. Keep that in mind for a second.
Large tech organizations are inefficient. Most of them don't experience enough market pressure or strategic alignment to force them to be efficient. They're oftentimes at a point of product and market maturity where they can shift from innovation to extracting rents. These are the major players in the tech industry, with pockets of exceptions both in the industry at large and within individual product teams inside those companies.
If there even is a coherent strategy, many software engineers will have a hard time tying their work to it. That's likely not due to any fault of their own. Driven by our identities, we seek to be useful, regardless of the circumstances. We simply create work out of thin air and, much like DOGE, we shape a narrative of efficiency and abstraction around the work that we end up doing. "We're doing x because it will increase our operational resiliency by y%."
In my experience, this is more often a post-hoc rationalization of prior beliefs than a form of truth-seeking in service of a strategy. It is spurred on by the Software Engineering zeitgeist of best practices and how systems should be built.
III. The ZIRP Trap
So, we tell ourselves lies and we do things that aren't tied to company strategy. The final piece of the puzzle is to answer how we ended up here. In what twisted reality are companies OK with this status quo?
Well, what better catalyst for doing unnecessary things at scale is there than zero interest rates? In a zero-interest-rate-policy environment we're free to throw shit at the wall in the hopes that some of it will stick—anything is better for investors than letting the cash sit.
We have a whole generation of senior software engineers whose only work experience is in a low-interest-rate environment. It has fostered a culture of innovation, along with a staggering amount of waste. In no other environment could so many talented people work on so much that doesn't further company strategy. The bloat goes unnoticed because it has been normalized over time and is now the modus operandi for everyone.
In a global game of musical chairs, organizations compete for top talent in the chase of growth: high valuations and exits, or buybacks of overvalued stock. That's the end game—at least until the music stops. This creates an environment where strategy does not matter. What matters is that we're a whole lotta smart people doing a whole lotta smart stuff. Slack and waste grow unchecked, and engineers take full advantage, building cool tech for its own sake instead of questioning the sanity of it all.
This historical backdrop of easy money and misaligned incentives explains why we've prioritized standardization and large-scale coordination over adaptability. There's just been too much space for those tendencies to be self-serving to individual engineers (see Resume-Driven Development).
At what point do we stop to consider whether our organizational structures are serving us or holding us back?
IV. The Anti-Fragility of Disorder
The way we organize tech orgs today isn’t inevitable. It’s a product of the last 10–15 years of economic and technological conditions. That means we shouldn’t be afraid to question it. Just because we’ve only known one way doesn’t mean it’s the best way.
Most mid-to-large tech orgs follow the same structure: product teams own “products,” while platform teams and “guilds” enforce standardization. This top-down approach assumes that standardization makes everything run smoother. But in practice, we've seen that it creates misaligned incentives.
The people enforcing standards always want to do something, and that something tends toward too much: too much consistency, too much efficiency, too much order. That’s a problem, because innovation happens closest to the domain. The more constrained domain teams are, the less innovation you get.
Viewed through the lens of risk, this kind of excessive standardization makes organizations fragile. A rigid org may be more legible, but it’s also less adaptable. When external conditions change, be it in the form of new competitors, market shifts or regulatory updates, rigid orgs struggle to respond.
On the flip side, less standardization introduces different risks, like legal or security concerns. But it also makes the org anti-fragile and more capable of adapting when conditions shift. Startups, for example, thrive on minimal standardization. They move fast precisely because they aren’t weighed down by process.
That’s the tradeoff: order reduces certain risks but increases fragility. Disorder introduces risk, but also adaptability. And in an industry defined by rapid change, adaptability wins. The exceptions to this are the budding monopolies of large-tech-companies-turned-rent-seekers, who are no longer interested in product innovation and risk.
There's also something to be said about the number of product teams at a company and the size of each of them, but I'll leave that to people far smarter than I am.
V. The Fear of Disorder
We fear disorder because it feels like a lack of control. And, to be fair, sometimes it is. But control is not the same thing as effectiveness. A highly ordered system is legible, predictable, and supposedly easy to govern. It is not, however, necessarily adaptable, efficient, or aligned with the actual needs of the business.
Standardization is often a coping mechanism, a way to reduce perceived risk. The logic goes something like this:
- If we standardize how things are built, we reduce variance
- If we reduce variance, we increase predictability
- If we increase predictability, we reduce risk
Sounds reasonable, right? Except it assumes that the primary risk in a tech organization is technical inconsistency, rather than strategic inertia.
When the landscape shifts because of a new competitor, a regulatory change, or a shift in user behavior, the companies that win are the ones that move fast. Not the ones that have the tidiest internal architecture diagrams.
This is why disorder at the local level is often a strategic advantage at the global level. A bit of chaos inside teams means they can respond to problems quickly, experiment with solutions, and make tradeoffs that make sense for their domain. It’s only when disorder starts creating unmanageable dependencies that it becomes a real problem.
And that’s the crux: standardization should be about managing dependencies, not enforcing conformity.
VI. Pain-Driven Development
If standardization is about managing dependencies, not enforcing conformity, then the real question is: what kinds of dependencies actually need managing?
Let’s break it down into two types:
- Essential dependencies – Things that genuinely need to be shared across teams for the company to function. Think: authentication systems and other core infrastructure. If these aren’t standardized, you get data inconsistencies and security holes.
- Artificial dependencies – Dependencies that only exist because someone decided that things should be done "the same way." Think: mandated frameworks, shared microservices, a one-size-fits-all CI/CD pipeline. These don’t reduce risk; they just create coordination overhead.
The trap is that standardization efforts often start in category #1 but creep into category #2. A few well-meaning platform engineers decide to make things easier by enforcing common tooling. A cross-team committee drafts a universal API contract. Before you know it, product teams are spending more time negotiating standards than talking to end users and shipping features.
Here’s the actual balance:
- Standardize where dependencies already exist. If two teams must interact, make it easy.
- Avoid creating unnecessary dependencies. If two teams can work independently, let them.
Cognitive complexity isn’t about code as much as it’s about organizational design. A system is only as simple as its dependency graph, and every unnecessary standard is another edge in that graph. The burden of proof here is on anyone wanting to implement a standard.
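As a back-of-the-envelope illustration (the teams and standards are made up), you can even count the edges a standard adds: every standard connects the teams bound by it, pairwise.

```python
# Counting the coordination edges each standard contributes to the
# org's dependency graph. All names are hypothetical.

from itertools import combinations

standards = {
    # Essential: these teams genuinely share this infrastructure.
    "auth": ["checkout", "subscriptions", "payments"],
    # Artificial: mandated for consistency's sake.
    "shared-ci-pipeline": ["checkout", "subscriptions", "payments", "search"],
}

for name, bound_teams in standards.items():
    # Each pair of bound teams is a potential coordination point.
    edges = list(combinations(bound_teams, 2))
    print(f"{name}: {len(edges)} coordination edges")

# Output:
# auth: 3 coordination edges
# shared-ci-pipeline: 6 coordination edges
```

The essential dependency earns its three edges; the mandated pipeline adds six more that exist only by decree.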
For product teams, the leading indicator in favor of standardization is pain. How much pain are we experiencing, and would standardizing on something alleviate that pain? Dependencies are the purest form of pain, as they risk halting all work, so don't unnecessarily introduce them. Pain is also the leading indicator of having standardized too much. For instance:
- How much time is spent on modeling business processes compared to the time spent on quickly iterating and testing assumptions?
- How much time is spent thinking about code compared to the time spent thinking about data?
PAIN.
VII. Organizing Without Conformity
If we accept that standardization should be driven by real dependencies, as opposed to any of the other flawed reasons that we've been through, then how should we organize? My feeling is that:
- Product teams should make their own choices. The default assumption is that a team owns its stack, architecture, and processes. If they want to standardize, great. If they don’t, also great.
- The best ideas spread naturally. If a particular approach is genuinely valuable, other teams will adopt it on their own. If they don’t, maybe it wasn’t that valuable to begin with.
- Global policies exist, but sparingly. Compliance, security, and core data models? Sure. Everything else? Up for debate.
A lot of tech orgs try to create cohesion through standardization. A better approach is to create cohesion through shared context. Some might argue that immature product teams can’t be trusted with this level of autonomy, but the solution isn’t to enforce standards from above. It’s to ensure teams have access to experienced peers, clear strategic goals, and strong internal feedback loops. The best practices will spread naturally if they actually work. Forced standardization, on the other hand, stifles the very learning process that helps teams mature in the first place.
The global order you’re looking for doesn’t come from rigid rules. It comes from trust: trust that teams are closest to the problem, trust that good ideas will spread, and trust that a bit of local disorder is what makes the whole system resilient.