We had a clean architecture (in theory), but it was strangling us with coordination requirements. Every minor feature change required a sync meeting across multiple teams, and changes usually propagated into other teams' modules as well. We were spending more time on communication and architectural overhead than on the actual engineering work.
I stopped measuring velocity and started measuring Structural Friction: the total human hours spent on coordination (meetings, cross-team design, development in foreign modules) relative to the hours spent on high-impact code. The number was horrifying.
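As a metric, it is just a ratio. A minimal sketch of how I track it; the weekly hour buckets below are illustrative numbers, not my real data:

```python
# Structural Friction: coordination hours divided by hours of
# high-impact engineering work. All numbers are illustrative.
def structural_friction(coordination_hours: float, engineering_hours: float) -> float:
    if engineering_hours <= 0:
        raise ValueError("engineering_hours must be positive")
    return coordination_hours / engineering_hours

week = {
    "sync_meetings": 14.0,            # coordination
    "cross_team_design": 9.0,         # coordination
    "work_in_foreign_modules": 12.0,  # coordination
    "high_impact_code": 25.0,         # actual engineering
}
coordination = sum(v for k, v in week.items() if k != "high_impact_code")
print(structural_friction(coordination, week["high_impact_code"]))  # 1.4
```

Anything above 1.0 means the team spends more time coordinating than building, which was exactly our situation.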
I had to rework every module so that domain data no longer leaked across its boundary. This was essential so that changes within a module would never require coordination meetings or work in other modules.
My team’s output tripled in three months. Structural friction is a property of your architecture that you must actively work to keep low. If your architecture forces you to coordinate (in code, in meetings, or both), it is flawed. Eliminate the need for coordination, and you eliminate the waste.
Many engineers want to avoid the friction that comes with strictly defined data ownership. They seek a polite, neutral zone to avoid confrontation. I believe this approach is intellectually dishonest. By avoiding the confrontation required to enforce true modularity, you build systems that are destined to fail the moment a schema changes.
I had to learn this the hard way. I now enforce strict separation at the data layer for every component. It is harder upfront, but it's the only way to build systems that scale without collapsing under their own weight. If you are sharing databases or letting your domain data cross boundaries into foreign modules in any way, you are not building a modular system. You are just postponing the inevitable breakdown.
The new tiers create an environment where developer velocity is directly tied to a volatile vendor quota. This is not about cost optimization. It is about a structural change in how the tool treats its users. I realized that my reliance on a service that treats developers as a variable revenue stream rather than a partner is a major risk.
I have decided to terminate my subscription. I am returning to simpler, local-first tooling that doesn't force me to battle artificial limits to do my job. True efficiency comes from what you can control, not from what a vendor allows you to access. My productivity has not dipped, because what I removed was a dependency that was actively working against me.
Every time I remember that decision, I still feel bad for not pushing back. We created an unclear interface that coupled our domains together. The second service became dependent on the internal document structure of the first, yet it had no contract to enforce that structure. We chose that path because it was the fastest way to hit our sprint goals. We let the immediate pressure win, and in doing so, we essentially guaranteed that both maintainer teams would be locked in a fragile, entangled dance for the foreseeable future.
I have since learned that sharing domain data across boundaries is a recipe for disaster. It is a classic example of prioritizing speed in the present while ignoring the mounting cost of coupling. The better approach would have been to respect domain boundaries and connect them only through a unique, immutable identifier instead of sharing stateful objects or duplicating documents. By passing an ID, you maintain the independence of each service, so they are free to evolve at their own pace as long as they don't break the interfaces.
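A minimal sketch of what this looks like, with hypothetical service and field names: the only thing that crosses the boundary is the immutable ID, and the consuming domain resolves it through the owning domain's narrow public contract rather than reading a copied document.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OrderShipped:
    """Event crossing the boundary: carries only an immutable ID."""
    order_id: str

class OrdersService:
    """Owning domain: its internal document structure can change freely."""
    def __init__(self) -> None:
        self._orders = {"ord-42": {"status": "shipped", "total_cents": 1999}}

    def get_total_cents(self, order_id: str) -> int:
        # Public contract: a narrow query, not the raw document.
        return self._orders[order_id]["total_cents"]

class BillingService:
    """Consuming domain: never stores a copy of the order document."""
    def __init__(self, orders: OrdersService) -> None:
        self._orders = orders

    def invoice_amount(self, event: OrderShipped) -> int:
        return self._orders.get_total_cents(event.order_id)

billing = BillingService(OrdersService())
print(billing.invoice_amount(OrderShipped("ord-42")))  # prints 1999
```

If OrdersService later restructures its internal documents, BillingService is unaffected as long as `get_total_cents` keeps its contract.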
But the subsidy era is over.
The 2026 price hikes, driven by rising HBM memory costs, new energy taxes, and heavier compliance mandates, are not just inflationary pressures. They are the market correcting years of massive economic malinvestment. For years, VC subsidies allowed us to build systems without genuine demand, treating compute as infinite. From a Misesian perspective, building where there is no market demand for that resource consumption doesn't make us more productive; it makes us poorer by wasting scarce resources.
I started a business with my own savings, not venture capital. I thought this made me immune to the "grow at all costs" mentality of burning VC money on non-essentials that will never make the product profitable, but I was wrong. I had fallen into the habit of using the most powerful, expensive models for every single task because it was easy, convincing myself it was essential to guarantee performance (it was bullshit). It wasn't until the costs started biting into our margins that I realized I wasn't just being wasteful; I was being lazy.
I started experimenting with "Model Tiering". I shifted routine processing to smaller, faster, and cheaper models and reserved flagship models only for complex reasoning where the value-add actually justifies the cost. The result was surprising: for 80% of our tasks the performance difference was negligible, latency dropped, and I now get things done even faster.
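The routing itself can be very simple. A sketch under stated assumptions: the model names are placeholders, and the complexity heuristic below is deliberately naive; in practice you would swap in your own classifier.

```python
# "Model Tiering" sketch: route routine tasks to a cheap model and
# reserve the flagship for complex reasoning. Names are placeholders.
CHEAP_MODEL = "small-fast-model"
FLAGSHIP_MODEL = "flagship-model"

def estimate_complexity(task: str) -> int:
    # Naive heuristic: long prompts and reasoning keywords score higher.
    keywords = ("prove", "plan", "multi-step", "trade-off")
    return len(task) // 500 + sum(kw in task.lower() for kw in keywords)

def pick_model(task: str, threshold: int = 2) -> str:
    return FLAGSHIP_MODEL if estimate_complexity(task) >= threshold else CHEAP_MODEL

print(pick_model("Summarize this ticket in one line."))
# small-fast-model
print(pick_model("Plan a multi-step migration, weighing each trade-off."))
# flagship-model
```

The point is not the heuristic; it is that the default becomes the cheap tier, and the flagship has to earn every call.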
I’ve come to realize that real wealth is generated when you align resource usage with actual demand. Every time we create demand artificially, fueling growth with VC money, we create malinvestments that the market eventually corrects (and that's never nice).
To stay lean in this environment, I've adopted three constraints:

1. Local Compute Sovereignty: If a task can run on local hardware, it should. Subscribing to an API for something that could be a local asset is now a liability.

2. Prompt Minimalism: I am aggressively stripping filler and redundant context. In an era of high variable COGS, and with energy scarcity still unresolved, every unnecessary token is a direct drain on capital.

3. Prioritize Human Action: AI is a tool, not a strategy. The most valuable builds are those where human insight directs the machine, not where we abdicate strategic judgment to a prompt.
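For the second constraint, even a crude pre-processing pass pays for itself. A minimal sketch; the filler list is illustrative, and actual token savings depend on the tokenizer:

```python
import re

# Prompt Minimalism sketch: strip filler phrases and redundant
# whitespace before a prompt is sent. The filler list is illustrative.
FILLER = [
    r"\bplease\b", r"\bkindly\b", r"\bif possible\b",
    r"\bi would like you to\b",
]

def minimize_prompt(prompt: str) -> str:
    for pattern in FILLER:
        prompt = re.sub(pattern, "", prompt, flags=re.IGNORECASE)
    prompt = re.sub(r"\s+", " ", prompt)        # collapse whitespace
    prompt = re.sub(r"\s+([.,!?])", r"\1", prompt)  # no space before punctuation
    return prompt.strip()

print(minimize_prompt("Please kindly summarize this log if possible."))
# summarize this log.
```

Multiply a few wasted tokens by millions of calls and the filler stops being polite and starts being expensive.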
I’m focusing on building a profitable, sustainable business model, not manufacturing fake growth. I’ve learned that the skill of making a product profitable is far more valuable than the skill of raising money to scale a loss-making one. I am finally being forced to account for the actual scarcity of intelligence, and frankly, whether we like it or not, it is making us better humans.
The issue was not the code. It was the lack of clear expectations. I stopped looking at our APIs as simple code definitions and started treating them as firm organizational treaties.
When you treat an interface as a treaty, it forces a shift in mindset. You stop optimizing for ease of implementation and start optimizing for stability and predictability. A clear, immutable interface lets teams operate independently because they no longer need to constantly negotiate with each other. They just trust the contract.
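One way to make the treaty explicit in code, sketched here with hypothetical names, is to publish it as a frozen, versioned interface that consuming teams type against; once V1 is published, its methods never change, and new capabilities go into a V2.

```python
from typing import Protocol

class InventoryV1(Protocol):
    """The treaty: published once, never modified, only superseded by V2."""
    def reserve(self, sku: str, quantity: int) -> str: ...
    def release(self, reservation_id: str) -> None: ...

class InMemoryInventory:
    """One team's implementation; consumers only see InventoryV1."""
    def __init__(self) -> None:
        self._next = 0
        self.reserved: dict[str, tuple[str, int]] = {}

    def reserve(self, sku: str, quantity: int) -> str:
        self._next += 1
        reservation_id = f"res-{self._next}"
        self.reserved[reservation_id] = (sku, quantity)
        return reservation_id

    def release(self, reservation_id: str) -> None:
        del self.reserved[reservation_id]

inventory: InventoryV1 = InMemoryInventory()
rid = inventory.reserve("sku-1", 3)
inventory.release(rid)
print(rid)  # res-1
```

The consuming team depends on the Protocol, not the implementation, so the implementing team can rewrite everything behind it without a single cross-team meeting.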
This shift forced us to simplify everything. We cut out the nice-to-have features that required constant cross-team coordination. We focused only on what was absolutely required for the treaty to hold.
I would do this differently next time. I would spend more time negotiating the treaties before writing a single line of implementation code.
I remember thinking at the time that this was the smarter engineering trade-off. We avoided the complexity of a new API interface, and the application responded faster. In reality, I was just ignoring the Rule of the Bolt. Instead of treating the immutable identifier as the only stable joint between modules, I had tied them together through shared, duplicated state.
I left the company months later, so I never had to deal with the inevitable synchronization nightmares that decision caused for the team that remained. I look back on that architectural decision as a failure. I prioritized short-term speed over long-term autonomy.
We should be comfortable with the cost of network calls if the alternative is breaking modularity. The only true interface between two domains should be an immutable reference. Everything else is just debt waiting to accumulate.
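And the network cost is manageable without duplicating state. A sketch, with a hypothetical `fetch_fn` standing in for the call to the owning service: a short-lived cache keyed by the immutable ID softens repeated lookups, while the owning service remains the single source of truth.

```python
import time

class CachedResolver:
    """Resolve an immutable ID via the owning service, with a short TTL cache."""
    def __init__(self, fetch_fn, ttl_seconds: float = 30.0) -> None:
        self._fetch = fetch_fn   # e.g. an HTTP call to the owning service
        self._ttl = ttl_seconds
        self._cache: dict[str, tuple[float, object]] = {}
        self.network_calls = 0

    def resolve(self, entity_id: str):
        now = time.monotonic()
        hit = self._cache.get(entity_id)
        if hit and now - hit[0] < self._ttl:
            return hit[1]        # fresh enough; skip the network
        self.network_calls += 1
        value = self._fetch(entity_id)
        self._cache[entity_id] = (now, value)
        return value

resolver = CachedResolver(lambda eid: {"id": eid, "status": "active"})
resolver.resolve("user-7")
resolver.resolve("user-7")
print(resolver.network_calls)  # 1
```

Stale-for-thirty-seconds is a tunable trade-off; a silently divergent duplicated document is not.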