The only new thing is that small teams using these new tools will run into problems that previously only affected much larger teams. The cadence is faster, sometimes a lot faster, but the architectural problems and solutions are the same.
It seems to me that existing good practices continue to work well. I haven't seen any radically new approaches to software design and development that only work with LLMs and wouldn't work without them. Are there any?
I've seen a few suggestions of using LLMs directly as the app logic, rather than using LLMs to write the code, but that doesn't seem scalable, at least not at current LLM prices, so I'd say it's unproven at best. And it's not really a new idea either; it's always been a classic startup trick to do some stuff manually until you have both the time and the necessity to automate it.
What should give anyone pause about this notion is that historically, by far the most effective teams have been small teams of experts focusing on their key competencies and points of comparative advantage. Large organizations tend to be slower, more bureaucratic and less effective at executing because of the added weight of communication and disconnect between execution and intent.
If you want to be effective with LLMs, it seems like there are a lot of lessons to learn about what makes human teams effective before we turn ourselves into an industry filled with clueless middle managers.
First, the extra speed makes a qualitative difference. There is some communication overhead when you're instructing an LLM rather than just doing the work directly, but the LLM is usually so fast that it doesn't necessarily slow you down.
Second, the lack of ego is a big deal. When reviewing LLM code, I have to remind myself that it's okay to ask for sweeping changes, or even to completely change the design because I'm not happy with how it turned out. The only cost is extra tokens -- it doesn't take much time, nobody's ego gets bruised, team morale doesn't suffer.
This might be an area where LLMs are able to follow human best practices better than humans themselves can. It's good to explore the design space with throwaway prototypes, but I think people are often too reluctant to throw code away and are tempted to try to reuse it.
For example, a customer service playbook may have certain ways to handle different user problems, but that breaks down as soon as there are complications or compound issues. But an LLM with the ability to address individual concerns may be able to synthesize a solution given the fundamental constraints. It's kind of like building mathematics from axioms.
You should only ever have a separate "service" if there's a concrete reason to. You should never have a "service" to make things simpler (it inherently does not).
Libraries on the other hand are much more subjective.
Libraries can share memory, mutable state, etc. Services can not.
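A minimal sketch of the distinction (all names here are made up for illustration): a library call and its caller share the same mutable object in one process, so an in-place mutation is visible to the caller. Across a service boundary, the data would be serialized and the callee would mutate a copy.

```java
import java.util.ArrayList;
import java.util.List;

public class SharedStateDemo {
    // A "library" call: caller and callee share the same mutable list,
    // so the mutation is visible to the caller afterwards. A service
    // would receive a serialized copy and the caller would see nothing.
    static void markProcessed(List<String> jobs) {
        jobs.replaceAll(job -> job + ":done");
    }

    public static void main(String[] args) {
        List<String> jobs = new ArrayList<>(List.of("a", "b"));
        markProcessed(jobs);
        System.out.println(jobs);  // [a:done, b:done]
    }
}
```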
> (it inherently does not)
That's going to be debatable.
It's really not. A service adds complexity. If you have no reason to add it besides to "reduce complexity" - that is an oxymoron.
There are many concrete reasons to have one. Reducing complexity is not one.
That's like arguing you can drive farther forward if you go in reverse. No.
There are reasons to drive in reverse. To move forward is not one of them.
I've been thinking about it lately and I think you are right. LLMs haven't changed what is 'good software'. But they changed some proxies I used to have for what is 'good software'.
In the past I've always loved projects that had good documentation, and many times I've used this metric to select a project/library to use. But LLMs transformed something that was (IMHO) a good indicator for "care"/"software quality" into something that is becoming irrelevant (see Goodhart's law).
Not terrible, but I'll just point my own LLM at it instead of reading it myself, like I would for actually great documentation.
It's also unclear whether tight coupling is actually a problem when you can refactor this fast.
Things that are small, can be easily replaced, fixed, changed, etc. with relatively low risk. Even if you have a monolith, you probably want to impose some structure on it. Whenever you get tight coupling and low cohesiveness in a system, it can become a problem spot.
Easy reasoning here directly translates into low token cost when reasoning. That's why it's beneficial to keep things that way also with LLMs. Bad design always had a cost. But with LLMs you can put a dollar cost on it.
My attitude toward microservices is that they're a lot of heavy-handed isolation where cheaper mechanisms could achieve much of the same effect. You can put things in a separate git repository and force all communication over the network. Or you can put code in different packages, guard internal package cohesion and coupling a bit, and use well-defined interfaces to call functions through. Same net result from a design point of view, but one is a bit cheaper to call and a whole lot less hassle and overhead. IMHO people do microservices mostly for the wrong reasons: organizational convenience vs. actual benefits in terms of minimizing resource usage and optimizing for that.
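The "packages plus well-defined interfaces" alternative might look like this minimal sketch (all names hypothetical): callers only ever see a small public interface, while the implementation stays package-private, so the boundary is enforced without a network hop.

```java
// Hypothetical sketch: the only public surface of a "billing" package
// is this interface; callers never touch the implementation directly.
interface InvoiceService {
    long totalCents(long[] lineItemCents);
}

// Package-private implementation: internal state stays hidden behind
// the interface, much like a service's API, minus the network call.
class SimpleInvoiceService implements InvoiceService {
    public long totalCents(long[] lineItemCents) {
        long total = 0;
        for (long cents : lineItemCents) total += cents;
        return total;
    }
}

public class PackageBoundaryDemo {
    public static void main(String[] args) {
        InvoiceService billing = new SimpleInvoiceService();
        System.out.println(billing.totalCents(new long[]{1000, 250}));  // 1250
    }
}
```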
While I do think actual microservices are overkill, I don't think I've seen code anywhere that survives multiple years where somebody doesn't reach into the internal state of another package. If you don't force people to use a hard barrier (i.e. HTTP), there will be workarounds.
A microservice has a very well-defined surface area: everything that flows into the service (requests) and out of it (responses, webhooks).
A totally valid and important point, but it has been diluted by talking about microservices rather than the importance of modular architectures for agent-based coding.
Agreed. Modular is what they were probably after.
> When coding in a monolith, you have to worry about implicit coupling. The order in which you do things, or the name of a cache key, might be implicitly relied-upon by another part of the monolith. It’s a lot easier to cross boundaries and subtly entangle parts of the application. Of course, you might not do such unmaintainable things, but your coworkers might not be so pious.
What it's saying could also apply to a monorepo with distinctly deployed artifacts. The reason many don't think about clear boundaries between modules is that popular interpreted languages don't support them. Using the Java ecosystem as an example, each module can be a separate .jar containing one or more package namespaces. These must have an explicit X uses Y declaration.
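In JPMS terms, the explicit "X uses Y" declaration lives in `module-info.java` as `requires` and `exports` clauses. A hedged sketch with hypothetical module names (the two descriptors would live in separate modules, shown together here for illustration):

```java
// module-info.java for a hypothetical "billing" module: only the api
// package is exported; the internal package is invisible to other modules.
module com.example.billing {
    exports com.example.billing.api;
    // com.example.billing.internal is deliberately NOT exported
}

// module-info.java for a consumer module: the dependency is explicit,
// and the compiler rejects any reach into non-exported packages.
module com.example.checkout {
    requires com.example.billing;
}
```

With this in place, crossing a module boundary is no longer "normal-looking" code: it either goes through the exported API or it fails to compile.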
The problem I see isn't so much that misuse is easy (though that's part of it); it's that there's no clear indication that boundaries are being crossed, since calling from one package to another is normal, and the fact that some packages belong to other modules isn't always obvious.
https://www.natemeyvis.com/agentic-coding-and-microservices/
In fact, my argument is that there will be more monolith applications due to AI coding assistants, not fewer.
One colleague describes monolith vs microservices as "the grass is greener on the other side".
In the end, the cost of having microservices is that the release process becomes much harder. Every feature spans at least 3 services, with possible incompatibilities between some of their versions. That's precisely the work you cannot easily automate with LLMs.
If a feature spans multiple microservices, it seems the microservice boundaries are not well defined.
They may be useful for you; on the other hand, atomising things brings overhead and a maintenance burden, and it's often not worth it.
Only if you're a sufficiently bad programmer not to tell it the architecture it must comply with, an architecture you hopefully have the skills to define.
> "These AI types are all delusional. My job is secure. Sure your model can one-shot a small program in green field in 5 minutes with zero debugging. But make it a little larger and it starts to forget features, introduces more bugs than you can fix, and forget letting it loose on large legacy codebases"
What if that's not a diagnosis? What if we see that as an opportunity? O:-)
I'm not saying it needs to be microservices, but say you can constrain the blast radius of an AI going oops (compaction is a famous oops-surface, for instance); and say you can split the work up into self-contained blocks where you can test your i/o and side effects thoroughly...
... well, that's going to be interesting, isn't it?
Programming has always been supposed to be about exactly that: structured programming, functions (preferably side-effect-free for this argument), classes and objects, and other forms of modularization, including, OK sure, microservices. I'm not sold on the latter specifically because it feels a bit too heavy for me. But... something like that?