That makes it very easy to roll out changes to any given service without breaking others and helps a lot with the backwards compatibility of services. Makes everything more resilient.
Shared code is always an internal dependency. You think 'We are sharing code and this is efficient'. But you end up having to take into account many different parts of the application when rolling out one change, because you can easily break something totally out of sight while trying to change another.
I was mostly against duplication of code. But the more I develop and maintain larger systems, the better I see the value of separating codebases, even if this includes code duplication.
My overarching argument is this: I would characterize microservices as a response to organizational and cultural challenges. However, separating a software project into multiple repositories with their own dependencies, deployments, tests, and philosophies adds needless overhead and complexity to systems. Further, because code can no longer be shared across these component parts of the system, requirements, code, deployments, documentation, etc. must be duplicated. But this duplication is done--if it is done at all--imperfectly, because it is laborious and tedious. Finally, microservices tend to drift. Some are written in Node, others in Python or Go, etc. Someone wants to try functional programming, someone wants to try Hexagonal. This leads to brittle, badly documented systems that by definition no one completely understands and that require a mountain of ancillary software to manage and operate.
> That makes it very easy to roll out changes to any given service without breaking others
Functionally, there's no difference between this and copying a function to modify for the new functionality. You're saying "I need to modify [functionality X] in order to deliver [feature Y], but other systems rely on [functionality X], so I have to carefully modify [functionality X] to avoid breaking [feature A-X]".
A solution to this is to just make [functionality X_new] and use some if statements. It's not elegant, but neither is forking a new repo to avoid refactoring. I'd characterize that as extreme technical debt, and would recommend refactoring instead. I understand that in lots of shops, forking a new repo is actually easier--you can break free of sclerotic design processes or overbearing colleagues/managers/architects. But the systems that allowed those problems into that project will soon force them into your new project. Microservices are a short-term solution to a long-term organizational or cultural problem.
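For concreteness, here's a minimal sketch of the "[functionality X_new] plus some if statements" idea. All the names here (`compute_discount`, `loyalty_tier`) are invented for illustration, not from any real codebase:

```python
# Hypothetical sketch: extending a shared function behind an opt-in flag
# instead of forking the repo. Existing callers keep the old behavior;
# the new feature passes the new keyword explicitly.
def compute_discount(order_total, *, loyalty_tier=None):
    """Original rule: flat 5% off orders over 100. New rule gated by a flag."""
    base = 0.05 * order_total if order_total > 100 else 0.0
    if loyalty_tier is not None:  # new code path for [feature Y]
        base += {"silver": 0.02, "gold": 0.05}.get(loyalty_tier, 0.0) * order_total
    return round(base, 2)
```

Not elegant, as I said, but every caller still shares one implementation, and the branch is visible in one place instead of diverging across two repos.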
> helps a lot with the backwards compatibility of services
There's two ways to do this. You can set up integration tests that prevent you from deploying if you've broken compatibility. Or you can fork a new repo whenever you need to make a (potentially breaking--which is all of them) change. I would recommend integration tests. You're gonna need them eventually.
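The kind of integration test I mean can be tiny. A sketch, with made-up handler and field names--the point is just that the "contract" is frozen in a test so CI fails before a breaking deploy ships:

```python
# Hypothetical contract test: freeze the public response shape so a change
# that breaks backwards compatibility fails CI instead of breaking callers.
EXPECTED_FIELDS = {"id", "status", "created_at"}  # the frozen contract

def get_order(order_id):
    # stand-in for the real service handler
    return {"id": order_id, "status": "shipped", "created_at": "2024-01-01"}

def test_order_contract():
    response = get_order(42)
    missing = EXPECTED_FIELDS - set(response)
    assert not missing, f"breaking change: dropped fields {missing}"
```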
> Shared code is always an internal dependency. You think 'We are sharing code and this is efficient'. But you end up having to take into account many different parts of the application when rolling out one change, because you can easily break something totally out of sight while trying to change another.
This more or less applies to any part of any software system unless you're very careful about shared state and side effects. The alternative here is unit testing, which, similar to integration testing, I recommend because you'll also need that eventually too.
> But the more I develop and maintain larger systems, the better I see the value of separating codebases, even if this includes code duplication.
I'm not at all a hardcore "don't repeat yourself" person (more "rule of three") but to the extent I am, I tend to think code duplication indicates you need to reconceive the mental model of your application, not merely create a helper method or whatever.
My point here is that for me, code duplication isn't really the worst part of microservices. The thing I find most objectionable is that they lead to weird, rickety systems that are hard to develop and maintain. You haven't lived until your tickets for a sprint involve fixing multiple bugs in code copy-pasted across half a dozen microservices, or you have to deploy your changes to this microservice you've never worked on before and it involves the most bonkers incantations you've ever read about (update this Jenkins script blah blah).
I think people have this idea that microservices reduce scope, interdependence, and complexity. Maybe sometimes that's true. But for the engineers who have to work across multiple microservices, you really get all of the bad and none of the good. You're still dealing with a big software project, but now it's sprawled across multiple repositories all with their own idioms, idiosyncrasies, languages, copy-pasted code, deployment setups, etc. etc. ad nauseum.
You might argue I've only seen bad implementations of microservices. Sure, that's possible. I'm not saying they can't work. I'm saying the forces that lead teams to adopt microservices inevitably corrupt all projects no matter their architecture, and that microservices incentivize a particular type of short-termism and myopia that makes some of the scenarios I've described the path of least resistance. I think there are far fewer pitfalls with "1 repo per project" (not "1 repo per company") and that we have great tools and techniques to help teams using this structure.
Yes, for the organization. However when the organization is larger, this may not be a disadvantage - if the teams are already as large as small startups, then it only makes sense that they have their own repo if they are doing microservices.
They drift. True. But that's no different from the dependency drift that comes with reusing code inside a single repo. Reuse is good practice, but it also has its drawbacks.
> A solution to this is to just make [functionality X_new] and use some if statements
That's a worse practice in the long run. Such exceptions and slightly modified functions complicate the codebase and make it more difficult to gain and keep context for anyone working on it. There are situations in which this is inevitable. But if it can be avoided, it should be avoided.
> You can set up integration tests that prevent you from deploying if you've broken compatibility
Nope. Trust me, you eventually can't. Things will get complicated in the long run. You won't be able to write tests for every important angle, use case, or function and maintain them. User-facing interfaces and functions are even more difficult - they involve combining all of those different services and functionality into a coherent whole. Move one brick and everything will get disrupted. You can try. But your tests, commits, and deploys will take much longer and everything will get more complicated.
> This more or less applies to any part of any software system unless you're very careful about shared state and side effects
Yes it does. Microservices is a way of avoiding that for as long as possible. Eventually testing will still get complicated in user-facing functionality. But until your app becomes that large, well-featured, and heavily used, you have a pretty good runway with microservices.
> I tend to think code duplication indicates you need to reconceive the mental model of your application, not merely create a helper method or whatever.
You can do that at the start. And you will be able to do it for a good chunk of time. But when your application is large and complicated enough, it will become more difficult to do. Microservices is a way to keep things isolated and contexts understandable for as long as possible.
> they lead to weird, rickety systems that are hard to develop and maintain. You haven't lived until your tickets for a sprint involve fixing multiple bugs in code copy-pasted across half a dozen microservices
Yes, that is an inherent difficulty in microservices. Bug tracking and logging across services must improve, and they eventually will.
> or you have to deploy your changes to this microservice you've never worked on before and it involves the most bonkers incantations you've ever read about (update this Jenkins script blah blah).
That is not specific to microservices. It can easily be encountered when dealing with a service that is tightly integrated in a monorepo, or even the part of a singular monolithic app.
Simple standards must be applied to all code across all microservices to keep them simple, easily understandable, and modifiable.
> "1 repo per project"
Doesn't that converge to the microservice model...
This is a pretty good point. I've not dealt with microservices at a huge company like a Netflix or a Wal-Mart. My experience is "we have a website with 10-20 pages, built by 1-3 teams of ~5 on 40 microservices". Maybe they make sense when you're at the scale of 100s of engineers, but they're not better than Rails/Django for 99% of companies.
> That's a worse practice in the long run. Such exceptions and slightly modified functions complicate the codebase and make it more difficult to gain keep context for anyone working on that codebase in the long run.
Oh, I was saying there's an easier solution than forking a repo. My overall point is that adding new functionality is a core software engineering thing; we should see it coming, and architect our code to--reasonably--accommodate it. To that end, sometimes we have to refactor stuff, which means leaning on your {integration,unit} tests. You're gonna have tests anyway, why not let them help you maintain compatibility?
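To be concrete about "refactor instead of fork": you can keep the old entry point alive as a thin shim over the new implementation, and let your tests guard the migration. A sketch with invented function names:

```python
import warnings

# Hypothetical sketch: refactoring a shared function while keeping the old
# entry point as a deprecation shim, so tests--not a forked repo--guard
# backwards compatibility during the migration.
def parse_price(text, currency="USD"):
    """New, more general implementation."""
    return {"amount": float(text.replace("$", "")), "currency": currency}

def parse_dollar_price(text):
    """Old entry point, kept for existing callers; delegates to the new one."""
    warnings.warn("use parse_price instead", DeprecationWarning, stacklevel=2)
    return parse_price(text)["amount"]
```

Old callers keep working, new callers get the better API, and once the deprecation warnings go quiet you delete the shim.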
> You won't be able to write tests for every important angle, use case, or function and maintain them.
A lot of mission-critical industries (aerospace, transit, medical devices) more or less achieve this. It comes at a velocity cost, but it's not impossible. I'm just gonna hand wave and say you can probably get 80% of the effect with 20% of the effort, which is great. The point isn't to catch all bugs every time, the point is to let you modify functions/etc. without forking a new repo. And again, you're gonna have tests anyway.
>> This more or less applies to any part of any software system unless you're very careful about shared state and side effects
> Yes it does. Microservices is a way of avoiding that as long as possible.
Well, another way of saying this is "microservices have this problem too, just later". Sometimes that's useful, but if the agreement we're forging out here is "microservices are useful if you have a big company with lots of services/teams", it sounds like you'll inevitably have this problem. The solution, therefore, isn't microservices, it's managing shared state and side effects, which microservices doesn't have a monopoly on by any stretch.
> You can do that at the start. And you will be able to do it for a good chunk of time. But when your application is large and complicated enough, it will become more difficult to do.
This makes me think we disagree a little about what a microservice is--in fairness it's kind of a vague term. The microservices I've dealt with are like, a small web app in some micro framework (Node, Flask, Tornado, Go) that gets iframe'd into a website along with other microservices. My problem with this is: this should be a single website built in a single framework. Django, Rails, Phoenix, Symfony, etc. are all great at this, or you can get some backend-as-a-service like Hasura or PostgREST if you're willing to use minimally fancy JavaScript.
People will argue, "but $FRAMEWORK gives you no tools for managing shared state and side effects". But it does: the database. And I would also argue that having a single project lets you consolidate the implementation of your business logic. Using microservices, your business logic is copy-pasted across dozens of services. That's pretty clearly bad--wouldn't it be better if there were something like a libcompany that everyone shared?
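What I mean by a "libcompany" is nothing fancy--just one shared module that owns the business rules, so a rule change happens in exactly one place. A toy sketch (module, names, and the tax rule are all invented):

```python
# Hypothetical "libcompany" sketch: business rules live in one shared
# module, and every endpoint imports the same implementation instead of
# copy-pasting it across services.

# libcompany/pricing.py
TAX_RATE = 0.08  # invented rule, for illustration

def invoice_total(line_items):
    """line_items: iterable of (quantity, unit_price) pairs."""
    subtotal = sum(qty * price for qty, price in line_items)
    return round(subtotal * (1 + TAX_RATE), 2)

# web/checkout.py and reports/monthly.py would both call invoice_total(),
# so changing TAX_RATE changes it everywhere at once.
```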
You'd probably argue that it'd be hard to change libcompany and that every team should have lots of forks of libcompany that implement whatever bespoke changes they need so they can move fast. But that sounds nightmarish to me, especially as someone who's debugged this kind of situation before. And guess what, when you find the problem, you might not even get to fix it because microservices still have dependencies, some of which are unknown, so you lose that benefit also. What you'll probably do is fork another microservice with your libcompany changes.
>> or you have to deploy your changes to this microservice you've never worked on before and it involves the most bonkers incantations you've ever read about (update this Jenkins script blah blah).
> That is not specific to microservices.
It kind of is though, in that with "1 repo per project" you only have 1 weird pipeline to deal with. I can manage that. I find it hard to manage a dozen weird pipelines, or dozens of weird pipelines on different old versions of deployment/testing tools or custom scripts, etc. Maybe the vision of microservices is a fleet of Lambdas carefully managed by Terraform. That sounds nice! My experience with microservices is a hot mess of everything from Chef to Make to Jenkins. Scale matters here: dealing with 1 Chef and 1 Make is much better than dealing with 4 Chefs, 3 Makes, and 5 Jenkins.
>> "1 repo per project"
> Doesn't that converge to the microservice model...
Depends on what you mean by project I suppose. I'm not a fan of Domain-Driven Design, but one of the things I do like about it is how it defines a domain, which is a namespace without clashes. For example take the word "job". In the construction.com domain, "job" might mean the building you're building, which has certain attributes: building address, crew size, etc. In the softwareconsultant.com job domain, "job" might mean a contract you have with a client, so client name, requirements, etc. To me, these should be separate projects--two Rails codebases with two databases, testing setups, deployment pipelines, etc.
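The two "job" domains above could be sketched as two unrelated models--same word, zero shared fields, which is exactly why they belong in separate projects (field names invented for illustration):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the two "job" domains: the name clashes, the
# concepts don't, so neither project should import the other's model.

@dataclass
class ConstructionJob:  # construction.com: a building you're building
    building_address: str
    crew_size: int

@dataclass
class ConsultingJob:  # softwareconsultant.com: a contract with a client
    client_name: str
    requirements: list = field(default_factory=list)
```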
I don't think there's any convergence to the microservice model here. Microservices say, "what if construction.com was actually multiple Sinatra apps". Unless you're very into the microservice ideology, it sounds like these projects should stay single projects.
---
Leaving out all the "bad" implementations of microservices (which I think the microservices model makes really easy to do), I still think the "fleet of Lambdas managed by infra-as-code" leads to situations where you have copy-pasted versions of your business logic everywhere. This is solvable with a "libcompany", but microservices proponents are against that because it necessitates "coupling" and "coordination". But that's what a software system is: a bunch of interacting parts. I often find myself lost in a semantic graveyard in these debates because it's like, what is a project, is each microservice a project or is the project now defined in terms of dozens of microservices, blah blah blah. It's hard for me to not see these systems as simply expensive, complicated ways to deploy individual API endpoints with the attendant overhead of multiple deployment/maintenance/testing setups. Maybe that's useful for some organizations, like if construction.com is a sprawling billion dollar enterprise with tons of complicated functionality. That sounds like a hard problem. But I think most people don't have hard problems, they don't need Martin Fowler, they just need Django or Hasura.