Every place I've been, the costs of microservices get overlooked in favor of the illusion of decoupling. Microservices will absolutely make simple features that are fully contained within a service easier, but as soon as a feature spans services, or even impacts the contract of a service, you're in for more pain than in a monolithic architecture. Microservices seize local simplicity at the cost of integration complexity.
Microservices, as a philosophy, are a way of encoding your org design at the networking layer. I hope you get the factoring right the first time, because it's going to be painful to change.
I'm all for small services where they make sense. But just like the advice that "all functions should be small", microservices have been myopically adopted as a design principle rather than a debatable decision.
In this case, using microservices was like getting drunk: a way to briefly push all your problems out of your mind and just focus on what's in front of you. But your problems didn't really go away, and in fact you just made them worse.
To me this means a pretty high chance of error on a day-to-day basis. And in our case some errors are convoluted, so they possibly get missed and cause slow data corruption.
That said, I'm going to make some wild inferences about what you were getting at in order to say that I agree that a microservice architecture is probably a solution in search of a problem in most cases. And, even in the cases where it is a good option, I can see all sorts of ways to mess up the implementation. The article is right; the trickiest things to get right about microservices are actually organizational issues, not technical ones. My hot take is that dev teams who are considering adopting microservices should take a serious look at how much ability they have to influence the org chart and inter-team and inter-departmental communication. If management is strictly something that happens to them, I would not give them stellar odds of achieving sustainable success with microservices. Perhaps some other form of SOA, but not actual microservices.
Hearing some places just have one giant database where all teams can add and remove columns is... Kind of scary to me.
Not always. One of the things where I desperately wish people would adopt the "microservices philosophy" is in applications which provide a scripting language.
For example, if I want to "script" OpenOffice, I am stuck with the exact incarnation of Python shipped with OpenOffice. Nothing newer; nothing older; exactly binary compatible. This is a really irritating limitation.
If, however, they simply provided a "microservice interface" that anyone could talk to rather than just the anointed Python, then you could run your own Python or script using a completely different language.
I'm picking on OpenOffice here, but this is not specific to them. Nobody who has a "scripting extension language" as part of their application has demonstrated anything better.
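As a sketch of what such a "microservice interface" for scripting could look like, here's a toy app exposing its scripting surface over XML-RPC using Python's stdlib. The endpoint and the document functions are entirely hypothetical, not any real application's API; the point is just that the "script" runs in its own process, in whatever interpreter (or language) the user prefers:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# A toy "application" exposing its scripting surface over local RPC
# instead of through an embedded interpreter.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
documents = {}

def open_document(name):
    documents[name] = []
    return True

def insert_text(name, text):
    documents[name].append(text)
    return True

def get_text(name):
    return "".join(documents[name])

for fn in (open_document, insert_text, get_text):
    server.register_function(fn)

threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The "script" side: its own interpreter -- any version, or any other
# language that can speak the protocol.
app = ServerProxy(f"http://127.0.0.1:{port}")
app.open_document("letter")
app.insert_text("letter", "Hello")
result = app.get_text("letter")
server.shutdown()
```

Nothing here ties the scripting side to the interpreter the application ships with, which is exactly the limitation being complained about.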
It's also how Windows Scripting Host operates (it also makes the language interpreters into COM objects, so you can extend the list of available languages yourself)
Def the best/worst reason to use it, but it does - practically - make sense. Anyone who has had cross-team-monoliths will welcome this.
"Uh... we had that back in the early 2000s at least, they called it SOA back then."
And yes we learned back then how painful and complex that kind of architecture was to reason about and support, compared to a simple monolith.
Monolith is truly king unless obviously you're at FAANG scales. 99%+ of shops are many orders of magnitude below that scale.
And yes, a simple SPA is a distributed system, even if the backend is a monolith.
Having to push back on that over and over was frustrating.
We didn’t even have a real devops guy, or a vpc with properly partitioned CIDR blocks to segregate our databases from the public web and we’re going to start adding the complexity of a microservice architecture?
For what! We didn’t even have _users_ yet.
But try to get folk to dogfood our application since we had no actual users besides the founder and it was like pulling teeth.
Totally backwards to me.
You can have pieces of functionality in a monolith that make sense to scale independently, and those should not be micro, they should be meaningful pieces of functionality that justify the overhead of spinning them out.
In a way your comment reflects this, a lot of the places that justified microservices were at a scale where their "microservice" was serving more requests than the average company's entire codebase.
It's "big data" with 10 GBs of logs all over again.
It seems like this doesn't come up because the most common context is one or more teams working on a single large project, probably forced to grow as fast as possible because of the funding structure involved. Increasingly, though, there are companies doing software development without the need for the focus and scale that is so common to venture-capital-powered groups.
If it's the former then you're just talking about refactoring functionality into a shared library, but at the end of the day you're still just building little monoliths. You don't have to worry about most of the problems that come up with microservices.
The world did not finally "crack" distributed systems. Most companies created multiple microservices, putting their small dev team underwater because "this is the way at the FAANG".
All of your DRY principles are out the window, and now you have to debug in production - the new word for that is OBSERVABILITY.
Not to mention that the actual reason for distributed systems is to scale multiple parts of the system independently, but you need to KNOW what has to scale individually before you do it. What I see is that the topology really reflects the company structure, of course. It's not "what has to scale separately", it's "team Y is working on X, and team Y does not want to talk to team Z, so they will create a service to make sure they don't have to talk to people".
Except that this is a giant self-own. We all still have to talk to each other, like, a LOT, because things just keep breaking all the time and no one knows why.
Dropbox, Instagram, StackOverflow - these companies are largely monoliths to this day. You thinking that your small outfit needs to be like Google is highly arrogant.
And don't get me started on the amount of money, people, CPU cycles, and CO2 emissions wasted on this solving of the problem most people don't have.
I often see appeals to Conway's Law when discussing microservices, but teams don't organize themselves this way. Instead, teams work on macroservices: the email delivery team, or the monitoring team, or whatever. In most cases these macroservices would be best implemented and deployed as a monolith, and then presented to the outside world over a reasonable API.
Usually they don't plan for how they'll coordinate their work, and that leaves gaps in the design, and puts more risk on the business.
[ASCII diagram: Team A and Team B's "Data" team (Service A, Service B) on one side, the Email product (Service C, Service D) on the other, with arrows criss-crossing between the services and three questions hanging over everything: "How do we work together on overall system design?", "How do we manage stakeholder risk?", and "How do we coordinate changes?"]
On top of that, they don't even make a true microservice. They start directly calling into each other's data rather than interfacing at an API layer, they make assumptions about how each other works, they don't do load testing or set limits or quotas... and because none of them understand the rest of the system, they don't see that their mutual lack of understanding is the cause of their problems. Even with multiple teams, if they're forced to work inside a monolith, there's a much better chance they will, even if only by accident, come to understand the rest of the system.
I did the full trip on this microservices rollercoaster. Monolith => uServices => Monolith.
I used to vehemently advocate for using microservices because of how easy it would be to segment all the concerns into happy little buckets individuals could own. We used to sell our product to our customers as having a "microservices oriented architecture" as if that was magically going to solve all of our problems or was otherwise some inherent feature that our customers would be expected to care about. All this stuff really did for us is cause all of our current customers to walk away and force a re-evaluation of our desire to do business in this market. All the fun/shiny technology conversations and ideas instantly evaporated into a cloud of reality.
We are back on the right track. Hardcore monorepo/monolith design zealotry has recovered our ship. We are focused on the business and customers again. The sense of relief as we deprecated our final service-to-service JSON API/controllers was immense. No more checking 10 different piles of logs or pulling out wireshark to figure out what the fuck is happening in-between 2 different code piles at arbitrary versions.
I'm within spitting distance of the end of a project of collapsing a service-oriented system back into a Majestic Monolith. Every step of the way has reduced the lines of code, fixed bugs, saved money, saved time. It's been such a joy that I'm considering doing only this as a side hustle. "Saving" companies who were sold an over-complicated dream.
This has crossed my mind a lot lately. I think we are looking directly at one of the largest emerging markets in technology. What do we think the TAM is going to be for undoing webscale monstrosities by 2025? Not every business will fail due to their poor technology choices and will be able to pay some serious consulting fees...
I've practically got a system for doing this now. It mostly starts with domain modeling in Excel, with all the business stakeholders in the loop at the same time until everyone agrees. I find if you get this part right, it doesn't really matter whether you use C# vs Python, or AWS vs on-prem, to build the actual product. Hard to get opinionated and locked in when your deploy to prod involves a 100 meg zip file and 3 lines of PowerShell run against a single box.
Why touch a critical web service relied upon by literally every product you have when you don’t have to? Your business still functions if account management is offline; but not when authentication is offline. Even short outages to auth were unacceptable (millions of customers) and any updates had to be performed in constrained windows due to the criticality of the service.
So we cleaved off the mission-critical parts, stuck them in their own repos to be versioned independently, which let us move faster on the account management work since we could confidently deploy code that wasn’t 100% working because we didn’t need to wait for a maintenance window.
In a monolith, you just call another function/class, but in microservices that function is an HTTP call. I guess the benefits of microservices are the ability to scale different microservices independently, being able to choose different languages for different microservices, and fewer conflicts in the repo as more people work on it, but you still have to deal with backwards compatibility and versioning of endpoints.
I think lambdas are interesting when you look at it this way. A microservice is essentially a set of functions which is constantly deployed as one unit. But with Lambdas, each function is a single unit that can scale independently.
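The function-call-versus-HTTP-call difference can be made concrete with a toy sketch; the price lookup, the endpoint path, and the payload shape here are all invented for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# In a monolith, the "price service" is just a function call.
def get_price(item_id: str) -> float:
    return {"widget": 9.99}.get(item_id, 0.0)

# As a microservice, the same function sits behind HTTP.
class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        item_id = self.path.rstrip("/").split("/")[-1]
        body = json.dumps({"price": get_price(item_id)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), PriceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The monolith version: one expression, fails only if the code is wrong.
local_price = get_price("widget")

# The microservice version: serialization, a socket, a timeout, and a
# pile of new failure modes (connection refused, slow peer, bad payload).
with urllib.request.urlopen(
    f"http://127.0.0.1:{port}/prices/widget", timeout=5
) as resp:
    remote_price = json.loads(resp.read())["price"]

server.shutdown()
```

Both paths return the same number; the difference is everything wrapped around the second one, which is exactly the integration complexity being traded for local simplicity.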
Run time DI configured from XML.
With enough Java anything is possible!
I like to print statements like this one and put them in a frame on the wall.
> but in microservices that function is an HTTP call
I think that maybe you're understating the complexity that distributed systems may involve.
For example, see all of the following:
- https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing
- https://blog.erratasec.com/2012/06/falsehoods-programmers-believe-about.html
- https://medium.com/@kenbantoft/falsehoods-programmers-believe-about-networks-30a328c25c50
Plus, in the current day and age, we still don't have that many convenient ways to make two systems interact over a network. REST and things like GraphQL don't map well to actions, whereas RPC solutions like gRPC also involve a certain amount of boilerplate code, and you still need to think about the concerns above.

The blast radius on deployments can be smaller.
You've already done the work to build up IPC/networked communication so you can make big decisions in a service (like using a better suited language for some feature) without worrying about integrating it with every other feature in your monolith.
You can tailor the instance type to the service. Say you need some kind of video ingestion (or something) that is low use but needs a high memory ceiling. Would you rather pay for that memory across all your monolith instances or just run a few instances of a high memory service?
There's a lot of differences you're not thinking about.
You can have a single function that handles all database writes. A single function that monitors for failures. A single function that sends outbound notifications... etc.
If these functions are called in a way that: allows for A/B testing, SUPER high latency (or no response), failure notifications sent to the code owner, automatic retries of failures, independent code deploys
Then, congratulations, you have a monolith made up of microservices!
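A rough in-process sketch of that kind of call policy, assuming a hypothetical `resilient` decorator that adds retries and a failure-notification hook (A/B routing and latency budgets would bolt on the same way):

```python
import functools
import time

def resilient(retries=3, on_failure=lambda exc: None):
    """Hypothetical call-site policy: automatic retries, then a failure
    notification to the code owner if every attempt fails."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(retries):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    last_exc = exc
                    time.sleep(0)  # real backoff/jitter would go here
            on_failure(last_exc)  # e.g. page or email the owning team
            raise last_exc
        return wrapper
    return decorate

notified = []

@resilient(retries=3, on_failure=notified.append)
def flaky_write(record, _state={"calls": 0}):
    # Simulated transient failure: the first two calls raise.
    _state["calls"] += 1
    if _state["calls"] < 3:
        raise IOError("transient outage")
    return f"wrote {record}"

result = flaky_write("row-1")  # succeeds on the third attempt
```

The caller gets retry and failure semantics without any network hop involved, which is the "monolith made up of microservices" idea in miniature.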
> In a monolith, you just call another function/class, but in microservices that function is an HTTP call.
That right there is the fundamental difference.
For one thing, calling a function in the same process is going to be orders of magnitude faster than a network connection call and all that it entails. Even if performance doesn't matter at all in some use case, it's also additional cost to be running all these additional instances.
And then, complexity went up since a network call can fail in all kinds of additional ways that a jump to a function address in the local process cannot. So the code has to deal with all those. The complexity of correlating your logs and diagnostics also just went up. Your deployment automation and versioning complexity also now went up.
All these are solvable problems, of course. It just takes more people, time and budget. If the company is large enough sometimes it's worth it for the dev team decoupling aspects. If the company is tiny, it's approximately never worth taking on all this extra work and cost.
In J2EE that difference is a configurable technical detail. You have 1 service that calls another, and the protocol they use can be a normal function call, RMI, SOAP (or I think REST nowadays) depending on dynamic configuration.
The difference is enormous.
This is just like those people who consider all DB calls to be "free". Not that this went away either, it's just way more egregious now.
I'm a big SOA fan, but my experience with any non-trivial lambda architectures has soured me on the FaaS concept writ large.
He talks about the scaling issue; they were using Django at that time and scaled up to 50 million users with a small team of developers.
I do not think the majority of companies have more than 50 million users and absolutely need to go full microservices.
For the scope of that app, it would have been absurd to use microservices. And I think most people who are in favor of microservices would say the same thing. To me, what microservices help with is when you're building an entire platform, rather than a single product. Not even necessarily on the scope of Facebook or Google, but I've worked at companies where one team might work on an app for managing social media accounts, and another app helps you optimize the SEO of your website. Neither of those things really want to own the concept of a user they both share, or deal with account creation and whatnot. So that's handled by a dedicated microservice.
Now, when you get to a size where you're building a platform, you're likely going to have lots of developers and users, but I don't think whether you use microservices is a function of either of those numbers, and they're just a side effect of the thing you've built.
They're fine. But what's NOT fine are nanoservices. I did security on a project once where it seemed that every function was its own microservice. User registration, user login, and password resetting were each a separate microservice. It was an utter nightmare.
I'm realizing that the YouTube algorithm has no idea I'm a software/security engineer. It thinks I'm only into gaming, face plants, and dash cam videos.
user-registration.api.dev.my-company.com
user-login.api.dev.my-company.com
password-reset.api.dev.my-company.com
or paths by some huge k8s nginx ingress?
I have a couple layout comments:
First, I love that you have high-contrast text-to-background. That is really helpful for me. There was/is a trend of light gray text on white backgrounds for blogs; this is absolutely a terrible pattern. I appreciate that you did not go this route.
Second: serif fonts are difficult to read when the font size is relatively small. Something like Jura or similar could maintain the "terminal" feel without getting bogged down in serifs.
Third: I have a really hard time reading content when it uses smaller fonts and uses a minor fraction of the screen. This is what I see: https://imgur.com/a/NPCBkHJ -- I am getting older, and reading smaller fonts is increasingly difficult for me. I tend to keep my zoom at 150%, something about this page forced it back to 100%. I am not well versed in responsive design, so I don't know the technical details for it, but having zoom maintain or using a larger font would save some cognitive cost to older users like myself needing to zoom in.
Thanks for your thoughts on microservices!
This only protects against some threats related to insecure code, but layered protection is the key to dealing with threats, and it is useful for the parts it does help with.
Lots of devs don't know infrastructure that well, though this is changing with the adoption of Kubernetes. Additionally, most devs don't want to go on-call when their app crashes unexpectedly.
For me, other than the obvious size difference, the difference between microservices and "large" (?) services is that within a large service, a single team breaks down their domain into sensible layers, abstractions, etc.
Similarly, a microservices/SOA architecture can fail to break down its domain into good boundaries and abstractions. I've seen this happen a lot.
It's a lot harder to fix bad microservices than to fix a bad monolith
I think certain things are easier to change in a monolith, whereas other things are easier to change in a service-based design. Depends what mistakes you've made along the way, or how the spec/environment changes.
I mean yes, I know that each of these problems can be solved, sometimes in a relatively straightforward manner. But who really has all these aspects covered and doesn't run some services that started to smell weirdly a couple of months ago?
Use the same process you would use if you had a monolith.
The rest of your issues can be solved by planning out your services, rather than giving everyone free rein to make a new service. Switching to services doesn't magically mean your teams stop talking and designing together.
I have the feeling that with the (quite possible!) addition of an inter-service codebase we would end up with a distributed monolith, i.e., a program that doesn't target a single computer but a particular substrate. I don't know whether that's a good design, though.
Benefits: The program becomes more transparent and resilient to nonfunctional problems. It is also much easier to replace parts of the program. Downsides: Executing on a developer's workstation (critical for productivity and quality, IMO) might become harder. Efficiency gets reduced by orders of magnitude in certain spots.
But Unix isn’t all small tools. We have servers for the heavy work- like databases.
The challenge then becomes: how do you design that IPC mechanism? Maybe it exists! I don't know the answer yet. But it's something I think about a lot, and I haven't seen compelling evidence for "microservices are always bad, no exceptions".
The interaction mechanisms between services, e.g. REST calls, are somewhere in between pipes and function calls. They are a bit more reliable than pipes and less reliable than function calls. They can be used for more complicated tasks than what pipes are used for, but should not be used for tasks that are so complicated that they need function calls.
@carnage4life on Twitter
https://mobile.twitter.com/carnage4life/status/1311702322024...
Now, I've not had the pleasure of working in an organisation using microservices, and so I'm only informed anecdotally. But I always assumed they were best used as an API boundary between teams, rather than adding more complexity to a team's work.
I picked a ridiculous example there. But just trying to show that if you need services, it should be easy to argue for them on the merits, because it’s gonna be the least bad option.
ie. if you work alone on a problem, you can only solve for the issues you know.
This is probably a good point, however isn't the entirety of the story.
Personally, I agree that most teams shouldn't start out with microservices; monoliths can be entirely sufficient and are easier to run and reason about. Otherwise you might end up with so much operational complexity that you don't have much capacity left to actually develop the software and make sure that it's actually good.
However, you also need to think about the coupling within your monolith, so that if the need arises, it can be broken up easily. I actually wrote more about this in my blog, in an article called "Moduliths: because we need to scale, but we also cannot afford microservices": https://blog.kronis.dev/articles/modulith-because-we-need-to...
Where this goes wrong, is that no one actually thinks about this because their code works at that point in time, so they make their PDF report generation logic be tightly coupled to the rest of the codebase, same as with their file upload and handling logic, same with serving the static assets etc., so when suddenly you need to separate the front end from the back end, or extract one of the components because it's blocking updating to newer tech (for example, Java 8 to Java 11, everything else works, that one component breaks, so it would be more logical to keep it on the old/stable version for a bit, instead for it to block everything else), you just can't.
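A tiny sketch of the kind of seam that avoids this, with hypothetical names: the rest of the codebase depends on a `ReportRenderer` boundary, never on the concrete PDF library, so the component can later be upgraded, pinned to an older runtime, or extracted without touching its callers:

```python
from typing import Protocol

class ReportRenderer(Protocol):
    """The boundary application code depends on -- not a concrete
    PDF library or whichever runtime version it happens to need."""
    def render(self, title: str, rows: list[dict]) -> bytes: ...

class InMemoryPdfRenderer:
    # Stand-in for the real PDF library. Only this class (and its
    # dependencies) would need to move if the component were ever
    # extracted into its own service or pinned to an older runtime.
    def render(self, title: str, rows: list[dict]) -> bytes:
        body = "\n".join(f"{r['name']}: {r['total']}" for r in rows)
        return f"%PDF-ish\n{title}\n{body}".encode()

def monthly_report(renderer: ReportRenderer) -> bytes:
    # Application code sees only the boundary, never the library.
    return renderer.render("March", [{"name": "widgets", "total": 42}])

pdf = monthly_report(InMemoryPdfRenderer())
```

Whether the implementation behind the boundary later becomes a separate service, a separately versioned module, or stays where it is becomes a deployment decision instead of a rewrite.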
Sooner or later, containers also have to be brought up, since they can be a way to do a multitude of applications in a manageable way, but at the same time it's easy to do them wrong, perhaps due to not understanding the tech or some of the potential concerns.
Many out there think that "doing containers" involves taking their legacy monolith, putting it inside of a container and calling it a day. It isn't so, and you'll still have plenty of operational challenges if you do that. To do containers "properly", you'd need to actually look into how the application is configured, how it handles logging, external services, and how it handles persistent data. And it's not the "No true Scotsman" fallacy either, there are attempts to collect some of the more useful suggestions in actionable steps, for example: https://12factor.net/
(though those suggestions aren't related directly to containers alone, they can work wonderfully on their own, outside of container deployments)
Lastly, I've also seen Kubernetes used as almost a synonym for containers - in some environments, you can't have a conversation about containers without it being mentioned. I've also seen projects essentially fail because people chose it due to its popularity and couldn't cope with the complexity it introduced ("Oh hey, now we also need Istio, Kiali, Helm, oh and a Nexus instance to store Helm charts in, and we'll need to write them all, and then also have a service mesh and some key value store for the services"), when something simpler, like Docker Swarm or Hashicorp Nomad, would have sufficed. I actually have yet another blog topic on the subject, "Docker Swarm over Kubernetes": https://blog.kronis.dev/articles/docker-swarm-over-kubernete...
(honestly, this also applies outside of the context of containers, for example, picking something like Apache Kafka over RabbitMQ, and then being stuck dealing with its complexity)
In conclusion, lots of consideration should be given when choosing both the architecture for any piece of software, as well as the tech to use to get it right. In some ways, this is slower and more cumbersome than just pushing some files to an FTP server that has PHP running there, but it can also be safer and more productive in the long term (configuration drift and environment rot). Sadly, if the wrong choices are made early, the bad design decisions will compound with time.
The biggest problem I've seen is that early applications are not built in a modular fashion. More a maze of twisty little functions calling each other, where you quickly end up with circular dependencies and other "challenges". If your base monolithic architecture mimics the world of microservices (modular single-purpose functions and event buses to pass around information), splitting it apart later is far less painful.
Good design is good design.
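A minimal in-process event bus along those lines (all names hypothetical): modules publish and subscribe by topic instead of importing each other, which is the main thing that keeps the circular dependencies out:

```python
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    """In-process pub/sub: modules communicate through named events, so
    extracting one later means swapping this for a network broker
    rather than untangling a web of direct calls."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict[str, Any]) -> None:
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
audit_log = []

# The "orders" module never imports the "audit" module, or vice versa;
# both only know the bus and the "order.placed" topic.
bus.subscribe("order.placed", lambda event: audit_log.append(event["order_id"]))
bus.publish("order.placed", {"order_id": 7})
```

The topic names become the contract, which is the same discipline a service boundary would impose, just without the network in the way yet.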
So Node being single-threaded is not itself a reason to use microservices.
I tend toward writing a monolith for the core API of a service, but then break out microservices for tasks that need to scale independently (or that need to run on high-memory/high-performance instances, for example). So I'm not totally against using microservices. But we should choose to use them when they're to our advantage to use them, not just "because they're already written that way."
With modern tooling, deployment and managed storage are not a problem at all: you use templates or buildpacks, or even Lambda combined with GitLab/GitHub CI capabilities. Recent progress allows you to embrace zero-ops and microservices. On my team we don't have a dedicated ops person, and deployment from dev to prod runs via git, by the developer.
Node itself has very bad profiling tooling compared to more "adult" languages. If you run a microservice, it's much easier to spot a CPU problem or, especially, a memory leak. And vice versa: if you're doing something in Scala or Java, the benefits of microservices are minor. It's also much easier later to rewrite some of the services in a more performant language.
I also agree with some of them.