Big corp wins while its customers create DevOps and other buzzword teams, and the majority of the IT world loses the ability to actually administer systems, becoming users addicted to ever-changing vendor offerings that make it harder to learn anything useful outside them.
But I understand why, for streamlining purposes, we use Kubernetes. It makes the networking "easier", and I feel it integrates better with other CI/CD tools than Ansible does. It is only a feeling, since the Ansible version I used to use was quite old, so I might be wrong.
You do not need Ansible for VM provisioning - you can bake a VM image that will pull repos and do other preparation. HashiCorp Packer[1] is a good tool for this, imo. This applies to bare metal too, as you can bake an ISO or IMG the same way. Anything that differentiates those systems can be set up with cloud-init or something similar.
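For illustration, baking such an image with Packer might look roughly like the following HCL sketch (the AMI filter, repo URL, and script path are hypothetical placeholders, not from the original comment):

```hcl
# hypothetical Packer sketch: bake an image that already has repos pulled,
# so instances boot ready to run instead of being configured at boot time
source "amazon-ebs" "base" {
  ami_name      = "baked-app-image-{{timestamp}}"
  instance_type = "t3.micro"
  region        = "us-east-1"
  source_ami_filter {
    filters = {
      name = "ubuntu/images/*ubuntu-jammy-22.04-amd64-server-*"
    }
    most_recent = true
    owners      = ["099720109477"] # Canonical
  }
  ssh_username = "ubuntu"
}

build {
  sources = ["source.amazon-ebs.base"]

  # pull repos and do other preparation at bake time, not at boot time
  provisioner "shell" {
    inline = [
      "git clone https://example.com/your/repo.git /opt/app",
      "/opt/app/scripts/prepare.sh",
    ]
  }
}
```

Per-instance differences would then be handled by cloud-init at first boot, as the comment suggests.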
Regarding Ansible, it hasn't changed much over the years - at least nothing really major like statefulness.
But again, I'm not opposed to using Ansible when a project reasonably calls for it. It's the proper tool for configuring multiple systems with details generated for/by other systems - say, multi-cloud HA provisioning, clustering, etc.
- a monolith with too many engineers contributing: no continuous deployment but rather "release day," which was a shitshow; said day was an extensive process where hundreds of engineers (anybody who showed up in git blame) had to be online and standing by for four hours while it rolled; the release team often had to hand-revert bad patches; bullying of engineers who "broke the build" reached levels that would raise HR eyebrows; there were still often rollbacks and site breakages.
- microservice hell: there were often 2-3 APIs for the same service; platform engineering (for the protobuf RPC generation) had to support five different languages; security auditing was NP-hard; every team had its own release process; services that were still highly used were "deprecated" and left languishing for years until somebody took up the mantle and released their own parallel service that did nearly the same thing but with a different API, so now other services had to use both; etc.
"But that wouldn't happen at $my_company, we know the pitfalls and we'd never be that bad at engineering!" Sure...sure. That's what these companies said to themselves too :)
At a company, we as a dev team had to maintain a constellation of desktop applications and homemade ETL Airflow-like software that scaled really badly. In the time I'd been there I had managed to automate deployment almost entirely on my own. Sadly it was somewhat controversial, as it was done at the expense of other development tasks (my manager agreed, but not his nontechnical manager), and it probably came too late, too. Before I had enough grasp of the complicated context, we had to deliver on a big project, and needless to say it was a shitshow, and we developers took basically all the blame, with pretty much nothing to say back.
The tech stack and code had been in a state of neglect for 8+ years but that was not an acceptable answer at that time, it seemed, and I'd only been there for 6 months when it happened.
On the other end of the spectrum, at the above's parent company, I had to maintain several major versions of an API on several cloud regions. Undocumented peculiarities regarding the object storage that was specific to one region caused a production incident that took the dev and support team 3 days to untangle and solve.
I have similar stories about monorepo vs. multirepo too.
Or more specifically, "I wish these services were a monolith so we could have better type checks/easier logging/debugging."
When I'm forced to work with microservices that need Skaffold and Helm charts just to run locally, with the configuration mismanaged and strewn across a bunch of folders in the monorepo with no documentation, and with debugging in the IDE broken because nobody ever set it up, I wish for monoliths.
Really, you can have good monolithic systems and you can have good microservices as well, in addition to something in the middle: https://blog.kronis.dev/articles/modulith-because-we-need-to... (actually the first blog post that I wrote, the casual language probably shows its age).
But there can also be plenty of poorly developed projects with either. It just so happens that people hate monoliths more in the mainstream culture because most of the older and worse projects out there were monolithic, much like many hate Java and other languages because of being exposed to bad legacy projects written in them: https://earthly.dev/blog/brown-green-language/
Just wait 5-10 years and people's dispositions towards both monoliths and microservices will even out, the advantages and disadvantages of either will become clearly recognized, and the hype will shift towards something like serverless. Much like now we know the advantages and disadvantages of languages with/without GC, as well as higher/lower abstraction levels, pretty well (consider Python vs Rust, for example).
Maybe things will slow down a bit because Kubernetes will also become a bit easier to use, possibly thanks to projects like K3s, K0s, Portainer and Rancher.
Local development environments were a bit more tedious in certain cases, but that was the only issue I recall.
At another company that was acquired by Amazon, the "user service" team (so again this is a microservice that exposes a single table of a database, this time with a two-pizza team dedicated entirely to that microservice) told us that we couldn't just query the user service when we wanted to render a page containing a username given a user id, because that was too many queries. Product demands from a VP dictated that we didn't have time to set up our own caching layer for their service (is this the responsibility of every team other than theirs?), so we shipped the feature with the usernames saved in our own DB, and now when users change their usernames the old name will appear in the pages for our feature, depending on when the pages were created.
Likewise. When I see people complaining about microservices, what I more often see are poorly thought-through strawman arguments aimed at distributed systems in general, which boil down to "having to do network requests is bad".
I wonder why attacking the "microservices" buzzword sends these people into rage mode, while the sight of a web app calling a dozen APIs somehow doesn't make them bat an eye.
Personally, I think microservices should be approached very carefully, but I understand the idea behind them.
I did see two instances where it wasn't. (I mainly work with/for small companies, startups.) In one instance I was called in as a tech lead/expert for a small startup having a kind of product/software crisis. They'd been working on their service for a year or two (yes, way too long), and 2-3 months before the planned release at some random conference (Slush, TNW or whatnot), one of the developers figured out that the whole codebase was a piece of shit, that there was no way they could be ready in time, but that they should rewrite it as a collection of Node microservices and that could work. The monolith they had was PHP, just to add to the fun, so switching over would mean switching languages too.
The guy, he was a smart and motivated chap, even started implementing one service in his free time, the user registration/user handling I think (the least important and least complex one, of course), which somehow screwed up the monolith and made it start crashing. (Or so they said, I don't know what was up with that.)
Obviously, it was a 100% stupid idea, and we went on with fixing their development process instead (starting to do scrum, teaching the stakeholders that they need to stop phoning the developers directly to ask for features and fixes, introducing automated testing, etc.). Oh, and it was a team of 2-2.5 people. (With the group manager doing some backend work too, but also managing another, totally unrelated project for another client.)
The other one was a bit of a different story, where I just shared my insights over a call. A guy I've known took over a project that was built by a small team (2-5 people, can't remember) for a startup, and he wanted some external opinion for himself and the founder. That one was built as a set of microservices, but they had all kinds of stability issues. The idea was that it had to be very-very-super scalable. Because, you know, you launch and they will come, and there's nothing worse than not being able to handle the load. Except there is: they had been building the thing for over 2 years at that point.
It was an online medical consultation solution (you describe your problem, pick a doctor, do a f2f call and pay for the time). The funny thing is that I'd built a very similar system, as a startup cofounder, 3-4 years earlier for psychology consultations, with the help of 2 other guys who didn't even work full time (one of them came after the first one quit). The MVP was up in, I think, 2 maybe 3 months. Ours was a monolith-ish thing and theirs definitely looked better, maybe scaled better and would have been cheaper to operate at scale (we used an external service for the video calls). But ours was a lot cheaper to build and launch, and we could test (validate) our solution a lot earlier with real customers.
If it worked out, we could have started breaking it down into multiple services as/if/when needed.
Why do you feel this is relevant, let alone detrimental to the idea of microservices? It looks to me that it's one of the primary positive traits.
> The guy, he was a smart and motivated chap, even started implementing one service in his free time (...) which somehow screwed up the monolith and made it start crashing.
This statement makes no sense at all.
> Obviously, it was a 100% stupid idea (...)
I saw no stupid idea in any of your statements.
You stated the legacy codebase was crap, and that a team member took it upon himself to do the strangler-vine thing and gradually peel responsibilities out of the monolith. What leads you to believe this is stupid?
I can't help but think most of these articles are written by engineers working at 30-person startups or something. And there's definitely a lot of org size and structure between a startup and a FAANG-sized tech giant.
It's madness. The solution is to avoid the polyglot issue by fiat and to ensure that there is some actual planning and rationale around when it makes sense to add a service. Most groups I've talked to don't even have a good answer to "why is this in a separate service from that?" when asked, and I've talked to a lot of them.
The article does _not_ discuss why engineering teams ignore that advice.
Companies see microservices as a silver bullet for solving complexity. Inexperienced engineers attracted to shiny things jump on the bandwagon. Vendors sell tooling to deal with the new complexity. But in that case, if it wasn’t microservices, it would be OOP, FP, SPAs, RPC, RDBMS, NoSQL, etc. The problem is the hype cycle. Over-use of microservices is only a symptom.
Whereas when microservices were overhyped, they were introduced into orgs/companies that didn't have the brainpower/experience to implement them properly, or even to be able to say an informed no.
A lot of people, especially smart people, like going "everyone says X, I'm going to try to appear smart by arguing not-X".
And that's how you end up with people in the west going "Russia is the victim of the war in Ukraine! Nato encroachment!", or "they haven't tested the vaccines!"
This is more like “the trough of disillusionment” on the hype cycle. You’ve suffered so much at the hands of microservices, you want to convince everyone else not to use them. Lots of others have suffered similarly, so thanks to confirmation bias, your post gets lots of likes based on the sentiment.
A garbled knot of interdependent microservices with timing issues, bad extensibility, and unpredictable flow.
An ornate matryoshka set of wrapper functions calling each other, spreading over multiple directories, making any modification a large, error-prone effort.
Event systems, probably multiple, with no real difference from a plain function call other than your debugger getting very confused by them.
Database schemas with table names that exude the narcissism of small differences, with nitpicky rules that make them explode if any flexibility is demanded.
An AWS bill that's 10x more than any reasonable expectation given the problem set.
An object-oriented design that looks like some kind of fetishized exercise of every feature possible, where defects cascade into action at a distance and unintended consequences, with tight coupling that can't be extricated, leading to a rewrite, just like it did last time.
They are the people who create the waterfall of dozens of levels of div tags for no functional reason other than to accommodate their incompetence.
They are the ones that want to pollute your entire day with needless meetings over irrelevant things that will not be acted upon.
Of course there are no useful comments or tests or documentation. The git commit messages are single words like "fix" and "rewrite". There's no versioning in the deploy, nor a reasonable approach to logging that would allow a successful audit, and the thing is too state-dependent to reproduce bugs.
Then there are the dependencies, loads of them, picked seemingly at random, written by people who think like them, with the same disregard for documentation, compatibility, or testing. But they have very pretty websites which say they're painless, simple and easy, so I guess it's all ok, right?
The problem with microservices is the same problem with anything else and changing paradigms won't fix it. The approach needs to change, not the technique. It's a different kind of budo.
The power of a concept isn't in the need but in how well it optimizes the process.
Microservices instantly looked like a fad. Two classes of fad apply. One is a "move stuff around and complexity will magically go away" fallacy fad. The other is a "way to promote vendor lock-in or higher cost" fad.
Other major classes of fads are: consultant self promotion fads, re-invention fads of all kinds in which devs speed run the history of some aspect of computing to arrive at the same place, magic pixie dust fads where sprinkling some buzzword on things makes everything better, management "methodology" panacea fads, etc.
Avoiding fads is a superpower. It tends to save a whole lot of money and wasted time.
The test of whether something is a fad is whether it reduces incidental complexity, enables something categorically new, or genuinely boosts developer velocity.
Incidental complexity is the complexity that creeps into our designs that is not essential to the problem but an artifact of how we got there or some prior limitation in how things are done. A genuine innovation will make incidental complexity actually go away, but not by pretending that essential complexity doesn't exist.
A categorically new thing would be e.g. deep learning or an actually practical and useful provably-safe language (Rust).
Boosting developer velocity means actually speeding up time to ship without doing so by adding a ton of technical debt or making the product suck.
If something doesn't do at least one of those things well, it's a fad.
In the right circumstances, a microservices architecture can absolutely boost developer velocity. You can reduce development/mental-model complexity, reduce blocking internal dependencies, increase performance of tooling and deployment, and allow more consistent and less risky deployments. There are certainly costs: infrastructural complexity, a new network boundary between services, and increased risk of technical/product drift.
For orgs where the benefits outweigh the costs, due to scale/org structure/perf concerns/etc it can be an enormous win for velocity. For other orgs it can be a huge velocity killer. It just depends.
Microservices just take that and spread it around a K8s cluster, using gRPC or RESTful JSON or some other RPC bus for all the various modules to talk to each other, consuming far more compute resources, helping increase atmospheric CO2, and making cloud vendors rich. Why is calling a library running in a separate task (possibly on a separate CPU) via gRPC a better approach to code modularity?
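To make the overhead concrete, here is a toy Python sketch (all names are made up for illustration) contrasting a plain in-process call with the same call routed through JSON serialization, which is the minimum extra work an RPC boundary adds even before any network hop:

```python
import json

def get_username(user_id: int) -> str:
    # the "module" being called; stand-in for real business logic
    return f"user-{user_id}"

def local_call(user_id: int) -> str:
    # a plain in-process function call: no marshalling at all
    return get_username(user_id)

def rpc_style_call(user_id: int) -> str:
    # simulate only the serialize/deserialize halves of an RPC round trip;
    # a real gRPC or REST call would also pay for the network hop on top
    request = json.dumps({"user_id": user_id}).encode()
    payload = json.loads(request.decode())
    response = json.dumps({"username": get_username(payload["user_id"])}).encode()
    return json.loads(response.decode())["username"]
```

Both paths compute the same answer; the second one just does strictly more work to get there, which is the commenter's point about modularity via RPC.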
The only time this makes sense is when you are (1) totally huge and (2) have specific hot regions of your service that you want to autoscale relative to the rest of the service.
Incremental upgrades can be achieved by just incrementally cycling your service, no microservices needed. Redeploying only when certain modules change can be achieved with CI, without the crazy runtime overhead.
This is a great description of microservices
You can have spatial and temporal memory safety by writing single threaded code in any GC language.
Edit: For more context, this opinion is more about "the art of developer management", not much about infra, security, scalability stuff
I would only seriously consider a move to microservices for deployment/perf reasons.
It's about how to keep the velocity without firing junior dev, without removing bad code from the production system.
So microservice is a leadership tool to manage it. Keep it in control.
Moreover, code quality isn't just encumbered by junior devs. In fact, in my experience it's more often managers pressuring developers to take shortcuts (e.g., taking a dependency on another system's private internals in the name of expedience, while swearing up and down that it will totally be cleaned up in a future iteration), or other organizational hurdles that make it difficult or impossible to fix things the right way, so shortcuts get taken instead. With microservices, the organization has to confront those issues; they can't be easily papered over by bypassing official interfaces.
Another reason to prefer microservices is security--not putting every secret in a single process's memory space is a great way to minimize blast radius if a service is compromised.
Another reason is reliability--if one system has a catastrophic failure mode, it won't take out all other systems. Of course, if the failing system is a core service, then it may not matter, but at least you're not bringing everything down when some service on the architectural periphery goes haywire.
I do question how microservices manage that, though. Tightly coupled microservices, i.e. "the distributed monolith", are still a real danger for teams that don't have enough engineers who know how to build loosely coupled architecture.
The reality is like this: you have a critical system that has run for ages, and now how will you scale the features? By allowing/teaching junior devs to understand how to contribute to the codebase?
There's a simpler way to do that efficiently: extract a subdomain into its own microservice, and you control the interface. Then even if that microservice has bad code quality (tech debt), your business is still running fine!
Problem solved.
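As a sketch of the "you control the interface" point, in Python terms (hypothetical names, not from the original comment), the rest of the system depends only on an interface, so the subdomain can live in-process today and behind a service tomorrow without callers changing:

```python
from typing import Protocol

class BillingService(Protocol):
    """The interface the rest of the system is allowed to depend on."""
    def invoice_total(self, customer_id: int) -> int: ...

class InProcessBilling:
    """Today: the subdomain still lives inside the monolith."""
    def invoice_total(self, customer_id: int) -> int:
        return 100 * customer_id  # stand-in for real billing logic

class RemoteBilling:
    """Tomorrow: same interface, backed by a call to the extracted service."""
    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def invoice_total(self, customer_id: int) -> int:
        # would issue e.g. GET {base_url}/invoices/{customer_id}/total
        raise NotImplementedError("network call elided in this sketch")

def monthly_report(billing: BillingService, customer_id: int) -> str:
    # callers never know (or care) which implementation they were given,
    # so internal tech debt stays hidden behind the boundary
    return f"customer {customer_id}: {billing.invoice_total(customer_id)}"
```

The design choice here is that the boundary is the contract: whatever mess exists behind `invoice_total` cannot leak into its consumers.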
But if you're a company of 10-20 people all pretty much working on the same code? Microservices just add complexity and overhead. Deployment, telemetry, documentation, version synchronization, tracing: everything becomes more complex when you start creating boundaries between subparts of your system.
For me, microservices are about boundaries. The question is what benefit that boundary provides for the team. For large companies where there are many discrete teams following different processes and release cadences, microservices might be worth the overhead. For small companies, it is wasted effort.
Not all things should be easy.
The question is which things need to be easy, and which things need to be hard?
Often the answer is "we don't know" but there is still an optimal answer to the question.
Breaking up an app into microservices is total overkill in this instance...
Microservices allow team A to deploy their component while team B is just writing code. Then team B deploys their own component while team A is at the bar.
> Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.
If we apply this law backwards, microservices reflect an organizational structure with many teams working on different things, so they make more sense in that context than within a small team.
And then there are projects that grow way beyond the point where even that will keep them manageable, in which case microservices may well make sense. But the number of companies facing challenges at that level is quite small relative to the total, and the chances that you find yourself in one of them, if you don't have a few hundred co-workers who are developers, are very small.
The service scorecard asks a bunch of reflective questions about the ramifications of making some set of functions a separate service, and rates the benefits, or lack thereof, on a scale.
You can isolate pieces of your architecture and simplify them. A lot of the issues with microservices inside a system (not user-facing) come from expectations about what the microservice deals with, the enforced boundary of the microservice, its intention, and the fact that anyone can connect to it.
Think about streaming data systems. These allow multiple components to connect to durable queues (with maintenance policies) that process data and pass it along. This is more for data that may take longer and shouldn't be handled in the same request.
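A toy in-memory version of that shape might look like this (a real system would use something durable like Kafka or SQS instead of `queue.Queue`; the stage names are made up):

```python
from queue import Queue

# toy two-stage pipeline: components communicate only through queues,
# so slow work happens downstream instead of inside the original request
raw_events: Queue = Queue()
enriched_events: Queue = Queue()

def enrich(event: dict) -> dict:
    # stand-in for slow enrichment work done off the request path
    return {**event, "enriched": True}

def run_enricher() -> None:
    # drain the input queue, passing results along the pipeline
    while not raw_events.empty():
        enriched_events.put(enrich(raw_events.get()))

# the "request handler" only enqueues and returns immediately
raw_events.put({"id": 1})
raw_events.put({"id": 2})

# a separate worker processes the backlog on its own schedule
run_enricher()
```

The point of the shape is backpressure: if the enricher is slow, its queue grows, but the component accepting requests stays fast.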
Slight personal rant: the crap that I've seen people expect microservices to do in a single request is excessive. (If you're doing more than taking a request and reading/writing a database, you're doing too much and your performance will be terrible.) Additionally, there's very little consideration of what happens when a microservice performs badly.
And this is why micro-services are poor architecture. If you need an entire service for every DB activity you are in for a terrible time.
It lets you develop, deploy, and execute AWS Lambda functions from your Laravel application.
The theory here is that sometimes you need some other language/infrastructure beyond what you're comfortable devops-ing yourself, and Lambda is actually quite good at providing you with an entire stack of stuff you don't have to own.
So if you need a single Node, Python, or Ruby function you can put just that part on Lambda and still call it from Laravel as if it were a native PHP function. No API gateway or anything to muck about with, either.
Is it a true microservice? Not really, although who knows what that actually means. It does allow you to take advantage of some parts of microservices without the pain though!
IMHO, AWS Chalice remains the go-to way to generate a REST API in a serverless manner, but I'm also curious how yours differs from the paid Laravel solution that lets you deploy your stack serverless.
Sidecar (my library) provides no REST API. You just... call the function from PHP. So you'd call it like
<?php
MyNeatServerlessFunction::execute(['foo' => 'bar']);
And that would run the function _on Lambda_, whether that function is JS, Ruby, Python, whatever. With Sidecar, you never have to configure anything in the AWS console besides the initial IAM configuration. You give Sidecar admin keys; it configures all the permissions, roles, etc., and then self-destructs the admin keys. So you don't have to muck around with anything at all.
> also curious how yours also differ from the paid Laravel solution that lets you deploy your stack serverless.
Sidecar deploys and executes _non-PHP_ functions from your Laravel app. So it's completely different. Vapor deploys your whole app and runs it on Lambda, I just deploy single functions at a time.
It is true that probably any monolith can be broken down into components, though that won't prevent the full redeployment (and all the risks that it brings).
I think in reality no one needs microservices, or a monolith for that matter. You pick the poison that fits your needs best.
I work at a 150+ year old company where change is... well, not welcome. When we said we could try to release without a schedule, several times a week, whenever we want, just because we finished one thing at a time, you should have seen their looks. We have 100 microservices doing low-latency trading on 13 stock exchanges in heavily regulated Asia, trading $bn a day - it kind of has to work day after day, and "the risk" of deploying "the whole thing" more than once a quarter was terrifying.
Well, a few better tests and a bit of bravery, and now I just do "the full redeployment" whenever I want. Some teeth are still grinding, but what can I say, we're still banking lol, and now when we find a bug, we don't wait for months to fix it.
YMMV, but I do have personal experience where the risks weren't psychological. Teams stepping into each other's shoes and breaking each other's features are a real problem, solved by communication, but that overhead has real costs.
I don't think the answer is as easy as "you don't need microservices". I think the answer is "you can effectively use both".
Monolith is an unfortunate term by which everyone seems to mean something different. So is any application whose parts do not communicate via network interfaces a "monolith"? Is an application where the majority of the function calls are not realized as a series of network transactions (like CORBA at that time) a less good application than one where different classes and modules on the same machine communicate directly with each other?
We should stop using the term, and stop assigning blanket value attributes to it. "Monolith" is not the antonym of "microservices", as suggested by the article. From a certain distance, every system looks like a monolith; that has more to do with the viewer than with the system.
I dunno, but when I see "odds are $X", I read it as "unproven and perhaps unprovable; my opinion is $X".
We have something like 19 different components. Insane.
Yes, but it also comes with a cost. Microservices might still fail in a cascading manner, and bringing such a system up under significant load is even more challenging.
You can just shoot a couple of the offending microservices dead and replace them with better implemented versions.
[Assuming there are at least 6 developers]
The main problem isn't microservices, it's control and interoperability. Facebook decides it wants to turn into TikTok? Too bad for all its users, it'll happen. "Relax, breathe, we hear you" is what Zuck said to all his outraged users after the first big rollout of newsfeed. Then a lot of scandals later (beacon, etc.) they are still at it. Google sunset Reader just like that. People are HOPING that Elon Musk adds a feature to Twitter. This is crazy.
Host stuff yourself, and not in the cloud. And for that, we need people to be able to "just install" something, like a WordPress 5-minute install at a hosting company.
I don't want to make this comment long, so anyone who wants to read the full thesis can see it here: https://qbix.com/blog/2021/01/15/open-source-communities/