THIS. This should be printed in 36-point font on every microservice article.
I find this surprising. I would've thought having a more modular system would lend itself to easier testing. Have others had this same experience?
There are "contracts" on how to talk to other services. You can test if you follow that contract, but it's hard to verify automatically that these contracts are in sync between apps.
Would love to see this released publicly. As stated in the post, there aren't a ton of (public) examples of microservice architectures, so everyone seems to be solving the same problems independently. It would be great if we could start pooling more of our resources together.
The shipment app listens to the messaging system, sees an order take place, looks at the details, and says, "Okay, I need to send two boxes to this person." Any other services interested in an order happening can do whatever they need to with the event in their own queues, and the store API doesn't need to worry about it.
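That fan-out can be sketched in-process in a few lines of plain Ruby (all names here are made up for illustration; a real system would put a broker between the services, not in-memory arrays). The key idea is that each subscriber has its own queue and publishing copies the event into every queue, so one consumer removing its copy never starves another:

```ruby
# Toy event bus: one queue per subscriber, publish copies to all.
# Hypothetical sketch -- a real setup would use a message broker.
class EventBus
  def initialize
    @queues = {}
  end

  # Each service registers once and gets its own queue (an Array here).
  def subscribe(name)
    @queues[name] ||= []
  end

  # Every subscriber's queue receives its own copy of the event.
  def publish(event)
    @queues.each_value { |queue| queue << event }
  end
end

bus      = EventBus.new
shipping = bus.subscribe(:shipping)  # shipment app's queue
mailer   = bus.subscribe(:mailer)    # mailer app's queue

# The store API just publishes; it doesn't know who is listening.
bus.publish(type: "order.created", order_id: 42)

shipping.shift  # shipment app consumes its copy...
mailer.size     # ...the mailer's copy is untouched
```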
I'm curious how you guarantee that all the systems that need to see an event will actually see it before it is removed from the event queue. I assume the shipment app, in your example, is responsible for removing the event from the queue. So what happens if it removes it before the mailer app or the "make cash register sound" app sees the event?

If you use RabbitMQ (which we are using in our own microservice architecture), you can use fanout and topic exchanges to accomplish the same thing more elegantly.
In this topology, every message sent to that exchange gets copied to any queue bound to that exchange.
This system also supports routing via "topics", which are paths that support wildcards; the publisher can publish to "foo.bar.baz", and queues can bind to the exchange using the routing key "foo.*.baz", for example.
We use this to listen for specific events, e.g. a specific app is associated with content under the path "someapp.someclient". A data store publishes modification events with "create", "update" or "delete" followed by the path; so the app, to get the stream of updates, simply listens to "create.someapp.someclient.#", "update.someapp.someclient.#" and "delete.someapp.someclient.#".
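For the curious, the "*"/"#" matching rules can be modeled in a few lines of plain Ruby (this only sketches AMQP's topic-exchange semantics, it's not how a broker implements them): "*" matches exactly one dot-separated word, "#" matches zero or more.

```ruby
# Model of AMQP topic matching: "*" = exactly one word,
# "#" = zero or more words ("words" are dot-separated segments).
def topic_match?(pattern, key)
  match_words(pattern.split("."), key.split("."))
end

def match_words(pat, key)
  return key.empty? if pat.empty?
  case pat.first
  when "#" # try consuming 0..N remaining words
    (0..key.size).any? { |i| match_words(pat[1..], key[i..]) }
  when "*" # exactly one word, any content
    !key.empty? && match_words(pat[1..], key[1..])
  else     # literal word must match exactly
    !key.empty? && pat.first == key.first && match_words(pat[1..], key[1..])
  end
end

topic_match?("foo.*.baz", "foo.bar.baz")  # => true
topic_match?("foo.*.baz", "foo.a.b.baz")  # => false ("*" is one word only)
topic_match?("create.someapp.someclient.#", "create.someapp.someclient.posts.7")  # => true
```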
Using SQS means you're either polling or long-polling with hangups every 20 seconds. This seems pretty shitty to me. Also, how do you structure one of these services around polling or long-polling to get its info? It sounds like they're using Rails. Does Rails have something that makes this easy to do?
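The usual shape is a small standalone daemon (no Rails required) sitting in a long-poll loop. A sketch, assuming an SQS-style client that responds to `receive_message`/`delete_message` like `Aws::SQS::Client` from the aws-sdk gem; the `poll_loop` name and the `max_iterations` knob are made up for illustration:

```ruby
# Long-poll worker loop. `client` is anything that quacks like
# Aws::SQS::Client; `max_iterations` exists only so tests can stop the loop.
def poll_loop(client, queue_url, max_iterations: nil)
  iterations = 0
  loop do
    # wait_time_seconds: 20 is SQS long polling: the call blocks for up to
    # 20 seconds waiting for messages instead of returning empty immediately.
    resp = client.receive_message(queue_url: queue_url,
                                  wait_time_seconds: 20,
                                  max_number_of_messages: 10)
    resp.messages.each do |msg|
      yield msg
      # Only delete after the handler succeeds, so a crash redelivers.
      client.delete_message(queue_url: queue_url,
                            receipt_handle: msg.receipt_handle)
    end
    iterations += 1
    break if max_iterations && iterations >= max_iterations
  end
end

# In production you'd run something like:
#   poll_loop(Aws::SQS::Client.new, ENV["QUEUE_URL"]) { |msg| handle(msg.body) }
```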
At first, reading this reminds me of the Blackboard pattern from The Pragmatic Programmer. This pattern seems like a neat way to separate an application into different agents.
None of these async things impact the user's main flow, so it's no problem that they aren't instantaneous. We run background processes that handle them.
We only use Rails for our frontend applications; they don't contain any business logic. Our backend processes are usually Sinatra for HTTP requests, or custom daemons.