https://github.com/mailgun/vulcand
At first glance, sure, Vulcand is just another reverse proxy, but what they have been doing is registering individual HTTP handlers with the proxy by writing them into etcd:
https://github.com/mailgun/scroll
(there is a python implementation somewhere too, just can't find it)
What's neat about exposing these individual HTTP handlers is that now your reverse proxy can produce metrics, apply circuit breakers, etc., all in a central way, while your "micro" service just has to register with etcd:
http://blog.vulcanproxy.com/vulcand-news-circuit-breakers-be...
So, you end up with an architectural style where you can deploy a single HTTP handler as its own service, similar to what the article was pointing to, but in a multi-language approach, where HTTP is the communication method.
We do use a proxy/router. The main difference is that instead of using etcd for service discovery we are deeply integrated with Mesos.
Our framework scheduler does the registration/deregistration, resource allocation/colocation, etc.
The main reasons for a proxy at this point are unified metrics collection, distributed routing (each reverse proxy knows only its immediately connected downstream operators), tracing (think Dapper), and facilitating communication with other languages (C++, Java, Go, JS/Node, Ruby, etc.).
Monolithic architecture turns a configuration management problem into a coding problem. Eventually, coupling within the monolith makes it hard to develop.
Service-oriented architecture turns a coding problem into a configuration management problem. Eventually, the potential combinations of small services become unmanageable and untestable, making it hard to run operationally.
You have two kneecaps. Which one gets the bullet? Because you're gonna get kneecapped either way.
This way, you get most-every benefit of SOA whilst being able to reason about an application as a whole.
That's the approach we're taking with our microservices framework, wym: http://wym.io/
The gist behind it is that we use Mesos/YARN to handle the complexity of deploying and running things in a cluster. We (the framework) expose an API similar to MapReduce's, plus a set of CLI programs that let you submit 'work'. The complexity we add is that of Mesos or YARN, but if you already have one of those installed, it's zero.
Right now we only support Mesos.
Erlang is mentioned as an example, since its VM certainly offers many features we'd like from an OS. Maybe comparisons to Smalltalk, LISP machines and even regular UNIX shells should be offered.
It's likely that all this has been done before, and I'm tempted to think that its major problems would stem from having too much power in the config language. For example, I'd imagine you could do this with SysV init's runlevels and shell scripts, but it would be pretty horrible. What would make the language du jour any less horrible?
Also related is, of course, the Nix ecosystem: Nix package management, NixOS configuration management, NixOps provisioning, DisNix distributed services, etc.
In all honesty, I'm not sure you read the article. The entire point is the idea to deploy tiny chunks of code, independently, in any language.
What does "tiny chunk of code" mean? According to the article:
> Literally, a simple interface that did one (or very few) things well.
Hmm, sounds like the UNIX philosophy to me. Operating on code in a language-agnostic way? Sounds like scripting to me...
> What if instead you could deploy classes.
Classes are a language feature, so this looks to be asking for an alternative scripting language.
> What if your operating system was in a way an API to deploy services (it is!)
> but the size of the code deployed was so small that it would in turn be hard
> to make mistakes.
That's the point of UNIX: do one thing and do it well; compose the pieces using scripts.
Of course, UNIX is not perfect, it's just an obvious comparison. I mentioned a few others above. That's what I got from the article, after squinting through the jargon ("deploy" instead of execute, "services" instead of processes, etc.)
The big problem with distributed classes is that network communication links are fragile, so you need to bake failure handling into your methodology. Erlang handles this in a way that Joe Armstrong explains here:
http://armstrongonsoftware.blogspot.com/2008/05/road-we-didn...
I think the CORBA clone currently in fashion is SOAP (or did we get something newer?). There are probably SOAP tools for Erlang, and you'll really get the possibility of using a heterogeneous system, like you said you want.
Yet, those things never work as well as people think they should. Keeping interoperability within a single code base is hard work, and distributing it just makes it harder.
Happy to explain what you don't get. Definitely not a CORBA clone.
> The size of the code deployed matters
And I disagree with this. The hardest deployment I've ever seen was 2KB of code: a SIM Toolkit applet is impossible to modify once it's flashed into 2 million SIM cards. :-)
Erlang's distribution was always designed for redundancy and fault tolerance first, hence features like location transparency and heartbeats across nodes. There is no inherent service discovery mechanism built in (tools, not solutions), and I suppose large topologies could benefit from third-party process registries.
I'd hardly call it "outdated", just that its default mechanisms for organization are a little different.
The issue, and my main point, is that I want that to be cross-language and cross-platform, with isolation, tracing, and debugging hooks.