In particular, I think the idea of embedding a Procfile in a Docker image is really clever; it neatly solves the problem of how to distribute the metadata about how to run an image.
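A Procfile is just a small text file mapping process types to commands; the commands below are made up for illustration:

```
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq
```

Baking a file like this into the image (e.g. a `COPY Procfile /` in the Dockerfile) means the scheduler can discover how to run each process type without any out-of-band metadata.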
This is how I currently use Docker:
1) Custom base image with all the things my company needs, like supervisord, libpq, etc.
2) Custom per-service base images built off of that base, like ones with Java for our Clojure services or Python for our research services.
3) A release consists of pulling the latest version of the relevant base image (e.g. acme-python) and then injecting the latest project code into it.
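A minimal sketch of step 3, assuming the hypothetical acme-python base image and a hypothetical app entry point:

```dockerfile
# Release image: start from the per-service base image (itself built off
# the company-wide base) and inject the latest project code.
FROM acme-python
COPY . /app
WORKDIR /app
CMD ["python", "app.py"]
```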
My concern here essentially boils down to the image repo. GitHub needs to add container storage, because while I admire Docker Hub's efforts, I don't trust it.
1. We build docker images on every commit in CI and tag them with the git commit sha and branch (we don't actually use the branch tag anywhere, but we still tag it). This is essentially our "build" phase in the 12-factor build/release/run cycle. Every git commit has an associated docker image.
2. Our tooling for deploying is heavily based around the GitHub Deployments API. We have a project called Tugboat (https://github.com/remind101/tugboat) that receives deployment requests and fulfills them using the "/deploys" API of Empire. Tugboat simply deploys a docker image matching the GitHub repo, tagged with the git commit sha that is being requested for deployment (e.g. "remind101/acme-inc:<git sha>").
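The build phase in point 1 might look something like this in CI (the image name and sha are illustrative; `docker` is stubbed with `echo` here so the sketch runs without a daemon):

```shell
# Sketch of the CI "build" phase: one image per commit, tagged with the
# git sha and the branch. `docker` is stubbed so this runs anywhere.
docker() { echo "docker $*"; }

IMAGE="remind101/acme-inc"
GIT_COMMIT="0fcb4a1"    # in CI: $(git rev-parse HEAD)
GIT_BRANCH="master"     # in CI: $(git rev-parse --abbrev-ref HEAD)

docker build -t "$IMAGE:$GIT_COMMIT" .
docker tag "$IMAGE:$GIT_COMMIT" "$IMAGE:$GIT_BRANCH"
docker push "$IMAGE:$GIT_COMMIT"
docker push "$IMAGE:$GIT_BRANCH"
```

The sha tag is the one that matters: it's what Tugboat later asks Empire to deploy.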
We originally started maintaining our own base images based on Alpine, but it ended up not being worth the effort. Now we just use the official base images for each language we use (mostly Go, Ruby, and Node here). We only run a single process inside each container. We treat our docker images much like portable Go binaries.
For example: I've gone searching through the blog posts, github readme, and KONG documentation, but I still have no idea _why_ it needs Cassandra. What does it store in there?
One of the main graphics on the KONG docs shows a Caching plugin (http://getkong.org/assets/images/homepage/diagram-right.png), but the list of available plugins doesn't include such an entry. Is that because caching is built in? Is the cache state stored in Cassandra? Or is the plugin yet to be built?
Note: I think the only 3rd party thing I'd call self-hosted is colocation where I delivered the server, they plugged it in, and the most they do is reboot it for me.
I guess a combo of philosophical and legal.
This is the level of engineering/communication I always shoot for, and which (somewhat disappointingly) is rare where I've worked.
We'd love to see one standard too. Personally, I think it's good to have a lot of competing solutions right now (ECS vs Kubernetes, Docker vs Rocket, etc) and we'll see things settle in the next couple of years as containerization becomes more common.
For anyone looking at a Dokku alternative, Cloud Foundry isn't one.
Openshift is nice though.
I'm not sure why that's a problem. If you want something that's actually Heroku-like in terms of uptime and whatnot, you need something that can manage the health of the cluster. Dokku's cool, but it doesn't make sense for anything you actually need to depend on. If it doesn't make sense to pay the overhead of running your own PaaS, just use Heroku instead.
The reason we went with vulcand is that it natively supports what we wanted to do, i.e. routing to micro-services based on dynamic, etcd-driven configuration. To do the same thing in nginx (at the time), we would have had to use either confd or custom Lua.
I think at this point, every time we move a service, we add like 5 lines to an nginx config, re-deploy the router in Empire, and the service is exposed.
The internal 'service discovery' makes this a lot easier, since we just have to tell nginx to route to http://<app_name> - no domain, no port, nothing more than the app_name thanks to DNS/resolv.conf search path & ELB stuff.
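Roughly, the nginx side of that looks like the fragment below (the app name and path are hypothetical; "acme-api" resolves through the resolv.conf search path to the app's internal ELB):

```nginx
location /api/ {
    proxy_pass http://acme-api;
}
```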
That kind of reminds me of https://xkcd.com/927/
Sorry if that's not the case. I've also played briefly with Flynn and Deis, and I haven't found anything so complicated that it would need a whole rewrite and a change of the entire approach. Moreover, with Deis I can easily change providers (DO, AWS, Azure, etc.), while with Empire I'm bound to ECS. At least that was my first impression; I have to read more.
While Empire itself may be tied to AWS, your app is still a portable, 12-factor, Heroku-compatible app. You can run it elsewhere.
Empire doesn't actually lock you into ECS. The scheduling backend is pluggable and could support Kubernetes/Swarm in the future.
So, in theory you could autoscale just like you always would. Monitor stats for a host, if a bunch of them start to run low on resources, kick off an autoscaling event.
That said, there's been quite a bit of talk about integrating Empire with Autoscaling, so that when, say, ECS couldn't find any instances with resources free for a task, Empire could kick off the autoscaling events for you. Could be pretty awesome :)
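The decision half of that "monitor and scale" loop can be sketched as a pure function. Everything here is illustrative (thresholds, bounds, the metric); a real implementation would feed it CloudWatch/ECS cluster reservation metrics and then adjust the autoscaling group:

```python
# Sketch of a scale-up/scale-down decision; thresholds and bounds are
# made up. Input would come from cluster CPU/memory reservation metrics.
def desired_capacity(current, reserved_pct,
                     scale_up_at=80.0, scale_down_at=30.0,
                     minimum=1, maximum=20):
    """Return the new instance count for the autoscaling group."""
    if reserved_pct >= scale_up_at:       # cluster running hot: add a host
        return min(current + 1, maximum)
    if reserved_pct <= scale_down_at:     # mostly idle: remove a host
        return max(current - 1, minimum)
    return current                        # within the band: leave it alone

print(desired_capacity(3, 90.0))  # → 4
```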
Just putting this out there in case anyone is looking for an alternate open-source PaaS.
I've never personally used it before (self-hosted), but it may be something that someone out there is looking for.
https://www.vultr.com/pricing/ is 20% cheaper right now at least.
Both VMs are single core with 1GB RAM. DO gives you 30GB SSD, but AWS has a freely adjustable disk size. Upscaling from 8 to 30GB is another $2 - but how many single-core, low-RAM instances use double-digit GB?
In the middle, DO has 8 cores and 16GB for $160/mo; AWS has 4 cores and 16GB for $185/mo + storage.
At the top end of the DO offerings, DO's 20-core, 64GB machine is $640/mo, and AWS's 16-core, 64GB machine is $725/mo + storage (not much). The difference in pricing is not that crazy, and you get a crapload of extra free features on AWS.
Those AWS prices are at the "On-Demand" rate. If you're willing to lock in for a year, reduce them by about a third. The argument that DO is "OMG cheaper" than AWS is no longer valid.
[1] https://www.digitalocean.com/pricing/ [2] http://aws.amazon.com/ec2/pricing/
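The top-end comparison works out roughly like this, using the figures quoted above (a sketch only; these numbers date quickly, so check the current pricing pages):

```python
do_monthly = 640.0               # DO: 20 cores / 64 GB RAM
aws_on_demand = 725.0            # AWS: 16 cores / 64 GB RAM, on-demand
aws_reserved = aws_on_demand * (1 - 1/3)   # rough 1-year reserved discount

print(f"AWS reserved: ~${aws_reserved:.0f}/mo vs DO ${do_monthly:.0f}/mo")
```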
http://serverbear.com/compare?Sort=Host&Order=asc&Server+Typ...
DO 1GB instance at $10/month has a UnixBench of 1041 [1], to beat that with AWS you have to spend $374/month.
Also, with the t2.micro you get an EBS disk, whose I/O you have to pay in addition to the instance cost. You also have to pay for the bandwidth out of the chosen AWS region. This is not the case on DO.
AWS's complicated pricing makes comparisons like yours very difficult and error-prone: I would suggest going with AWS only if you need particular features (like ELB, SQS, VPC, etc.) that DO doesn't offer.
[1] http://serverbear.com/1989-1gb-ssd--1-cpu-digitalocean [2] http://serverbear.com/240-extra-large-amazon-web-services
And how reliable a measure is UnixBench, when the top-performing server "CC2 Large" (by a factor of 25% over second place!) is a 2-core, 8GB RAM offering? It easily beats out all the two-dozen core, high-ram offerings below it.
Hell, the names of the AWS instances in that list aren't even correct. What's a "high-cpu medium"? They mean a "c1.medium" from looking at the stats page, which is now two generations obsolete - you have to know about them and go out of your way to provision one. The one name they do list, "m3.medium", is incorrectly labelled a "high i/o" VM; AWS doesn't have a "high i/o" VM, and the m3.medium is not considered by them to be network-, ram-, or storage-optimised, so I'm not sure where that's coming from. And if you do need disk i/o with AWS, you can provision reserved i/o (not very expensive), which needs to be accounted for in these comparisons. It's just getting my goat at the moment, because my comment was trying to argue against FUD, but that reference list can't even get well-known and advertised names correct.
AWS billing is complex, absolutely, but there is also a ton of flexibility, and it makes sense once you pass the learning curve. And micros do get throttled, but they also get a certain number of "throttle credits" that help them survive bursts. And yes, I agree that you should choose the right tool for the right job - one HNer really uses that huge amount of free bandwidth you get with the small DO servers with a media streaming service (I forget the handle). But that still doesn't change the fact that AWS is no longer "OMG expensive!" over DO.
More than one startup has been killed purely by AWS hosting costs in the past 5 years.
> going with AWS kills productivity because you have to learn AWS specific APIs
Products I use:

* Redshift. It's Postgres's API, and I didn't have to learn how to manage petabyte-scale clusters.
* EC2. It's Ubuntu. Or CentOS. Or whatever. You choose! Except no messing about with my own virtualization or hardware agreements or going to the datacenter.
* RDS. It's whichever database you want it to be! Only it scales! And backs up! For free!
* ElastiCache. It's Redis!

etc. etc. etc.
Learning those dang AWS-specific APIs, eh? Who'd do it?
> has been killed purely by AWS hosting costs
Then they planned badly, because AWS prices consistently go down over time. If your "business" goes under because it becomes popular, then your marginal cost per user is negative, and you're running a charity for the benefit of your users, not a business. Blaming the demise of a company whose business model is giving out free ice cream on the cost of ice cream is missing the point a little. Also, OP required Redshift; DO does not offer that.
DO/Linode don't offer the equivalent, which means maintaining your own... which is fine, but if you're relatively small, or a single person, the time you dedicate to operations tasks is time you aren't developing features and/or fixing bugs. One's business is paramount... technology is just a tool to serve that.