I currently colocate all my servers and wanted to figure out how much it would cost to switch over to EC2. After much digging and benchmarking, it seems that a single ECU is roughly equivalent to 350 to 400 points on PassMark. With this information and load metrics, it is pretty easy to determine how many ECUs I would need to switch over (RAM and disk are pretty straightforward): http://www.cpubenchmark.net/cpu_list.php.
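That rough conversion can be turned into a quick sizing calculation. A minimal sketch, assuming the 350-400 PassMark points per ECU estimate above (which is an informal benchmark result, not anything Amazon or PassMark publish):

```python
# Rough ECU sizing from PassMark scores. The 350-400 points/ECU ratio
# is the informal estimate from the comment above, not an official figure.
PASSMARK_PER_ECU_LOW = 350
PASSMARK_PER_ECU_HIGH = 400

def estimated_ecus(passmark_score, utilization=1.0):
    """Return a (low, high) estimate of ECUs needed to replace a CPU
    with the given PassMark score at the given average utilization."""
    effective = passmark_score * utilization
    return (effective / PASSMARK_PER_ECU_HIGH,   # optimistic end of the range
            effective / PASSMARK_PER_ECU_LOW)    # pessimistic end of the range

# Example: a CPU scoring 8000 on PassMark, running at 50% average load
low, high = estimated_ecus(8000, utilization=0.5)
print(f"{low:.1f} to {high:.1f} ECUs")
```

Feeding in average utilization rather than the raw score matters: a colocated box idling at 20% load needs far fewer ECUs than its benchmark score suggests.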
Came to the same conclusion as I did a few years ago. For my scenario (about a rack of servers, established business, 24/7 usage, capacity to handle a 10-fold increase in usage, and much more within a 2-hour window)... I save roughly $170,000 over 3 years doing it all myself (server costs included). This is with 3-year reserved instances.
It should be noted that I build our servers from the ground up and do all the ops.
I've created setups like yours a bunch of times (anywhere from 10-1000 servers) and always check EC2 to see if it's a viable alternative. The messy details of proper facilities management get old real quick. But, as you say, the math just doesn't work.
With existing companies you end up paying huge premiums to rent virtual instances on heavily shared hardware. You're giving up a proper network (with internal and external connections) and persistent high performance disk I/O. It's just not a great deal.
I'm trying to get it much closer to the pricing of doing it yourself, while keeping the convenience of not having to actually do it all yourself.
Would love to get some feedback from you, if you're willing. No email in your profile. Mine is jake@uptano.com.
The setup took a couple of months to research, build, and make "perfect". Ongoing, though, it takes little time to maintain (less than an hour a week, averaged). Every three years I build new servers and place them into service (2 to 3 weeks of work). I also do periodic hardware maintenance roughly every three months (typically a quarter of a day).
Due to the cost savings, we are also able to build in quite a bit of redundancy: dual PSUs, SSDs in RAID 10 on non-SAN servers, RAID-Z2/3 on SAN servers, offsite backups, complete server redundancy, spare servers ready to be slotted (I live an hour from the colo), spare parts on hand, and even multiple physical colos.
If components are selected carefully (i.e. sharing components between server roles), regular maintenance is performed, and redundancy is ensured on a per component, per server, and per datacenter level, it's not very time intensive or costly.
I am a software engineer by trade, but I love the ins and outs of hardware and ops. As such, everything that can be automated and scripted is. I can raise or move instances in minutes, just like EC2 (currently using XCP).
Even counting the research and setup time, it still saves roughly $100k per 3 years.
The direct link to the blog post: https://blog.cloudvertical.com/2012/10/aws-cost-cheat-sheet-...
The direct link to the PDF with the data: http://s3.amazonaws.com/CloudVerticalBlog/CloudVertical-AWS-...
I only came into this discussion to post it -- glad to see somebody else already did.
Cloud Static Storage (cents/GB; storage per GB-month, bandwidth per GB transferred)

site          storage  bandwidth  pricing
dreamobjects  7        7          http://dreamhost.com/cloud/dreamobjects/pricing/
cloudfiles    10       18         http://www.rackspace.com/cloud/public/files/pricing/
amazon s3     12.5     12         http://aws.amazon.com/s3/pricing/
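Per-unit rates only tell part of the story, since the providers trade off storage price against bandwidth price differently. A quick sketch for comparing total monthly bills, using the rates quoted above (which may be out of date) and assuming the usual billing model of per GB-month storage plus per GB outbound transfer:

```python
# Monthly bill comparison from the cents/GB table above.
# Assumes storage is billed per GB-month and bandwidth per GB out.
PRICES = {  # site: (storage cents/GB-month, bandwidth cents/GB)
    "dreamobjects": (7, 7),
    "cloudfiles": (10, 18),
    "amazon s3": (12.5, 12),
}

def monthly_cost_dollars(site, storage_gb, bandwidth_gb):
    storage_c, bandwidth_c = PRICES[site]
    return (storage_gb * storage_c + bandwidth_gb * bandwidth_c) / 100.0

# Example: 500 GB stored, 200 GB transferred out per month
for site in PRICES:
    print(f"{site}: ${monthly_cost_dollars(site, 500, 200):.2f}")
```

With a bandwidth-heavy workload the ranking can flip: Cloud Files' cheaper storage is quickly eaten by its 18 c/GB transfer rate.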
I learned my lesson: there is cheap hosting and there is real hosting, just as there are "unlimited" data plans (that get capped) and real ones that you actually pay for. These days I always look for solutions that have a real business model behind them, not overbooked marketing gimmicks. Which means that Amazon and Rackspace are in, DreamHost is out.
S3 is specced for very high durability (Amazon advertises 99.999999999% designed durability).
I don't know how much of a value it is, but when looking at PaaS options (like OpenShift, Heroku, App Engine, etc.), I like AppFog's braindead-simple pricing: 2 GB free, 4 GB for $100/month, 16 GB for $380, and so on.
I went with DigitalOcean and, while I'm still getting the server set up, I've been pleased with it so far. The articles on the site that help with setup are really helpful (like Linode's articles, but more up-to-date), and the prices are both good and simple.
(I'm not affiliated with them in any way.)
Also, don't forget that one of the key benefits of using the cloud is elasticity, and unless you model this, you won't get accurate estimates. We developed the notion of elasticity patterns[1] to let users do this, so you can say something like "my baseline S3 storage is 100GB, but it grows by 5% every month and doubles at Christmas".
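A pattern like that is easy to sketch as a simple projection. This is hypothetical illustration code, not the actual elasticity-pattern tool; the 12.5 cents/GB-month S3 rate is the one quoted elsewhere in this thread:

```python
# Project S3 storage under an elasticity pattern: baseline storage,
# compounding monthly growth, plus a one-off doubling every December.
# Illustrative sketch only, not the tool described above.
S3_CENTS_PER_GB_MONTH = 12.5  # rate quoted elsewhere in this thread

def project_storage(baseline_gb, monthly_growth, months):
    """Yield (month_index, billed_gb): growth compounds each month,
    and usage doubles in December (months 12, 24, ...)."""
    gb = baseline_gb
    for m in range(1, months + 1):
        gb *= 1 + monthly_growth
        billed = gb * 2 if m % 12 == 0 else gb  # Christmas spike
        yield m, billed

total_cents = sum(gb * S3_CENTS_PER_GB_MONTH
                  for _, gb in project_storage(100, 0.05, 12))
print(f"Year-one S3 bill: ${total_cents / 100:.2f}")
```

A flat "100 GB x 12 months" estimate misses both the compounding growth and the seasonal spike, which is exactly the point of modeling the pattern.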
It had some pricing on S3, but I think it would be nice to also have prices for RDS. A medium RDS instance costs about as much as a medium EC2 instance (yes, I learned that the hard way).
You can do a setup based on application demand: reserved instances for baseline load, then a mix of spot instances and on-demand instances for load spikes over baseline.
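The blended hourly cost of such a mix can be estimated with back-of-the-envelope arithmetic. All rates below are made-up placeholders, not actual AWS prices:

```python
# Blended hourly cost for reserved baseline + spot/on-demand overflow.
# Rates are illustrative placeholders; substitute real AWS pricing.
RESERVED_HOURLY = 0.06   # effective rate incl. amortized upfront fee
ON_DEMAND_HOURLY = 0.16
SPOT_HOURLY = 0.05       # spot prices fluctuate; this is a stand-in

def hourly_cost(demand, baseline, spot_fraction=0.5):
    """Cost for `demand` instances, with `baseline` of them reserved;
    overflow is split between spot and on-demand capacity."""
    overflow = max(demand - baseline, 0)
    spot = overflow * spot_fraction
    on_demand = overflow - spot
    # Reserved instances are paid for whether they are used or not.
    return (baseline * RESERVED_HOURLY
            + spot * SPOT_HOURLY
            + on_demand * ON_DEMAND_HOURLY)

print(f"${hourly_cost(demand=10, baseline=4):.2f}/hour")
```

The `spot_fraction` knob captures the usual caveat: spot capacity is cheap but can be reclaimed, so real setups keep some on-demand headroom.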
We have dev, test, and stage instances that are almost always off. I'm not going to reserve those at all, because I won't run up enough in charges on them over the course of a year to even cover the upfront payment.
For our production instances, yes, obviously we reserve, but there are lots of cases where you wouldn't. Even with production instances, we occasionally spool up another instance to handle load spikes, and we don't boot a reserved instance for that.
In another of our apps, we actually spool up a new AWS instance to handle customer requests. The user clicks a button on a website and uploads a (large) file; we spool up an instance to process it for a while, return the results, then shut the instance down. That kind of usage isn't really suitable for reservation.