While trying to migrate away from S3 to GCS, synchronizing data (using gsutil) has proven practically impossible. The API is incredibly slow to list objects and occasionally responds with nonsensical errors.
(Every once in a while a random "403 None" appears, causing gsutil to abort. We could probably work around that by modifying gsutil to treat 403 as retryable, but since overall performance is so awful and we can regenerate most of the data, we decided to give up.)
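(For anyone stuck in the same spot: rather than patching gsutil, a crude outer retry loop got us a bit further before we gave up. A sketch, not gsutil-specific; the bucket names in the usage line are placeholders.)

```shell
#!/bin/sh
# retry: run a command up to N times with a delay between attempts,
# retrying on any non-zero exit (which is how gsutil surfaces the
# spurious 403s).
retry() {
  attempts=$1; delay=$2; shift 2
  i=1
  while true; do
    "$@" && return 0
    if [ "$i" -ge "$attempts" ]; then
      echo "retry: giving up after $i attempts: $*" >&2
      return 1
    fi
    echo "retry: attempt $i failed, sleeping ${delay}s" >&2
    sleep "$delay"
    i=$((i + 1))
  done
}

# Hypothetical usage: resume an interrupted sync instead of aborting.
# retry 5 30 gsutil -m rsync -r s3://my-bucket gs://my-bucket
```

Because gsutil rsync is resumable, each retry only has to cover the objects the previous attempt didn't finish.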
https://docs.aws.amazon.com/AmazonS3/latest/dev/Introduction...
TL;DR: droplets are good; avoid Spaces like the plague.
Right now I spin up GCE-based clusters: the head node and version migration are free, and I only pay for 2 adequately sized non-preemptible nodes. The rest scale up and get preempted fairly cheaply as needed.
You'll only need to pay for your worker nodes and we'll handle upgrades for you.
If my k8s goes down, it needs to be because the entire datacenter is down, not a single machine or a rack.
EDIT: https://www.digitalocean.com/products/kubernetes/ "Sign up for early access and receive a free Kubernetes cluster through September 2018."
# Put your desired Kubernetes version in --cluster-version.
gcloud container clusters create flk1 \
--cluster-version 1.XX \
--machine-type g1-small \
--disk-size 20 \
--preemptible \
--enable-autoupgrade \
--num-nodes 1 \
--network flk1 \
--scopes storage-rw,compute-rw,monitoring,logging-write
gcloud config set container/cluster flk1
gcloud container clusters get-credentials flk1
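(At this point, assuming you have kubectl installed locally, a quick sanity check that the fetched credentials actually work:)

```shell
# Should list the single g1-small node created above, with its
# internal/external IPs and kubelet version.
kubectl get nodes -o wide
```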
gcloud container node-pools create small-pool-p \
--cluster=flk1 \
--machine-type n1-standard-1 \
--disk-size 20 \
--preemptible \
--enable-autoupgrade \
--num-nodes 1
gcloud container node-pools create small-pool \
--cluster=flk1 \
--machine-type n1-standard-1 \
--disk-size 20 \
--enable-autoupgrade \
--num-nodes 2
I remove the original small node once the pools are created, and then add other pools for higher-usage nodes. The smallest cluster I have is about $75/month.
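For reference, removing that original node means deleting the cluster's initial pool. Assuming nothing was renamed, GKE calls it `default-pool`:

```shell
# Delete the auto-created initial pool once the replacement
# pools exist; --quiet skips the interactive confirmation.
gcloud container node-pools delete default-pool \
--cluster=flk1 \
--quiet
```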
Preemptible nodes seem to recover quite well, and I always keep at least one replica of each pod/deployment on the non-preemptible pool, just in case.
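One way to keep a replica off the preemptible pool is a node affinity on GKE's `cloud.google.com/gke-preemptible` label (preemptible nodes carry it; on-demand nodes don't). A sketch; the deployment name and image are placeholders:

```shell
# Require scheduling onto nodes WITHOUT the preemptible label,
# i.e. the on-demand pool.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: critical-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: critical-app
  template:
    metadata:
      labels:
        app: critical-app
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: cloud.google.com/gke-preemptible
                operator: DoesNotExist
      containers:
      - name: app
        image: nginx  # placeholder image
EOF
```

A second deployment without the affinity can then ride the cheap preemptible pool.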
I'd like to see it as tightly integrated into e.g. GitLab like GKE is.
> tightly integrated into e.g. GitLab
Seconded. Currently we set project integration up manually, but since we're still ramping up, that's not a cause for concern yet.
How do people like Kubernetes as a production ready solution to deploy containers? I've been using Docker for a while now, just starting to mess with k8s.
Same as DO?
"DigitalOcean Kubernetes will be available through an early access program starting in June with general availability planned for later this year."
I'm not sure which will be first.
> * AWS Fargate support for Amazon EKS will be available in 2018.
We are in early beta, but if you are interested, please sign up and we will activate your account ASAP.
If you want to talk, we are at KubeCon Europe, contact @geku on Twitter.
Until then, I'd say Linode is your better bet :).
EDIT: A little more information, I had two VMs go offline abruptly around 1am one night. It took 3 hours for Digital Ocean to even acknowledge a problem existed (I had opened a ticket), and that was only after I started poking their twitter account. It was at least 12 hours before they brought it back online, it was never acknowledged in any mass ticket. If you are unlucky enough, you can have the same thing unfortunately happen to you. This is my second experience of having such an outage at Digital Ocean and is, as a result, the reason I still only use DO as a testbed and nothing more.
EDIT2: Another pretty bad example of Digital Ocean: https://status.digitalocean.com/incidents/8sk3mbgp6jgl.
I really hope they step up their game, but I won't be moving away anytime soon.
[1] The only major issue I had was a massive DDOS that took down their Atlanta data center around Christmas, couple of years ago.
Linode has been solid for me, with only one bad hardware crash in the last 10+ years, but DO has more features, a good UI, larger disks, etc. I am trying it again.
You like BSD? Let the world have Linux.
You like Hg? Let the world have Git.
You like DC/OS? Let the world have Kubernetes.
But they all lost.
The big payoff (for small clusters like mine, anyway) is that masters won't be charged for, just like the other managed Kubernetes offerings from Azure and Google. I don't know enough about StackPoint to compare it to a service I haven't even seen in beta yet, but I can tell you that much.
I know that StackPoint is supposed to be "like a managed" experience. Maybe one of the DigitalOcean guys who has been responding in this thread can speak to the technical details of the new offering.
No, you pay a monthly subscription (starting at $50/month). The service allows you to create and update clusters easily. I'm not sure, but I think you can create as many clusters as you want with a $50 subscription (at least I never hit a limit). The procedure to create a new cluster looks something like this, if you use the web interface:
* click "add cluster"
* select cloud provider (DO, AWS, GKE, etc.)
* configure master nodes, e.g. 2 master nodes @ 2 GB RAM, running in region NYC1.
* configure worker nodes. (same procedure as with master nodes)
* submit
If you choose DO, you get a cluster that works with DO load-balancers, DO block-storage, etc out of the box.
If a new version of Kubernetes is released, you can hit the "update cluster" button.
They have an API for all the stuff too.
I chose StackPoint in combination with DO because it felt the least bloated and had the least lock-in.
Now that DO is introducing its own Kubernetes service, I can imagine that I won't need the StackPoint subscription any more.
I just signed up for early DO access - can’t wait!