Edit: GitLab CI runners run on Kubernetes as well, using the dind images. Ingress nodes will soon be given public IPs; the public IPs are currently on CARP failover. After the gitlab-ci-multi-runner 1.1.1 release (allowing shared artifacts) and Kubernetes Deployment resources (providing a way easier deployment workflow and orchestration of pods), CI/CD is a breeze. We have dedicated nodes for MySQL (PXC) and ZooKeeper because these don't play well in the Kubernetes network environment - don't ask me to look at the examples ;) Currently running with Flannel for the overlay, but we're evaluating Calico and waiting on new Docker features before pulling the trigger on something else... Multicast, isolated namespaces, and VLANs would be awesome :)
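For anyone unfamiliar with why Deployments make the workflow easier: you declare the desired pod state and Kubernetes handles the rollout. A minimal sketch (the name and image here are made-up placeholders, and Deployments were still beta in this era, hence the `extensions/v1beta1` API group):

```yaml
# Hypothetical Deployment manifest - apply with `kubectl apply -f deployment.yaml`.
# Changing the image tag and re-applying triggers a rolling update of the pods.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3              # Kubernetes keeps 3 pods running at all times
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: registry.example.com/web-app:1.0.0
        ports:
        - containerPort: 8080
```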
Edit2: I don't know why I keep saying "we" ... I've built and run this thing solo on top of programming... Not enough hours in a day...
AWS with Kubernetes 1.2.1 and Calico as the overlay network. We have all our web apps in Kubernetes and are working on our background job apps next.
Disclosure: I work at Google on Kubernetes.
Internal adoption seems to be going well so hopefully this grows.
http://kubernetes.io/docs/user-guide/ingress/
https://github.com/kubernetes/contrib/tree/master/ingress/co...
The ingress controller runs on nodes with a public IP. When you create an Ingress for one of your services, one of the controllers picks it up and exposes it. The ingress controller linked above is built on nginx and essentially just reconfigures nginx based on your Ingress resource spec. Similar to defining a LoadBalancer service on GCE, the service's external IP is set to the upstream ingress controller node's public IP.
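As a rough sketch of what that controller watches for, an Ingress resource looks something like this (the host and service names are invented for illustration):

```yaml
# Hypothetical Ingress resource - the nginx controller sees this and
# writes an nginx server block routing app.example.com to the service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-app
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-app   # an existing Service in the same namespace
          servicePort: 80
```

The controller reconciles nginx config from these specs continuously, so adding a rule is just another `kubectl apply`.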