But popular services, on default ports, with default APIs enabled, without hard authentication, on a WAN interface? That should be a paddling. That doesn't fly. Or, well, it does, except not for the guy paying the power bill.
I'm not familiar with enough distributions to know whether any popular distribution totally disables authentication by default, but in my company's distribution, in kubeadm clusters, and I suspect in all managed clusters (GKE/EKS/AKS/etc.), the vector outlined in the article would only work if an admin specifically disabled authentication.
In Gravity (my company's distribution), we even disable anonymous-auth, so someone would have to do real work to expose API access to the internet.
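For reference, the relevant hardening settings look roughly like this. This is a sketch, not a drop-in config: the flag names are from upstream Kubernetes, but defaults and file paths vary by distribution and version, so check your distro's docs before applying.

```shell
# Hedged sketch of kube-apiserver hardening flags.
# Flag names are upstream Kubernetes; paths are typical kubeadm
# defaults and may differ in your distribution.
kube-apiserver \
  --anonymous-auth=false \
  --authorization-mode=Node,RBAC \
  --client-ca-file=/etc/kubernetes/pki/ca.crt
```

With `--anonymous-auth=false`, unauthenticated requests are rejected outright instead of being mapped to the `system:anonymous` user, which is the behavior the parent comment is describing.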
That said it's not that long ago that a lot of distros were shipping unauthenticated kubelets, and I think that's where a lot of this will come from.
From cluster reviews I've done, problems like this tend to arise where people are using older versions (so early adopters) or have hand-rolled their clusters, not realising all the areas that require hardening.
And that's where I'll turn around 180 degrees and say: If you can't give me a hard reason why you'll be a hard target on the internet, you shouldn't have a public address. Default authentication isn't enough.
I dislike trusting my edge firewall, but it gives me time to handle weak internal systems.
They may consider that fine for security (the equivalent of having an insecure MySQL install on a machine with no tables of value in it), but might perhaps forget that even an empty Kubernetes install still lets attackers dictate what your CPU is doing.
Secure defaults are irrelevant if you pay attention to the news.
When this engineer redid things, they opted to go the public-internet route, where the master runs a public API and auth is done via a certificate. The logic was that external third-party stuff (CI) needed to control our master.
To my knowledge this setup is still running and chances are these machines are vulnerable to this issue.
Contrast that with the prior setup, where your VPN access was automatically terminated the moment you were offboarded from the company (thank you, LDAP and Foxpass!).
With software like Google IAP and many similar products available, it just seems silly.
Google has moved its internal stuff to the beyondcorp model, and it honestly seems like a better approach if you really care about security and have a big enough security team to make it work.
I generally think it's no more risky to expose a Go app with cert-based auth than it is to expose OpenVPN so long as both are set up correctly.
Many Kubernetes distributions enable anonymous authentication to allow for health checking, so there is some risk there.
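If you want to check whether your own API server answers anonymously, a quick probe looks something like the following. The hostname is a placeholder (6443 is the common API server port); run this only against clusters you own.

```shell
# Probe an API server for anonymous access. MASTER is a placeholder
# for your own cluster's address.
MASTER=https://k8s.example.com:6443

# /version often answers anonymously, since some distros allow it
# for health checking; that alone isn't necessarily a problem.
curl -ks "$MASTER/version"

# If this returns a resource listing rather than a 401/403 Status
# object, anonymous requests are being granted real API access.
curl -ks "$MASTER/api/v1"
```

The distinction matters: anonymous access to `/healthz` or `/version` is a common, fairly benign default, while anonymous access to the core API groups is the kind of exposure the article is about.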
As to the general point, the only thing I'd say is that Kubernetes is a massive 1.5-million-line codebase of relatively new code, whereas OpenVPN has been around and been attacked for a long time. I wouldn't be surprised if the recent CVE turns out not to be the only issue we see in k8s over the next year.
Complexity: single-purpose apps built with a very specific threat model in mind for a boring, established use case tend to be more secure. K8s is a fast-evolving labyrinth of complexity with contributions from thousands of people, very few of whom have a grasp on the whole codebase.
Publicity: the general internet doesn't stumble onto your VPN server the way it does onto a public API.
Why not sidestep the issue by running CI within the VPC? :/
edit: to clarify, a VPN/VPC requirement would turn CVE-2018-1002105 from a pre-auth into a post-auth vulnerability, right? Which might be a big or small help depending on how controlled your user pool and signup process is.
Script kiddies are just annoying and their actions resulted in the patch killing that silent mining botnet as well.
Far too many people are adopting Docker/Kubernetes as they have been the hot new product for the last couple of years, often regardless of whether they are actually the best or most appropriate tool for the job.
A lot of the people who get sucked into the hype are inexperienced programmers, devops, or admin types who are in positions of power or influence that they probably shouldn't hold, IMHO.
As a result, they don't have the Linux or networking experience to be able to know when they are deploying these complex products securely or not, and they are putting their employers businesses at risk.
You could say the exact same thing about Linux, Cisco, Dell, or pretty much any of the popular FOSS projects. Popular things, regardless of their complexity, get chosen by people of all experience levels. Inexperienced people are less likely to properly configure something, regardless of its popularity or hype.
If anything, having a few attractive projects tends to be beneficial (or at least neutral) for security as there are so many more people scrutinizing it, and many more people learning how to properly use it.
I cannot agree more. Many times, I feel you could easily get away with Ansible and Terraform to set up VMs / Docker; you don't quite need k8s. Just because k8s is cool, people feel the need to use it.
For example:
- You want to spin up ephemeral environments to test PRs end-to-end? Sure: create a namespace, deploy your charts, and run your tests. You could do that with Ansible too, but it's harder.
- Your org is running apps via a multi-cloud and on-prem strategy? Okay, let's write lots of tooling per cloud and more for on-prem, or we could abstract that away via Kubernetes and only worry about tooling for kube itself.
- You want rolling upgrades? Sure, go build them with Ansible, or you could just use kube.
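The first bullet really is that short in practice. A sketch, with hypothetical names (`pr-1234`, `./charts/app`) and Helm 3 syntax:

```shell
# Hedged sketch: an ephemeral per-PR test environment on Kubernetes.
# The namespace name and chart path are hypothetical placeholders;
# adapt them to your repo layout and CI variables.
NS=pr-1234

# A namespace gives the whole environment one lifecycle.
kubectl create namespace "$NS"

# Deploy the app's charts into the throwaway namespace.
helm install "app-$NS" ./charts/app --namespace "$NS" --wait

# Run an end-to-end smoke test from inside the cluster.
kubectl -n "$NS" run smoke --image=curlimages/curl --restart=Never \
  --command -- curl -fsS http://app/healthz

# Deleting the namespace tears down everything in it.
kubectl delete namespace "$NS"
```

The equivalent in Ansible means writing your own create/deploy/verify/teardown plumbing per environment, which is the "sure you can, but it's harder" part.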
Further to that, Kubernetes guides you toward reasonable abstractions, separating infrastructure from code. Sure, it comes with complexity, but so do most things once you start throwing in scaling and auto-recovery.
For example, deploy Terraform from your laptop? The device you probably browse porn on becomes an attack surface. Move it to Jenkins? The CI is the attack surface. Put your code on Bitbucket? Bitbucket and the Jenkinsfile become the attack surface. Pretty much everything we do has complexity and attack-surface _problems_, and using a managed k8s service gives you some easy wins so you can actually think about those other problems; those solutions will then work on any platform you can run k8s on.
So k8s itself should be treated as more sensitive than other infrastructure pieces. And I think the OP meant to say that, not that k8s is particularly bad at security in general, or that its maintainers are less experienced in security.
The down vote is not warranted.
It's not hype if it solves a lot of organizations pain points.
VMs also have zero-days that have been exploited for cryptomining.
https://threatpost.com/malicious-docker-containers-earn-cryp...
From the article itself: although they mention the CVE at the top, the real point they're making is that people are deploying these products with poor defaults:
"as is typical with our findings, lots of companies are exposing their Kubernetes API with no authentication; inside the Kubernetes cluster"
Not to mention a bunch of NoSQL-type DBs you can easily search on Shodan if you want to have some fun.
So yes - the problem here is experience, or lack thereof, and not Kubernetes itself. The CVE can be patched. You can't patch inexperience - except with experience I suppose.
All I am saying is that there a lot of people who are downloading and deploying these products because of hype, who are unable or unwilling to secure them.
It's not just a Kubernetes problem. As many have posted, plenty of databases, other types of clusters, and shares are accessible without auth to those who know how to look for them (not that hard nowadays), mainly malicious actors.
Nice, Monero mining
Cryptocurrencies make the bug bounty market A LOT more efficient than companies, legislation, or HackerOne ever could.
Yes - if it has a CPU and access to the public internet, someone will hack it and make it mine "crypto". Let's stop pretending we aren't aware that the internet of things exists and writing breathless stories every time a toaster, router, or adult toy starts churning out Monero.