The use case in our Hackerspace was to dispatch different Docker containers through our wild-card subdomains. Traefik is supposed to also automatically create TLS certificates. I had numerous problems with the Let's Encrypt functionality.
Debugging information is quite cryptic, and the documentation seems all over the place to me, which is even more problematic given the number of breaking changes between the 1.x and 2.x versions. Because you configure everything automatically through Docker labels, a simple typo can cause your configuration to be silently ignored.
Also, plugging Traefik into complex docker-compose projects such as Sentry or GitLab is next to impossible because of networking: whatever I tried, Traefik just couldn't pick up containers and forward to them unless I changed the definition of every single container in the docker-compose file to include an extra network. I don't feel it should be this complex.
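To illustrate the two pain points above (the label-based configuration and the extra network), this is roughly what every service ends up needing; the service and network names here are made up for the sketch:

```yaml
services:
  whoami:
    image: traefik/whoami
    labels:
      - traefik.enable=true
      # one typo in a label key like this and the router is silently ignored:
      - traefik.http.routers.whoami.rule=Host(`whoami.example.org`)
    networks:
      - traefik-net   # has to be added to every single container

networks:
  traefik-net:
    external: true    # the network Traefik itself is attached to
```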
Sometimes I just feel that we should get back to using Nginx and write our rules manually. While the concept of Traefik is awesome, the way one uses it is extremely cumbersome.
We had significant issues with Traefik not allocating or renewing certs, resulting in some painful outages. The worst part was that there was no workaround; when adding a new domain to an ingress, it was completely incomprehensible why Traefik wasn't requesting a cert, or indeed why it wasn't renewing older ones that were close to expiration. We filed GitHub issues with concrete errors, but they were never addressed. At the time, I tried to debug Traefik to understand how it worked and maybe chase down some of those bugs. I don't like to speak ill of other people's code — let's just say that peeking under the covers made me realize perfectly why Traefik was so brittle and buggy.
We eventually ditched Traefik in favour of Google Load Balancer ingresses, combined with Cert-Manager for Let's Encrypt, and this combination worked flawlessly out of the box despite not being a 1.0 release at the time. The beauty of this setup is that the control plane (cert and ingress configuration) is kept separate from the data plane (web server), so the two can be maintained and upgraded/replaced separately.
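For context, the cert-manager half of that setup boils down to one cluster-wide issuer plus an annotation on each ingress; a minimal sketch against the current cert-manager API (names, email, and ingress class are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.org
    privateKeySecretRef:
      name: letsencrypt-account-key   # ACME account key stored as a Secret
    solvers:
      - http01:
          ingress:
            class: gce   # solver traffic goes through the GCE load balancer
---
# then each Ingress only needs:
#   metadata:
#     annotations:
#       cert-manager.io/cluster-issuer: letsencrypt
```

This is what keeps the control plane separate: cert-manager handles issuance and renewal, and the load balancer never has to know about ACME at all.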
I did this with Traefik, and consequently many of my blog posts about it are among my top-visited pages.
And to be fair, the Traefik team invests in developer success and advocacy. They even send you swag for contributions like popular posts.
I agree with the parent posts, though: the docs lack concrete examples to take the ambiguity out, and debugging via the logs is sometimes painful.
No problems with Docker (Compose) networks either, but I'm not using it with GitLab because I have enough IPs.
The biggest problem I see is the accumulation of certificates that will all be kept up to date, whether in use or not.
Recently it all came crashing down when an old domain of mine expired and I was no longer able to update its DNS in DigitalOcean. That one unused domain failing stopped Traefik from renewing all my certificates. But I'm also still on 1.7 and really should update to 2.x.
Woof, no thank you.
Go is basically incompatible with any kind of plugin-like dynamic linking. There are basically two reasonable models for doing something like plugins: the HashiCorp model, where plugins are actually separate processes that do some kind of inter-process communication with the host; or the Caddy model, where you select which plugins you want when downloading the binary, and they're built in at compile time.
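The Caddy-style model usually comes down to a registry that plugins add themselves to via `init()`, so "selecting plugins" just means selecting which files get compiled in. A minimal sketch of that pattern (the `Plugin` interface and `headerPlugin` are invented for illustration, not Caddy's or Traefik's actual API):

```go
package main

import "fmt"

// Plugin is the interface every compiled-in plugin implements.
type Plugin interface {
	Name() string
	Handle(req string) string
}

// registry holds whichever plugins were compiled into this binary.
var registry = map[string]Plugin{}

// Register is called by each plugin's init() function.
func Register(p Plugin) { registry[p.Name()] = p }

// headerPlugin is a hypothetical example plugin that prepends a header.
type headerPlugin struct{}

func (headerPlugin) Name() string { return "header" }

func (headerPlugin) Handle(req string) string { return "X-Demo: 1\n" + req }

// Compiling this file in is what "installs" the plugin.
func init() { Register(headerPlugin{}) }

func main() {
	for name, p := range registry {
		fmt.Println(name, "handled:", p.Handle("GET /"))
	}
}
```

Dropping a plugin means dropping its file from the build, which is why the download page (rather than a runtime loader) becomes the plugin picker.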
Plugins and scripting languages flourish when they democratize the process of adding features to a piece of code. To have prebuilt binaries you need a build matrix, and the complexity of the build matrix is somewhere between exponential and factorial.
This is a perverse incentive for the curators. The cost has to be justified, and as the friction grows you can only justify the things that you have a strong affinity for. Anything you don't understand or don't like gets voted off the island.
In the best addon ecosystems, the core maintainers put some safety rails on the system so the addons can't do anything too crazy. Then they watch the cream of the crop and start trying to include them in the base functionality (limiting the number of optional features the majority of their users have to manually pick). The hard part here is how to reward the people whose ideas you just co-opted, and I don't have a great answer for that (although money and/or a free license for life is a good start).
Well, it's a cost/benefit judgment call, not a single valuation. And I think for situations like this, if you have to pick a side, it's generally better to pick the exclusionary walled garden over the bazaar -- I think the value of democratization is usually overstated, and the drawbacks underemphasized.
[0] https://github.com/traefik/traefik/issues/4174
[1] https://doc.traefik.io/traefik/providers/docker/#docker-api-...
You don't have to deploy traefik with docker. If you want traefik to monitor new docker containers to add routes for them, of course traefik needs to talk to the docker api to do so.
The docker api has no way to control access such that it's not equivalent to root access.
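A quick illustration of why socket access is root-equivalent: anyone who can talk to the Docker API can start a container that bind-mounts the host's root filesystem. These commands are a sketch (they need a running Docker daemon, and obviously shouldn't be run on a box you care about):

```shell
# Read a root-only file on the host via a bind mount:
docker run --rm -v /:/host alpine cat /host/etc/shadow

# ...or simply chroot into the host as root:
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
```

There is no permission model on the socket that forbids this while still allowing "normal" container management, which is the whole point being made here.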
However, there's no real vulnerability. I'm happy to provide you a url hosted by traefik with the docker integration enabled, no docker socket proxy, etc, and if you can manage to actually escalate permissions, I'll give you 500 bucks. But, of course, you can't. That security issue is just a "defense in depth" issue, and it's an issue for docker, not traefik.
This would be like saying "traefik uses the linux kernel api to open files, but the linux kernel requires that traefik validate what goes into that api, or else it could allow file path traversal"... But traefik does validate file paths, and so no one makes that complaint.
Similarly, traefik does validate that only safe docker api calls are made and works hard to prevent any sort of remote code execution, so the issue is not a security issue, but a defense in depth proposal that is really a feature request for the docker project.
Sure but in my case that was the whole idea.
> If you want traefik to monitor new docker containers to add routes for them, of course traefik needs to talk to the docker api to do so.
Yes, but it wouldn't be necessary for the network-facing part of Traefik to talk to the Docker API. There could be a second container (w/o network access) whose only task it is to talk to the Docker socket and generate a config and write that config to a shared volume.
> However, there's no real vulnerability.
In the present situation Traefik (with Docker integration) is effectively running as root. I don't think it's up for debate that this is much worse than just running Traefik as a normal user (outside Docker). Besides, most users expect applications running in Docker containers to be more secure – not less secure – than running them on the bare system.
> This would be like saying "traefik uses the linux kernel api to open files, but the linux kernel requires traefik validate what goes into that api or else it could allow file path traversal"... But traefik does validate filepaths and so no one makes that complaint.
No. This would be like saying "Traefik has full access to the kernel and the entire OS and the only thing preventing a hacker from exploiting this is Traefik validating incoming network requests."
Do you also run your other web servers as root?
> Similarly, traefik does validate that only safe docker api calls are made
This is completely irrelevant. Once a hacker is inside the Traefik process (i.e. can execute code under Traefik's PID), they can access the Docker socket and therefore the entire system as they please.
I thought this was a mistake many years ago, too. The fact that the Docker daemon runs with root privileges is also something they should have solved a long time ago. Docker is pretty pathetic when it comes to security.
https://github.com/Tecnativa/docker-socket-proxy
https://github.com/traefik/traefik/issues/4174#issuecomment-
Create a private network that only connects Traefik and the proxy, and limit Traefik's access to only the GET requests it needs to operate. Now the socket is only exposed to a local container.
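A minimal docker-compose sketch of that setup (image tags and network names are illustrative, not from the thread):

```yaml
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1   # allow read access to /containers/* (what Traefik needs)
      # everything else (exec, volumes, POST requests, ...) is denied by default
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks: [socket]

  traefik:
    image: traefik:v2.4
    command:
      - --providers.docker=true
      - --providers.docker.endpoint=tcp://socket-proxy:2375
    ports: ["80:80", "443:443"]
    networks: [socket, default]

networks:
  socket:
    internal: true   # the proxy is reachable only from this compose project
```

Compromising Traefik then only yields filtered, read-only GET access to the API instead of the raw socket.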
The only solution I've seen/used that wasn't convoluted or brittle is running a little daemon to just shovel container metadata into Consul and going from there.
I do think it's simple to manage: as I already mentioned elsewhere, it wouldn't be necessary for the network-facing part of Traefik to talk to the Docker API. There could be a second Traefik container (w/o network access) running a binary called, say, traefik-config-generator, whose only task is to talk to the Docker socket, generate a config, and write that config to a shared volume.
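That split could look roughly like this; the config-generator image is hypothetical, while the file-provider flags on the Traefik side are real:

```yaml
services:
  config-generator:
    image: traefik-config-generator   # hypothetical: watches the Docker socket
    network_mode: none                # no network access at all
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - traefik-config:/out           # writes the generated dynamic config here

  traefik:
    image: traefik:v2.4
    command:
      - --providers.file.directory=/config
      - --providers.file.watch=true   # the file provider reloads on change
    ports: ["80:80", "443:443"]
    volumes:
      - traefik-config:/config:ro     # reads config; never touches the socket

volumes:
  traefik-config:
```

The network-facing process then holds no socket at all; a compromise of Traefik yields only a read-only view of its own config.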
EDIT: Oh, I just realized you're the founder of Traefik! Thank you so much for your work! I would really appreciate your opinion on my suggestion – even if you think it's complete BS. :)
EDIT: Updated for accuracy
Here's their original announcement post https://traefik.io/blog/announcing-yaegi-263a1e2d070a
It's an identity aware proxy that uses google as an identity provider (but more could be added).
I built it mainly out of frustration on how complicated the 2.x release of traefik has become.
I eventually moved to caddy (https://caddyserver.com/) and it is fantastic. Works seamlessly and I got all my obvious and not so obvious questions answered.
Automatic pulling of container data isn't built in, but there is a port for that (https://github.com/lucaslorentz/caddy-docker-proxy) with a great meta-language.
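That port works the same way as Traefik's labels, but generates a Caddyfile from them; a rough sketch using caddy-docker-proxy's label syntax (the domain and service name are made up):

```yaml
services:
  whoami:
    image: traefik/whoami
    labels:
      caddy: whoami.example.org                 # site address in the Caddyfile
      caddy.reverse_proxy: "{{upstreams 80}}"   # expands to the container's IP:80
```

Caddy then handles the TLS certificate for that hostname automatically, as it does for any site.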
There are a few improvements to be made to the logging, but overall it is really worth checking out.
    $ go build ./...
    go: finding module for package github.com/traefik/traefik/v2/autogen/genstatic
    cmd/traefik/traefik.go:18:2: module github.com/traefik/traefik@latest found (v1.7.26), but does not contain package github.com/traefik/traefik/v2/autogen/genstatic
Fantastic, looking forward to playing with this.
I don't really have any interest in plugins personally, but this is still quite an amazing project.