Caddy seems like a wonderful alternative that does load balancing and static file serving but has wild config file formats for people coming from Apache/Nginx-land.
This works for me because I already knew a fair bit about nginx configuration before picking up Caddy, but it really kills me to see how many projects don't even bother to explain the nginx config they provide.
An example of this is Mattermost, which requires WebSockets and a few other config tweaks when running behind a reverse proxy. How does Mattermost document this? With an example nginx config! Want to use a different reverse proxy? Well, I hope you know how to read nginx configuration because there's no English description of what the example configuration does.
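For reference, the WebSocket-relevant part of such an nginx config comes down to a couple of headers. A minimal sketch (the upstream address and port here are placeholders, not Mattermost's actual documented values):

```nginx
# Placeholder upstream app server
upstream backend {
    server 127.0.0.1:8065;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        # These lines are what actually enable WebSocket upgrades:
        # speak HTTP/1.1 to the backend and forward the client's
        # Upgrade/Connection handshake headers instead of stripping them.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Pass the original host and client address through
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

That one-sentence English description ("forward the Upgrade and Connection headers over HTTP/1.1 so the WebSocket handshake survives the proxy") is exactly the kind of thing these projects usually omit.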
Mastodon is another project that has committed this sin. I'm sure the list is never-ending.
This is so real. I call it "doc-lock" or documentation lock-in. I don't really know a good scalable way to solve this faster than the natural passage of time and growth of the Caddy project.
I think you are totally right here: gaining critical mass over time for a battle-tested solution. On the other hand, the authors of docs [who prefer Caddy] will likely stop providing sample nginx configs, and then someone else will complain about that on HN.
"Battle tested" can be seen differently, of course, but in my opinion, a comment like the following,
> IMO most users do require the newer versions because we made critical changes to how key things work and perform. I cannot in good faith recommend running anything but the latest release.
from https://news.ycombinator.com/item?id=36055554 , by someone working at Caddy, doesn't help. Maybe in their bubble (can I say your bubble, since you are from Caddy as well?) no one really cares about LTS stuff and everyone just uses "image: caddy:latest", with everything in containers managed by dev teams. That's just my projection on why it may be so.
This method is also useful for abusive clients that you still want to serve an error page to. Based on traffic patterns, drop them into a stick table and route those clients to your pre-compressed error page in a dedicated back-end. It keeps them at the edge of the network.
[1] https://docs.haproxy.org/dev/configuration.html#4.4-return
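A rough sketch of that pattern, with invented names and thresholds (the `http-request return` directive is the one documented at [1]; here it serves the pre-compressed error file directly from the edge):

```haproxy
frontend fe_web
    bind :443 ssl crt /etc/haproxy/certs/site.pem

    # Track per-client-IP request rate in a stick table
    stick-table type ip size 100k expire 10m store http_req_rate(10s)
    http-request track-sc0 src

    # Clients exceeding the threshold get a static error page,
    # answered at the edge without ever touching the app servers
    http-request return status 429 content-type "text/html" file /etc/haproxy/errors/ratelimit.html if { sc_http_req_rate(0) gt 100 }

    default_backend be_app
```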
Disclosure: I'm a community contributor to HAProxy.
> I'm a community contributor to HAProxy.
I think I recall chatting with you on here or via email; I can't remember which. I have mostly interacted with Willy in the past. He is also on here. Every interaction with HAProxy developers has been educational and thought-provoking, not to mention pleasant.
Stockholm syndrome
[0] https://www.nginx.com/resources/wiki/start/topics/depth/ifis...
I can see why you'd want an all-in-one solution sometimes, but I also think a single-purpose service has strengths all its own.
Open source nginx does all of these things (and more) wonderfully:
Reverse proxying web apps written in your language of choice
Load balancer
Rate limiting
TLS termination (serving SSL certificates)
Redirecting HTTP to HTTPS and other app-level redirects
Serving static files with cache headers
Managing a deny / allow list for IP addresses
Getting geolocation data[0], such as a visitor’s country code, and setting it in a header
Serving a maintenance page if my app back-end happens to be down on purpose
Handling gzip compression
Handling websocket connections
I wouldn't want to run and manage services and configs for ~10 different tools here, and nearly every app I deploy uses most of the above. nginx can do all of this with a few dozen lines of config, and it has an impeccable track record of being efficient and stable. You can also use something like OpenResty for Lua scripting support, so you can script custom solutions. If you don't want to use NGINX Plus, you can find semi-comparable open source Lua scripts and nginx modules for some of the individual Plus features.
[0]: Technically this is an open source module to provide this feature.
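To illustrate how compactly those pieces fit together in one place, here is a sketch covering several of the items above (all names, paths, addresses, and limits are invented for illustration):

```nginx
# Rate limiting: 10 requests/second per client IP
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    # Redirect HTTP to HTTPS
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    # TLS termination
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/site.pem;
    ssl_certificate_key /etc/nginx/certs/site.key;

    # gzip compression
    gzip on;

    # Deny / allow list for IP addresses
    allow 10.0.0.0/8;
    deny  192.0.2.0/24;

    # Static files with cache headers
    location /static/ {
        root    /var/www;
        expires 30d;
    }

    # Reverse proxy to the app, with rate limiting and WebSocket support
    location / {
        limit_req zone=perip burst=20;
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        # Maintenance page when the back-end is down
        error_page 502 503 /maintenance.html;
    }

    location = /maintenance.html {
        root /var/www;
    }
}
```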
And I think it's fine.