(and NixOS automatically runs gixy on a configuration generated through it, so the system refuses to build <3)
nginx was once amazing, but it’s decidedly bad now when compared to modern webservers.
But do you know if there's a nicer options finder? The one I found, where you just search all several thousand options, kinda sucks. I want to just see my package (say, ssh) and just the ssh options, but the results get littered with irrelevant entries.
https://search.nixos.org/options?channel=unstable&from=0&siz...
For anything I'm not 100% sure will be obvious, I search through a local clone of the nixpkgs repo directly. But I'll be honest and say I just never took the time to search for a better tool.
> I want to just see my package (say, ssh) and just the ssh options
Not sure if this helps you at all or not, it really depends on your usage of Nix, but for managing user configuration I do recommend Home Manager.
Edit: Actually, I’m a bit lost as to what’s happening in the original vuln. http://localhost/foo../secretfile.txt gets interpreted as /var/www/foo/../secretfile.txt or whatever… but why wouldn’t a server without the vulnerability interpret http://localhost/foo/../secretfile.txt the same way? Why does “..” in paths only work sometimes?
https://book.hacktricks.xyz/network-services-pentesting/pent...
/imgs../flag.txt
Transforms to: /path/images/../flag.txt
I've only implemented a handful of HTTP servers for fun, but I've always resolved relative paths and constrained them. So I'd turn "/path/images/../flag.txt" into "/path/flag.txt", which would not start with the root "/path/images/" and hence would be denied without further checks. Am I wrong? Or why doesn't nginx do this?
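The resolve-and-constrain approach described above can be sketched in Python (a minimal illustration; `safe_join` and the root path are made up for the example, this is not nginx's actual code):

```python
import posixpath

def safe_join(root, request_path):
    # Join the request onto the document root, collapse "." and ".."
    # segments as pure string manipulation, then verify the result
    # still lives under the root.
    resolved = posixpath.normpath(posixpath.join(root, request_path.lstrip("/")))
    if resolved != root and not resolved.startswith(root.rstrip("/") + "/"):
        return None  # escaped the root: deny without further checks
    return resolved

# "/path/images" + "../flag.txt" collapses to "/path/flag.txt", which
# does not start with "/path/images/" and is therefore rejected.
```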
In this case part of the URL is being interpreted by nginx as a directory (http://localhost/foo) due to how that URL is mapped in the configuration to the local filesystem. Apparently it references a directory, so when nginx constructs the full path to the requested resource, it ends up with "${mapped_path}/../secretfile.txt" which would be valid on the local filesystem even if it doesn't make sense in the URL. Notice how the location of the slashes doesn't matter because URLs don't actually have path elements (even if we pretend they do), they are just strings.
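A minimal reproduction of that misconfiguration (the paths here are illustrative): when the location prefix has no trailing slash, nginx will happily concatenate whatever follows the prefix onto the alias target.

```nginx
# Vulnerable: the prefix has no trailing slash, so a request for
# /imgs../flag.txt still matches this location and maps to
# /path/images/../flag.txt on the filesystem.
location /imgs {
    alias /path/images/;
}
```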
This is a very common problem that I have noticed with web servers in general since the web took off. Mapping URLs directly to file paths was popular because it started with simple file servers with indexes. That rapidly turned into a mixed environment where URLs became application identifiers instead of paths since apps can be targeted by part of the path and the rest is considered parameters or arguments.
And no, it generally doesn't make sense to honor '.' or '..' in URLs for filesystem objects, and my apps sanitize the requested path to ensure a correct mapping. It's also good to be aware that browsers do treat URLs as path-like when building relative links, so you have to be careful with how and when you use trailing '/'s, because they can target different resources which have different semantics on the server side.
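That browser-side behavior is easy to demonstrate with Python's `urljoin`, which follows the same relative-resolution rules (the URLs are made up for the example):

```python
from urllib.parse import urljoin

# Relative links resolve against the "directory" portion of the base
# URL, so a trailing slash changes which resource gets targeted:
print(urljoin("http://example.com/app", "style.css"))   # http://example.com/style.css
print(urljoin("http://example.com/app/", "style.css"))  # http://example.com/app/style.css
```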
That fully depends upon the file permissions. In this case, let's assume that a user has permissions to read files all the way from the web index directory (../index.html) back to the root directory (/). At that point, since they have permission to traverse down to the root directory, they now have permission to view any world viewable file that can be traversed to from the root directory, for instance /etc/passwd.
In other words, imagine a fork with three prongs, and your web server resides on the far right prong. Imagine that the part of the fork where the prongs meet (the "palm" of the fork) is the file system. If your web server residing on the far right prong of that fork allows file permission to files and directories that lead all the way to the palm of the fork, at that point you could continue accessing files on other prongs once you have reached the palm.
In any case, it seems that nginx does try to search for .. but has a bug in the corner case where the “location” doesn’t end with a slash. I assume there’s some kind of URL normalization pass that happens before the routing pass, and if the route matches part of a path component, nothing catches the ..
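If that's the case, the standard mitigation is to make the location prefix end with a slash, so it can only match whole path components (a sketch with made-up paths):

```nginx
# Safe: with the trailing slash, /imgs../flag.txt no longer matches
# this location, and /imgs/../flag.txt is caught by the URL
# normalization pass before routing.
location /imgs/ {
    alias /path/images/;
}
```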
If I’m right, this is just a rather embarrassing bug (IMO) and should be fixed.
More likely it is an omission, which could be rectified with a warning or failure when running nginx -t (which verifies the configuration).
The actual performance comes from an architectural choice between event vs process based servers, as detailed in the C10k problem article. [1]
More like because it was much faster out of the box and came with many batteries included, while Apache2 required mods to be separately installed.
Web apps have seen various bypasses involving somehow smuggling two dots in somewhere since we were on dial-up modems. It's time to look for a way to close this off once and for all, as the Linux kernel has done with several other classes of userland bugs.
(FreeBSD has this in ordinary openat(2) as O_RESOLVE_BENEATH.)
You could just run nginx as a separate user with very limited rights, or just run it in Docker. This, plus updating regularly, usually fixes 90% of security issues.
But that won't help if you alias to "/foo/bar/www" and the application has a SQLite database at "/foo/bar/db.db" which the nginx user has to have access to. Same if you run it in a container (or lock down permissions using systemd).
/some/../path

should pretty much 100% of the time be disallowed; there is no sensible use case that is not "someone wrote ugly code".

../some/path

makes sense sometimes, at least.
... but I'd imagine it wouldn't be as useful as you think it is, because many apps resolve .. before passing it to the OS.
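For instance, Python's posixpath module collapses dot segments as pure string manipulation, before any filesystem call ever happens (a quick illustration):

```python
import posixpath

# ".." segments inside the path are resolved away entirely in
# userspace; a leading ".." is preserved because there is nothing
# to collapse it against:
print(posixpath.normpath("/some/../path"))  # /path
print(posixpath.normpath("../some/path"))   # ../some/path
```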
If you are serving files to the web from a folder, the web framework should handle not traversing outside the public root folder it was tasked to serve. If you are rolling your own, well, now you have to consider all kinds of stuff, including this.
Exposing email and private keys of GCP accounts only gives you $500 reward? WTF. Google being Google I guess.
Hunting for Nginx Alias Traversals in the wild
and the HN submission highlights the Bitwarden vulnerability, while there is a Google one discussed as well.
Cross-platform, written in Rust, straightforward configuration, secure defaults, also has a hardened container image and a hardened NixOS module.
I wouldn't recommend Caddy. Their official docker image runs as root by default [1], and they don't provide a properly sandboxed systemd unit file [2].
[1]: https://github.com/caddyserver/caddy-docker/issues/104
[2]: https://github.com/caddyserver/dist/blob/master/init/caddy.s...
EDITED: phrasing
[Unit]
Description=Caddy webserver
Documentation=https://caddyserver.com/docs/
After=network-online.target
Wants=network-online.target systemd-networkd-wait-online.service
StartLimitIntervalSec=14400
StartLimitBurst=10
[Service]
User=caddy
Group=caddy
# environment: store secrets here such as API tokens
EnvironmentFile=-/var/lib/caddy/envfile
# data directory: uses $XDG_DATA_HOME/caddy
# TLS certificates and other assets are stored here
Environment=XDG_DATA_HOME=/var/lib
# config directory: uses $XDG_CONFIG_HOME/caddy
Environment=XDG_CONFIG_HOME=/etc
ExecStart=/usr/bin/caddy run --config /etc/caddy/Caddyfile
ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile
# Do not allow the process to be restarted in a tight loop.
Restart=on-abnormal
# Use graceful shutdown with a reasonable timeout
KillMode=mixed
KillSignal=SIGQUIT
TimeoutStopSec=5s
# Sufficient resource limits
LimitNOFILE=1048576
LimitNPROC=512
# Grants binding to port 443...
AmbientCapabilities=CAP_NET_BIND_SERVICE
# ...and limits potentially inherited capabilities to this
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
# Hardening options
LockPersonality=true
NoNewPrivileges=true
PrivateTmp=true
PrivateDevices=true
ProtectControlGroups=true
ProtectHome=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectSystem=strict
ReadWritePaths=/var/lib/caddy
ReadWritePaths=/etc/caddy/autosave.json
ReadOnlyPaths=/etc/caddy
ReadOnlyPaths=/var/lib/caddy/envfile
[Install]
WantedBy=multi-user.target

Treat any web request like you would a real user on a Linux system you'd need to give access to download files via scp. Chroot, strict permissions, etc. Can't escape what you can't escape. A ../ should return the same as expected in the shell: permission denied.
I've run a couple of websites (WordPress or Hugo based, including my personal blog) like that and it's great.
I personally prefer something like GitHub Pages, though - it doesn't get much more hands-off than that!
It is still Bitwarden's responsibility since they shipped a dangerous configuration via Docker. Which they seemingly acknowledge and have since fixed.
The screenshot makes it look like the docker setup option was still in beta and the page had warnings all over it saying there could be possible issues. I can't really judge Bitwarden too harshly here for releasing something in beta that was later found to have a vulnerability in it.
If nginx does not run as root, how can it read other files than the ones explicitly assigned to the nginx user?
It's LITERALLY app hosting 101 and people did it that way 20+ years ago.
It may require more fiddling with group memberships, but it's well worth it.
Probably really screwing things up. Ouch.
Unfortunately, nginx (and other web servers) generally needs to start as root in normal web deployments because it listens on port 80 or 443. Ports below 1024 can be opened only by root.
A more detailed explanation can be found here: https://unix.stackexchange.com/questions/134301/why-does-ngi...
Or processes running with the CAP_NET_BIND_SERVICE capability! [1]
Capabilities are a Linux kernel feature. Granting CAP_NET_BIND_SERVICE to nginx means you do not need to start it with full root privileges; this capability alone gives it the ability to open ports below 1024.
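A quick way to see the restriction from an unprivileged process (a sketch; `can_bind` is just an illustrative helper, and the result for port 80 assumes the default Linux policy without the capability granted):

```python
import socket

def can_bind(port):
    # Binding ports below 1024 requires root or CAP_NET_BIND_SERVICE
    # on Linux; higher or ephemeral ports are open to any user.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return True
    except PermissionError:
        return False
    finally:
        s.close()

# As a regular user, can_bind(80) typically returns False, while
# can_bind(0) (an ephemeral port chosen by the kernel) succeeds.
```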
Using systemd, you can use this feature like this:
[Service]
ExecStart=/usr/bin/nginx -c /etc/my_nginx.conf
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
User=nginx
Group=nginx
(You probably also want to enable a ton of other sandboxing options; see `systemd-analyze security` for tips.)

[1]: https://man7.org/linux/man-pages/man7/capabilities.7.html
> ...we started dwelving into the code base...
The author may not be a native speaker, but this is far from a judgement on their English. I'm just curious about the provenance of this mistake, given the scarcity of words that begin with "dw". At first I thought it was a typo -- especially on a QWERTY keyboard -- but I've seen it often enough to question this.
Because of English pronunciation (pronounciation? :-P). English is extremely irregular; there are a thousand footguns in the language - both spoken and written - so as non-native speakers we tend to make small mistakes that stick to our brains like glue, and they're very hard to get rid of (rid off? :-P).
For me it kinda makes sense to say "dwelve" because it reminds me of "dwarfs" (dwarves? :-P) that live underground!
Another comment, added years later, admits the same confusion.
My particular pet peeve is using "weary" instead of "wary" or "leery". I've started to hear it spoken in YouTube videos now, too, so it's not just a typo.
As an aside, I didn't know GitHub code search accepted regex.
Horse pucky. In those days, Apache httpd held dominant market share. Nice historical hijacking.
> This vulnerability has been disclosed to Bitwarden and has since then been fixed. Bitwarden issued a US$6000 bounty, which is the highest bounty they issued on their HackerOne program.
That's a ridiculously low payout.
https://siliconangle.com/2022/09/06/bitwarden-reels-100m-ope...
If you want to play the long game and collect a lot of encrypted data now, you can simply wait until it is possible to trivially decrypt, and/or start cracking now and let the years work on it.
Most encryption decisions are framed as a tradeoff of the time and resources it would currently take to brute-force your way through it, and how many years before a simple attack becomes feasible, vs. your $5 wrench attacks in the present day.
But anyways, said vuln doesn't apply to vaultwarden.
[1] https://github.com/dani-garcia/vaultwarden/blob/19e671ff25bf...