What stops me is security. I simply don't know enough about securing a self-hosted site on real hardware in my home, and despite actively continuing to learn, it seems like the more I learn about it, the more questions I have. My identity is fairly public at this point, so if I say the wrong thing to the wrong person on HN or wherever, do I need to worry about someone much smarter than me setting up camp on my home network and ruining my life? That may sound really stupid to many of you, but this is the type of anxiety that stops the under-informed from trying stuff like this and pushes them toward services like Akamai/Linode or DO that make things fairly painless in terms of setup, monitoring and protection.
That said, I'm 110% open to reading/watching any resources people have that help teach newbies how to protect their assets when self-hosting.
Without getting too deep into it, there are some things I know how to do with computers that I probably shouldn't, so my thought is this: if I, a random idiot who just happened to learn a few things, can do X, then someone smarter than me who learned how to attack a target in an organized way probably has methods that I cannot even conceive of, can do it more easily, and possibly without me even knowing. It's this weird vacillation between paranoia and prudence.
For me, it's really about acknowledging what I know I don't know. I do some free courses, muck about with security puzzles, etc, even try my own experiments on my own machines, but the more I learn, the more I realize I don't know. I suppose that's the draw! The problem is when you learn these things in an unstructured way, it's hard to piece it all together and feel certain that you have covered all your vulnerable spots.
My current setup is to rent a cheap $5/month VPS running nginx. I then reverse-SSH from my home to the VPS, with each app on a different port. It works great until my power goes out; when it comes back on, the apps stay unavailable, because I haven't gotten the restart script to work 100% of the time.
But, I'd love to hear thoughts on security of reverse SSH from those that know.
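For the restart problem specifically, one common sketch is to let systemd supervise the tunnel via autossh instead of a hand-rolled script, so it comes back automatically after a power cut. Hostnames, ports, and the user here are placeholders; this assumes autossh is installed and key auth to the VPS already works:

```ini
# /etc/systemd/system/reverse-tunnel.service (hypothetical example)
[Unit]
Description=Reverse SSH tunnel to VPS
After=network-online.target
Wants=network-online.target

[Service]
# -N: no remote command; -R: expose local app port 8080 on the VPS loopback
ExecStart=/usr/bin/autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -o "ExitOnForwardFailure yes" \
    -R 127.0.0.1:8080:127.0.0.1:8080 tunnel@vps.example.com
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

`ExitOnForwardFailure` matters: without it, ssh will happily reconnect without the forward and systemd will think everything is fine.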
Nginx handles proxying and TLSing all HTTP traffic. It also enforces access rules: my services can only be reached from my home subnet or VPN subnet. Everywhere else gets a 403.
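The access-rule part of that setup can be sketched in a few lines of nginx config; the subnets and backend port here are made up, so substitute your own:

```nginx
# Only the home and VPN subnets may reach the service; everyone else gets 403.
location / {
    allow 192.168.1.0/24;   # home subnet (example)
    allow 10.8.0.0/24;      # VPN subnet (example)
    deny  all;
    proxy_pass http://127.0.0.1:8080;
}
```

This uses nginx's built-in access module, so no extra software is needed beyond the reverse proxy you already run.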
Since my new provider only offers CG-NAT, I've been using a cheap server, but actually having the server at home would be nice.
Block port 22, secure SSH with certificates only. Allow port 443 and configure your web server as a reverse proxy with a private backend.
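The key-only SSH part might look like the fragment below (OpenSSH options; note the parent may mean SSH certificates proper, in which case `TrustedUserCAKeys` is the relevant knob instead):

```
# /etc/ssh/sshd_config - key-only auth; pair with a firewall that
# exposes only 443 to the world
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin prohibit-password
PubkeyAuthentication yes
```

Test with a second session before closing your current one, or a typo here can lock you out.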
You don't need an IDS, you don't need a WAF and you don't need Cloudflare.
Only when you become the next Facebook do you need to start being concerned about security at that level.
I've contented myself using TLS client certs on my family's Android phones (they do not work at all on iOS for something like Home Assistant).
So you don't self-host at home, right?
I have been considering setting up a physical DMZ at home, with two routers (each with its own firewall), such that my LAN stays unmodified and my server can run between both routers. Then it feels like it would be similar to having a VPS in terms of security, maybe?
You want the VPS provider's firewall. Docker is going to punch holes through your software firewall.
If using Podman, should I use rootless containers (which IMO suck because you can't do macvlan, so the container won't easily get its own IP on my home network)? Is it OK to just use rootful Podman with an idmapped user running without root privileges inside the container and drop all unnecessary capabilities? Should I set up another POSIX user beforehand, such that breaking out of the container would in the end just yield access to an otherwise unused UID on the host?
If using systemd-nspawn, do all the above concerns about rootful / rootless hold? Is it a problem that one needs to run systemd-nspawn itself with root? The manpage itself mentions "Like all other systemd-nspawn features, this is not a security feature and provides protection against accidental destructive operations only.", so should I trust nspawn in general?
Or am I just being paranoid and everything should just be running YOLO-style with UID 1000 without any separation?
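For what it's worth, the rootful-but-idmapped approach in the second question can be expressed roughly like this (image name, UID range, and flags are illustrative, not a vetted recipe):

```shell
# Rootful Podman, but the workload runs under a remapped, otherwise-unused
# UID range on the host, with every capability dropped.
sudo podman run -d --name myapp \
  --uidmap 0:200000:65536 --gidmap 0:200000:65536 \
  --cap-drop=ALL \
  --read-only --tmpfs /tmp \
  docker.io/library/myapp:latest
```

With that mapping, a container escape lands you in UID 200000+ on the host, which owns nothing else, which is more or less the property the third question asks for.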
All of this makes me quite wary about running my paperless-ngx instance with all my important data next to my Vaultwarden with all of my passwords next to any Torrent clients or anything else with unencrypted open ports on the internet. Also keeping everything updated seems to be a full time job by itself.
WAFs etc. just hide the fact that the code in your service is full of holes.
Ensuring your infra is built in a secure way is as important as ensuring your service is built in a secure way.
I'm a security consultant, so this is not a problem I have. To me it seems very straightforward, and most things seem secure by default (with the exceptions being notorious enough that I'd know of them), so I'm interested in the other perspective.
Things I do:
* Make sure domain WHOIS does not point to me in any way, even if that means using some silly product like "WHOIS GUARD"
* Lock down any and all SSH access. Preferably only allow key-based authentication.
* Secure the communication substrate. For me this means running a Zerotier network which all dependent services listen on. I also try to use Unix sockets for any services colocated on the same operating system and restrict the service to only listen on sockets in a directory specifically accessible by the service.
* Try to control the permission surface of any service as much as possible. Containers can be a bit heavyweight for self-hosting but make this easy. There are lighter-weight alternatives like bubblewrap and firejail as well.
* Make use of services like fail2ban which can automate some of the hunting of bad actors for you.
* Consider hosting a listener for external traffic outside of your infra. For redundancy, load-shedding, and for security I have an external VPS that runs haproxy before routing over Zerotier to my home infrastructure. I enforce rate limits and fail2ban at the VPS so that bad actors get stopped upstream and use none of my home compute or bandwidth. (I also am setting up some redundant caches that live on the VPS so if my home network is down, one of my services can failover.)
* Segregate data into separate databases and make sure services only have access to databases that they need. With Postgres this is really simple with virtual databases being tied to different logins. I have some services that prune databases that run in a cron-like way (but using snooze instead) and they have no outbound net access.
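The last point is cheap to set up. A sketch with psql, using made-up role and database names:

```shell
# One database per service, owned by a dedicated login role that can
# reach nothing else. Names and password are placeholders.
sudo -u postgres psql <<'SQL'
CREATE ROLE paperless LOGIN PASSWORD 'change-me';
CREATE DATABASE paperless OWNER paperless;
-- Stop every other role from even connecting to this database:
REVOKE CONNECT ON DATABASE paperless FROM PUBLIC;
GRANT CONNECT ON DATABASE paperless TO paperless;
SQL
```

Repeat per service; a compromised app then only ever sees its own data.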
If your network layer is secure and your services follow least-privilege, then you should be fairly in the clear.
I first heard about Sandstorm at its first open beta 10 years ago (https://news.ycombinator.com/item?id=10147774) and have read about it on and off since then, but this is the first time I've heard anyone pitching it for "medical and other highly regulated industries". Where exactly does this come from?
> There's still nothing else quite like it
There are plenty of other similar self-hosted platforms; YunoHost is probably the closest, most mature and most feature-packed alternative to Sandstorm, at least as far as I know.
Of course there can be security issues on your webserver as well, but for a simple site this setup is learnable in an hour or two and you are ready to go.
You can hook that up on a Pi attached to your router or pay a bit to have it hosted somewhere. A domain is perhaps $2-5, and a TLS cert you can get from Let's Encrypt.
I have no idea how to put everything into a container in a way that makes sense. I just run this quite often on small hosted machines elsewhere, installing everything manually because it takes 5 minutes if you have done it before.
https://docs.opnsense.org/manual/how-tos/wireguard-client.ht...
Then on my phone I just flick on the switch and can access all my home services. It's a smidge less convenient, but feels nice and secure.
I can see running something in a Docker container, and while I'd advise against containers that ship with EVERYTHING, I'd also advise against using docker-compose to spin up an ungodly number of containers for every service.
You shouldn't be running multiple instances of PostgreSQL, or anything for that matter, at home. Find a service that can be installed using your operating system's package manager and set everything to auto-update.
Whatever you're self-hosting, be it for yourself, family or a few friends, there's absolutely nothing wrong with SQLite, files on disk or using the passwd file as your authentication backend.
If you are self hosting Kubernetes to learn Kubernetes, then by all means go ahead and do so. For actual use, stay away from anything more complex than a Unix around the year 2000.
It's not uncommon with self-hosting services using docker. It makes it easier to try out a new stack and you can mix and match versions of postgresql according to the needs of the software. It's also easier to remove if you decide you don't like the bit of software that you're trying out.
Edit: anyone actually interested in such a post?
Yup, it's basically like a "Docker Compose Manager" that lets you group containers more easily, since the manifest file format is basically Docker Compose's with just 1-2 tiny differences.
If there's one thing I would like Docker Swarm to have, it's not having to worry about which node creates a volume; I just want the service to always be deployed with the same volume without having to think about it.
That's the one weakness I see for multi-node stacks, the thing that prevents it from being "Docker Compose but distributed". So that's probably the point where I'd recommend maybe taking a look at Kubernetes.
No issues whatsoever and it is so easy to manage. It just works!
It's a shame I agree because it was nicely integrated with dockers own tooling. Plus I wouldn't have had to learn about k8s :)
Lord knows why people overcomplicate things with docker/kubernetes/etc.
Docker lets my OS be boring (and lets me basically never touch it) while having up-to-date user-facing software. No “well, this new version fixes a bug that’s annoying me, but it’s not in Debian stable… do I risk a 3rd-party backport repo screwing up my system or other services, or upgrade the whole OS just to get one newer package, which comes with similar risks?”
I just use shell scripts to launch the services, one script per service. Run the script once, forget about it until I want to upgrade it. Modify the version in the script, take the container down and destroy it (easily automated as part of the scripts, but I haven’t bothered), run the script. Done, forget about it again until next time.
Almost all the commands I run on my server are basic file management, docker stuff, or zfs commands. I could switch distros entirely and hardly even notice. Truly a boring OS.
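One of those per-service scripts might look something like this (service, ports, and paths are examples, not the parent's actual setup):

```shell
#!/bin/sh
# ~/services/navidrome.sh - one script per service; bump VERSION to upgrade,
# then destroy and re-run. All values below are illustrative.
VERSION=0.54.1
docker rm -f navidrome 2>/dev/null
docker run -d --name navidrome \
  --restart unless-stopped \
  -p 4533:4533 \
  -v /srv/navidrome/data:/data \
  -v /srv/music:/music:ro \
  deluan/navidrome:$VERSION
```

Since all state lives in the bind mounts, recreating the container is a non-event, which is what makes the "forget about it until next time" workflow safe.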
Then they decided to port everything to K8s because of overblown internet drama and I lost all interest. Total shame that a great resource for Nix became yet another K8s fest.
But I just wanted to comment something similar. It's probably heavily dependent on how many services you self-host, but I have 6 services on my VPS and they are just simple podman containers that I just run. Some of them automatically, some of them manually. On top of that, a very simple nginx configuration (mostly just subdomains with reverse proxy) and that's it. I don't need an extra container for my nginx, I think (or is there a very good security reason? I have "nothing to hide" and/or lose, but still). My lazy brain thinks as long as I keep nginx up to date with my package manager and my certbot running, I'll be fine.
"Premature clustering is the source of all evil" - or something like that.
I think it's a great way to learn Kubernetes if you're interested in that.
Writing your first YAML or two seems intimidating at first.
But after that, everything is cut from the same cloth. It's an escape from the long dark age of every sysadmin forever cooking up whatever whimsy served them at the time, an escape from each service having very different management practices around it.
And there's no other community anywhere like Kubernetes. Unbelievably many very good quality very smart helm charts out there, such as https://github.com/bitnami/charts/tree/main/bitnami just ready to go. Really sweet home-ops setups like https://github.com/onedr0p/home-ops that show that once you have a platform under foot, adding more services is really easy, showing an amazing range of home-ops things you might be interested in.
> Last thing I need is Kubernetes at home
Last thing we need is this incredibly shitty attitude. Fuck around and find out is the hacker spirit. It's actually not hard if you try, and actually having a base platform where things follow common patterns & practices & you can reuse existing skills & services is kind of great. Everything is amazing, but the snivelling, shitty whining without even making the tiniest little case for your unkind low-effort hating will surely continue. Low-signal people will remain low signal; best avoid.
1. RHEL 9 with Developer Subscription. Installed dnf-automatic, set `reboot = when-changed`, so it's zero effort to reliably apply all updates with daily reboots. One or two minutes of downtime, not a big deal.
2. For services: podman with quadlets. It's the RH-flavoured replacement for docker-compose. Not sure if I like it, but I guess that's the "future", so I'm embracing it. Every service is a custom-built image with a common parent to reduce space waste (by reusing the base OS layer).
So far I want to run static http (nginx), vaultwarden, postfix and some webmail. May be more in the future.
This setup wastes a lot of disk space on image data, so expect to order a few more gigabytes of disk to pay for modern tech.
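For anyone who hasn't seen quadlets: they are systemd unit files with a `[Container]` section that podman translates into a service. A minimal sketch (image, port, and paths are examples):

```ini
# /etc/containers/systemd/vaultwarden.container (hypothetical)
# `systemctl daemon-reload` generates vaultwarden.service from this.
[Container]
Image=docker.io/vaultwarden/server:latest
PublishPort=127.0.0.1:8222:80
Volume=/srv/vaultwarden:/data

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

You then manage the service with plain `systemctl start/stop/status vaultwarden`, which is the main selling point over docker-compose.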
On an unrelated note, an article of how to rent a VPS in China would be interesting :)
New laws come to mind. If a government decides to try to outlaw encryption again, cloud/hosting companies located there wouldn't have a choice but to comply, or give up on the business. The laws could also be made in such a way that individuals are responsible for avoiding it, even self-hosters, and people using it anyway could be held legally responsible for its potential harms.
You are right though, it gives significantly more control to users. It's just realising 100% of the benefits that might be trickier.
Don't worry about the servers. Worry about mandated software on the client
Given that apparently it's quite difficult to even get a WeChat account without a national ID, I suspect that step 1 is "learn mandarin" and step 2 is "get a Chinese national ID".
Self hosting to me is, at the very least having physical access to the machines.
Things that haven't worked for me:
- Standalone Docker: Doesn't work great on its own. Containers often need to be recreated to modify immutable properties, like the specific image the container is running. To recreate the container, you need to store some state about how it _should_ work elsewhere.
- Quadlet: Too hard to manage clusters of services. Podman has subtle differences to Docker that occasionally cause problems and really tempting features (e.g. rootless) that cause more problems if you try to use them.
- Kubernetes: Waaaay too heavy. Even the "lightweight" distributions like k3s, k0s etc. embed large components of the official distribution, which are still heavy. Part of the embedded metric server for example periodically enumerates every single open file handle in every container. This leads to huge CPU spikes for a feature I don't care about.
With my setup now, I can more or less copy-paste a template into a new file, tweak some strings and have a HTTPS-enabled service available at https://thing.mydomain.mine. This works pretty painlessly even for services that need several volumes to maintain state or need several containers that work together.
Otherwise good article. If you want to go rootless (which you should!), Podman is the way to go; but Docker works rootless too, with some modifications [1]. I have found Docker rootless to be reliable and robust on both Debian and Ubuntu. It also solves permissions problems because your rootless user owns files inside and outside the container, whereas with rootful setups all files outside the container are owned by root, which can be a pain.
Also, you don't need Watchtower. Automatic `docker compose pull` can be setup using standard crontab, see [2].
[1]: https://du.nkel.dev/blog/2023-12-12_mastodon-docker-rootless...
[2]: https://du.nkel.dev/blog/2023-12-12_mastodon-docker-rootless...
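The crontab approach from [2] boils down to a single line; the schedule and stack path here are examples:

```
# crontab -e: pull new images and restart changed containers nightly
30 4 * * * cd /srv/stack && docker compose pull --quiet && docker compose up -d
```

`up -d` only recreates containers whose image actually changed, so an unchanged stack is a no-op.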
I run it alongside Portainer precisely because of the compose.yaml file, which I want to have control over.
Just use a static site generator like zola or hugo and rsync to a small VPS running caddy or nginx. If you need something dynamic, there are many frameworks you can just rsync too, with few dependencies. Or use PHP, it's not that bad. Just restrict all locations except public ones to your IP in the nginx config if you use something like WordPress and you should be fine.
If you have any critical stuff, create a zfs dataset and use that to backup to another VPS using zfs send, there are tools to make it easy, much easier than DB replication.
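A manual sketch of that snapshot-and-send flow (pool, dataset, snapshot names, and the remote host are all placeholders; tools like syncoid automate the incremental bookkeeping):

```shell
# Take a dated snapshot of the critical dataset
zfs snapshot tank/critical@$(date +%F)

# First run: full send. Later runs: incremental from the previous snapshot.
zfs send -i tank/critical@2025-06-01 tank/critical@2025-06-02 | \
  ssh backup@vps2.example.com zfs receive -F backup/critical
```

Because the increments are block-level, this stays fast even for large databases, which is why it beats DB-level replication for a homelab.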
But what about other services, like if you want a database server as well, a mail server, etc.?
I started using containers when I last upgraded hardware, and while it's not as beneficial as I had hoped, it's still an improvement to be able to clone one, do a test upgrade, and only then upgrade the original, as well as being able to upgrade services one by one rather than committing to a huge project where you upgrade the host OS and everything has to move to the new major version.
Now most servers are app servers, and they all run archlinux. We prepare images and we run them with PXE.
Both those are out of scope for self host.
But, we also have about a dozen staging, dev, and playground servers. Those are just regular installs of Arch. We run postgres, redis, apps in many languages... For all that we use system packages and the AUR. DB upgrade? ZFS snapshot, then I follow the Arch wiki postgres upgrade; it takes a few minutes, there is downtime, but it is fine. You mess anything up? ZFS rollback. You miss a single file? cd .zfs/snapshots and grab it. I get about 30 minutes of cumulative downtime per year on those machines. That's more than enough for any self-hosting.
We use arch because we try the latest "toys" on those. If you self host take an LTS distribution and you'll be fine.
My vision of self-hosting is basically the opposite. I only self-host existing apps and services for my and my family's use. I have a TrueNAS box with a few disks, run Jellyfin for music and shows, run a Nextcloud instance, a restic REST server for backing up our devices, etc. I feel like the OP is more targeted at this type of "self-hosting".
I'm just a boomer (technically a millennial) who sticks to Arch Linux even when it comes to servers, I have zero friction, really. I have no issues self-hosting whatever me or a client requires, keeping it minimal and functional.
I self-host like it is 2000 (apart from a couple of some more modern stuff, if you could consider systemd and certbot, etc. modern). :D
One change I made that may help with this, is to not install crap on the host that I don't plan to use for a long time. Trying out a new database server or want to set up an Android IDE for a temporary project? Use a VM, don't clutter up random files all over the host. Is this what is happening on your Windows perhaps?
I've been running a DigitalOcean VPS for years hosting my personal projects. These include a static website, n8n workflows, and Umami analytics. I used manual Docker container management, Nginx, and manual Let's Encrypt certificate renewals. I was too lazy even to set up certbot.
I've migrated to a Portainer + Caddy setup. Now I have a UI for container management and automatic SSL certificate handling. It took about two hours.
Thanks for bringing me to 2025!
Sacrificing some convenience? Probably. But POSIX shell and coreutils is the last truly stable interface. After ~12 years of doing this I got sick of tool churn.
FreeBSD and jails is so easy to maintain its unbelievable.
I have now switched from some SaaS products to self-hosted alternatives:
- Feed reader: Feedly to FreshRSS.
- Photo management/repository: Google Photos to Imich or Imgich (don't remember).
- Bookmarks: Mozilla's Pocket to Hoarder.
And so far, my experience has been simple and awesome!
Don't get me wrong, I love some of the software suggested. However, it's yet another post that does not take backups as seriously as the rest of the self-hosting stack.
Backups are stuck in 2013. We need plug-and-play backups for containers! No more rolling your own with zfs datasets, backing up data at the filesystem level (using sanoid/syncoid to manage snapshots, or any of the alternatives).
Each VM or container gets a data mount on a zvol. Containers go to OS mount and each OS has its own volume (so most VMs end up with 2 volumes attached)
Fully automated, incremental, verified backups, and restoring is one click of a button.
I find it strange that Docker, which already knows your volumes, app data, and config, can't automatically back up and restore databases and configs. Jeez, they could have built it right into Docker.
One could set up a Docker Compose service that uses rclone to gzip and back up your docker volumes to something durable to get this done. An even more advanced version of this would automate testing the backups by restoring them into a clean environment and running some tests with BATS or whatever testing framework you want.
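A sketch of that sidecar idea (volume and remote names are invented; it assumes an rclone remote called "b2" is already configured on the host):

```yaml
# docker-compose.yaml fragment: a backup service that syncs an existing
# application volume to remote storage when run.
services:
  backup:
    image: rclone/rclone
    volumes:
      - app_data:/data:ro
      - ~/.config/rclone:/config/rclone:ro
    entrypoint: ["rclone", "sync", "/data", "b2:my-backups/app_data"]

volumes:
  app_data:
    external: true
```

Run it on a schedule with `docker compose run --rm backup`; mounting the volume read-only means a bug in the backup job can't corrupt live data. Note this is only safe as-is for files at rest, since databases generally need a dump or snapshot first to get a consistent copy.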
I'm curious as to what issues you might be alluding to!
Nix (and I recently adopted deploy-rs to ensure I keep SSH access across upgrades for rolling back or other troubleshooting) makes experimenting really just a breeze! Rolling back to a working environment becomes trivial, which frees you up to just try stuff. Plus things are reproducible so you can try something with a different set of machines before going to "prod" if you want.
I haven't needed to tune selfhosted databases. They do fine for low load on cheap hardware from 10 years ago.
I remember spending time on this as a teenager but I haven't touched my MariaDB config in a decade now probably. Ah no, one time a few years ago I turned off fsyncing temporarily to do a huge batch of insertions (helped a lot with qps, especially on the HDD I used at the time), but that's not something to leave permanently enabled so not really tuning it for production use
Bitnami PostgreSQL Helm chart - https://github.com/bitnami/charts/tree/main/bitnami/postgres...
Not sure on "easy" backups besides just running pg_dump on a cron but it's not very space efficient (each backup is a full backup, there's no incremental)
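Even the plain pg_dump-on-cron approach benefits from a bit of rotation so full backups don't accumulate forever. A minimal sketch (paths, DB name, and schedule are placeholders):

```shell
# prune_backups DIR KEEP - delete all but the newest KEEP dumps in DIR.
# Assumes date-stamped names like mydb-2025-01-04.dump, which sort
# chronologically, so a lexicographic sort puts the oldest first.
prune_backups() {
  dir="$1"; keep="$2"
  ls "$dir" | sort | head -n "-$keep" | while read -r f; do
    rm -- "$dir/$f"
  done
}

# Hypothetical nightly crontab entry pairing the dump with the pruning:
# 0 3 * * * pg_dump -Fc mydb > "$HOME/pg-backups/mydb-$(date +%F).dump" \
#           && prune_backups "$HOME/pg-backups" 7
```

For real incrementals you'd reach for something like pgBackRest or WAL archiving, but for a homelab-sized database, seven rotated full dumps are often plenty.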
(Docker Swarm only for now, though I’m thinking about adding k8s later this year)
Cloudflare Tunnels is a step in the right direction, but it’s not end to end encrypted.
The question is then, how to secure self hosted apps with minimal configuration, in a way that is almost bulletproof?
It's easy to manage and reason about.
If the software you host constantly has vulnerabilities and something like apt install unattended-upgrades doesn't resolve them, maybe the software simply isn't fit for hosting no matter what team you put on it. That hired team might as well just spend its time making the software secure rather than "keeping on top of vulnerabilities".
The solution is a secure software in front. It could be Wireguard, but sometimes you don’t know your users or they don’t want to install anything.
While it's not nearly as powerful as say DataDog, it provides the core essentials of CPU, memory, disk, network, temperature and even GPU monitoring (via agent only).
Would highly recommend.
And I like that I can deploy images which basically don't have any requirement to be deployable to Docker Swarm. Is that also the case with Unraid?
I have been writing up my thoughts (and an example): https://andrewperkins.com.au/kamal/
The ability to deploy to both cloud servers and on-premises is a big win as I often work on projects that have a mix of both.
As the sibling comment says, it’s focussed on web servers. In my use case that is fine!
It's stateful (cleans up things when they're no longer in your config), procedural (you control the flow and can trigger things as needed), and supports flexible deployment models (push or pull). Full disclosure, I created it and use it across my business and personal devices.
Would you please call this something else?
"It automatically reconciles", perhaps? I know that a multi-word phrase isn't nearly as snappy, but not only are "it's started" and "started" overloaded with a bunch of meanings, approximately zero of them mean what you want them to mean in this new context.
Seems like Docker has won so comprehensively that even more convenient (But unfamiliar) options are pushed to use it.
I haven't tried it myself yet. Has anybody else given it a spin?
[0]: https://canine.sh/
$ curl https://kiranet.org/self-hosting-like-its-2025/
<!DOCTYPE html>
<html lang="en">
<head>
<title>//localhost:1313/posts/self-hosting-like-its-2025/</title>
<link rel="canonical" href="//localhost:1313/posts/self-hosting-like-its-2025/">
<meta name="robots" content="noindex">
<meta charset="utf-8">
<meta http-equiv="refresh" content="0; url=//localhost:1313/posts/self-hosting-like-its-2025/">
</head>
</html>

If you need more power: I had success with the HP ProDesk Mini (or any other one-litre PC); you can get these second-hand from like $150 and extend RAM and SSDs however you like. You can even pick the processor / generation to fit your needs best. These can have consumption from like 30W if I'm not mistaken.
I have no experience with real and expensive server hardware, but most people don't need that for a homelab.
Wut? For many, a Raspberry Pi with 1 GB RAM and the regular sdcard can be enough, you really don't need to go fancy if you don't want to run anything particularly heavy. Or if it's cpu-intensive then you might need the newest Pi or something even beefier but still only the lowest RAM and smallest/slowest storage options (like for WordPress). As you say, it depends on needs
I always recommend using an old laptop to start out with because you've already got it anyway and it's already low power yet very powerful: if it can run graphical software from 2020 then it'll be fine as server until 2030 for anything standard like a web server (with half a dozen websites and databases, such as a link shortener, some data explorers, and my personal site), torrent box, VPN server, mail server, git server, IRC bouncer, Windows VM for some special software, chat bot, etc. all at once. At least, that's what I currently run on my 2012 laptop and the thing is idle nearly the whole time. Other advantages of a laptop include a built-in KVM console and UPS, at least while you still trust the old battery (one should detach and recycle that component after some years)
1. lighttpd exposing a website, using letsencrypt and a cron job to run certbot and restart lighttpd.
2. mox (https://www.xmox.nl) to run a mail server, with PTR records set up by my ISP. I am not on a CG-NAT ISP, else none of this would be possible. mox makes it easy enough to set up DMARC and SPF etc., with appropriate output given that you can add to your DNS records.
3. I grab the list of IPs from https://github.com/herrbischoff/country-ip-blocks and add them to an iptables list (using ipset) every week so that I can block certain countries that have no legitimate reason to be connecting, with iptables just dropping the connection. I think I also use https://github.com/jhassine/server-ip-addresses to drop certain ranges from cloud servers to make annoying script kiddies go away.
4. peer-calls (https://github.com/peer-calls/peer-calls/) to be able to video call with my family and friends (with a small STUN server running locally for NAT traversal as I recall).
5. linx (https://github.com/andreimarcu/linx-server) to share single links to files (you can get an Android app to upload from your phone)
6. filebrowser for sharing blocks of files for users (https://github.com/filebrowser/filebrowser).
7. pihole runs on it so blocks adverts.
8. Wireguard runs on the Pi and I open the VPN ports on my router. I use the VPN on my phone so adverts are blocked when I am out and about (traffic gets routed through the Pi).
9. navidrome runs on it and I use subtracks on Android to stream (or just download albums for when I have spotty connection).
10. mpd runs on the Pi and it plays music to some speakers in the house, so I can control it with M.A.L.P on Android.
11. I use goaccess (https://goaccess.io) to look at my server logs and see what is hitting me.
12. I use maxmind geoip data so I know which countries are hitting me.
13. minidlna runs on the Pi so I can stream films to my TV.
14. I run CUPS on it too so that my rubbish wireless Samsung printer can be printed to from Android and my wife's Apple devices without having to buy an AirPrint-compatible printer.
15. xrdp running so I can log into a visual desktop on the Pi if required.
My router doesn't expose SSH ports, just appropriate ports for these web services and the VPN. SSH keys are useful. SSH is not open to the world anyway and you have to VPN into the network first.
This all sits happily and quietly in a cupboard and uses a feeble amount of power.
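The country-blocking from item 3 boils down to something like this (country code and file layout follow the herrbischoff/country-ip-blocks repo; run weekly from cron as root):

```shell
# Load one country's CIDR list into an ipset and drop everything from it.
ipset create blocked-ranges hash:net -exist
ipset flush blocked-ranges
while read -r cidr; do
  ipset add blocked-ranges "$cidr" -exist
done < country-ip-blocks/ipv4/xx.cidr   # xx = country code to block

# Insert the DROP rule only if it isn't already present.
iptables -C INPUT -m set --match-set blocked-ranges src -j DROP 2>/dev/null \
  || iptables -I INPUT -m set --match-set blocked-ranges src -j DROP
```

Using an ipset keeps this a single iptables rule no matter how many thousands of ranges are loaded, which is why it stays cheap on a Pi.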