I guess it’s like how “cooking from scratch” evolved. A cookbook from the nineteenth century might have said “1 hog” as an ingredient and instructed you to slaughter it. Now of course you buy hog pieces on foam trays.
But there's a whole market of people who could benefit from self hosting, but shouldn't be required to understand all the details.
For example, you can get many of these benefits by using a managed service with your own domain. Things like data ownership, open source software, provider competition, etc.
I think we need a broader term. I've been using "indie hosting" lately.
I think one of the big drivers has been the serious increase in performance and capability of low-power embedded processors from Intel and AMD (and, in the last year or so, some ARM-based ones): they now support more than 2GB of RAM and have multiple cores that can do meaningful work even within a 15W TDP.
The one generally accepted exception to this is network protection. You don't want to expose your home IP address to the outside world if you can help it, so a lot of people use Tailscale, Cloudflare Tunnels, or a VPS as a proxy.
In case that doesn't make me an old timer, I also actually have pork and home cured bacon in the freezer from hogs we raised and processed. "An old soul living in a new world" feels pretty fitting here.
Every ISP prohibition on self-hosting that I have seen specifies commercial use, not just hosting services (since obviously that could technically prohibit tons of normal and authorized uses like co-op games).
and so does the author, kind of...
"And so, here is a gentle introduction to self-hosting that is not "true self-hosting", but whatever. Sue me."
:)
Or read an HN thread on "true self-hosting", https://news.ycombinator.com/item?id=41440855#41460999
If you're running your own box, you still depend on network infrastructure and uplink of a service provider, whereas a cloud infrastructure provider may go the other way and negotiate direct connections themselves.
Plenty of valuable lessons await those who even just provision a virtual host inside AWS and configure the operating system and its software themselves. Other lessons await those who rack their own servers and VNETs in a data-centre provider instead of running them onsite.
There's only so much you can, should, or want to do yourself, and it's about finding the combination and degree that works for you and your goals.
For a lot of stuff that doesn't need constant public network connectivity, I choose to run a home lab.
Depends, maybe? Was the speaker talking about hardware or software 10 years ago?
Because, when I was given the 'self-hosting' option by some SaaS vendor, it meant that I could host it on whatever I want to independent of the vendor, whether that is a rack in my bedroom or a DO droplet.
When I was given the 'self-hosting' option by some computer vendor (Dell, HP, Sun, etc), it meant that I could put the unit into a rack in my bedroom.
Context was always key; in my mind, nothing has changed.
Kind of makes sense, but kind of also makes historical texts more difficult to understand. In the year 2124, who knows what "self-hosting" meant in 2054? I guess it's up to future software archeologists to figure out.
I actually self-host tools, and that involves having (in my case) a couple of rackmount servers in my spare bathroom, and a Raspberry Pi 5 with a 4x M.2 HAT on my desk. Hell, even just running stuff on your own desktop/laptop is self-hosting.
But PaaS and SaaS are just as not-self-hosted as IaaS is. It's literally cloud hosting.
It's not so hard to genuinely self-host. You just need a reasonable ISP who is willing to allow inbound connections, and to be sensible about securing your systems.
WHERE and HOW it is hosted, is less important to me. Because if you self-host your own tools, you can freely pick them up and move them to any hosting provider, a cloud provider, or a Raspberry Pi in your basement. Self-hosting FREES you from infra/vendor lock-in.
Hosting from home has its own challenges, so I get why people would go to a hosting provider, but I do think some control is given up in the process.
I feel like there's another term for what you're thinking of but I cannot come up with what it is.
Self-hosting definitely meant hosting locally on your own hardware, back before hosting providers like Linode, DigitalOcean, AWS, etc. existed or were as customizable as they are today.
Even corporations "self-host" GitHub Enterprise or GitLab when they set it up on AWS. Self-host just means you're not reliant on the creator of the application to host it for you and manage the server.
There are certainly advantages and disadvantages to self-hosting on your own hardware, as there are to using a hosting provider.
Unfortunately, it also normalized and further desensitized us to the topic.
Though I'm quite happy to see that eating sentient beings is going out of fashion, at least in the developed world.
1. battery backup
That said, I'm not zealous about it. "Perfect is the enemy of good" and I like ecosystem diversity in general. Better to have a few dozen shared hosting providers than 2 or 3 monopolies.
The awesome-selfhosted repository is also a great place to find projects to self-host, but it lacks some ease-of-use features, which is why I've created a directory with some UX improvements at https://selfhostedworld.com. It has search, filters projects by stars, trending, and date, and also has a dark mode.
I've thought of setting up and running a server for a long time and finally have a spare laptop so I'm thinking of actually running a NAS at least.
I would try to set up automatic updates for critical security patches, or update about weekly. I know people that self-host and do it monthly and they seem fine too. Most anything super scary vulnerability-wise is on the front page here for a while, so if you read regularly you'll probably see when a quick update is prudent. I personally use NixOS for all of my servers and have auto-updates configured to run daily.
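For reference, here's a minimal sketch of what that daily auto-update looks like in a NixOS configuration; the schedule and reboot policy are just examples, adjust to taste:

```nix
# configuration.nix (fragment): fetch and apply updates daily.
system.autoUpgrade = {
  enable = true;
  dates = "daily";       # systemd calendar expression
  allowReboot = false;   # don't reboot automatically after kernel updates
};
```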
An old laptop is exactly how I got started 13 years ago, they're great because they tend to be pretty power efficient and quiet too.
https://developers.cloudflare.com/cloudflare-one/application...
Because wireguard is UDP and only responds to valid requests, there isn't any open port from the outside. Not even ssh.
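Since WireGuard silently drops any packet that doesn't authenticate, a port scan shows nothing listening at all. As a sketch, a minimal server-side wg-quick config looks like this (keys and addresses are placeholders):

```ini
# /etc/wireguard/wg0.conf (server side)
[Interface]
PrivateKey = <server-private-key>   ; generate with: wg genkey
Address    = 10.0.0.1/24
ListenPort = 51820                  ; UDP; unauthenticated packets get no reply

[Peer]
PublicKey  = <client-public-key>
AllowedIPs = 10.0.0.2/32            ; this peer's tunnel address
```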
I run SSH (requires PKI outside local network), IRC, nextcloud, and ampache (though don't really use ampache anymore :( ).
Home server is encrypted RAID6 Arch Linux. If I had to do it again I'd forego rolling releases and use something more stable, like Debian.
Encrypted backups are done to backblaze once a month. I also have a backup drive that I plug in on occasion, encrypted of course.
Which reminds me my RAID6 drives are getting old now... I'm tempted to move to a VPS.
I set up Jellyfin and Kavita, and those are internet-exposed, but also Nextcloud, Portainer, and Calibre, and those are behind GitHub SSO auth via Cloudflare. Basically, before you can hit the Nextcloud login page, you have to auth to GitHub (as me) with 2FA first, so no one can sit there and try to brute-force my Nextcloud login.
Administrative things like SSH and RDP are best accessed with a VPN but you can configure SSH in particular to be key-based authentication only, which is very secure.
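For key-only SSH, this is roughly what the relevant sshd_config lines look like (test from a second session before logging out, in case you lock yourself out):

```
# /etc/ssh/sshd_config (fragment)
PubkeyAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication no    # ChallengeResponseAuthentication on older OpenSSH
PermitRootLogin prohibit-password
```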
First step to figure out if you actually need to be able to access it from the outside at all. If you just want a NAS, chances are you can put it on a separate VLAN/network that is only accessible within your LAN, so it wouldn't even be accessible from the outside.
If you really need it to be accessible from the outside, make sure you start with everything locked down/not accessible at all, then step-by-step open up exactly what you want, and nothing else. Make sure the accessible endpoints run software you keep up to date, at least weekly if not daily.
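As a sketch of "locked down by default, open exactly what you want", an nftables ruleset along these lines does it; the WireGuard port here is just an example of the one thing you've chosen to expose:

```
# /etc/nftables.conf (sketch): drop everything inbound by default,
# then allow only what you explicitly need.
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif "lo" accept
        udp dport 51820 accept    # e.g. WireGuard only
    }
}
```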
I haven't had any security issues yet (knock on wood). But it seems pretty low-risk if you follow basic best practices. The only thing I have exposed to the internet is a reverse proxy that proxies to a handful of docker containers.
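The comment doesn't say which reverse proxy, but as an illustration, a Caddyfile doing roughly this looks like the following (hostnames and backend ports are made up):

```
# Caddyfile sketch: one public entry point, TLS handled automatically,
# each hostname proxied to a local container port.
jellyfin.example.com {
    reverse_proxy 127.0.0.1:8096
}

kavita.example.com {
    reverse_proxy 127.0.0.1:5000
}
```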
For auth, I also made a comparison of OIDC servers here: https://github.com/lastlogin-net/obligator#comparison-is-the...
It so happens that Coolify provides a paid option to sponsor the development, but it is not mandatory.
Kubernetes gets a lot of side eyes in the self-hosted community. But that's true of all of self-hosting, though. So why not go all in?
I've got 3 Dell R720XDs running NixOS with k3s in multi-master mode. It runs Rook/Ceph for storage, and I've got like 12 hard drives in various sizes. My favorite party trick is yoinking a random hard drive out of the cluster while streaming videos. Does not care. Plug it back in and it's like nothing happened. I've still got tons of room, and I keep finding useful things to host on it.
Personally, even with a 4 node setup (of tiny desktops; the hardware you have would easily cost me $200/mo in power bills), I use docker swarm. Old and unloved, but does everything I need for multi node deployment and orchestration with only a sliver more complexity than vanilla docker.
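For the curious, a Swarm stack file looks almost identical to a vanilla Compose file; the service and replica count here are just an example:

```yaml
# stack.yml sketch: deploy with `docker stack deploy -c stack.yml demo`
# after `docker swarm init` on the first node.
version: "3.8"
services:
  whoami:
    image: traefik/whoami    # tiny demo service that echoes request info
    deploy:
      replicas: 2            # Swarm schedules these across the nodes
    ports:
      - "8080:80"
```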
This way, I get the advantages of NixOS config, while also being able to run arbitrary applications that might not be available on nixpkgs.
As far as storage goes, I just use ZFS on the hypervisor (Proxmox) and expose that over NFS locally.
> It is 2024, and I say it is time we revisited some of the fundamental joys of setting up our own systems.
Self-hosting really is joyful. It's that combination of learning, challenge, and utility.
+1 to Actual Budget
+1 to Changedetection.io
-1 for not mentioning threat modeling / security. The author uses HTTPS but leaves their websites open to the public internet. First-timers should host LAN-only or lock their stuff way down. I guess that's tricky with shared hosting without some kind of IP restriction or tunneling, though. No idea if uberspace offers something like that.
For folks getting past the initial stages of self-hosting, I'd really recommend something like Docker to run more and more different apps side by side. Bundled dependencies FTW. Shameless plug for my book, which covers the Docker method: https://selfhostbook.com
If it's not your hardware running in a space you own or rent, you're not self-hosting.
Currently I have a little Micro-ITX box. But once upon a time I had a proper server rack with 6 U worth of servers, UPS, networking, etc. (Before I was married...)
For those who are curious about my setup: I bought a used Dell R630 on eBay for cheap. 1TB RAID 1 on SSDs, 32GB RAM, 32 cores, and I am enjoying running a few small hobby apps with Docker, virsh, and minikube (yes, I learned all 3). I have a 1Gbps fiber connection. I use a 1-minute cronjob to detect if my IP changes, and I use the Linode API to change my DNS A records.
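A sketch of what such a cron-driven updater can look like, assuming a `LINODE_TOKEN` plus the numeric `DOMAIN_ID`/`RECORD_ID` looked up beforehand via the Linode API; the IP-echo service is also just an example:

```shell
#!/bin/sh
# Dynamic-DNS sketch: compare the current public IP against a cached
# copy and PUT the Linode A record only when it changes.
CACHE="${CACHE:-/var/cache/ddns/last_ip}"

current_ip() {
    # Any "what is my IP" service works; ipify is one example.
    curl -fsS https://api.ipify.org
}

update_record() {
    # Update the target of an existing A record via the Linode v4 API.
    curl -fsS -X PUT \
        -H "Authorization: Bearer $LINODE_TOKEN" \
        -H "Content-Type: application/json" \
        -d "{\"target\":\"$1\"}" \
        "https://api.linode.com/v4/domains/$DOMAIN_ID/records/$RECORD_ID"
}

main() {
    ip="$(current_ip)" || return 1
    # No change since last run: nothing to do.
    [ "$ip" = "$(cat "$CACHE" 2>/dev/null)" ] && return 0
    update_record "$ip" && printf '%s\n' "$ip" > "$CACHE"
}

# Crontab entry, e.g.: * * * * * . /usr/local/bin/ddns.sh && main
```

The cache file is what keeps this cheap enough to run every minute: the DNS API is only touched when the IP actually changes.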
For compute:
"Each tenancy gets the first 3,000 OCPU hours and 18,000 GB hours per month for free to create Ampere A1 Compute instances. This free-tier usage is shared across Bare Metal, Virtual Machine, and Container Instances."
For block storage:
"Block Volume service Free tier allowance is for entire tenancy, 200GB total/tenancy. If there are multiple quotes for the same tenancy, only total 200GB can be applied"
In other words: 3,000 OCPU-hours and 18,000 GB-hours spread over a ~730-hour month works out to a 4-core ARM CPU + 24GB RAM, plus 200GB of block storage, for free.
We made Cloud Seeder [2], an open source application that makes deploying and managing your self-hosted server a one-click affair!
Hope this comes in handy for someone! :-)
[1] https://ipv6.rs
[2] https://ipv6.rs/cloudseeder https://github.com/ipv6rslimited/cloudseeder
* A: "While IPv4 is still widely used, its necessity is diminishing as the world transitions to IPv6. (...)"
;)
We offer 5 because we're geared toward helping people host appliances as opposed to raw network setup! We also offer automatic rDNS with this, as well as the Cloud Seeder appliance!
Thanks again for your comments and thoughts!
"Seriously, else-hosting is the practical option, let someone else worry about the reliability, concurrency, redundancy and availability of your systems."
Spend some time trying to get through a maze of automated phone answering systems, then try to ascertain whether the human, when you finally reach them, even understands the issue, then wonder how much of what they're telling you is just to get you off the phone, all the while wondering if calling even does anything, and you'll wonder whether it's better to blindly trust a company that likely doesn't have tech people we're allowed to talk to, or to just do it ourselves.
At least when there's an issue with my things, I can address it. Although a bit of a tangent, I'd love to see a review of major hosting providers based on whether you can talk to a human, and whether said human knows anything at all about Internet stuff.
This has not been a detailed step by step walkthrough
on how to do things, by design. You are meant to go and explore;
this is simply a way pointer to invigorate your curiosities
Sorry, but because I came looking for solutions, I found the invigoration aggravating, but then helpful in focusing my attention. Scalable services and sites I can build, 10 different ways.
My enduring, blocking need is for dead-simple idiot-proof network management to safely poke a head out on public IP from home. And to make secure peer-to-peer connections. Somehow that process never converges on a solution in O(available) time.
</complaining>
Recent thread: https://news.ycombinator.com/item?id=41440855#41460999
> network management to safely poke a head out on public IP from home
For remote access to private services, would Tailscale/Wireguard be an option? It can even use Apple TV as an exit node.
> secure peer-to-peer connections
Which protocols would you consider secure for P2P use, e.g. which solutions have you tried previously which failed to converge?