I actually think Tailscale may be an even bigger deal here than sysadmin help from Claude Code et al.
The biggest reason I had not to run a home server was security: I'm worried that I might fall behind on updates and end up compromised.
Tailscale dramatically reduces this risk, because I can so easily configure it so my own devices can talk to my home server from anywhere in the world without the risk of exposing any ports on it directly to the internet.
Being able to hit my home server directly from my iPhone via a tailnet no matter where in the world my iPhone might be is really cool.
I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that. I can't rule out a vulnerability somewhere but services are containerized and/or run as separate UNIX users. It's the way the Internet is meant to work.
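For reference, the per-service UNIX-user isolation described above can be approximated with systemd's built-in sandboxing; this is only a sketch, and the unit name and binary path are illustrative, not from the comment:

```ini
# /etc/systemd/system/gameserver.service -- illustrative hardening sketch
[Service]
ExecStart=/usr/local/bin/gameserver
DynamicUser=yes          # run as a transient, unprivileged user
ProtectSystem=strict     # filesystem read-only except StateDirectory
StateDirectory=gameserver
ProtectHome=yes
PrivateTmp=yes
NoNewPrivileges=yes
```

With `DynamicUser=yes` a compromise of one service doesn't hand the attacker a reusable account, which is roughly the property the separate-users approach is after.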
Ideal if you have the resources (time, money, expertise). There are different levels of qualifications, convenience, and trust that shape what people can and will deploy. This defines where you draw the line - at owning every binary of every service you use, at compiling the binaries yourself, at checking the code that you compile.
> I am not sure why people are so afraid of exposing ports
It's simple, you increase your attack surface, and the effort and expertise needed to mitigate that.
> It's the way the Internet is meant to work.
Along with no passwords or security. There's no prescribed way for how to use the internet. If you're serving one person or household rather than the whole internet, then why expose more than you need out of some misguided principle about the internet? Principle of least privilege, it's how security is meant to work.
This is what I do. You can get Tailscale-like access using things like Pangolin[0].
You can also use a bastion host, or block all ports and set up Tor or i2p, and then anyone that even wants to talk to your server will need to know cryptographic keys to route traffic to it at all, on top of your SSH/WG/etc keys.
> I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that.
This is what I don't do. Anything that needs real internet access like mail, raw web access, etc gets its own VPS where an attack will stay isolated, which is important as more self-hosted services are implemented using things like React and Next[1].
There was a popular post about this less than a month ago: https://news.ycombinator.com/item?id=46305585
I agree maintaining wireguard is a good compromise. It may not be "the way the internet was intended to work" but it lets you keep something which feels very close without relying on a 3rd party or exposing everything directly. On top of that, it's really not any more work than Tailscale to maintain.
Never again, it takes too much time and is too painful.
Certs from Tailscale are reason enough to switch, in my opinion!
The key with successful self hosting is to make it easy and fast, IMHO.
I’m working on a (free) service that lets you have it both ways. It’s a thin layer on top of vanilla WireGuard that handles NAT traversal and endpoint updates so you don’t need to expose any ports, while leaving you in full control of your own keys and network topology.
But some peers are sometimes on the same LAN (eg phone is sometimes on same LAN as pc). Is there a way to avoid forwarding traffic through the server peer in this case?
So yeah, the lesson there is that if you have a port open to the internet, someone will scan it and try to attack it. Maybe not if it's a random game server, but any popular service will get under attack.
But what can you expect from people who provide services but won't even try to understand how they work and how they are configured as it's 'not fun enough', expecting claude code to do it right for them.
Asking AI to do a thing you've done 100 times before is OK, I guess. Asking AI to do a thing you've never done and have no idea how it's properly done - not so much, I'd say. But this guy obviously isn't signaling his sysadmin skills but his AI skills. I hope it brings him the result he aimed for.
Similar here, I only build & run services that I trust myself enough to run in a secure manner by themselves. I still have a VPN for some things, but everything is built to be secure on its own.
It's quite a few services on my list at this point and really don't want to have a break in one thing lead to a break in everything. It's always possible to leave a hole in one or two things by accident.
On the other side this also means I have a Postgres instance with TCP/5432 open to the internet - with no ill effects so far, and quite a bit of trust it'll remain that way, because I understand its security properties and config now.
In many cases they want something that works, not something that requires a complex setup that needs to be well researched and understood.
In every case where a third party is involved, someone is either providing a service, plugging a knowledge gap, or both.
Behind a VPN your only attack surface is the VPN which is generally very well secured.
the new problem is now my isp uses cgnat and there's no easy way around it
tailscale avoids all that, if i wanted more control i'd probably use headscale rather than bother with raw wireguard
Add the generated Wireguard key to any device (laptops, phones, etc) and access your home LAN as if it was local from anywhere in the world for free.
Works well, super easy to setup, secure, and fast.
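The setup described boils down to one config per device. A minimal sketch, with the keys, hostname, and subnets as placeholders:

```ini
# wg0.conf on a roaming device (phone/laptop) -- placeholder values
[Interface]
PrivateKey = <device-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <home-server-public-key>
Endpoint = home.example.net:51820         # the one UDP port forwarded at home
AllowedIPs = 10.8.0.0/24, 192.168.1.0/24  # tunnel subnet + home LAN
PersistentKeepalive = 25                  # keep NAT mappings alive
```

Bring it up with `wg-quick up wg0` (or the WireGuard mobile app, which accepts the same config, usually as a QR code).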
With tailscale / zerotier / etc the connection is initiated from inside to facilitate NAT hole punching and work over CGNAT.
With WireGuard, that removes a lot of attack surface, but it wouldn't work behind CGNAT without a relay box.
Well just use headscale and you'll have control over everything.
It's always perplexing to me how HN commenters replying to a comment with a statement like this, e.g., something like "I prefer [choice with some degree of DIY]", will try to "argue" against it
The "arguments" are rarely, "I think that is a poor choice because [list of valid reasons]"
Instead the responses are something like, "Most people...". In other words, a nonsensical reference to other computer users
It might make sense for a commercial third party to care about what other computer users do, but why should any individual computer user care what others do (besides genuine curiosity or commercial motive)
For example, telling family, friends, colleagues how you think they should use their computers usually isn't very effective. They usually do not care about your choices or preferences. They make their own
Would telling strangers how to use their computers be any more effective
Forum commenters often try to tell strangers what to do, or what not to do
But every computer user is free to make their own choices and pursue their own preferences
NB. I am not commenting on the open ports statement
I’ve been meaning to give this a try this winter: https://github.com/juanfont/headscale
but actually it's worse. this is HN - supposedly, most commenters are curious by nature and well versed in most basic computer stuff. in practice, that's slowly becoming less and less the case.
worse: what is learned and expected is different from what you'd think.
for example, separating service users sure is better than nothing, but the OS attack surface as a local user is still huge, hence why we use sandboxes, which really are just OS level firewalls to reduce the attack surface.
the open port attack surface isn't terrible though: you get a bit more of the very well tested tcp/ip stack, and up to 65k ports all doing the exact same thing. not terrible at all.
Now, add to it "AI" which can automatically regurgitate and implement whatever reddit and stack overflow says.. it makes for a fun future problem - such forums will end up with mostly non-new AI content (new problem being solved will be a needle in the haystack) - and - users will have learned that AI is always right no matter what it decides (because they don't know any better and they're being trained to blindly trust it).
Heck, i predict there will be a chat where a bunch of humans will argue very strongly that an AI is right while it's blatantly wrong, and some will likely put their life on the line to defend it.
Fun times ahead. As for my take: humans _need_ learning to live, but are lazy. Nature fixes itself.
You have also added attack surface: Tailscale client, coordination plane, DERP relays. If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.
WireGuard gives you the same "no exposed ports except VPN" model without the third-party dependency.
The tradeoff is convenience, not security.
BTW, why are people acting like accessing a server from a phone is a 2025 innovation?
SSH clients on Android/iOS have existed for 15 years. Termux, Prompt, Blink, JuiceSSH, pick one. Port N, key auth, done. You can run Mosh if you want session persistence across network changes. The "unlock" here is NAT traversal with a nice UI, not a new capability.
> SSH clients on Android/iOS have existed for 15 years
That is not the point, Tailscale is not just about having a network connection, it's everything that goes with. I used to have OpenVPN, and there's a world of difference.
- The tailscale client is much nicer and convenient to use on Android than anything I have seen.
- The auth plane is simpler, especially for non-tech users (parents, wife) whom I want to be able to access my photo album. They are basically independent with Tailscale.
- The simplicity also allows me to recommend it to friends, and we can link between our tailnets, e.g. to cross-backup our NAS.
- Tailscale Funnel can terminate TLS publicly, so I can selectively expose services on the internet (e.g. VaultWarden) without exposing my server or hosting a reverse proxy.
- ACLs are simple and user friendly.
nothing 100% fixes zero days either, you are just adding layers that all have to fail at the same time
> You have also added attack surface: Tailscale client, coordination plane, DERP relays. If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.
you still have to have a vulnerable service after that. in your scenario you'd need an exploitable attack on wireguard or one of tailscale's modifications to it and an exploitable service on your network
that's extra difficulty not less
Now I have tailscale on an old Kindle downloading epubs from a server running Copyparty. It's great!
Basically, I feel that tailscale does not make it very easy to set up services this way, and the only method I have figured out has a bit too many steps for my liking:
- to expose some port to the tailnet, there needs to be a `tailscale serve` command to expose its ports
- in order for this command to run on startup and such, it needs to be made into a script that is run as a SystemD service
- if you want to do this with N services, then you need to repeat these steps N times
Is this how you do it? is there a better way?
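The steps above can at least be packaged as one small unit per service. A sketch, with the service name and ports illustrative (flag syntax varies by tailscale version, so check `tailscale serve --help`):

```ini
# /etc/systemd/system/tailscale-serve-myapp.service -- illustrative
[Unit]
Description=Expose myapp on the tailnet
After=tailscaled.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/tailscale serve --bg --https=443 http://127.0.0.1:8080
ExecStop=/usr/bin/tailscale serve --https=443 off

[Install]
WantedBy=multi-user.target
```

For N services this is still N small files, but they're declarative and survive reboots.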
LLMs are also a huge upgrade here since they are actually quite competent at helping you set up servers.
In my experience this is much less of an issue depending on your configuration and what you actually expose to the public internet.
OS-side, as long as you pick a good server OS (for me that's Rocky Linux) you can safely update once every six months.
Application-wise, I try to expose as little as possible to the public internet, and everything exposed is running in an unprivileged podman container. Random test stuff is only exposed within the VPN.
Also, Tailscale is not even a hard requirement: I run OpenVPN and that works as well, on my iPhone too.
The truly differentiating factor is methodological, not technological.
It's only swarms of bots and scripts going through the entire internet, my server included.
iptables and fail2ban should be installed pretty early, and then - just watch the logs.
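A minimal fail2ban starting point, as a sketch with illustrative thresholds:

```ini
# /etc/fail2ban/jail.local -- illustrative thresholds
[sshd]
enabled  = true
maxretry = 5        # failed attempts before a ban
findtime = 600      # counted within this many seconds
bantime  = 3600     # ban for an hour
```

`fail2ban-client status sshd` then doubles as the "watch the logs" step: it shows you exactly who is knocking.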
TS is cool if you have a well-defined security boundary. This is you / your company / your family, they should have access. That is the rest of the world, they should not.
My use case is different. I do occasionally want to share access to otherwise personal machines around. Tailscale machine sharing sort of does what I want, but it's really inconvenient to use. I wish there was something like a Google Docs flow, where any Tailscale user could attempt to dial into my machine, but they were only allowed to do so after my approval.
For the permissions, just add basic auth in the reverse proxy and choose whom to share the passwd with.
Now if you want OAuth or something like that... well tough luck, you need to set up OIDC or whatever and that's going to be taking you some time, but it still works how you want.
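The basic-auth-in-the-reverse-proxy approach is only a few lines in nginx; the path and upstream port here are hypothetical:

```nginx
# htpasswd file created once with: htpasswd -c /etc/nginx/.htpasswd alice
location /photos/ {
    auth_basic           "Private";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass           http://127.0.0.1:2342/;  # e.g. a photo app
}
```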
Claude Code or other assistants will give you conversational management.
I already do the former (using Pangolin). I'm building towards the latter but first need to be 100% sure I can have perfect rollback and containment across the full stack CC could influence.
The way I've put this into practice is that instead of letting claude loose on production files and services, i keep a local repo containing copies of all my service config files with a CLAUDE.md file explaining what each is for, the actual host each file/service lives on, and other important details. If I want to experiment with something ("Let's finally get around to planning out and setting up kea-dhcp6!"), Claude makes its suggestions and changes in my local repo, and then I manually copy the config files to the right places, restart services, and watch to see if anything explodes.
Not sure I'd ever be at the point of trusting agentic AI to directly modify in-place config files on prod systems (even for homelab values of "prod").
But Tailscale is just a VPN (and by VPN, I mean: something more like "connect to the office network" than "NordVPN"). It provides a private network on top of the public network, so that member devices of that VPN can interact together privately.
Which is pretty great: It's a simple and free/cheap way for me to use my pocket supercomputer to access my stuff at home from anywhere, with reasonable security.
But because it happens at the network level, you (generally) need to own the machines that it is configured on. That tends to exclude using it in meaningful ways with things like library kiosks.
Your cloudflare tunnel availability depends on Cloudflare’s mood of the day.
The only thing served on / is a hello world nginx page. Everything else you need to know the randomly generated subpath route.
I was in another country when there was a power outage at home. My internet went down; the server restarted but couldn't reconnect anymore because the optical network router also had some problems after the outage. I could ask my folks to restart things and turn them on and off, but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped, but the more I thought about it afterwards, the less it seemed worth the hassle. Add a UPS, okay. But why not add a dual-WAN failover router for extra resilience if the internet goes down again? etc. It's a bottomless pit (like most hobbies tbh)
Also (and that's a me problem maybe) I was using Tailscale but I'm more "paranoid" about it nowadays. Single point of failure service, US-only SSO login (MS, Github, Apple, Google), what if my Apple account gets locked if I redeem a gift card and I can't use Tailscale anymore? I still believe in self hosting but probably I want something even more "self" to the extremes.
Then 5 years later there was a power outage and the UPS lasted for about 10 seconds before the batteries failed. That's how I learned about UPS battery maintenance schedules and the importance of testing.
I have a calendar alert to test the UPS. I groan whenever it comes up because I know there's a chance I'm going to discover the batteries won't hold up under load any more, which means I not only have to deal with the server losing power but I have to do the next round of guessing which replacement batteries are coming from a good brand this time. Using the same vendor doesn't even guarantee you're going to get the same quality when you only buy every several years.
Backup generators have their own maintenance schedule.
I think the future situation should be better with lithium chemistry UPS, but every time I look the available options are either exorbitantly expensive or they're cobbled together from parts in a way that kind of works but has a lot of limitations and up-front work.
Another maker is Goldenmate (lest I be accused of being an ad)
This also makes self-hosting more viable, since our availability is constrained by the internet provider rather than power.
Of course that means we’ll not have another ice storm in my lifetime. My neighbors should thank me.
https://www.ankersolix.com/ca/products/f2600-400w-portable-s...
What setup did you go with for whole house backup power?
If you are going to be away from home a lot, then yes, it's a bottomless pit. Because you have to build a system that does not rely on the possibility of you being there, anytime.
If you just want to put a service on the internet, a VPS is the way to go.
Self hosting sounds so simple, but if you consider all the critical factors involved, it becomes a full-time job. You own your server. In every regard.
And security is only one crucial aspect. How spam filters react to your IP is another story.
In the end I cherish the dream but rely on third party server providers.
it's so much simpler when you have the files stored locally; then syncing between devices is just something that can happen whenever. anything that is running on a server needs user permissions, wifi, a router, etc. it's just a lot of complexity for very little gain.
although keep in mind im the only one using all of this stuff. if i needed to share things with other people then syncthing gets a bit trickier and a central server starts to make more sense
So now you need to test them regularly. And order new ones when they're not holding a charge any more. Then power down the server, unplug it, pull the UPS out, swap batteries, etc.
Then even when I think I've got the UPS automatic shutdown scripts and drivers finally working just right under linux, a routine version upgrade breaks it all for some reason and I'm spending another 30 minutes reading through obscure docs and running tests until it works again.
I've also worked in environments where the most pragmatic solution was to issue a reboot periodically and accept the minute or two of (external) downtime. Our problem is probably down to T-Mobile's lousy consumer hardware.
How bottomless of a pit it becomes depends on a lot of things. It CAN become a bottomless pit if you need perfect uptime.
I host a lot of stuff, but nextcloud to me is photo sync, not business. I can wait til I'm home to turn the server back on. It's not a bottomless pit for me, but I don't really care if it has downtime.
I have 7 computers on my self-hosted network and not all of them are on-prem. With a bit of careful planning, you can essentially create a system that will stay up regardless of local fluctuations etc. But it is a demanding hobby and if you don't enjoy the IT stuff, you'll probably have a pretty bad time doing it. For most normal consumers, self-hosting is not really an option and isn't worth the cost of switching over. I justify it because it helps me understand how things work and tangentially improves my professional skills as well.
Tailscale also has a self-hosted version I believe.
You can even self-host Tailscale via headscale - I don't know how that experience goes - and there is some genuinely open source software like NetBird, ZeroTier, etc. as well.
You could also if interested just go the normal wireguard route. It really depends on your use case but for you in this case, ssh use case seems normal.
You could even use this with termux in android + ssh access via dropbear I think if you want. Tailscale is mainly for convenience tho and not having to deal with nats and everything
But I feel like your home server might be behind a NAT, and in that case, what I recommend you do is either A) run it over Tor or https://gitlab.com/CGamesPlay/qtm (which uses iroh's instance, but you can self-host it too), or B (recommended): get a cheap unlimited-traffic VPS (I recommend Upcloud, OVH, Hetzner), which would cost around 3-4$ per month, and then install something like remotemoe https://github.com/fasmide/remotemoe or anything similar to it, effectively like a proxy.
Sorry if I went a little overkill tho lol. I have played too much on these things so I may be overarchitecting stuff but if you genuinely want self hosting to the extreme self, tor.onion's or i2p might benefit ya but even buying a vps can be a good step up
> I was in another country when there was a power outage at home. My internet went down; the server restarted but couldn't reconnect anymore because the optical network router also had some problems after the outage. I could ask my folks to restart things and turn them on and off, but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped, but the more I thought about it afterwards, the less it seemed worth the hassle. Add a UPS, okay. But why not add a dual-WAN failover router for extra resilience if the internet goes down again? etc. It's a bottomless pit (like most hobbies tbh)
Laptops have a built-in UPS and are cheap; laptops and refurbished servers are a good entry point imo. Sure, it's a bottomless pit, but the benefits are well worth it, and at some point you have to look at the trade-offs - personally, laptops and refurbished or resale servers are that point for me. In fact, I used to run a git server on an Android tab for some time, but I've been too lazy to figure out whether I want it charging permanently or what.
> I am spending time using software, learning
What are you actually learning?
PSA: OP is a CEO of an AI company
You can watch your doctor, your plumber, your car mechanic and still wouldn't know if they did something wrong if you don't know the subject as such.
Wrote about learning and fun here: https://fulghum.io/fun2
Having others run a service for you is a good thing! I'd love to pay a subscription for a service, but ran as a cooperative, where I'm not actually just paying a subscription fee, instead I'm a member and I get to decide what gets done as well.
This model works so well for housing, where the renters are also the owners of the building. Incentives are aligned perfectly, rents are kept low, the building is kept intact, no unnecessary expensive stuff added. And most importantly, no worries of the building ever getting sold and things going south. That's what I would like for my cloud storage, e-mail etc.
I was thinking about what if your "cloud" was more like a tilde.club, with self hosted web services plus a Linux login. What services would you want?
Email and cloud make sense. I think a VPN and Ad Blocker would too. Maybe Immich and music hosting? Calendar? I don't know what people use for self hosting
I'd really focus on it being usable for non-techies, I don't think I'd want a linux login for anything. IMO, the focus should be on the basic infrastructure of digital life for the everyday person.
tilde.club sounds interesting though! Hadn't heard of it before.
I say all this as someone who's been self-hosting services in one form or another for almost a decade at this point. The market incorporation/consumerification of the hobby has been so noticeable in the last five years. Even this AI thing seems like another step in that direction; now even non-experts can drop $350+ on consumer hardware and maybe $100 on some network gear so that they can control their $50/bulb Hue lights and manage their expansive personal media collection.
But Tailscale is the real unlock in my opinion. Having a slot machine cosplaying as sysadmin is cool, but being able to access services securely from anywhere makes them legitimately usable for daily life. It means your services can be used by friends/family if they can get past an app install and login.
I also take minor issue with running Vaultwarden in this setup. Password managers are maximally sensitive and hosting that data is not as banal as hosting Plex. Personally, I would want Vaultwarden on something properly isolated and locked down.
That said, I'm not sure if Bitwarden is the answer either. There is certainly some value in obscurity, but I think they have a better infosec budget than I do.
I cannot say how happy I am configuring my own immich server on a decade-old machine. I just feel empowered. Because despite my 9 years of software development, I haven't gotten into the nitty-gritty of networking and VPNs, and I always hit something non-standard while installing an open source package; without all of this custom guidance, I would always give up after a couple of hours of pulling my hair out.
I really want to go deeper and it finally feels this could be a hobby.
PS: The rush was so great I was excitedly talking to my wife how I could port our emails away from google, considering all of the automatic opt in for AI processing and what not. The foolhardy me thought of even sabbatical breaks to work on long pending to-do's in my head.
I've been email self-hosting for a decade, and unfortunately, self-hosting your email will not help with this point nearly as much as it seems on first glance.
The reason is that as soon as you exchange emails with anyone using one of the major email services like gmail or o365, you're once again participating in the data collection/AI training machine. They'll get you coming or they'll get you going, but you will be got.
I do want to be able to take control; with photos and Google not giving me a folder view to manage them was the last straw that pushed me deep into the self hosted world. I just want to de-google as much as reasonable.
There are so many NAS + Curated App Catalog distros out there that make self-hosting trivial without needing to Vibe SysAdmin.
In my experience, this approach works extremely well—I would not have been able to accomplish this much on my own. However, there is an important caveat: you must understand what you are doing. AI systems sometimes propose solutions that do not work, and in some cases they can be genuinely dangerous to your data integrity or your privacy.
AI is therefore a powerful accelerator, not a replacement for expertise. You still need to critically evaluate its suggestions and veto roughly 10% of them.
On self-hosting: be aware that it is a warzone out there. Your IP address will be probed constantly for vulnerabilities, and even those probes need to be dealt with, as most automated probes don't throttle and can impact your server. That's probably my biggest issue, along with email deliverability.
Haproxy with SNI routing was simple and worked well for many years for me.
Istio installed on a single node Talos VM currently works very well for me.
Both have sophisticated circuit breaking and ddos protection.
For users, I put admin interfaces behind WireGuard and block TCP by source IP at the 443 listener.
I expose one or two things to the public behind an oauth2-proxy for authnz.
Edit: This has been set and forget since the start of the pandemic on a fiber IPv4 address.
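A sketch of the SNI-routing idea in HAProxy - TLS is passed through by SNI rather than terminated, and all names and ports are illustrative:

```haproxy
frontend tls_in
    mode tcp
    bind :443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }  # wait for the ClientHello
    use_backend git if { req_ssl_sni -i git.example.net }
    default_backend web

backend git
    mode tcp
    server git1 127.0.0.1:8443

backend web
    mode tcp
    server web1 127.0.0.1:9443
```

Because the proxy never decrypts anything, each backend keeps its own certificates, which is what makes this setup so low-maintenance.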
Years later I still had the same router. Somewhere along the line, I fired the right neurons and asked myself, "When was the last time $MANUFACTURER published an update for this? It's been awhile..."
In the context of just starting to learn about the fundamentals of security principles and owning your own data (ty hackernews friends!), that was a major catalyst for me. It kicked me into a self-hosting trajectory. LLMs have saved me a lot of extra bumps and bruises and barked shins in this area. They helped me go in the right direction fast enough.
Point is, parent comment is right. Be safe out there. Don't let your server be absorbed into the zombie army.
The idea is that a contract is defined saying which options exist and what they mean. For backups, you'd get the Unix user doing the backup, what folders to back up, and what patterns to exclude. But also what script can be run to create a backup and restore from one.
Then you'd get a contract consumer, the application to be backed up, which declares which folders to back up and under which users.
On the other side you have a contract provider, like Restic or Borgbackup, which understands this contract and thanks to it knows how to back up the application.
As the user, your role is just to plug a contract provider into a consumer - to choose which provider backs up which application.
This can be applied to LDAP, SSO, secrets and more!
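The contract idea can be sketched in a few lines of Python. All names here are hypothetical, and the provider only builds its command instead of running anything:

```python
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class BackupContract:
    """Declared by the consumer: the application that wants backing up."""
    unix_user: str                                    # run the backup as this user
    folders: list                                     # what to back up
    exclude: list = field(default_factory=list)       # patterns to skip


class BackupProvider(Protocol):
    """Implemented by a provider, e.g. a Restic or Borg wrapper."""
    def backup_cmd(self, contract: BackupContract) -> list: ...


class ResticProvider:
    def __init__(self, repo: str):
        self.repo = repo

    def backup_cmd(self, contract: BackupContract) -> list:
        # Build (not run) the command; a real runner would drop to contract.unix_user
        cmd = ["restic", "-r", self.repo, "backup", *contract.folders]
        for pattern in contract.exclude:
            cmd += ["--exclude", pattern]
        return cmd


# The user's only job: plug a provider into a consumer
nextcloud = BackupContract("www-data", ["/var/www/nextcloud/data"], ["*.tmp"])
print(ResticProvider("/srv/backups").backup_cmd(nextcloud))
# → ['restic', '-r', '/srv/backups', 'backup', '/var/www/nextcloud/data', '--exclude', '*.tmp']
```

Swapping Restic for Borg would mean writing one new provider class; no consumer has to change, which is the whole appeal of the contract.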
Proxmox Backup Server?
No judgement, but wanting to tinker/spend time on configuration is a major reason why many people do self-host.
p0wnland. this will have script kiddies rubbing their hands
This is nonsense. You can't self-host services meant to interact with the public (such as email, websites, Matrix servers, etc.) without a public IP, preferably one that is fixed.
I recently had a bunch of breakages and needed to port a setup - I had a complicated k3s container in proxmox setup but needed it in a VM to fix various disk mounts (I hacked on ZFS mounts, and was swapping it all for longhorn)
As is expected, life happens and I stopped having time for anything so the homelab was out of commission. I probably would still be sitting on my broken lab given a lack of time.
It's not tariffs (I'm in Switzerland). It's 100% the buildout of data centers for AI.
But I wanted decent deployments. Hosting an image repository cost 3-4x the price of the server. Sending over the container image took over an hour due to large image-processing Python dependencies.
Solution? Had a think and a chat with Claude code, now I have blue-green deployments where I just upload the code which takes 5 seconds, everything is then run by systemd. I looked at the various PaaSes but they ran up to $40/month with compute+database etc.
I would probably never have built this myself. I'd have gotten bored 1/3 through. Now it's working like a charm.
Is it enterprise grade? Gods no. Is it good enough? Yes.
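A blue-green swap like the one described might look something like the plan below. The unit names, ports, and paths are guesses rather than the author's actual setup, and the script only builds the command list instead of executing it:

```python
# Hypothetical sketch of a blue-green swap driven by systemd + nginx.
PORTS = {"blue": 8000, "green": 8001}   # one port per color (illustrative)


def plan_swap(active: str) -> list:
    """Commands to bring up the idle color, flip traffic, retire the old one."""
    idle = "green" if active == "blue" else "blue"
    return [
        f"systemctl start app-{idle}.service",                            # boot new version
        f"curl -fsS http://127.0.0.1:{PORTS[idle]}/health",               # smoke test first
        f"ln -sfn /etc/nginx/upstream-{idle}.conf /etc/nginx/upstream.conf",
        "systemctl reload nginx",                                         # flip traffic
        f"systemctl stop app-{active}.service",                           # retire old version
    ]


for cmd in plan_swap("blue"):
    print(cmd)
```

The key property is that traffic only flips after the health check passes, so a bad upload never takes the site down.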
When using them with production code they are a liability more than a resource.
I just wish this post wasn’t written by an LLM! I miss the days where you can feel the nerdy joy through words across the internet.
But I want to host an LLM.
I have a 1U (or more), sitting in a rack in a local datacenter. I have an IP block to myself.
Those servers are now publicly exposed and only a few ports are exposed for mail, HTTP traffic and SSH (for Git).
I guess my use case also changes in that I don’t use things just for me to consume, select others can consume services I host.
My definition here of self-hosting isn't that I and I only can access my services; that'd be me having a server at home which has some non-critical things on it.
CC lets you hack together internal tools quickly, and tailscale means you can safely deploy them without worrying about hardening the app and server from the outside world. And tailscale ACLs lets you fully control who can access what services.
It also means you can literally host the tools on a server in your office, if you really want to.
Putting CC on the server makes this set up even better. It’s extremely good at system admin.
Took a couple hours with some things I ran across, but the model walked me through the Debian setup GUI, what to check to make it server-only, then took me through commands to run so it wouldn't stop when I closed the laptop, helped with Tailscale, and got the SSH keys all set up. Heck, it even suggested doing daily dumps of the database, saving them to MinIO, and removing them after that. It also knows about the limitations of 8 gigs of RAM and how to make sure docker settings for the different self-hosted services I want to build don't cause issues.
Give me a month, strong intent, and the ability to Google and read posts to find the answers on my own, and I still don't think I would have gotten to this point with the amount of trust I have in the setup.
I very much agree with this topic about self hosting coming alive because these models can walk you through everything. Self building and self hosting can really come alive. And in the future when open models are that much better and hardware costs come down (maybe, just guessing of course) we'll be able to also host our own agents on these machines we have setup already. All being able to do it ourselves.
related "webdev is fun again": claude. https://ma.ttias.be/web-development-is-fun-again/
Also the "Why it matters" in the article. I thought it was a jab at AI-generated articles, but it starts to look like the article was AI-written as well.
Waiting for the follow-on article “Claude Code reformatted my NAS and I lost my entire media collection.”
[1] https://martin.kleppmann.com/2025/12/08/ai-formal-verificati...
I share a lot of the same hesitations as others in the thread about using a giant US-based tech giant's tool for research as well as another US giant's tool to manage access, but it's really a game changer, and I'd be unable to find the time to do everything I want if I didn't have access to these otherwise.
I'm not even a software guy by engineering, my network is already complicated enough that learning and correctly securing things otherwise would simply just not be feasible with the time and energy I'd like to dedicate to it.
If you have your own agent, then it can talk to whatever you want - could be OpenRouter configured to some free model, or could be to a local model too. If the local model wasn't knowledgeable enough for sysadmin you could perhaps use installable skills (scripts/programs) for sysadmin tasks, with those having been written by a more powerful model/agent.
What’s the goal? If the act of _building_ a homelab is the fun then i agree 100%. If _having_ a reliable homelab that the family can enjoy is the goal, then this doesn’t matter.
For me personally, my focus is on “shipping” something reliable with little fuss. Most of my homelab skills don’t translate to my day job anyway. My homelab has a few docker compose stacks, whereas at work we have an internal platform team that lets me easily deploy a service on K8s. The only overlap here is docker lol. Manually tinkering with ports and firewall rules, using sqlite, backups with rsync, etc…all irrelevant if you’re working with AWS from 9-5.
I guess I’m just pointing out that some people want to build it and move on.
I'll agree to disagree on it not being applicable. Fundamental knowledge of topics like networking gained through homelabbing has helped me develop my understanding from the ground up. It helps in ways that are not always obvious. But if your goal is purely to be better at your job at work, it is not the most efficient path.
Enlightenment here comes when you realize others are doing the exact same thing with the exact same justification, and everyone's pain/reward threshold is different. The argument you are making justifies their usage as well as yours.
In that case, it's not about the 'joy of creation', but actually getting everything up and running again, in which case LLMs are indispensable.
From time to time, test the restore process.
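One cheap way to do that is a script that takes a backup, restores it into a scratch directory, and diffs against the original. This is just a sketch with made-up paths (here it creates its own demo data; point SRC at your real data dir):

```shell
#!/bin/sh
# Sketch: end-to-end restore test -- back up, restore to scratch, diff.
set -eu

SRC="$(mktemp -d)"                 # stand-in for your real data directory
echo "important data" > "$SRC/notes.txt"

BACKUP="$(mktemp -u).tar.gz"
SCRATCH="$(mktemp -d)"

tar -czf "$BACKUP" -C "$(dirname "$SRC")" "$(basename "$SRC")"   # take the backup
tar -xzf "$BACKUP" -C "$SCRATCH"                                  # restore it
diff -r "$SRC" "$SCRATCH/$(basename "$SRC")" && echo "restore OK" # fail loudly on mismatch

rm -rf "$SRC" "$SCRATCH" "$BACKUP"
```

If the diff ever fails, you find out on your schedule rather than during an outage.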
They tend to slip out of declarative mode and start making untracked changes to the system from time to time.
>I am spending time using software, learning, and having fun - instead of maintaining it and stressing out about it.
Using software, learning, and having fun with what? Everything is being done by Claude here. Part of the fun and learning is learning to use and maintain it in the first place. How will you learn anything if Claude is doing everything for you? You won't understand how things work or where everything goes.
This post could have been written, or at least modified, by an LLM, but more importantly I think this person is completely missing the point of self-hosting and learning.
"I did it!" Except you didn't, and you don't know anything about what it did, nor did you learn anything along the way. Success?
But for what i'm using Agents right now, claude code is the tool to go.
workdir/
├── README.md
├── CLAUDE.md            # Claude Code instructions
├── BACKUP.md            # Backup documentation
├── .gitignore
├── traefik/
│   ├── docker-compose.yml
│   └── config/
│       └── traefik.yml
├── authentik/
│   ├── docker-compose.yml
│   └── .env.example
├── umami/
│   ├── docker-compose.yml
│   └── .env.example
├── n8n/
│   ├── docker-compose.yml
│   └── .env.example
└── backup/
    ├── backup.sh            # Automated backup script
    ├── restore.sh           # Restore from backup
    ├── verify.sh            # Verify backup integrity
    ├── list-backups.sh      # List available backups
    └── .env.example
We've gone a step further, and made this even easier with https://zo.computer
You get a server, and a lot of useful built-in functionality (like the ability to text with your server)
I agree you could use LLMs to learn how it works, but given that they explain and do the actions, I suspect the vast majority aren't learning anything. I've helped students who are learning to code, and very often they just copy/paste back and forth and ignore the actual content.
I'm asking Claude technical questions about setup, e.g., read this manual, that I have skimmed but don't necessarily fully understand yet. How do I monitor this service? Oh connect Tailscale and manage with ACLs. But what do I do when it doesn't work or goes down? Ask Claude.
To get more accurate setup and diagnostics, I need to share config files, firewall rules, IPv6 GUAs, Tailscale ACLs... and Claude just eats it up, and now Anthropic knows it forever too. Sure, CGNAT, WireGuard, and SSH logins stand between us, but... Claude is running in a terminal window on a LAN device next to another terminal window that does have access to my server. Do I trust VS Code? Anthropic? The FOSS? Is this really self-hosting? Ahh, but I am learning stuff, right?
Tbh I made the mistake of throwing away Ansible, so testing my setup was a pain!
Since with AI, the focus should be on testing, perhaps it's sensible to drop Ansible for something like https://github.com/goss-org/goss
Things are happening so fast, I was impressed to see a Linux distro embrace using a SKILL.md! https://github.com/basecamp/omarchy/blob/master/default/omar...
I’ll bite. You can save a lot of money by buying used hardware. I recommend looking for old Dell OptiPlex towers on Facebook Marketplace or from local used computer stores. Lenovo ThinkCentres (e.g., m700 tiny) are also a great option if you prefer something with a smaller form factor.
I’d recommend disregarding advice from non-technical folks recommending brand new, expensive hardware, because it’s usually overkill.
I'm not familiar with Dell product names specifically, but 'tower' sounds like it'll sit there burning 200W idle. Old laptops (with the battery slid out) are what I've been opting for; they use barely anything more than the router they sit next to. Especially if you just want to serve static files, as GP seems to be looking for, an old smartphone would be enough, but there you can't remove the battery (since it won't run off just the charger).
My first "server" was a €65 second-hand laptop including shipping, iirc, in ~2010 euros, so say maybe €100 now when taking inflation into account. I used that for a number of years and had a good idea of what I wanted from my next setup (which wasn't much heavier, but a slightly newer CPU wasn't amiss after 3 years). I don't think one even needs to go as far as $200 for a "local Bandcamp archive" (static file storage) served via some streaming webserver.
Jellyfin docs do mention "Not having a GPU is NOT recommended for Jellyfin, as video transcoding on the CPU is very performance demanding" but that's for on-the-fly video transcoding. If you transcode your videos to the desired format(s) upon import, or don't have any videos at all yet as in GP's case, it doesn't matter if the hardware is 20x slower. Worst case, you just watch that movie in source material quality: on a LAN you won't have network speed bottlenecks anyway, and transcoding on GPU is much more expensive (purchase + ongoing power costs) than the gigabit ethernet that you can already find by default on every laptop and router
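As a sketch of the transcode-on-import approach (filename and quality settings here are illustrative, and this assumes ffmpeg is installed):

```
# One-off CPU transcode at import time, instead of on-the-fly in Jellyfin:
ffmpeg -i movie-source.mkv -c:v libx264 -crf 21 -preset slow -c:a aac movie.mp4
```

It can run overnight at whatever speed the old hardware manages; playback later is then just file serving.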
(In)famous last words?
For now I'm just using Cloudflare tunnels, but ideally I also want to do that myself (without getting DDoS)
The structure is dead simple: `machines/<hostname>/stacks/<service>/` with a `config.sh` per machine defining SSH settings and optional pre/post deploy hooks. One command syncs files and runs `docker compose up -d`.
I could see Claude Code being useful for debugging compose files or generating new stack configs, but having the deployment itself be a single `./deploy.sh homeserver media` keeps the feedback loop tight and auditable.
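A minimal sketch of that shape might look like this (paths and the SSH target are hypothetical, and the `DRY_RUN` flag is my addition for auditing, not part of the original script):

```shell
#!/bin/sh
set -eu

# deploy <host> <service>: sync one stack to the machine and (re)start it.
# DRY_RUN=1 just prints the commands instead of running them.
deploy() {
  host="$1"; service="$2"
  stack_dir="machines/$host/stacks/$service"
  ssh_target="root@$host"   # in the real setup this would come from config.sh

  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "+ rsync -az --delete $stack_dir/ $ssh_target:stacks/$service/"
    echo "+ ssh $ssh_target 'cd stacks/$service && docker compose up -d'"
  else
    rsync -az --delete "$stack_dir/" "$ssh_target:stacks/$service/"
    ssh "$ssh_target" "cd stacks/$service && docker compose up -d"
  fi
}

DRY_RUN=1 deploy homeserver media   # audit what would run, no SSH needed
```

The dry-run mode is what keeps it auditable: you can always see exactly what a deploy will do before it does it.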
It's simple enough and I had some prior experience with it, so I merely have some variables, roles that render a docker-compose.yml.j2 template and boom. It all works, I have easy access to secrets, shared variables among stacks and run it with a simple `ansible-playbook` call.
If I forget/don't know the Ansible modules, Claude or their docs are really easy to use.
Every time I went down a bash script route I felt like I was re-inventing something like Ansible.
Which basically accomplishes the same thing, but gives a bit more UI for debugging when needed.
For example - I have ZFS running with a 5-bay HDD enclosure, and I honestly can't remember any of the rules about import-ing / export-ing to stop / start / add / remove pools etc.
I have to write many clear notes, and store them in a place where future me will find them - otherwise the system gets very flaky through my inability to remember what's active and what isn't. Running the service and having total control is fun, but it's a responsibility too
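For what it's worth, the note I keep for exactly this situation is just a handful of commands (the pool name is illustrative):

```
zpool status                 # what's imported and healthy right now
zpool export tank            # cleanly detach the pool before unplugging
zpool import                 # list pools visible but not yet imported
zpool import tank            # reattach it
zpool scrub tank             # periodic integrity check
```

Five lines in a README next to the enclosure saves a lot of re-reading man pages a year later.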
If you need to run the command once, you can now run it again in the future.
It's very tempting to just paste some commands (or ask AI to do it) but writing simple scripts like this is an amazing solution to these kinds of problems.
Even if the scripts get outdated and no longer work (maybe it's a new version of X) it'll give you a snapshot of what was done before.
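One pattern that makes such scripts safe to re-run is having each step check before it acts. A sketch, with a made-up service name and path:

```shell
#!/bin/sh
# Sketch: record one-off setup commands as an idempotent script,
# so running it again next year is harmless.
set -eu

DATA_DIR="${DATA_DIR:-$HOME/.local/share/myservice}"   # hypothetical path

if [ ! -d "$DATA_DIR" ]; then
  mkdir -p "$DATA_DIR"
  echo "created $DATA_DIR"
else
  echo "$DATA_DIR already exists, skipping"
fi
```

Even when a step has rotted, the guard tells you what the script expected to find, which is exactly the snapshot of past intent you want.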
In this case you will be completely unable to navigate the home server infrastructure that your life will have become dependent on.
But a homeserver is always about your levels of risk, single points of failure. I'm personally willing to accept Tailscale but I'm not willing to give the manipulation of all services directly over to Claude.
I wonder if a local model might be enough for sysadmin skills, especially if it were trained specifically for this?
I wonder if iOS has enough hooks available that one could make a very small/simple agentic Siri replacement like this that could manage the iPhone at least better than Siri does (start and stop apps, control them, install them, configure the iPhone, etc.)?
What I've found: Claude Code is great at the "figure out this docker/nginx/systemd incantation" part but the orchestration layer (health checks, rollbacks, zero-downtime deploys) still benefits from purpose-built tooling. The AI handles the tedious config generation while you focus on the actual workflow.
github.com/elitan/frost if curious
Historically, managed platforms like Fly.io, Render, and DigitalOcean App Platform existed to solve three pain points:
1. Fear of misconfiguring Linux
2. Fear of Docker / Compose complexity
3. Fear of "what if it breaks at 2am?"
CLI agents (Claude Code, etc.) dramatically reduce (1) and (2), and partially reduce (3).
So the tradeoff has changed from:
“Pay $50–150/month to avoid yak-shaving” → “Pay $5–12/month and let an agent do the yak-shaving”
(and no, this product is not against TOS as it is using the official claude code SDK unlike opencode https://yepanywhere.com/tos-compliance.html)
And ironically all in the name of "self hosting". Claude Code defies both words there.
Then someday we self-host the AI itself, and it all comes together.
My self hosted things all run as docker containers inside Alpine VMs running on top of Proxmox. Services are defined with Docker Compose. One of those things is a Forgejo git server along with a runner in a separate VM. I have a single command that will deploy everything along with a Forgejo action that invokes that command on a push to main.
I then have Renovate running periodically set to auto-merge patch-level updates and tag updates.
Thus, Renovate keeps me up to date and git keeps everyone honest.
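The auto-merge part is a small rule in `renovate.json`, something along these lines (a sketch; check the Renovate docs for the exact options you want):

```json
{
  "packageRules": [
    {
      "matchUpdateTypes": ["patch"],
      "automerge": true
    }
  ]
}
```

Minor and major updates still arrive as PRs for review, so nothing risky lands unattended.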
Is there a replica implementation that works in the direction I want?
And until now without AI. I'm kind of curious, but afraid that it will bring my servers down and then I can't roll back :D But perhaps if I moved over to NixOS, rolling back would be easy.
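For reference, rollback is one of the things NixOS does make genuinely easy:

```
sudo nixos-rebuild switch --rollback   # revert to the previous system generation
# or pick any older generation from the boot menu at startup
```

Every rebuild keeps the old generation around, so an AI-induced mess is one command away from undone.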
I route it through a familiar interface like slack tho as I don't like to ssh from phone or w/e using a tool I built - https://www.claudecontrol.com/
And ideally doing it via lxc or vm.
Extra complication but gives you something repeatable that you can stick on git
But I wouldn't give the keys of the house to Claude or any LLM for that matter. When needed, I ask them questions and type commands myself. It's not that hard.
Avoid stacking in too many hard drives since each one uses almost as much power as the desktop does at idle.
Is it just a single docker-compose.yml with everything you want to run and 'docker compose up'?
Agents are powerful. Even more so with skills and command line tools they can call to do things. You can even write custom tools (like I did) for them to use that allows for things like live debugging.
The tailscale piece to this setup is key.
I am writing a personal application to simplify home server administration if anybody is interested: https://github.com/prettydiff/aphorio
It's comedic at this point.
I suspect it may have been related to the Network File System (NFS)? Like whenever I read a file on the host machine, it goes across the data-center network and charges me? Is this correct?
Anyway, I just decided to take control of those costs. Took me 2 weeks of part-time work to migrate all my stuff to a self-hosted machine. I put everything behind Cloudflare with a load balancer. Was a bit tricky to configure as I'm hosting multiple domains from the same machine. It's a small form factor PC tower with 20 CPU cores; easily runs all my stuff though. In 2 months, I already recouped the full cost of the machine through savings in my AWS bill. Now I pay like $10 a month to Cloudflare and even that's basically an optional cost. I strongly recommend.
Anyway it's impressive how AWS costs had been creeping slowly and imperceptibly over time. With my own machine, I now have way more compute than I need. I did a calculation and figured out that to get the same CPU capacity (no throttling, no bandwidth limitations) on AWS, I would have to pay like $1400 per month... But amortized over 4 years my machine's cost is like $20 per month plus $5 per month to get a static IP address. I didn't need to change my internet plan other than that. So AWS EC2 represented a 56x cost factor. It's mind-boggling.
I think it's one of these costs that I kind of brushed under the carpet as "It's an investment." But eventually, this cost became a topic of conversation with my wife and she started making jokes about our contribution to Jeff Bezos' wife's diamond ring. Then it came to our attention that his megayacht is so large that it comes with a second yacht beside it. Then I understood where he got it all from. Though to be fair to him, he is a truly great businessman; he didn't get it from institutional money or complex hidden political scheme; he got it fair and square through a very clever business plan.
Over 5 years or so that I've been using AWS, the costs had been flat. Meanwhile the costs of the underlying hardware had dropped to like 1/56th... and I didn't even notice. Is anything more profitable than apathy and neglect?
Bandwidth inside the same zone is free.
Maybe viable if you have a bunch of spare parts laying around. But probably not when RAM and storage prices are off the charts!
I had a 30-year-old file on my Mac that I wanted to read the content of. I had created it in some kind of word processing software, but I couldn’t remember which (Nexus? Word? MacWrite? ClarisWorks? EGWORD?) and the file didn’t have an extension. I couldn’t read its content in any of the applications I have on my Mac now.
So I pointed CC at it and asked what it could tell me about the file. It looked inside the file data, identified the file type and the multiple character encodings in it, and went through a couple of conversion steps before outputting as clean plain text what I had written in 1996.
Maybe I could have found a utility on the web to do the same thing, but CC felt much quicker and easier.
There are a few important things to consider, like unstable IPs, home internet limits, and the occasional power issue. Cloud providers felt overpriced for what I needed, especially once storage was factored in.
In the end, I put together a small business where people can run their own Mac mini with a static IP: https://www.minimahost.com/
I’m continuing to work on it while keeping my regular software job. So far, the demand is not very high, or perhaps I am not great at marketing XD
Granted, that's rarely enforced, but if you're a stickler for that sort of thing, check your ISP's Acceptable Use Policy.
Though just blocking particular ports for this purpose is very 90s and obviously ineffective, as you demonstrated. Anybody proficient in installing wireguard also knows how to change ports.
That's the flaw right there. Don't mix company assets with private use: phone, laptop, car. Your life is already very dependent on your employer (through income); don't get yourself locked in even more by depending on them for personal tech. Plus, it's a security risk to your company.
Unless you have a low paying job, which rarely anybody on HN does, you can afford your own phone and laptop. And IT won't find your messages to girlfriend or pictures you don't want others to see or browsing history.
Lol, no thank you. Btw do your knees hurt?
I still struggle with letting go of writing code and becoming only a full-time reviewer when it comes to AI agents doing programming, but I don't struggle in the slightest with assuming the position of a reviewer of the changes CC does to my HA instance, delegating all the work to it. The progress I made on making my house smart and setting up my dashboards has skyrocketed compared to before I started using CC to manage HA via its REST and WS APIs.
All those fancy GUIs in Mac and Windows designed to be user friendly (but which most users hate and are baffled by anyway) are very hostile for models to access. But text configuration files? it's like a knife through butter for the LLMs to read and modify them. All of a sudden, Linux is MORE user friendly because you can just ask an LLM to fix things. Or even script them - "make it so my theme changes to dark at night and then back to light each morning" becomes something utterly trivial compared to the coding LLMs are being built to handle. But hey, if your OS really doesn't support something? the LLM can probably code up a whole app for you and integrate it in.
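The theme example really is that small. On a GNOME desktop, for instance, it could be two cron lines (times and settings illustrative; cron jobs run outside the desktop session, so they may also need the session's D-Bus address exported):

```
# crontab -e: dark theme at 20:00, light theme at 07:00
0 20 * * * gsettings set org.gnome.desktop.interface color-scheme 'prefer-dark'
0 7  * * * gsettings set org.gnome.desktop.interface color-scheme 'prefer-light'
```

No GUI automation, no accessibility APIs, just a text scheduler editing a text-addressable setting.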
I think it's going to be fascinating to see if the power of text based interfaces and their natural compatibility with LLMs transfers over into an upswing in open source operating systems.
Thanks
Claude and Gemini have been instrumental in helping me understand the core concepts of kubernetes, how to tune all these enterprise applications for high latency, think about architecture etc...
My biggest "wow, wtf?" moment was when I was discussing the cluster architecture with Claude. It asked: want me to start the files?
I thought it meant update the notes, so replied 'yes'.
It spit out 2 sh files and 5 YAMLs that completely bootstrapped my cluster with a full GitOps setup using ArgoCD.
Learning while having a 24/7 senior tutor next to me has been insane value.