I actually think Tailscale may be an even bigger deal here than sysadmin help from Claude Code et al.
The biggest reason I had not to run a home server was security: I'm worried that I might fall behind on updates and end up compromised.
Tailscale dramatically reduces this risk, because I can so easily configure it so my own devices can talk to my home server from anywhere in the world without the risk of exposing any ports on it directly to the internet.
Being able to hit my home server directly from my iPhone via a tailnet no matter where in the world my iPhone might be is really cool.
I am not sure why people are so afraid of exposing ports. I have dozens of ports open on my server including SMTP, IMAP(S), HTTP(S), various game servers and don't see a problem with that. I can't rule out a vulnerability somewhere but services are containerized and/or run as separate UNIX users. It's the way the Internet is meant to work.
You have also added attack surface: Tailscale client, coordination plane, DERP relays. If your threat model includes "OpenSSH might have an RCE" then "Tailscale might have an RCE" belongs there too.
WireGuard gives you the same "no exposed ports except VPN" model without the third-party dependency.
The tradeoff is convenience, not security.
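The "no exposed ports except VPN" model is a short config on each end. A minimal wg-quick sketch (keys, addresses, and the port are placeholders):

```ini
# /etc/wireguard/wg0.conf on the home server (placeholders throughout)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# a phone or laptop; one [Peer] section per device
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

The only inbound port is 51820/udp, and WireGuard stays silent to packets that don't authenticate, so scanners see nothing listening.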
BTW, why are people acting like accessing a server from a phone is a 2025 innovation?
SSH clients on Android/iOS have existed for 15 years. Termux, Prompt, Blink, JuiceSSH, pick one. Port N, key auth, done. You can run Mosh if you want session persistence across network changes. The "unlock" here is NAT traversal with a nice UI, not a new capability.
Now I have Tailscale on an old Kindle downloading EPUBs from a server running Copyparty. It's great!
I feel that Tailscale does not make it very easy to set up services this way, and the only method I have figured out has a few too many steps for my liking:
- to expose a service to the tailnet, you need to run a `tailscale serve` command for its port
- for that command to run on startup and such, it needs to be wrapped in a script that runs as a systemd service
- if you want to do this with N services, then you need to repeat these steps N times
Is this how you do it? Is there a better way?
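For what it's worth: recent Tailscale versions persist `serve` config set with `--bg` across restarts (check `tailscale serve status`), which may remove the systemd step entirely. If you do want systemd to own it, the per-service unit is small; a sketch (unit name, port, and flags are assumptions and vary by Tailscale version):

```ini
# /etc/systemd/system/tailscale-serve-myapp.service (hypothetical)
[Unit]
Description=Expose myapp on the tailnet
After=tailscaled.service
Requires=tailscaled.service

[Service]
# foreground mode lets systemd supervise it; no wrapper script needed
ExecStart=/usr/bin/tailscale serve --https=443 localhost:8080
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

For N services you'd still need N units, but a templated unit (`tailscale-serve@.service`) can reduce that to one file.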
LLMs are also a huge upgrade here since they are actually quite competent at helping you set up servers.
In my experience this is much less of an issue depending on your configuration and what you actually expose to the public internet.
OS-side, as long as you pick a good server OS (for me that's Rocky Linux) you can safely update once every six months.
Application-wise, I try to expose as little as possible to the public internet, and everything exposed runs in an unprivileged Podman container. Random test stuff is only exposed within the VPN.
Also, Tailscale is not even a hard requirement: I run OpenVPN and that works as well, on my iPhone too.
The truly differentiating factor is methodological, not technological.
TS is cool if you have a well-defined security boundary. This is you / your company / your family, they should have access. That is the rest of the world, they should not.
My use case is different. I do occasionally want to share access to otherwise personal machines around. Tailscale machine sharing sort of does what I want, but it's really inconvenient to use. I wish there was something like a Google Docs flow, where any Tailscale user could attempt to dial into my machine, but they were only allowed to do so after my approval.
Claude Code or other assistants will give you conversational management.
I already do the former (using Pangolin). I'm building towards the latter, but first I need to be 100% sure I can have perfect rollback and containment across the full stack CC could influence.
The only thing served on / is a hello-world nginx page. For everything else you need to know the randomly generated subpath route.
I was in another country when there was a power outage at home. My internet went down; the server restarted but couldn't reconnect anymore, because the optical network router also had problems after the power outage. I could ask my folks to restart things and turn them on and off, but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped, but the more I thought about it afterwards, the less it seemed worth the hassle. Add a UPS, okay. But why not add a dual-WAN failover router for extra security if the internet goes down again? Etc. It's a bottomless pit (like most hobbies, tbh).
Also (and that's a me problem maybe) I was using Tailscale but I'm more "paranoid" about it nowadays. Single point of failure service, US-only SSO login (MS, Github, Apple, Google), what if my Apple account gets locked if I redeem a gift card and I can't use Tailscale anymore? I still believe in self hosting but probably I want something even more "self" to the extremes.
Then 5 years later there was a power outage and the UPS lasted for about 10 seconds before the batteries failed. That's how I learned about UPS battery maintenance schedules and the importance of testing.
I have a calendar alert to test the UPS. I groan whenever it comes up because I know there's a chance I'm going to discover the batteries won't hold up under load any more, which means I not only have to deal with the server losing power but I have to do the next round of guessing which replacement batteries are coming from a good brand this time. Using the same vendor doesn't even guarantee you're going to get the same quality when you only buy every several years.
Backup generators have their own maintenance schedule.
I think the future situation should be better with lithium chemistry UPS, but every time I look the available options are either exorbitantly expensive or they're cobbled together from parts in a way that kind of works but has a lot of limitations and up-front work.
This also makes self-hosting more viable, since our availability is constrained by the internet provider rather than power.
If you are going to be away from home a lot, then yes, it's a bottomless pit, because you have to build a system that does not rely on you being there at any given time.
Self-hosting sounds so simple, but if you consider all the critical factors involved, it becomes a full-time job. You own your server. In every regard.
And security is only one crucial aspect. How spam filters react to your IP is another story.
In the end I cherish the dream but rely on third-party server providers.
It's so much simpler when you have the files stored locally; then syncing between devices is just something that can happen whenever. Anything running on a server needs user permissions, wifi, a router, etc. It's just a lot of complexity for very little gain.
Although keep in mind I'm the only one using all of this stuff. If I needed to share things with other people, then Syncthing gets a bit trickier and a central server starts to make more sense.
I have 7 computers on my self-hosted network and not all of them are on-prem. With a bit of careful planning, you can essentially create a system that will stay up regardless of local fluctuations etc. But it is a demanding hobby, and if you don't enjoy the IT stuff, you'll probably have a pretty bad time doing it. For most normal consumers, self-hosting is not really an option, and it isn't worth the cost of switching over. I justify it because it helps me understand how things work and tangentially improves my professional skills as well.
Tailscale also has a self-hosted version I believe.
You can even self-host Tailscale via Headscale, though I don't know what the experience is like. There are some genuinely open-source alternatives like NetBird and ZeroTier as well.
You could also just go the plain WireGuard route if you're interested. It really depends on your use case, but in your case plain SSH seems fine.
You could even use this with Termux on Android plus SSH access via Dropbear, I think, if you want. Tailscale is mainly for convenience, though, and not having to deal with NATs and everything.
But I feel like your home server might be behind a NAT, in which case what I recommend is probably A) run it over Tor or https://gitlab.com/CGamesPlay/qtm (which uses iroh's instance, though you can self-host that too), or B) (recommended): get a cheap unlimited-traffic VPS (I recommend UpCloud, OVH, or Hetzner), which would cost around $3-4 per month, and then install something like remotemoe https://github.com/fasmide/remotemoe or anything similar, effectively acting as a proxy.
Sorry if I went a little overkill, lol. I have played with these things too much, so I may be overarchitecting, but if you genuinely want to take self-hosting to the extreme, Tor .onions or I2P might benefit you. Even just buying a VPS can be a good step up.
> I was in another country when there was a power outage at home. My internet went down, the server restart but couldn't reconnect anymore because the optical network router also had some problems after the power outage. I could ask my folks to restart, and turn on off things but nothing more than that. So I couldn't reach my Nextcloud instance and other stuff. Maybe an uninterruptible power supply could have helped but the more I was thinking about it after just didn't really worth the hassle anymore. Add a UPS okay. But why not add a dual WAN failover router for extra security if the internet goes down again? etc. It's a bottomless pit (like most hobbies tbh)
Laptops have a built-in UPS and are cheap; laptops and refurbished servers are a good entry point, IMO. Sure, it's a bottomless pit, but the benefits are well worth it; at some point you have to look at the trade-offs, and personally, laptops and refurbished or resale servers are that for me. In fact, I used to run a Git server on an Android tablet for some time, but I've been too lazy to figure out whether I want it charging permanently or what.
> I am spending time using software, learning
What are you actually learning?
PSA: OP is a CEO of an AI company
Wrote about learning and fun here: https://fulghum.io/fun2
Having others run a service for you is a good thing! I'd love to pay a subscription for a service, but ran as a cooperative, where I'm not actually just paying a subscription fee, instead I'm a member and I get to decide what gets done as well.
This model works so well for housing, where the renters are also the owners of the building. Incentives are aligned perfectly, rents are kept low, the building is kept intact, no unnecessary expensive stuff added. And most importantly, no worries of the building ever getting sold and things going south. That's what I would like for my cloud storage, e-mail etc.
I was thinking about what if your "cloud" was more like a tilde.club, with self hosted web services plus a Linux login. What services would you want?
Email and cloud make sense. I think a VPN and Ad Blocker would too. Maybe Immich and music hosting? Calendar? I don't know what people use for self hosting
I say all this as someone who's been self-hosting services in one form or another for almost a decade at this point. The market incorporation/consumerfication of the hobby has been so noticeable in the last five years. Even this AI thing seems like another step in that direction; now even non-experts can drop $350+ on consumer hardware and maybe $100 on some network gear so that they can control their $50/bulb Hue lights and manage their expansive personal media collection.
But Tailscale is the real unlock in my opinion. Having a slot machine cosplaying as sysadmin is cool, but being able to access services securely from anywhere makes them legitimately usable for daily life. It means your services can be used by friends/family if they can get past an app install and login.
I also take minor issue with running Vaultwarden in this setup. Password managers are maximally sensitive and hosting that data is not as banal as hosting Plex. Personally, I would want Vaultwarden on something properly isolated and locked down.
I cannot say how happy I am configuring my own Immich server on a decade-old machine. I just feel empowered. Because despite my 9 years of software development, I haven't gotten into the nitty-gritty of networking and VPNs, and I always see something non-standard while installing an open-source package; without all of this custom guidance, I would always give up after a couple of hours of pulling my hair out.
I really want to go deeper and it finally feels this could be a hobby.
PS: The rush was so great I was excitedly talking to my wife how I could port our emails away from google, considering all of the automatic opt in for AI processing and what not. The foolhardy me thought of even sabbatical breaks to work on long pending to-do's in my head.
I've been email self-hosting for a decade, and unfortunately, self-hosting your email will not help with this point nearly as much as it seems on first glance.
The reason is that as soon as you exchange emails with anyone using one of the major email services like gmail or o365, you're once again participating in the data collection/AI training machine. They'll get you coming or they'll get you going, but you will be got.
There are so many NAS + Curated App Catalog distros out there that make self-hosting trivial without needing to Vibe SysAdmin.
In my experience, this approach works extremely well—I would not have been able to accomplish this much on my own. However, there is an important caveat: you must understand what you are doing. AI systems sometimes propose solutions that do not work, and in some cases they can be genuinely dangerous to your data integrity or your privacy.
AI is therefore a powerful accelerator, not a replacement for expertise. You still need to critically evaluate its suggestions and veto roughly 10% of them.
On self-hosting: be aware that it is a warzone out there. Your IP address will be probed constantly for vulnerabilities, and even those probes need to be dealt with, as most automated probes don't throttle and can impact your server. That's probably my biggest issue, along with email deliverability.
Haproxy with SNI routing was simple and worked well for many years for me.
Istio installed on a single node Talos VM currently works very well for me.
Both have sophisticated circuit breaking and ddos protection.
For users, I put admin interfaces behind WireGuard and block TCP by source IP at the 443 listener.
I expose one or two things to the public behind an oauth2-proxy for authnz.
Edit: This has been set and forget since the start of the pandemic on a fiber IPv4 address.
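The SNI-routing part of that setup can be sketched in haproxy.cfg roughly like this (hostnames and backend addresses invented; TLS is passed through, not terminated):

```
# route TLS connections by SNI in TCP mode
frontend tls_in
    bind :443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend git_tls if { req.ssl_sni -i git.example.com }
    default_backend web_tls

backend git_tls
    mode tcp
    server git 192.168.10.10:443 check

backend web_tls
    mode tcp
    server web 192.168.10.20:443 check
```

Because the proxy never terminates TLS, certificates live only on the backends, which keeps the edge box simple.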
Years later I still had the same router. Somewhere along the line, I fired the right neurons and asked myself, "When was the last time $MANUFACTURER published an update for this? It's been a while..."
In the context of just starting to learn about the fundamentals of security principles and owning your own data (ty hackernews friends!), that was a major catalyst for me. It kicked me into a self-hosting trajectory. LLMs have saved me a lot of extra bumps and bruises and barked shins in this area. They helped me go in the right direction fast enough.
Point is, parent comment is right. Be safe out there. Don't let your server be absorbed into the zombie army.
The idea is that a contract is defined saying which options exist and what they mean. For backups, that would be the Unix user doing the backup, which folders to back up, and what patterns to exclude, but also what scripts can be run to create a backup and restore from one.
Then you'd have a contract consumer, the application to be backed up, which declares which folders to back up and with which users.
On the other side you have a contract provider, like Restic or Borgbackup, which understands this contract and thereby knows how to back up the application.
As the user, your role is just to plug a contract provider into a consumer: to choose which provider backs up which application.
This can be applied to LDAP, SSO, secrets and more!
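To make the idea concrete, a contract instance might look something like this (a purely hypothetical format; all field names are invented):

```yaml
# backup contract: the consumer declares what, the provider decides how
contract: backup/v1
consumer:
  app: nextcloud
  user: www-data
  paths:
    - /var/www/nextcloud/data
  exclude:
    - "*.tmp"
  hooks:
    backup: "occ maintenance:mode --on"
    restore: "occ maintenance:mode --off"
provider:
  impl: restic
  repository: /backups/nextcloud
```

Swapping Restic for Borgbackup would then mean changing only the `provider` block, which is the whole appeal of the contract split.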
Proxmox Backup Server?
No judgement, but wanting to tinker/spend time on configuration is a major reason why many people do self-host.
p0wnland. This will have script kiddies rubbing their hands.
I recently had a bunch of breakages and needed to port a setup: I had a complicated k3s-containers-in-Proxmox setup but needed it in a VM to fix various disk mounts (I had hacked on ZFS mounts and was swapping it all for Longhorn).
As is expected, life happens and I stopped having time for anything so the homelab was out of commission. I probably would still be sitting on my broken lab given a lack of time.
It's not tariffs (I'm in Switzerland). It's 100% the buildout of data centers for AI.
But I wanted decent deployments. Hosting an image repository cost 3-4x the price of the server, and sending over the container image took over an hour due to large image-processing Python dependencies.
Solution? Had a think and a chat with Claude Code; now I have blue-green deployments where I just upload the code, which takes 5 seconds, and everything is then run by systemd. I looked at the various PaaSes, but they ran up to $40/month with compute + database etc.
I would probably never have built this myself. I'd have gotten bored 1/3 through. Now it's working like a charm.
Is it enterprise grade? Gods no. Is it good enough? Yes.
When using them with production code they are a liability more than a resource.
I just wish this post wasn’t written by an LLM! I miss the days where you can feel the nerdy joy through words across the internet.
But I want to host an LLM.
I have a 1U (or more), sitting in a rack in a local datacenter. I have an IP block to myself.
Those servers are publicly exposed, but only a few ports are open: for mail, HTTP traffic and SSH (for Git).
I guess my use case also changes in that I don’t use things just for me to consume, select others can consume services I host.
My definition of self-hosting here isn't that I and only I can access my services; that'd be me having a server at home with some non-critical things on it.
CC lets you hack together internal tools quickly, and tailscale means you can safely deploy them without worrying about hardening the app and server from the outside world. And tailscale ACLs lets you fully control who can access what services.
It also means you can literally host the tools on a server in your office, if you really want to.
Putting CC on the server makes this setup even better. It's extremely good at system administration.
Took a couple hours with some things I ran across, but the model had me go through the setup for Debian: how to get through the setup GUI, what to check to make it server-only. Then it took me through commands to run so it wouldn't stop when I closed the laptop, helped with Tailscale, and got the SSH keys all set up. Heck, it even suggested doing daily dumps of the database, saving them to MinIO, and removing them after that. It also knows about the limitations of 8 GB of RAM and how to make sure the Docker settings for the different self-hosted services I want to build don't cause issues.
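That daily-dump-and-prune idea is worth pinning down in a script; a sketch (paths, retention, and the dump command are all placeholders, not what Claude actually generated):

```shell
#!/usr/bin/env bash
# Hypothetical daily backup: dump the database, timestamp it, keep the newest 7.
set -euo pipefail

backup_dir="${BACKUP_DIR:-./backups}"
keep="${KEEP:-7}"
mkdir -p "$backup_dir"

# Replace with your real dump command (pg_dump, mysqldump, sqlite3 .backup, ...)
: "${DUMP_CMD:=echo placeholder-dump}"
$DUMP_CMD > "$backup_dir/db-$(date +%Y%m%d-%H%M%S).sql"

# Prune: newest first, delete everything past $keep
ls -1t "$backup_dir"/db-*.sql | tail -n +"$((keep + 1))" | xargs -r rm --
```

Pointing an upload step at MinIO (e.g. with the `mc` client) and running the whole thing from a systemd timer completes the loop.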
Give me a month and true strong intention and ability to google and read posts and find the answer on my own and I still don't think I would have gotten to this point with the amount of trust I have in the setup.
I very much agree with this topic about self-hosting coming alive because these models can walk you through everything. Self-building and self-hosting can really take off. And in the future, when open models are that much better and hardware costs come down (maybe, just guessing of course), we'll be able to host our own agents on these machines we've already set up. All of it done ourselves.
related "webdev is fun again": claude. https://ma.ttias.be/web-development-is-fun-again/
Also the "Why it matters" in the article. I thought it was a jab at AI-generated articles, but it starts to look like the article was AI-written as well.
Waiting for the follow-on article “Claude Code reformatted my NAS and I lost my entire media collection.”
[1] https://martin.kleppmann.com/2025/12/08/ai-formal-verificati...
I share a lot of the same hesitations as others in the thread about using a giant US-based tech company's tool for research, as well as another US giant's tool to manage access, but it's really a game changer, and I'd be unable to find the time to do everything I want if I didn't have access to these.
I'm not even a software guy by engineering, my network is already complicated enough that learning and correctly securing things otherwise would simply just not be feasible with the time and energy I'd like to dedicate to it.
If you have your own agent, then it can talk to whatever you want - could be OpenRouter configured to some free model, or could be to a local model too. If the local model wasn't knowledgeable enough for sysadmin you could perhaps use installable skills (scripts/programs) for sysadmin tasks, with those having been written by a more powerful model/agent.
What’s the goal? If the act of _building_ a homelab is the fun then i agree 100%. If _having_ a reliable homelab that the family can enjoy is the goal, then this doesn’t matter.
For me personally, my focus is on “shipping” something reliable with little fuss. Most of my homelab skills don’t translate to my day job anyway. My homelab has a few docker compose stacks, whereas at work we have an internal platform team that lets me easily deploy a service on K8s. The only overlap here is docker lol. Manually tinkering with ports and firewall rules, using sqlite, backups with rsync, etc…all irrelevant if you’re working with AWS from 9-5.
I guess I’m just pointing out that some people want to build it and move on.
Enlightenment here comes when you realize others are doing the exact same thing with the exact same justification, and everyone's pain/reward threshold is different. The argument you are making justifies their usage as well as yours.
In that case, it's not about the 'joy of creation', but actually getting everything up and running again, in which case LLMs are indispensable.
From time to time, test the restore process.
They tend to slip out of declarative mode and start making untracked changes to the system from time to time.
>I am spending time using software, learning, and having fun - instead of maintaining it and stressing out about it.
Using software, learning and having fun with what? Everything is being done by Claude here. Part of the fun and learning is learning to use and maintain it in the first place. How will you learn anything if Claude is doing everything for you? You don't understand how things work or where everything goes.
This post could be written or at least modified by an LLM, but more importantly I think this person is completely missing the point of self hosting and learning.
I did it! Except you didn't and you don't know anything about what it did or learned anything along the way. Success?
But for what I'm using agents for right now, Claude Code is the tool to go with.
```
workdir/
├── README.md
├── CLAUDE.md            # Claude Code instructions
├── BACKUP.md            # Backup documentation
├── .gitignore
├── traefik/
│   ├── docker-compose.yml
│   └── config/
│       └── traefik.yml
├── authentik/
│   ├── docker-compose.yml
│   └── .env.example
├── umami/
│   ├── docker-compose.yml
│   └── .env.example
├── n8n/
│   ├── docker-compose.yml
│   └── .env.example
└── backup/
    ├── backup.sh        # Automated backup script
    ├── restore.sh       # Restore from backup
    ├── verify.sh        # Verify backup integrity
    ├── list-backups.sh  # List available backups
    └── .env.example
```
We've gone a step further, and made this even easier with https://zo.computer
You get a server, and a lot of useful built-in functionality (like the ability to text with your server)
I'm asking Claude technical questions about setup, e.g., read this manual, that I have skimmed but don't necessarily fully understand yet. How do I monitor this service? Oh connect Tailscale and manage with ACLs. But what do I do when it doesn't work or goes down? Ask Claude.
To get more accurate setup and diagnostics, I need to share config files, firewall rules, IPv6 GUAs, Tailscale ACLs... and Claude just eats it up, and now Anthropic knows it forever too. Sure, CGNET, Wireguard, and ssh logins stand between us, but... Claude is running a terminal window on a LAN device next to another terminal window that does have access to my server. Do I trust VS Code? Anthropic? The FOSS? Is this really self-hosting? Ahh, but I am learning stuff, right?
Tbh I made the mistake of throwing away Ansible, so testing my setup was a pain!
Since with AI, the focus should be on testing, perhaps it's sensible to drop Ansible for something like https://github.com/goss-org/goss
Things are happening so fast, I was impressed to see a Linux distro embrace using a SKILL.md! https://github.com/basecamp/omarchy/blob/master/default/omar...
I’ll bite. You can save a lot of money by buying used hardware. I recommend looking for old Dell OptiPlex towers on Facebook Marketplace or from local used computer stores. Lenovo ThinkCentres (e.g., m700 tiny) are also a great option if you prefer something with a smaller form factor.
I’d recommend disregarding advice from non-technical folks recommending brand new, expensive hardware, because it’s usually overkill.
(In)famous last words?
For now I'm just using Cloudflare tunnels, but ideally I also want to do that myself (without getting DDoS)
The structure is dead simple: `machines/<hostname>/stacks/<service>/` with a `config.sh` per machine defining SSH settings and optional pre/post deploy hooks. One command syncs files and runs `docker compose up -d`.
I could see Claude Code being useful for debugging compose files or generating new stack configs, but having the deployment itself be a single `./deploy.sh homeserver media` keeps the feedback loop tight and auditable.
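For reference, the shape of such a `deploy.sh` (a sketch only; `SSH_TARGET`, the defaults, and the dry-run toggle are my inventions, not the actual script):

```shell
#!/usr/bin/env bash
# Sketch of the single-command deploy flow described above.
set -euo pipefail

host="${1:-homeserver}"
service="${2:-media}"
stack_dir="machines/$host/stacks/$service"
ssh_target="${SSH_TARGET:-deploy@$host}"

# DRY_RUN=1 (default here) prints the commands instead of executing them
run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

# sync the stack's files, then bring it up on the remote host
run rsync -az --delete "$stack_dir/" "$ssh_target:stacks/$service/"
run ssh "$ssh_target" "cd stacks/$service && docker compose up -d"
```

Keeping the remote side to plain `docker compose up -d` is what makes the run auditable: everything interesting lives in the synced files.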
It's simple enough and I had some prior experience with it, so I merely have some variables and roles that render a docker-compose.yml.j2 template, and boom. It all works: I have easy access to secrets and shared variables among stacks, and I run it with a simple `ansible-playbook` call.
If I forget/don't know the Ansible modules, Claude or their docs are really easy to use.
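The role described boils down to a couple of tasks; a rough sketch (paths and variable names invented; `community.docker.docker_compose_v2` is one way to run the `up`):

```yaml
# roles/stack/tasks/main.yml (illustrative)
- name: Render compose file for {{ stack_name }}
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: "/opt/stacks/{{ stack_name }}/docker-compose.yml"
    mode: "0640"

- name: Bring the stack up
  community.docker.docker_compose_v2:
    project_src: "/opt/stacks/{{ stack_name }}"
    state: present
```

Secrets can then come from Ansible Vault variables referenced inside the template, which is where this beats a pile of bash.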
Every time I went down a bash script route I felt like I was re-inventing something like Ansible.
Which basically accomplishes same thing, but gives a bit more UI for debugging when needed.
For example - I have ZFS running with a 5-bay HDD enclosure, and I honestly can't remember any of the rules about import-ing / export-ing to stop / start / add / remove pools etc.
I have to write many clear notes, and store them in a place where future me will find them - otherwise the system gets very flaky through my inability to remember what's active and what isn't. Running the service and having total control is fun, but it's a responsibility too
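For the enclosure case specifically, my kind of note boils down to a handful of commands (pool name made up; double-check against `man zpool` before doing anything destructive):

```
zpool export tank     # cleanly detach the pool before powering off the enclosure
zpool import          # scan attached disks and list importable pools
zpool import tank     # bring the pool back after reconnecting
zpool status tank     # health check after import
```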
If you need to run the command once, you can now run it again in the future.
It's very tempting to just paste some commands (or ask AI to do it) but writing simple scripts like this is an amazing solution to these kinds of problems.
Even if the scripts get outdated and no longer work (maybe it's a new version of X) it'll give you a snapshot of what was done before.
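A tiny example of the habit: an idempotent setup script that's safe to re-run and doubles as documentation (names and contents are purely illustrative):

```shell
#!/usr/bin/env bash
# setup-myapp.sh: a re-runnable record of what was done (illustrative names)
set -euo pipefail

DATA_DIR="${DATA_DIR:-./srv/myapp}"

# mkdir -p and existence checks make re-runs harmless
mkdir -p "$DATA_DIR"
[ -f "$DATA_DIR/.env" ] || cat > "$DATA_DIR/.env" <<'EOF'
# created by setup script; edit as needed
PORT=8080
EOF

echo "ready: $DATA_DIR"
```

Even when a script like this rots, reading it later tells you exactly what state the machine was put into and in what order.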
In this case you will be completely unable to navigate the infrastructure of your homeserver that your life will have become dependent on.
But a homeserver is always about your levels of risk, single points of failure. I'm personally willing to accept Tailscale but I'm not willing to give the manipulation of all services directly over to Claude.
I wonder if a local model might be enough for sysadmin skills, especially if were trained specifically for this ?
I wonder if iOS has enough hooks available that one could make a very small/simple agentic Siri replacement like this that was able to manage the iPhone at least better than Siri (start and stop apps, control them, install them, configure iPhone, etc) ?
What I've found: Claude Code is great at the "figure out this docker/nginx/systemd incantation" part but the orchestration layer (health checks, rollbacks, zero-downtime deploys) still benefits from purpose-built tooling. The AI handles the tedious config generation while you focus on the actual workflow.
github.com/elitan/frost if curious
Historically, managed platforms like Fly.io, Render, and DigitalOcean App Platform existed to solve three pain points: 1. Fear of misconfiguring Linux 2. Fear of Docker / Compose complexity 3. Fear of “what if it breaks at 2am?”
CLI agents (Claude Code, etc.) dramatically reduce (1) and (2), and partially reduce (3).
So the tradeoff has changed from:
“Pay $50–150/month to avoid yak-shaving” → “Pay $5–12/month and let an agent do the yak-shaving”
(and no, this product is not against TOS as it is using the official claude code SDK unlike opencode https://yepanywhere.com/tos-compliance.html)
And ironically all in the name of "self hosting". Claude code defies both words in that.
Then someday we self-host the AI itself, and it all comes together.
My self hosted things all run as docker containers inside Alpine VMs running on top of Proxmox. Services are defined with Docker Compose. One of those things is a Forgejo git server along with a runner in a separate VM. I have a single command that will deploy everything along with a Forgejo action that invokes that command on a push to main.
I then have Renovate running periodically set to auto-merge patch-level updates and tag updates.
Thus, Renovate keeps me up to date and git keeps everyone honest.
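The patch-level auto-merge can be expressed in a small `renovate.json`; roughly (a minimal example; exact matching rules may differ from mine):

```json
{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchUpdateTypes": ["patch", "digest"],
      "automerge": true
    }
  ]
}
```

Minor and major bumps then still arrive as ordinary pull requests for manual review, which is the "keeps everyone honest" part.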
Is there a replica implementation that works in the direction I want?
And until now without AI. I'm kind of curious, but afraid that it will bring my servers down and then I can't roll back :D But perhaps if I moved over to NixOS, rollback would be easy.
I route it through a familiar interface like Slack, though, since I don't like SSHing from my phone or whatever, using a tool I built - https://www.claudecontrol.com/
And ideally doing it via LXC or a VM.
It's an extra complication, but it gives you something repeatable that you can stick in git.
But I wouldn't give the keys of the house to Claude or any LLM for that matter. When needed, I ask them questions and type commands myself. It's not that hard.
Is it just a single docker-compose.yml with everything you want to run and 'docker compose up'?
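In the simplest version, yes, it can be exactly that; something like (services chosen purely for illustration):

```yaml
# docker-compose.yml: everything in one file, one `docker compose up -d`
services:
  web:
    image: nginx:alpine
    ports: ["8080:80"]
    restart: unless-stopped
  whoami:
    image: traefik/whoami
    ports: ["8081:80"]
    restart: unless-stopped
```

Past a handful of services, per-service directories with their own compose files tend to be easier to update and restart independently.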
Agents are powerful. Even more so with skills and command line tools they can call to do things. You can even write custom tools (like I did) for them to use that allows for things like live debugging.
The tailscale piece to this setup is key.
I am writing a personal application to simplify home server administration if anybody is interested: https://github.com/prettydiff/aphorio
It's comedic at this point.
I suspect it may have been related to the Network File System (NFS)? Like whenever I read a file on the host machine, it goes across the data-center network and charges me? Is this correct?
Anyway, I just decided to take control of those costs. Took me 2 weeks of part-time work to migrate all my stuff to a self-hosted machine. I put everything behind Cloudflare with a load balancer. Was a bit tricky to configure as I'm hosting multiple domains from the same machine. It's a small form factor PC tower with 20 CPU cores; easily runs all my stuff though. In 2 months, I already recouped the full cost of the machine through savings in my AWS bill. Now I pay like $10 a month to Cloudflare and even that's basically an optional cost. I strongly recommend.
Anyway it's impressive how AWS costs had been creeping slowly and imperceptibly over time. With my own machine, I now have way more compute than I need. I did a calculation and figured out that to get the same CPU capacity (no throttling, no bandwidth limitations) on AWS, I would have to pay like $1400 per month... But amortized over 4 years my machine's cost is like $20 per month plus $5 per month to get a static IP address. I didn't need to change my internet plan other than that. So AWS EC2 represented a 56x cost factor. It's mind-boggling.
I think it's one of those costs that I kind of brushed under the carpet as "It's an investment." But eventually, this cost became a topic of conversation with my wife and she started making jokes about our contribution to Jeff Bezos' wife's diamond ring. Then it came to our attention that his megayacht is so large that it comes with a second yacht beside it. Then I understood where he got it all from. Though to be fair to him, he is a truly great businessman; he didn't get it from institutional money or some complex hidden political scheme; he got it fair and square through a very clever business plan.
Over the 5 years or so that I've been using AWS, the costs have been flat. Meanwhile, the cost of the underlying hardware has dropped to like 1/56th... and I didn't even notice. Is anything more profitable than apathy and neglect?
Bandwidth inside the same zone is free.
Maybe viable if you have a bunch of spare parts lying around. But probably not when RAM and storage prices are off the charts!
I had a 30-year-old file on my Mac that I wanted to read the content of. I had created it in some kind of word processing software, but I couldn’t remember which (Nexus? Word? MacWrite? ClarisWorks? EGWORD?) and the file didn’t have an extension. I couldn’t read its content in any of the applications I have on my Mac now.
So I pointed CC at it and asked what it could tell me about the file. It looked inside the file data, identified the file type and the multiple character encodings in it, and went through a couple of conversion steps before outputting as clean plain text what I had written in 1996.
Maybe I could have found a utility on the web to do the same thing, but CC felt much quicker and easier.
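The underlying trick is mundane enough to sketch: try candidate encodings, strictest first, until one decodes the bytes cleanly. This is an illustration of the general technique, not what CC actually ran; the sample bytes and the encoding list are assumptions:

```python
# Try strict encodings first; mac_roman maps all 256 byte values,
# so it acts as the permissive fallback for old Mac documents.
CANDIDATES = ("utf-8", "mac_roman")

def best_effort_decode(data: bytes) -> tuple[str, str]:
    """Return (encoding, text) for the first candidate that decodes cleanly."""
    for enc in CANDIDATES:
        try:
            return enc, data.decode(enc)
        except UnicodeDecodeError:
            continue
    # Unreachable with mac_roman in CANDIDATES; kept for safety.
    return "latin-1", data.decode("latin-1")

# 0xD5 is a curly apostrophe in MacRoman -- typical of 1990s Mac files.
enc, text = best_effort_decode(b"I couldn\xd5t remember")
print(enc, text)
```

A real recovery pass would also sniff the header bytes for a known word-processor magic number before guessing at text encodings.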
There are a few important things to consider, like unstable IPs, home internet limits, and the occasional power issue. Cloud providers felt overpriced for what I needed, especially once storage was factored in.
In the end, I put together a small business where people can run their own Mac mini with a static IP: https://www.minimahost.com/
I’m continuing to work on it while keeping my regular software job. So far, the demand is not very high, or perhaps I am not great at marketing XD
Granted, that's rarely enforced, but if you're a stickler for that sort of thing, check your ISP's Acceptable Use Policy.
Lol, no thank you. Btw do your knees hurt?
I still struggle with letting go of writing code and becoming a full-time reviewer when it comes to AI agents doing programming, but I don't struggle in the slightest with assuming the position of a reviewer of the changes CC makes to my HA instance and delegating all the work to it. The progress I've made on making my house smart and setting up my dashboards has skyrocketed compared to before I started using CC to manage HA via its REST and WS APIs.
All those fancy GUIs in Mac and Windows designed to be user friendly (but which most users hate and are baffled by anyway) are very hostile for models to access. But text configuration files? It's like a knife through butter for the LLMs to read and modify them. All of a sudden, Linux is MORE user friendly because you can just ask an LLM to fix things. Or even script them: "make it so my theme changes to dark at night and then back to light each morning" becomes something utterly trivial compared to the coding LLMs are being built to handle. But hey, if your OS really doesn't support something? The LLM can probably code up a whole app for you and integrate it in.
I think it's going to be fascinating to see if the power of text based interfaces and their natural compatibility with LLMs transfers over into an upswing in open source operating systems.
Thanks
Claude and Gemini have been instrumental in helping me understand the core concepts of Kubernetes, how to tune all these enterprise applications for high latency, how to think about architecture, etc.
My biggest "wow, wtf?" moment was when I was discussing the cluster architecture with Claude. It asked: "Want me to start the files?"
I thought it meant update the notes, so replied 'yes'.
It spit out 2 sh files and 5 YAMLs that completely bootstrapped my cluster with a full GitOps setup using ArgoCD.
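For context, the heart of such a bootstrap is usually a single Argo CD `Application` pointing at a Git repo, after which Argo CD keeps the cluster in sync with it. A hypothetical minimal manifest (repo URL, path, and names are placeholders, not what Claude generated):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: homelab
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/you/homelab.git   # placeholder repo
    targetRevision: main
    path: apps                                    # directory of manifests to sync
  destination:
    server: https://kubernetes.default.svc        # the cluster Argo CD runs in
    namespace: default
  syncPolicy:
    automated:
      prune: true      # delete resources removed from git
      selfHeal: true   # revert manual drift back to the git state
```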
Learning while having a 24/7 senior tutor next to me has been insane value.