With Sylve, you rarely need to touch the CLI. Snapshots, datasets, ZVOLs, even flashing images directly to ZVOLs, it’s all handled from the UI in a straightforward way.
That tight ZFS integration also lets us build more flexible backup workflows. You can back up VMs, jails, or entire datasets to any remote machine that supports SSH + ZFS. This is powered by Zelta (https://zelta.space), which is embedded directly into the Go backend, so it's built-in rather than something you bolt on.
In Proxmox, you can achieve similar things, but it’s less intuitive and usually involves setting up additional components like Proxmox Backup Server.
On Proxmox, ZFS syncs do not require Proxmox Backup Server (which has its own format that is very efficient in speed and disk space), but you do need either something like sanoid/syncoid or the shell.
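For reference, the manual version that tools like syncoid wrap is just a snapshot plus an (incremental) send piped over SSH. A sketch — the pool, dataset, and host names here are made up:

```shell
# Take a new snapshot of the VM's dataset (names are illustrative)
zfs snapshot tank/vm-100-disk-0@backup-2025-01-15

# First run: full send to the remote pool
zfs send tank/vm-100-disk-0@backup-2025-01-15 | \
  ssh backup-host zfs receive -u backup/vm-100-disk-0

# Later runs: incremental send from the last common snapshot
zfs snapshot tank/vm-100-disk-0@backup-2025-01-16
zfs send -i @backup-2025-01-15 tank/vm-100-disk-0@backup-2025-01-16 | \
  ssh backup-host zfs receive -u backup/vm-100-disk-0
```

syncoid automates the bookkeeping (finding the last common snapshot, pruning, resumable sends), which is exactly the part that gets tedious in the shell.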
That's fine for me because I'm comfortable in the CLI, but I read your comment and wanted to share that the stock workflow felt a bit rudimentary at best.
Still an awesome project to learn about and I hope it's successful.
----
A huge, totally obvious, advantage is that FreeBSD isn't using systemd. I'm now nearly systemd-free, if not for Proxmox. But my VMs are systemd-free. And, by definition, my containers too (where basically the entire point is that PID 1 is the service itself, and PID 1 in a container is not systemd).
So the last piece missing for me is getting rid of Proxmox because Proxmox is using systemd.
I was thinking about going straight to FreeBSD+bhyve (the hypervisor) but that felt a bit raw. FreeBSD+Sylve (using bhyve under the hood) seems to be, at long last, my way out of systemd.
I've got several servers at home with Proxmox, but I never, on purpose, relied too much on Proxmox: I kept it to the bare minimum. I create VMs with cloud-init, tried to keep most of it automated, and always set things up with the idea of eventually getting rid of Proxmox.
I've got nothing against Proxmox but fuck systemd. Just fuck that system.
What about performance characteristics? Recoverability of workloads?
I’m interested in a FreeBSD base OS because it seems ZFS is better integrated and ZFS has a lot of incredibly useful tools that come with it. If Bhyve is at least nearly as performant as KVM, I’d be hard pressed not to give it a whirl.
I've been repeatedly burned by systemd, both on machines I've administered and on appliances. In every situation, the right fix was either "switch distros" or "burn developer-months of work in a fire drill".
In fact, I just decided to go with FreeBSD instead of proxmox specifically because proxmox requires systemd. The last N systemd machines I've had the misfortune to touch were broken due to various systemd related issues. (For large values of N.)
I assume that means anything built on top of it is flaky + not stable enough for production use.
I have quite a lot of customers that we have migrated from VMware to Proxmox. Some of them are rocking ZFS instead of VMFS. Mostly these are Dell servers. Proxmox with ZFS seems to be more aggressive about disk failure warnings, which I think is helpful.
Pick what OS works for you.
I run Proxmox at home, but now that I have been drinking the NixOS koolaid over the past 2 years, all of my homelab problems suddenly look like Nix-shaped nails.
I actually have a few hosts that only run docker. I might be able to test with those.
Looks like Nix will eat the world soon. :)
Not sure why this copy made the SCP
Or better, how does it do it better than proxmox?
This isn't to say that proxmox is the best thing since sliced bread, I'm curious as to what makes sylve better, is it the API?
A few concrete things:
ZFS-first UX: Not just "ZFS as storage", but everything built around it. Snapshots, clones, ZVOLs, replication, all cleanly exposed in the UI without dropping to CLI.
Simple backups without extra infra: Any remote box with SSH + ZFS works. No need to deploy something like PBS just to get decent backups.
Built-in Samba shares: You can spin up and manage shares directly from the UI without having to manually configure services.
Magnet / torrent downloader baked in: Sounds small, but for homelab use it removes a whole extra container/VM people usually end up running.
Clustering, but not all-or-nothing: You can cluster nodes when you need it, and also disable/unwind it later. Proxmox clusters are much more rigid once set up.
Templates done right: Create a base VM/jail once and spin up N instances from it in one go, straight from the UI.
FreeBSD base: It's not really a benefit of Sylve, but rather the ecosystem that FreeBSD provides. Tighter system integration, smaller surface area, no systemd, etc. (depending on what you care about).
None of this is to say Proxmox is bad, it’s great. This is more "we used it a lot, hit some friction points, and built something that feels smoother for our workflows."
(That'd be amazing if it's possible to do stuff like dump configs + check them into git from the CLI, then stand them up on any bhyve/Sylve box later...)
Can you explain your use case when you absolutely can't provide a separate M.2 drive solely for the OS?
Folks use TrueNAS or unRAID as the backup itself instead of a safe copy, and then get mad when everything goes sideways and the data is gone. Your NAS must have a backup elsewhere; snapshots and whatnot won't save you if everything goes RIP.
ZFS is redundancy and redundancy only, but people see ZFS as some sort of backup. That is silly and wrong.
>A rollback capability is why I'm looking for Proxmox alternatives.
Your VMs and LXC container should have an automated backup. Proxmox itself takes a second to clean install it.
I had to change the motherboard and had to literally install Proxmox 9.1 from scratch. BUT.... before doing that, I checked the LXC backups sent to a TrueNAS pool in mirror for safe keeping.
Reinstalled Proxmox, mounted the NFS share on Proxmox and voila, all the LXC containers were restored and started like nothing happened.
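For anyone curious, that restore path is short on the Proxmox side. A sketch — storage names, IPs, and the archive filename are made up; check `man pvesm` and `man pct` for the exact flags:

```shell
# Attach the NFS share that holds the vzdump backups (names illustrative)
pvesm add nfs truenas-backups --server 192.168.1.10 \
  --export /mnt/tank/proxmox-backups --content backup

# Restore an LXC container from a vzdump archive, then start it
pct restore 101 \
  truenas-backups:backup/vzdump-lxc-101-2025_01_15-02_00_01.tar.zst \
  --storage local-zfs
pct start 101
```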
I'm talking about this, basically: https://www.linuxquestions.org/questions/*bsd-17/howto-zfs-m...
There have since been implementations for Linux but no distribution is designed to support them.
Proxmox is Debian/Ubuntu based.
Both will have their advantages. It might not be about better or worse, the particular things you use may in some cases run better on BSD, or the security management could more fit what you are after.
I wonder why not run both :).
Proxmox is due for its viral moment though.
A Un*x system that doesn't use systemd as an init system.
In practice though, most setups don’t actually need it if you’re running workloads directly on the host.
Also, if your goal is testing or simulating clusters, you can already run Sylve inside jails. That gives you multiple isolated “nodes” on a single machine without needing nested virt. We have a guide for it here: https://sylve.io/guides/advanced-topics/jailing-sylve/
So you can still experiment with things like clustering, networking, failure scenarios, etc., just using jails instead of spinning up hypervisors inside VMs.
Nested virt is still useful for specific cases like testing other hypervisors or running Firecracker inside VMs, but for most Sylve-style setups it hasn’t really been a blocker.
In prod, let's say you run workloads in Firecracker VMs. You have plenty of headroom on your existing hardware. Nested virtualization would allow you to set up Firecracker hosts on your existing hardware.
Outside of learning and testing I am not sure of what uses there might be but I'm curious to know if there are.
My home lab still uses ESXi 8. But it needs something new and I was looking at proxmox. However I may give this a try first.
Likewise for disk I/O - some people swear by 9P as a backing mechanism, some by ZVOL.
It is not possible to come to a conclusion about everything in the world yourself "from scratch". No one has the time to try out everything themselves. Some filtration process needs to be applied to prevent wasting your finite time.
That is why you ask for recommendations of hotels, restaurants, travel destinations, good computer brands, software and so on from friends, relatives or other trusted parties/groups. This does not mean you don't form your own opinions. You use the opinions of others as a sort of bootstrap or prior which you can always refine.
HN is actually the perfect place to ask for opinions. Someone just said bhyve does not support nested virtualization (useful input !). Someone else might chime in and say they have run bhyve for a long time and they trust it (and so on...)
So I can't agree with your viewpoint.
Speculation is not useless if you are saying “the answer I got makes it 99% likely that this solution will not work for me”. Curation has immense value in the world today. I investigate only the options most likely to be useful. And that still takes all my time.
> It is not possible to come to a conclusion about everything in the world yourself "from scratch". No one has the time to try out everything themselves. Some filteration process needs to be applied to prevent wasting your finite time.
It's totally possible when you know what your application requires, but you didn't state anything.
> Someone just said bhyve does not support nested virtualization (useful input !).
What nested applications are you planning to run?
All I read is that they are still doing ClickOPS over DevSecOps!!
At no moment did I hear about automation; if you aren't using automation in 2026, your future in IT is cooked.
I run Proxmox at home for my homelab. I used to use VMs and now I have fully adopted Proxmox LXC containers (I hate Docker). I use Ansible to automate everything.
Last night I wanted to setup a notification service called Gotify, the Ansible playbook must:
1. Create a LXC container with specified resources
2. Prepare the system, network and what not
3. Give me a fully operational LXC and service running, go to the browser and voila.
All of that by running one command line, so now I can deploy it over and over.
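The "one command line" part looks something like this — the playbook and variable names here are hypothetical:

```shell
# Deploy the Gotify LXC end-to-end from my PC (names are illustrative)
ansible-playbook -i inventory/homelab.ini deploy-gotify.yml \
  -e "lxc_id=120 hostname=gotify static_ip=192.168.1.120"
```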
I have setup a LXC container running Radarr, qBittorrent, Sonarr, Jackett, WireGuard VPN via Proton VPN, Iptables firewall aka kill-switch.
All of what you just read runs within a LXC container, fully automated via Ansible; OP is doing everything manually.
Even if I was running Sylve, Ansible would be doing the whole automation stuff.
> All I read is that they are still doing ClickOPS over DevSecOps!!
They mostly work on embedded stuff, and this involves some amount of moving VM disk images around. Sometimes they run different software within the same VM disk, so ZFS properties need to be tweaked accordingly (compression, recordsize, etc). This is a lot easier to do with a UI than with the CLI, and the UI is pretty good at showing you what's going on. Now I'm all for automating stuff, but there's no clear pattern here to automate away.
Now regarding automation in Sylve: you can create a template in Sylve (with networking, storage, CPU config, etc.) and then deploy that template as many times as you want from the UI. Last I checked, Proxmox only allows you to clone from a template one at a time.
What I do is pretty similar to what you mention, but I don't really use Ansible, since on FreeBSD, if it's in the ports tree, it's one command (after the base system is set up): `pkg install -y <package>`. Your entire stack (from your list) can be done with one command each. The only thing I see that would need a bit of setup is the WireGuard VPN, but even that is pretty straightforward under FreeBSD (so you can do it with a jail and no need for a VM).
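To illustrate, the per-service setup on FreeBSD is roughly the following — package and rc-script names are assumptions worth double-checking with `pkg search`:

```shell
# Install from packages (names are assumptions; verify with `pkg search`)
pkg install -y qbittorrent-nox wireguard-tools

# Enable and start via rc(8); the rc variable name comes from the port's
# rc script, so check /usr/local/etc/rc.d/ for the exact name
sysrc qbittorrent_enable=YES
service qbittorrent start
```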
There is nothing wrong with that, but if a user cannot perform the same tasks via CLI, I see that as a big blocker for a project to be fully adopted, with some exceptions. OPNsense, for example: there is zero reason to manage the whole network and whatnot via CLI; the GUI makes life so much easier. I would hate having to do everything via CLI.
The other thing is LXC, Sylve seems to call it jail.
I would expect this jail to support something like below.
Ansible only automates what you do manually; the server itself only sees the command and will never run Ansible itself. So instead of manually creating a LXC, Ansible would send:
    - name: Deploy LXC
      ansible.builtin.command: >
        pct create {{ lxc_id }} {{ template }}
        --hostname {{ hostname }}
        --unprivileged 1
        --cores {{ cores }}
        --memory {{ memory }}
        --rootfs {{ storage }}:{{ rootfs_size }}
        --net0 name=eth0,bridge={{ bridge }},ip={{ static_ip }}/{{ cidr }},gw={{ gateway }}
        --features nesting=0
If I wanna exec into the LXC container to restore a backup and start the system, I would expect Sylve to support this:

    - name: Import lists and hotfixes
      ansible.builtin.command: >
        pct exec {{ lxc_id }} -- bash -c "
        pihole-FTL --config ntp.sync.interval 0;
        systemctl stop pihole-FTL;
        sqlite3 /etc/pihole/gravity.db < /tmp/adlist.sql;
        systemctl start pihole-FTL;
        sudo pihole -g
        "
All of that from my PC without having to go to a browser. That is the friction that your team should look into automating; there is always a way, it is just easier to go to the browser.

We're API-first; the UI is just a client on top. We already ship Swagger docs with the code (docs/ on the repo), so everything the UI does is exposed and usable programmatically today.
Right now we’re still early (v0.2), so the CLI/SDK pieces aren’t fully there yet, but that’s what we’re building next.
Before v0.4 the plan is:
* a proper CLI for scripting
* a well-defined API lib (TypeScript/Go first, others later)
* parity between UI, CLI, and API
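In the meantime, scripting against the API is just HTTP plus the Swagger docs in the repo. A sketch — the endpoint paths, port, and auth scheme here are hypothetical, the real ones are in docs/:

```shell
# Log in and capture a token (endpoint and field names are assumptions)
TOKEN=$(curl -s -X POST https://sylve.local:8181/api/auth/login \
  -H 'Content-Type: application/json' \
  -d '{"username":"admin","password":"secret"}' | jq -r .token)

# List VMs using the token
curl -s https://sylve.local:8181/api/vm \
  -H "Authorization: Bearer $TOKEN" | jq .
```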
This x 10.
Ansible and OpenTofu/Terraform is where it is at. And you can use Claude/Codex with that setup.
But I do have a single cluster at home, which allowed me to learn both Kubernetes and Terraform. I also hate Docker so much that I prefer to convert a Dockerfile into a Terraform template; voila, I don't use Docker to run my stuff.
I enjoy Terraform very much with Terragrunt. Terraform alone is too messy, Terragrunt makes the house cleaner.
It's less about how many times and more about being used to automating everything: spend less time doing boring things and more time doing fun stuff.
For example, when I first deployed a Jellyfin LXC container with GPU and what not, the container itself hosts nothing, Proxmox mounts the NFS shared from TrueNAS to it, and it uses a local NVMe for transcoding.
And yet, novice me picked a small storage size, 5GB or something, because I only run Debian netinst, which uses 200MB of RAM and 0.00001% CPU. Debian netinst itself requires what, 1-2GB of disk??
Back to your question: I had to redeploy another Jellyfin container because it ran out of disk space, with:
1. the GPU passthrough
2. mount all the NFS shares once the LXC is up
3. the transcode folder
4. rsync from TrueNAS and restore the metadata with all the movies and what not.
Had I planned to do it?? Nope.
One command line and I have a brand new Jellyfin LXC with much bigger storage, and working like nothing happened, fully automated from my PC via Ansible.