I built a ZFS-based NAS last year[1], and after 10 months of usage I have nothing but positive things to say. One thing I didn't understand conceptually at first is that ZFS is standalone and not dependent on the host OS. This was not clear to me and isn't explained anywhere. You can wipe out the entire OS, move the drives to a new system, mount them on a new OS, and run `zpool import` to import them. All the information about ZFS (RAID config, snapshots, etc.) is stored on the pool itself, not on the host OS. The migration is especially painless if you have aliases set up for individual drives in `/etc/zfs/vdev_id.conf`.
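For anyone who wants to try it, the whole migration is roughly this (a sketch; `tank` is a placeholder pool name):

    # On the old system, if it still boots (optional but clean):
    zpool export tank

    # On the new system, with the drives attached:
    zpool import          # scan attached disks and list importable pools
    zpool import tank     # import by name; add -f if the pool was never exported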

There is no need for FreeNAS or any of that stuff; I don't use all those features. Just run about three commands to create a zpool, install a Samba service, and that's all I need from a NAS, plus a cron job to run a scrub every month.
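To give a sense of how little there is to it, here's roughly the shape of that setup (a sketch, not my exact commands; the pool name, drive aliases, and Samba details are placeholders):

    # Create a raidz2 pool from the aliased drives in vdev_id.conf
    zpool create tank raidz2 d1 d2 d3 d4 d5 d6

    # Install Samba and point a share at the pool (edit /etc/samba/smb.conf)
    apt install samba

    # Monthly scrub, e.g. in /etc/cron.d/zfs-scrub
    0 3 1 * * root /usr/sbin/zpool scrub tank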

If the ZFS file system made a sound, it'd be a satisfying and reassuring "CLUNK!"

[1] https://neil.computer/notes/zfs-raidz2/

>One thing I didn't understand conceptually at first is that ZFS is standalone and not dependent on the host OS.

This is true for every mature/production file system. You can do the same with mdadm, ext4, XFS, Btrfs, etc. The only constraint is version compatibility, and it's a one-way street: you can't necessarily go from something new to something old, but the other way round is fine.
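E.g. with mdadm, moving an array to a new machine is roughly (a sketch):

    # Scan the attached disks for array metadata and assemble what's found
    mdadm --assemble --scan

    # Then mount the filesystem on the array as usual
    mount /dev/md0 /mnt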

ZFS stores mount points and even NFS shares in its metadata[1], so it carries more with the pool than most others do.

[1]: https://openzfs.github.io/openzfs-docs/man/8/zfs-set.8.html?...
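E.g. both of these are just properties on the dataset, so they travel with the pool and come back after an import on a new host (a sketch; `tank/media` is a placeholder dataset):

    zfs set mountpoint=/srv/media tank/media
    zfs set sharenfs="rw=@192.168.1.0/24" tank/media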

My one complaint about ZFS is that I repeatedly googled "what's the procedure if your motherboard dies and you need to migrate your disks to a new machine?" That's super easy with single non-ZFS disks, but I was worried about how ZFS mirrored pools would handle it, especially since the setup was so fiddly and (compared to other filesystems I've used) highly non-standard (with good reason, I'm sure).

And yet, this thread right here has more and better info than my searches ever turned up; those were mostly Reddit and Stack Overflow posts that somehow managed never to answer the question, or answered it badly.
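For the record, the procedure I eventually pieced together boils down to roughly one command (a sketch; `tank` is a placeholder pool name):

    # On the replacement machine, with the old disks attached
    zpool import -f tank    # -f because the dead machine never got to export it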

The real complaint is that I found this to be true for almost everything with ZFS. You can read the manual and eventually figure out which sequence of commands you need, but "I want to do this thing that has to be extremely common; what's the usual procedure, considering that ZFS operations are often multi-stage and things can go very badly if you mess them up?" is weirdly hard to find reliable, accurate, and complete info on with a search.

The result was that I was, and am, afraid to touch ZFS now that I have it working, and I dread having to track down info because it's always a pain. But I also don't really want to become a ZFS wizard by deeply reading all the docs just so I can do some extremely basic things with it on one machine at home (mirror, expand pools with new drives, replace bad mirrored disks, move the disks to a new machine if this one breaks... that's about it beyond "create the fs and mount it").
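Here's the sort of cheat sheet I wish I'd found for exactly those basics. Treat it as a sketch to check against the man pages, not gospel; `tank` and the disk names are placeholders, and in real life you'd use stable /dev/disk/by-id paths:

    # Turn a single-disk vdev into a mirror
    zpool attach tank sda sdb

    # Expand the pool with a new mirrored pair
    zpool add tank mirror sdc sdd

    # Replace a failed disk in a mirror
    zpool replace tank sda sde

    # Move the disks to a new machine
    zpool export tank      # on the old box, if it still boots
    zpool import -f tank   # on the new one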

The initial setup reminded me of Git, in a bad way: "You want to do this thing that almost every single person using this needs to do? Run these eight commands, zero of which look like they do the thing you want, in exactly this order."

I'm happy with ZFS but dread needing to modify its config.

As someone who is a total ZFS fan, I think the `zfs` and `zpool` commands are some of the best CLI commands ever made. Just immaculate. So this comment was a head scratcher for me.

> I also don't really want to become a ZFS wizard

Admittedly, ZFS on Linux may require some additional work simply because it's not an upstream filesystem, but, once you're over that hump, ZFS feels like it lowers the mental burden of what to do with my filesystems?

I think the issue may be that ZFS has some inherent new complexity that certain other filesystems don't have? But I'm not sure we can expect a paradigm-shifting filesystem to work exactly like what we've been used to, especially when it was originally developed on a different platform? It kinda sounds like you weren't used to a filesystem that does all these things? And may not have wanted any additional complexity?

And, I'd say, that happens to everyone? For example, I wanted to port an app I wrote for ZFS to btrfs[0]. At the time, it felt like such an unholy pain. With some distance, I see it was just a different way of doing things. Very few of the btrfs decisions I came to know intimately do I now look back on and say "That's just goofy!" It's more that it's not the choice I would have made, in light of ZFS, etc., but it's not an absurd choice?

> "what's the procedure if your motherboard dies and you need to migrate your disks to a new machine?"

If your setup is anything like mine, I'm pretty certain you can just boot the root pool? Linux will take care of the rest? The reason you may not find an answer is that the answer is pretty much the same as for other filesystems?

If you have problems, rescue via a live CD[1]. Rescuing a ZFS root pool that won't boot is no-joke sysadmin work (redirect all the zpool mounts, mount --bind all the other junk, create a chroot env, do more magic...). For people, perhaps like you, who don't want the hassle, maybe it is easier elsewhere? But good luck!
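The chroot dance is roughly this (a sketch from memory, assuming a pool named `rpool` and an Ubuntu-style layout; the linked guide[1] has the authoritative steps):

    # From the live environment
    zpool import -f -R /mnt rpool    # -R re-roots every mountpoint under /mnt
    # If the root dataset is canmount=noauto, mount it explicitly, e.g.:
    # zfs mount rpool/ROOT/ubuntu
    mount --bind /dev  /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys  /mnt/sys
    chroot /mnt /bin/bash            # now fix the bootloader, initramfs, etc.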

[0]: https://github.com/kimono-koans/httm

[1]: https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubu...