this is honestly hard because many of the decisions that matter are not things you type into zfs at all (except incidentally).

how many disks per vdev? how much memory? etc

a lot of the things you've outlined are not universal at all, just situational

Yes, there is a lot of essential complexity that is unavoidable, but there are a lot of people like me who just want a better desktop file system, and we don't need to know about SLOGs and L2ARCs and the half dozen compression algorithms, etc. It's situational, but it's a common enough situation that a targeted solution would be valuable.

I'm one of those who want a better desktop file system, but I've stayed away from ZFS (at least for the time being) due to stories about its complexity.

Would you say that the defaults are sane enough for that kind of person (no real configuration needed)?

Outside of two things, the defaults are sane and you can basically refuse to care about them: manually set `ashift=12` at each vdev creation (disks lie about their sector size and ZFS's whitelist isn't omniscient; I've seen too many people burned by this), and set `atime=off` on your datasets (burning IO updating metadata every time you merely access something is just plain stupid).
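To make that concrete, here's a minimal sketch; `tank` and the device paths are placeholders for your own pool layout:

```shell
# ashift is per-vdev and immutable once set, so force 4KiB sectors at creation.
# Pool name 'tank' and the device paths are hypothetical.
zpool create -o ashift=12 tank mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB

# Turn off access-time updates; child datasets inherit this.
zfs set atime=off tank
```

Setting it on the pool's root dataset means you never have to think about it again for new datasets.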

Every system I’ve used has `compression=on` as the default, which currently means lz4. People who set it manually are doing so out of paranoia from earlier days, I think.
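If you want to check or be explicit anyway (harmless either way; `tank` is a placeholder name):

```shell
# See what compression is actually in effect and where it came from:
zfs get compression tank

# Being explicit about lz4 is equivalent to the current 'on' default:
zfs set compression=lz4 tank
```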

For Linux systems you can set `xattr=sa` and `acltype=posixacl` if you like, which offers a minor optimization that you’ll likely never notice.
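That's a one-liner, since `zfs set` accepts multiple properties at once (`tank` again being a placeholder):

```shell
# Store extended attributes in the dnode instead of hidden directories,
# and enable POSIX ACL support (both Linux-specific niceties):
zfs set xattr=sa acltype=posixacl tank
```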

I suppose if you don’t like how much memory ZFS uses for the ARC, you can reduce it. For desktop use, 2-4GB is plenty. For heavier active storage use, like working with big files or serving a slow HDD-filled NAS, 8GB+ is a better amount.
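On Linux the ARC cap is a kernel module parameter, `zfs_arc_max`, in bytes. A sketch (4GiB shown; pick your own number):

```shell
# Change the ARC cap at runtime (4GiB = 4 * 1024^3 bytes):
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

# Make it persistent across reboots via modprobe options:
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
```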

Dataset recordsize can be set as well, but that’s really something for nerds like me who have huge static archives (set it to 1-4MiB) or virtual machines (match qcow2’s 64KiB cluster size). The default recordsize (128KiB) works well enough that unless you have a particular concern, you don’t need to care.
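For the nerds, a sketch with hypothetical dataset names (recordsize only affects newly written blocks, so set it before filling the dataset):

```shell
# Large, mostly-static archive files: bigger records, better compression ratios.
zfs set recordsize=1M tank/archive

# qcow2-backed VM images: match qcow2's default 64KiB cluster size.
zfs set recordsize=64K tank/vms
```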

I should note: beware of rolling-release Linux distros and ZFS. The Linux kernel breaks compatibility nonstop, and sometimes it can take a while for ZFS to catch up. This means your distro can update to a new kernel version and suddenly you can’t load your filesystem. ZFSBootMenu is probably the best way to navigate that; it makes rolling back easy.

You also want to set up automatic snapshots, snapshot pruning, and sending of snapshots to a backup machine. I highly recommend https://github.com/jimsalterjrs/sanoid
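A rough sketch of how the pieces fit (dataset, host, and pool names are placeholders; sanoid's actual snapshot/retention policy lives in /etc/sanoid/sanoid.conf):

```shell
# sanoid takes and prunes snapshots per its config file; it's typically
# invoked periodically from a systemd timer or a cron entry such as:
#   */15 * * * * root /usr/sbin/sanoid --cron

# syncoid (ships with sanoid) replicates snapshots over SSH.
# 'tank/home', 'backupuser@backuphost', and 'backup/home' are hypothetical.
syncoid -r tank/home backupuser@backuphost:backup/home
```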

If you really find yourself wanting to gracefully deal with rollbacks and differences within a project, httm (HotTubTimeMachine) is nice to be aware of: https://github.com/kimono-koans/httm
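Basic usage is just pointing it at a file; it finds the snapshot versions for you (the path here is a placeholder):

```shell
# List the unique snapshot versions of a file, newest last:
httm /home/you/project/notes.txt
```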