ZFS is an industrial-scale technology. It's not a scooter you just hop on and ride. It's like a 747, with a cockpit full of levers and buttons and dials. It can do amazing things, but you have to know how to fly it.
I've run ZFS for a decade or more, with little or no tuning. If I dial my expectations back, it works great, much better than ext4 for my use case. But as soon as I try to use deduplication, I either need to spend thousands of dollars on RAM or the filesystem buckles under its weight.
My use case is storing backups of other systems, with rolling history. I tried the "hardlink trick" with ext4, but space consumption was out of control: small changes to large files (log files, ZODB) caused the whole file to be duplicated, and managing the hard links took enormous amounts of time and disk I/O.
ZFS solved that problem for me. I just wish I could do deduplication without needing 64 GB of RAM. But I take what I can get.
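For a rough sense of where a number like 64 GB comes from: the commonly cited rule of thumb is on the order of 320 bytes of RAM per unique block in the ZFS dedup table (DDT). A back-of-envelope sketch, with the pool size and recordsize below chosen purely for illustration:

```python
# Rough estimate of ZFS dedup table (DDT) RAM needs, using the commonly
# cited ~320 bytes of core per unique block. Inputs are illustrative.
DDT_ENTRY_BYTES = 320

def ddt_ram_gib(pool_bytes: int, avg_block_bytes: int, dedup_ratio: float = 1.0) -> float:
    """Estimate the RAM (GiB) needed to keep the whole DDT in memory."""
    total_blocks = pool_bytes / avg_block_bytes
    unique_blocks = total_blocks / dedup_ratio
    return unique_blocks * DDT_ENTRY_BYTES / 2**30

# Example: 20 TiB of backup data at the default 128 KiB recordsize.
print(f"{ddt_ram_gib(20 * 2**40, 128 * 2**10):.1f} GiB")   # -> 50.0 GiB
```

Smaller record sizes or bigger pools push the DDT well past what a commodity box carries, which is why the RAM bill shows up the moment dedup is switched on.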
Today one would use reflink with xfs or btrfs for a finer-grained "hardlink trick" solution.
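For reference, reflink copies come from the FICLONE ioctl (what `cp --reflink=always` uses under the hood) and only work on filesystems with shared-extent support such as XFS or Btrfs. A minimal Python sketch, with placeholder paths:

```python
import fcntl

FICLONE = 0x40049409   # _IOW(0x94, 9, int) from <linux/fs.h>

def reflink(src_path: str, dst_path: str) -> None:
    """Create dst as a reflink (shared-extent, copy-on-write) copy of src."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())

# reflink("/backups/2024-01-01/db.fs", "/backups/2024-01-02/db.fs")
# The two files share extents until one is modified, so a small change to
# a large file only costs the rewritten extents, not a second full copy.
```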
But that still won't deduplicate renamed files if you're using something like rsync w/--link-dest for the backups.
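For context, the --link-dest workflow looks roughly like the sketch below (the paths and the "latest" symlink convention are illustrative). Files unchanged since the previous run become hard links to that snapshot, but a file that was merely renamed, or a large file with a one-line change, fails the identical-file test and gets stored again in full:

```python
import subprocess
from datetime import date

SRC = "server:/var/data/"        # illustrative source
BACKUP_ROOT = "/backups"         # illustrative destination

def nightly_backup() -> None:
    """Snapshot-style backup: hard-link files unchanged since the last run."""
    today = f"{BACKUP_ROOT}/{date.today().isoformat()}"
    subprocess.run(
        ["rsync", "-a", "--delete",
         f"--link-dest={BACKUP_ROOT}/latest",   # hard-link identical files
         SRC, today],
        check=True,
    )
    # Repoint the 'latest' symlink at the new snapshot for the next run.
    subprocess.run(["ln", "-sfn", today, f"{BACKUP_ROOT}/latest"], check=True)
```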
Are there any deduplicating filesystems for Linux similar to the (very basic) NTFS deduplication in Windows? I’ve fiddled with quite a few Linux dedup solutions over the years, but nothing seems to be production-ready. Even ZFS isn’t that useful, since it only chunks on multiples of the block size.
The basic feature set for deduplication, in my book, is non-block-aligned chunking with compression. Windows Server has had transparent post-process dedup on NTFS since Server 2012, and it’s pretty much ideal for file, VM, and backup servers. It just works in the background with little performance impact but huge space savings.
It feels dirty using Windows servers as NAS for our Linux machines, but that’s what we’re doing today.
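To make the non-block-aligned chunking point concrete: fixed, block-aligned dedup stops matching as soon as data shifts by a few bytes, while content-defined chunking picks boundaries from the data itself, so an insertion only disturbs the chunks around it. A toy Gear-style rolling-hash sketch (not how the Windows dedup engine actually chunks; the table, mask, and size limits are arbitrary):

```python
import random

random.seed(0)
GEAR = [random.getrandbits(32) for _ in range(256)]   # fixed per-byte random values

def cdc_chunks(data: bytes, mask: int = 0x1FFF,
               min_size: int = 2048, max_size: int = 65536) -> list[bytes]:
    """Split data at boundaries chosen by a rolling Gear hash.
    The low bits of the hash depend only on the most recent bytes, so a
    boundary is a property of local content, not of absolute file offset."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + GEAR[b]) & 0xFFFFFFFF
        size = i - start + 1
        if (size >= min_size and (h & mask) == 0) or size >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

# Insert one byte near the front of a file: only the chunk around the edit
# changes, later boundaries realign, and those chunks still deduplicate.
# With fixed block-aligned chunks, the same insertion shifts every later
# block and nothing past the edit matches.
```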