Just before I exited the Linux world entirely, I was beginning to chip away at the iceberg known as btrfs, and it was fascinating. I saw so much promise in many of its features, for revolutionizing backups and organizing my disks and everything.
Now, btrfs isn't ZFS, but it has some feature parity and is perhaps the "poor man's ZFS". It's also much more reasonable to run on certain OSes, since the licensing, packaging, and in-kernel status of ZFS are kind of weird.
One memorable time I was encouraged to use ZFS was when I mentioned to the Linux Users' Group that I'd had to pull the power cord to reboot my computer, and I was roundly scorned for this foolish maneuver. You might change your mind about the wisdom of either choice, though, when you consider that the system in question was a Raspberry Pi. Heh.
The Pi supposedly can get FS corruption, but I've never seen it, because every time I install an image, I run a script that puts tmpfses everywhere and turns off mostly useless logging. Those things just run forever, very reliably, as long as you don't hammer the SD card.
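For anyone curious, the tmpfs part of that kind of script usually boils down to a few fstab lines; this is just an illustrative sketch (the paths and sizes are my own guesses, not the actual script):

```
# /etc/fstab — mount write-heavy directories in RAM (sizes illustrative)
tmpfs  /tmp      tmpfs  defaults,noatime,size=64m  0  0
tmpfs  /var/tmp  tmpfs  defaults,noatime,size=32m  0  0
tmpfs  /var/log  tmpfs  defaults,noatime,size=64m  0  0
```

The trade-off is that logs vanish on reboot, which is usually fine for an appliance-style Pi.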
ZFS looks so cool! Unfortunately when I eventually get a NAS I doubt I'll want to pay for anything that can run it, so I suspect I'll just be doing RAID and ext4.
I always stayed away from BTRFS, because every few months I'd see a "BTRFS destroyed my data" post, followed by an argument about whether it was BTRFS's fault. I see them less now; perhaps it's time to revisit?
I had multiple Pi systems "bricked" after a power outage. Presumably it was filesystem corruption? I never looked into the issues further; I just wiped the drives and reinstalled. These were vanilla Raspbian installs at the time. It happened a few times when I was first trying out my Pi, a mix of me cutting the power and actual power outages.
I only ever had these issues with SD cards. I quickly switched to running my Pi off a USB external SSD and haven't had any problems since. Now when the power goes out, it boots back up properly and all my services start. All of this on ext3, I think?
Planning to redo things for ZFS at some point, but haven’t gotten around to it yet.
Yeah, SD cards that you haven't put through extensive crash testing simply can't be trusted. I used to have a jar of SD cards that didn't survive testing.
Our $job dashboards used to nuke an SD card every couple of weeks or months, but since the move to logs-in-RAM we've been running the same SDs for years.
DIY via {fstab,journald} config, or use https://github.com/azlux/log2ram
Also, mount the SD with the `noatime` flag of course: https://wiki.archlinux.org/title/Ext4#Disabling_access_time_...
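For the DIY route, something like the following should cover both points; treat it as a sketch (device names and limits will differ per setup):

```
# /etc/systemd/journald.conf — keep the journal in RAM only
[Journal]
Storage=volatile
RuntimeMaxUse=32M

# /etc/fstab — root on ext4 with noatime so reads don't cause writes
/dev/mmcblk0p2  /  ext4  defaults,noatime  0  1
```

`Storage=volatile` means journald writes only to /run, so nothing hits the card; the downside, as with log2ram's defaults, is losing logs across reboots.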