I like the idea. Making it backwards compatible with FAT means that, in principle, regular FAT filesystem implementations could be transparently upgraded to support big fat files (hehe).
However, reading the spec, it doesn't look fully backwards compatible. There are file structures which are possible to represent in FAT but not in BigFAT. In FAT, I could have a file called "hello.txt" of size 4GB-128kB, and next to it a separate file called "hello.txt.000.BigFAT". A FAT implementation will show these as two files, as intended, but a BigFAT implementation will show them as one file, "hello.txt". That makes this a breaking change.
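To make the ambiguity concrete, here is a minimal sketch of how a BigFAT-aware directory listing might group continuation fragments by name. The ".NNN.BigFAT" suffix convention is taken from the example above; the actual spec's matching rules may differ, and `merge_directory_listing` is a hypothetical helper, not anything from the spec.

```python
import re

# Hypothetical pattern for a continuation fragment: "<base>.<3 digits>.BigFAT"
CONT_RE = re.compile(r"^(?P<base>.+)\.(?P<seq>\d{3})\.BigFAT$")

def merge_directory_listing(names):
    """Group continuation fragments under their base file name, the way a
    name-based BigFAT driver might present a directory. Maps each visible
    file name to the list of fragments folded into it."""
    merged = {}
    for name in sorted(names):
        m = CONT_RE.match(name)
        if m:
            # Fragment: fold it into the base file's entry.
            merged.setdefault(m.group("base"), []).append(name)
        else:
            merged.setdefault(name, [])
    return merged

# A plain FAT volume can legally contain BOTH of these as independent
# files, yet a purely name-based merge collapses them into one entry:
listing = ["hello.txt", "hello.txt.000.BigFAT"]
print(merge_directory_listing(listing))
# -> {'hello.txt': ['hello.txt.000.BigFAT']}
```

Because the grouping is driven entirely by file names, the driver has no way to tell a genuine continuation fragment from an ordinary file that merely happens to match the pattern.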
I would have hoped they'd found an unused but always-zero bit in some header that could be repurposed to mark whether a file has a continuation, or some other clever way of ensuring that every legal FAT32 file structure remains representable.
There are so many good filesystems out there. Is it really necessary to keep dragging FAT along?
ReactOS uses btrfs, which has so many useful features that FAT will never see (zstd compression, xxhash checksums, flash-aware options, snapshots, send/receive, etc.). That positions it well for both Linux and Windows.
Microsoft itself restricts ReFS to enterprise use, and btrfs offers so much more functionality. We should stop using a filesystem from the '80s.
Camera manufacturers and SD card manufacturers can't start shipping SD cards formatted with btrfs until Windows supports it out of the box. They can start shipping SD cards formatted with FAT32 and software/firmware which reads and writes FAT32+BigFAT.
3rd parties can write drivers for Windows, you know. A small, read-only FAT partition on a USB stick or SD card could contain the installable drivers necessary to read/write the rest of the disk.
However, that's unnecessary. The best option for a universal filesystem is UDF: Windows, macOS, and Linux all have full read/write support.