Except they are using mdadm and LVM on top of it, so it can only detect corruption; it cannot self-heal the way Btrfs can when it's in its own redundant configuration.
"Btrfs is a modern file system developed by multiple parties and now supported by select Synology NAS models."
The higher-performance models, I would expect. The DS716+ and various rack-mount models are presently listed... I think you would need at least an Intel-based model to get support for this when DSM 6 is released.
This doesn't unambiguously detect mismatches, though. If it's a mirror, you have two copies of the data, but which one is bad? All RAID 1 knows is that they're different. If it's RAID456, again, all it knows is that there's a mismatch; it doesn't know whether the data strips are wrong or the parity strips are wrong. The way ZFS and Btrfs deal with this is that data is checksummed, and the fs metadata, which includes the parity strips, is checksummed as well. So there's a completely independent way of knowing what's incorrect, rather than merely that there's a difference.
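A minimal sketch of the point above: with an independently stored checksum, two differing mirror legs stop being ambiguous. The function names here are hypothetical, and a real filesystem stores the checksum in its metadata tree, not next to the block; this just shows the disambiguation step.

```python
import zlib

def write_block(data: bytes):
    """Store data plus an independent checksum (kept in metadata
    elsewhere, the way ZFS/Btrfs do, not on the same drive)."""
    return data, zlib.crc32(data)

def pick_good_copy(copies, checksum):
    """Plain RAID 1 only sees that two legs differ; a stored
    checksum identifies the intact copy."""
    for c in copies:
        if zlib.crc32(c) == checksum:
            return c
    return None  # no copy matches: both legs damaged

block, csum = write_block(b"important data")
bad = b"importent data"                # silent corruption on one mirror leg
print(pick_good_copy([bad, block], csum) == block)  # → True
```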
In RAID 6 you can find which combinations show a mismatch and which combination doesn't. Run through all of them, treating each drive in turn as the dropped one; the combination that shows no mismatch tells you the dropped drive is the one with the bad data. Rewrite it and go on with your life.
This can only identify one bad drive per stripe; if you have two, you're toast.
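The brute-force search described above can be sketched with a toy single-byte-per-strip RAID 6 (P is XOR parity, Q is Reed-Solomon parity over GF(2^8)). This is an illustration under those assumptions, not how md actually stripes data; all function names are made up here.

```python
def gmul(a, b):
    """Multiply in GF(2^8) with the usual 0x11d polynomial."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1D
        b >>= 1
    return p

def gpow(a, n):
    r = 1
    for _ in range(n):
        r = gmul(r, a)
    return r

def parities(data):
    """RAID 6 style parities: P = XOR, Q = sum of d_i * 2^i in GF(2^8)."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gmul(d, gpow(2, i))
    return p, q

def locate_bad_data_strip(data, p, q):
    """Try each drive as the 'dropped' one: rebuild it from P and the
    others, then check whether Q also agrees. The one combination with
    no mismatch pinpoints the corrupt strip (only works for a single
    bad strip)."""
    for z in range(len(data)):
        rebuilt = p
        for i, d in enumerate(data):
            if i != z:
                rebuilt ^= d
        candidate = data[:]
        candidate[z] = rebuilt
        if parities(candidate) == (p, q):
            return z, rebuilt
    return None

data = [0x11, 0x22, 0x33, 0x44]
p, q = parities(data)
corrupted = data[:]
corrupted[1] ^= 0x5A                        # silent bit flips on drive 1
print(locate_bad_data_strip(corrupted, p, q))  # → (1, 34), i.e. drive 1, 0x22
```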
FWIW Linux software RAID doesn't do that. I think the argument was that differences like this were mostly related to power loss where some disks have the new data and others the old data. And at the block layer, it's impossible to tell which is which and so the code just picks a winner basically at random.
I'm not 100% convinced myself that a 'majority wins' strategy like you described wouldn't be superior, but I can see why they decided otherwise.
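The 'majority wins' idea can be sketched for an n-way mirror; it also shows why a plain two-way mirror stays ambiguous, which matches the power-loss argument above. This is a hypothetical illustration, not md's actual repair logic.

```python
from collections import Counter

def majority_wins(copies):
    """Return the value a strict majority of mirror legs agree on,
    or None when there is no majority (e.g. a 2-way mirror after a
    torn write, where md just has to pick a winner)."""
    value, votes = Counter(copies).most_common(1)[0]
    return value if votes > len(copies) // 2 else None

three_way = [b"new", b"new", b"old"]   # one leg lost power mid-write
print(majority_wins(three_way))        # → b'new'
two_way = [b"new", b"old"]
print(majority_wins(two_way))          # → None: no majority to go on
```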
Except Synology uses MD RAID (Linux kernel RAID), and even in RAID 6 mode it doesn't do this.
It just overwrites the mismatched sector with a new value to make the parity data consistent. It doesn't know which copy is right or wrong, even though with RAID 6 it would technically be possible to determine.
Depending on what RAID level you are running, you will only know you have bitrot; you can't fix it, since you don't know which hard drive has the correct information.
Theoretically you could for RAID 6, but not for anything less than that. And in any case I'm pretty sure Linux RAID doesn't implement that. Though, as the owner of a Synology NAS, I'd love to be wrong about that.
Is there a Synology solution for checksums to prevent bitrot, which ZFS advocates talk about a lot?