How could the drive know? A TRIM command is the only way to tell the drive that a sector/block is free for writing. The drive might have been written to during testing or setup, so a block might not be empty any more.
You could use some sort of disk quota system to make the filesystem appear smaller than it actually is (and run a TRIM after applying that change), or simply ensure that you never exceed 80-90% used space.
It is also worth noting that many SSDs are over-provisioned by the manufacturer anyway; on those drives, manual over-provisioning may achieve very little.
The SSD maintains a translation table that maps all the logical addresses exposed by the drive to the underlying physical flash addresses. Any physical address not in that table is unallocated, and the drive can use it freely.
With most SSDs, there's no special explicit step necessary to overprovision a device. Just trim/unmap/discard a range of logical block addresses, and then never touch them again. The drive won't have any live data to preserve for those LBAs after they've been wiped by the trim operation, and the total amount of live data it is tracking will stay well below the advertised capacity of the drive.
The easiest way to achieve this is to create a partition with no filesystem, and use blkdiscard or similar to trim the LBAs corresponding to that partition.
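As a concrete sketch of that approach (the device name, partition number, and 10% figure are all hypothetical, and blkdiscard is destructive, so double-check the target before running anything like this):

```shell
# Hypothetical recipe: reserve ~10% of /dev/sdX as manual over-provisioning.
# WARNING: blkdiscard irreversibly discards all data in the given range.

# 1. Carve out the last ~10% of the disk as a partition with no filesystem,
#    e.g. with parted:
#      parted /dev/sdX mkpart overprovision 90% 100%
# 2. Discard the whole spare partition so the drive sees those LBAs as free
#    (assuming the new partition came up as /dev/sdX9):
blkdiscard /dev/sdX9
# 3. Never write to /dev/sdX9 again; the drive can keep treating that
#    range as spare area for wear levelling and garbage collection.
```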
Directly, no. But if you move the data to a new dataset with a command that preserves timestamps (rsync -a, or zfs send/recv), that would work, and it could be run from a cron job.
Compression settings are applied at the dataset level, so applying them to only some files within a dataset isn't practical.
From what I have heard, the amount of work needed to port native encryption to the original FreeBSD port of ZFS was actually what led to this project.
I can think of an example from New Zealand where an old law (sedition) was essentially ignored for several decades. A couple of years after its first use in a modern setting (the early 2000s), it was repealed.