Yes, ZFS makes this very easy. It's no problem to snapshot an entire filesystem with billions of files every 5 minutes from cron.
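For instance, a minimal cron-driven snapshot job could look like the sketch below (the dataset name `tank/data` and the script path are assumptions, not from the OP's setup; the `command -v` guard just makes the sketch safe to run on a box without ZFS):

```shell
#!/bin/bash
# Hypothetical /usr/local/sbin/zfs-snap, run from cron every 5 minutes, e.g.:
#   */5 * * * * root /usr/local/sbin/zfs-snap tank/data
DATASET="${1:-tank/data}"                # dataset name is an assumption
SNAPNAME="auto-$(date -u +%Y%m%d-%H%M)"  # timestamped name, e.g. auto-20240101-1205
if command -v zfs >/dev/null 2>&1; then
    zfs snapshot "${DATASET}@${SNAPNAME}"
else
    echo "would run: zfs snapshot ${DATASET}@${SNAPNAME}"
fi
```

Snapshots are cheap copy-on-write references, so taking one every 5 minutes costs almost nothing until files actually change.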
Then the OP could have done:
```shell
zfs-restore-file recording-16679.flv
```
With `zfs-restore-file` being the following script (example only; I hacked it up in a few minutes):
```shell
#!/bin/bash
FILE="$1"
FULL_PATH=$(realpath "$FILE")
DATASET=$(findmnt --target="${FULL_PATH}" --output=SOURCE --noheadings)
MOUNT_POINT=$(findmnt --source="${DATASET}" --output=TARGET --noheadings | head -n1)
CURRENT_INODE="$(stat -c %i "${FULL_PATH}")"
# Strip the mount point prefix with parameter expansion (sed would treat the
# path as a regex).
RELATIVE_PATH="${FULL_PATH#"${MOUNT_POINT}"/}"

# Iterate over all snapshots of the dataset containing the file, most recent first.
for SNAPSHOT in $( \
    zfs list -t snapshot -H -p -o creation,name "${DATASET}" \
    | sort -rn | awk '{print $2}' | cut -d@ -f2 \
); do
    echo "snapshot ${DATASET} @ ${SNAPSHOT}"
    SNAPSHOT_FILE="${MOUNT_POINT}/.zfs/snapshot/${SNAPSHOT}/${RELATIVE_PATH}"
    # 2>/dev/null: the file may not exist in this snapshot.
    SNAPSHOT_FILE_INODE="$(stat -c %i "${SNAPSHOT_FILE}" 2>/dev/null)"
    if [ -z "${SNAPSHOT_FILE_INODE}" ] || [ "${SNAPSHOT_FILE_INODE}" == "${CURRENT_INODE}" ]; then
        continue
    fi
    echo "found a file with the same name but a different inode:"
    ls -l "${SNAPSHOT_FILE}"
    cp -i "${SNAPSHOT_FILE}" "${FILE}"
    break
done
```
If the file was overwritten in place (same inode, new content), the inode check above won't find anything. In that case you could make another script that compares the file's size or hash instead, or manually pick the snapshot to restore from by its creation time.
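As a sketch of the hash-based variant (the helper names are mine, not part of any tool): compute a content hash for the current file and for each snapshot copy, and treat any snapshot whose hash differs as a restore candidate.

```shell
#!/bin/bash
# Hypothetical helpers for a hash-based comparison, replacing the inode check.
# Prints the sha256 of a file, or nothing if the file doesn't exist.
hash_of() { sha256sum "$1" 2>/dev/null | cut -d' ' -f1; }

# Succeeds if both files exist and their contents differ, i.e. the
# snapshot copy is a candidate for restoring.
differs() {
    local a b
    a="$(hash_of "$1")"
    b="$(hash_of "$2")"
    [ -n "$a" ] && [ -n "$b" ] && [ "$a" != "$b" ]
}

# In the loop above you would then use, e.g.:
#   if differs "${SNAPSHOT_FILE}" "${FULL_PATH}"; then
#       cp -i "${SNAPSHOT_FILE}" "${FILE}"; break
#   fi
```

This is slower than the inode check (it reads every snapshot copy), but it catches in-place overwrites that keep the inode.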