The graph constructed by the bloom filter-style hash functions supports a decoding process called "peeling", where you:
1. Find a batch with exactly 1 missing element (that batch's XOR is the element itself)
2. Delete that element from the other partitions it was assigned to
3. Repeat, as the modified batches may now be recoverable
This iterative process (surprisingly!) succeeds with very high probability as long as the number of partitions is at least ~1.22x the number of missing elements, with k=3 hash functions.
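The peeling loop above can be sketched roughly as follows. This is a minimal illustration, not the real data structure: the function name `peel`, the use of Python's salted `hash((x, salt))` as the k hash functions, and the queue-based scheduling are all my choices here (it also assumes nonzero elements, since an element of 0 is invisible to XOR):

```python
import random

def peel(missing, num_cells=None, k=3, seed=0):
    """Sketch of peeling: hash each missing element into k cells, then
    repeatedly drain any 'pure' cell (count == 1) and remove its element
    from the other cells it was assigned to."""
    if num_cells is None:
        # just above the ~1.22x threshold for k=3
        num_cells = int(1.23 * len(missing)) + 1
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(k)]

    def cells_for(x):
        # k hash functions via salts; a set so duplicate cells don't
        # cancel the same element out of one cell twice
        return {hash((x, s)) % num_cells for s in salts}

    xor_sum = [0] * num_cells
    count = [0] * num_cells
    for x in missing:
        for c in cells_for(x):
            xor_sum[c] ^= x
            count[c] += 1

    recovered = []
    queue = [c for c in range(num_cells) if count[c] == 1]
    while queue:
        c = queue.pop()
        if count[c] != 1:
            continue  # cell changed since it was enqueued
        x = xor_sum[c]  # a pure cell's XOR is the element itself
        recovered.append(x)
        for d in cells_for(x):  # peel x out of all its cells
            xor_sum[d] ^= x
            count[d] -= 1
            if count[d] == 1:
                queue.append(d)  # newly pure cell: keep going
    return recovered
```

With cells comfortably above the threshold this recovers everything; right at 1.22x it succeeds with high probability but can stall on a small cycle among the remaining cells.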
A property of the initial XOR trick for 2 missing elements is that it guarantees finding a way to partition in one pass (and with very trivial code; no hashing involved!), which is lost by replacing that with hashing. (The original trick does take two passes: finding the bit to partition on, then doing the actual partitioning, whereas hashing is 1+ε passes. But the first pass in the original is just an xor-fold, and the partitioning pass really only needs an "accumulator ^= (current_val & mask) ? current_val : 0" (the other partition is just the xor of the results of both passes), both of which can be trivially parallelized and SIMD'd with O(1) extra memory usage.)
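For concreteness, here is that two-pass trick as a small Python sketch (the name `find_two_missing` and the choice of the lowest set bit as the mask are mine; any set bit of x^y works):

```python
from functools import reduce
from operator import xor

def find_two_missing(expected, observed):
    """Classic XOR trick for exactly two missing elements x and y.
    Pass 1: xor-fold expected and observed together -> x ^ y.
    Pass 2: partition on a set bit of (x ^ y), xor-fold one partition."""
    expected = list(expected)
    observed = list(observed)
    xy = reduce(xor, expected) ^ reduce(xor, observed, 0)  # == x ^ y
    mask = xy & -xy  # lowest set bit: x and y differ in this bit
    # the "accumulator ^= (current_val & mask) ? current_val : 0" pass
    acc = 0
    for v in expected:
        if v & mask:
            acc ^= v
    for v in observed:
        if v & mask:
            acc ^= v
    return acc, xy ^ acc  # one element from each partition
```

E.g. with expected 1..8 and 3 and 6 missing, `find_two_missing(range(1, 9), [1, 2, 4, 5, 7, 8])` returns 3 and 6.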
The approach in my comment still guarantees finding partitions, and still avoids actual hashing or anything strictly probabilistic, but it does lose the extreme triviality and mechanical sympathy of the original approach.
Can't we use this again? I mean:
1. Partition the data so that some batches end up with at most 1 missing element.
2. Recover the elements where possible with the XOR trick.
3. Pick another hash function, then repeat to find more missing elements.
4. Repeat until no missing elements remain.