In their data modelling pages they mention that you should break rows up into separate keys per column, or separate keys per field in a document. This is indeed how many databases model rows on a distributed KV store, so that might be how they achieved 100 TB.
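For the curious, here's a minimal sketch of that per-column layout using FoundationDB's Python bindings and the tuple layer. The subspace name, API version, and row shape are my own illustrative assumptions, not anything from their docs:

```python
import fdb

fdb.api_version(710)  # assumption: a 7.1-era client; pin to whatever you have installed
db = fdb.open()       # uses the default cluster file

# Tuple-layer prefix acting as the "table"; the name is made up for this example
user = fdb.Subspace(('user',))

@fdb.transactional
def set_row(tr, user_id, row):
    # One key per column: ('user', user_id, column) -> value
    for column, value in row.items():
        tr[user.pack((user_id, column))] = value.encode()

@fdb.transactional
def get_row(tr, user_id):
    # A single range read over the ('user', user_id) prefix returns the whole row
    return {
        user.unpack(k)[1]: v.decode()
        for k, v in tr[user.range((user_id,))]
    }

set_row(db, 42, {'name': 'Ada', 'email': 'ada@example.com'})
print(get_row(db, 42))  # {'email': 'ada@example.com', 'name': 'Ada'}
```

The nice property of this layout is that a whole row comes back with one range read over its key prefix, while individual columns can be read or updated without touching the rest of the row.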
However, you still have the issue that any single key-value pair has to fit within their size limits (the docs cap keys at 10 KB and values at 100 KB, if I remember right). (But it's not like people typically store enormous blobs in Postgres or MySQL either, I think?)
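If you ever did need a bigger value, the usual workaround on a KV store is to chunk it across several keys yourself. A rough sketch of that pattern, again with the Python bindings; the chunk size, subspace name, and key layout are all assumptions of mine:

```python
import fdb

fdb.api_version(710)
db = fdb.open()

CHUNK = 90_000  # stays under the documented 100,000-byte value limit
blobs = fdb.Subspace(('blob',))

@fdb.transactional
def write_blob(tr, name, data):
    # Clear any stale chunks, then split the blob across numbered keys:
    # ('blob', name, 0), ('blob', name, 1), ...
    tr.clear_range_startswith(blobs.pack((name,)))
    for i in range(0, len(data), CHUNK):
        tr[blobs.pack((name, i // CHUNK))] = data[i:i + CHUNK]

@fdb.transactional
def read_blob(tr, name):
    # Integer chunk indices sort correctly in the tuple layer,
    # so a range read reassembles the blob in order
    return b''.join(v for _, v in tr[blobs.range((name,))])

write_blob(db, 'report.pdf', b'\x00' * 250_000)  # spans three chunks
assert len(read_blob(db, 'report.pdf')) == 250_000
```

One caveat: FoundationDB also limits how much a single transaction can write (around 10 MB), so a truly huge blob would have to be split across multiple transactions on top of this.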
The documentation is woefully out of date, sadly. Despite the code being in active development, no one is touching the public docs. I don't know for sure, but that limitation was probably written something like ten years ago.
The only part that isn't well explained is the claim that "FoundationDB has been tested with databases up to 100 TB".