
In their data modelling pages they suggest breaking a row up into a separate key per column (or, for documents, a separate key per field). This is indeed how many databases model rows on top of a distributed KV store, so that may be how they reached 100 TB.
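To make the layout concrete, here is a minimal sketch of the "one key per column" idea. The key encoding (table/primary-key/column) and the dict standing in for the KV store are assumptions for illustration, not any particular database's API:

```python
kv = {}  # stand-in for a distributed KV store


def put_row(table, pk, row):
    """Store each column of a row under its own key."""
    for column, value in row.items():
        # e.g. "users/42/email" -> "alice@example.com"
        kv[f"{table}/{pk}/{column}"] = value


def get_row(table, pk, columns):
    """Reassemble a row by reading one key per column."""
    return {c: kv.get(f"{table}/{pk}/{c}") for c in columns}


put_row("users", 42, {"name": "Alice", "email": "alice@example.com"})
print(get_row("users", 42, ["name", "email"]))
```

Splitting rows this way also spreads a single table across many keys, which is what lets the data shard across the cluster.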

However, you still have the issue that any single key-value pair has to fit within their size limit. (Then again, I don't think people typically store enormous blobs in Postgres or MySQL either?)
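The usual workaround, if you do need a value bigger than the per-pair limit, is to chunk it across several keys. A hypothetical sketch (the 100 KB limit and key scheme are made up for the example):

```python
MAX_VALUE_BYTES = 100_000  # assumed per-pair limit; varies by database

kv = {}


def put_blob(key, data: bytes):
    """Split an oversized value into chunks, each under the limit."""
    chunks = [data[i:i + MAX_VALUE_BYTES]
              for i in range(0, len(data), MAX_VALUE_BYTES)]
    kv[f"{key}/len"] = len(chunks)
    for i, chunk in enumerate(chunks):
        kv[f"{key}/{i}"] = chunk


def get_blob(key) -> bytes:
    """Reassemble the value by reading its chunks in order."""
    n = kv[f"{key}/len"]
    return b"".join(kv[f"{key}/{i}"] for i in range(n))


put_blob("images/logo", b"\x89PNG" + b"\x00" * 250_000)
assert len(get_blob("images/logo")) == 250_004
```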


