
Having the best performance isn't necessary for a lot of use cases. Sometimes you just want to store and search a bunch of JSON objects, and for that the Mongo API is way more convenient than the Postgres/SQLite JSON options.
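To make the convenience claim concrete, here is a rough sketch of the two styles side by side: a tiny Mongo-style filter matcher (mimicking the shape of a `find({"age": {"$gt": 40}})` query, not MongoDB's actual implementation) versus the same search written as SQL over JSON stored in SQLite. It assumes your Python's bundled SQLite includes the JSON1 functions, which standard builds do.

```python
import json
import sqlite3

docs = [
    {"name": "ada", "age": 36},
    {"name": "grace", "age": 45},
]

# Mongo-style: the query is itself a document. This matcher handles only
# equality and $gt, just enough to illustrate the shape of the API.
def mongo_find(docs, flt):
    def matches(d):
        for key, cond in flt.items():
            if isinstance(cond, dict):
                if "$gt" in cond and not d.get(key, float("-inf")) > cond["$gt"]:
                    return False
            elif d.get(key) != cond:
                return False
        return True
    return [d for d in docs if matches(d)]

# SQL style: the same search over JSON text stored in SQLite.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (doc TEXT)")
con.executemany("INSERT INTO t VALUES (?)", [(json.dumps(d),) for d in docs])
rows = con.execute(
    "SELECT doc FROM t WHERE json_extract(doc, '$.age') > 40"
).fetchall()

print(mongo_find(docs, {"age": {"$gt": 40}}))  # [{'name': 'grace', 'age': 45}]
print([json.loads(r[0]) for r in rows])        # same result via SQL
```

Both return the same documents; the difference is that the Mongo query stays in the host language's data structures, while the SQL version round-trips through strings and `json_extract` paths.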


I am not too familiar with MongoDB, but how does its API compare to that of an ancient key-value store library like Berkeley DB (db.get/put/delete/...)?

(Asking in the context of Mongita, or rather, a file-based key-value store)
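For comparison, the Berkeley DB-style get/put/delete surface looks roughly like Python's stdlib `dbm` module (the old `bsddb` module actually wrapped Berkeley DB). This is a sketch of that API shape, not MongoDB's; `kv_path` is just a throwaway file.

```python
import dbm
import os
import tempfile

# A Berkeley DB-style store maps byte keys to byte values, nothing more.
kv_path = os.path.join(tempfile.mkdtemp(), "kv")
db = dbm.open(kv_path, "c")            # 'c' = create the file if missing
db[b"user:1"] = b'{"name": "ada"}'     # put
db[b"user:2"] = b'{"name": "grace"}'
print(db[b"user:1"])                   # get    -> b'{"name": "ada"}'
del db[b"user:1"]                      # delete
print(b"user:1" in db)                 # False
db.close()
```

The key difference from a document API: a key-value store only lets you look values up by exact key, whereas MongoDB-style `find` queries can filter on fields inside the stored documents.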


There isn't a relationship between the two in terms of API. However, the team that designed the Berkeley DB storage engine went on to commercialise it via a company called Sleepycat Software. That company was acquired by Oracle. The team subsequently left Oracle to found WiredTiger (Sleepycat, WiredTiger, geddit? :-)), which focussed on building a modern database storage engine for servers with high CPU counts and large memory footprints. In 2014 MongoDB acquired WiredTiger, and it is now the default storage engine for MongoDB.

Interesting footnote: Michael Cahill is the primary author of "Serializable Isolation for Snapshot Databases" [0], the paper that introduced Serializable Snapshot Isolation (SSI). SSI is the concurrency control mechanism used in Postgres [1][2].

[0] - https://courses.cs.washington.edu/courses/cse444/08au/544M/R...

[1] - https://wiki.postgresql.org/wiki/SSI

[2] - http://drkp.net/papers/ssi-vldb12.pdf



