Yes, you can extract values from JSON, filter based on those values, and even create indices - not sure if that counts as "Mongo-style queries", but I've used it and it works pretty well:
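A rough sketch of what that looks like (table and key names here are made up for illustration):

```sql
CREATE TABLE events (id serial PRIMARY KEY, doc jsonb);

INSERT INTO events (doc)
VALUES ('{"type": "click", "user": "alice"}');

-- ->> extracts a value as text, so you can filter on it:
SELECT doc ->> 'user' AS username
FROM events
WHERE doc ->> 'type' = 'click';

-- An expression index on one specific key:
CREATE INDEX events_type_idx ON events ((doc ->> 'type'));
```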
The only thing to note about current JSON support in PostgreSQL is that, as far as I know, there is no way to update parts of a JSON document - you have to rewrite the entire document each time. I've not found this to be a big headache, but YMMV.
One important difference is that you don't need to extract the values to create an index. E.g. in Mongo I believe you still have to decide which keys to index. A GIN index on top of jsonb in PostgreSQL will make it possible to do an indexed search on any key, not just those that are specified at create index time.
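To make that concrete, here's a sketch (assuming a table with a jsonb column called `doc`) - one GIN index covers containment lookups on any key:

```sql
-- A single GIN index over the whole document:
CREATE INDEX events_doc_idx ON events USING gin (doc);

-- Both of these containment queries can use the same index,
-- even though neither key was named when the index was created:
SELECT * FROM events WHERE doc @> '{"type": "click"}';
SELECT * FROM events WHERE doc @> '{"user": "alice"}';
```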
Isn't this the case for PostgreSQL data in general? I.e., on updates, new data rows are written containing all the field values, whether or not they have changed.
It's more the case that when you want to reach into an existing stored document and change a single value, you have to rewrite the entire document from your application.
On the reading side there are functions and operators that allow you to reach into stored JSON and extract the parts you want. What would be nice would be to be able to do something similar for updates - although this is clearly more complex than reading, so I can see why it has been done this way.
Edit: I guess the most general solution would be to directly support something like JSON Patch:
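For reference, a JSON Patch document is itself JSON: an ordered list of operations to apply to a target document. A small example (assuming the target has a `user` field and a `tags` array):

```json
[
  { "op": "replace", "path": "/user",   "value": "bob" },
  { "op": "add",     "path": "/tags/-", "value": "beta" }
]
```

Something like this would let the server apply just the delta instead of the client shipping the whole document back.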
Yes, but you don't have to specify all the other columns that aren't being updated.
With json/jsonb, you have to provide the entire object graph each time you update it. You can't update one field in the graph, which could be a pain if you have concurrent updates.
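Concretely (hypothetical table and document, to show the shape of the problem):

```sql
-- Stored document: {"user": "alice", "visits": 42, "plan": "pro"}
-- To bump only "visits", the application still has to
-- write the whole object back:
UPDATE events
SET doc = '{"user": "alice", "visits": 43, "plan": "pro"}'
WHERE id = 1;
```

With two concurrent writers doing this, the last write wins and silently clobbers the other's changes to unrelated fields.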
Hopefully we'll be able to update parts of the jsonb columns sometime.
You're right, because MVCC works on immutable rows. The structure can be broken up to accommodate large values[1], which saves updating parts that don't change, but json data is a single column and updates need to copy the entire document.
I should have been clearer. What I meant was that in-place partial updates are not going to happen any time soon. This is mostly a performance issue: updating a value inside a JSON document will never be lightning fast, because you need to copy the whole row and clean up the old version after the transaction has committed. JSON Patch would avoid transferring the JSON document back and forth over the network, but the local copy would still need to happen.
True. I think reducing network overhead, removing one SQL call (currently need SELECT + UPDATE), and simplifying application code is a good start and probably fast enough for most use cases.
Do you have a reference for that - I had a quick look through the documentation for this new feature and it looks mainly like a better way of storing and accessing JSON data - not sure it helps with updates.
[NB Not being snarky... genuinely curious about this!]
It provides a Mongo-like API directly on Postgres, using normal tables/columns.
I've used it on some projects in the past and it's fairly straightforward, and it makes it easy to port an app from Mongo to Postgres.
Does this mean I can do Mongo-style queries, retrieving a set of documents which match particular key: value criteria, using PostgreSQL?