
> they bolt it onto Postgres after realizing they have availability or scale needs beyond what a relational database can do, then they bolt on Elasticsearch to enable querying, and then they bolt on Redis to make the disjointed backend feel fast.

This made my head explode. Why would you explicitly join two systems made to solve different problems? This sounds like a lack of architectural vision. Postgres's query-anything model, which needs no upfront access design, inherently clashes with DynamoDB's; the same goes for the Elasticsearch scenario: DynamoDB was not made to query everything, it's made to query specifically what you designed to be queried and nothing else. Redis sort of makes sense to gain a bit of speed for some particular access pattern, but you still lack collection-level querying with it.

In my experience, leave DynamoDB alone and it will work great. Automatic scaling eventually works out cheaper if you've done your homework on knowing your traffic.
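
To make the access-pattern point concrete, here is a minimal boto3 sketch; the table, keys, and attribute names are made up for illustration, not from the thread:

    import boto3
    from boto3.dynamodb.conditions import Attr, Key

    # Hypothetical table: partition key "customer_id", sort key "order_date".
    table = boto3.resource("dynamodb").Table("orders")

    # Fast, because it matches the key design the table was built around:
    by_customer = table.query(
        KeyConditionExpression=Key("customer_id").eq("c-123")
        & Key("order_date").begins_with("2023-")
    )

    # An ad-hoc predicate on a non-key attribute has no index to use; Scan
    # reads the entire table, which is the gap people try to fill with
    # Elasticsearch:
    ad_hoc = table.scan(FilterExpression=Attr("status").eq("shipped"))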



> In my experience, leave DynamoDB alone and it will work great.

My experience agrees with yours and I'm likewise puzzled by the grandparent comment. But a shout-out to DAX (DynamoDB Accelerator), which makes it scale through the roof:

https://aws.amazon.com/dynamodb/dax/


If you add DAX, you are not guaranteed to read your own writes. Terrible consistency model. https://docs.aws.amazon.com/amazondynamodb/latest/developerg...


> Terrible consistency model.

Judging a consistency model as "terrible" implies that it does not fit any use case and therefore is objectively bad.

On the contrary, there are plenty of use cases where eventual consistency is a perfect fit. To see that, you only have to look at how every major database server offers it as an option; just one example:

https://www.compose.com/articles/postgresql-and-per-connecti...
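
The truncated link appears to describe PostgreSQL's per-connection synchronous_commit setting; assuming so, a minimal psycopg2 sketch of that opt-in (connection details and the table are placeholders):

    import psycopg2

    conn = psycopg2.connect(host="localhost", dbname="app", user="app")
    cur = conn.cursor()

    # Per-connection opt-in: commits on this session return before the WAL
    # is flushed to disk. A crash can lose the last few transactions on
    # this connection, but every other session keeps full durability.
    cur.execute("SET synchronous_commit TO OFF")
    cur.execute("INSERT INTO events (payload) VALUES (%s)", ("click",))
    conn.commit()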


I think the main advantage of DDB is being serverless. Adding a server-based layer on top of it doesn't make sense to me.

I have a theory that it would be better to have multiple table replicas for read access. At the application level, you randomize access to those tables according to your read-scaling needs.

Use the main table's stream and a Lambda to keep the replicas in sync.

Depending on your traffic, this might end up costing more than DAX, but you remain fully serverless, using the exact same technology model, and you keep control over the consistency model; a rough sketch follows below.

Haven't had the chance to test this in practice, though.
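
A rough sketch of that scheme, assuming Python/boto3, made-up table names, and a stream configured with the NEW_IMAGE view type:

    import random

    import boto3
    from boto3.dynamodb.types import TypeDeserializer

    dynamodb = boto3.resource("dynamodb")
    _deser = TypeDeserializer()

    # Hypothetical replica table names: each is a full copy of the main
    # table, kept in sync by the stream handler below.
    REPLICAS = ["users_replica_1", "users_replica_2"]


    def _plain(attrs):
        """Convert DynamoDB-JSON from the stream into plain Python values."""
        return {k: _deser.deserialize(v) for k, v in attrs.items()}


    def replicate(event, context):
        """Lambda handler on the main table's stream: replay every change
        onto each replica. Replicas lag briefly, i.e. they are eventually
        consistent by design."""
        for record in event["Records"]:
            for name in REPLICAS:
                table = dynamodb.Table(name)
                if record["eventName"] == "REMOVE":
                    table.delete_item(Key=_plain(record["dynamodb"]["Keys"]))
                else:  # INSERT or MODIFY
                    table.put_item(Item=_plain(record["dynamodb"]["NewImage"]))


    def read(key):
        """Application-level read: pick a replica at random to spread load."""
        table = dynamodb.Table(random.choice(REPLICAS))
        return table.get_item(Key=key).get("Item")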


Thanks - I've seen DAX mentioned and possibly even recommended. I don't need faster DynamoDB that much.


You choose your consistency on reads. However, DAX won't help you much on a write-heavy workload.
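
For reference, this is roughly what the per-read choice looks like in boto3 (table and key are hypothetical):

    import boto3

    table = boto3.resource("dynamodb").Table("orders")  # hypothetical table

    # Default: eventually consistent read. Half the read-capacity cost, but
    # it may not reflect a write that completed moments ago.
    maybe_stale = table.get_item(Key={"order_id": "o-1"})

    # Strongly consistent read: reflects all prior successful writes, at
    # double the capacity cost. DAX passes these through to DynamoDB rather
    # than serving them from cache, so it doesn't speed them up.
    fresh = table.get_item(Key={"order_id": "o-1"}, ConsistentRead=True)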


In my experience, NoSQL is almost never the right answer.

And DynamoDB is worse than most.

My prediction is that the future is in scalable SQL; CockroachDB or YugabyteDB or similar.

NoSQL actually causes more problems than it solves, in my experience.


There are plenty of cases where NoSQL is the right answer. The biggest is when you care more about predictable performance: https://brooker.co.za/blog/2022/01/19/predictability.html?s=...


As long as you consider "can just fail if it gets too busy" to be "predictable."

Which I don't. I'd rather see reliable operation than "predictable except when it fails outright" in almost every situation.

If you've encountered that other kind of situation, where failures are fine, then great. But I still assert that's a tiny minority of real-life DB use cases.



