It's intended for high-performance computing, where current CPU designs depend on data locality for throughput.
I agree that it's a harmful design for business data. Programmers want to push their runtime data model into the database, with no interest in the operational, maintenance, and performance problems this causes. When someone suggests this kind of thing, I'll ask them: "How do we diagnose performance problems with this technology when there are 100,000 concurrent users and millions of data elements?" The rows-and-columns people can answer that question.
> When someone suggests this kind of thing, I'll ask them "how do we diagnose performance problems with this technology when there are 100,000 concurrent users and millions of data elements?"
I don't understand; the exact same performance diagnostics work in both cases. Why is this different? There's nothing intrinsically less performant about this approach. You really think your checkerboard tables and long lists of columns with names like "VALUE12" and "VALUE13", plus the multiple different kinds of key/value pairs you jammed in there for different clients -- you think those perform better!?
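To make the comparison concrete, here is a minimal sketch of the two schema styles under debate: a "wide" table with anonymous VALUE columns versus a key/value (entity-attribute-value) table. All table, column, and attribute names are invented for illustration; the point is only that the same diagnostic tools (indexes, query plans) apply to both. Uses only `sqlite3` from the Python standard library.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Style 1: anonymous columns -- the meaning of value12/value13
# lives only in application code.
cur.execute(
    "CREATE TABLE client_data (client_id INTEGER, value12 TEXT, value13 TEXT)"
)
cur.execute("INSERT INTO client_data VALUES (1, 'blue', '42')")

# Style 2: key/value pairs -- one row per attribute.
cur.execute(
    "CREATE TABLE client_attrs (client_id INTEGER, attr TEXT, value TEXT)"
)
cur.executemany(
    "INSERT INTO client_attrs VALUES (?, ?, ?)",
    [(1, "color", "blue"), (1, "limit", "42")],
)
cur.execute("CREATE INDEX idx_attrs ON client_attrs (client_id, attr)")

# The same diagnostic machinery works for either shape:
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT value FROM client_attrs "
    "WHERE client_id = ? AND attr = ?",
    (1, "color"),
).fetchall()
print(plan)  # the planner reports how it searches, same as for client_data
```

Neither layout is invisible to the optimizer; the disagreement in the thread is about maintainability and self-description, not about whether the database can be profiled.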
> 100,000 concurrent users
Do you actually have 100,000 concurrent users? Really? You don't, do you? You just kinda hope you will eventually. And again: this approach is no worse for that.
> millions of data elements
This is absolute peanuts for any modern database system. It's weird that this is your extreme example.