I wonder how much the data in the demo helps sell the readability and space-efficiency. Anything looks tidy when there are only 3-4 rows of data being joined, but that sort of well-behaved, small-scale hello-world database isn't difficult to work with in any tool.
How does it handle it when the table I join pulls in a thousand matching rows for each key?
There's a dataset like that in the demo at https://vimeo.com/manage/videos/553439658 . Ultorg handles larger datasets through a number of strategies:

(1) automatic LIMIT clauses with an infinite-scroll-like mechanism, plus various rules for dealing with the nested data structure,
(2) a multi-threaded UI that keeps the visual query representation responsive even during long-running queries,
(3) automatic cancellation of ongoing queries if the user makes a change, e.g. adds a filter,
(4) "frozen" headers and values that keep the data visually coherent even as you scroll down multiple pages,
(5) a separate "single-column" form layout type that works well when you have a few grouping fields followed by a long subtable,

and a few others.
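To make (1) and (3) a bit more concrete, here is a minimal sketch in Python of paged fetching with an automatic LIMIT, plus cancellation when the user edits the query. This is not Ultorg's actual code; the table, column names, and page size are invented for illustration, and the real system also has to handle nested result structures and a multi-threaded UI.

    import sqlite3
    import threading

    PAGE_SIZE = 200  # hypothetical page size backing the "infinite scroll"

    def fetch_pages(conn, customer_id, cancel: threading.Event):
        """Yield successive pages of matching rows until exhausted or cancelled."""
        offset = 0
        while not cancel.is_set():
            rows = conn.execute(
                "SELECT * FROM orders WHERE customer_id = ? "
                "ORDER BY order_id LIMIT ? OFFSET ?",
                (customer_id, PAGE_SIZE, offset),
            ).fetchall()
            if not rows:
                return
            yield rows
            offset += PAGE_SIZE  # the next page is requested only when the UI scrolls

    # The UI thread sets `cancel` as soon as the user changes the query (e.g.
    # adds a filter), so a scan over thousands of matching rows stops early.
    cancel = threading.Event()
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INTEGER, customer_id INTEGER)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [(i, i % 3) for i in range(1000)])
    for page in fetch_pages(conn, customer_id=1, cancel=cancel):
        print(f"fetched {len(page)} rows")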
> I wonder how much the data in the demo helps sell the readability and space-efficiency.
This is a very good question. In the section of the video on automatic form layout, it is strongly implied that the auto-layout is driven by semantic inference from the column names. I can see this breaking easily -- anyone who's ever inherited a production database knows that column names are often cryptic or context-specific.
It also makes me wonder how well the system handles databases with non-English identifiers.
Column names are used primarily to inform the detection of "heading" fields, i.e. fields that should be given a larger font in the form layout, or fields that should be shown by default when adding a join to another table. The other heuristics are based on things such as data type, average length of strings, and average join cardinalities.
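For readers wondering what such heuristics might look like, here is a rough sketch in Python. The name hints, weights, and thresholds are made up, and the real system also weighs join cardinalities and other signals; the point is only that a column can score well even when its name is cryptic.

    from statistics import mean

    NAMEY_HINTS = ("name", "title", "label")  # hypothetical name cues

    def heading_score(column_name: str, values: list) -> float:
        """Score how likely a column is to serve as a 'heading' field."""
        score = 0.0
        # The column name is only one signal, so cryptic names are not fatal.
        if any(h in column_name.lower() for h in NAMEY_HINTS):
            score += 1.0
        strings = [v for v in values if isinstance(v, str)]
        if strings:
            score += 1.0                    # text columns beat numeric ones
            if 3 <= mean(len(s) for s in strings) <= 40:
                score += 1.0                # short-ish strings look like titles
        return score

    # A cryptic column name can still score well if its data looks name-like.
    print(heading_score("fld_07", ["Acme Corp", "Globex", "Initech"]))  # 2.0
    print(heading_score("unit_price", [19.99, 5.25, 3.10]))             # 0.0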
(Coincidentally, the system is named after a cryptic, context-specific column name in a real dataset the author once worked on.)