> Most problems don't require a general graph. They require subsets of a graph, because everything is a subset of a graph. A list is a subset of a graph. A map is. A tree is. A relational database is. A directed acyclic graph is. But we can stop before it's just any "graph". We can stop at DAG, like neural networks do, or even at tree, like programming language syntax does ("abstract syntax tree", "expression tree"). Code and data are two faces of the same thing.
And if you have even a single edge that violates the stricter structure's properties, you are left with a general graph. One edge is enough, hence my point.
> but because it removed freedom in a highly specific way, which enabled larger more reliable systems, contrary to naive expectations.
That I agree with, but I personally believe it is too restrictive in the case of memory models. Especially since the price really is not high -- modern GCs have insanely good throughput with bounded latency, the only price being a slightly larger memory footprint.
> And if you have a single edge not fulfilling the stricter structure's properties, you are left with a general graph.
And this is why it's not good to have a general graph as a baseline model: it takes only one mistake, or one temptation to stray from a structure with given properties, and you lose all the benefits of the constraint.
"This object will be passed deep into a call tree, but never have mutable methods called on it from more than one place in a process, but it'll be read from multiple places". Good luck enforcing this in a general object graph.
But if you eliminate mutable shared state, say like Rust does (not that I endorse Rust's exact specifics), then you don't have to worry about it. Rust doesn't implement just a general object graph where everyone can have a reference to everything all the time. It has constraints.
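A minimal Rust sketch of that constraint (names here are illustrative, not from the thread): any number of shared immutable references may coexist, but a mutable borrow is exclusive, so the "mutated from one place, read from many" invariant from the quote above is checked at compile time rather than hoped for.

```rust
// Sketch of Rust's aliasing-XOR-mutability rule (illustrative example).
fn tweak(s: &mut String) {
    s.push_str("-patched"); // the one place allowed to mutate
}

fn main() {
    let mut config = String::from("v1");

    // Any number of simultaneous readers is fine.
    let r1 = &config;
    let r2 = &config;
    assert_eq!(r1, r2);

    // Once the immutable borrows are no longer used, one exclusive
    // mutable borrow is allowed -- but never alongside live readers:
    // calling `tweak(&mut config)` while `r1` was still in use would
    // be rejected with error[E0502] at compile time.
    tweak(&mut config);
    assert_eq!(config, "v1-patched");
}
```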
> That I agree with, but I personally believe it is too restrictive in case of memory models. Especially that the price really is not high -- modern GCs have insanely good throughputs with bounded latency, the only price being a slightly larger memory footprint.
However, as I noted in the link, the GC itself is the smallest price you pay.
Also, it's not a slightly larger memory footprint: GC languages typically take 2x the RAM.
I do believe that Rust's choice is correct for its particular niche, but it is also a perfect example of how limiting that constraint is (just look at any Rust help forum), and not just for beginners -- you also get burned by it at the other end of the spectrum (e.g. when implementing lock-free algorithms). Sure, there are escape hatches, and on most other topics I would agree with you that a constraint with escape hatches is the correct solution (e.g. type systems with casts), but I still feel that the memory model is not the correct place to enforce these. (For the Rustaceans: no, I'm not saying Rust is a bad language. I like it, and it is absolutely a gift to low-level programming. It's just not necessarily a good fit for high-level tasks, for me at least.)
You mentioned SQL. The reason it can be so performant is that it doesn't let the user over-specify their constraints, letting the computer optimize its plans, storage layout, etc. I'm sure one could hand-write a particular query to run faster than the DB executes it, but it is definitely not an easy task. So strangely enough, both strong and weak constraints can give you good performance.

To get back to the exact topic at hand, let me reference Rich Hickey's term: "Place-Oriented Programming", referring to the notion that 'objects' are constrained to a given physical location, instead of being what they should be: semantics. When you write a program and you use a list, you don't actually care about that list being in a particular place, having to be moved when more items are added to it, etc. -- those are all implementation details. You care about its "list-ness" only. I see GCs with arbitrary graphs as allowing for this mental model. (Note that an object not having identity, i.e. being a value type, is a semantic question, and that allows plenty of optimizations by a decent runtime, so I'm not saying we shouldn't care about performance.)
> GC languages typically take 2x the RAM.
There is a niche where that is a huge price, but in 99% of cases I believe we can afford it. Also, GC'd languages have their own escape hatches when needed.
EDIT:
Nonetheless, I hope I don't sound too argumentative; I genuinely enjoy this discussion and our differing viewpoints. I'm engaging with curiosity, even if the tone suggests otherwise -- I'm not a native speaker.
Rust was trying to solve the correct problem. But it's debatable whether it provides the right solution. My camp, so far, is that everything should be a value, i.e. copy semantics with structural sharing. This solves an enormous set of problems, including controllable (side) effects, while still permitting efficient local in-place mutation of state (unlike FP).
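A toy Rust sketch of what "copy semantics with structural sharing" can mean (the types and names here are made up for illustration): a persistent list where "copying" by prepending never mutates the original, and both versions share the unchanged tail via reference counting.

```rust
use std::rc::Rc;

// Hypothetical persistent list: prepending builds a new head that
// points at the existing tail, so old and new versions share structure.
enum List {
    Nil,
    Cons(i32, Rc<List>),
}

fn prepend(tail: &Rc<List>, head: i32) -> Rc<List> {
    Rc::new(List::Cons(head, Rc::clone(tail)))
}

fn sum(list: &Rc<List>) -> i32 {
    match **list {
        List::Nil => 0,
        List::Cons(v, ref rest) => v + sum(rest),
    }
}

fn main() {
    let base = prepend(&Rc::new(List::Nil), 2); // [2]
    let a = prepend(&base, 1);                  // [1, 2]
    let b = prepend(&base, 10);                 // [10, 2], shares [2] with `a`
    assert_eq!(sum(&a), 3);
    assert_eq!(sum(&b), 12);
    // `base` is alive in three places: the binding plus the two tails.
    assert_eq!(Rc::strong_count(&base), 3);
}
```

Each version behaves like an independent value, yet no data was deep-copied; that is the sense in which value semantics and efficiency can coexist.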