By internal contradictions, do you mean conflicting evidence in the relationship between topics or metrics? That will (and does) come up regularly: peer-reviewed studies investigating the same topics often report differently measured (or contradictory) results. We have tools for assessing the statistical quality of submitted relationships (through things like statistical reproducibility, algorithm type, statistical controls, etc.), so unreproducible or statistically unlikely relationships will be clearly flagged as such. Building tools to programmatically test the reproducibility of evidence is definitely something we've thought about (if that's the "formal verification" you're referring to).
Ultimately the goal will be to (statistically) approximate the sum total of all evidence between pairs of topics, and also to provide users with the tools and sources to assess (and apply!) that evidence.
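For concreteness, "statistically approximating the sum total of all evidence" between a pair of topics could look something like a fixed-effect inverse-variance meta-analysis. This is only an illustrative sketch, not the product's actual method; the function name and the study numbers are hypothetical:

```python
import math

def pool_evidence(estimates):
    """Fixed-effect inverse-variance pooling of per-study effect
    estimates. Each entry is (effect, standard_error); studies with
    smaller standard errors get proportionally more weight."""
    weights = [1.0 / se ** 2 for _, se in estimates]
    pooled = sum(w * e for (e, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies of the same topic pair, one conflicting:
studies = [(0.30, 0.10), (0.25, 0.08), (-0.05, 0.20)]
effect, se = pool_evidence(studies)
```

Note how the noisy negative study pulls the pooled estimate down only slightly, because its large standard error gives it little weight; a real system would also want a random-effects model to account for between-study heterogeneity.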
2. The criteria for reproducibility seem a little rough to me; they seem distant from established approaches like registered replication, publication-bias analysis, etc.
3. My gut impression is you need some kind of meta-scientific model, e.g. something that models the probability of an association being studied, the observed association conditional on that, heterogeneity of effect sizes, etc.
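A toy version of the kind of meta-scientific model being suggested here: if publication depends on statistical significance, the published record overstates the true effect, which is exactly what a naive aggregator would then ingest. All names and parameters below are hypothetical:

```python
import random
import statistics

def simulate_literature(n_studies=10000, true_effect=0.1, se=0.15,
                        p_publish_if_null=0.2, seed=0):
    """Toy generative model of a literature: each study observes the
    true effect plus Gaussian noise; 'significant' results (|z| > 1.96)
    are always published, null results only with some probability.
    Returns the mean published effect, which ends up inflated."""
    rng = random.Random(seed)
    published = []
    for _ in range(n_studies):
        observed = rng.gauss(true_effect, se)
        significant = abs(observed / se) > 1.96
        if significant or rng.random() < p_publish_if_null:
            published.append(observed)
    return statistics.mean(published)
```

With these parameters the mean published effect comes out well above the true effect of 0.1, illustrating why a pooling step that ignores the publication process will be biased.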
4. Along those lines, I wonder if there's an implicit schema of looking for nonzero associations and documenting them, rather than reporting the best estimated strength of known associations? Maybe not.
5. I'm curious how you define nodes/topics versus subnodes/subtopics. I suspect defining the nodes/topics and their boundaries would become tricky?
RE your first question, one of their answers from Product Hunt may be useful:
> Q: Is this supposed to be open source version of Google's knowledge graph?
> A: At their essence, KGs are based on semantic relationships, e.g. coffee is a beverage, apple and banana are fruit, diabetes is a disease, etc. System is based on statistical relationships (collected and synthesized from data, models, and papers): A predicts B, C is caused by D, E and F are highly correlated, G and H change together, etc. [...] We hope these will be complementary ways of understanding the world -- one based on language, the other based on statistics.
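As a rough illustration of the contrast being drawn, statistical relationships like "A predicts B" could be stored as evidence-bearing edges between topic pairs rather than as semantic triples. This is a hypothetical sketch, not the product's actual data model:

```python
from collections import defaultdict

class StatGraph:
    """Hypothetical edge store: each edge between two topics carries a
    list of evidence entries (relation type, source, effect size)."""

    def __init__(self):
        self.edges = defaultdict(list)

    def add_evidence(self, a, b, relation, source, effect):
        # Index under a canonical (sorted) key so "coffee-alertness"
        # and "alertness-coffee" resolve to the same edge; keep the
        # original direction inside the entry for directional claims.
        key = tuple(sorted((a, b)))
        self.edges[key].append({"relation": relation, "direction": (a, b),
                                "source": source, "effect": effect})

    def evidence(self, a, b):
        return self.edges[tuple(sorted((a, b)))]

g = StatGraph()
g.add_evidence("coffee", "alertness", "predicts", "study-1", 0.4)
g.add_evidence("alertness", "coffee", "correlates", "study-2", 0.3)
```

The point of the design is that an edge is not a single fact ("coffee is a beverage") but a container of possibly conflicting statistical evidence to be synthesized later.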
I think internal contradictions are more of an issue for a Knowledge Graph, which tries to infer things and has to draw conclusions from possibly contradictory evidence. System just tries to present the available connecting evidence without drawing object-level conclusions itself.
Great question. We present all the evidence behind a relationship (on "evidence cards" that show the source, strength, sign, direction, population, controls, and reproducibility). The evidence cards on a relationship page may conflict, and users can see and evaluate those conflicts directly. We also generate a natural-language synthesis of the evidence. We are working on enhancing our meta-analysis of the evidence to flag these kinds of conflicts automatically. And our community will surely play an important role (as is the case on Wikipedia).
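One simple form that automated conflict-flagging could take is checking for sign disagreement across a relationship's evidence cards. A hypothetical sketch, with the function name and threshold chosen arbitrarily:

```python
def flag_conflicts(cards, threshold=0.25):
    """Flag a relationship when a substantial minority of its evidence
    cards disagree in sign with the rest. Each card is a dict with an
    'effect' field; zero effects are treated as uninformative."""
    signs = [1 if c["effect"] > 0 else -1 for c in cards if c["effect"] != 0]
    if not signs:
        return False
    minority = min(signs.count(1), signs.count(-1))
    return minority / len(signs) >= threshold

# Three positive cards and one negative: a 25% minority trips the flag.
cards = [{"effect": 0.3}, {"effect": 0.2}, {"effect": -0.25}, {"effect": 0.15}]
```

A production version would obviously weight by study quality and effect magnitude rather than counting raw signs, but this captures the idea of surfacing conflict rather than silently averaging it away.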