You could have a feedback loop, feeding compilation errors back to the model until it converges to no errors.
Potentially you could do the same for fact-related questions, by fact checking the result against the (properly indexed) training set and feeding the results back.
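Something like the compile-error loop is easy to prototype. A rough sketch, not anyone's actual implementation: `ask_model` is a hypothetical placeholder for whatever model API you're calling, and Python's built-in compile() stands in for a real compiler (syntax check only).

```python
def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")  # hypothetical stub

def generate_until_it_compiles(task: str, max_rounds: int = 5) -> str:
    prompt = task
    for _ in range(max_rounds):
        code = ask_model(prompt)
        try:
            compile(code, "<generated>", "exec")  # raises SyntaxError on bad code
            return code                           # converged: no compile errors
        except SyntaxError as err:
            # Feed the error back and ask the model for a corrected version.
            prompt = (
                f"{task}\n\nYour previous attempt failed to compile:\n{err}\n\n"
                "Please fix the error and return the full program."
            )
    raise RuntimeError(f"did not converge after {max_rounds} rounds")
```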
Yeah, I've been exploring this generate -> error -> fix feedback loop, along with some test cases, in my app builder, and it's quite good but not perfect. It still gets stuck in loops on some errors.
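One way to at least detect that cycling, sketched under the same assumptions as above (the `ask_model` stub is again a placeholder): record a signature of each error you get back, and if the exact same error shows up twice, stop retrying blindly and escalate instead of burning rounds.

```python
def ask_model(prompt: str) -> str:
    raise NotImplementedError("placeholder for the model call")  # hypothetical stub

def fix_until_clean_or_stuck(task: str, max_rounds: int = 5) -> str:
    seen: set[str] = set()
    prompt = task
    for _ in range(max_rounds):
        code = ask_model(prompt)
        try:
            compile(code, "<generated>", "exec")  # syntax check only
            return code
        except SyntaxError as err:
            signature = f"{err.msg}@{err.lineno}"
            if signature in seen:
                # The model is cycling on the same error; bail out so a
                # different strategy (fresh prompt, human review) can take over.
                raise RuntimeError(f"stuck on repeated error: {signature}")
            seen.add(signature)
            prompt = f"{task}\n\nPrevious attempt failed with:\n{err}\nPlease fix it."
    raise RuntimeError(f"no clean build after {max_rounds} rounds")
```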
I wonder if that fact-checking loop would actually work.
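For what it's worth, the shape of the loop would be the same. A purely speculative sketch, where `ask_model` and `search_corpus` are hypothetical placeholders for the model call and for whatever index over the reference corpus exists:

```python
def ask_model(prompt: str) -> str:
    raise NotImplementedError("placeholder for the model call")  # hypothetical stub

def search_corpus(claim: str) -> bool:
    # Placeholder: should return True if the indexed corpus contains
    # passages supporting the claim.
    raise NotImplementedError("placeholder for the corpus lookup")

def answer_with_fact_check(question: str, max_rounds: int = 3) -> str:
    answer = ask_model(question)
    for _ in range(max_rounds):
        # Naive sentence splitting; a real system would need proper claim extraction.
        claims = [s.strip() for s in answer.split(".") if s.strip()]
        unsupported = [c for c in claims if not search_corpus(c)]
        if not unsupported:
            return answer  # every claim found support in the corpus
        answer = ask_model(
            f"{question}\n\nThese statements could not be verified against the "
            "reference corpus:\n- " + "\n- ".join(unsupported) +
            "\n\nRevise the answer, correcting or removing them."
        )
    return answer  # best effort after max_rounds
```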