Sure, you wouldn't really interact with a computer the way we do now once you have AIs that can understand natural language. It would be more like having a personal secretary.
I don't think theorem provers are actually the way to get there though. AI systems are probabilistic in nature, and neural nets are self-organizing. One of the biggest problems is that it's really hard to tell how such a system arrives at a decision. The human brain itself is not based on formalism, and we find formal thinking to be very challenging. It's something that needs to be trained, and doesn't come to most people naturally. Our whole cognition is rooted in heuristics.
> One of the biggest problems is that it's really hard to tell how such a system arrives at a decision. The human brain itself is not based on formalism, and we find formal thinking to be very challenging
So far, "neural networks" in AI is a fancy name for what is essentially a giant equation system with many parameters. It's not even close to an actual, self-organizing biological neural network; it's closer to a weather-prediction model.
The human brain is not based on formalisms, so let's build AI that compensates for the brain's weaknesses. Maybe we shouldn't try to replicate human cognitive capacities, but rather create a new "form of life" that complements our biological skills.
So far, theorem provers, along with expert systems, are the only lines of work I'm aware of that systematically explain reasoning and decisions.
Neural networks are graphs that evolve at runtime by balancing their weights based on reinforcement, and as far as I know there hasn't been much success in using formal methods for AI.
I do think theorem provers can be useful in certain contexts, and I can see AI using these tools to solve problems.
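For a concrete sense of what "systematically explaining reasoning" means here, this is a minimal Lean 4 sketch: every step of the proof is an explicit, machine-checkable object, which is exactly the auditability a trained neural net lacks. (`Nat.add_comm` is a lemma from Lean's standard library; the theorem name is made up for illustration.)

```lean
-- A trivial machine-checked fact: addition on naturals commutes.
-- The proof term is explicit, so a reader (or a tool) can audit
-- exactly why the conclusion holds -- unlike a neural net's output.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

One can imagine an AI system calling out to such a checker to certify the steps of an argument, rather than trying to make the net itself formal.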
> Neural networks are graphs that evolve at runtime by balancing their weights based on reinforcement, and as far as I know there hasn't been much success in using formal methods for AI.
This is not correct for the current state of the tech. Neural networks are parametrized equation systems. You train the parameters on a dataset during a training phase, then freeze the result and distribute the model to devices. Once distributed, the "neural network" can't be modified and stops "learning" new cases.
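The train-then-freeze lifecycle is easy to see in a toy sketch. Here a "neural network" is deliberately reduced to its essence, a parametrized equation (one weight, one bias) fit by gradient descent; the data, learning rate, and function names are all made up for illustration.

```python
def predict(params, x):
    """Inference: just evaluate the parametrized equation w*x + b."""
    w, b = params
    return w * x + b

def train(data, epochs=2000, lr=0.01):
    """Training phase: the only time the parameters change."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            err = predict((w, b), x) - y
            w -= lr * err * x   # gradient of squared error w.r.t. w
            b -= lr * err       # gradient of squared error w.r.t. b
    return (w, b)

# Training: parameters are fitted to the dataset (here, y = 2x + 1).
frozen = train([(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)])

# Deployment: the model is frozen. Calling predict never touches the
# parameters, so the distributed model can't "learn" new cases.
print(round(predict(frozen, 4.0)))  # prints 9
```

The point of the sketch: once `train` returns, nothing in the inference path updates `frozen`, which is the "half of the story" the runtime-evolution description leaves out.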
Edit: I mean, you're not completely wrong; you described the training phase of the neural network. That's only half of the story though.