
I agree with you, there isn't much data about the effectiveness of more rigorous software development tools. It's clearly a research topic.

My intuition is that, with advances in robotics and AI, we may see the need for more robust logical systems. At some point, mathematicians and algorithm designers may use those tools to prove new concepts, which they will be able to share and validate more quickly, and which will then percolate into software engineering faster.

Beyond software engineering, experiments with theorem provers may lead to new ways of exchanging information, such as news, mathematics, and legal documents. There is inspiration to take from the formalizations created in theorem provers for applying automated reasoning in more domains than just programming (imho)

Edit: Intuitively, it has to do with building trust in the algorithms and information we share at light speed. With more validated building blocks, we may explore more complex systems. Accelerating trust between different entities can only add value, I guess. That's all very abstract though

Edit 2: To put it in even fewer words, theorem provers may be more about collaboration than pure technical engineering



To be honest, I think that once we have advances in AI, there will come a point where you won't really be doing the kind of programming we do today by hand. You'll have an AI assistant to whom you'll give queries in something close to natural language, and it will figure out how to implement them for you. I can see that coming within a few decades for many kinds of applications, such as your typical CRUD apps.


In such a hypothetical world, the "typical CRUD application" may not even exist anymore.

I was talking about advances in consumer drones, autonomous cars, and personal robots, such as SpotMini from Boston Dynamics. More autonomous embedded systems operating around us means different safety needs in software development.

AIs will have to explain the reasoning behind their decisions (prove they acted correctly) in natural language. The humans who do that in our world are scientists, politicians, lawyers, and mathematicians. Those people communicate in a specific kind of natural language, with domain-specific vocabulary. Theorem provers in software engineering are a step in that direction imho


Sure, you wouldn't really interact with a computer the way we do now once you have AIs that can understand natural language. It would be more like having a personal secretary.

I don't think theorem provers are actually the way to get there though. AI systems are probabilistic in nature, and neural nets are self-organizing. One of the biggest problems is that it's really hard to tell how such a system arrives at a decision. The human brain itself is not based on formalism, and we find formal thinking to be very challenging. It's something that needs to be trained, and doesn't come to most people naturally. Our whole cognition is rooted in heuristics.


> One of the biggest problems is that it's really hard to tell how such a system arrives at a decision. The human brain itself is not based on formalism, and we find formal thinking to be very challenging

So far, "neural networks" in AI is a fancy name for what is nothing more than a giant equation system with many parameters. It's not even close to a biological, actual self-organizing, neural networks. It's closer to a weather model prediction.

The human brain is not based on formalisms, so let's create an AI that compensates for the human brain's weaknesses. Maybe we shouldn't try to replicate the human brain's capacities, but rather create a new "form of life" complementing our biological skills.

So far, theorem provers, along with expert systems, are the only work I'm aware of on systematically explaining reasoning and decisions.
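
For instance, here is what a machine-checked explanation looks like in a theorem prover (a toy Lean 4 proof, just an illustrative example):

    -- A machine-checked proof that addition on naturals commutes.
    -- Each rewriting step must be justified, or the kernel rejects it.
    theorem my_add_comm (a b : Nat) : a + b = b + a := by
      induction b with
      | zero => rw [Nat.add_zero, Nat.zero_add]
      | succ n ih => rw [Nat.add_succ, Nat.succ_add, ih]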


Neural networks are graphs that evolve at runtime by balancing their weights based on reinforcement, and as far as I know there hasn't been much success in using formal methods for AI.

I do think theorem provers can be useful in certain contexts, and I can see AI using these tools to solve problems.


> Neural networks are graphs that evolve at runtime by balancing their weights based on reinforcement, and as far as I know there hasn't been much success in using formal methods for AI.

This is not correct in the current state of the tech. Neural networks are parametrized equation systems. You train the parameters on a dataset in a training phase, then freeze the result, then distribute the model to devices. Once distributed, the "neural network" can't be modified and stops "learning" new cases.

Edit: I mean, you are not completely wrong; you described the training phase of the neural network. That's only half of the story though
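
A toy sketch of the two phases (made-up data, plain gradient descent on a tiny two-parameter model):

    import numpy as np

    # --- Training phase: fit w, b of the tiny model y = w*x + b.
    rng = np.random.default_rng(0)
    x = rng.normal(size=100)
    y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=100)  # toy dataset

    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(200):
        err = (w * x + b) - y
        w -= lr * 2 * np.mean(err * x)  # gradient of mean squared error
        b -= lr * 2 * np.mean(err)

    # --- Deployment phase: w and b are frozen. The "network" is now a
    # fixed function; feeding it new inputs doesn't update anything.
    def model(x_new):
        return w * x_new + b

    print(model(2.0))  # ~7.0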



