Computers already write code for us; they just need to be told what we want them to write. The question is whether we can design a programming environment where the computer can understand 'natural' human specifications and compile those directly, instead of needing a trained human to translate the natural specifications into source code, which the computer can compile into a program.
A large chunk of the job of developing software is getting the stakeholders to understand the problem they want the software to solve. The rest of it is just typing, which is the trivial part.
Yeah, right. And all the technical books on design patterns, functional programming, algorithms, etc. are out there just to teach programmers how to type faster ...
Sure we could. This is just like Siri: you tell it what you want, it asks for clarifications, and you have a conversation with the computer until... bam, you get what you want.
I would think we could do much of this already if we tried, but there are a lot of things on my list to do before I get to that.
Your optimism on this seems unfounded, in my opinion.
When there are no domain restrictions, computers have not proven very capable of comprehension. Look even at the comparatively simple domain of handwriting recognition.
More importantly, you entirely miss the problem that many times the human doesn't know exactly what he wants until he sees it.
As a software engineer, I have no fear my job will be replaced by computers talking to product managers.
That's why the program should be expressed as a conversation! We don't know what we want so we should start vague, get feedback, provide clarifications, and so on. The conversation with the computer must be a two-way thing!
That's brilliant. Not in terms of some AI working out what a silver-haired old grandmother meant, but in terms of talking to clients - they could imagine they are talking to a computer, and I am just providing that feedback.
The thing is, we probably don't need hard AI yet to do some of this. Yes, it must be a dialogue system, but we have those today. We already saw movement in this direction with Hugo Liu's ConceptNet work, but for some reason no one has followed up yet. We are getting to the point with speech recognition/understanding technology that someone is bound to try again soon.
In general, we don't need hard AI for much of anything, and usually don't want it. Any specific problem you want solved will require some specific solution method or algorithm, rather than the whole "make a computer hold an adequate water-cooler conversation" thing.
The problem is that most people are simply incapable of giving correct specifications.
In a lecture on this topic (how to write specifications that can be turned into formally correct code), we were given the following simple example of an incorrect specification:
"Everybody loves my baby. But baby loves nobody, but me."
If you formalize this, you can conclude that the person who says it is the baby, which is clearly not what was intended.
And this is a very, very simple specification. Specifications of real software are orders of magnitude more complicated.
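A quick way to see why that conclusion is forced is to formalize the two sentences in first-order terms and search exhaustively for a countermodel. The sketch below is my own illustration, not from the lecture; the predicate names and the tiny 3-element domain are arbitrary choices. It finds no model in which both premises hold yet the speaker ("me") differs from the baby:

```python
# Brute-force sketch of the formalization over a tiny finite domain.
# Premise 1: "Everybody loves my baby"     -> forall x: loves(x, baby)
# Premise 2: "Baby loves nobody but me"    -> forall x: loves(baby, x) implies x == me
# Claim: every model satisfying both premises has baby == me.
from itertools import product

domain = range(3)
pairs = [(a, b) for a in domain for b in domain]

def satisfies(loves, baby, me):
    everybody_loves_baby = all(loves[(x, baby)] for x in domain)
    baby_loves_only_me = all(not loves[(baby, x)] or x == me for x in domain)
    return everybody_loves_baby and baby_loves_only_me

# Enumerate every possible "loves" relation and every choice of baby/me,
# keeping any model that satisfies both premises while baby != me.
counterexamples = [
    (baby, me)
    for bits in product([False, True], repeat=len(pairs))
    for loves in [dict(zip(pairs, bits))]
    for baby in domain
    for me in domain
    if satisfies(loves, baby, me) and baby != me
]

print(counterexamples)  # [] -- no model separates "me" from "baby"
```

The intuition matches the exhaustive search: premise 1 forces loves(baby, baby), and plugging x = baby into premise 2 then forces baby == me.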
That is why the computer must hold a two-way conversation with the programmer. How often do we just type out a program straight from our head anyway? Most of us write the program, debug it, then change it because the result wasn't right, or because we didn't really understand what we wanted, or because someone decided the requirements had changed, or whatever...