My colleague and I talk about this at lunch with an eye toward doing a research project if a promising idea emerges, but I think I (the PL guy) am much more optimistic than he (the ML/SR guy) is. I should mention he used to work on dialogue systems full time and has a better grasp of the area than I do. I've basically decided to take the tool approach first: let's just get a Siri-like domain done for our IDE. We wouldn't be writing code by voice, but at least we could access secondary dev functions through a secondary interface (voice, conversation). The main problem in getting started is that tools for encoding domains for dialogue systems are very primitive (note that even Apple's Siri isn't open to new domains).
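To make "encoding a domain" concrete, here's a deliberately naive sketch (Python, all names hypothetical) of what the IDE domain above might look like with today's primitive tooling: every intent and slot is hand-written as a pattern, and anything outside the patterns simply falls out of the domain.

```python
import re
from typing import Callable, Optional

# Toy "domain encoding" for IDE voice commands (hypothetical, not a real toolkit).
# Each intent pairs an utterance pattern with a handler over the captured slots.
INTENTS: list[tuple[str, re.Pattern, Callable[[dict], str]]] = [
    ("run_tests",
     re.compile(r"run (?:the )?tests(?: in (?P<scope>\w+))?"),
     lambda s: f"running tests in {s.get('scope') or 'project'}"),
    ("goto_definition",
     re.compile(r"(?:go to|show) (?:the )?definition of (?P<symbol>\w+)"),
     lambda s: f"jumping to definition of {s['symbol']}"),
    ("rename_symbol",
     re.compile(r"rename (?P<old>\w+) to (?P<new>\w+)"),
     lambda s: f"renaming {s['old']} -> {s['new']}"),
]

def interpret(utterance: str) -> Optional[str]:
    """Match a normalized utterance against the domain; None = out of domain."""
    text = utterance.lower().strip()
    for name, pattern, handler in INTENTS:
        m = pattern.fullmatch(text)
        if m:
            return handler(m.groupdict())
    return None

print(interpret("rename foo to bar"))          # renaming foo -> bar
print(interpret("go to definition of parse"))  # jumping to definition of parse
print(interpret("refactor this loop"))         # None: hand-written patterns can't cope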
The last person to take a serious shot at this problem was Hugo Liu at MIT. Alexander Repenning has been looking at conversational programming as a way to improve visual programming experiences; this doesn't include natural language conversation, but the mechanisms are similar.
I would think that this is why PL research is very relevant here. Until we have an 'intelligence' advanced enough to distill an intended piece of software from our chaotic talking about it, I see a PL augmented with AI techniques as the way to explain, in a formal structure, how a program should behave. The AI would allow for far more fuzziness than we tolerate in coding now: it would fix syntax and semantic errors based on what it can infer about your intent, after which, preferably at a higher level of the running software, you could indicate whether the inference was correct.
With some instant feedback, a bit like [1], this at least feels feasible.
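As a rough sketch of that infer-then-confirm loop (all names hypothetical; `repair_candidates` stands in for whatever inference engine actually proposes fixes, here faked with fuzzy string matching):

```python
import difflib
from typing import Optional

def repair_candidates(broken_line: str) -> list[str]:
    # Stand-in for intent inference: suggest known constructs closest to the input.
    known = ["for x in xs:", "if x in xs:", "while x in xs:"]
    return difflib.get_close_matches(broken_line, known, n=3, cutoff=0.4)

def repair_loop(broken_line: str) -> Optional[str]:
    """Propose repairs for a line that failed to parse; the user confirms intent."""
    for candidate in repair_candidates(broken_line):
        answer = input(f"Did you mean {candidate!r}? [y/n] ")
        if answer.strip().lower().startswith("y"):
            return candidate  # inferred intent confirmed at a higher level
    return None  # no confirmed repair: fall back to an ordinary error message

# e.g. repair_loop("fro x in xs")  ->  asks "Did you mean 'for x in xs:'?"
```

The point of the confirmation step is that the fuzziness stays recoverable: the system guesses, but the programmer, not the AI, has the last word on intent.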