I agree. I think what Apple is releasing in Siri could be bigger than most of us realize. I have several Android phones and don't use the voice recognition much because roughly one out of every four or five tries returns something I didn't mean. That's frustrating. The average user quits after a 20% failure rate. Siri on the iPhone 4S needs a failure rate of 5% or less, meaning no more than one of every 20 requests returns an error. But then again, even when it returns an error, at least the user can clarify and continue the conversation. I think Apple just leapfrogged Google in natural language processing and in taking it to the masses.
as someone who works a lot with speech recognition: speech recognition demos differently from how it works in the field. the demoer gets to remove all of the semantic noise from the system (speakers who aren't Phil Schiller or his wife, phrases like "Greek restaurants" instead of ones easily confusable with something else), leaving only signal.
if it does what it says on the box -- and remember, Apple already released Voice Control, which has a less-than-stellar reputation -- then it could be revolutionary. voice recognition often doesn't.
I agree it's a big "IF". IF Siri works as advertised, then it really could be revolutionary. I remember being pumped up by Google's demo of voice recognition, only to try it out and find it novel and cool but not dependable or accurate enough to use in all my real-life situations. And it's difficult to correct errors or clarify requests in a conversational manner the way Siri promises to.
Another reason for hope is that Siri had 19 people when Apple acquired it, and most of them are still at Apple. I would imagine Apple has scaled that team significantly. Who knows? Maybe 100 engineers are working on it, since it's a cornerstone of the next generation of mobile devices (and probably coming to the desktop in the near future too). But Google doesn't seem to place the same priority on natural language processing as Apple does; after all, more than 50% of Apple's revenue comes from the iPhone, so Apple has more riding on it. How many engineers at Google are working on voice recognition and natural language processing? Maybe somebody here will know. Maybe 10 engineers at most?
Apple will also be forced to scale Siri across multiple languages very quickly, especially if it works well. Currently it supports English, French, and German, but tons of people will want it in other languages, and that motivates Apple to innovate even more.
I guess we'll see very soon how good Siri on the iPhone 4S really is.
it's not an apples-to-apples (heh.) comparison, because Google builds its own speech recognition engine while Siri/Apple licenses an engine from a company called Nuance (at least, last I checked), so their language scaling is limited by what Nuance can give them.
unless Siri has undergone a lot of changes since the last time I looked at it, there is a sharp line between the speech recognition and the user-intent/question-answering parts; it grew out of CALO (http://en.wikipedia.org/wiki/CALO), a program which didn't even do speech recognition.
speech recognition maps a space of waveforms onto a space of utterances in a language. determining user intent maps that utterance onto the space of actions.
Siri has done great things in the field of the latter; they license their technology for doing the former from another company.
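roughly, the split looks like this (a toy sketch in Python -- the names and the keyword rule are my own illustration, not Siri's or Nuance's actual interfaces):

    # stage 1 maps waveforms to utterances, stage 2 maps utterances to actions.
    from dataclasses import dataclass

    @dataclass
    class Action:
        """Something the assistant can do, e.g. run a local search."""
        name: str
        query: str

    def recognize_speech(waveform: bytes) -> str:
        """Stage 1: waveform -> utterance. In Siri's case this piece is the
        licensed engine (Nuance, last I checked)."""
        raise NotImplementedError("speech recognition engine goes here")

    def determine_intent(utterance: str) -> Action:
        """Stage 2: utterance -> action. This is the part Siri/CALO built.
        A toy keyword rule stands in for the real intent model."""
        text = utterance.lower()
        if "restaurant" in text:
            return Action(name="local_search", query=text)
        return Action(name="web_search", query=text)

    def handle_request(waveform: bytes) -> Action:
        """Full pipeline: stage 2 only ever sees the text stage 1 produced."""
        return determine_intent(recognize_speech(waveform))

    # stage 2 can be exercised on its own with text input:
    print(determine_intent("find Greek restaurants near me"))

the point being that stage 2 never sees the audio, so a misrecognition upstream just becomes a confidently wrong action downstream.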