
It's not an apples-to-apples (heh) comparison: Google builds its own speech recognition engine, while Siri/Apple licenses its engine from a company called Nuance (at least, last I checked), so Apple's language scaling is limited by what Nuance can give them.


You're assuming that they're not able to augment/enhance the Nuance engine with their own improvements.


Unless Siri has undergone a lot of changes since the last time I looked at it, there is a sharp line between speech recognition and determining user intent/question answering. Siri grew out of CALO (http://en.wikipedia.org/wiki/CALO), a program which didn't even do speech recognition.

Speech recognition maps a space of waveforms onto a space of utterances in a language. Determining user intent maps that utterance onto the space of actions.

Siri has done great things in the latter field; they license their technology for the former from another company.
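The two-stage split described above can be sketched roughly like this. All function names and return shapes here are made up for illustration; neither Siri's nor Nuance's actual interfaces look like this:

```python
# Illustrative two-stage assistant pipeline. Every name here is an
# assumption for the sketch, not a real API.

def recognize_speech(waveform: bytes) -> str:
    """Stage 1 (the licensed part): waveform -> utterance.
    A real engine runs acoustic and language models; this stub
    just stands in for that component."""
    return "set an alarm for 7 am"

def determine_intent(utterance: str) -> dict:
    """Stage 2 (Siri's own contribution): utterance -> action.
    Real intent determination parses the utterance; this stub
    hardcodes the slot values it would normally extract."""
    if "alarm" in utterance:
        return {"action": "set_alarm", "time": "7 am"}
    return {"action": "unknown"}

def assistant(waveform: bytes) -> dict:
    # The two stages compose cleanly because they share only
    # the utterance string as an interface.
    return determine_intent(recognize_speech(waveform))
```

The point of the sketch is that the interface between the stages is just the utterance string, which is why one company can supply stage 1 while another builds stage 2.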



