Hacker News

The main problem is that Siri wasn't opened up for third-party apps to hook into, and thus remained rather limited and stupid.

Each app could have bundled a voice interface, and Apple's phone would have been the first futuristic and extensible "Star Trek computer". Oh well, opportunity missed. And by a company whose founder had the Mac speak onstage. I think Jobs would have gone in this direction and demoed the crap out of it!

As usual, there is a systems-level challenge to solve before you have a foundation for app developers. Namely: how do you give app developers a fair and EFFECTIVE way to share a single namespace / tree of commands?

If I were in charge of the Siri team, I would have made the following changes:

1) Fork OpenEars or another open-source speech-recognition package and use it as a first pass on the phone, eliminating the need for an internet connection.

2) Have apps register prefixes for their commands.

3) Have apps register "voice intents" and verbs that connect them, much as they already do for inter-app audio and app extensions.

4) When an app is open, provide a way to speak to it through an iOS library. This could be used to issue commands, dictate an email, etc.

5) Feature apps that make ingenious use of voice commands, and pitch PR stories about how the iPhone is becoming like Star Trek and is far ahead of Android.
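Steps 2-4 could be sketched as a tiny shared registry plus dispatcher. Everything below (the `VoiceRegistry` class, its method names, the prefix-then-verb command shape) is invented for illustration; no such iOS API exists:

```python
# Hypothetical sketch: apps claim a prefix in a shared namespace and
# register verbs ("voice intents") under it; the system routes utterances.
# All names here are made up for illustration, not a real platform API.

class VoiceRegistry:
    def __init__(self):
        self._apps = {}  # prefix -> {verb: handler}

    def register_app(self, prefix, intents):
        """Claim a prefix (e.g. "mail") and map verbs to handlers.

        Raising on collisions is one crude way to keep the shared
        namespace fair: first come, first served.
        """
        if prefix in self._apps:
            raise ValueError(f"prefix {prefix!r} already taken")
        self._apps[prefix] = dict(intents)

    def dispatch(self, utterance):
        """Route "mail send hello" to the mail app's 'send' handler."""
        prefix, verb, *rest = utterance.split(" ", 2)
        handler = self._apps[prefix][verb]
        return handler(rest[0] if rest else "")


registry = VoiceRegistry()
registry.register_app("mail", {"send": lambda body: f"sending email: {body}"})
registry.register_app("music", {"play": lambda song: f"playing {song}"})

print(registry.dispatch("mail send hello world"))  # sending email: hello world
print(registry.dispatch("music play Help"))        # playing Help
```

A real system would need fuzzy matching and disambiguation ("did you mean Mail or Messages?") rather than exact prefixes, but the fairness question is the same: who owns which part of the command tree.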



Or do the same, but on Android. Why can't devs add new voice commands to the system?




