
They're also just really not great. I tested out Mycroft a couple years ago and found that the success rate for getting it to understand its wake word and listen for commands was under 10%. Maybe if you buy their prepackaged product, it works better, but that's not something I want to do. I just want to run it on a Pi 4 (which they claim works) with a mic array.


Yeah, I think there are two sides to this coin (and just for clarity, all of this relates to Picroft, not Mimic 3, the TTS engine that just launched).

The audio hardware makes a huge difference to audio input, which is why we've developed the custom SJ201 board that's in the Mark II. But even on DIY units we've been making big improvements to wake word detection by better balancing our training data sets. Once the Mark II is shipping there are additional wake word improvements on the roadmap; eventually the system will optimize for the users of each device, so the wake word model on your device wouldn't be exactly the same as the model on mine. We've also ported the wake word model to TensorFlow Lite, which means it uses a small fraction of the system resources it used to :D

We're also about to make some bigger changes to mycroft-core that will help support a broader range of hardware in a more consistent way. So while you could try it again today (and I can guarantee it's better than the last time you used it), if you want a DIY system instead of a Mark II I'd suggest setting a reminder to check it again in a couple of months, once these bigger changes land.
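
For the curious, the TensorFlow Lite side of this boils down to the standard interpreter call pattern. The sketch below is not Mycroft's actual implementation; the model file name, feature shape, and 0.5 threshold are made-up placeholders just to show how a wake word classifier gets scored on-device.

  # Rough sketch: scoring one window of audio features against a wake-word
  # classifier with the TensorFlow Lite interpreter. Model path, input shape,
  # and threshold are illustrative assumptions, not Mycroft's real values.
  import numpy as np
  import tflite_runtime.interpreter as tflite  # or: from tensorflow import lite as tflite

  interpreter = tflite.Interpreter(model_path="hey_mycroft.tflite")  # hypothetical file
  interpreter.allocate_tensors()
  input_details = interpreter.get_input_details()
  output_details = interpreter.get_output_details()

  def wake_word_score(features: np.ndarray) -> float:
      """Run one feature window through the model and return the wake-word probability."""
      interpreter.set_tensor(
          input_details[0]["index"],
          features.astype(np.float32).reshape(input_details[0]["shape"]),
      )
      interpreter.invoke()
      return float(interpreter.get_tensor(output_details[0]["index"]).squeeze())

  # Example call with a dummy feature window, just to show the pattern.
  dummy_features = np.zeros(input_details[0]["shape"], dtype=np.float32)
  if wake_word_score(dummy_features) > 0.5:
      print("Wake word detected")

The win from the TFLite interpreter is that the model runs quantized with a tiny runtime footprint, which is what makes continuous listening feasible on hardware like a Pi.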


That sounds fantastic! Thank you for replying; I'll definitely check back and give it another go.


10% doesn't sound much worse than my Alexas' 30%...


And if it's anything like Siri, it can barely do anything useful, so it doesn't matter if it understands you.


"I searched the web for 'shutup stop mute stop talking' and here is what I found on Wikipedia..."


I find Siri really useful for a very limited set of tasks where recognition is close to 100% and being hands-free has a benefit. Typically this is starting exercise workouts and countdown timers. For more general tasks the recognition is still good (for me; it seems to vary by voice), but even at 90% there will be a mistake in most requests.


Counterpoint: I use my Alexa daily and don't run into many issues with voice recognition or its lack of understanding.

Daily uses:

1) In the AM I set up all my needed reminders for 5 minutes before every meeting I have.

2) It's connected to my Hue bridge, so I can turn the lights on/off by asking while lying in bed, which is wonderful (see the sketch after this list).

3) I play music all day.

4) It reminds me 10 minutes before every sunset to go outside for a walk.
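
For anyone who wants the same light control without going through a voice assistant, here's a rough sketch using the third-party phue Python library; the bridge IP and light name are placeholders, not anything from the setup described above.

  # Sketch of direct Hue bridge control with the third-party `phue` library.
  # The bridge IP and light name are placeholders; press the bridge's link
  # button before the first connect() so it authorizes this client.
  from phue import Bridge

  bridge = Bridge("192.168.1.2")  # placeholder IP for the Hue bridge
  bridge.connect()                # registers this app on first run

  def set_bedroom_light(on: bool) -> None:
      """Turn the named light on or off."""
      bridge.set_light("Bedroom", "on", on)

  set_bedroom_light(False)  # e.g. lights out from bed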


My Google Home is pretty near 100%; I can count on one hand the number of times it hasn't "heard" me over the past year. That's my benchmark.


They’ve done a lot of work in the last year on the software side. Might be worth revisiting. They’re tentatively on track to (finally!) ship in September of this year.



