What does HackerNews think of mycroft-precise?

A lightweight, simple-to-use, RNN wake word listener

Language: Python

#19 in Raspberry Pi
OK, for those who want proof, it's pretty simple to do.

1) we know that sending voice data to "HQ" costs power

2) we know that live transcription costs a huge wedge of power

3) we know that wakeword matching is quite power efficient. (see https://rhasspy.readthedocs.io/en/latest/wake-word/, https://github.com/MycroftAI/mycroft-precise)

So, in a quiet room we know that, to save power and data, devices won't be streaming or listening. We can use that as a baseline for power and network usage.

Then we can start talking and measure that,

then start saying wake words and see what happens.
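For the network side of that comparison, here's a rough sketch of what the measurement could look like in Python. It assumes the speaker's traffic crosses an interface on the measuring box (say, a Pi acting as its Wi-Fi access point); the interface name and phase length are placeholders, not anything specific to a real device.

```python
# Rough sketch: compare bytes crossing the interface during three phases
# (quiet room, normal talking, repeated wake words). Assumes the smart
# speaker's traffic is routed through this machine -- e.g. a Pi acting as
# its access point -- on the interface named below.
import time
import psutil

IFACE = "wlan0"          # interface the speaker's traffic crosses (assumption)
PHASE_SECONDS = 120      # how long to sample each phase

def sample_phase(label):
    start = psutil.net_io_counters(pernic=True)[IFACE]
    time.sleep(PHASE_SECONDS)
    end = psutil.net_io_counters(pernic=True)[IFACE]
    sent = end.bytes_sent - start.bytes_sent
    recv = end.bytes_recv - start.bytes_recv
    print(f"{label}: {sent} B out, {recv} B in over {PHASE_SECONDS}s")
    return sent, recv

baseline = sample_phase("quiet room")     # step 1: nobody speaking
talking  = sample_phase("normal speech")  # step 2: ordinary conversation
wakeword = sample_phase("wake words")     # step 3: deliberately trigger it
```

If the device only phones home on wake words, the first two phases should look near identical and the third should spike; continuous streaming would show up in all three.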

That aside, we know that it's really expensive to listen & transcribe 24/7. It's far easier and cheaper to monitor your web activity and surmise your intentions from that. There are quite a few searches/website visits that strongly correlate to life events. Humans are not as unique and random as you may think, especially to a machine with perfect memory.

This sounds very close to how I started out with a similar project, named Rhasspy: https://github.com/synesthesiam/rhasspy-hassio-addon

I ended up using a set of JSGF grammars (one per intent) to generate a statistical language model for use with pocketsphinx. Rhasspy also features a web-based interface for creating custom words -- I have a mapping from Sphinx phonemes to eSpeak phonemes so you can iterate over a pronunciation until it sounds right.
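For anyone curious what the decoding side looks like, here's a minimal sketch using the classic SWIG-style pocketsphinx Python bindings. The model paths (the acoustic model directory, intents.lm, custom.dict) are placeholders standing in for whatever gets generated from the grammars, and the import path varies a bit between packagings.

```python
# Minimal sketch: decode an utterance with a custom LM and dictionary,
# as generated from the per-intent JSGF grammars. Paths are placeholders.
from pocketsphinx.pocketsphinx import Decoder  # import path depends on packaging

config = Decoder.default_config()
config.set_string('-hmm', 'model/en-us')      # acoustic model directory
config.set_string('-lm', 'intents.lm')        # statistical LM built from the grammars
config.set_string('-dict', 'custom.dict')     # pronunciation dictionary (custom words)
decoder = Decoder(config)

decoder.start_utt()
with open('utterance.raw', 'rb') as f:        # 16 kHz, 16-bit mono raw PCM
    while True:
        buf = f.read(1024)
        if not buf:
            break
        decoder.process_raw(buf, False, False)
decoder.end_utt()

hyp = decoder.hyp()
print(hyp.hypstr if hyp else "no hypothesis")
```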

As you mentioned, the wake/hotword stuff with Sphinx isn't terribly robust. I've been Docker-izing Mycroft Precise (https://github.com/MycroftAI/mycroft-precise) to address this.
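For reference, driving Precise from Python is roughly this, via the precise-runner package; the engine binary and .pb model paths are placeholders for whatever the container ends up shipping.

```python
# Sketch of running Mycroft Precise via precise_runner (pip install precise-runner).
# The engine binary and model paths below are placeholders.
from time import sleep
from precise_runner import PreciseEngine, PreciseRunner

def on_activation():
    print("wake word detected")

engine = PreciseEngine('precise-engine/precise-engine', 'hey-mycroft.pb')
runner = PreciseRunner(engine, on_activation=on_activation)
runner.start()

# Keep the process alive while the runner listens on the default microphone.
while True:
    sleep(10)
```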