How does Facebook use speech recognition in its own products?

Haven't you ever wondered if some mobile apps listen to you? And then you get accurate targeted ads...

People keep downvoting me because "we need you to send proof", but sorry, I don't want to share Snowden's fate. Still, I know for a fact that iOS/Android, most TVs, and the major TV cable boxes (USA) are indeed analyzing your speech and sending _keywords_ from your conversations back home. I can imagine that's how they avoid being sued into oblivion for a clear 4A violation, since courts have held that metatags are not a violation of your privacy.

But go ahead, try it yourself! Have a conversation about having children with your loved one next to your phones and your TV box, no online search required. Give it a few hours, then turn your Sling on or browse some Amazon/YouTube. All of a sudden you will see ads for products and companies you have never heard of, trying to sell you diapers or baby cribs. Where do you think that came from? Google, as of today, still can't read your mind.

OK, for those who want proof, it's pretty simple to do.

1) we know that sending voice data to "HQ" costs power

2) we know that live transcription costs a huge wedge of power

3) we know that wakeword matching is quite power-efficient (see https://rhasspy.readthedocs.io/en/latest/wake-word/, https://github.com/MycroftAI/mycroft-precise)

So, in a quiet room, we know that to save power and data, devices won't be streaming audio or listening. We can use that as a baseline for power and network usage.

Then we can start talking and measure that.

Then start saying wakewords and see what happens.
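The three-phase experiment above can be sketched in code. This is a hypothetical sketch, not a vetted tool: it samples the cumulative upload byte counter for a network interface on Linux (via `/proc/net/dev`), once per phase boundary, and flags any phase whose upload rate is well above the quiet-room baseline. The interface name, phase labels, and the `factor=3.0` threshold are all illustrative assumptions.

```python
# Sketch of the quiet-room / conversation / wakeword comparison.
# Reading /proc/net/dev is Linux-specific; thresholds are assumptions.

def read_iface_tx_bytes(iface="wlan0"):
    """Return the cumulative transmitted-bytes counter for a Linux
    network interface, parsed from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":")[1].split()
                return int(fields[8])  # tx_bytes is the 9th counter
    raise ValueError(f"interface {iface!r} not found")

def phase_throughput(samples):
    """samples: list of (seconds, cumulative_tx_bytes) pairs taken
    during one phase. Returns average upload rate in bytes/sec."""
    (t0, b0), (t1, b1) = samples[0], samples[-1]
    return (b1 - b0) / (t1 - t0)

def compare_phases(phases, baseline="quiet", factor=3.0):
    """phases: dict mapping phase name -> samples. Flags each
    non-baseline phase whose upload rate exceeds `factor` times the
    quiet-room baseline rate."""
    base = phase_throughput(phases[baseline])
    return {name: phase_throughput(s) > factor * max(base, 1.0)
            for name, s in phases.items() if name != baseline}
```

Run each phase for several minutes to smooth over background chatter (OS updates, push notifications); a sustained upload jump during the wakeword phase, relative to the quiet baseline, would be the signal this experiment is looking for.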

That aside, we know that it's really expensive to listen and transcribe 24/7. It's far easier and cheaper to monitor your web activity and surmise your intentions from that. There are quite a few searches and website visits that strongly correlate with life events. Humans are not as unique and random as you may think, especially to a machine with perfect memory.