They use the Google Cloud Speech-to-Text API as one of the key technologies. https://mycroft-ai.gitbook.io/docs/mycroft-technologies/over...
¯\_(ツ)_/¯
The Pi isn't really fast enough to process the speech in real time. DeepSpeech by Mozilla was cited as an offline alternative to the Google speech API, but it's difficult to set up with Mycroft and doesn't work very well, due to lack of training data and lag (https://mycroft.ai/voice-mycroft-ai/). Because of this, Mozilla set up Common Voice (https://commonvoice.mozilla.org/en) to help build open datasets of voice recordings.
DeepSpeech is very old software. Vosk works just fine: https://github.com/alphacep/vosk-api. People even run tiny Whisper models on a Pi, though they have to wait ages.