I have been lucky not to be impacted by hand or wrist pain. I did experience chronic back pain for years due to a pinched sciatic nerve; it can be debilitating and depressing. Chronic pain is a major risk factor for suicide. If you have it, you need to get it treated.

My interest in this is from a FutureOfProgramming perspective: how can we interact with computers in different, higher-order ways? What does the Mother of All Demos [0] look like in 2020?

Travis Rudd's "Coding by Voice" (2013) [1] uses Dragon in a Windows VM to handle the speech-to-text. This was the original "termie slash slap" demo, a kind of Pootie Tang crossed with structural editing.

Coding by Voice with Open Source Speech Recognition, from HOPE XI (2016) [2]

Another writeup [3] outlines Dragon and a project I hadn't heard of called Silvius [4].

It looks like most of these systems rely on Dragon, and on Dragon for Windows at that, since the extension APIs aren't available on Mac/Linux. Are there any efforts to use the Mac's built-in STT or the cloud APIs [5,6]?

[0] Mother of All Demos https://www.youtube.com/watch?v=yJDv-zdhzMY

[1a] https://www.youtube.com/watch?v=8SkdfdXWYaI

[1b] https://youtu.be/8SkdfdXWYaI?t=510

[2] https://www.youtube.com/watch?v=YRyYIIFKsdU

[3] https://blog.logrocket.com/programming-by-voice-in-2019-3e18...

[4] http://www.voxhub.io/silvius

[5] https://cloud.google.com/speech-to-text/

[6] https://aws.amazon.com/transcribe/

Talon actually uses the Mac's built-in STT if you don't have Dragon.

James from http://handsfreecoding.org/ was working on a fork of Dragonfly [0] to add support for Google's speech recognition, but I'm not sure if he still is. There are several barriers to that working well, though: the additional latency really hurts, API usage costs money, and (as far as I know) there's no way to specify a command grammar. Dragonfly/Vocola/Talon all let you use EBNF-like notation to define commands, which are preferentially recognized over free-form dictation.
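That priority of commands over dictation is the key trick. This isn't Dragonfly's actual API, just a toy sketch of the idea (the example phrases and the `<text>` placeholder are made up for illustration): try each command pattern first, and only fall back to treating the utterance as free-form dictation when nothing matches.

```python
import re

# Toy command grammar: spoken pattern -> action producing output text.
# "<text>" is a hypothetical placeholder that captures the rest of the
# utterance, loosely mimicking a dictation element in a command rule.
COMMANDS = {
    "slap": lambda m: "\n",               # e.g. press Enter
    "say <text>": lambda m: m.group("text"),  # type literal text
}

def compile_pattern(pattern):
    # Turn "say <text>" into an anchored regex with a named group.
    regex = re.escape(pattern).replace("<text>", r"(?P<text>.+)")
    return re.compile(rf"^{regex}$")

RULES = [(compile_pattern(p), action) for p, action in COMMANDS.items()]

def recognize(utterance):
    # Commands are checked first, so they win over dictation.
    for pattern, action in RULES:
        match = pattern.match(utterance)
        if match:
            return action(match)
    # No command matched: fall back to free-form dictation.
    return utterance

print(repr(recognize("slap")))              # command -> '\n'
print(recognize("say hello world"))         # command with capture
print(recognize("just dictating a sentence"))  # plain dictation
```

A real engine does this probabilistically inside the recognizer rather than on the transcript, which is exactly what's hard to replicate on top of a cloud STT API that only returns finished text.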

[0]: https://github.com/dictation-toolbox/dragonfly