I'm totally blind and use a screen reader. Was surprised to see this on HN as I normally see this sort of thing just in accessibility spaces on Twitter or similar! Hopefully more mainstream coverage will mean that accessibility gets on the radar of more developers.
Is there a way to contact you? I have hundreds of questions about how you use the Web.
I'm building a web browser [1] that tries to make the semantics of the web automatable, so its goals are somewhat aligned with what blind users need in terms of semantics and extraction.
My knowledge of how blind people use the web is kind of outdated, though. Is JAWS still a thing?
(My web browser is far from stable; lots of things don't work yet and there's a ton of work left.)
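For a rough idea of what "making the semantics of the web automatable" can mean in practice, here is a toy sketch (not the browser mentioned above, and only an illustration) that uses Python's built-in html.parser to pull out the headings and landmark roles that screen reader users typically navigate by:

```python
# Toy sketch: extract headings and ARIA landmarks from HTML,
# the kind of structure a screen reader lets users jump between.
# NOT the browser mentioned above -- just an illustration.
from html.parser import HTMLParser

HEADINGS = {"h1", "h2", "h3", "h4", "h5", "h6"}
# Elements with implicit landmark roles (simplified subset).
IMPLICIT_LANDMARKS = {"nav": "navigation", "main": "main",
                      "header": "banner", "footer": "contentinfo"}

class OutlineParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.outline = []          # (kind, detail) tuples in document order
        self._open_heading = None  # tag name of the heading being read

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in HEADINGS:
            self._open_heading = tag
        role = attrs.get("role") or IMPLICIT_LANDMARKS.get(tag)
        if role:
            self.outline.append(("landmark", role))

    def handle_endtag(self, tag):
        if tag == self._open_heading:
            self._open_heading = None

    def handle_data(self, data):
        if self._open_heading and data.strip():
            self.outline.append((self._open_heading, data.strip()))

p = OutlineParser()
p.feed("<nav>menu</nav><main><h1>Title</h1><h2>Intro</h2></main>")
print(p.outline)
# [('landmark', 'navigation'), ('landmark', 'main'),
#  ('h1', 'Title'), ('h2', 'Intro')]
```

A real implementation would of course need a full HTML5 parser, the complete landmark-role mapping, and the accessibility tree rather than raw markup, but the basic idea is the same: expose the page as a navigable outline instead of pixels.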
I'm not the OP, but you can contact me if you like; my email is in my profile. I'm not totally blind, and I don't use a screen reader exclusively. But I use one a lot when browsing the web. Also, I worked on the Windows accessibility team at Microsoft for three years, and I wrote a Windows screen reader and a talking web browser before that.
Edit: To answer your other question, yes, JAWS is still maintained and widely used, but it's not as dominant as it once was. On Windows, the NVDA open-source screen reader is quite popular, and the built-in Narrator screen reader is on the rise (though of course I'm biased, since I worked on it). iOS with its built-in VoiceOver screen reader is also quite popular among blind people, at least in the US.
I've taken a look at the WebAIM surveys from the last few years and have seen that NVDA is on the rise. It's nice to see JAWS finally losing its market share.
I was asking because back around 2008, when I was more involved with accessibility on the Web, we built a website that had a legal requirement to be as accessible as possible. We were trying to generate accessible double-paged PDFs, voice over DAISY books, and so on, and it was an enormous amount of work. We spent thousands of man-hours just on document conversion, even though the underlying source format was RTF, which is at least theoretically easy to parse with regard to layout.
Every time we tried to make things compatible with JAWS, we realized that it was just a pile of dirty Trident hacks, not integrated as cleanly as you'd expect such software to be.
It was before the rise of AI/CNNs, so converting a vectorized PDF back into a semantic one was practically impossible. These days Tesseract seems to be making huge progress, but it's still unusable for the task in practice due to its high failure rate on recognized words, which you can't fix with tricks like Levenshtein distance or dictionary statistics.
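For what it's worth, the dictionary trick mentioned above usually looks something like this minimal sketch (not production OCR post-correction): snap each recognized word to the nearest dictionary entry within a small edit-distance budget. It fails exactly where described, once the error rate pushes words past that budget or out of the dictionary entirely:

```python
# Minimal sketch of OCR post-correction via Levenshtein distance:
# snap each recognized word to the closest dictionary entry, but
# only if it is within a small edit-distance budget.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(word: str, dictionary: set[str], max_dist: int = 1) -> str:
    """Return the nearest dictionary word within max_dist, else word unchanged."""
    if word in dictionary:
        return word
    best = min(dictionary, key=lambda w: levenshtein(word, w))
    return best if levenshtein(word, best) <= max_dist else word

words = {"accessible", "screen", "reader", "browser"}
print(correct("acessible", words))  # close miss: snaps to "accessible"
print(correct("scr33n", words))     # too garbled (distance 2): left unchanged
```

With a realistic OCR error rate, many words end up two or more edits from the truth, and raising max_dist just trades misses for wrong "corrections" — which is why this alone can't rescue a high-failure-rate recognizer.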
Ever since, I've been more on the Linux side of things. There the ecosystem is so bad that I can't even begin to describe it. Most TTS engines are literally from the last millennium, and projects like Orca aren't made for anything serious when you try to embed them into your own software to give users more access and control.
Maybe you have some hints here, too? Are there better alternatives that I'm not aware of?
[1] The actively developed fork of Mozilla TTS, named Coqui TTS. My understanding is that the original team was let go by Mozilla and formed Coqui.
https://github.com/coqui-ai/TTS
They're also on Matrix:
https://matrix.to/#/#coqui-ai_TTS:gitter.im
[2] FOSS automated accessibility testing engine for websites and other HTML-based user interfaces:
https://github.com/dequelabs/axe-core
[3] Emacspeak, developed by someone who was blind since childhood:
https://en.wikipedia.org/wiki/Emacspeak
[4] UK government websites are famous for being accessible. They have design guidelines:
https://design-system.service.gov.uk/
[5] The equivalent design system for the US government:
https://designsystem.digital.gov/
[6] Mozilla MDN learn accessibility: