I love Emacs. My first intro to it was on the Braille Plus Mobile Manager, back in 2008 or so. That was a beautiful device that ran Linux and was developed for the blind. There's been nothing exactly like it since. The BT Speak is a poor imitation that runs on a Raspberry Pi 4 and is sluggish, because Linux accessibility is hard and not optimized for such low-power devices.
Anyway, I began learning Emacs commands in the Emacs tutorial on that Braille Plus, and they made sense to me. Unfortunately, Emacspeak only really works well on Linux and Mac, not Windows, where all the blind people are. Speechd-el only works on Linux, since it depends on Speech Dispatcher. I did get Speechd-el talking on Termux for Android last night, although it was rather laggy between key press and speech. Emacspeak development has paused, though, and Speechd-el seemingly hasn't been updated in half a year. Emacs itself has a lot going on that a normal screen reader can't easily interpret, which is why Emacs-specific speech interfaces are so useful.
A few examples:
* On Windows, with Windows Terminal and the NVDA screen reader, the arrow keys read where the cursor is, but for C-n and C-p, C-f and C-b, and the like, NVDA doesn't say anything. This is with the -nw command-line option, because the GUI is inaccessible.
* Now, if I do M-x, it does say "minibuf help, M-x, Windows Powershell Terminal". From there, I can type list-packages and RET and use the arrow keys to go through packages, but n and p don't speak even though I know they move between packages. So it seems like the echo area works.
* Programs like the calendar, though, really don't speak well with a screen reader. It just reads the whole line, not the focused date. Pressing left and right just says "1 2 3 4 5" and so on. So custom interfaces don't work well. I shudder to think how it'd read Helm.
Lol, maybe I can get AI to make a good speech server for Emacspeak on Windows.
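In case I (or the AI) ever get around to it, here's roughly what that involves. Emacspeak runs its speech server as a subprocess and writes one command per line to the server's stdin, with text arguments wrapped in {braces}. Below is a minimal sketch in Python using pyttsx3, which wraps SAPI5 on Windows. The command set shown (q, d, s, l, tts_say, tts_set_speech_rate) is just my reading of Emacspeak's bundled servers, not a complete or verified implementation:

```python
#!/usr/bin/env python3
# Toy Emacspeak speech server for Windows, using pyttsx3 (SAPI5).
# A sketch, not a real server: the command set and the {brace}-wrapped
# text arguments are my reading of Emacspeak's bundled servers.
import sys
import pyttsx3

engine = pyttsx3.init()   # picks the SAPI5 driver on Windows
queue = []                # text queued with `q', flushed by `d'

def unbrace(arg):
    """Emacspeak wraps text arguments in {...}; unwrap them."""
    arg = arg.strip()
    if arg.startswith('{') and arg.endswith('}'):
        arg = arg[1:-1]
    return arg

for line in sys.stdin:
    cmd, _, arg = line.strip().partition(' ')
    if cmd == 'q':                        # queue a chunk of text
        queue.append(unbrace(arg))
    elif cmd == 'd':                      # dispatch everything queued
        for chunk in queue:
            engine.say(chunk)
        queue.clear()
        engine.runAndWait()
    elif cmd in ('tts_say', 'l'):         # speak now (l = single character)
        engine.say(unbrace(arg))
        engine.runAndWait()
    elif cmd == 's':                      # stop: drop the queue, halt speech
        queue.clear()
        engine.stop()
    elif cmd == 'tts_set_speech_rate':    # rate, roughly words per minute
        try:
            engine.setProperty('rate', int(arg))
        except ValueError:
            pass
    elif cmd == 'exit':
        break
```

The big thing this sketch punts on is interruption: engine.runAndWait() blocks, so a stop command can't cut off speech mid-utterance. A real server would run synthesis on a separate thread, which is exactly the part that makes responsive speech servers hard to write.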