Hacker News

So this uses the tree-sitter AST and performs operations on the "parts" of the buffer? That makes it very similar to textobjects in vim / evil, but with a spoken component.
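
To make the analogy concrete, here is a toy sketch (not Cursorless's actual implementation) of the idea: given a parse tree, find the innermost node of a given kind that spans the cursor. This is the AST-driven analogue of vim's "inner function" / "inner argument" textobjects. The `Node` class and buffer offsets are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    kind: str                 # e.g. "function", "argument_list", "identifier"
    start: int                # buffer offsets (half-open span)
    end: int
    children: List["Node"] = field(default_factory=list)

def path_to(node: Node, pos: int) -> List[Node]:
    """Root-to-leaf chain of nodes whose spans contain pos."""
    path = [node]
    for child in node.children:
        if child.start <= pos < child.end:
            path.extend(path_to(child, pos))
            break
    return path

def textobject(root: Node, pos: int, kind: str) -> Optional[Node]:
    """Innermost node of the given kind enclosing the cursor position."""
    for node in reversed(path_to(root, pos)):
        if node.kind == kind:
            return node
    return None

# Hypothetical parse of the buffer "def add(a, b):\n    return a + b\n"
args = Node("argument_list", 7, 13)
fn = Node("function", 0, 31, [Node("identifier", 4, 7), args])
root = Node("module", 0, 31, [fn])

assert textobject(root, 9, "argument_list") is args  # cursor inside "(a, b)"
assert textobject(root, 9, "function") is fn         # widen to the whole def
```

A spoken command would then just name the kind ("take funk", "chuck arg") instead of the user navigating to the span by hand.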

I can see a lot of promise in this, particularly if it integrates a sort of "record and perform" feature. Say I'm doing action X with my keyboard and I know I need to do Y after, where 1) I can see all the required text on the screen, and 2) it's a simple operation. I could then speak the command for Y while executing X, and press a button to run the voice command once I finish X. Much better than having to alternate between typing and speaking.
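
A minimal sketch of that "record and perform" idea (a hypothetical API, not an existing Talon/Cursorless feature): spoken commands are parsed and queued while you keep typing, and a single keypress flushes the queue against the current buffer. The command representation (plain callables over a buffer string) is invented for illustration.

```python
from collections import deque

class DeferredCommands:
    """Queue voice commands while typing; run them all on a keypress."""

    def __init__(self):
        self._queue = deque()

    def record(self, command):
        """Called by the speech engine as each command is recognized."""
        self._queue.append(command)

    def execute_all(self, buf):
        """Bound to a 'go' key: apply queued commands in spoken order."""
        while self._queue:
            buf = self._queue.popleft()(buf)
        return buf

# Hypothetical commands spoken while finishing action X:
cmds = DeferredCommands()
cmds.record(lambda buf: buf.replace("foo", "bar"))
cmds.record(lambda buf: buf.upper())

# After X is done, one keypress performs Y:
print(cmds.execute_all("foo baz"))  # -> BAR BAZ
```

The point of the queue is exactly the decoupling described above: recognition happens during X, execution happens when your hands are free.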

Alternating between typing and speaking would engage different parts of our cognition, and writing software isn't very verbal once you've grokked the syntax. But if one can type and speak at the same time, the two channels complement each other and make achieving a flow state much easier. Sort of like rubber ducking on steroids.

One could take notes, too...

I see an Emacs package coming lol.



> I see an Emacs package coming lol.

Emacs is pretty good in the other direction (can type, can't see) thanks to Emacspeak.

But it's going to be a while before Emacs can catch up in this domain. The display engine can't handle cursorless-style notation, and the tree-sitter integration is not mature yet.

Source: I've tried.


You should check out this demonstration of programming by voice[0].

[0] https://youtu.be/GM_siEPD4Ws?si=f52wK3tqqJaCQPp7


Thanks!


> I see an Emacs package coming lol.

There is one that drives VS Code or JetBrains via the Cursorless plugin, but a native Emacs one would be nice.



