
windows: ability to output to the currently running screenreader #2

Open
danielw97 opened this issue Apr 27, 2024 · 7 comments

@danielw97

Hi,
First off, great work on this utility and thanks for making it. I was first made aware of it via your recent Reddit post on the LocalLLaMA subreddit.
As someone who's been using the CLI to interact with LLMs over the past few years, it's great to have an accessible graphical option that isn't a web UI.
It's still early days of course, but a small wishlist item of mine is the ability to output model responses to a screen reader such as JAWS or NVDA, if possible.
I'm not sure how things are on macOS, but on Windows the main option currently is outputting via the system's default TTS, which is probably SAPI.
I'm not sure how easy something like this would be to implement while still staying cross-platform, though.

@chigkim
Owner

chigkim commented Apr 27, 2024

Actually, I was trying to get this going with dkager/tolk, but I hit a roadblock and put it aside for now:
dkager/tolk#23
Do you know anything about this library?
Maybe I'll pick it up again at some point.
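For context, calling Tolk from Python could look roughly like the sketch below, assuming Tolk.dll and the screen reader client DLLs sit next to the script. The `load_tolk` and `speak` helpers are hypothetical names for illustration; `Tolk_Load`, `Tolk_Output`, and `Tolk_Unload` are Tolk's own exports.

```python
# Sketch only: minimal ctypes binding against Tolk.dll. Assumes Tolk.dll
# plus the screen reader client DLLs (nvdaControllerClient64.dll,
# SAAPI64.dll, ...) sit in the same directory as this script.
import ctypes
import os

def load_tolk(base_dir=None):
    """Load Tolk.dll and initialise it; return None when the DLL is
    absent (e.g. on non-Windows systems)."""
    if base_dir is None:
        base_dir = os.path.dirname(os.path.abspath(__file__))
    dll_path = os.path.join(base_dir, "Tolk.dll")
    if not os.path.exists(dll_path):
        return None
    tolk = ctypes.cdll.LoadLibrary(dll_path)
    tolk.Tolk_Load()  # initialise and detect the active screen reader
    return tolk

def speak(tolk, text, interrupt=True):
    # Tolk_Output speaks and brailles the text; it takes a wide string.
    tolk.Tolk_Output(ctypes.c_wchar_p(text), ctypes.c_bool(interrupt))

tolk = load_tolk()
if tolk is not None:
    speak(tolk, "Hello world")
    tolk.Tolk_Unload()
```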

@danielw97
Author

Hi,
Thanks for your reply. I don't know a ton about this library, other than that it's been used in quite a few projects.
It also hasn't been updated in quite a few years, although to my knowledge it's still the main screen reader interface library people use, and I don't believe there is a more recent one.
I'll have a look into the code, as I'm currently learning Python, and try to see what's going on.
One other open-source project currently using it (although it's in C#, so apples and oranges) is https://github.com/khanshoaib3/CrossSpeak

@danielw97
Author

Hi again. After looking at dkager/tolk#23, it appears the main issue is that the DLLs for NVDA etc. aren't on PATH. After the recent PyInstaller changes, might it be worth trying to place the DLLs in the base path the Python script is being run from?
I won't have a ton of time until later next week to look at this more, but I wanted to note my thoughts down in case they're useful.
At least in my book this is low priority, although it would be nice to have in the future.
Thanks again.
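One way to sketch that idea: resolve the app's base directory whether it is running from source or from a PyInstaller one-file build (which unpacks bundled files to `sys._MEIPASS`), then register that directory as a DLL search location. `app_base_dir` and `register_dll_dir` are hypothetical helper names, not part of any project here.

```python
# Sketch only: find the app's base directory under both plain Python
# and PyInstaller, then make it searchable for DLL loading.
import os
import sys

def app_base_dir():
    if getattr(sys, "frozen", False):
        # PyInstaller sets sys.frozen; one-file builds unpack to _MEIPASS.
        return getattr(sys, "_MEIPASS", os.path.dirname(sys.executable))
    return os.path.dirname(os.path.abspath(__file__))

def register_dll_dir(path):
    if hasattr(os, "add_dll_directory"):
        # Windows, Python 3.8+: the supported way to extend DLL search.
        os.add_dll_directory(path)
    else:
        # Fallback elsewhere: prepend to PATH.
        os.environ["PATH"] = path + os.pathsep + os.environ.get("PATH", "")

register_dll_dir(app_base_dir())
```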

@chigkim
Owner

chigkim commented Apr 28, 2024

Even before PyInstaller, I couldn't get it to work. I just made a very simple test script and put the DLLs where the main script was.
If you could come up with even a minimal Python script that speaks "hello world", let me know! That would be extremely helpful, and I can take it from there.

@danielw97
Author

Hi,
I've been struggling to get Tolk to work, but have had better luck with accessible_output2.
Not only was it updated much more recently, it is also a pip-installable package and cross-platform.
More info here:
https://github.com/accessibleapps/accessible_output2
A simple example like this from the repo worked for me:

```python
import accessible_output2.outputs.auto

o = accessible_output2.outputs.auto.Auto()
# Attempts to both speak and braille the given text through the first available output.
o.output("Some text")
# Speaks the text without brailling it, interrupting any currently-speaking text.
o.speak("Some other text", interrupt=True)
```

It can be installed with pip install accessible-output2
Hope that helps a bit.
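To keep the app working even where the package (or a screen reader) is unavailable, the calls could be wrapped behind a small fallback. This is a sketch using a hypothetical `Speech` class that degrades to printing when accessible_output2 can't be imported:

```python
# Sketch only: prefer accessible_output2 when installed, fall back to
# printing so the app still runs without a screen reader stack.
class Speech:
    def __init__(self):
        try:
            import accessible_output2.outputs.auto
            self._out = accessible_output2.outputs.auto.Auto()
        except Exception:
            self._out = None  # package missing or no output available

    def say(self, text, interrupt=False):
        if self._out is not None:
            self._out.speak(text, interrupt=interrupt)
        else:
            print(text)

speech = Speech()
speech.say("Model response ready", interrupt=True)
```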

@chigkim
Owner

chigkim commented May 10, 2024

It's for Windows only, right, no macOS? It mentions being multi-platform, but I didn't see VoiceOver mentioned in the README.

@danielw97
Author

The README doesn't specify, although from looking at the docs it appears to have both macOS and Linux support as well.
I should be able to test on a Mac over the next few days to double-check.
