Hi, so I finally thought I would pick up BGT. I have not yet built a game that I am proud of, so I thought that BGT may be a better place for me to get some experience building games before I start trying to piece together a game in another language.

I had some questions, though. I tried emailing the developer, but he did not respond. So here we go:

Does anyone know how BGT implements the speak_wait() method in the tts_voice class? This is a method that would be very helpful in many of my other projects using pygame and accessible_output2. Would it be possible to create a method for accessible_output2 that does this?

I have not yet tried using BGT's features for NVDA or JAWS, but I would hope that BGT has a similar method in those cases as well? I have not found out how to make JAWS or NVDA speak yet. Where is that located in the documentation? I saw somewhere in there that I have to drag the NVDA DLL into my project before I can call NVDA, but I could not find the class for that.

The one thing that I noticed after I built my first game and compiled it into an executable release build was that it took an unusual amount of time to run the program. When I click on the executable, it takes several seconds, probably half a minute to a minute, before the game opens. Do you know why this is? I am a little concerned about this as I build larger games; is there anything I can do about this delay?

Thanks,

TJ Breitenfeldt


I never had this problem running BGT executable files. As for screen readers, try looking at the user interaction section of the reference; the functions screen_reader_set_library_path, screen_reader_is_running and so on are what you're looking for. I didn't check, but maybe there's something similar in accessible_output2; for example, you could use a loop that runs while the tts_voice object is speaking.
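The "loop while speaking" idea can be sketched in Python roughly like this. It is deliberately backend-agnostic: `speak` and `is_speaking` are hypothetical placeholders for whatever your speech library actually provides (with SAPI via pywin32, for instance, that would be an asynchronous `voice.Speak` call and a check of the voice's running state, though the exact property names there should be verified against the SAPI documentation):

```python
import time

def speak_wait(speak, is_speaking, text, poll_interval=0.05, timeout=30.0):
    """Speak `text`, then block until the backend reports it is done.

    `speak` and `is_speaking` are callables supplied by the speech backend
    (hypothetical names for this sketch). The timeout guards against a
    backend that never reports completion.
    """
    speak(text)
    deadline = time.monotonic() + timeout
    while is_speaking() and time.monotonic() < deadline:
        time.sleep(poll_interval)
```

As the thread explains below, this only works when the backend can actually answer `is_speaking()`, which in practice means SAPI; screen readers give you no such status call.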


Hi,
Hi,
BGT already provides the speak_wait() method in the tts_voice class, but since the tts_voice class only accesses SAPI, this won't work for screen readers. The problem here is that screen readers don't offer a way to find out whether they're currently speaking, let alone whether what they're speaking is actually the text we sent them or something totally different. That's why there is no way to implement a speak_wait() method for screen readers yet.
Regarding the long loading time of BGT executables: this has nothing to do with the executables themselves, but with some sort of antivirus software. I encountered this with McAfee, I believe; it scans any executable on launch, and if it doesn't know the executable yet, it takes a very long time to scan it. Norton seems to do the same. Try disabling all of your virus protection software and launching the executable again; it should work flawlessly then.
Best Regards.
Hijacker


All the classes that you need are inside the includes folder in your Program Files directory.


I just read your first post again and noticed that my response didn't actually hit the nail on the head, so I'll try to answer the remaining questions here.
Accessible_output2 tries to implement a unified interface to all underlying speech systems, including SAPI, NVDA, JAWS, Window-Eyes and several more. That's why it only provides methods for the functionality all those interfaces have in common. Since SAPI is the only interface which allows you to check whether the voice is currently active, I imagine the developer decided to drop this feature. BGT takes another approach here: the tts_voice class wraps SAPI only, and can therefore implement the speak_wait() function. The screen reader access functions aren't encapsulated in a class but are plain functions, which can be found in the BGT help file, and they don't offer a way to implement a speak_wait method.
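The design trade-off described above (a unified interface can only expose what every backend supports) can be illustrated with a tiny sketch. These class and method names are hypothetical, purely for illustration; accessible_output2's real classes and methods differ:

```python
class SpeechBackend:
    """Lowest-common-denominator speech interface (illustrative only)."""

    def speak(self, text):
        raise NotImplementedError

    def is_speaking(self):
        # Most backends (screen readers) simply cannot answer this,
        # so the unified interface cannot promise it either.
        return None  # None means "unknown / unsupported"

class SapiLikeBackend(SpeechBackend):
    """Stands in for SAPI, which can report speaking status."""
    def __init__(self):
        self._busy = False
    def speak(self, text):
        self._busy = True  # a real SAPI voice would speak asynchronously
    def finish(self):
        self._busy = False
    def is_speaking(self):
        return self._busy

class ScreenReaderLikeBackend(SpeechBackend):
    """Stands in for NVDA/JAWS-style output: fire and forget."""
    def speak(self, text):
        pass  # no status is ever available

def can_speak_wait(backend):
    """speak_wait is only implementable when the backend reports status."""
    return backend.is_speaking() is not None
```

This is why a speak_wait in a unified library would either have to raise an error on screen reader backends or be dropped from the common interface entirely.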


Thank you for the replies. So, it is possible to create a speak_wait method for SAPI, but not for screen readers? Why? I would think that because NVDA is open source, we should at least be able to come up with something to detect while NVDA is speaking. It is just a very useful tool to have, and no one really wants to use SAPI if they can avoid it. Is it something unique to SAPI, or is it possible to do this with VoiceOver?

TJ Breitenfeldt


I don't know if it's possible with VoiceOver, as I never developed under OS X, but it seems that screen readers in general don't implement such functions. That could be for the simple reason that SAPI defined its own way of implementing synthesizers, which people used to build the SAPI 5 voices we now use, so every voice had to provide the functions defined in a specification by Microsoft. Screen readers like NVDA use several synthesizers and drivers that were developed by various other companies, each of which defined its own specification. Even if the Vocalizer voices in NVDA supported detecting whether they're currently speaking, there would be no guarantee that Eloquence supports this feature. Or eSpeak. And so on. That's probably the problem: NVDA depends on several third-party products, while Microsoft forced developers to follow its specification, which means that all voices assembled under SAPI must report whether they're currently speaking, making it possible to implement a speak_wait method. Other products don't have to support this feature, which is why screen readers don't expect it to exist and don't implement such a function. Anyway, I'm not sure about it, but that could be one reason.
Best Regards.
Hijacker


Even if VoiceOver had this feature, which I don't think it does, BGT isn't macOS compatible, so it does you no good. You would need to make the app accessible out of the box with a GUI, build it in Python using accessible_output2, or build your own TTS system.


Okay, thank you. It really would be nice to be able to watch the screen reader's speech buffer, but apparently that is not possible except with SAPI. So I am guessing that whenever people need to build a game that requires timing based on when the screen reader is done speaking, they just use SAPI? Or is there another solution people have found to get around this that hasn't been mentioned?

Also, thank you, Hijacker: I had my antivirus ignore the BGT file, and my game ran perfectly. I did not want to do any more development in BGT until I figured that out. Thank you.

TJ Breitenfeldt


Hi, the only way that I know of is by using SAPI; you just have to use the speak_wait function with SAPI.


A commonly used approach (e.g. in Manamon) is a non-scrolling dialog system, meaning the user has to press a certain key when they have finished listening to the current message, so that the dialog can play the next one. That's actually one of the most commonly used ways, if not the best way, to handle this situation. Other ways aren't possible if you want to support screen readers too and don't want to depend on SAPI alone.
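A manual-advance dialog like the one described above is easy to sketch. This is a minimal, library-agnostic version (the `Dialog` class and `speak` parameter are invented for illustration); in a pygame game you would call `advance()` from the event loop whenever the player presses the advance key:

```python
class Dialog:
    """Non-scrolling dialog: each message is spoken once, and the next
    message is only presented when the player explicitly advances."""

    def __init__(self, messages, speak):
        # `speak` is any callable that outputs text, e.g. a screen
        # reader or SAPI wrapper's speak function.
        self._messages = list(messages)
        self._speak = speak
        self._index = -1

    def advance(self):
        """Present the next message; return False when the dialog is over."""
        self._index += 1
        if self._index >= len(self._messages):
            return False
        self._speak(self._messages[self._index])
        return True
```

In a pygame event loop this would look roughly like: on a `KEYDOWN` event for the advance key, call `dialog.advance()`, and close the dialog when it returns `False`. No speech-status query is needed at all, which is exactly why this pattern works with every screen reader.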
Best Regards.
Hijacker


I was thinking about this problem. Could it be possible to create a formula based on the screen reader's current rate, which could be grabbed from the ini file in the case of NVDA, and calculate a duration based on the number of words? You could then set a timer to wait while the screen reader is speaking based on that value. It seems like it should work, but I am not sure how to set it up. The only problems I see are that if the user changes the rate of the screen reader, you would have to check the value in the settings file every time, and we would also need to know what the rate value is actually measuring.

Anyone know if this is possible, and if so, how to do this?
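For what it's worth, the word-count idea sketches out like this. The words-per-minute figure is the weak point: as the replies below note, a screen reader's "rate" setting does not map to any documented WPM value, so this number would have to be guessed or calibrated per synthesizer:

```python
def estimate_speech_seconds(text, words_per_minute):
    """Rough guess at how long `text` takes to speak.

    `words_per_minute` is an assumed, per-synthesizer calibration value,
    not something you can reliably derive from a screen reader's rate
    setting.
    """
    words = len(text.split())
    return words * 60.0 / max(words_per_minute, 1)
```

You would then sleep (or run a game timer) for the estimated duration instead of querying the screen reader.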

TJ Breitenfeldt


That would be almost impossible and also far too inefficient. First, it would be hard enough to find the NVDA ini file: you don't know where the user installed NVDA, whether they run it portably or from an installed copy, or where they put it on their hard drive. Not to mention that it is absolutely inefficient to read one and the same file multiple times per second; that creates traffic which will shorten the lifetime of the user's hard drive.
There is also the problem that the rate, again, is not fixed in the screen reader, meaning that the speech rate means totally different things for each voice the screen reader uses. Try, for example, speeding Vocalizer voices up to 100 percent and compare that speech to eSpeak or Eloquence at 100 percent. You'll notice that the Vocalizer voices are loads slower than Eloquence or eSpeak. And there is no way to get the actual speed in seconds from the rate given in NVDA.
I mean, it's always nice to think things through and ask questions; maybe you'll find something useful. But honestly, do I even want such a function? I'm totally happy with manually scrolling dialogues; I even like them better than the self-scrolling ones. Even sighted people don't like self-scrolling text very much: it often scrolls too fast, or they get distracted by something else, and when they concentrate again they have missed half the text. Why not just implement manual scrolling and not bother with this stuff? This is an area where you should always ask yourself: there are so many people out there who know loads more than me about this topic, so why hasn't anyone invented such a method yet? And almost every time the answer will be: because they don't know a solution either, so I probably won't be the one who invents the wheel.
Best Regards.
Hijacker


Okay, I thought that this probably wasn't possible, or else it would have already been done, but I thought I would ask to see why for myself. Yes, I would agree that manual scrolling is better in most cases, it would just be a nice tool to have. Oh well, there are clearly other ways of handling this issue.

Thanks,

TJ Breitenfeldt


Some developers have done something similar to your suggestion. Instead of looking up the screen reader's settings directly, they let the user specify a wait factor, then try to predict how long it will take for the speech to complete based on the length of the text to be spoken. This is pretty unreliable, though, so manual scrolling is still preferred.
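The wait-factor approach mentioned above amounts to a one-liner; the factor and minimum below are made-up defaults that a user would tune themselves, which is exactly where the unreliability comes from:

```python
def wait_time(text, seconds_per_character=0.05, minimum=0.5):
    """Predict a pause from text length times a user-tunable wait factor.

    `seconds_per_character` and `minimum` are arbitrary illustrative
    defaults; no single value works across voices and rates, which is
    why this technique is unreliable in practice.
    """
    return max(len(text) * seconds_per_character, minimum)
```

The game then waits this long after sending the text to the screen reader, hoping the speech has finished.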


Exactly; its unreliability and major imprecision are the main reasons why this approach is not used more widely. Even Lone Wolf implemented it, calling the feature screen reader speech delay, and that was back in 1999 I think, making it the first true Windows audiogame to do this as far as I know. Yet it still just didn't, and doesn't, work...

The blame for screen readers lacking the kind of standardized and reliable APIs that SAPI has lies entirely with the manufacturers of said screen readers, and probably with the manufacturers of the individual voices as well. Rather than allowing their voices to interface directly with SAPI, which would make them selectable in your screen reader of choice just like any other widely available SAPI 5 voice, they create their own proprietary interfaces for these voices (e.g. Nuance, RealSpeak and so on) that the screen readers in turn have to learn to support, and this in turn makes it impossible to create a standard universal wrapper for this kind of thing.

Lukas
