1

Hello, I am a game development student and I am currently building a user interface in Unity that can help blind people and people with low vision navigate through it. I am new to this big but amazing world of accessibility, and I was hoping you could give me some advice on my current version.

I leave you here the link to download the current build:
https://drive.google.com/file/d/1-MgsEr … sp=sharing
To play it, just download the archive, extract the folder and run the executable named "Builds".

I would love to know what you think about it, especially how useful you find what I have so far. Of course, any comment about further improvements will be more than welcome!

In the scene there are two menus: the main one, shown at the beginning, and the pause menu, which is triggered by pressing the Escape key once you are in a new game.

There are still some technical things to improve, but you should already be able to move through the interface without seeing it, using the voice-over and the keyboard. You should also be able to change the text size to make it easier to read, and to control the volume of the music and UI sounds.

Looking forward to your feedback big_smile

2

Arnaurapid, welcome to the forum!  It's always great to have new developers around here.

- Aprone
Please try out my games and programs:
Aprone's software

3

Hi, OK, so what you have so far is good, given that Unity isn't very accessible by default. A couple of things. First, when loading the accessibility option for the first time, the whole game just sort of hangs for about 10 seconds. Second, you're using pre-recorded clips for each menu option, but it would be much preferable to harness SAPI to speak the menu options. Adding a new option to any in-game menu then wouldn't require recording a clip for it. It would also mean not having potentially dozens of voice clips taking up extra space. And when moving through the options, SAPI can be interrupted, so you wouldn't hear the tail of the previous option as the new one starts speaking. All Windows computers have at least one SAPI voice on them.

The bipeds think this place belongs to them, how cute.

4

Thank you Aprone, it's a pleasure for me to be able to participate!

Hi ironcross32, thank you for trying it out! I am aware that Unity isn't the best engine in this respect because it doesn't expose its text to screen readers. That's why I wanted to introduce a voice-over guide/narrator. I know the sounds aren't the best, but I am planning to incorporate this into a video game, and in the future the intention would be to look for a voice actor. The game itself would also have other text to be recorded, so it wouldn't be just for the menu. I agree that it would be easier not having to record each menu button, but I thought a voice other than a machine tone could be nice for the immersion of the game. I take note of the fact that it gets laggy! I will look into ways of making it more fluid. Also, as you comment, I need to fix the fact that the speech doesn't get interrupted when changing from option to option.

I haven't done much research on the Microsoft Speech API (SAPI) yet, but it sounds like a good thing to look into! Although I am curious to know how good and well liked it is. Do you personally use it? Do you know if it is a popular option among the visually impaired? Also, does it depend on mouse control, or does it work well with keyboard input too?

Thank you!

5

To be honest, the machine tone, as you call it, is the way to go. It's what we're used to; the voice-over thing is OK for a first step, but to really work well, it would be preferable to use SAPI. No, as a general rule I don't use SAPI; there is a certain amount of latency involved with it. However, that isn't an issue when using it for games, only for navigation on websites, working with the filesystem, writing documents and other computer use. The benefits of SAPI far outweigh the few drawbacks; namely, I don't think a majority of us would prefer recorded voice clips for reading menu options and the like. You could also use a library called Tolk to interact directly with our screen readers. SAPI can be driven from the SpeechSynthesizer class in the System.Speech.Synthesis namespace in C#, which is what I think Unity uses. It isn't the best implementation, but it works. Tolk is good because as long as you copy the freely available libraries into the project dir, which for Visual Studio would be c:\users\<username>\source\<project name>\<app folder>\bin\<debug|release>\, it will work fine.

The bipeds think this place belongs to them, how cute.

6

Also, SAPI/Tolk is a good choice because we may wish to use it for information that is constantly updated (like a HUD). It would be much too difficult to anticipate arbitrary events and record voice clips for every possibility they can generate, not to mention the unnecessary waste of space.

"On two occasions I have been asked [by members of Parliament!]: 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out ?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."    — Charles Babbage.

7

Thanks for your answers guys smile

ironcross32 wrote:

It's what we're used to; the voice-over thing is OK for a first step, but to really work well, it would be preferable to use SAPI.

Even if you are used to it, wouldn't you appreciate a voice actor's tone more than a synthetic machine tone?
I understand that in terms of efficiency it's easier to drop the recordings and go for SAPI, but I actually want this to be part of the project from the start; I am not looking for the easiest option but for a quality product.

Ethin wrote:

Also, SAPI/Tolk is a good choice because we may wish to use it for information that is constantly updated (like a HUD).

I agree that it would save time for sure, although I am curious to know whether SAPI/Tolk works as well on the Mac. And I am also curious to know whether the Mac is a popular choice among people with blindness and visual impairment.

8

As a blind person I generally prefer to just use the synthesized voice with a screen reader. Screen readers allow you to read text at your own speed and with your own preferred settings, whereas audio clips specifically for the game would slow things down. Also if the game is particularly number heavy you would need to record a lot of separate voice clips, which would take up space and sound slow and clunky.

Deep in the human unconscious is a pervasive need for a logical universe that makes sense. But the real universe is always one step beyond logic.

9

@7, no, Tolk and SAPI are Windows-specific. I haven't found a library out there that can drive the TTS engines on Windows, Mac and Linux alike.

"On two occasions I have been asked [by members of Parliament!]: 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out ?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."    — Charles Babbage.

10

@7 No, honestly, the synthesized voices are better. Maybe not to you if you're sighted; I can understand how you feel that way. Maybe you don't understand them, or only at a slow rate. To you I'm sure they sound really bad. Not to us, though; we're used to them because we use screen readers most every day, if not every day: if we don't use a computer that day, we're almost certainly going to use our phone, and a screen reader will be on there. You can use SAPI, use a library like Tolk, or even take direct control through each screen reader's own library, which is the layer Tolk abstracts away so you don't have to worry about it. SAPI voices let you adjust the speaking rate, and with a screen reader we can do that independently of the game. This is good because everyone can listen at their own pace.

Now, where I think voice actors would come in well is for a sort of narrator's announcement. Depending on the type of game you want to make, you could have voice-acted lines like "Powerup activated", "Boost mode enabled", "New record". You get the idea, basically just to spice things up. But I think most people who see this will tend to agree that recorded lines for menus and UI elements are not really preferable.

Of course, it's your project and you're free to do what you like with it, but you did ask, so I responded.

The bipeds think this place belongs to them, how cute.

11

ironcross32 wrote:

To you I'm sure they sound really bad. Not to us, though; we're used to them because we use screen readers most every day, if not every day: if we don't use a computer that day, we're almost certainly going to use our phone, and a screen reader will be on there.

This is great! Seriously, I had no idea. I would always have thought that a different voice would be appreciated, but now I understand how you see it! It means changing my approach, but I still appreciate it.

Thank you ironcross32. The only problem I see now is if I want to make the game accessible across different platforms and operating systems, since, as Ethin comments, SAPI and Tolk are specific to Windows...

12

I wonder if you can just set up several options and only activate one based on the OS the game is running on. Like, if Mac, you use VoiceOver; if Windows, you use Tolk; on Linux, well, I don't know what the hell you do to make it work with Orca lol.

There's got to be some way to do this in the code, because if not, you would have to compile for each system separately. Even then you can still do it; it just means that each built project will be a little different. If you're coding in Visual Studio, you can probably do each build in the same solution. My reason for thinking there almost has to be a way to tell which operating system you're under is things like line delimiters, CRLF and the like. File paths too: while it no longer matters whether you use slashes or backslashes on Windows (it used to), the structure is still different from /home/<user>/games/ and so on. You could do c:\games or c:/games. Paths on Windows are messy anyway; several of the familiar locations are really junctions or links under the hood.

The bipeds think this place belongs to them, how cute.

13

@12, I know you can do it through speech-dispatcher, but that has some serious downsides. For line terminators, that's easy:
// For C++ projects, not sure how you'd do it in C#...
#include <boost/predef.h>
#if (BOOST_OS_WINDOWS==1)
#define newline "\r\n"
#elif (BOOST_OS_MACOS==1)
// modern macOS uses "\n"; only classic Mac OS used "\r"
#define newline "\n"
#elif (BOOST_OS_LINUX==1)
#define newline "\n"
// other cases...
#endif
Alternatively, C++'s std::endl in <iostream> writes '\n' and flushes the stream; a stream opened in text mode translates that to the platform's terminator for you.

"On two occasions I have been asked [by members of Parliament!]: 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out ?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."    — Charles Babbage.
