We're now starting to see mainstream AAA games implementing voiced interfaces, and that's something that's only going to increase over the coming years.
Even within the handful of examples so far there are very different approaches to implementation, so I thought it might be helpful to have a chat about what direction you would like developers to take with it.
Some examples -
1. Screen reader compatibility, e.g. Skullgirls
2. Using a platform-level text-to-speech API, e.g. Crackdown 3
3. Building a custom synthesised speech solution inside the game itself that works the same across all platforms (PC/Xbox/PlayStation/Switch etc.), e.g. The Division 2
4. Building a custom synthesised speech solution inside the game itself that also has in-game control over voice and speech speed, e.g. Eagle Island
5. Recording on-brand human speech for all menu items, e.g. Freeq
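To make the difference between approaches 2 and 3/4 a bit more concrete, here's a rough sketch of the kind of abstraction a game-owned speech layer implies. All the names here are hypothetical, and a real implementation would put actual platform TTS calls (or a bundled synthesiser) behind each backend:

```python
from abc import ABC, abstractmethod

class SpeechBackend(ABC):
    """One backend per platform; with approach 3, the backend is the
    game's own synthesiser so behaviour is identical everywhere."""
    @abstractmethod
    def speak(self, text: str, interrupt: bool = True) -> None: ...

class RecordingBackend(SpeechBackend):
    """Stand-in backend that just records what would be spoken,
    useful for testing or for platforms without TTS."""
    def __init__(self):
        self.spoken = []
    def speak(self, text: str, interrupt: bool = True) -> None:
        if interrupt:
            self.spoken.clear()  # new announcement cuts off the old one
        self.spoken.append(text)

class GameSpeech:
    """The single speech layer the rest of the game talks to."""
    def __init__(self, backend: SpeechBackend, rate: float = 1.0):
        self.backend = backend
        self.rate = rate  # approach 4 exposes this (and voice) in-game

    def announce(self, text: str, interrupt: bool = True) -> None:
        self.backend.speak(text, interrupt)

speech = GameSpeech(RecordingBackend())
speech.announce("New game, button")
```

The point of the sketch is just that approaches 2 and 3 differ only in what sits behind `SpeechBackend`, while approach 4 adds settings like `rate` to the game's own options menu.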
Ignoring all practicalities, what would your dream setup be? Which approach would you like developers to take, and why? Strictly from your own perspective, i.e. how it affects your experience of a game, not how it might affect developers.
Or would you even prefer more than one, for example one approach for menu navigation and another for in-game interfaces?
[EDIT] Also, considering many developers are coming to this from scratch with no prior knowledge of TTS conventions, any tips on what kind of contextual information to communicate? Label/role/state/announcements etc.?
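To make that last question concrete, here's roughly the kind of announcement screen readers conventionally build from label, role and state, with list position tacked on the end. The ordering and wording here are just one plausible convention, not a standard:

```python
def build_announcement(label, role, state=None, position=None):
    """Compose a menu-item announcement: label first, then role,
    then state (if any), then position within the list (if any).
    position is a hypothetical (index, total) pair."""
    parts = [label, role]
    if state:
        parts.append(state)
    if position:
        index, total = position
        parts.append(f"{index} of {total}")
    return ", ".join(parts)

print(build_announcement("Subtitles", "checkbox", "checked", (3, 8)))
# "Subtitles, checkbox, checked, 3 of 8"
```

So a player landing on that item hears what it is, what kind of control it is, what it's currently set to, and where they are in the menu, without having to explore anything else.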