Developers taking their first stab at text to speech have approached it in a variety of ways.
These range from simply reading out the name of the highlighted menu item, all the way up to Minecraft's approach of replicating a standard screen reader's level of verbosity: screen titles and headings, plus label, role, and state for every element. That means telling you what page you're on, whether what you're highlighting is a button, a slider, or a checkbox, whether it is part of a list, and so on.
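To make the two ends of that scale concrete, here is a minimal sketch of how the announcement text might differ. All names here (`UIElement`, `announce`) are hypothetical illustrations, not any real engine or screen reader API.

```python
# Hypothetical sketch: two verbosity levels for announcing a focused UI element.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class UIElement:
    label: str
    role: str                                   # e.g. "button", "slider", "checkbox"
    state: Optional[str] = None                 # e.g. "checked", "80 percent"
    list_position: Optional[Tuple[int, int]] = None  # (index, total) if in a list

def announce(element: UIElement, verbose: bool) -> str:
    if not verbose:
        # Minimal approach: just read the label
        return element.label
    # Screen-reader-style verbosity: label, role, state, list context
    parts = [element.label, element.role]
    if element.state:
        parts.append(element.state)
    if element.list_position:
        index, total = element.list_position
        parts.append(f"{index} of {total}")
    return ", ".join(parts)

slider = UIElement("Music volume", "slider", "80 percent", (2, 5))
print(announce(slider, verbose=False))  # prints "Music volume"
print(announce(slider, verbose=True))   # prints "Music volume, slider, 80 percent, 2 of 5"
```

The question below is essentially which of these two outputs (or what point in between) players actually want.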
The purpose of this post is to pass on a question from some developers who are pondering which end of the scale to aim for. Is the Minecraft approach what you're after, with the same level of detail you would expect when using an app or a website?
Or, given that game UIs are often simpler than something like a website, is that too much detail, detail that gets in the way of efficient navigation?
Would you prefer multiple navigation modes, as on a screen reader, or is that overkill for game UI?