2017-02-17 16:47:55 (edited by daigonite 2017-02-17 21:52:31)

Well, the honeymoon period of my current job is pretty much over. It's not bad, don't get me wrong - but I don't feel I'm going to be able to get the experience I need here. I'm finding that the environment in this sort of client-facing work isn't going to work in the long run. I can deal with it for now, but I really need to focus on my own skills in my own time.

I kind of feel that in order to really jump-start my career I'm going to need to function as an entity independent of any corporation, and build my own product, so to speak. I think it might be a good idea for some of us to work together to develop some ideas. I was partly inspired by the thread about real-world problems that could be solved with blind technology.

I've had a few ideas for a while, and one of my main frustrations with the audiogame scene is that nothing is really standardized. Imagine how awesome it would be if we had a single, simple-to-use API that applies a basic level of accessibility to a game. Sure, there are engines out there specifically designed for building audio games, but I mean an interface that can be applied similarly to how standard software accessibility is meant to be used.

One of the biggest problems I notice with game accessibility is that large companies don't implement any accessibility, and most of the time games must be navigated by basic sound cues - some games are better at this than others. Obviously a solution like this can't be applied to every game, but perhaps we can reduce the problem by making a viable API that can be implemented in other games.

One of the things I've been trying to work on over the last few years is developing a sufficient model for navigating 3D space as a blind user. At this point I have a theory for how blind people navigate in comparison to sighted people, but I need to experiment with it to really see how it works. I believe that modelling these navigation patterns will help tremendously.

For those who are curious, I believe that people who are born blind use a relative strategy to navigate and orient themselves, as opposed to the visual model, which is absolute. By this, I mean that a blind person remembers locations based on what is next to a location - over a large area, a blind person can locate objects by basically building a mental "path" of relative objects. A sighted person can just see an object and can therefore calculate that path dynamically.
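To make the contrast concrete, here is a minimal Python sketch (purely illustrative; the landmark names, adjacency map, and coordinates are all invented for the example): the relative strategy walks a mental "path" of neighbouring landmarks, while the absolute strategy computes the route directly from coordinates.

```python
from collections import deque

# Relative strategy: landmarks are known only by what is next to them;
# a route is recovered by walking the mental "path" of neighbours.
landmarks = {
    "door": ["hallway"],
    "hallway": ["door", "kitchen", "desk"],
    "kitchen": ["hallway"],
    "desk": ["hallway"],
}

def relative_route(start, goal):
    """Breadth-first search over the adjacency 'mental map'."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in landmarks[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Absolute strategy: positions are known directly, so the "path"
# can be calculated on the fly from coordinates.
positions = {"door": (0, 0), "desk": (4, 3)}

def absolute_distance(a, b):
    (ax, ay), (bx, by) = positions[a], positions[b]
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

print(relative_route("door", "desk"))    # ['door', 'hallway', 'desk']
print(absolute_distance("door", "desk"))  # 5.0
```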

People who go blind adapt towards using this strategy but aren't as efficient. Sighted people rarely use this navigation technique, which means that in certain situations, blind people should outperform sighted people, specifically in low-light conditions. Conversely, blind people perform least effectively in situations where locations change frequently with no generalized pattern.

My largest frustration with my current situation is that I feel I can't work with enough people on my ideas. This is partially my own fault, having tried to be a lone wolf for so long, so I figured I should try to reach out here. I feel that once I'm able to get a functioning product, I or my team can finally go to an investor, actually make a workable product, and potentially jump-start a new accessibility wave.

I think understanding this will be the best way to model blind navigation digitally which can help build a navigational API for the blind in mainstream 2D (and potentially 3D) environments. However I think translating "distance" will be the hardest thing.

I've also wanted to build a highly portable note taker. It would have a very small display of only a few cells, but it would update the display automatically to simulate braille reading. However, I don't have the electronics or hardware skills needed to build it. Frustrating.

If anyone, sighted or blind, is interested in setting up a team with me, that would be chill smile

you like those kinds of gays because they're gays made for straights

2017-02-18 01:56:40

Hi.
Interesting theory.
Being a software developer myself, I can imagine the possibilities with that kind of navigation implementation.
I'll keep an eye on this thread.

If you like what I do, Feel free to check me out on GitHub, or follow me on Twitter

2017-02-18 02:08:52

Just a thought: if you want to build a braille notetaker, please do not make one of those with tiny displays, or at least give customers the option of a larger display. I like longer displays (32-40 braille cells) because they provide more continuity for reading. Also, studies have shown that longer displays generally lead to faster reading speeds.

2017-02-18 02:17:29 (edited by daigonite 2017-02-18 02:24:51)

I've talked about some of my ideas with Aaron Baker; he's pretty busy these days but he's said before it would be awesome to have a joint blind-sighted team to build a standard accessibility engine.

TBH my idea isn't really the best out there; I know Aaron at least, and I'm sure others as well, have developed better systems. Here is the basic model of what I use in Colors:

Global:
A toggle for accessibility mode turns on and off most of these features. Actually, in Colors, the functions are still executed but the game doesn't use the TTS script if access mode is off. In actual implementation it would connect to the screen reader instead of the TTS engine. Furthermore, if this toggle is off, accessibility objects are essentially ignored.

Menus:

Each menu option has a set of strings associated with it that provide more information. For example, on an item menu, there is a set of functions that return the strings for things like quantity, name, description, stat boosts, etc. Each of these menuoption objects in a menu can be associated with an element in the visible menu. In addition, the menu as a whole has its own strings to be read. These are all mapped to normally unused keys while accessible mode is active. This allows most visual information to be accessed by selecting the correct menu option, then reading off whatever data you need with a single key press.

This is really easy to implement honestly and basically is a specialized version of the standard accessibility interface used with most computers, although using the game's standard inputs instead of tabs.
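As a rough sketch of the scheme above (not the actual Colors code; the class names, info fields, and key bindings are all made up for illustration), each menu option carries its extra strings, and normally unused keys read them back one press at a time:

```python
# Hypothetical sketch of the accessible-menu idea; in a real game the
# returned strings would be sent to a TTS engine or screen reader.

class MenuOption:
    def __init__(self, label, **info):
        self.label = label
        self.info = info  # e.g. quantity, description, stat boosts

    def read(self, field):
        # One key press -> one piece of information.
        return self.info.get(field, f"no {field} available")

class AccessibleMenu:
    # Normally unused keys mapped to info fields while access mode is on.
    KEYMAP = {"q": "quantity", "d": "description", "s": "stats"}

    def __init__(self, title, options, access_mode=True):
        self.title = title
        self.options = options
        self.index = 0  # currently selected option
        self.access_mode = access_mode

    def on_key(self, key):
        if not self.access_mode:
            return None  # accessibility objects are ignored
        field = self.KEYMAP.get(key)
        if field is None:
            return None  # not an accessibility key
        return self.options[self.index].read(field)

menu = AccessibleMenu("Items", [
    MenuOption("Potion", quantity="3", description="Restores 50 HP"),
])
print(menu.on_key("d"))  # Restores 50 HP
```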

Navigation:

Navigation is more difficult. TTS assistance includes reading off the coordinates of destinations, reading off your own coordinates, reading off the terrain and reading the current room (location) you're in.

Obviously navigation requires sound, though. First, a 3D sound engine is implemented; however, this works in 2D by collapsing the y axis. The listener is attached to the player, and as the player moves, you can hear objects in the distance come closer or move further away depending on how you move. Furthermore, your player makes footsteps as they move. Each step creates an echo, so you can approximately hear where collisions are. If you get close enough, you will hear a proximity sound in the most recent release.

Finally, a marking system lets you drop audio markers so that a player can listen to the sounds and form commonly traversed pathways more easily.
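A minimal sketch of the collapsed-y listener idea, under assumed rolloff constants (not the engine's actual numbers): volume falls off with distance from the player, and pan tracks only the horizontal offset, since the vertical axis is collapsed.

```python
import math

def emitter_params(listener_xy, source_xy, max_dist=400.0):
    """Return (volume, pan) for a 2D sound source relative to the listener.

    Volume: linear rolloff, 1.0 at the listener, 0.0 at max_dist.
    Pan: -1 (hard left) to +1 (hard right), from the x offset only.
    These constants are illustrative assumptions.
    """
    lx, ly = listener_xy
    sx, sy = source_xy
    dx, dy = sx - lx, sy - ly
    dist = math.hypot(dx, dy)
    volume = max(0.0, 1.0 - dist / max_dist)
    pan = 0.0 if dist == 0 else max(-1.0, min(1.0, dx / max_dist))
    return volume, pan

# A marker 200 units to the right: half volume, panned halfway right.
vol, pan = emitter_params((0, 0), (200, 0))
print(vol, pan)  # 0.5 0.5
```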

Please feel free to add any suggestions. I've been building this in game maker of all things haha, so I probably am somewhat limited in scope.


TJT - What makes you prefer a long display to a short one? Do the short displays update automatically? I'm trying to think of a way to basically compress the display in a way that doesn't distract the reader. The idea behind what I want to build is that the braille display moves for the reader instead of vice versa. However, if a reader feels more comfortable moving their hand, this solution wouldn't be ideal.

While having a long display is easier to read, it's also much harder to carry around and it will be more expensive to produce as well; the goal is to have a very cheap (not so much that it falls apart of course lol) and portable device.


2017-02-18 05:04:21

My own work has mainly been with Peter Meijer's image-to-sound rendering, though mostly with image mapping lately in regards to paint software. My Audiocraft prototype utilizes a simulated depthmap of the 3D environment for taking snapshots; I had considered adding solid colored textures as well, so users could adjust the filtering to identify key objects like switches, doors, hazards, enemies, items, etc. I had also applied a compass-style tone system for tracking the player's viewpoint and the position of the reticule when moving around and looking up/down, and was going to add proximity audio cues to objects near the reticule, along with footstep sounds based on the surface material.

Combining environmental audio cues and beacons with the capability to take snapshots of the immediate area could potentially help provide some overlap in sensory input and better assist with navigating environments, well 3D ones at least.

-BrushTone v1.3.3: Accessible Paint Tool
-AudiMesh3D v1.0.0: Accessible 3D Model Viewer

2017-02-18 05:46:19

Well, I can't help with getting you an independent job; however, you could email the contact address on AudioGameHub and ask them for a position.
It's not paid exactly, though yours may be. In any case, I have seen your work; I haven't managed to beat any of your games, but your work has merit. I haven't grasped it all exactly with the 3D environment.
But still, if you want a job and you think you can wait a little, they don't tell me not to look for new talent, so put in your proposal and see what they say. I do think your work is good enough.
I am not the boss, just the PR guy and tester, but I have no issue with you doing stuff for us.
It's a shameless plug, I know, but better to look than not.
At least consider it.

2017-02-18 06:21:09

I don't really want to send this thread off on the braille display topic at the expense of the main one (considering that innovation in braille is apparently stubbornly resistant to assaying), but... yeah, I'm all for more information displayed at one time.
As regards portability, I currently have a 20cell PACMate which is an unwieldy monolith. About 3 weeks ago I saw a 32cell Braille Edge which is smaller in every dimension, possibly including width since the PACMate displays are designed to be interchangeable so a 20 is only marginally less bulky than a 40. ... I want that Braille Edge. Oh, I am glad I have _something_ to use as a notebook that's more portable than a Laptop and more longform-friendly than a phone/tablet/touch screen in general. But something less likely to break under its own weight would be great, and I'd be gaining 12 cells!
The problem seems to be an engineering one. How do we cram more moving parts into a tiny space for the smallest price and best quality possible? ... And then get it manufactured and developed and distributed for a reasonable price while still paying the talents involved who could be making much more at Google? I mean... I'm sure two-line displays exist. I've never seen one, but I think I've heard of one...
To make a visual comparison, I'd much rather read a novel on one of those monochrome amber/green monitors from the 1980s than on a calculator, but a Nintendo DS would still be an improvement, and a Kindle would beat all of them. What we currently have is the calculator. There are some full-screen devices out there which a whole 3 people seem to have access to, off somewhere in Japan and Korea and Bristol, and while I'm not sure about the DV2, comparing the other two to monochrome monitors is being a little generous, from the sounds of it. The braille tablet is the holy grail that everyone is questing for. Occasionally someone will write an article about something more in the DS category, but those are less substantive than the seemingly mythical fullscreen devices.
So, if you want to aim small, and can come up with a way to make a nice not-too-thick rectangle-thing for roughly the price of an iPhone±$250, that's mostly unexplored territory.

Ah, wait! You know? I have been thinking that smaller braille devices would be good. Not for reading sentences or code, but... you know those things they call braille watches that are really just watches with the hour marks embossed and the hands exposed (does that seem like an iffy design to anyone else?)? While I'm sure 5 cells wouldn't fit very comfortably on the wrist, and people are mostly dropping watches in favor of phones (or Apple Watches) these days, it seems to me that little things like watches, digital thermometers, etc are being completely ignored in favor of the much more exciting book-reading devices. And I wonder... is that a gaping hole in the market that might actually go somewhere (especially if this was 1993 and these could be made relatively non-bulky)? Or is it being ignored for reasons other than the attention to bringing down the cost of line displays and trying to come up with viable page displays?
I mean, if it's $1/dot, a 20cell display costs $160 to build. I think it's more than that--let's go up to $5/dot. So $800. A watch would really only need 32 dots (you'd want a colon and maybe AM/PM, but those don't need whole cells dedicated to them). We can probably cut that in half, since the numbers only take up a maximum of 4 dots. At $5/dot, your braille watch costs $80 to build, and is no more complicated firmware-wise than an LCD watch. You could probably even come up with some tricks to take advantage of the fact that there are 6 unused dot combinations to reduce the cost, and also it'd only need to refresh once every 60 seconds, and 5/6 of those refreshes will only update the last cell, and 5/6 of that remaining 1/6 only update the last 2 cells.
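For the curious, the cost arithmetic above works out as follows (the dollar-per-dot figures are this post's own guesses, not real manufacturing data):

```python
# Standard braille: 8 dots per cell (6-dot braille plus two extra rows
# on most refreshable displays).
DOTS_PER_CELL = 8

def display_cost(cells, cost_per_dot):
    """Build cost of a display, at an assumed flat cost per dot."""
    return cells * DOTS_PER_CELL * cost_per_dot

print(display_cost(20, 1))  # 160  -- 20-cell display at $1/dot
print(display_cost(20, 5))  # 800  -- same display at $5/dot

# A 4-digit watch: digits need at most 4 of the 8 dots per cell,
# halving the dot count.
watch_dots = 4 * 4
print(watch_dots * 5)       # 80   -- the $80 watch estimate
```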
So, yeah, I like the idea of tiny braille devices (Microbraille?), just not so much for text or illustrations, and more for unadorned numbers and such.

看過來!
"If you want utopia but reality gives you Lovecraft, you don't give up, you carve your utopia out of the corpses of dead gods."
MaxAngor wrote:
    George... Don't do that.

2017-02-18 07:28:17 (edited by daigonite 2017-02-18 07:38:35)

No worries, finding a job right now isn't exactly my biggest concern; I'm already content with my current employment. If you're interested in having me work with you, contact me privately via email. This was more about assembling a team, or maybe a GitHub project; although it doesn't help that I'm talking about two simultaneous potential ideas.

I'd never heard of Peter Meijer; looking into it. So let me get this straight: what it does is render essentially a sound map (similar to a bitmap) and then play a sound that's essentially a compilation of that? I assume this has to be triggered by the player, similar to a sighted player flashing the map so they can see their relative position. How is the sound generated? I'm interested in learning more.

Looking into him, I should really get in contact with him at some point lol!

In regards to microbraille, I mean, it's possible. The thing is, though, I don't think manufacturing the dot mechanism should cost that much tbh. They're just rods with rounded ends, cut to a certain height, that would fit in a shaft. You could have a mechanical component moving them, or even other possibilities. I don't think it would cost more than $1 a dot, at least at manufacturing scale. Consider that the dots are technically part of a larger mechanism, which likely trims some of the costs as it gets larger; but I don't think designing my own pins would be particularly difficult or overly expensive.

I'm not sure why you're claiming the firmware would be comparable to an LCD watch's. I was thinking more of a phone-sized device, but potentially thicker to accommodate the braille mechanism. Input would be handled through buttons that enter braille one cell at a time, plus control buttons that move the cursor. The output is either TTS via speaker/headphones or braille output, and basically the UI would be a series of menus. Possibly you could even build games on it, maybe? LOL. But it could take notes, do things like act as a calculator, or store notes or something. The idea is that this isn't a tablet so much as a strip with some buttons on the bottom to control input. I think something this simple could reasonably be built for under $500, possibly $300, but I don't know what the cost of building the software would be. I haven't built something like this before, so my estimates could be totally wrong, but I don't really understand the problem.

Question: is Discord accessible? I might create a blind accessibility development channel; that might be a good idea. https://discordapp.com/

No drama though, this would be 100% development based


2017-02-18 08:38:50 (edited by magurp244 2017-02-18 08:52:05)

Oops, guess I should have included a link, heh. You can find the relevant details [here]; he has some examples written in C, and it's under a permissive LGPL attribution license. While you're there, you could also grab a free copy of The vOICe, which is a mature implementation of the concept.

I've also translated some of those examples into Python, which you can find [here] and [here], and my [Audiocraft] prototype has a working implementation with included source code.

What it basically does is take a 2D image's RGBA data, multiply it with a waveform, and sum the array along the Y axis into a 1D array for playback in an audio buffer. The result is that a pixel's position is plotted via time from left to right, and via pitch from top to bottom, mapping the relative topography of the image.
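A toy Python version of that process (simplified and unoptimized; the sample rate, frequency range, and column duration are arbitrary assumptions, not The vOICe's actual parameters): one sine per row, brightness as amplitude, columns played left to right.

```python
import math

def image_to_sound(image, sample_rate=8000, col_dur=0.02,
                   f_lo=200.0, f_hi=2000.0):
    """image: 2D list of brightness values in [0, 1], row 0 = top.

    Returns a list of audio samples: each column becomes col_dur
    seconds of sound, a sum of sines (one per row, top = highest
    pitch) weighted by that column's pixel brightness.
    """
    rows = len(image)
    # One frequency per row, decreasing with row index (top = high).
    freqs = [f_hi - (f_hi - f_lo) * r / max(1, rows - 1)
             for r in range(rows)]
    n = int(sample_rate * col_dur)  # samples per column
    samples = []
    for col in range(len(image[0])):  # time axis: left to right
        for i in range(n):
            t = i / sample_rate
            s = sum(image[r][col] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(rows))
            samples.append(s / rows)  # normalize to [-1, 1]
        # (a real implementation would keep phase continuous per row)
    return samples

# A 2x2 "image": one bright pixel in the top-left, dark elsewhere.
audio = image_to_sound([[1.0, 0.0], [0.0, 0.0]])
print(len(audio))  # 320 samples: 2 columns x 0.02 s x 8000 Hz
```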


2017-02-18 09:06:40 (edited by daigonite 2017-02-18 09:23:53)

So if I have this right, for each snapshot he's converting each pixel into essentially an amplitude value, with x denoting timing and y denoting pitch. I also imagine these snapshots make a lot of audio noise, so they have to be toggled on and off? Either way, it sounds like an interesting navigation tool.

I actually could probably translate that into Game Maker, overhead permitting lol. Well, probably not the panning, unfortunately, since GM's sound engine sucks. However, I think that for commercial games to implement this snapshot system, a separate mask would need to be implemented alongside whatever collision masks are there.

It would be really intriguing to have a first-person game use this sort of navigation, but I don't know how efficient it would be for something fast paced. I don't think emulating how a person looks around will be helpful, though, but rather emulating an image that represents hazards, safe areas, enemies, etc. But that's just me; maybe you know better.

Going back to relational maps, I think the most effective use of this snapshot system would be to use it as a sort of map that can be accessed at any point; this removes the noise level problem as well as allowing a player to essentially "glance" at their immediate vicinity. This solves the distance problem, but idk if it helps in fast-paced scenarios. I mean, fill me in here, I'm learning a lot lol.


2017-02-18 09:29:32 (edited by magurp244 2017-02-18 10:17:10)

I guess you could consider it a kind of reverse spectrogram. It usually processes images in pulses, which can be processed very quickly or slowly depending on the user's preference. In this particular use case, though, I expect a toggle for a single sweep or continuous play would be preferable, because of the interference it would cause with the ambient audio cues and beacons. Overhead can be a bit of an issue without some optimization; the process can usually only handle low-resolution images in a timely fashion and typically has to downscale. The vOICe, for example, can handle a max resolution of 176x144; my own usually hovers around 64x64 or 128x128.

As I mentioned, my Audiocraft prototype has a working binary demonstrating the concept; I also have a minimap implementation in my [AudioRTS] demo. I'm not sure if it's suited to a fast-paced game, though; I haven't really heard of any previous implementations or attempts other than my prototype to use this in a game setting. I think something like navigating spaces in games like Gone Home, SOMA, or other slower-paced games might be more suitable. Also keep in mind that Peter's planning to deploy this for people to navigate the real world as well, so there may be some practicality to the idea.


2017-02-18 15:05:58 (edited by daigonite 2017-02-18 19:34:16)

In most games I think this engine could be used as a minimap. IRL I'm not sure; it seems too early to be useful yet, due to changing light conditions and other factors that could affect the original image. That's not to say this doesn't have huge potential.

I might make a demo in GML with panning; I came up with a way to work around the game engine lol. My idea is to draw the collision map in a 256x256 area around the player onto a new surface, reduce that 8x into a square, and then apply the algorithm for a one-second sound. I think going any higher resolution might cause issues for the game's crap sound engine.

I could alternatively make a DLL extension, but what's the fun in that?

also boooo discord is not accessible


2017-02-18 23:30:42

Peter Meijer dropped by here a few years ago to mention that the vOICe is not ideal for high-speed games.
I did put a vOICe-like feature in Redsword. Come to think of it, something like that might be better to replace the buggy confusing radar in my current project.
The vOICe is pretty noisy, in the information sense. It's great for navigation, assuming uniform coloration, but I imagine that even with enough computing power to increase the resolution, fine details would remain elusive.
I like the idea of a cross between the vOICe and Swamp-style radar. The idea is to cut it down to the necessary information, then, if practical, add the aesthetic touches. This should be trivial for any physics engine... well, period, but raycasting would function as something of a virtual cane. The basic implementation would be the slimmest and would just distinguish solid objects from open space (although, given that we should expect some developers to just slap it on with the default settings, said defaults probably should look for things that really stick out, color-wise, since those are usually important). Make it configurable for the level of detail, etc., the game requires. E.g., in Minecraft you need to distinguish block types, but in Harry Potter you probably don't care about the exact patterns in the tiles or carpet, so long as it's not a trick step.
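A bare-bones sketch of the raycast-as-virtual-cane idea (the grid layout, step size, and return convention are all invented for illustration): march a ray across a tile map until it hits a solid cell and report the distance.

```python
def cast_ray(grid, origin, direction, max_dist=10.0, step=0.1):
    """grid[y][x]: 1 = solid, 0 = open. origin/direction in grid units.

    Returns the distance at which the ray first touches a solid cell
    (where the "cane" taps), or None if it leaves the map or exceeds
    max_dist. Fixed-step marching is crude but easy to follow.
    """
    ox, oy = origin
    dx, dy = direction
    for i in range(int(max_dist / step)):
        d = i * step
        x, y = ox + dx * d, oy + dy * d
        cx, cy = int(x), int(y)
        if not (0 <= cy < len(grid) and 0 <= cx < len(grid[0])):
            return None  # ray left the map
        if grid[cy][cx] == 1:
            return round(d, 1)
    return None

grid = [
    [0, 0, 0, 1],
    [0, 0, 0, 1],
]
# Facing right from (0.5, 0.5): the wall column starts at x = 3.
print(cast_ray(grid, (0.5, 0.5), (1.0, 0.0)))  # 2.5
```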

That's basic environmental scanning. It'd probably be good to let time-sensitive or high priority objects make sound independent of the scan cycle. I don't see this being something easy to make the responsibility of an accessibility API, but I'd recommend trying to incorporate it anyway, as otherwise I expect the idea of togglable audio beacons either wouldn't occur to most devs, or would seem like undue labor to do it from scratch. Incorporating it in the API would make it easier to get more player customization included in complex games, I suppose--i.e., in Swamp there are map beacons and player beacons and beacons for tracking specific players and each of these is separately togglable. The pesky part is that PC games have plenty of extra keys to make available for these features, but if this goes to consoles, would the game that uses this much stuff have buttons left over for quickly toggling accessibility features?


(Regarding the cost-per-dot thing, IIUC the cost is for the mechanism that moves them, since each dot must be moved individually. I remember coming across an article in which someone used microfluidics to effectively take that part out of the equation, but I've not heard anything of it since, which is how most of the exciting ones go.)


2017-02-19 00:08:55

Determining multiple different areas shouldn't be too hard; a multi-channel approach is the best route. Having the ability to read each channel individually or all at once would be beneficial too.

Meijer's approach would work best as a sort of on-the-fly immediate mapping of the surroundings, I think. Raycasting would be applicable to more games, so I think it would be more of a fundamental component.

The button issue on consoles is challenging. Most options don't need to be changed dynamically and could be mapped to an accessibility options menu. On-the-fly combinations would have to be reserved for the most vital functions and could probably be done by pressing an initial button combination as an "accessibility input toggle", then pressing the button for the function. Depending on the game's focus, it can implement that function, whether it's an option or an information menu.
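The toggle-then-function chord could look something like this sketch (the button names and actions are made up; a real pad would route through the game's input layer):

```python
# Hypothetical chord handler: one press arms accessibility input,
# the next press fires the mapped function, then it disarms.

class ChordInput:
    TOGGLE = "select"  # assumed "accessibility input toggle" button
    ACTIONS = {"a": "read coordinates", "b": "read terrain"}

    def __init__(self):
        self.armed = False

    def press(self, button):
        if button == self.TOGGLE:
            self.armed = True   # next press is an accessibility input
            return None
        if self.armed:
            self.armed = False  # chord consumed, back to normal input
            return self.ACTIONS.get(button)
        return None  # normal gameplay input, passed through untouched

pad = ChordInput()
pad.press("select")
print(pad.press("a"))  # read coordinates
```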

I think in such an API we should have a set of options, all of which provide some sense of accessibility. The first two that would be easiest to implement in a 3D title would be accessible environment sounds and menu accessibility. Just these two alone would add a lot more accessibility to current titles and would most likely be a cinch to implement.

More advanced options such as raycasting, collision proximity noises, Meijer's algorithm, and other tricks would require more work from the developer's point of view, due to the lack of an "on rails" implementation of these things. It would probably be best to build some sort of tool that allows easy testing of these components by the developer. For example, with raycasting, it might be useful to have a tool with a custom collision map for the raycasting, which may compensate for clipping mistranslations. Meijer's could have a configuration tester, etc. I'm just thinking of ideas off the top of my head.

Regarding braille cells - I think it might be possible to use an electromagnetic component to compensate for most of the motor elements. There would need to be some sort of locking mechanism so it would require some directly mechanical components, but that might be a cheaper alternative.


2017-02-19 01:12:21

In a real-world setting I may be inclined to agree about practicality, but it also depends on what kind of image data you're processing. If you'll recall, I mentioned depthmap images as being ideal for navigation, because in a depthmap how close or far away an object is determines how bright or dark it appears, presenting a clear sense of depth. For example, if you fed [this] or [this] image through the process, it would convey a fairly clear sense of where objects are and their depth in the environment, although I would probably invert the images so that the closer an object is, the louder it sounds, and the further away, the quieter.

In a real-world setting it's a bit tricky to build a depthmap of a scene without stereo imaging, something I think Peter's been working on with newer camera features. In a game setting, though, it's much easier to build a depthmap from existing data and remove ambient lighting as a factor altogether, although having the option to view lightmaps could be a feature. I haven't quite figured out how to render depthmaps yet myself; my video card being unable to handle shaders certainly doesn't help. Anyway, the idea was that I could use solid color textures to add or remove objects from the depthmap at render time, so users could determine position based on volume for distance, pitch for height, and time for width to get a general sense of where things are in front of them. Finer detail is definitely one of the shortcomings of the approach, though having a zoom capability could also help.
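The inversion mentioned above can be sketched in one line (assuming, purely as an example, an 8-bit depth encoding where near objects have low values): nearer samples map to louder output.

```python
def depth_to_volume(depth, max_depth=255):
    """Map a depth sample to volume in [0, 1]; nearer = louder.

    Assumes an 8-bit depthmap where 0 is right at the camera and
    max_depth is the far plane - an illustrative convention, not
    any particular engine's.
    """
    depth = max(0, min(max_depth, depth))  # clamp out-of-range samples
    return 1.0 - depth / max_depth

print(depth_to_volume(0))    # 1.0  -- right in front of the player
print(depth_to_volume(255))  # 0.0  -- at the far plane
```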


2017-02-19 03:00:35 (edited by daigonite 2017-02-19 03:03:33)

I emailed Meijer today and he responded, pointing out that IRL applications are quite different from video games. I would think that having a 2D map would also be helpful in devices, but hey, he's the experienced one, not me lol. Honestly, from the way he put it, it sounds like there are a lot of potentially interesting developments; I'm going to follow his work now. Thanks for the suggestion.

What I plan on doing for Colors is using the objects that handle collision (there are two, one for angles and one for squares) and drawing them as shapes on a separate 512x512 surface, positioned as if the player were at the center of the surface; then using a radial blur shader to blur the lines a bit. Then I want to shrink it to a 32x32 square, which can be used with the algorithm. For each pixel I get the associated luminosity (also called value) and put those values into an array representing the square. Then, as I parse through the array, the row counter will be the horizontal, the column counter will be the vertical, and the value in the array is the luminosity/amplitude. The drawing function is called one step, and the processing functions are done the next; this is due to how GML handles draw cycles. Then I can trigger the sounds for each of the 32 channels, with their pitches assigned accordingly, and have the volume of each column adjusted for the values in the rows as the sounds are playing. Once the sound is complete, dispose of the object.
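The evenly spaced pitch assignment for the 32 channels could be sketched like this (the 200-2000 Hz range is an arbitrary assumption, not the game's actual values):

```python
def channel_pitches(n=32, f_min=200.0, f_max=2000.0):
    """Evenly spaced pitches, one per channel of the 32x32 map."""
    return [f_min + (f_max - f_min) * i / (n - 1) for i in range(n)]

pitches = channel_pitches()
print(len(pitches), pitches[0], pitches[-1])  # 32 200.0 2000.0
```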

Is this helpful at all? I guess if you don't know how to draw the collision map that might be an issue.

weird question but are you blind/vi or sighted? Might be helpful to discuss this with pictures since that's half of the problem, if you can see.


2017-02-19 07:15:19 (edited by magurp244 2017-02-19 07:22:19)

So if I understand correctly, you're taking a top-down topographical view centered on the user's position and drawing solid collision shapes to it, or is it just outlines? Then scaling it to 32x32 and playing 32 sounds for each element along the vertical axis of the array? Hmm, I think Sik tried something similar when he was working on his side scroller here a while back. It might be enough to work. You're using GameMaker's built-in sound engine, yes?

To whom is your vision question directed?


2017-02-19 15:01:58

IIRC, Magurp is sighted. I'm not.

Re: electromagnets, the trouble with this solution seems to be getting a narrow enough focus for the field to affect individual pins without interfering with the others. Maybe if it's set up in, say, two layers, in sort of a checkerboard pattern, to optimize the positioning of the magnets relative to their associated dots? That'd make it thicker and somewhat more costly to produce, but if it works for cheaper than the intricate maze of Piezoelectric levers that seemed to be the mainstream solution since last I checked, I guess I'd be OK with it being a little thicker than the slimmer models on the market.
Another way to reduce the leaky field problem would be to have the pins themselves come in two or more types, but that ups the complexity and probably the cost.


2017-02-19 20:04:40

So is Game Maker accessible now, or is it its usual inaccessible product? I'm trying to follow this thread and saw GML mentioned more than once, so that's why I'm asking.

"On two occasions I have been asked [by members of Parliament!]: 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out ?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."    — Charles Babbage.
My Github

2017-02-20 04:13:41 (edited by daigonite 2017-02-20 04:17:38)

I was asking you Magurp, so I could show you some diagrams in my post to explain what I was doing.

I'm still building it out; I've been traveling all weekend for work, so pardon me. Here's a screenshot of a map in my game from the Game Maker Studio interface. I haven't built out the drawing routine for the collisions yet, so just bear with me:

http://i.imgur.com/WES16mL.pngp

All the red areas are collisions - they come in two types, an angle and a square collision; this just makes it really easy to implement collision maps in the game. Anyway, what I plan to do is draw all the collision objects on a separate surface, with your character in the center, that's something like 256 or 512 pixels square, resize the surface to be 32x32 pixels, then convert it into luminosity values through this function:

var surf_id = argument0;
var hei = surface_get_height(surf_id);
var wid = surface_get_width(surf_id);

for (var i = 0; i < hei; i++) {
    for (var j = 0; j < wid; j++) {
        // surface_getpixel takes (surface, x, y), so the column index j
        // is the x coordinate and the row index i is the y coordinate.
        // 0.00392 is approximately 1/255, normalizing the value to [0, 1].
        pixel_array[i, j] = color_get_value(surface_getpixel(surf_id, j, i)) * 0.00392;
    }
}

This converts each pixel into a 2D array of luminosity values ranging from 0 to 1, which can later be used by the Meijer algorithm to produce the sound. In Game Maker I plan to essentially pause all active sounds, then make a 32-channel sound with pitches spaced evenly between a min and a max value, play the sound for a second, and offer a map with 32-pixel resolution. For my application that should be enough, and it doesn't require any external extensions. I'm still building it out, so I'll keep you updated. This essentially provides a top-down minimap that's accessible to blind players.
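For reference, GML's color_get_value returns the HSV "value" channel, i.e. the maximum of R, G, and B on a 0-255 scale, and the 0.00392 constant is roughly 1/255. The same conversion can be sketched in Python; here get_pixel is a hypothetical stand-in for surface_getpixel, not a real API:

```python
def luminosity(rgb):
    # HSV "value": the max of R, G, B, normalized to [0, 1]
    # (what GML's color_get_value computes, scaled by ~1/255).
    return max(rgb) / 255.0

def surface_to_array(get_pixel, wid, hei):
    # get_pixel(x, y) -> (r, g, b) tuple.
    # Returns a hei-by-wid grid of values in [0, 1], row by row.
    return [[luminosity(get_pixel(x, y)) for x in range(wid)]
            for y in range(hei)]
```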

CAE - Yeah, I've considered this problem; I really want to discuss it with someone with more of a technical robotics background, since they might have a better solution than I do. I think with proper insulation it should work, but the locking mechanism might be a problem: if it's just magnets, attempting to read the display might push the pin back into the slot if the magnetic repulsion isn't powerful enough. But this is why I need hardware talent lol.

Ethin - GML as a coding language is accessible, but Game Maker Studio isn't. Game Maker is coming out with a new edition, though, called Game Maker Studio 2, that might address these issues; when I pestered the developers about it a few years ago, a lot of the problem came down to the application using legacy code for the UI. If you were cool enough you could probably do it all by creating the XML files yourself and then compiling them, but that is so much damn work, and a lot of the things are represented as integers, not understandable names.

I just tested Parakeet this weekend and it's buggy as all hell, so I retract my recommendation. I like the code-oriented approach for objects, which makes it much easier to implement certain things, but it straight up overheated my computer and shut it off through an infinite recursion bug, no lie. Avoid it.


2017-02-21 00:44:13

Ah well, alright.

Hmm, are you handling your areas as static maps/screens that characters move between? Not unlike in Braillemon/Zelda/etc.? The image seems to suggest something like that.


2017-02-21 01:18:54

The maps are static areas, but the visibility is limited to an area of 300x400 pixels. So as you travel through, you can only see a part of it; the mini-map is meant to emulate that but on a slightly larger scale.


2017-02-21 19:06:45

So I've added Meijer's algorithm to Colors as described above. It converts a 512x512 area into a 32x32 area, blurs it a bit to smooth out the sound, and converts it to a sound. A few notes:

- The 32x32 resolution causes a short delay while initiating the sounds, but once they are playing there is little to no delay.
- The sound is kind of "jerky".

I'm thinking about reducing the initial capture resolution to 256x256, since I think 512x512, being wider than the actual view area, is too large for an audio-based player, considering the screen resolution is capped at 400x300. I'll post a demo when I'm more content with the quality of the algorithm.
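The capture-then-shrink step can be sketched as a plain block average, which doubles as a cheap blur. This is only an illustration of the idea, assuming a square input whose side divides evenly by the output size; Colors itself uses a radial blur shader plus a surface resize rather than this exact code:

```python
def downsample(img, out_size=32):
    """Average-pool a square luminosity grid down to out_size x out_size.

    img: list of rows of floats; len(img) must be a multiple of out_size.
    """
    n = len(img)
    block = n // out_size  # side length of each averaged block
    out = []
    for by in range(out_size):
        row = []
        for bx in range(out_size):
            total = 0.0
            for y in range(by * block, (by + 1) * block):
                for x in range(bx * block, (bx + 1) * block):
                    total += img[y][x]
            row.append(total / (block * block))
        out.append(row)
    return out
```

With a 256x256 capture instead of 512x512, each output pixel would average an 8x8 block instead of 16x16, so the same 32x32 map covers a tighter area around the player.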
