@8
Wasm...maybe. It depends on whether the Clang vector extensions that I intend to use support it. But for sighted people who want this, there's Resonance, which is...well, let's call it good enough. Suffice it to say that by the time you're not doing proper HRTF but are instead just simulating a surround sound system without even proper ITD, you can really hear the difference. But the thing is, I think sighted people may actually not be able to hear that difference.
I actually like WebAudio, but I hit limitations with the buffer node because I wanted to insert silence at the end of the loop (which allows for non-jittering footsteps under all circumstances). I found out that Resonance is meh, started looking at writing my own buffer node, and discovered that the support for doing that is woefully immature in ways I could go on about at length. And then, as happens with me, it snowballed. But to be honest, I like Electron for the ease of writing reasonable UIs for things like level editing, modern JS is great, and Electron's packaging/updating story is also great, so one of the first bindings for this thing would probably be Node.
Libaudioverse might still be the top Google search result for Python 3D audio. I haven't looked in a while.
@9
It's a shame I don't currently have the old camlorn_audio demo up, and a shame I never did one for Libaudioverse (then again, Libaudioverse's HRTF was never very good anyway). But unless Bass got HRTF, I give you the old Aureal 3D demo (listen with headphones): https://www.youtube.com/watch?v=zJlYL6I6u-0
There's a better version of this against OpenAL Soft, but this one is particularly interesting because, once upon a time in the days of Windows XP and earlier, if you had a Creative or Aureal sound card you'd get this with Shades of Doom or anything else using DirectSound appropriately. Put another way, if we could time-travel Swamp back 10 years, Swamp would sound like this with little to no code changes. There's a lot of interesting (and stupid) history there, but patent wars and then Microsoft happened, and then 3D audio technology died even for the sighted for the next 20 years.
Also, last I checked, Bass requires purchasing a commercial license. Points to him for staying popular in spite of that, though. I've never used the library in depth, but it's a good library and puts a lot of things in one place that would otherwise be very hard to get working together.
There's more I want to do with this, like consuming your tilemap and making hallways sound like hallways without you having to do anything, but it's speculative: there needs to be both a library to build that in and a game or engine willing to work with me to make it happen, plus the math is complicated and my time is short. So no promises.
@10
As far as I'm aware, you can consume an Apache-licensed product without having to provide attribution, but if you modify the library you get into fun, fiddly requirements about notating which files you modified, etc. I really want to just use the Unlicense, and I've even found a public-domain audio output piece that seems reasonable and appears to have users. But we'll see, because Resonance has a lot of juicy, juicy code in it and is Apache. They failed at their HRTF because reasons, but they do still have a lot of good pre-tuned things and a very interesting reverb design; it's all commented and cites papers, etc.
What you want to do for HRTF is use the Hilbert transform to get a minimum-phase filter, window and truncate it to 32 points (though I think 16 is good enough), convolve, then reintroduce the time delay at runtime. You can find out how to do the Hilbert transform part here. I implemented that in Python with NumPy and verified that his algorithm is correct, and eventually I'll probably do a blog post on it (or at least get the code into one file and publish a Gist). You also need an interpolating delay line with subsample accuracy that doesn't introduce frequency artifacts, which you can get by oversampling, delaying in the oversampled representation, then downsampling at the end. If you don't do it this way, you end up with phase artifacts you can't get rid of: the group delays of the impulse responses vary, so fading between them produces multiple "copies" at different delays, which is the primary reason the Libaudioverse HRTF doesn't work.
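To make the minimum-phase step concrete, here's a sketch in NumPy using the real-cepstrum (homomorphic) method, which is the standard way of applying the Hilbert-transform relation between log magnitude and phase; this is my illustration, not the code from the linked article, and the FFT size and tap count are arbitrary choices:

```python
import numpy as np

def minimum_phase(h, n_out=32, n_fft=1024):
    """Convert filter h to a minimum-phase filter with the same
    magnitude response, truncated to n_out taps."""
    H = np.fft.fft(h, n_fft)
    # log magnitude, floored to avoid log(0)
    log_mag = np.log(np.maximum(np.abs(H), 1e-9))
    # real cepstrum of the log-magnitude spectrum
    cep = np.fft.ifft(log_mag).real
    # fold the cepstrum: this is where the Hilbert-transform
    # relation turns magnitude-only data into minimum phase
    w = np.zeros(n_fft)
    w[0] = 1.0
    w[1:n_fft // 2] = 2.0
    w[n_fft // 2] = 1.0
    h_min = np.fft.ifft(np.exp(np.fft.fft(cep * w))).real
    # window/truncate; a raised-cosine taper here would reduce
    # truncation ripple, omitted for brevity
    return h_min[:n_out]
```

A quick sanity check: a pure delay has a flat magnitude response, so its minimum-phase version should collapse back to an impulse at time zero, with all the energy pulled to the front.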
There's no real benefit to a convolution framework for small block sizes, because the FFT won't help you. For the HRTF case, where you have many sources in parallel, you can batch them in groups of 4 and use SSE intrinsics or Clang's vector extensions (I favor the latter because they're cross-platform) to do the convolution loops 4 for the price of 1, then share the output buffer to spread the cost of downsampling across all sources, bringing that from O(n) to O(1) as well. The convolution loop minus a framework is about 10 or 15 lines, even for "wide" SIMD stuff.
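To illustrate the shape of that batching (plain NumPy standing in for the SIMD lanes; the array layout and names are mine, not from any real codebase), each inner step below is one length-4 multiply-add, i.e. what would be a single SSE or vector-extension operation per tap:

```python
import numpy as np

def batched_fir(blocks, filters):
    """Convolve 4 sources against 4 FIR filters in lockstep.

    blocks:  (4, taps - 1 + block) -- per-source history followed
             by the current block of input samples.
    filters: (4, taps) -- per-source impulse responses.
    """
    n_src, taps = filters.shape
    block = blocks.shape[1] - taps + 1
    out = np.zeros((n_src, block))
    for i in range(block):
        acc = np.zeros(n_src)  # one 4-lane "register" of accumulators
        for t in range(taps):
            # one vertical multiply-add across all 4 sources
            acc += blocks[:, i + t] * filters[:, taps - 1 - t]
        out[:, i] = acc
    return out
```

In C with Clang's vector extensions, `acc` would be a `float __attribute__((vector_size(16)))` and the inner statement compiles to a single fused multiply-add per tap, which is where the "4 for the price of 1" comes from.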
I don't favor convolution reverbs either. You don't get nearly as many interesting parameters with those.
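For contrast, here's the kind of algorithmic reverb I mean: a toy 4-line feedback delay network where the delay lengths, feedback gain, and mixing matrix are all directly exposed as tweakable parameters, unlike a fixed impulse response. The numbers here are made up for illustration, not tuned values from Resonance or anywhere else:

```python
import numpy as np

def fdn_reverb(x, delays=(1031, 1327, 1523, 1871), feedback=0.7):
    """Toy 4-line feedback delay network (FDN)."""
    assert len(delays) == 4
    # Orthogonal Hadamard mixing matrix: with feedback < 1 this
    # keeps the recirculating loop stable.
    H = np.array([[1,  1,  1,  1],
                  [1, -1,  1, -1],
                  [1,  1, -1, -1],
                  [1, -1, -1,  1]]) / 2.0
    lines = [np.zeros(d) for d in delays]
    idx = [0, 0, 0, 0]
    out = np.zeros(len(x))
    for t in range(len(x)):
        taps = np.array([lines[k][idx[k]] for k in range(4)])
        out[t] = x[t] + taps.sum() / 4.0  # dry + wet mix
        mixed = feedback * (H @ taps)
        for k in range(4):
            lines[k][idx[k]] = x[t] + mixed[k]
            idx[k] = (idx[k] + 1) % delays[k]
    return out
```

Every knob here (echo density via the delay lengths, decay time via the feedback gain, diffusion via the matrix) is something a convolution reverb simply doesn't give you at runtime.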
Mind you, big "in theory" neon sign here. I implemented a POC in Python on top of Pyo (warning: Pyo is slow and weirdly unstable) but haven't taken it further yet.
Perhaps third time's the charm. Also, one of these days I ought to do a Libaudioverse postmortem. Mind you, that kind of just turns into "this was a hobby project until I realized I was out of time and needed to finish it, at which point I was out of time," but you also can't optimize the hell out of your synthesis when you have a general-purpose node graph. I do still wish I'd managed to fund it enough to justify holding off on entering the job market, though, because it'd have been cool if completed.
@11
Libaudioverse is still around. You don't want to use it unless you know enough to finish it. It's kind of a dead end because it tried to be for everyone, and to be honest, me-now cringes at some of the choices me-then made. It was essentially WebAudio for Python, and it also tried to reach sighted markets that no longer exist; in being that, it became kind of a monster and would need a month or two of full-time work. Suffice it to say that it can break if you unplug your headphones.
I'm talking about doing something better, which ironically takes less time and effort than Libaudioverse needs while being both faster and easier to use for the case of games. So, good to know you're interested.
My Blog | Twitter: @ajhicks1992