2020-02-21 02:33:21 (edited by camlorn 2020-02-21 04:02:47)

So, for those who don't know, I once upon a time did Libaudioverse, which was going to be a general purpose library for audio synthesis stuff, but then I failed to fund it and got a job, followed by the second punch of WebAudio happening and making audio libraries that do what it did irrelevant to anyone except blind people.  And then it died.

Well, it turns out that yes, I do still have a job, and that job does take a lot of my time.  But I also know a lot more math than I used to, because I never stopped thinking about this stuff; I got bored and prototyped an HRTF in Python that does the things Libaudioverse was supposed to do but never finished.

And then I also found out a lot of good stuff has actually happened.  Google put out a bunch of VR audio related stuff with things like reverb as open source.  The Libaudioverse code is still around and most of it is something I can relicense/reuse how I please.  The audio output issues that libaudioverse had got fixed by Microsoft adding things to Windows, and people wrote libraries that don't seem buggy anymore, so low latency audio output is probably no longer weeks of work.  And, for those who are old enough to remember Shades of Doom on a Creative soundcard, all of the Creative and Aureal patents we might care about have now expired which means there's a ready-made source of info on things like environment reverb just one patent search away.

So, to get to the point.  I could probably put together something good and incredibly fast in what works out to about 2 to 3 weeks of full time effort with HRTF and environmental reverb, streaming audio support, and all the other basics, and I sort of know what that something would look like.  Unlike Libaudioverse, it would only have a few functions to bind and would be easy enough to be wired up to most languages in a few hours by anyone who wanted to do it.  It would probably be BGT compatible because the design I favor for it is game specific, which means getting rid of all the fun bits about threads and function pointers and such.  And I'd be licensing it as permissively as possible, specifically as close to putting it in the public domain as I can.

Tentatively this is called Synthizer.  At the moment it exists as a Python prototype, which proves that I now know the math I didn't know before, and a very small amount of C++ code that doesn't yet play audio.

I'm here to answer two questions.  First, what's the interest?  And second, how many people/notable games would be willing/able to integrate it?  I want to do this, but I'd be doing it around my job and I don't foresee much interest outside blindness land, and so the question of how much use it'd actually see arises early on.  I'm not making promises, just extending a feeler to find out where people stand.  I'll probably be doing some of it either way because it interests me and everyone needs a hobby, but there's a big difference between a project that no one else will ever use and, well, the opposite of that.  Especially with respect to priority, documentation, ability to get it tested by others, etc.

My Blog
Twitter: @ajhicks1992

2020-02-21 03:13:37

all of the Creative and Aureal patents we might care about have now expired which means there's a ready-made source of info on things like environment reverb just one patent search away.

big_smile big_smile :d :d :d
I mean, yes, I like the rest because I don't have a stimulant prescription and so can't handle more than a couple lines of init and a sound.play, but you had me at the expired patents.

Look over here!
"If you want utopia but reality gives you Lovecraft, you don't give up, you carve your utopia out of the corpses of dead gods."
MaxAngor wrote:
    George... Don't do that.

2020-02-21 03:22:56

People have slowly been moving away from BGT ever since the developer posted, some time ago, that it's no longer being maintained and that people should preferably stop using it. Most of those people have moved to Python, but since Python isn't specifically made for audiogames, a lot of questions regularly get asked here about how to do that, and looking for a good sound library probably makes up the largest proportion of those topics. So I'm rather confident there will be a lot of people interested in using this.
Also, not sure if you've seen the forum topic for it, but some people have made a Python module with BGT-like functions to help people used to BGT make audiogames (as far as I understand it; I haven't actually looked at it). The GitHub is here, and the forum topic is here, if you're interested. They or you could probably integrate what you want to make into that, which would make it more likely to be used.

2020-02-21 04:25:27 (edited by camlorn 2020-02-21 04:27:08)

@2
It's not every patent we might care about, but it's most of them.  There might still be some submarine patents floating around or something, but most of the interesting ones were between 1995 and 2000, so it's been 20 years and should be safe.  Besides, who's going to care?  And if someone did it would be because this got adopted by someone making a VR headset, who would presumably have money.  That's not out of the question as the VR people actually care about HRTF again, but I wouldn't count on it happening.

But the irony is that half of those patents are "this is better because one source takes 25 MIPS" and it's just like, well, even loading the software takes 25 MIPS nowadays.  But there's a lot of interesting stuff in Google patent searches that eventually became what we think of as the golden era of audiogames development, especially with respect to environmental reverb.

A lot of this made it into OpenALSoft, which actually traces its roots back to Creative in a way that sort of puts them in the clear for patents--the very earliest versions of it were released as Creative open source, I believe.  But in terms of friendly APIs, OpenAL is not one.  You have to rebuild so much on top of it, and the "this was for a specific sound card and is designed to send commands over PCI, but we made an open standard to look good" heritage is obvious everywhere.  Better remember to query for user-configured limits, and what's that about wanting to run 8 reverbs?  How about no, all you get is one really expensive reverb optimized for CPUs from 15 years ago.  I want mine to support things like atomic, database-transaction-style sound updates, so that you don't get the audio equivalent of visual tearing where half the sounds update before the audio thread ticks.  And who needs numbered enums for properties when we can just use strings?  I'm also pretty sure I can outdo OpenALSoft on performance (it's the closest completed competitor).  I've also given thought to how to do things like figuring out reverb parameters from tilemaps, but that's speculative, and the math to do it starts at "did you know that the Fourier transform actually approximates a function in polar coordinates better than anything else for audio" and rapidly becomes more esoteric, so it's not like that would happen soon.
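For what it's worth, the atomic-update idea can be sketched in a few lines.  This is a hypothetical design sketch in Python (the class and method names are mine, not an actual Synthizer API): the game thread stages property writes and commits them in one locked swap, so the audio thread never observes half of a frame's updates.

```python
import threading

class PropertyTransaction:
    """Hypothetical sketch, not an actual Synthizer API: the game thread
    stages property writes and commits them in one locked swap, so the
    audio thread never observes half of a frame's updates -- avoiding the
    audio equivalent of visual tearing."""

    def __init__(self):
        self._lock = threading.Lock()
        self._live = {}
        self._staged = {}

    def set(self, key, value):
        # Staged writes are invisible to the audio thread until commit().
        self._staged[key] = value

    def commit(self):
        # Apply every staged write in a single critical section.
        with self._lock:
            self._live.update(self._staged)
        self._staged.clear()

    def snapshot(self):
        # The audio thread calls this once per tick for a coherent view.
        with self._lock:
            return dict(self._live)
```

The point of the design is that the audio thread either sees all of a frame's updates or none of them, never a mix.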

Also yes, I still have a chip on my shoulder about OpenAL.  Camlorn_audio was a failure in part because it was one of my earliest projects, but OpenAL has so much promise for us and then completely fails at delivering any of it by either being incomplete or too complicated to be useful.

@3
I'd not go overly out of my way for BGT, but the short version of the long discussion is that, for games especially but also for other reasons, it's actually better not to throw function pointers around like candy, and for things like "this file stopped" you might as well read off an event queue in the main loop.  At the end of that reasoning chain you arrive at something that's 99% compatible with the BGT FFI, and at that point what's giving a little thought to the last 1%?  There will probably eventually be things in what I have in mind that it can't bind, just because it is unbelievably hard to avoid char** when doing C APIs of any degree of sophistication, but keeping what can be kept BGT-friendly seems worth it if it's not going to take effort.

But yes, I know about Lucia as of a few days ago.  I reappeared after years of absence to poke around the forums and see who was doing what, for the specific purpose of talking about this topic.  It was very interesting to see that most of my predictions from back when I tried to explain that BGT was a bad idea came true, though I am disappointed that we landed on the worst possible version of that universe, namely that it's abandoned *and* still closed source.

Unfortunately I probably don't have the time to both do this project and also maintain integrations to engines--there's a reason we're discussing things before there's code, this time.  I don't regret my career, but having the hobby become the job kinda ruins the hobby.

My Blog
Twitter: @ajhicks1992

2020-02-21 05:12:22

Disclaimer:  I'm not a coder, nor a sound designer, so I'm only going based off of my second hand observations.


I think this could be pretty great for our community, and I wouldn't be surprised if the Lucia project would incorporate it.
And since Lucia seems to be making the most headway right now in the transition away from BGT, that's where most of the more legit up and coming devs are focusing from what I've seen.
I'm also reasonably confident that if you were to set up a Patreon or something, then you'd get a few people donating to your efforts, and that might help you justify the time expenditure.


I will say though that it seems the majority of devs willing to utilize 3D audio correctly are already using either their own custom engines or preexisting libraries, since most of them are indie mainstream devs to begin with.
Hence I honestly don't see many devs from this community using something like this anywhere close to its actual potential, but for the small number that might, I think it could complement their projects greatly.
Comprehensive documentation and tutorials would help to alleviate this somewhat, and you may be able to get help with that, but unless something drastically changes, this will likely only be of use to the small portion of the community that consistently comes out with higher quality titles anyway.
Which might not be such a bad thing, since they already lead the audiogames market, and are also much more likely to be able to afford a moderately priced paid product, if the licensing would allow you to do that.


There is also a particular need for a better solution that is cross platform between Windows and Mac, as I've heard that the current options are either too buggy or too expensive, and that maintaining two different implementations can significantly hamper testing.


That's just my understanding of it though, and I'm sure that someone with greater experience could give you a more in depth rundown of the situation.

2020-02-21 05:54:45

Hi @camlorn
Interesting read in post 1. To give you a short answer:
Yes, I would use it, and I would look into integrating it into Lucia if the licensing is compatible (from what you've said, it should be).
PS: Welcome back smile

If you like what I do, Feel free to check me out on GitHub, or follow me on Twitter

2020-02-21 06:27:29

@5
Cross platform is kinda doable.  It's just the audio piece though.  One of the things that happened is the audio output libraries I could find stopped sucking, and that's the hardest part to make cross platform.  However I don't have access to nor the desire for a mac, so someone will have to support it and all I can do is make it relatively easy.  I'm planning to use clang anyway, so the compilers will at least be compatible.

I personally feel like there's some degree of chicken and egg going on.  Audiogames used to be better.  Then the audio part of audiogames got pulled when Microsoft killed hardware accelerated audio in Vista and never replaced it, and then it was just this slow slide into what we have today.

I mean seriously, you'd never see Audioquake nowadays.  That thing is incredibly hard to play, but the people back then did it.  I don't actually know what made the difference--I'm not blaming audio--but well, there you have it.  I'm not saying that Audioquake was even a particularly good example of the genre, just using it as some sort of barometer for the effort/skill/etc that went into stuff.

We should at least have more shades of doom clones than we do.  Maybe it was that we had a few active, prominent devs, but it didn't feel like that was the case at the time.

Synthizer came up because I am slowly plucking away at an RPG engine of my own, sort of Unity for audiogames, and ran up against one too many limitations/issues with WebAudio.  Maybe that'll turn into something one day.

@6
It's either going to be Apache or something more permissive than Apache.  As I said, I want this to be as close to public domain as possible, but there's a potential advantage from the legal perspective to Apache that I need to check on, w.r.t. being able to pull from some code that Google has released.  There shouldn't be compatibility issues with the LGPL.

You have a potentially bigger problem, though.  In Python, the LGPL means not being able to package the library in any form that's obfuscated to an effective degree, since one of the key provisions of the LGPL is that the portions of the code under it need to be swappable with compatible versions (i.e. I need to be able to update Lucia in a game you give me, despite not having the source for said game).  But in Python, most effective obfuscation/anti-reverse-engineering begins at nonstandard bytecodes that other people don't have.  It's possible the modern audiogames community doesn't care about this, but since you're raising license compatibility issues, this is kinda a big one, and it's why I don't use the LGPL for my own stuff anymore.

My Blog
Twitter: @ajhicks1992

2020-02-21 07:34:48

There is no solid cross-platform python audio library that has 3D audio functions. If you get your library posted on the Python Audio Library page, you should be able to reach people outside the blindness community.
I would also like to see nice handling for sound generation and adding together multiple synths. If it would then be possible to compile it to WASM, then that would be really cool, as you could reach users on the web as well as people using python.

2020-02-21 09:07:39

One question remains: if I'm going to use this instead of my audio library of choice, what would this have that, say, Bass doesn't already have, considering its feature set?
Don't get me wrong: if you make this thing, I'll be the first to beta test it if given the chance, but I just want to see how this would help my future games, since I wasn't around at the time of Libaudioverse.

2020-02-21 09:11:09

@1:
Welcome back!

Personally I am very interested in a liberally licensed HRTF solution. A couple of months ago I implemented HRTF as part of my slowly evolving game framework, but I am not at all sure whether my implementation will scale. Like yourself, I work full time so can only really do it as a hobby. I have been seriously considering releasing my framework, including the HRTF portion, as open source once it has matured a bit. My HRTF solution uses partitioned FFT convolution with precomputed impulse tables, and it seems to work OK. I haven't done any serious profiling yet, though, which is why I'm holding off.

I use the convolution engine from the WDL library, which I think looks quite reasonable and is under the Zlib license. Personally I stay away from any dependencies that require attribution in binary-only distributions. I believe the Apache license falls into this category, so if your work will be published under this license I would not use it myself. But needless to say not everyone shares my idiosyncrasies. smile

This became a slightly longer post than I had intended. I'm mentioning all this now because if you have a better solution in mind, I may turn my efforts in another direction so that we don't end up with two competing products in such a small community.

I'll be interested to see what you put out. Good luck!

Kind regards,

Philip Bennefall

2020-02-21 09:56:08

Hi there, I would like to make use of this Libaudioverse thing; I would love making use of it in my Lucia games.

best regards
Never give up on whatever you are doing.

2020-02-21 16:15:47

@8
Wasm...maybe.  It depends whether the clang vector extensions that I intend to use support it.  But for sighted people who want this, there's Resonance, which is...well, let's call it good enough.  Suffice it to say that by the time you're not doing proper HRTF but are instead just simulating a surround sound system without even proper ITD, you can really hear the difference.  But the thing is, I think sighted people may actually not be able to hear that difference.

I actually like WebAudio, but I hit limitations with the buffer node because I wanted to insert silence at the end of the loop (which allows for non-jittering footsteps under all circumstances), I found out that Resonance is meh, and I started looking at writing my own buffer node only to find out that the support for doing that is woefully immature, in ways I could go on about at length.  And then--as happens with me--it snowballed.  But to be honest, I like Electron for the ease of writing reasonable UIs for things like level editing, modern JS is great, and the packaging/updating story of Electron is also great, so one of the first bindings for this thing would probably be Node.
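As an aside, the "silence at the end of the loop" trick is simple to state: pad each footstep sound with trailing zeros up to a fixed period before looping, so the step interval never depends on the sample's own length.  A minimal numpy sketch (the helper name is hypothetical, not a WebAudio or Synthizer API):

```python
import numpy as np

def pad_to_period(sound, period):
    """Pad a sound with trailing silence so that looping it yields one
    footstep exactly every `period` samples, independent of the sound's
    own length.  Hypothetical helper, not a WebAudio or Synthizer API."""
    if len(sound) > period:
        raise ValueError("sound is longer than the loop period")
    return np.concatenate([sound, np.zeros(period - len(sound))])
```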

Libaudioverse might still be the top Google search result for Python 3D audio.  I haven't looked in a while.

@9
It's a shame I don't currently have the old camlorn_audio demo up, and it's a shame that I never did one for Libaudioverse (then again, Libaudioverse's hrtf was never very good anyway).  But unless bass got HRTF, I give you the old Aureal 3d demo (do it with headphones): https://www.youtube.com/watch?v=zJlYL6I6u-0

There's a better version of this against OpenALSoft, but this one is particularly interesting because once upon a time, in the days of Windows XP and earlier, if you had a Creative or Aureal sound card, you'd get this with Shades of Doom or anything else using DirectSound appropriately.  Put another way, if we could time travel Swamp back 10 years, Swamp would sound like this with little to no code changes.  There's a lot of interesting (and stupid) history there, but patent wars and then Microsoft happened, and then 3D audio technology died even for the sighted for the next 20 years.

Also, last I checked, Bass requires purchasing a commercial license.  Points to him for being popular in spite of that though.  I've never used the library in depth, but it's a good library and puts a lot of things in one place that would otherwise be very hard to get working together.

There's more I want to do with this that I could go into, with respect to consuming your tilemap and making hallways sound like hallways without you having to do anything, and stuff like that, but it's speculative because there needs to be both a library to build that in and a game or engine willing to work with me to make it happen, plus the math is complicated and my time is short.  So no promises.

@10
Insofar as I'm aware, you can consume an Apache-licensed product without having to provide attribution, but if you modify the library you get into fun things which are kinda fiddly, like having to notate which files you modified, etc.  I really want to just use the Unlicense, and I've even found a public domain audio output piece that seems like it should be reasonable and appears to have users.  But we'll see, because Resonance has a lot of juicy, juicy code in it and is Apache.  They failed at their HRTF because reasons, but they do still have a lot of good pre-tuned things and a very interesting reverb design, it's all commented and cites papers, etc.

What you want to do for HRTF is use the Hilbert transform to get a minimum phase filter, window and truncate it to 32 points (but I think 16 is good enough), convolve, then reintroduce the time delay at runtime.  You can find out how to do the Hilbert transform part here. I implemented that in Python with numpy and verified that the algorithm is correct, and eventually I'll probably do a blog post on it (or at least get the code into one file and publish a Gist).  You also need an interpolating delay line with subsample accuracy that doesn't introduce frequency artifacts, which you can get by oversampling, delaying in the oversampled representation, then downsampling at the end.  If you don't do it this way, you end up with phase artifacts that you can't get rid of, because the group delays of the impulse responses vary, so fading between them ends up with multiple "copies" at different delays, which is the primary reason the Libaudioverse HRTF doesn't work.
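To make the minimum-phase step concrete, here is a small numpy sketch using the real cepstrum, which is the standard construction that the Hilbert-transform approach computes (windowing/truncation and the runtime delay are left out, and the epsilon guard is my own defensive addition):

```python
import numpy as np

def minimum_phase(h):
    """Convert an impulse response to its minimum-phase equivalent via the
    real cepstrum.  The magnitude response is preserved; the bulk delay is
    removed and the energy moves to the front taps, so truncating the tail
    loses very little."""
    n = len(h)
    mag = np.abs(np.fft.fft(h))
    # Guard against log(0) at spectral nulls (defensive, my addition).
    cepstrum = np.fft.ifft(np.log(np.maximum(mag, 1e-9)))
    # Fold the cepstrum: keep c[0] and c[n/2], double the first half.
    folded = np.zeros(n, dtype=complex)
    folded[0] = cepstrum[0]
    folded[1:n // 2] = 2 * cepstrum[1:n // 2]
    folded[n // 2] = cepstrum[n // 2]
    return np.real(np.fft.ifft(np.exp(np.fft.fft(folded))))
```

For example, a filter delayed by 20 samples comes back with the same magnitude response but with its coefficients packed at the front, which is exactly what lets you truncate to 32 (or 16) points.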

There's no real benefit to a convolution framework for small block sizes because the FFT won't help you, and for the case of HRTF where you have many sources in parallel, you can batch them in groups of 4, use the SSE intrinsics or Clang's vector extensions (I favor the latter because it's cross platform) to do the convolution loops 4 for the price of 1, then share the output buffer to share the cost of downsampling among all sources, bringing that from O(n) to O(1) as well.  The convolution loop minus a framework is about 10 or 15 lines, even for "wide" simd stuff.
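The batching idea, minus the SIMD intrinsics, can be illustrated in numpy: stack the per-source blocks and filter taps along a batch axis, and each pass of the inner loop then advances the convolution for every source at once.  This is an illustration of the structure only, not Synthizer code, and `batched_fir` is a name I made up:

```python
import numpy as np

def batched_fir(blocks, filters):
    """Run several FIR convolutions in one loop by stacking sources along
    the first axis; each inner iteration does one tap's worth of work for
    all sources at once -- the numpy analogue of 4-wide SIMD batching.
    blocks: (n_sources, block_len), filters: (n_sources, taps)."""
    n_src, block = blocks.shape
    _, taps = filters.shape
    out = np.zeros((n_src, block + taps - 1))
    for k in range(taps):
        # One tap, all sources: out[s, t] += filters[s, k] * blocks[s, t - k]
        out[:, k:k + block] += filters[:, k:k + 1] * blocks
    return out
```

The loop body is the same few lines regardless of how many sources are in the batch, which is the "4 for the price of 1" observation.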

I don't favor convolution reverbs either.  You don't get nearly so many interesting parameters with those.

Mind you, big "in theory" neon sign here.  I implemented a POC in Python on top of Pyo (warning: Pyo is slow and weirdly unstable) but haven't taken it further yet.

Perhaps third time's the charm.  Also one of these days I ought to do a Libaudioverse postmortem.  Mind you that kind of just turns into "this was a hobby project until I realized I was out of time and needed to finish it, at which point I was out of time", but you also can't optimize the hell out of your synthesis when you have a general purpose node graph either.  I do still wish I'd managed to fund it enough to justify holding off on entering the job market, though, because it'd have been cool if completed.

@11
Libaudioverse is still around.  You don't want to use it unless you know enough to finish it.  It's kinda a dead end because it tried to be for everyone, and to be honest me-now cringes at some of the choices me-then made.  It was essentially WebAudio for Python, and attempted to also reach sighted markets that no longer exist, but in being that it became kind of a monster and would need a month or two of full-time work.  Suffice it to say that it can break if you unplug your headphones.

I'm talking about doing something better which ironically takes less time and effort than Libaudioverse needs, while simultaneously being both faster and easier to use for the case of games.  So good to know you're interested.

My Blog
Twitter: @ajhicks1992

2020-02-21 16:21:30

Hey @7

The reason we chose the LGPL was the description from choosealicense.com below:

However, a larger work using the licensed work through interfaces provided by the licensed work may be distributed under different terms and without source code for the larger work.

Because we wanted Lucia to be usable in both free and commercial games without having to open source the end game, while requiring that any improvements made to Lucia itself during a game's development be given back to the engine.
However, none of us (to my knowledge) are legal professionals of any kind.

If you like what I do, Feel free to check me out on GitHub, or follow me on Twitter

2020-02-21 16:39:01 (edited by philip_bennefall 2020-02-21 16:40:24)

@12 Thank you for the references, I'll have to read up on that when I have some time. My convolution based approach uses slightly larger impulses (200 frames) since that's what the dataset contained, but if I can reduce that without noticeable differences in quality, it's always a bonus. It sounds pretty good at least to my ears, but of course there's always room for improvement.

Regarding the Apache license, I am not a lawyer so of course I can't be 100% certain that my understanding is correct, but the following quote can be found here:

http://www.apache.org/dev/apply-license.html

Section 4d of the license provides for attribution notices to be included with a work in a NOTICE file, such that the attribution notices will remain, in some form, within any derivative works. Apache projects MUST include correct NOTICE documents in every distribution.

It's not quite clear to me whether this applies to derivative works distributed in binary form, but that's what it sounds like.

In any case, best of luck with your efforts. To answer your original question, if you do pick the Unlicense or something similarly liberal, I'm most definitely interested. smile

Kind regards,

Philip Bennefall

2020-02-21 16:51:19

@13
From your license file: "You may convey a Combined Work under terms of your choice that, taken together, effectively do not restrict modification of the portions of the Library contained in the Combined Work and reverse engineering for debugging such modifications, if you also do each of the following:"  The items that follow don't matter here, but I will leave it to you to read them if you want this quote in context.

This is fine as long as you're not in a language where all the packing options involve converting to bytecode and hiding the source and where all of the obfuscation/reverse engineering things you might want to do require making the bytecode nonstandard, because those activities are directly against that license.  You can't split the reverse engineering prevention of the game off and leave the library in a reverse engineering friendly state in Python.

The LGPL works for C++ for instance because you just distribute the library as a DLL, and people can copy the DLLs around all they want, but as soon as you're in something like Python it doesn't.

You probably wanted the MPL2 which, if I recall correctly, requires that changes to the library itself be given back, but allows incorporation into larger works without this particular restriction.  That ship has probably sailed, though, because afaik you have multiple contributors.  Libaudioverse landed on the MPL2 because of the LGPL restriction above, since the bindings to any interpreted language would be licensed under it too and would run into the same problem.

You don't need to be a legal professional to read the licenses.  You only need to be a legal professional if you're going to defend them in court.  Hopefully this doesn't kill the project, and it's possible no one cares, but it's definitely still a problem.

My Blog
Twitter: @ajhicks1992

2020-02-21 17:00:05

@14
Yeah. This is in a preliminary enough stage that I need to read the license.  I'm not a fan of Apache either, but places like Google like it because of the patent-related clauses.

200 samples is too long for realtime usage IMO.  You might be able to get away with it, but not on low end devices, or if someone tries to push it.  Are you doing bilinear interpolation or just locking it to the nearest dataset point?  The artifacts really show up when you start interpolating and moving sources around fast, or putting them between data points; plus, to do the interpolations you need to run two copies of every source.

Perhaps if you don't have Libaudioverse's overhead you can effectively brute force it nowadays, but I know that OpenALSoft sounds as good as that Aureal demo and only uses 32 points by default.

The TLDR is that a minimum phase filter "takes out" the delay and puts all the most important coefficients at the front, so when you drop the tail you don't lose much, if anything.  Then you can interpolate the delay line part by stepping the delay line every sample.
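In its simplest integer-sample form, "stepping the delay line every sample" looks like the sketch below.  This is a hypothetical illustration of the slewing idea only; real HRTF ITD needs subsample accuracy via the oversampling trick described earlier:

```python
import numpy as np

class SteppedDelayLine:
    """Integer-sample delay line whose delay moves toward a target by at
    most one sample per tick, so a change in source distance never causes
    a discontinuous jump.  Simplified sketch: no subsample interpolation
    or oversampling, and not an actual Synthizer API."""

    def __init__(self, max_delay, delay=0):
        self.buf = np.zeros(max_delay + 1)
        self.write = 0
        self.delay = delay
        self.target = delay

    def set_target(self, target):
        self.target = target

    def tick(self, sample):
        self.buf[self.write] = sample
        # Slew the delay by at most one sample per tick toward the target.
        if self.delay < self.target:
            self.delay += 1
        elif self.delay > self.target:
            self.delay -= 1
        out = self.buf[(self.write - self.delay) % len(self.buf)]
        self.write = (self.write + 1) % len(self.buf)
        return out
```

Feeding an impulse through a 5-sample delay makes it emerge 5 ticks later, and retargeting the delay changes it by one sample per tick rather than all at once.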

I'm curious: did you find any better HRTF data than the MIT dataset?  I have a couple I haven't evaluated yet, but I wouldn't mind something better/more comprehensive.

My Blog
Twitter: @ajhicks1992

2020-02-21 17:56:53

@15 thanks for the clarification. What you just described was the intent of using the LGPL. I'll talk with the others about whether we are going to change the licensing of Lucia.

If you like what I do, Feel free to check me out on GitHub, or follow me on Twitter

2020-02-21 18:03:36

I would use it in any future projects I undertake. I always wanted the ability to use 3D audio and mess around with reverb, but all of my efforts were in vain. OpenAL is just funky on my machine, FMOD wrappers are dead in Python... I heard of SoLoud but have not tested that yet, and I don't know what other 3D libs are out there, lol.

2020-02-21 18:19:28 (edited by ambro86 2020-02-21 18:27:25)

Hi Camlorn, great idea! I think our community really needs a good 3D sound library. So far, to my memory, there is no audio game that truly manages sound in 3D completely. I'll explain. So far games use left and right panning to indicate the position of an object, but they do not convey whether the object is in front of or behind the player. To get around this they often use pitch. Would the library you intend to develop go in the direction I'm thinking of, that is, actually give the sensation that the object is in front of the player or behind them?
For an example of good 3D sound management, listen to this video. So far no audio game has ever managed 3D sound in such a realistic way: https://www.youtube.com/watch?v=8IXm6SuUigI
Thanks and good job!

2020-02-21 18:23:52

@17
The thing about the FSF and any license they put out is that their definition of free isn't your definition of free.  Be careful when adopting their licenses, because they can really bite you like this.

And be careful in general if you change it.  I would say that the MPL2 has what you want, but I haven't read it in years, so don't just take my word.  You can honestly literally make the license file "You can do whatever you want with this software, but if you change it you must make your contributions available. Additionally, you use the software at your own risk and no express or implied warranty is provided".  The Unlicense is something like 2 paragraphs and says almost exactly that minus the contribution bit, and yes, that will be legally binding.  You could put "All users of this software must order a dozen cupcakes and send them to this address in New York" and it would be legally binding.  No one would use your software, you'd be laughed out of the room, but it'd be legally binding.  Using someone else's lawyer-vetted license is better if there's one that fits what you want to do of course.

Some friends and I once got fucked over by a contractor who held code hostage on a hard deadline, hacked together a stupid 3 paragraph thing in Google Docs with grammar and formatting mistakes in 15 minutes, then forced us to sign it because we needed the code faster than could be handled through lawyers.  It was also legally binding.

@18
As far as I know, SoLoud doesn't do HRTF.  It might, but I don't think it does.  It does however contain miniaudio, which is the public domain piece I might use for audio output.

The only ones I know of are OpenALSoft, Resonance, and a thing called Slab3d. Everything else has to be licensed, e.g. Wwise, Rapture3D, etc., and you're going to pay a pretty penny for it.

My Blog
Twitter: @ajhicks1992

2020-02-21 18:29:56

@16:
I use the public domain CIPIC HRTF database found at https://www.ece.ucdavis.edu/cipic/spati … hrtf-data/

I use the special KEMAR dataset, which has 5 degree intervals. I snap between them, with a very brief crossfade. This obviously causes a spike which I would like to avoid, but I have not yet found a way. However, the transitions are quite smooth and I am personally happy with the result. I had to do a lot of post processing, though, in order to even out some of the crazy peaks in the frequency spectrum and to put the low end back. The MIT dataset suffers from similar peaks and lack of bass, and the MIT license is ambiguous regarding binary attribution, so I avoided it.

Kind regards,

Philip Bennefall
P.S. I use miniaudio in my framework and it works wonders. And the author is a great guy, too.

2020-02-21 18:31:29

@camlorn welcome back!

1. You talked about this project being game specific as compared to Libaudioverse, which was basically implementing WebAudio and so would be suitable for anything that needed audio synthesis. How does that translate to the features that won't be available in this library vs Libaudioverse? Will its API also be graph based?
2. Regarding the WebAudio issues that you mentioned, I'm wondering why you couldn't insert the silence in the BufferNode before adding it to the graph? I'm probably overlooking something obvious here.

2020-02-21 18:34:12

The other two 3D audio libraries I know of that haven't been mentioned here yet are Steam Audio and the Oculus Audio SDK. They are only available as binaries, so you wouldn't be able to contribute improvements, but you can use them in commercial projects.

2020-02-21 18:42:03

I've been looking for an audio library for my Godot accessibility plugin. I'm kind of ignorant about these things. Would this only be HRTF, or would it work well for 5.1/7.1 systems? Godot's audio story is kind of dumb right now. 2-D audio is hard-centered on screen center, meaning you can't do a 2-D game with surround audio out of the box. I hack around that by creating an invisible 3-D viewport, a non-rendering camera, then mirroring all 2-D nodes into the 3-D soundscape. But even that leaves something to be desired, and there is some interest in redoing Godot's audio subsystem for 4.0.

HRTF is something I definitely want, even if only for my Godot games. But I'd also like to support non-headphone 5.1/7.1 use as well. If you can pull that off, I'd make an effort at creating a GDNative module for it. Whether or not I could get it into Godot 4 is another story. Godot is MIT, which I assume is compatible with the Apache license. Another challenge is that I'm actually going for pretty substantial cross-platform support, with builds of my game currently running under Linux, Windows, Web Assembly, and Android. I know that's a tall order, but it's what my current engine empowers me to do right now, even with its meh implementation.

2020-02-21 18:49:18

Please! Consider making this.
I can wrap it in Cython for Python and Go.