2014-07-11 07:41:54 (edited by frastlin 2014-07-15 17:04:55)

Because sound libraries are such an important topic for audiogame developers, here is a thread where we can discuss them!

Here is the list of the sound libraries I know and their licenses; if you know of more, please post them and I'll add them!
OpenAL Soft: LGPL
BASS: free for free games; $125 for one small game; $950 for a high-budget license for one game; $2950 for unlimited games
FMOD: free if you don't wish to sell your games; many, many different licenses depending on what you wish to do and which product you use
Audiere: LGPL
DirectSound: Windows only, and comes on every Windows computer (I think)
Core Audio: seems to be a super advanced and much better-written library, like DirectSound on Windows but for Apple devices (iOS and OS X)
Pyo: GNU GPL v3
PortAudio: MIT (has PyAudio Python bindings)
XAudio2: Microsoft's gaming audio library; like DirectX, but for sound
The Papa Engine: super powerful, and is what Papa Sangre uses. It is iOS-only at the moment; you need to contact them for licensing

I believe most languages have a special module that you can use to manipulate raw sound data, so search for "raw audio" plus your language.
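For example, here's what that looks like in Python with nothing but the standard library: generating one second of a 440 Hz tone as raw 16-bit samples and writing it out with the wave module (a minimal sketch; real code would stream in blocks rather than build one big list):

```python
import math
import struct
import wave

RATE = 44100        # samples per second
FREQ = 440.0        # A4
AMP = 0.5           # half of full scale, to leave headroom

# One second of 16-bit signed mono samples.
samples = [
    int(AMP * 32767 * math.sin(2 * math.pi * FREQ * n / RATE))
    for n in range(RATE)
]

# Pack the samples little-endian and write a standard WAV file.
raw = struct.pack("<%dh" % len(samples), *samples)
with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)       # mono
    f.setsampwidth(2)       # 2 bytes = 16 bits per sample
    f.setframerate(RATE)
    f.writeframes(raw)
```

Once audio is just a list of numbers like this, volume is multiplication, mixing is addition, and so on; that's all a sound library is doing underneath.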

Game libraries that have an audio library built in:
SDL: zlib license (pygame is based on it)
Allegro: giftware (they would like you to donate, but it's not required)

For those who don't know, a sound library is what is used to manipulate audio: rewinding, panning, playing different audio formats, adding echo, recording, generating audio, dealing with MIDI, or anything else that has to do with controlling the sounds you wish to have in your game.
Most libraries will have a way to play a sound, pause a sound, stop a sound and change its volume. Past that, it really depends on which library you choose.


2014-07-11 08:06:44

So camlorn is apparently writing his own sound library, because he found OpenAL much too complex and buggy for anything that had more than 256 sounds, as well as chock full of broken features and bad design blunders.
I personally thought OpenAL looked really awesome with the x, y and z coordinate system. But a 256-sound limit is crazy! Also, I can't live with horrible error detection, so that instantly puts a red flag against OpenAL for me.

Has anyone used Audiere or FMOD? FMOD has come out with a new license for indie developers where they get one of the libraries free, but I'm not sure if it has an API. They are one of those peculiar companies that think GUIs are cool!
Audiere was last updated in 2006, but despite that, it seems to have a very nice set of features. I'm not sure if it has a Python wrapper though.

Is SoX considered a sound library?


2014-07-11 12:45:16

SDL, raw, Allegro ...

Obviously depends on what you need to do.

Just myself, as usual.

2014-07-12 15:56:38

Core Audio is not like DirectSound.  Core Audio, should it become available for Windows, would invalidate all audio work I have ever done.  Ever.  It's the most advanced sound library I've ever seen, save Pyo, which has major speed issues and is GPL (with no commercial version).  Confusingly, there is also a Core Audio on Windows, but it's not the same: the Windows Core Audio is an extremely low-level way of talking to the device.
To be 100% clear on OpenALSoft, the source limit is not the end of the world.  You can get around it if you're clever, but this cleverness is not at all simple.  OpenAL splits audio into sources and buffers.  A source represents information on location in 3D space, cones, and some other stuff.  A buffer holds audio.  You attach buffers to sources.  The problem is that all properties of a source are needed for audio, so, in order to share a source between sounds, you've got to write a not-so-small intermediate layer that remembers what the source used to be set to; that is, you have to make your own source object that supports having the actual source pulled out from under it.  If you don't attach a buffer to a source, it takes basically zero space and zero CPU, yet we can only reliably have a couple hundred without hacking OpenALSoft itself (he says he wants to remove this limit, and maybe he did, but there are others just as annoying).  This is the *least* of the issues I had with OpenALSoft; I've mentioned the rest here and here.  The other reason I ended up doing my own is that OpenAL is "rigid": you get a source, at most 4 effects in parallel, and then the sound card, but there's a bunch of stuff that can be done if the library lets you actually manipulate the sound graph yourself (synthesizers, audio radars become much easier, getting a copy of the audio stream without hassle, and a friend of mine has an idea for environmental reverb that might actually let you hear the shapes of rooms).
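That intermediate layer can be sketched in a few lines of Python (hypothetical names throughout, not OpenAL's actual API; the point is only the pattern of caching properties so the real source can be pulled out from under the object):

```python
class FakeRealSource:
    """Stand-in for one of the scarce real OpenAL sources."""
    def __init__(self):
        self.position = None
        self.gain = None

    def apply_position(self, x, y, z):
        self.position = (x, y, z)

    def apply_gain(self, g):
        self.gain = g


class VirtualSource:
    """Caches its own state so a real source can be attached,
    stolen, and re-attached at any time without losing anything."""
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)
        self.gain = 1.0
        self.real = None    # the actual source, or None while stolen

    def set_position(self, x, y, z):
        self.position = (x, y, z)              # always remember the value
        if self.real is not None:              # forward only if we hold
            self.real.apply_position(x, y, z)  # a real source right now

    def attach(self, real):
        # A real source became available: replay all cached state onto it.
        self.real = real
        real.apply_position(*self.position)
        real.apply_gain(self.gain)

    def detach(self):
        # The pool reclaims the real source; our cached state survives.
        real, self.real = self.real, None
        return real


v = VirtualSource()
v.set_position(3.0, 0.0, 1.0)   # no real source yet: value is only cached
v.attach(FakeRealSource())      # cached state is replayed onto the source
```

A pool then hands the few hundred real sources to whichever virtual sources are currently audible, calling detach/attach as sounds come and go.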
Of the libraries listed here, and disregarding 3D audio, Bass is good.  SDL is just a low-level wrapper around the sound card unless you also use SDL_Mixer, and FMOD now uses a graphical studio thing like most of the commercial offerings.  Irrklang is also worth looking at in that it's supposed to be very simple, but I've not used it.  The Python options for easy audio include Pyo (powerful, will take an hour to learn but is the most flexible; has major, major speed and latency issues and is primarily aimed at research), PyAudio (a low-level wrapper over PortAudio; may be useful but will require writing a mixer), and Pygame (looks easy enough; limited API, so you can't do too much with it, but it's sufficient for, say, SoundRTS or Shades of Doom quality sound).  The additional Windows option is XAudio2, but this is C++ or maybe C#, may or may not have latency issues, and looks extremely complicated.
That's the limit of my knowledge.

My Blog
Twitter: @camlorn38

2014-07-12 19:30:34


Since we all seem to be sort of screwed, because all the libraries either:
* aren't reasonably affordable
* don't support 3D
* have latency, delay problems and such
* have very badly designed APIs

Why not start something of our own, together?
The plan could be the following:

* Begin with an existing low-level library such as PortAudio, so that we don't have to bother with multiplatform hardware support. From there, make our own mixer, channel/source and sample/buffer objects, etc. BASS and FMOD are examples of well-made APIs; we can probably take a bit of inspiration from their API design
* Program the core API in C/C++ so that we can then make bindings for any language we wish, and so that we aren't stuck with a particular one from the start; please no Python, no Java, no C#, as for each of them there is somebody who doesn't like or doesn't want to use it
* LGPL license, so that we can use it in commercial products, as well as get feedback from potential contributors

At some point, I started writing a C API on top of PortAudio. Unfortunately, I don't know enough math to implement involved effects like LPF, HRTF, etc., so I limited my thing to 2D and pseudo-3D simulated only with pitch/pan. But by working together we might have a chance to understand how it all works.
The main problem with my stuff was that the CPU got loaded very quickly as soon as you had 50 or 100 channels playing at the same time. I'm not an expert in CPU optimization, SSE and such, either. I'm pretty sure that DS, BASS, FMOD and the others all use SSE.
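For reference, the inner loop of a mixer is just a sum of scaled sample buffers, and most of the CPU cost is in doing that sum sample by sample. A vectorized version (numpy here, standing in for the SSE code BASS and friends use) moves the per-sample work into optimized native loops; a rough sketch:

```python
import numpy as np

def mix(channels, gains):
    """Mix equal-length float32 buffers (values in [-1, 1]) into one,
    applying a per-channel gain, then clamp to full scale."""
    out = np.zeros_like(channels[0])
    for buf, gain in zip(channels, gains):
        out += buf * gain              # one vectorized multiply-add per channel
    return np.clip(out, -1.0, 1.0)     # crude limiter so the sum can't clip

# 100 channels of one 1024-sample block each, as in the case above.
rng = np.random.default_rng(0)
channels = [rng.uniform(-1, 1, 1024).astype(np.float32) for _ in range(100)]
block = mix(channels, [0.01] * 100)
```

A real mixer would do this once per output block inside the audio callback; the per-sample Python loop this replaces is exactly where 50-100 channels starts eating the CPU.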

I had also started, at some point, a MIDI player that loads samples from SF2 banks and generates 44.1 kHz stereo audio streams. It wasn't very stable, but it might still be useful.

There are 10 kinds of people : those who know binary, and those who don't.

2014-07-12 21:20:48

I have already started fixing this problem for real, but I'm not accepting contributions because it's going to be commercial.  Before everyone jumps on me: Libaudioverse will be free for open source games by virtue of being GPL.  If you're going to insist on closed source, you can probably afford whatever I decide (I'm thinking $20-$30 for these games), or you can open source your game because, you know, it's free anyway.  I can offer commercial licenses by virtue of being the only contributor and/or getting copyright agreements in place, which is why I've not been screaming for help.  I'll probably have something releasable in 1-3 months: I finally got past a bunch of stupid low-level infrastructure, managed to solve the bindings problem (my bindings maintain themselves, no human required), and corrected the mistake of choosing C over C++11.  I'm going to write something on my progress shortly, as it is now significant enough that I can actually have a meaningful article.  There will also be something else on how I fixed the bindings problem, as at least one friend of mine thinks that it's worthy of Hacker News.
Unfortunately, I don't think collaboration will work.  There aren't enough people on these forums with enough knowledge of mathematics.  The only reason that I've managed to get where I have is that I went to college, got through Calculus 2, and spent about 6 months hammering my head against DSP until I was able to build a mental framework of what's going on.  This is not an easy problem, mathematically or programming-wise: if you've not had at least calculus, you're not going to be able to understand anything beyond pan.  All of the explanations involve derivatives and integrals, complex numbers, raising complex numbers to exponents, and usually Euler's identity.  At least 3 of those also involve non-trivial trigonometry.  The current libraries are where you get without this knowledge; going further requires it.  The Libaudioverse repository is about to hit the thousand-commit mark, to give some idea of the scope of the problem.  The reason that camlorn_audio used OpenAL instead of a custom mixer is that, when I wrote it, I hadn't made the journey through DSP hell.  And my grasp of DSP is still not complete, merely complete enough that I can do useful and cool things.
It's also not something you understand by reading code.  Code does not tell you the original formulas or why you're suddenly adding and subtracting x and y.  Understanding the basics of DSP is something that can only be done with mathematics, not via code examples.  Sad but nevertheless true.  Having the example in front of you isn't going to help much unless you know where the 20 lines of magic math came from.
Beyond those problems, a truly powerful library is going to require at least a little familiarity with graphs (the data structure, not the kind with graph paper) and must be written in C++.  To be truly fast, you're also looking at at least one of 3 things: cache friendliness, SSE, or multithreading the actual synthesis algorithms.  None of these is trivial, though the last is surprisingly simple if you architect for it when you start coding.  There's also at least one lock-free algorithm at the bottom of Libaudioverse's audio code: in order to properly talk to audio devices, you can't accidentally priority-invert the audio thread.
I'm not trying to depress people or scare interested people away, but I'm not going to sugar-coat it.  I did DSP outside school, and it took a very long time and a very lucky find of an arguably accessible resource (it's got LaTeX alt text), as well as reading lots of Wikipedia articles over and over.  I think it's a very interesting field, and learning it will teach you a lot of other math in the process.  But getting far enough to understand what is going on with HRTF is going to take a while, not to mention reverberation, which combines all of the DSP basics plus a bit of personal creativity; it is said that only the original designer will ever understand all the parameters in a reverb algorithm.
Basically, implementing a good quality DSP library takes a bunch of tricky math stuff, some computer science stuff, and a bunch of low-level issues (hello, floating point subtleties) and combines them in a blender.  Most sighted programmers can't do this kind of coding, as it's really a specialized domain; people specifically go to college for DSP, and the only reason I know it at all is that I spent a year or so on it.
And for the record, using something like this in your game is easy.  It's making it in the first place that is so difficult.
If people want to discuss things related to DSP, I'll try to help if I can.  I have a practical understanding--the kind you can program with.  I'm still working towards a full mathematical understanding, but I'll get there in the end as the hardest part is behind me now.  I think there's one or two others floating around this forum that understand the topic as well.  My particular advantage as to answering questions is that this was not taught to me with diagrams (or at all), leaving me in the position to perhaps put it in words better than a sighted person could.

My Blog
Twitter: @camlorn38

2014-07-14 23:54:04

Wow camlorn that sounds awesome!
I really can't wait to see what you have created! It will be the new library for all the audio games! big_smile
Half of what you said went right over my head, so that probably means that it will do way more than I need!
Does your library have panning, and the ability to set objects on a map and have them fade faster or slower and change direction, like what Swamp does? Also, can you load and unload sound objects and stream background music (if loading is super fast, it doesn't really matter), or work with Elias?
And for all us newbies, have you considered creating a Python wrapper or a BGT wrapper? That is probably something other people could do, unless you wished to sell each wrapper separately...


2014-07-15 03:39:42

Well, to provide concrete details of what I have (a more detailed and much more technical blog post is around the corner, but I want a couple more things done first):
The library consists of objects that you connect.  Think of objects as boxes with ports on them: each port can have a wire connected to it.  Out of each box also comes a number of wires, which you can split as many times as you want.  Ports represent audio inputs and wires represent audio outputs.  Each box takes audio from its inputs, does something to it, and then spits out audio on its outputs.  In addition, boxes also have switches and dials, the properties, that let you control exactly what the boxes do.  Examples of boxes include the mixer (combines multiple audio sources), the panner (pans audio with or without HRTF), the limiter (prevents audio from going above 1.0 or below -1.0, which is needed to prevent odd things on some sound cards), the file node, the sine wave generator, and a bunch of others that I'm in the process of writing.  This is the level you would work at for writing a custom simulation of your own, music software, media players, voice streaming, etc.: Libaudioverse is by no means audiogame-specific.  camlorn_audio was, which was a mistake, and also had to do with the fact that OpenAL tries to be game-specific, too.
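As a toy illustration of the box-and-wire idea (a generic sketch, not Libaudioverse's actual API): each box pulls a block of audio from whatever is wired to its inputs, transforms it, and hands it on.

```python
import numpy as np

class Node:
    """A box: pulls blocks from connected inputs, transforms, outputs."""
    def __init__(self):
        self.inputs = []          # ports: wires from other boxes

    def connect(self, node):
        self.inputs.append(node)

    def pull(self, frames):
        raise NotImplementedError

class Sine(Node):
    """A generator box: no inputs, produces a sine wave."""
    def __init__(self, freq, rate=44100):
        super().__init__()
        self.freq, self.rate, self.phase = freq, rate, 0

    def pull(self, frames):
        n = np.arange(self.phase, self.phase + frames)
        self.phase += frames
        return np.sin(2 * np.pi * self.freq * n / self.rate)

class Gain(Node):
    """A processing box with one dial: scales whatever flows through."""
    def __init__(self, gain):
        super().__init__()
        self.gain = gain

    def pull(self, frames):
        return self.gain * self.inputs[0].pull(frames)

class Mixer(Node):
    """Sums all connected inputs."""
    def pull(self, frames):
        return sum(node.pull(frames) for node in self.inputs)

# Wire up: two sine boxes -> gain dials -> one mixer ("the sound card"
# then pulls blocks off the mixer).
mixer = Mixer()
for freq, g in [(440, 0.5), (660, 0.25)]:
    gain = Gain(g)
    gain.connect(Sine(freq))
    mixer.connect(gain)
block = mixer.pull(1024)
```

The flexibility comes from the fact that any box's output can feed any other box: the same panner works whether its input is a file reader, a synthesizer, or a whole sub-graph.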
The next level up, and what most people are going to want, is the 3D simulation.  You create an environment, which is an object with a bunch of properties on it representing things like room size and echo and reverb: basically whatever I code.  You then use this environment to create sources.  On the environment is a pair of properties that specify your position and orientation; in audio land, you are known as the listener.  Each source has properties representing its position, orientation (it will be possible to make sources that sound different if they're facing away from you, e.g. simulating a speaker playing music), size (specified as the maximum distance at which the source is to be audible), and other things.  While the usage of the first set of objects is not simple, the usage of this set is extremely so, involving something like 2 function calls to initialize at program start and 1 to create a source.
Finally, the library will provide callbacks.  I am going to implement those tomorrow and doing so is going to be trivial, but they needed some now-completed infrastructure first.

I've been working on this since the summer began, and I'm about 75% of the way to an alpha release.  One of the landmarks is going to be reimplementing Unspoken on top of it; the thing Libaudioverse can do that camlorn_audio can't, even now, is integrate itself with NVDA's audio APIs.  The reason that it hasn't gone faster is that I needed to implement a general and flexible infrastructure, and I chose C over C++ (see the link in my last post).  I now have the ability to turn out new bindings in a day at most, and Python already works (I've not released because it's still missing essential features and the bindings are still a bit raw; nevertheless, they are completely functional).  The 3D simulation is lacking in features but works, and the library has full HRTF support.  I have the ability to implement literally any type of LTI filter, and quite a few things that aren't (this means things to people who know about DSP, but translates to lowpass, highpass, bandpass, band-reject, DC blocker, and a few other things for those who don't).

As for performance, I have written a benchmarking program.  The benchmark can manage anywhere from 100 to 200 sources in realtime on a single core and without SSE.  Specifically what I get depends on background processes and whether the last change I introduced is doing something stupid and inefficient.  In real programming, for a variety of reasons, this is going to translate to 70-100 playing sources for most people.  If you create too many, the mixer will be too busy to answer requests from your code in a timely manner, consequently dropping your frame rate (there's a device lock).  There are 2 optimizations I have yet to implement: one makes it scale to the number of cores you have (it's currently only using one), and the other is SSE.  I expect each of these alone to increase the performance by at least a factor of two.  The HRTF I am testing with is a 128-point response, which sounds twice as good as the one OpenALSoft lets you get away with; given that I'm getting this many sources, if I made the sacrifices OpenALSoft does, I'd be outperforming it already (its default HRTF takes a quarter of the computing power).  I can make those sacrifices; perhaps better, I can leave those sacrifices in your hands if you want them.  If you aren't using HRTF, consider the number of playing sources unlimited.

Finally, I am not planning to sell bindings separately, and you will be able to use it for free if your app is open source.  I'm going to work out some pricing schemes that depend on how much you want to sell the app for, with a very expensive license that lets you use it in as many apps as you want.

My Blog
Twitter: @camlorn38

2014-07-15 06:58:38 (edited by frastlin 2014-07-15 07:03:31)

Wow! I'm super excited for the 3D simulation!!! I totally love your explanation; it is really clear!
So, when you say "The benchmark can manage anywhere from 100 to 200 sources in realtime on a single core..."
You are meaning at once?
That is a ton of sounds at once!
Even 70-100 sounds at once is a lot. When you were saying something like that about OpenAL, I thought you meant initializing sounds. You will probably answer this in the tutorial, but I just want to make sure that it is easy to create a ton of sound objects from .ogg files. If I have 4 or 5 footsteps on grass, 4 or 5 footsteps on metal, and 3 or 4 footsteps on the road, I would like to play a random one of those each footstep, and have them all share similar settings.
In pygame I create a sound object:
sound1 = pygame.mixer.Sound("mysound.ogg")
Then I set properties when I play that sound:
channel1 = sound1.play()
channel1.set_volume(1, 0)
(The above is supposed to play out of the left speaker only; in pygame, per-speaker volume lives on the channel rather than the sound, which is why it is a little more complex.)
Or what I can do is create a channel with all those properties and just play lots of sounds on that channel.
I think what you are saying with the objects is similar to channels in pygame.
So I can create a setup with a certain amount of filtering and panning, then play as many sounds as I wish through those settings? Then I can create another object for muffling or volume, connect it to the first object, and play as many sounds as I wish through both of those objects' settings?
Kind of like busses in Sonar?
This sounds super awesome and I can't wait for it to come out!
Let me know if you would like help marketing it and whatnot, I know a lot of places where it can be linked and developers who I can push it to!
I read a couple of your blog posts and they are awesome! Frankly, I think you would find a masters program trivial at this point, and the only reasons to go would be to find a more advanced teacher and to get access to more refined academic circles.
Your passion for programming shows, and coupled with good business sense, I think you won't need to worry too much about finding work at your level of education.


2014-07-15 13:32:52

Wow, what an interesting topic!
I'm just following the topic because I don't have much to say. But I have a question:
Have you looked at the sound library or engine used in Papa Sangre 2? I have heard that the company has released the engine so people can use it in other games as well.

Best regards SLJ.
Feel free to contact me privately if you have something in mind. If you do so, then please send me a mail instead of using the private message on the forum, since I don't check those very often.
Facebook: https://facebook.com/sorenjensen1988
Twitter: https://twitter.com/soerenjensen

2014-07-15 16:35:24

I do not have access to the Papa Sangre engine.  It does not run on anything that's not iOS, and is of little interest to me because of that.  I would also need to pay for access.  iOS and Android are in the works for Libaudioverse, but don't expect anything on that front for 6 to 8 months (iOS requires super-optimization: compiling for iOS is quick, but running effectively in realtime is hard).  The mixer itself is pure standard C++11, and I make an effort to never violate the standard.

I'm not familiar enough with those programs to see how your analogy compares to what is actually going on.  Each input can be connected to *exactly* one output, though an output can go to any number of inputs.  The specific process of sharing an effect, therefore, is the construction of a mixer with, say, 32 inputs.  You then send the mixer's output to the shared portion and connect your objects to the mixer's inputs.  This is the low level again, however; the 3D simulation is much simpler.  I made sacrifices and compromises to be able to run as fast as I'm running: one of these was an implicit mixer at each input.  Pyo sort of does this and a bunch of other stuff.  The result is an audio library that can't do powerful things in realtime for games, and only works for Python.  If you're looking for something it is similar to, go look at the Pyo tutorials and then think about how great that would be if it were fast enough to run in realtime while handling 100 sources of audio, reverb, and music (Bryan Smart did Headspace, but only gets 16 sources out of it total, as I recall).
The way something like footsteps will work is as follows.  Sources are merely speakers and get their audio from something else; in this case, a node that reads a file.  What I'm going to do is add support for queuing files, plus a callback that tells you when it's finished one of them.  What you do then is put your walking code in the callback and decide what sound you want to play.  This is something I ran into with camlorn_audio, and it was only solvable with the creation of (you guessed it) threads.  If you don't want anything playing right now, you just don't queue the next one.  To avoid issues with threads, you'd typically do this by telling your main loop to call the deciding code (every language, possibly including BGT, provides a threadsafe flag, so the callback becomes something like 3 lines and the main loop just does a standard if(should_decide_next_footstep)).
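That flag pattern is tiny in practice; a Python sketch (hypothetical names, with the audio library's callback simulated by a plain function call):

```python
import threading

should_decide_next_footstep = threading.Event()  # the threadsafe flag

def on_sound_finished():
    # Called by the audio library from its own thread: do almost
    # nothing here, just raise the flag.
    should_decide_next_footstep.set()

played = []
def queue_random_footstep():
    # Stand-in for picking one of the 4-5 grass/metal/road files.
    played.append("footstep_grass_1.ogg")

def main_loop_tick():
    # Called once per frame from the game's main loop.
    if should_decide_next_footstep.is_set():
        should_decide_next_footstep.clear()
        queue_random_footstep()   # safe: we're on the main thread now

on_sound_finished()   # simulate the library's callback firing
main_loop_tick()      # the main loop notices and queues the next sound
```

The callback stays three lines, and all game logic runs on the main thread, so there's nothing to lock.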
The thing to remember is that Libaudioverse separates the 3D simulation completely from how the audio is arriving: objects can be files, URLs, MIDI synthesizers, etc.  You then tell a source to read from one of them, controlling characteristics specific to files through the file object.  This is a little hard to explain in a forum-post-appropriate length, but there will obviously be tutorials and examples; this is going to be commercial and had better come with docs.  Regardless, the 3D simulation couldn't care less where it gets audio from.
The benchmark works as follows.  It starts by creating 10 sources connected to sine wave objects and synthesizing 5 seconds of audio from them (Libaudioverse can synthesize to buffers instead of the sound card).  It times this and, if it took less than 5 seconds, runs the test again with ten more sources.  The benchmark is capable of synthesizing 5 seconds of audio in 5 seconds for anywhere between 100 and 200 sources, depending on the aforementioned factors.  Because there is a lock on the device, actually making Libaudioverse take 100% of the time for synthesis will block your code as you call into it, so the practical number is lower; as I said previously, 100 is what can probably be expected on a PC from the finished product after a bit of optimization.
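That benchmark shape is easy to reproduce for any mixer; a generic sketch (not the actual Libaudioverse code, with a trivial sum-of-sines standing in for the real synthesis):

```python
import time
import numpy as np

RATE, SECONDS = 44100, 5

def synthesize(num_sources):
    """Stand-in for the real mixer: sum num_sources sine waves
    over SECONDS seconds of samples, normalized to full scale."""
    n = np.arange(RATE * SECONDS)
    out = np.zeros(len(n))
    for i in range(num_sources):
        out += np.sin(2 * np.pi * (220 + 10 * i) * n / RATE)
    return out / max(num_sources, 1)

def benchmark():
    """Add sources 10 at a time until synthesizing 5 seconds of audio
    takes longer than 5 seconds of wall-clock time; the previous count
    is the realtime limit."""
    sources = 10
    while True:
        start = time.perf_counter()
        synthesize(sources)
        elapsed = time.perf_counter() - start
        if elapsed >= SECONDS:     # fell behind realtime: stop
            return sources - 10     # last count that kept up
        sources += 10

# realtime_sources = benchmark()  # takes a while to run, so not run here
```

The device-lock caveat in the post is why the realtime limit found this way overstates what a game can use: the benchmark has the mixer to itself, while a game also needs the mixer to answer calls promptly.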
And I'm going to be applying to MIT, probably Stanford, and possibly a few other places I've not found yet in the fall.  I'm not stopping at a masters program.  I'm going all the way to doctorate, if at all possible.

My Blog
Twitter: @camlorn38

2014-07-16 08:44:23

Thank you for your answer. The iOS engine sounds cool if people want to make audiogames for iOS, but I don't know if the engine is accessible.

Best regards SLJ.
Feel free to contact me privately if you have something in mind. If you do so, then please send me a mail instead of using the private message on the forum, since I don't check those very often.
Facebook: https://facebook.com/sorenjensen1988
Twitter: https://twitter.com/soerenjensen

2014-07-16 18:14:29

I've heard that it's also hard to get hold of the Papa Sangre engine, even assuming you are ready to pay whatever they ask.  The problem with iOS development, however, is much more fundamental: Xcode is the example of why I consider Mac accessibility to be flawed at the VoiceOver level.  You're not going to enjoy developing for iOS unless you get something like RubyMotion or know how to write custom scripts to make Xcode suck less, and possibly not even then.  You can do it, it's just not exactly fun; the only blind person I know doing it on a regular basis uses a Windows VM for code editing.
As for why I'm going to support it?  Libaudioverse is bigger than the audiogaming community or the blind market.  Sighted people want and need this software too, and being able to share code across platforms has become a very, very big thing in the computer science and programming world.  If you wanted, you could combine Libaudioverse with SDL and write games for 5 platforms.

My Blog
Twitter: @camlorn38

2014-07-17 23:53:09

I'm not sure I understand: is your final product going to use OpenAL behind the scenes, or not at all?

About math: in theory I have seen all of that: vector calculus, derivatives/integrals, multiple integrals, vector analysis, and such. However, I have problems applying all that stuff concretely as soon as the level rises above a certain point. Pseudo-3D by playing with vectors and trigonometry isn't so hard; but I have already tried to read up on the most basic DSP, i.e. low/high/bandpass filters, and after hours of reading I still don't understand where the core of the magic comes from (the coefficients usually called a0, a1, a2, b0, b1, b2). So far, I have blindly copied code without understanding what's going on behind it. It just works, period. I suppose that reverb and the convolutions used in HRTF are a level higher in difficulty?
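For what it's worth, those a0...b2 coefficients most often come from Robert Bristow-Johnson's Audio EQ Cookbook. Here is its lowpass recipe in Python: the filter loop shows exactly where each coefficient is used, while the derivation (a bilinear transform of an analog prototype) is the part that needs the theory:

```python
import math

def lowpass_coefficients(cutoff, rate, q=0.7071):
    """Biquad lowpass coefficients per the RBJ Audio EQ Cookbook."""
    w0 = 2 * math.pi * cutoff / rate      # cutoff as an angle per sample
    alpha = math.sin(w0) / (2 * q)
    cos_w0 = math.cos(w0)
    b0 = (1 - cos_w0) / 2
    b1 = 1 - cos_w0
    b2 = (1 - cos_w0) / 2
    a0 = 1 + alpha
    a1 = -2 * cos_w0
    a2 = 1 - alpha
    return b0, b1, b2, a0, a1, a2

def biquad(samples, coeffs):
    """Direct-form I: each output mixes the current and two previous
    inputs with the two previous outputs."""
    b0, b1, b2, a0, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

coeffs = lowpass_coefficients(1000, 44100)
# DC (a constant signal) passes through with gain 1...
dc = biquad([1.0] * 2000, coeffs)
# ...while a frequency far above cutoff is strongly attenuated.
high = biquad([math.sin(2 * math.pi * 15000 * n / 44100) for n in range(2000)],
              coeffs)
```

Swapping in the cookbook's other formulas for b0...a2 turns the same loop into a highpass, bandpass, or band-reject filter, which is part of why the coefficients look so magical: all the character of the filter lives in them.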

In fact, I haven't done a lot of audio signal processing theory in my university courses; unfortunately for us, it's sadly taught together with imaging stuff (inaccessible, of course). First of all, I should probably start my journey by trying to understand the relation between the frequency of a signal at a time T and the sample values observed at that moment. At the moment I don't even understand that.

There are 10 kinds of people : those who know binary, and those who don't.

2014-07-18 00:08:15 (edited by camlorn 2014-07-18 00:09:37)

Edit: I've got no theory from university.  This is all stuff I did outside school in my own time.  It's perfectly doable, if you give it time and don't stop reading.

I have dropped OpenAL.  I think OpenAL sucks and hate it more than I can say.  camlorn_audio uses OpenAL; Libaudioverse never did.  I will never ever touch it again (unless someone pays me, anyway).

You're looking at it wrong.  You cannot pull the frequency of a sound out given two samples; that is not what the FFT does, either.  The FFT gives you the frequencies of the sinusoidal components of the sound.  Figuring out "the frequency", i.e. whether this is middle C, is something that people spend years working on.  There is no mathematical tool that can give you "the frequency", only a list of where the sine waves are.  Having calculus gives you enough that, cobbling together bits of knowledge from all over the internet, you can build a bridge to a workable understanding.  I have never needed anything past single-variable calculus, nor have I yet needed to use differential equations in any form.
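To make that concrete with numpy: the FFT of even a pure 440 Hz sine hands you a whole spectrum of sinusoidal components, in which the bin near 440 Hz happens to dominate. A real recording has many strong components, and deciding which of them is "the pitch" is the hard part described above.

```python
import numpy as np

RATE = 44100
t = np.arange(RATE) / RATE              # one second of sample times
signal = np.sin(2 * np.pi * 440 * t)    # a pure 440 Hz sine

spectrum = np.abs(np.fft.rfft(signal))  # magnitude of every component
freqs = np.fft.rfftfreq(len(signal), d=1 / RATE)

# The FFT does not say "440"; it gives a value for every frequency bin.
# For this artificial signal, the dominant bin is the one nearest 440 Hz.
peak_hz = freqs[np.argmax(spectrum)]
```

Replace the sine with a chord or a voice and the spectrum grows several peaks plus harmonics, which is exactly why "the frequency" stops being a well-defined question.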

The core of the magic isn't something you get to see until after differential equations, or at least until you understand the Laplace transform.  I do not understand why what I use works, only that it does, and I spent about 6 months studying it.  There are two things that most people don't realize: DSP is as much an art as a science, at least when talking about audio, and DSP is very, very deep.  Unless you're an electrical engineer or in one of the few majors that needs it for everything everywhere, you're not going to know why everything you want to use works.  The other thing about DSP is that, in actuality, it's a discrete version of the stuff that goes into signal processing, something which has been important since the first radio, and the applications of both fields are so wide as to make audio look like an island in an ocean of possibility.  Audio and images are the least important applications.  How about telephones?  The internet?  Analysis of waves in the ocean?

My Blog
Twitter: @camlorn38

2014-07-18 06:53:18

I have dropped OpenAL.  I think OpenAL sucks and hate it more than I can say.  camlorn_audio uses OpenAL; Libaudioverse never did.  I will never ever touch it again (unless someone pays me, anyway).

What are you using to output audio easily, then? Do you connect manually to the native APIs, i.e. WaveOut or DirectX on Windows and the equivalents on Linux and Mac, or are you using a library like PortAudio?

There are 10 kinds of people : those who know binary, and those who don't.

2014-07-18 18:32:03

I'm using PortAudio for the moment.  It doesn't matter what I use, though: the audio output is completely separate from the mixer and, in fact, you could write your own if you were so inclined.  The audio output code lives in a corner by itself.  I will need to talk to it directly in the future, but PortAudio works well enough for testing.  There are unfortunately some latency issues I've not yet tried to fix, so I may have to drop it.
But tbh, that code is the simplest code in the entire thing.  Libaudioverse is somewhere around 2500 lines at this point, and the PortAudio code is somewhere around 85 of them.  Ripping it out and replacing it with the stereotypical multi-backend decider that a lot of people use (that is, it looks at what's available and automatically picks the "best" library for your platform) is going to be the work of a day, and it can fall back to PortAudio until I write better backends.  All these do is provide a way to send samples to the sound card, so it doesn't matter what you pick; there's no particular advantage in terms of DSP, only in lower latency.
Also, I think our definitions of easy are different.  Most audio APIs that provide only sample output are easy: learning to use one and integrating it with something else takes something like 2 or 3 hours tops for me, especially since the mixer does not "know" about the audio API.  I consider nothing about them difficult.  There are much harder programming problems in Libaudioverse; actually writing audio backends isn't one of them.

My Blog
Twitter: @camlorn38