Well, to provide concrete details of what I have (a more detailed and much more technical blog post is around the corner, but I want a couple more things done first):
the library consists of objects that you connect. Think of objects as boxes with ports on them--each port can have a wire connected to it. Out of each box also comes a number of wires, which you can split as many times as you want. Ports represent audio inputs and wires represent audio outputs. Each box takes audio from its inputs, does something to it, and then spits out audio on its outputs. In addition, boxes have switches and dials--the properties--that let you control exactly what each box does. Examples of boxes include the mixer (combines multiple audio sources), the panner (pans audio with or without HRTF), the limiter (prevents audio from going above 1.0 or below -1.0, which is needed to prevent odd behavior on some sound cards), the file node, the sine wave generator, and a bunch of others that I'm in the process of writing. This is the level you would work at for writing a custom simulation of your own, music software, media players, voice streaming, etc.: Libaudioverse is by no means audiogame-specific. camlorn_audio was, which was a mistake, and also a consequence of the fact that OpenAL tries to be game-specific, too.
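To make the box/port/wire model concrete, here is a toy Python sketch of a pull-based node graph. This is not Libaudioverse's actual API--all class names and methods here are invented for illustration--but it shows the idea: boxes with input ports, wires connecting them, and properties (like frequency) acting as dials.

```python
import math

class Node:
    """A 'box': pulls audio from its input ports and produces output samples."""
    def __init__(self, inputs=0):
        self.inputs = [None] * inputs  # each port holds at most one wire (a source node)

    def connect(self, port, source):
        self.inputs[port] = source

    def pull(self, frames):
        raise NotImplementedError

class Sine(Node):
    """Sine wave generator; 'frequency' is a property (a dial on the box)."""
    def __init__(self, frequency=440.0, sr=44100):
        super().__init__(0)
        self.frequency = frequency
        self.sr = sr
        self.phase = 0.0

    def pull(self, frames):
        out = []
        for _ in range(frames):
            out.append(math.sin(2 * math.pi * self.phase))
            self.phase = (self.phase + self.frequency / self.sr) % 1.0
        return out

class Mixer(Node):
    """Combines multiple audio sources by summing them."""
    def pull(self, frames):
        out = [0.0] * frames
        for src in self.inputs:
            if src is not None:
                for i, s in enumerate(src.pull(frames)):
                    out[i] += s
        return out

class Limiter(Node):
    """Clamps every sample to the range [-1.0, 1.0]."""
    def pull(self, frames):
        return [max(-1.0, min(1.0, s)) for s in self.inputs[0].pull(frames)]

# Wire two sine generators into a mixer, then limit the mixed result.
mixer = Mixer(inputs=2)
mixer.connect(0, Sine(440.0))
mixer.connect(1, Sine(660.0))
limiter = Limiter(inputs=1)
limiter.connect(0, mixer)
block = limiter.pull(256)  # one block of processed audio
```

The real library works in terms of blocks like this, with the properties controlling what each box does between connections.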
The next level up, and what most people are going to want, is the 3D simulation. You create an environment, which is an object with a bunch of properties representing things like room size, echo, and reverb--basically whatever I code. You then use this environment to create sources. The environment has a pair of properties that specify your position and orientation--you are known as the listener in audio land. Each source has properties representing its position, orientation (it will be possible to make sources that sound different when they're facing away from you, e.g. to simulate a speaker playing music), size (specified as the maximum distance at which the source is audible), and other things. While the first set of objects is not simple to use, this set is extremely so: something like 2 function calls to initialize at program start and 1 to create a source.
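One plausible reading of the size property is a rolloff that reaches silence at the maximum audible distance. The sketch below uses a simple linear rolloff as an illustration--this is my assumption for the example, not necessarily the model Libaudioverse actually uses, and the function name is invented.

```python
import math

def source_gain(listener_pos, source_pos, max_distance):
    """Linear distance rolloff: full volume at the listener's position,
    silence at max_distance (the source's 'size').
    Illustrative model only; the real rolloff curve may differ."""
    d = math.dist(listener_pos, source_pos)
    return max(0.0, 1.0 - d / max_distance)

print(source_gain((0, 0, 0), (0, 0, 0), 10))   # at the listener: full volume
print(source_gain((0, 0, 0), (5, 0, 0), 10))   # halfway out: reduced
print(source_gain((0, 0, 0), (20, 0, 0), 10))  # past the audible range: silent
```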
Finally, the library will provide callbacks. I am going to implement those tomorrow, and doing so will be trivial, but they needed some infrastructure that is now complete.
I've been working on this since the summer began, and I'm about 75% of the way to an alpha release. One of the landmarks is going to be reimplementing Unspoken on top of it--one thing Libaudioverse can do that camlorn_audio can't, even now, is integrate itself with NVDA's audio APIs. The reason it hasn't gone faster is that I needed to implement a general and flexible infrastructure, and I chose C over C++ (see the link in my last post). I can now turn out new bindings in a day at most, and Python already works (I haven't released because it's still missing essential features and the bindings are still a bit raw; nevertheless, they are completely functional). The 3D simulation is lacking in features but works, and the library has full HRTF support. I have the ability to implement literally any type of LTI filter, plus quite a few filters that aren't LTI (this means something to people who know about DSP; for everyone else, it translates to lowpass, highpass, bandpass, band-reject, DC blocker, and a few other things).
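For those curious what "any LTI filter" buys you in practice, here is a standalone example of the kind of filter involved: a second-order (biquad) lowpass, with coefficients from the widely used RBJ Audio EQ Cookbook formulas. This is a generic DSP illustration, not code from Libaudioverse.

```python
import math

def lowpass_coeffs(cutoff, sr, q=0.7071):
    """Biquad lowpass coefficients (RBJ Audio EQ Cookbook), normalized by a0."""
    w0 = 2 * math.pi * cutoff / sr
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    b = [(1 - cw) / 2, 1 - cw, (1 - cw) / 2]
    a0 = 1 + alpha
    return [x / a0 for x in b], [(-2 * cw) / a0, (1 - alpha) / a0]

def biquad(samples, b, a):
    """Direct form I difference equation: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2]
    - a1*y[n-1] - a2*y[n-2]."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

sr = 44100
b, a = lowpass_coeffs(1000.0, sr)
# A 100 Hz tone passes almost unchanged; a 10 kHz tone is strongly attenuated.
low = [math.sin(2 * math.pi * 100 * n / sr) for n in range(sr)]
high = [math.sin(2 * math.pi * 10000 * n / sr) for n in range(sr)]
low_peak = max(abs(s) for s in biquad(low, b, a)[sr // 2:])
high_peak = max(abs(s) for s in biquad(high, b, a)[sr // 2:])
```

Highpass, bandpass, band-reject, and the rest come from the same structure with different coefficient formulas, which is why a general biquad gets you the whole family.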
As for performance, I have written a benchmarking program. The benchmark can manage anywhere from 100 to 200 sources in realtime on a single core and without SSE. Exactly what I get depends on background processes and on whether the last change I introduced is doing something stupid and inefficient. In real programs, for a variety of reasons, this is going to translate to 70-100 playing sources for most people. If you create too many, the mixer will be too busy to answer requests from your code in a timely manner, consequently dropping your frame rate (there's a device lock). There are 2 optimizations I have yet to implement: one makes the library scale to the number of cores you have (it currently uses only one), and the other is SSE. I expect each of these alone to increase performance by at least a factor of two. The HRTF I am testing with is a 128-point response, which sounds twice as good as the one OpenALSoft lets you get away with; given that I'm getting this many sources, if I made the sacrifices OpenALSoft does, I'd be outperforming it already (its default HRTF takes roughly a quarter of the computing power). I can make those sacrifices; better yet, I can leave them in your hands if you want them. If you aren't using HRTF, consider the number of playing sources unlimited.
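To put the "4 times less computing power" in perspective, here is the back-of-envelope arithmetic for direct time-domain HRTF convolution. The 32-tap figure for the shorter response is my inference from the 4x factor (a quarter of 128), not a number from the post, and the helper function is invented for illustration.

```python
# Cost of direct time-domain HRTF convolution, in multiply-accumulates (MACs).
# Assumptions: direct FIR convolution, 2 ears, 44.1 kHz sample rate; the
# 32-tap response is inferred from "a quarter of the computing power".
SAMPLE_RATE = 44100
EARS = 2

def macs_per_second(taps, sources):
    # one multiply-accumulate per filter tap, per ear, per sample, per source
    return taps * EARS * SAMPLE_RATE * sources

full = macs_per_second(128, 100)   # 128-tap HRTF, 100 sources
cheap = macs_per_second(32, 100)   # hypothetical 32-tap HRTF, same sources
```

Since cost scales linearly with tap count, quartering the response length quarters the per-source cost, which is exactly the trade-off described above.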
Finally, I am not planning to sell bindings separately, and you will be able to use the library for free if your app is open source. I'm going to work out pricing schemes that depend on how much you want to sell your app for, with a very expensive license that lets you use it in as many apps as you want.