@severestormsteve1
If this were a project for the blind, I'd have used that name instead. But having a mismatch between the code and the documentation, especially when there is code in the documentation... that's annoying.
I've considered doing the weather thing. So have others. I'm not sure why no one has actually tried yet, but I've seen this idea many times. It's likely to be doable if you approach it from the perspective of someone standing outside. In the long run--post-1.0, requiring rather deep and interesting mathematics, and not part of this funding, but eventually possible--I want to have sources whose shape and size you can hear, as you do in the real world. A few packages can do it, but no one publishes their techniques, because it's the kind of thing that's worth a lot of money. The mathematics for it is something I'm only just beginning to understand, and it's well beyond calculus. But it could let you tell how "big" a storm is, perhaps.
@slj
yeah, probably something like that. The problem is that if the goal is to also include occlusion, the game or whatever has to have a reason to occlude.
What I might do is get something to the point where you can walk around in a virtual environment. I really don't want to do a full-on Shades of Doom clone at this time. Libaudioverse isn't my only project. I've also got somewhat time-sensitive work going on the Rust compiler, and I'm putting it aside so that I can produce these demos. Abandoning that for a couple of weeks isn't something I want to end up doing--it's almost finished, and "incredibly impressive on the resume" doesn't actually cover how incredibly impressive it is on the resume.
I did this because it was fun and I had the time, originally. The first commit in the repository was April 12, 2014, so almost exactly 3 years ago today. I was leading into it with some stuff before that, probably by an additional 6-8 months: various prototypes and things, plus reading a bunch of textbooks that I managed to find. Camlorn_audio used someone else's mixer, and I got interested in how it worked, so I started studying and clawed my way into expertise as a blind person. When it became evident that camlorn_audio couldn't be made to work as well as was needed, I was in a position to just do my own. I thought it would take 6 weeks and make a cool summer project; if I'm being honest, it could have, but all we would have had at the end of that is a stereo panner that happens to be HRTF, whereas what we have instead is a library that can do just about any realtime synthesis task for just about any domain you want to name (occlusion? Hah. Reverb? Sure thing...). As an interesting sidenote, it would be possible for someone to reimplement Libaudioverse's source and environment using lower-level Libaudioverse components. I'm not sure why you would. But you could.
My Blog | Twitter: @ajhicks1992