Atomics only get you so far. The problem is that you get exactly one atomic operation at a time, so as soon as more than one variable is involved, or the operation has side effects, or anything like that--well, too bad for you. You lose. There is already more than a little lockfree code in Synthizer, and proving that it works is almost impossibly difficult. Ideally (and in the future this might be the case) you'd put some Synthizer properties in atomics and others in a queue of deferred writes that get applied at the next tick, but operations like setting the buffer on a buffer generator also need to change position and so on. So for example:
You use atomics for position, loop points, and the like, but not for the buffer itself. The code sets the buffer, then the position, but because the buffer write sat in the queue, applying it resets everything after the fact: total order is gone.
And for things like per-source defaults that live as properties on the context, this approach means those can be applied out of order too, depending on the strategy.
There are only so many strategies. OpenALSoft uses a complex agglomeration of locks and atomics; Synthizer borrows the WebAudio idea of message passing. The current approach is a lightweight concept of an Invokable, submitted through a lockfree queue plus a semaphore, but that's too slow. So I'm replacing it with a lockfree ringbuffer and separating validation from setting; writes then simply apply at the next audio tick, or when manually flushed if the queue fills up.
Ideally we'd then have some sort of per-thread cached value on top of that, but reading an int property or the like is always going to be slow enough that you don't want to do it anyway, and anything I do to make reads more correct just makes them slower. Writing this, I realize I could just flush pending writes before every read, so maybe it doesn't need to be eventually consistent; but either way, the choice I made makes something about the read path less than ideal.
But the advantage is a pretty big one: there isn't really any risk of deadlocks, and in the long run it's possible to get the audio threads down to a finite, constant number of syscalls. They already are, except that memory allocation/deallocation still happens on them in a few cases, for the time being. That's fine, it's good enough, but these are the kinds of choices that someday take good enough down to the sorts of latencies you get out of Reaper configured at its most extreme settings.
My Blog
Twitter: @ajhicks1992