2020-05-26 06:04:51

@25

From the Cython docs:

The __cinit__() method is where you should perform basic C-level initialisation of the object, including allocation of any C data structures that your object will own. You need to be careful what you do in the __cinit__() method, because the object may not yet be a fully valid Python object when it is called. Therefore, you should be careful invoking any Python operations which might touch the object; in particular, its methods and anything that could be overridden by subtypes (and thus depend on their subtype state being initialised already).
By the time your __cinit__() method is called, memory has been allocated for the object and any C attributes it has have been initialised to 0 or null. (Any Python attributes have also been initialised to None, but you probably shouldn’t rely on that.) Your __cinit__() method is guaranteed to be called exactly once.

The docs then go on for several more paragraphs defining just what the limitations are.  There may be some obscure reason to use this, but __init__ is efficient enough that it's not worth it, especially since zero initialization is fine here.  One of the important restrictions is that you can't safely access other Python objects, these classes have to inherit from Python-provided base classes, and calling into Synthizer itself is in no way basic C-level initialization.  Also, whatever overhead might exist here is immeasurably small compared to creating the Synthizer-side objects anyway, and you don't get out of writing an __init__ for most of it, because the user needs to be able to provide parameters and expects docstrings in the right places.

If it's not going to be zero initialized then it might error, which means needing to raise an exception, and nothing in these docs suggests that's a safe thing to do from __cinit__ either.

I can't use char *, which is native in C, because in Python 3 str is Unicode and Synthizer uses UTF-8, so conversion is necessary.  I could use the raw Python types if I wanted to put in a lot of extra effort for absolutely no gain whatsoever other than making everyone's lives difficult; besides, Synthizer takes char * and wants UTF-8, not UTF-16, since anyone who makes a halfway sane C abstraction that has to deal with Unicode doesn't change their basic char type for Windows.  I could use Py_UNICODE *, but that doesn't solve the problem of needing to accept and hand out str objects to Python code outside the bindings, in addition to changing the underlying character type, which Synthizer would then have to expose in the public API just so Python could use it.
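For anyone curious what that conversion looks like in practice, here's a minimal sketch in plain Python -- the function names are mine for illustration, not anything from the bindings:

```python
# Sketch of the str <-> UTF-8 boundary the bindings have to manage.
# Python 3 str is Unicode; a C API taking char * expects encoded bytes,
# and Synthizer (per the post) expects UTF-8 specifically.

def to_c_string(s: str) -> bytes:
    """Encode a Python str to a NUL-terminated UTF-8 byte string for char *."""
    return s.encode("utf-8") + b"\x00"

def from_c_string(b: bytes) -> str:
    """Decode a NUL-terminated UTF-8 char * payload back into str."""
    return b.split(b"\x00", 1)[0].decode("utf-8")
```

In Cython this is typically just `cdef bytes encoded = s.encode("utf-8")` and then passing `encoded` where a `const char *` is expected, with the reverse decode on the way out.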

I don't like shutting people down, and I understand that you're trying to help, but please stop trying to help.  I've tried to be really patient with you in general, but this is continuing a pattern from the other thread that I don't have the bandwidth for, where I have to explain the things you're trying to explain to me and why I'm not using them and/or why they don't work.  If you find a bug I'm happy to address it, but I don't have the bandwidth to address code reviews in general, and I especially don't have the bandwidth to address code reviews from people who clearly have only a theoretical knowledge of the topics at hand.  And even if I did, I have neither the bandwidth nor the interest to optimize the Cython bindings to be some version of perfect.  The overhead isn't in Python, I promise you that, and the only reason I'm even bothering with Cython rather than making my life super easy and using CFFI is that it opens up some options for later around custom generators and byte streams that I won't even be touching for months.

My Blog
Twitter: @camlorn38

2020-05-27 23:13:29

Hi and thanks for all the work.
I'm curious about how generators work.
What happens if I add the same generator to 2 different sources? Or do I need to create a generator for each source separately?

If it's not what you want to do, it's probably the right thing for you.


2020-05-27 23:44:18

The hierarchy is one or more generators per source.  The generators are the what and the how of playback (I've got plans for 4 or 5 at least that cover game use cases).  The source is the where, plus controls that apply to all of its generators (panning, volume, pausing all the generators at once).

They should only go on one source at a time but at the moment nothing validates this, so you'll get weird results if you screw that up.

Buffers (fully decoded in-memory audio assets) will come at some point.  You'd use a buffer across multiple sources, with one BufferGenerator per source.  If that sounds complicated, there will be some sort of utility helper object that covers the common case of creating a source with a buffer and you can just use that instead of making a BufferGenerator and managing 3 objects.
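To make the ownership rules above concrete, here's a toy model in plain Python -- the class and method names are hypothetical stand-ins, not the real Synthizer API:

```python
# Toy model of the hierarchy described above: buffers are shared,
# generators are per-source.  Names are hypothetical, not Synthizer's API.

class Buffer:
    """Fully decoded in-memory audio asset; shared freely across sources."""
    def __init__(self, samples):
        self.samples = samples

class BufferGenerator:
    """Plays one Buffer; should belong to exactly one source at a time."""
    def __init__(self, buffer):
        self.buffer = buffer

class Source:
    """The 'where', plus controls applying to all of its generators."""
    def __init__(self):
        self.generators = []
    def add_generator(self, gen):
        self.generators.append(gen)

# One shared buffer, but a distinct generator per source.
buf = Buffer(samples=[0.0] * 1024)
sources = [Source(), Source()]
for src in sources:
    src.add_generator(BufferGenerator(buf))
```

The point is just that the buffer is the shared, expensive asset, while generators and sources stay per-playback.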

The reason this is weird is that I started Synthizer with all the weird streaming use cases people used to ask me for with Libaudioverse in mind, which made streaming easy and fully decoding the asset a little bit more work.


2020-05-28 17:59:52

I really like where this is going. If I can figure it out I might create a Rust FFI binding for this (heh, it'd be neat if it was written in Rust, but that's just me being wishful). Keep up the good work -- I look forward to using this in an actual app one day (I can see it being used in actual programs and not just games).

"On two occasions I have been asked [by members of Parliament!]: 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out ?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."    — Charles Babbage.


2020-05-28 18:48:27

@29
I wanted to use Rust, but it doesn't have const generics or super mature SIMD.  There is rather a lot of type-level metaprogramming going on w.r.t. inline buffers, to avoid pointer chasing for example.  But Rayon would have been amazing, and there have already been bugs that Rust would have saved me from (including one that's outstanding that I'm going to try to track down this weekend).  Me, audio programming, and Rust is ironic: Rust is now mature enough for something like Libaudioverse, but in doing Libaudioverse I learned why most of its internals weren't as good as they should have been and pushed the bar higher, and now the bar is yet again beyond where Rust is at, by just enough that Rust became a poor fit.

Rust bindings are pretty simple for it.  Everything in Synthizer is a handle.  You can duplicate what the Python bindings are doing with respect to properties using macros and mostly call it a day.  In fact, Python might eventually end up using Jinja2 to get Rust-style code generation in order to avoid the overhead of descriptors.  That said, you might want to wait in case I change the API in some major way--this is still young enough that I can't promise that I won't.
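For the curious, the descriptor approach being referred to can be sketched roughly like this -- the dict is a stand-in for the real C-level get/set calls, and all names here are hypothetical:

```python
# Sketch of binding properties via a data descriptor.  The _props dict
# stands in for real C-level property get/set functions on a handle.

class SyzProperty:
    """Forwards attribute access on a wrapper object to a get/set pair."""
    def __init__(self, prop_id):
        self.prop_id = prop_id
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return obj._props.get(self.prop_id, 0.0)  # stand-in for a C call
    def __set__(self, obj, value):
        obj._props[self.prop_id] = value          # stand-in for a C call

class Source:
    gain = SyzProperty("gain")
    def __init__(self):
        self._props = {}

s = Source()
s.gain = 0.5
```

The overhead being discussed is that every attribute access goes through Python-level __get__/__set__ dispatch, which generated per-class code (the Jinja2 idea) can avoid.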


2020-05-28 23:04:13

@30, understandable. You could always write your multithreaded/thread-safe code in Rust, then call it from C++. That would be messy, but then you'd have the power of both languages at your disposal. It might be something to try in the future, perhaps.



2020-05-28 23:17:38

@31
Not when the multithreaded code is the entire library and interacting with a bunch of inline aligned buffers whose sizes are determined by const generics that do computation at compile-time to work out said sizes.

It's not that bad though.  Modern C++ can be done pretty safely, at least in the sense of memory safety.  Not as good as Rust and the skill level to do it is very high, but it's still doable.

You might find the Synthizer code interesting and you might be one of the few people here who can actually read it as well.


2020-05-29 00:16:26

@32, I was actually considering that.  A while back I wrote a module in Rust, as part of another project, that used MIME types to decode audio files instead of extensions or manual file scanning; I've since lost the code.  It was quite complicated but really, really fun to write.  I was wondering if it would be worth rewriting it in Rust (or, hell, in C++, why not) and incorporating it directly into Synthizer.  It would be trivial to extend, too -- you just add a new MIME type and a decoding function to call, and the decode function returns you a Vec<i32>.  I'd need to rewrite it from scratch, though, which would take me a while, but I could do it.  Adding streaming support shouldn't be too hard, I think, though I've never written streaming functions, and adding custom MIME type handling also shouldn't be too hard.  The question is, would it be faster than what you already have?  I suppose the only way to find out would be to try it.  Of course, if I wrote it in Rust you'd then have a dependency on another library.



2020-05-29 00:34:26

@33
I'd rather not bring in something in Rust because I'm trying to keep this reasonably self-contained.  I'm also only including things that don't require credit in binary distributions (e.g. zlib, Unlicense; but not MIT, BSD, etc.).

What I have are the libs in third_party/dr_libs (I think; that path is from memory).  The approach is just to try decoding with all the implemented decoders and see which one doesn't error.  Their libs actually do properly detect format, but it's unfortunately a limited selection, and even more unfortunately stb_vorbis isn't by the same person and is...let's go with not ideal.
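That try-everything approach is roughly this shape -- the decoders below are trivial stand-ins for dr_wav/dr_flac-style entry points, not real decoders:

```python
# Sketch of format detection by attempted decode: run each decoder in
# turn and keep the first one that doesn't error.  The decoders here are
# stand-ins that only look at the file's leading bytes.

def decode_wav(data):
    if not data.startswith(b"RIFF"):
        raise ValueError("not a wav")
    return ("wav", data)

def decode_flac(data):
    if not data.startswith(b"fLaC"):
        raise ValueError("not a flac")
    return ("flac", data)

DECODERS = [decode_wav, decode_flac]

def decode_any(data):
    for decoder in DECODERS:
        try:
            return decoder(data)
        except ValueError:
            continue
    raise ValueError("no decoder could handle this data")
```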

A MIME sniffer might be a worthwhile project for you to do, because I don't remember ever finding a good implementation of the HTML MIME sniffing specification that isn't "and then we tore this chunk of code out of Chrome".  That said, I didn't ever look hard, so don't take my word that this doesn't exist.  I don't think Synthizer needs it now, and I'm not seeing a need for it in anything resembling the near-term future, but it might be needed whenever I add optional support for dynamically loading ffmpeg/libav, depending on the interface of those libraries.
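For reference, a bare-bones audio sniffer along these lines might look like the following.  This only checks a handful of well-known magic numbers and is nowhere near the full WHATWG sniffing algorithm:

```python
# Minimal audio MIME sniffing by magic bytes.  A real implementation of
# the WHATWG MIME Sniffing spec handles far more cases than this.

MAGIC = [
    (b"OggS", "audio/ogg"),   # Ogg container (Vorbis, Opus, ...)
    (b"fLaC", "audio/flac"),  # native FLAC
    (b"ID3", "audio/mpeg"),   # MP3 with a leading ID3 tag
]

def sniff_audio_mime(data: bytes) -> str:
    for prefix, mime in MAGIC:
        if data.startswith(prefix):
            return mime
    # wav: "RIFF", a 4-byte size, then "WAVE".
    if data[:4] == b"RIFF" and data[8:12] == b"WAVE":
        return "audio/wav"
    return "application/octet-stream"
```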

In general my philosophy on this topic is: this is for games.  If you are a game dev you can convert your files to wav/mp3/ogg/flac and probably already have them in one of those formats anyway.  Anything else for media loading is a nice-to-have, save for ambisonics eventually, way down the road, when I have the time to play with machine learning libraries to try to generate ambisonic decoding coefficients (yes, really, it is *that* involved; the people who have them, something something patent IP noncommercial-use-only legalese).


2020-05-29 00:48:40

@34, yeah, I know what you mean.  And it's unfortunate that no such ambisonic library exists that's Unlicense-based or something equivalent.  (I personally don't mind MIT or BSD licensed stuff myself, but I understand your stance.)  I've never really been able to get a hold of ffmpeg/libav's API -- looking at code samples it just seems ridiculously overcomplicated, and the docs aren't exactly useful, last time I checked.



2020-05-29 01:18:56

@35
I don't mind BSD/MIT either, I just don't want to be someone's dependency of a dependency.  Sometimes the trade-off with respect to that is worth it, but there is literally no source of high quality DSP code I can think of offhand that's not (L)GPL, and there's a lot of value (for myself and others) in being able to just yank bits out for other projects (like my hypothetical speech synthesizer).  If there were other worthwhile things to pull DSP code from I'd be a bit less strict on this but there's not: it's either cryptic in the extreme without comments, licensed under the (L)GPL, or slow because it's educational. Usually all 3 (you would think educational and cryptic without comments wouldn't happen. You would be wrong in this case).  My hypothetical networking solution will have BSD stuff in it if/when I get that far.

Someone needs to take on the tooling required to let everyone put "This bla bla bla notice must be included in all substantial portions of the software" in the software without auditing the entire dependency graph, but somehow no one has, at least not outside the commercial space.

For ambisonics, look at Resonance.  They have an implementation in JS that is (mostly) fine.  Ambisonics actually turns out to be a big disappointment, though, because in the typical manifestation you don't get the interaural time difference.  A sighted person would never notice, but if you're listening to ambisonics demos and thinking "huh, that's kind of low quality"--well, it is.  But I think you can get around it by just running 2 decoders and computing the ITD yourself.  Resonance also has some coefficients, but it's seemingly not possible to duplicate the process they used to build them without running afoul of a noncommercial-use-only HRTF dataset that Google somehow got permission to use, and I am not entirely clear on what the license of those coefficients is if separated from the library, so I've decided not to touch them with a 10-foot pole for now.
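For anyone wondering what computing the ITD yourself involves, one common first approximation is the classic Woodworth spherical-head model.  The head radius and speed of sound below are typical textbook values, not anything taken from Resonance or Synthizer:

```python
import math

# Woodworth spherical-head approximation of interaural time difference:
# ITD = (a / c) * (theta + sin(theta)), valid for azimuths in [0, pi/2].
# a = head radius in meters, c = speed of sound in m/s (typical values).

def itd_seconds(azimuth_rad: float, head_radius=0.0875,
                speed_of_sound=343.0) -> float:
    """ITD for a distant source at the given azimuth (0 = straight ahead)."""
    theta = abs(azimuth_rad)
    return (head_radius / speed_of_sound) * (theta + math.sin(theta))

def itd_samples(azimuth_rad: float, sample_rate=44100) -> float:
    """The same ITD expressed as a (fractional) sample delay."""
    return itd_seconds(azimuth_rad) * sample_rate
```

At 90 degrees this lands around 650 microseconds, i.e. on the order of 30 samples at 44.1 kHz, which is the delay you'd apply between the two decoder outputs.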


2020-05-29 12:16:56

Hi Camlorn,
This is great stuff, good work, as usual!!

With respect to bindings for other languages, is that something you'd want to include in your repo? Or other peoples'?

I'm using Dart for everything now, and would love to write bindings (when you say it's safe and OK to do so). While I'm perfectly happy for you to merge said bindings into your repo, it obviously then relies on me maintaining them, unless the API becomes so stable that they never need touching again.

Probably something you've already thought about, but just in case you haven't, there's even more food for thought. smile

Take care,
Chris Norman
Selling my soul to andertons.co.uk since 2012.

2020-05-29 16:16:54

@37
Bindings that make it into the official repo are bindings that I'm responsible for, and I can't afford to be responsible for all of them.  Someday, if/when this is mature enough, I'll probably start moving a bunch in.  But for a while it's just going to be Python and after that I'm going to pick and choose quite carefully.  The library itself is a piece no one else can do, but bindings can be taken on and maintained by anyone (at least once there's a manual, anyway).

Unfortunately, though, you'll probably be disappointed with Dart.  This doesn't run on phones right now, and it probably won't for a long time.  For starters I'd have to buy a mac.  If someone is interested in making it happen, I can help them make it happen at some point, but it'll either be really easy or really hard with basically no middle ground.  And if you're hoping to use this on the web via webasm, that definitely won't happen for a long time because webasm puts a lot of restrictions on what I can do for desktop/phones, and don't even get me started on what you have to do to stream data or fake a filesystem.


2020-05-29 21:58:04

@38: I'm not planning to use it on a phone. It may become useful when Flutter's desktop support gets good, or maybe there's some UI toolkits that are game friendly that I can use. I've not looked into any of this yet. Just saying, I'm more than happy to make Dart bindings if and when the time comes.


2020-05-29 22:14:32

@39
I looked into desktop Dart and I couldn't even figure out how to get it to compile without a console window, so this is probably further away than you think.  Alternatively, maybe it can be done and it's just undocumented and I don't know how.  But I'd love desktop Dart, I think, if it could be done, and I believe I could get a UI framework going with it fairly quickly (at least for Windows--others would have to contribute other platforms).


2020-05-31 17:08:53

@40
Desktop stuff is pre-alpha ATM. I just got a blank window when I tried. I reckon it'll be good when it gets there though.


2020-06-07 07:30:54

Just pushed a Source3D object to the repository, bringing Synthizer into alignment with WebAudio with regard to panning.  We've got all the distance model stuff WebAudio has, plus something I'm calling the closeness boost, which lets you make sources jump in volume when they get close to the listener (basically think of this as an "I'm close enough that I might want to interact" hint).
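For reference, the WebAudio distance models being matched are, per the Web Audio API spec, roughly the following (the closeness boost is Synthizer-specific and not shown):

```python
# WebAudio-style distance model gains, as sketched from the Web Audio API
# spec's PannerNode distance models.  d = distance, ref = refDistance,
# max_d = maxDistance, rolloff = rolloffFactor.

def linear_gain(d, ref=1.0, max_d=10000.0, rolloff=1.0):
    d = min(max(d, ref), max_d)  # clamp to [refDistance, maxDistance]
    return 1.0 - rolloff * (d - ref) / (max_d - ref)

def inverse_gain(d, ref=1.0, rolloff=1.0):
    d = max(d, ref)
    return ref / (ref + rolloff * (d - ref))

def exponential_gain(d, ref=1.0, rolloff=1.0):
    d = max(d, ref)
    return (d / ref) ** (-rolloff)
```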

Example.py is updated, and there are something like 10 new properties across the context and the sources.  In case someone wants to try this: the sources copy the context's versions of the distance model settings on creation, so you can configure defaults there and have new sources pick them up.

This isn't on PyPI yet.  I'm probably going to finally invest the time tomorrow to get CI pushing wheels on tags, so that whenever I post one of these progress updates anyone using the Python version can just pip install it.

We're close.  We're not there, but we're close.  There are a few things we need yet, starting with reading file types other than wav, and there is, for example, a notable lack of stereo sources for game music etc.  But anyone familiar with WebAudio should now be able to hook this up and get something out of it, even if it's a bunch of interesting bug reports.

A manual will be coming soon as well.  It's finally at the point where it's worth me investing the time to do that, but it'll probably be a week or two, because it's going to take a few hours to bootstrap GitHub Pages in CI and write some meaningful content for it.  If you're following along and thinking "I don't even know where to start", that's hopefully going to change soon.


2020-06-07 08:54:31

It looks really nice.  I really wish I could help somehow, but I don't know C/C++.
But hey, if I can help in some way, do let me know smile

If you like what I do, Feel free to check me out on GitHub, or follow me on Twitter

2020-06-07 16:21:20

@42 congrats.

bitcoin address: 1LyQ3hziMC2DTnCtgM3V1zfuZ73P3CYT9P

2020-06-09 12:48:06

Will it be possible to use it on Linux or on other platforms?


2020-06-09 15:11:07

As far as I understand it, it's supposed to be cross-platform capable, so yeah.


2020-06-09 16:31:22

I don't have Linux so I can't test there, but at least getting it building is on the to-do list.  I'm not actively breaking platforms and there will be CI, but I can't test Linux and Mac, and they're not a high enough priority for my personal projects for me to change that, so in the end it will depend on volunteers who know enough C++ to submit patches.


2020-06-09 16:32:07

Is there already a working version for Python available?

best regards
never give up on what ever you are doing.

2020-06-09 16:38:17

Yeah, pip install synthizer if you're on a 64-bit Python.  I just don't have it automatically updating yet, so when I make changes it's a long manual process; that means the source that takes position directly, instead of azimuth and elevation, isn't on pip yet.


2020-06-09 16:58:02

So Synthizer is not available on 32-bit Python?
