2020-09-20 22:07:40

@nolan
Just set up crates.io if you want to add me as co-owner on stuff.

Because they only let us use our GitHub accounts, I'm camlorn there as usual.  Shame, though; I was hoping to move away from that and toward ahicks, but I guess we can't have everything.

My Blog
Twitter: @ajhicks1992

2020-09-20 22:58:36

OK, I'll add you tomorrow when I'm back at my workstation with crates.io credentials.

2020-10-01 05:24:27

A quick Python question. Looking at the Synthizer manual, it says that property reading is slow. I assume this also applies to retrieving the seek position of a generator? I have a situation where there are different audio files of the same length (music layers). If I start playing one layer and decide to add another one some time in the future, is it not recommended to read the first file's seek position in order to start the second layer at the correct position?

Trying to free my mind before the end of the world.

2020-10-01 15:38:48

Yeah, you don't want to read position right now.  I eventually plan to optimize that property specifically.  But even if I do, you still run into another problem: you might get a value slightly in the past.  That's true of almost any audio library in existence.  At the moment there's no way to sync a bunch of BufferGenerators together because that's a rarely needed feature.  Usually it doesn't need to be sample accurate and you can just start them all in a loop and manipulate them through gain (i.e. silent ones are gain=0, and you fade them in and out).  As long as you're not doing it for literally everything in your game, the cost of having a few silent BufferGenerators playing here and there is negligible.
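The gain-based layering described above can be sketched in plain Python. This is just the fade math you would feed into each generator's (or source's) gain property once per update tick; the actual Synthizer calls are omitted, and the tick-based approach is an assumption about how a game loop would drive it, not anything Synthizer prescribes.

```python
# Start every layer at once, keep inactive layers at gain 0,
# and fade a layer in by ramping its gain over successive ticks.

def fade_gains(duration_s, tick_s):
    """Yield one linear gain value per tick, ramping up to 1.0."""
    ticks = max(1, round(duration_s / tick_s))
    for i in range(1, ticks + 1):
        yield i / ticks

# Fading a layer in over 1 second with 100 ms ticks gives ten steps:
ramp = list(fade_gains(1.0, 0.1))
print(len(ramp), ramp[0], ramp[-1])  # 10 0.1 1.0
```

In practice you would apply each yielded value to the silent layer's gain on each tick, which keeps all layers sample-aligned without ever reading a seek position.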

If you truly need sample accurate BufferGenerator syncing, you'll probably get it eventually.  Just not now, and not through BufferGenerator.  The goal, after reverb, is to cover the common use cases with generators specific to those use cases.  So, for example, you might get MultibufferGenerator or something like that out of it.  Libaudioverse tried the "be incredibly general" approach, and it's not possible to do that well without more resources than I have (thus me not continuing it and starting over).

My Blog
Twitter: @ajhicks1992

2020-10-02 13:47:08

Hey @Camlorn, is there currently a way to change the rate of a playing sound? Was going to try making something with engine fx for a bit of lighthearted relief, but I'm not sure it's possible yet.

Cheers.

-----
I have code on GitHub

2020-10-02 16:18:26

There's internal support for it, but it looks like I never exposed it.  I can probably get you that this weekend.  It's the kind of thing that should take 10 minutes being as it's already there.

My Blog
Twitter: @ajhicks1992

2020-10-02 19:35:37

At the risk of being slightly off topic, how hard would it be to use Synthizer from Lua? The idea I had was to use it as the audio engine, as a replacement for the Lua audio plugin. I assume that plugin uses some code from a DLL, but I'm not sure from which audio library.

2020-10-02 19:43:57

Probably not very hard.  It can be called from anything that can call C stuff, and the API is relatively straightforward and as likely as not to stay that way.  But I don't know enough about Lua to write bindings or anything; in fact, Lua doesn't really have a single concept of bindings in the first place, so you're on your own when it comes to actually doing it.

There are at least two ways of doing it that I know of, and the standard one is that the embedding application is responsible for binding its dependencies, so you'll need to figure out what, if anything, the thing you want to use it in offers for binding C stuff.  There are a lot of options and no real standard, so it quickly turns into "Lua bindings for Synthizer that work with only these 5 apps" or whatever.  The only "cross-platform" way is to write a C module, which maybe I'll do eventually, but it's time-consuming and I don't think it can be automated, so you'll have to find out what's available in the environment you're in and go from there.

My Blog
Twitter: @ajhicks1992

2020-10-02 22:22:06

@256
Lovely, thank you mate. I really appreciate that.

-----
I have code on GitHub

2020-10-02 23:26:24

Synthizer will be fine as long as you have CFFI aboard, and MUSHclient Lua with CFFI embedded is working fine, so that's not a problem. For non-CFFI solutions, though, a C wrapper will be required, as was explained earlier.

2020-10-03 15:01:04

Hello!
So, as some of you may know already, I'm trying to create a game engine for the .net game developers out here, and I know there are some. Yeah, you might say that .net is not suited for gaming, but that has been untrue for some time now. The simple fact that Unity exists proves that beyond any doubt, so why not for the blind as well? At this moment I haven't written any specs; I'm gathering the components I think are best for what I'm trying to do, then I'll see what comes of it.
Anyway, enough rambling. Thing is, I want to use Synthizer as the audio engine, like Project Lucia is using BASS. Would that be OK with you, @camlorn?
If so, are there any legal issues I need to worry about? License conflicts, anything at all? Or is it something like the ECI Eloquence TTS binaries, where I'm not allowed to distribute anything made with Synthizer? Believe me, this is the first time I'm making anything public; I don't even know how to use GitHub, even though I've used git quite a lot. License problems I can't grasp; for example, why can't I sell products that include the BASS library? Maybe it's because English is not my native language and I can't find such licenses in Romanian. I really don't know.
Well, if everything is all right and I can use it, I must write bindings for it in the .net ecosystem. I don't look forward to making it a cross-platform assembly, though: injecting the right library depending on the platform used would mean I have to have cached all the library binaries for all the specific platforms I want to support. I think a thorough Google search would answer all those questions in due time, so I don't think I will have very big problems dealing with it.
So, does anyone know where to find the docs? I kind of need the C API docs: the signatures and meaning of parameters and possible values (where applicable), etc.
And another question:
In order to create a sound and add it to the environment, you need to create a context, then a buffer generator based on the context, then create a buffer from a file, assign it to the generator, create the source from the buffer (didn't really understand that), add it to a player, then play it. I'm sorry if I misunderstood some things, as I only looked a bit at the examples. So, must one always create the buffers from files, or an internet stream, or whatever, add them to the generator, make the source, add it to the player (or must I create a new player too?), and then play?
If so, isn't that a bit weird? I mean, with BASS, you init the lib, which would perhaps do something like initializing an internal Synthizer-like context, resource pool, whatever; it would probably create its own kind of what you call a generator, but all that's hidden, right? Then you tell it to give you a channel handle, which is perhaps the buffer along with the source, and when it plays, it adds the channel handle to something resembling a global Synthizer player in functionality?
And... what is a source, and why do we use a generator, etc.? Maybe it's because it's a new audio library, but it's really strange to me; then again, I've only used BASS for the whole time I've worked with sound in .net. Though now that I look at it, your lib is kind of familiar from somewhere, perhaps from an example I've seen about OpenAL Soft in Python? This API reminds me of something, and that's the closest I could think of, though I never actually used anything besides BASS for audio, even though I think this lib has the potential of surpassing it completely. A few more file formats and we're almost there.

2020-10-03 15:46:39

bgt lover wrote:

License problems I can't grasp; for example, why can't I sell products that include the BASS library? Maybe it's because English is not my native language and I can't find such licenses in Romanian. I really don't know.

You are allowed to sell such products, but you'll have to purchase a shareware license from BASS first, which costs a few hundred bucks. Not that expensive IMHO, since you're going to get quite a few bucks back from your own product as well.

bgt lover wrote:

So, does anyone know where to find the docs? I kind of need the C API docs: the signatures and meaning of parameters and possible values (where applicable), etc.

Sure, as can be seen in this thread, the docs are located under https://synthizer.github.io.

2020-10-03 17:55:10

Lua audio is using Bass.

Facts with Tom MacDonald, Adam Calhoun, and Dax
End racism
End division
Become united

2020-10-03 18:50:30

Thanks @262! This thread is so huge that searching for anything in it is impossible.
Now about the C API: I think I'm slowly grasping it, and it's actually easy enough to use. If I didn't know better, I'd honestly say it was designed from the start to be compatible with the horrible DLL support implemented in BGT, but we all know it can't have been made so by design, since every developer who knows what he's doing, like @camlorn here, knows enough to stay away from BGT like the plague.
As I read the documentation, I will maybe find things I don't understand, so I need to post somewhere. What do you guys think: should I make a new topic, or post in this one?

2020-10-03 19:20:04

It's not intentionally designed to be compatible with BGT, it just so happens that a lot of otherwise good designs for audio fall out that way.  I expect that features in the near future won't keep that compatibility.  In particular structs are looming on the horizon.

Generators "generate" audio.  Sources are where audio comes from.  Sources are like speakers.  Generators are like things you plug into the speakers.  They're separate because you can have more than one generator plugged into a source at once, and, in future, you'll be able to put effects on them.

Buffers are separate because you can share them between sources and cache them in memory to avoid reloading the file all the time.  If BASS doesn't have something like this, that's surprising to me, because for games you really want to be able to do that.  Otherwise you'll get all sorts of glitching pretty early on, because hard disks aren't that fast.

You're right that it's somewhat odd to have 3 objects but, if you combine them, you lose a ton of flexibility.  For example, Synthizer will eventually offer a thing that knows how to load a directory in parallel in the background, so that you can ask it for buffers and they're just already there ready for you.
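The buffer-sharing idea above can be modeled in a few lines of plain Python. The class and method names here are illustrative only, not Synthizer's actual API; the point is just that a decoded buffer is one object that many generators can reference, and a cache hands out that same object instead of decoding twice.

```python
# Toy model: one decoded Buffer shared by many generators via a cache.

class Buffer:
    def __init__(self, path):
        self.path = path  # a real buffer would hold decoded PCM data

class BufferCache:
    def __init__(self):
        self._buffers = {}

    def get(self, path):
        # Decode on first request, then hand out the same object forever.
        if path not in self._buffers:
            self._buffers[path] = Buffer(path)
        return self._buffers[path]

cache = BufferCache()
a = cache.get("explosion.wav")
b = cache.get("explosion.wav")
print(a is b)  # True: both "generators" share one decoded buffer
```

This is why combining buffer, generator, and source into one object would cost flexibility: the cache only works because the buffer is its own shareable thing.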

Licensing is fine.  Synthizer is public domain.  You can run with it and sell it and not even credit me.  I'd appreciate you not doing that, but you can.

My Blog
Twitter: @ajhicks1992

2020-10-03 21:52:49

@camlorn: No, no, surely I won't do that. In fact, why the hell would I want to sell an audiogame engine that's... just made for the blind? I won't get any money worth my efforts anyway, so I might as well offer it for free, and open source too! I only said that as an example of the part of the BASS license I didn't quite understand. Of course I'll give you credit for it, as will, I think, any dev in this community competent enough to use the lib. We know it's not our work, so I see it only as bad faith for someone not to credit you, just as I'd say about someone using my as-yet nonexistent engine without at least saying they used it: just bad faith.
Yeah, I knew in a way that the more advanced things in the API would be more complicated, but I really hope there won't be too many structs and such, since those constructs are a bit hard to bind, at least for languages that don't have that concept, or, if they do, don't pack them in memory the way C expects. From what I heard, even Rust, though it uses structs and compiles to native code, doesn't respect C conventions for field alignment by default, unless you prepend the struct with some attribute whose name I forgot. Python, for example, needs a class with a dictionary inside denoting the fields of the struct or something like that; what I know for sure is that it uses the struct module and has quite enough boilerplate for a binder to deal with.
.net, however, needs almost nothing for that, because, as in Rust, you just have to specify an attribute from the System.Runtime.InteropServices namespace; the struct type already exists in the CLS, so it's just an attribute in front of the declaration. It seems I don't need to fight the language so much over those structs, but please, if you can, try to keep the API as simple as possible.
By the way, is it supported to load in a decoded byte array of samples and have it played, as if the decoder had already run and its output were those bytes? Because if so, you won't need to integrate all file formats directly into Synthizer; instead, additional formats besides the core ones could be considered plugins, and you could decide whether they'd be included in the main repo or get a new plugins section in your documentation. BASS does this too, and now they have all sorts of weird formats and stuff, even MIDI, made by the community, I think?
And yeah, it seems that BASS doesn't have that buffer caching/sharing capability directly implemented, but one could create a channel with the file loaded, then use BASS_ChannelGetData to get the buffer data, and then, I suppose, cache that. Though for that buffer to be complete and ready for caching, i.e. the channel handle is not a streaming channel, the file must be completely loaded into memory, so BASS_SampleLoad needs to be used, and that has limitations regarding file sizes, encoding, and sample rate, obviously. So, if nothing has changed since I last used it, I don't think such a thing exists in BASS, except perhaps with an add-on.
So yeah, I think that, with a little time, Synthizer will become the next BASS, if not better, and I could see that happening in the near future.
Now, on to another thing:
How many platforms does this build against out of the box? Windows, what else? macOS, Linux? UWP? Android, iOS, watchOS, Xbox?
For example, MonoGame, the basis for my future engine, is able to deploy games on Android, iOS, and UWP (through Xamarin), as well as several consoles, including the PlayStation, Dreamcast, and Nintendo Switch (I believe); I don't know the others off the top of my head and would have to look them up. Thing is, I couldn't care less about those weird consoles and stuff, but I'd like to not include anything that would make a game unbuildable on platforms MonoGame runs on, even though I won't support them explicitly. That's why I'm trying to keep the list as close as possible to only .net core libs, if not only Microsoft components and NuGet packages, like gRPC for networking, since there's a NuGet package that's inherently cross-platform because it runs on .net standard. Another such thing would be the BulletSharp library, a wrapper around the Bullet physics engine. That too is written in C++, but I hope it builds at least on Android, iOS, and UWP (x64/ARM).
So, how easy is it to compile and link Synthizer against new platforms? For example, is there a part known to be cross-platform as far as compilers go, and a single place where most of the platform-specific stuff resides, like playing sounds and such? If it's too hard to port to consoles, I sincerely could not care less, since I don't have one anyway, but I'd like to know whether such ports are possible and which platforms are already supported.

2020-10-03 22:13:06

I'm not saying you'd run with it and not credit me, merely that you could.

Binding structs is easy in any language that I can think of.  Your knowledge is out of date for Python, and also not correct in the first place, given that even incredibly ancient versions of Python 2 had ctypes which could handle structs no problem.  I'm actually directly responsible for Rust not following C layout by default (I contributed that optimization to their compiler).  If you're doing it by hand you just add one extra line, but no one does it by hand because you can just run rust-bindgen on the C headers and a binding comes out the other side.  Surely .net has something similar.  BGT is the only thing I can think of where binding structs is overly difficult.  Obviously you have to keep your binding up to date if the library changes, but that's true even without structs.
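To illustrate the ctypes point above: declaring a C struct in Python is a few lines, and ctypes reproduces the C compiler's layout and padding for you. The struct below is a made-up example for illustration, not part of Synthizer's API.

```python
# Binding a C struct with ctypes: declare the fields once and the
# memory layout matches what a C compiler produces for the same struct.
import ctypes

class Vec3(ctypes.Structure):
    _fields_ = [
        ("x", ctypes.c_double),
        ("y", ctypes.c_double),
        ("z", ctypes.c_double),
    ]

v = Vec3(1.0, 2.0, 3.0)
print(ctypes.sizeof(Vec3), v.y)  # 24 2.0
```

Such a class can then be passed by value or by pointer (`ctypes.byref(v)`) straight into C functions loaded via `ctypes.CDLL`, which is the whole of the boilerplate involved.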

Synthizer will work on Linux and Mac as soon as I have the bandwidth to tweak the build system and as soon as people with those platforms then follow that up by testing it.  This is on the roadmap for after reverb.  It can probably be extended to the game consoles by someone who knows how to do that without too much work as well, though I doubt that will happen because game console devkits are expensive and hard to get your hands on and they already have good options for this stuff.

Custom formats will happen, but not for a while.  For games, using flac, wav, or mp3 is fine, and you can just convert your assets with ffmpeg.  I'm more interested in having a good overall story than I am in investing time in making that work well.  Synthizer isn't trying to be everything to everyone.

My Blog
Twitter: @ajhicks1992

2020-10-04 00:59:36

BASS has buffering support built-in, they're called samples over in BASS' infrastructure. They get loaded and decoded in-memory and you can create up to X independent channels of those, where X can be specified freely by the application using BASS. Changing data on the sample will then again influence all channels, which allows for quite flexible data management.

2020-10-04 01:23:28

@chrisnorman7
Version with pitch bend is up; buffer_generator.pitch_bend = 2, for example.  It's not well tested, so you'll either enjoy it in a "wow, this works great" way or a "wow, that sucks and broke" way, but either way, enjoy.  It's not available on other generators yet; that's a more substantive change that I'm not sure how I want to handle, being as I need to keep some options open moving forward.
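Assuming pitch_bend is a simple playback rate multiplier, which is how it reads above but is not confirmed anywhere in this thread, its effect on timing and pitch works out like this:

```python
# A rate multiplier of 2.0 plays the buffer twice as fast, which halves
# the duration and raises the pitch by one octave (12 semitones).
import math

def bent_duration(duration_s, pitch_bend):
    """New playback duration after applying a rate multiplier."""
    return duration_s / pitch_bend

def semitone_shift(pitch_bend):
    """Pitch change in semitones for a given rate multiplier."""
    return 12 * math.log2(pitch_bend)

print(bent_duration(10.0, 2.0))    # 5.0 seconds
print(round(semitone_shift(2.0)))  # 12 semitones, i.e. one octave up
```

Under this model, a bend of 2 ** (1 / 12), about 1.0595, would give a one-semitone shift, which is handy for musical uses.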

@268
Yeah, I mean I figured that had to be there somewhere.  For what it's worth--though you weren't the one asking originally--the difference with Synthizer is that buffers are immutable: you can't just change the buffer to change all generators using it, because if you want to load different data you need a different buffer.

My Blog
Twitter: @ajhicks1992

2020-10-04 04:02:02

@269, can you explain your rationale for contributing that to rust where it doesn't follow C conventions for structs?

"On two occasions I have been asked [by members of Parliament!]: 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out ?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."    — Charles Babbage.
My Github

2020-10-04 04:15:14

@270
Yes, I wrote this blog post explaining what I did, why, and how I did it.  Some of that's probably out of date in terms of compiler internals.

Short version: I asked a question on IRC, it spun out of control and became a 6 month project back when I had time to randomly take on 6 month relatively full time projects, and now Rust  auto-reorders fields to eliminate "holes", which makes your types smaller without you having to hand-sort fields by alignment and size.  In the process I also cleaned up a huge chunk of the compiler, which unblocked future work such as repr(transparent) that's not mentioned there that other people have run with since.
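The padding win from reordering can be shown with a small size calculator. This is a simplified model (it assumes each primitive field's alignment equals its size, which holds for common primitives on mainstream 64-bit targets, and it ignores nested structs), not the compiler's actual algorithm.

```python
# C-style struct size: pad each field to its alignment, then round the
# total up to the largest alignment in the struct.

def struct_size(field_sizes):
    offset = 0
    max_align = 1
    for size in field_sizes:
        align = size  # simplification: alignment == size for primitives
        max_align = max(max_align, align)
        offset = (offset + align - 1) // align * align  # pad to alignment
        offset += size
    return (offset + max_align - 1) // max_align * max_align

# A struct of (u8, u64, u8) in declaration order wastes 14 bytes of
# padding; sorted by alignment, the same fields fit in 16 bytes:
print(struct_size([1, 8, 1]))  # 24
print(struct_size([8, 1, 1]))  # 16
```

This 24-to-16 shrink on a three-field struct is exactly the kind of hole elimination the auto-reordering performs without the programmer hand-sorting fields.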

Rust types weren't actually very C compatible in the first place--for example Option<T> only has a discriminant sometimes--so it wasn't a major loss, especially since even before that change you were supposed to be using repr(C), and as the blog post mentions, actually doing it only had fallout across like 5 or 10 libraries for all of crates.io.

Will let you or anyone else read the post for more than that, no sense rewriting the entire thing for here.

My Blog
Twitter: @ajhicks1992

2020-10-04 12:17:47

@269
Great, thanks for the change, but it does seem to be a little broken haha.

It seems that values >= 1.002 and <= 0.991 start distorting. I haven't had a chance to test it on my proper speakers yet, but I've never heard that on my laptop speakers before.

BTW, is there any facility within Synthizer to record the output like there was on libaudioverse? Thinking it might be useful for you to be able to get direct recordings of our outputs. No worries if not, there's always alternative recording solutions.

Thanks again for all your hard work.

-----
I have code on GitHub

2020-10-04 15:54:43

Minor console correction: UWP lets you target the Xbox One. I think you can also build natively for it with DX12 and such if you're willing to jump through certification hoops. Since miniAudio already covers that use case, it may already work.

I made a failed attempt at building Synthizer under WSL, despite ensuring that I was on Clang 10. I'm on my iPad and don't have the error messages at hand, but they looked obscure enough that I could probably fix them if my C++ knowledge weren't a couple of decades behind. Happy to put it through its paces on Linux and macOS once it builds there. Having gotten Pulse working in WSL, I don't think it's performant enough for more than the simplest of use cases. I do have a VM for accessing my LVM drive, though.

2020-10-04 17:02:55

@272
I was testing with quite large values here. I will investigate.  Maybe there's an issue with fractional values or something.  As I said, it was never heavily tested: I wrote it to prove a design, and then put it on the back burner with the intent to pick it up again later when I'm ready to do the doppler effect.

Adding something to record data is doable, but not going to do it now because it's fiddly.  I forgot Libaudioverse even had that, but probably it was done wrong, because back then I was doing things just right enough to work but just wrong enough to be dead ends.

@273
Still have to have an Xbox, but you're right; I forgot that one doesn't need a dev kit now.

Getting this going on Linux or Mac isn't something I think will be very hard.  It's just an issue of priorities: I effectively have one C++ dev working on the project, but if I hypothetically had more then I'd effectively only have one "I can write synthesis stuff" dev, so no matter how you slice it getting to having a good one-OS story is more important to me.  And work isn't leaving me with enough energy for 8 hour Saturday coding sessions right now (it's a great job, just a very busy one for the last month, and probably for a few weeks yet, but hey--I'm getting paid to do Rust).

My Blog
Twitter: @ajhicks1992

2020-10-04 17:13:00

@272
I think I'm going to need an example file and a specific value that's broken. I just tried to duplicate it here and I'm afraid that I cannot.  Also tried sweeping it and it worked fine there as well.  It's not the best pitch bend algorithm in the world, but it doesn't seem to be distorting here.

Those specific values are magic because Synthizer pretends you didn't specify pitch bend if you're closer than that to 1.0.  Pitch bend is overall very slow, and for things that try to do it dynamically it would be nice not to pay that cost just because something landed on 0.999 or so.
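That dead-zone behavior can be sketched as a simple threshold check. The epsilon here is a guess inferred from the 1.002 and 0.991 values mentioned earlier in the thread, not Synthizer's actual constant, and the function name is invented for illustration.

```python
# Treat bends close enough to 1.0 as "no bend", so dynamic code that
# lands on 0.999 or 1.001 never pays for the expensive resampling path.

NO_BEND_EPSILON = 0.002  # hypothetical threshold, not Synthizer's value

def effective_pitch_bend(requested):
    if abs(requested - 1.0) < NO_BEND_EPSILON:
        return 1.0  # skip the pitch bend path entirely
    return requested

print(effective_pitch_bend(0.999))  # 1.0: close enough, treated as no bend
print(effective_pitch_bend(1.5))    # 1.5: actually applied
```

This also explains why audible changes only start at the edges of that window: inside it the bend is silently discarded, so the first values that actually engage the algorithm are the ones just past the threshold.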

My Blog
Twitter: @ajhicks1992