2021-01-06 01:37:00

@348
Ha! And there was me just living with it.

Want me to test with a directory of MP3 files too? Or do you have the data for your own tests?

-----
I have code on GitHub

2021-01-06 01:45:44

@351
At this point, your testing isn't of much value.  In practice, I don't even need a directory of files anyway; just testing one should be good enough.

My Blog
Twitter: @ajhicks1992

2021-01-07 20:52:28

I think that streaming generators, when you first add one to a source, play the same 50 ms or so of audio twice.
I was also wondering why WAV decoding was about as slow as FLAC. I thought that WAV decoding was simple, basically just do a few checks and copy the raw data into memory.

2021-01-07 20:56:21

I will look into streaming generators, though they haven't been touched much since they were originally implemented.  WAV decoding is simple.  That's why I'm surprised it's slow, and I will be looking into it in the very near future.  The curse of having a day job and developing the library is that until the library is complete, you can't move on to using it, so it's a shame this wasn't brought to my attention sooner.  The most that has been said is "can you do the complicated partial loading thing", but that's different from "I think WAV loading is 10 times slower than it should be".

My Blog
Twitter: @ajhicks1992

2021-01-09 20:25:26 (edited by camlorn 2021-01-09 20:25:51)

OK.  Decoding performance is now 2x better.  For anyone else not following along, everything you'd expect to be pausable can now be paused, and everything you'd expect to have a gain has a gain.  There are also some other changes; you'll want to read the release notes.

The specific problem with decoding has to do with resampling.  If you resample your content to 44100 Hz it will run way faster.  I'm not sure if I'm going to document/maintain that guarantee, but on top of the performance improvement we just got it's another 5x or so.
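If you want to find which of your assets would miss that 44100 Hz fast path, the sample rate of a WAV file can be read with Python's stdlib wave module.  This is just a sketch (the function name is mine, not Synthizer API, and it only works for uncompressed PCM WAV files):

```python
import wave

def wav_sample_rate(path):
    """Return the sample rate of a PCM WAV file, in Hz."""
    with wave.open(path, "rb") as f:
        return f.getframerate()

# Hypothetical usage: flag files that would need resampling.
# import glob
# for p in glob.glob("sounds/**/*.wav", recursive=True):
#     if wav_sample_rate(p) != 44100:
#         print(p, "is not 44100 Hz")
```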

The decoding improvements currently only apply to buffers because they're experimental-ish.  And because they're experimental-ish, if you can notice degraded audio quality, say something.  For further improvement I may need to fork a library, but I'm trying to avoid that so hopefully this is good enough.

Not sure if I'll get to investigating StreamingGenerator or not this weekend, but I'll try for it.  Events will also be a thing in a week or two for anyone waiting on that.

My Blog
Twitter: @ajhicks1992

2021-01-09 21:02:54

@355
Great stuff, thanks mate.

Going to have me a peruse now!

-----
I have code on GitHub

2021-01-09 21:12:06

Sorry for the double post, but when you say the Synthizer release notes, do you mean [url=]this link[/url]? Only there doesn't seem to be as much information on there as your post implies.

I also tried looking at the release on GitHub, but not much on there either.

Cheers,

-----
I have code on GitHub

2021-01-09 21:26:24

Yeah, that's because I'm an idiot and forgot to deploy the docs.

My Blog
Twitter: @ajhicks1992

2021-01-09 21:57:01

Can you add the ability to get a streaming generator's length? The ability to seek isn't super helpful if I don't know how far I can actually seek.

2021-01-09 21:59:26

No, I can't.  If I do that, it takes lots of other potential future applications off the table, and for some files, you can't know without decoding the whole thing.

You don't reliably get sample-accurate seeking with StreamingGenerator anyway, and I have no idea what you're trying to do that needs this.  I feel like you're trying to use the library for things it's not meant for, or in ways that fight the design.

My Blog
Twitter: @ajhicks1992

2021-01-14 21:46:42

I think that something is up with panned source.
When I run this code:
from synthizer import *
import sys, time

with initialized():
    ctx = Context()
    buf = Buffer.from_stream("file", sys.argv[1])
    gen = BufferGenerator(ctx)
    gen.buffer = buf
    src = PannedSource(ctx)
    src.add_generator(gen)
    src.panning_scalar = 1
    time.sleep(buf.get_length_in_seconds())
I expect the sound to start playing at the far right, but instead it's playing in the center.
Also, do you have the algorithms for converting between dB and a scalar for gain and panning_scalar? The only one I found was the one in the Python example.

2021-01-14 22:47:32

The only dB-to-scalar algorithm that exists is the one in the Python example.  You should only be converting dB to gain anyway.  Before you ask: the reason Synthizer doesn't offer functions for it is that the FFI overhead is more expensive than just pasting the math.  You can grab the algorithms for decibels off Wikipedia, or from include/synthizer/math.hpp.  They're like 1 line, but I don't have them memorized because they're almost never useful in practice.  Specifically, does a -3 dB signal plus a -3 dB signal clip? Who knows! Better convert to scalars, add them, and then convert back to find out.
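For reference, the standard amplitude-decibel conversions really are one-liners.  This is a sketch of the math being described, in plain Python (function names are mine, not Synthizer API):

```python
import math

def db_to_gain(db):
    # Amplitude decibels: -6 dB roughly halves the gain, -20 dB divides by 10.
    return 10 ** (db / 20)

def gain_to_db(gain):
    return 20 * math.log10(gain)

# The clipping question from above: convert to scalars, add, convert back.
combined = db_to_gain(-3) + db_to_gain(-3)
# combined is about 1.416, which is over 1.0, so yes: two -3 dB signals
# can clip when summed.
```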

I will get to all the bugs when I can.  Please open tickets against the repository rather than reporting here.

My Blog
Twitter: @ajhicks1992

2021-01-31 20:51:08

Just pushed 0.8.8.  It's been a while since the last release, but that's because I'm giving you an event system.  You can also now set panner_strategy on Context to set the default for new sources.

Docs are thin on the event system because there will be enough changes in the C API in 0.9 that I'll have to rewrite half of it.  Nothing much should change for Python users, though.  Also, as the manual says, it's alpha quality.

I suspect that mostly this will get used by the various Python engine efforts rather than you directly, but we shall see.

The next thing is fixing the HRTF normalization scripts and bringing that up to par with what it should be.

My Blog
Twitter: @ajhicks1992

2021-01-31 21:01:08

@363
This is brilliant!! Thank you.

I'll get onto fixing up Erawax's sound system (again) tomorrow.

Thanks again. Looking forward to the new HRTF improvements.

-----
I have code on GitHub

2021-02-01 01:36:30

I might've asked some of these questions before, sorry if I have. But I've noticed a couple things:
1. You implement a custom bitset class. What's the rationale for this when C++ already has a bitset class in <bitset>?
2. You use const std::string references. How is this better than std::string_view?
3. In your bitset class you use __builtin_popcount (as well as __builtin_ctz). C++20 adds the <bit> header with various bit-manipulation functions, see this. It might be better to switch to these since they're standard, whereas the __builtin_* functions are, if I'm not mistaken, compiler specific.
These are primarily curiosities of mine; it just seems like you're reinventing the wheel in these cases and I'm curious why.

"On two occasions I have been asked [by members of Parliament!]: 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out ?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."    — Charles Babbage.
My Github

2021-02-01 02:17:27

My bitset can determine the first unset bit without having to iterate over all the bits.  As far as I know the built-in one can't, or at least it couldn't when I originally wrote mine.

Anything C++20 is off the table because compiler support lags.  In general, you can't use C++ standards for 2-3 years after they're finalized.  At the moment compiler support is iffy, especially in Visual Studio.  Once that's finalized you then have to wait for the old Linux distros to not be relevant anymore so that default versions of gcc and clang come along for the ride.  I only use C++17 for that reason.

std::string_view might or might not be useful but I haven't learned it.  I think Synthizer does stuff with strings in something like all of 3 places and only one of them does anything more advanced than pass it into a map or somesuch.  So there's no point.  Also I think a lot of that is newer than C++17, though maybe it's not. 

Const references as a general pattern are the C++ way to avoid copying but pretend you have a copy, in the sense that if you decide not to pass it on to a third thing, no copy is actually made.  So e.g. if there's an error it's free.  Things like string and vector probably can't be elided, so it's quite the performance cost to take them by value.
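To illustrate the first-unset-bit trick: the C++ version leans on __builtin_ctz over machine words, but the core bit manipulation can be sketched in Python (this is my own illustration, not Synthizer's actual code):

```python
def first_unset_bit(word):
    """Index of the lowest zero bit of word, without iterating bit by bit.

    word + 1 turns the trailing run of ones into zeros and sets the first
    zero bit; masking with ~word isolates exactly that bit.  In C++ the
    same index comes from __builtin_ctz(~word) when ~word is nonzero.
    """
    mask = (word + 1) & ~word
    return mask.bit_length() - 1

# first_unset_bit(0b0111) is 3; first_unset_bit(0b0101) is 1.
```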

My Blog
Twitter: @ajhicks1992

2021-02-01 02:19:11

@366, ah, I understand now. Thanks for that clarification.

"On two occasions I have been asked [by members of Parliament!]: 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out ?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."    — Charles Babbage.
My Github

2021-02-01 04:36:43

I hope the custom buffers feature comes very soon.
Thanks for your ongoing work, camlorn. It really helps a lot.

if you like this post, please thum it up!
if you like to see my projects, feel free to check out my github
if you want to contact me, you can do here by skype. or you can follow me on twitter: @bhanuponguru
discord: bhanu#7882

2021-02-04 13:39:23 (edited by mohamed 2021-02-04 13:41:12)

Hello @Camlorn, does Synthizer support .ogg yet? I want to switch my game's main audio system to Synthizer, but I use a lot of ogg files, and converting them will surely be a lot of work.  So does it support them, or do I need to switch to something else?

2021-02-04 14:17:20

As far as I know, no, it does not at the moment. You'd have to convert them all to MP3, WAV, or FLAC.

best regards
never give up on what ever you are doing.

2021-02-04 15:04:10

@369 Converting your files is the simplest thing to do: import them all into Audacity, then export them as mp3. There is an option for that called "Export Multiple".

2021-02-04 16:53:43

You're a programmer.  You can also do ogg->mp3 conversions in about 5 lines of bash or about 20 lines of Python, assuming you have ffmpeg.  I don't have the "here's how you strip a file extension and replace it" bash magic memorized anymore, but you just have to arrange for ffmpeg -i myfile.ogg myfile.mp3 to run over all your files.  It's really not that big a deal, so much so that if this sounds hard you're probably not ready to write games.  As a starting point I think the following bash works as long as you don't mind ending up with .ogg.mp3, but I'm typing it offhand:

for i in $(find . -name '*.ogg'); do
ffmpeg -i "$i" "$i.mp3"
done

There is a small modification of that that will get the filenames right, but you can also automate this in 30 minutes or less with Python using the glob and subprocess modules, and I don't feel like digging through the bash manual to find it for you.

Ogg isn't going to happen anytime soon because reasons.  See issue #37.  The long and the short of it is stb_vorbis is so unmaintained that I don't trust it, and if we want dr_vorbis (or is it dr_ogg, I forget) I'll probably have to sponsor him as in with real money, or something.  Eventually there will probably be optional dependencies requiring binary attribution, literally just to cover this case, but I need a much better reason than "converting my files is hard" or "but listening tests that don't matter once you're playing more than one sound at a time or outside the lab on consumer-grade audio gear show ogg is better".

My Blog
Twitter: @ajhicks1992

2021-02-04 17:38:16

My big reason for wanting ogg is not that converting is hard; the reason is file size.  MP3, WAV, and FLAC are fucking huge, so I'm guessing that if I converted them my sounds folder would go from 29 MB to something like 150 MB, and that's not good at all.

2021-02-04 19:17:19

MP3 is fine.  I need a much better justification than making something small be smaller.  This is something like the single lowest-priority thing in the entire library.  If you're ending up with huge MP3s, try a different encoder such as ffmpeg, encoding at a lower bitrate, and/or encoding with a variable bitrate.  I have a copy of every Legend of Zelda soundtrack up to Wind Waker as high-quality MP3 and it's under a gig.  Everyone is fine with Swamp, which uses wav.

People really need to consider the following.  I get two things from what appears to be primarily new programmers.  1: can we have custom buffers? I can't download to a temporary file and/or think audio DRM is doable.  2: when is ogg happening?  I try to be professional about this project, but come on.  I get a day or two a week at most where I've got enough uninterrupted time to code on it, and you can easily live without both features.  I have received exactly one good reason to prioritize custom buffers, and exactly zero good reasons to prioritize ogg.  I'm not saying no, but doing these eats valuable time when I don't have much time to begin with.  Custom streams/in-memory buffers will happen before 1.0, but don't hold your breath for ogg.

My Blog
Twitter: @ajhicks1992

2021-02-04 23:50:01

Hi Camlorn,

I'm having issues with certain sounds, and I'm not sure when this started because I've only just now started noticing it.  When I set the pitch of shorter flac sounds that aren't looping, the sounds seem not to play, emitting a buzzing sound in place of the sound in question.  I'm loading the sounds with a buffer and a generator, like normal.  Other longer and/or looping sounds play just fine, and I can't find any rhyme or reason as to why these sounds don't play, but I can tell you that if the pitch is 1.0 (normal) they play; otherwise they just emit the buzzing sound I've described.

I've been going by Bryn, which is now reflected in my profile. Please do your best to respect this :) Mess ups are totally OK though, I don't mind and totally understand!
Follow on twitter! @ItsBrynify
Thanks, enjoy, thumbs up, share, like, hate, exist, do what ya do,
my website: brynify.me