2014-07-17 05:11:31

Here's the other thing about this.  People only use those four shapes because sine waves add together to make interesting sounds, and the rest make interesting sounds when filtered and detuned, especially if you can have moving filters and that sort of thing.  The catch is that you're never going to synthesize interesting sounds in realtime without a sound library or enough knowledge of Numpy and math to fake it.  The other wave shapes don't exist the way you think they do.  When we talk about complex sounds, we talk about them in terms of the volumes of the sine waves that make them up.  There is only one basic wave, the sine wave, and all other waves can be described in terms of it.
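To make that concrete, here is a small illustration (my own throwaway function, not from any library): summing the first several odd harmonics of a sine wave at volume 1/n is the standard Fourier-series recipe, and it gets closer and closer to a square wave as you add more of them.

import math

def square_from_sines(frequency, t, n_harmonics=10):
    """Approximate a square wave as a sum of sine waves."""
    total = 0.0
    for k in range(n_harmonics):
        n = 2*k + 1                                 #odd harmonics only: 1, 3, 5...
        total += math.sin(2*math.pi*n*frequency*t)/n
    return (4/math.pi)*total                        #scale so the peaks approach -1 and 1
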
I didn't want to go into this because it raises more questions until you actually work with it, and any quantitative understanding requires knowing both notations for complex numbers, but I think a qualitative description is in order.
There is a thing called the FFT, the Fast Fourier Transform.  As I said, every sound in all the world is made of a sum of sine waves--specifically, for a fully precise representation, you need an infinite number of them.  But computers are quantized and only capable of a certain precision, so we can make it finite in computer land (and, honestly, it's possible to make it finite in real life for many cases).  Instead of talking about adding specific sine waves, we say we're always going to add sine waves at certain fixed frequencies and specify their relative volumes and phases--a sine wave that's ignored simply has a volume of 0.  The phase is when they start: the part I glossed over is that we don't have to start all the waves at the same time--we might start 1000 hz at time 0, hold 1001 hz until time 0.2, and the like.  This system lets you talk about manipulations of sounds in a meaningful manner: one way to apply a lowpass filter is to apply the FFT, turn down all the frequencies you consider high, and then undo the FFT to get a sound you can play.
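To sketch that lowpass idea with numpy (the function name and cutoff handling are mine; treat this as the idea, not production code):

import numpy

def fft_lowpass(samples, sample_rate, cutoff_hz):
    """Zero every frequency above cutoff_hz, then resynthesize."""
    spectrum = numpy.fft.rfft(samples)                         #forward FFT of real input
    freqs = numpy.fft.rfftfreq(len(samples), 1.0/sample_rate)  #the frequency of each bin
    spectrum[freqs > cutoff_hz] = 0                            #turn the high frequencies all the way down
    return numpy.fft.irfft(spectrum, len(samples))             #undo the FFT
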
So what happens in real life, when you get your music synthesizers and whatnot, is one of three things: we do our analysis and resynthesis off the FFT, we get a complex wave shape by recording and pitch bend it in realtime (which is trivial, believe it or not), or we start from the basic shapes and apply filters and effects.  The first of these is really, really, really rare.  The second also brings up an interesting point: those "other waves" usually are either recorded and cut down to one period or simply drawn with a mouse.
How exactly we get from here to code, though, runs into the fact that I know the math really, really well.  Basically, you can pretend that it's one second of audio no matter how many samples you have and then choose how to play it back--it's the same reasoning I gave for the basic functions, only this time the function comes from a table and you interpolate between values with a weighted average or, in the few cases you need it, a more complex algorithm called a spline.  I'm too tired at the moment to write careful sample code for the weighted average approach: it's simple enough, but it's also where one of those issues with floating point is hiding, and I don't want to work it out again in Python.  It comes down to something like 10 lines, however.
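The gist is something like this, though--off the top of my head and untested, with my own names, so treat it as a sketch:

def table_lookup(table, phase):
    """Read one period stored in table at a fractional position phase in [0, 1)."""
    pos = phase*len(table)            #fractional index into the table
    i = int(pos)                      #the whole part: the sample on the left
    frac = pos - i                    #how far we are toward the next sample
    s1 = table[i % len(table)]
    s2 = table[(i + 1) % len(table)]  #wrap around at the end of the period
    return s1*(1 - frac) + s2*frac
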
As for programmatically working with the FFT, I suggest not doing that.  You need trig and knowledge of complex numbers to interpret the results, and it's the opposite of intuitive.  Finding or working out sample code to do pitch bend, either yourself or with Pygame, is worth it: you can record a wave file and just pitch bend it, which gives you a great number of interesting sounds for what is--in the grand scheme of things--barely any code (pitch bend is my weighted average thing again).
I'm sorry this explanation can't be clearer.  I'm trying very, very hard to avoid words like the magnitude and angle of a complex number, frequency bins, and the like.  If you want to learn about this stuff, I suggest installing Pyo, reading its tutorials, examples, and documentation, and trying things.  You can come to a practical understanding of the FFT without math, but the only way to do so is to play with it in an accessible environment.  One way to get access to something FFT-like is to play with Audacity's equalizer.  The funny thing is that the FFT is good at describing sounds but not so much at synthesizing them: the filtered waveform approach is more common.  Pyo can let you play with this, too, either using wave files or the basic shapes.
Basically, from here on out, your road to understanding is experimentation more than anything.  You don't have the pretty pictures and cute diagrams, nor the math to understand the equations.  Therefore, the best thing you can do is play and listen to the results.

My Blog
Twitter: @ajhicks1992

2014-07-17 16:29:49

Here is your example with square and triangle waves (at least I hope so). Executing the module now plays all the available wave forms as a test. A triangle wave doesn't jump brutally from 1 to -1 but goes up progressively from -1 to 1, then back down progressively from 1 to -1, and so on.

#Our imports: numpy is required for pygame.sndarray to run, and math provides the sin and pi functions
import pygame, numpy, math


def sine(frequency, t):
    """The equation for a sine wave"""
    return math.sin(2*math.pi*frequency*t)

def sawtooth(frequency, t):
    """The equation for a sawtooth wave"""
    return 2*((frequency*t)%1)-1

def square(frequency, t):
    """The equation for a square wave"""
    a = (frequency*t)%1
    return -1 if a < .5 else 1

def triangle(frequency, t):
    """The equation for a triangle wave"""
    a = (frequency*t)%1
    return 4*a - 1 if a <= .5 else -4*a + 3


#A dictionary mapping the name of each wave type to its function
wave_types = {
    "sine": sine,
    "sawtooth": sawtooth,
    "square": square,
    "triangle": triangle,
}

def sound_creator(duration=1.0, frequency=440, left=0.5, right=0.5, wave_type="sine", sample_rate=44100, bits=16):
    #Initialize pygame's mixer module. A negative size means signed samples.
    pygame.mixer.init(frequency = sample_rate, size = -bits, channels = 2)

    #The total number of samples in our sound: samples per second times seconds.
    #round() guards against durations that don't land exactly on a whole sample.
    n_samples = int(round(duration*sample_rate))

    #This creates our array: n_samples rows and 2 columns, one column per
    #channel. dtype = numpy.int16 makes each entry a 16-bit signed integer,
    #matching the mixer's size of -16.
    buf = numpy.zeros((n_samples, 2), dtype = numpy.int16)

    #The largest value a signed 16-bit sample can hold. We scale our -1 to 1
    #wave values by this so they fill the integer range without clipping.
    max_sample = 2**(bits - 1) - 1

    #Now we fill in the samples of the array we made above.
    for i in xrange(n_samples):

        #t is the time of this sample in seconds.
        t = i/float(sample_rate)

        #Evaluate the requested wave. The wave_types dictionary is keyed by the
        #string the caller passed ("sine", "sawtooth"...).
        sample = wave_types[wave_type](frequency, t)

        #Column 0 is the left channel; max_sample sets the overall scale and
        #left is just the volume for that side.
        buf[i][0] = int(round(max_sample*left*sample))

        #Now the same for column 1, which is the right channel.
        buf[i][1] = int(round(max_sample*right*sample))

    #This creates the sound through pygame's sndarray module.
    sound = pygame.sndarray.make_sound(buf)

    #sound is now the same as any object created with the pygame.mixer.Sound class.
    return sound

#This will only run if you execute this module directly. It is a nice example that plays each wave quietly, mostly in the left speaker.
if __name__ == '__main__':
    
    # some tests
    
    assert square(1, .1) == -1
    assert square(1, .4) == -1
    assert square(1, .6) == 1
    assert square(1, .9) == 1
    
    assert triangle(1, 0) == -1
    assert triangle(1, .25) == 0
    assert triangle(1, .5) == 1
    assert triangle(1, .75) == 0
    assert triangle(1, 1) == -1
    
    #Change these values to suit your needs
    duration = 1.0 #in seconds
    sample_rate = 44100
    bits = 16
    frequency = 440
    left = 0.1
    right = 0.03

    for wave_name in sorted(wave_types):
        print wave_name

        #sound is a typical sound object that pygame uses
        sound = sound_creator(duration, frequency, left, right, wave_name, sample_rate, bits)
        sound.play()

        #If you don't wait, the program will exit. If you wait exactly the duration of
        #the sound you get some popping when the sound is done. That is why I'm waiting
        #1.5 seconds rather than 1.
        pygame.time.wait(1500)

    pygame.mixer.quit()

2014-07-17 17:09:16

This is awesome!
So is pitch bend different from just changing the frequency and creating a new sound object?
I think you could create a pitch bend function by adding a loop that makes the frequency smaller each time the array for loop runs...


About the module above: there is a strange bug that happens when you import sound_creator() into another module with pygame already initialized. The frequency is an octave lower than it should be, so 440 is really played at 330. The odd thing is that without pygame initialized, it works just fine. Do you have any idea why this is?

2014-07-17 17:37:49

Pitch bend as a filter is probably very complicated if you do the filter yourself, while creating a sound with a new frequency is probably possible, but quite tricky too. Each sound would have to stop at a level where the next can follow without an audible transition, maybe -1 or 0. pygame.mixer.Channel has a queue() method and a get_queue() method, so you might be able to queue short sounds whenever the queue is empty, but I don't guarantee the result. It might work, or not.
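For what it is worth, here is an untested sketch of that queueing idea, reusing the sound_creator() from the module above (the chunk length and frequency step are arbitrary choices of mine). Again, no guarantees:

import pygame
#assumes sound_creator() from the module posted above is importable here

first = sound_creator(0.1, 220)            #this call also initializes the mixer
channel = pygame.mixer.Channel(0)
channel.play(first)
freq = 220
while freq < 880:
    if channel.get_queue() is None:        #the queue slot is free, add the next chunk
        freq += 10
        channel.queue(sound_creator(0.1, freq))
    pygame.time.wait(5)
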

The bug happens because the sound_creator() function is initializing pygame.mixer, which is not the right place to do so (a function which returns something shouldn't have a side effect). If initialization has already been done, nothing will change and the sounds will be created with the wrong parameters. You should get the parameters from pygame.mixer, not set them from inside the function.
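Something like this, using pygame.mixer.get_init(), which returns the current (frequency, format, channels) or None; the fallback values below are just defaults I chose:

params = pygame.mixer.get_init()
if params is None:
    #nobody initialized the mixer yet, so fall back to our own defaults
    pygame.mixer.init(frequency=44100, size=-16, channels=2)
    params = pygame.mixer.get_init()
sample_rate, size, channels = params
bits = abs(size)    #the format is negative for signed sample formats
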

2014-07-18 00:22:58

The difference between pitch bend and increasing the frequency and resynthesizing is that pitch bend works everywhere.  To pitch bend something, you manipulate the buffer after synthesis; you don't change the loop.
I'll try to explain this too, since I seem to be on a roll or something:
We normally talk about time in seconds, but we can also talk about time in samples.  In sample time, 1 means "advance one sample", 2 means "advance two samples", etc.  In the same way that we can say 12:00 is 12 hours after the beginning of the day, we can say that 12 in sampletime is 12 samples past the beginning of our buffer.
You can pitch bend a sound by playing it faster.  What you want to do is make a second buffer.  Say that we want to make something 1.2 times higher.  We start by taking the 0th sample from our buffer and putting it in the 0th sample of the new one.  Bear with me on this next bit: it is not as nonsensical as it sounds.  We then take the sample at sampletime 1.2, that is, 20% of the way between the sample at index 1 and the sample at index 2, and put it in as the second sample in the new buffer.  We continue this pattern: the 3rd sample in the second buffer comes from sampletime 2.4 in the first buffer, so 40% between samples 2 and 3; the 4th comes from sampletime 3.6, so 60% after sample 3; etc.
How do you compute them, then?  I'm obviously being an idiot or insane or something--I'm telling you to use floating point indexes.  Obviously you can't, but you can take a weighted average and guess.  To get the sample for sampletime 1.2, get the sample at index 1 (hereafter s1) and the sample at index 2 (hereafter s2).  Then:

offset = sampletime-floor(sampletime)
w1 = 1-offset
w2 = offset
result = s1*w1+s2*w2

This is a bit like an old tape player: if the tape starts moving faster, it starts sounding like chipmunks.  You continue the above pattern until the whole number part of the sampletime equals the length of the buffer minus 1, and then stop.
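Putting the whole pattern into Python as a sketch (my own names, plain lists rather than numpy, untested):

import math

def pitch_bend(buf, factor):
    """Resample buf by factor; factor > 1 raises the pitch and shortens the sound."""
    out = []
    sampletime = 0.0
    while sampletime < len(buf) - 1:       #stop before the whole part hits the last sample
        i = int(math.floor(sampletime))
        offset = sampletime - i
        #the weighted average from above
        out.append(buf[i]*(1 - offset) + buf[i + 1]*offset)
        sampletime += factor
    return out

higher = pitch_bend(samples, 1.2)          #samples is whatever buffer you already have
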
So why?  Because it works on anything.  If you have a handy mathematical method by which you can synthesize a sound from a frequency, you don't need it.  But really, what you have most of the time is just an array of bytes, say a piano at middle C.  If you know that D is some factor above C, you can pitch bend to it--this won't sound super realistic past a point, but it works well enough for lots and lots of situations.  I'd suggest seeing if Pygame has this already, though I suspect it doesn't.
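If it helps, the factor between two notes comes from equal temperament: each semitone multiplies the frequency by 2**(1/12).  That's a fact about music rather than about this code; middle_c_samples below stands in for whatever recording you have:

semitone = 2**(1/12.0)                         #about 1.0595
d = pitch_bend(middle_c_samples, semitone**2)  #D is two semitones above C
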
If you're looking for a really cheap way to pitch things up (but not down, unfortunately--down needs the above logic), you can increase the pitch by a factor of two easily.  To do so, take every other sample from the original buffer and put it in a new one that is half the length.  Tripling is every 3rd sample, etc.  Going down in pitch and going up by decimal factors is hard, but there's the quick way for whole numbers.
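With Python slicing, that shortcut is a one-liner (buf is whatever buffer you already have; this works on lists and numpy arrays alike):

octave_up = buf[::2]   #every other sample: double the pitch, half the length
tripled = buf[::3]     #every 3rd sample: triple the pitch
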

My Blog
Twitter: @ajhicks1992

2014-07-18 21:06:08

I've learned so much about audio in the last week LOL... Thank you! big_smile I've solved my current problem and will be working on finishing my little game...
Will Libaudioverse have sound generation? I may wish to port the game I make after this one to Libaudioverse, and it will use sound generation.
So hurry up with the release! I want to play with it! big_smile

2014-07-18 22:20:19

Libaudioverse currently has Sine.  I haven't added the others yet, but can do so in like half an hour: they are useless for testing purposes, so I didn't bother.  The code for the Sine wave works for all of them; I just have to point it at a table representing the other shapes.  More useful to your current problem, though, is that Libaudioverse can pitch bend files in the way just described.  You could create your audio progress bars by just looping and pitch bending a file, which gives much more interesting sounds.
I give no ETA on a release.  It will be ready when it's ready.  I'm not going to rush it: rushing leads to sub-par code.  I'm building a library for games, but also a research platform: a lot of Libaudioverse's code is built around being able to add new effects easily.  Examples include waveguide synthesis (it's sufficient to say that it makes string-like and piano-like sounds), various reverb algorithms that are good at different things, volumetric sources (giving a source a size instead of just a position), etc.  I'm getting very close to something I can show off, though YMMV on how good the examples make it look without tutorials.  It's traditional enough in terms of how audio libraries tend to work, but not traditional in terms of how Audiogames.net, which teaches new blind game programmers BGT, thinks audio libraries ought to work.  But that's okay: in its current form, it's 10x more powerful than the standard solutions in this community and 2 or 3 times more powerful than what most sighted games use, and I'm only just beginning.

My Blog
Twitter: @ajhicks1992