Wow!!! That's so cool!
Also thanks for the book!
3D audio is totally supported, and you can see how to use it in howler.js.
With your 3 examples, using Chrome/Chrome OS:
Examples 1 and 2 play and stop without issues.
Example 3, however, neither starts nor stops audio, but both the stop and open/close door buttons are there and can be clicked or activated with the spacebar.
Using the latest dev channel of Chrome OS, currently at v47.
With Chrome installed today on Windows (or with Chromium on Linux), example 3 works except for the door sounds. Maybe the sources are garbage-collected before they run; I don't know. I'll try to fix this later, since the goal for the moment is to check what can be done with at least one browser.
Iceweasel (Firefox) on Linux works too.
Here are two more examples:
- example 4, like example 3 with the door to the right,
- example 5, like example 4 with the listener inside a very big room (like a large sports hall).
Here is an index with all the examples:
I'm not completely satisfied by example 5 because the sounds seem to stay on one side of the room. I don't know.
#56 (edited by queenslight 2015-10-03 07:22:11)
At least with example 5, I am able to open and close the door, and the sound plays.
Sounds like it's in a dungeon.
Using Chrome OS 47 Dev channel, just like before.
I am now able to open and close the door with music in it. No idea why it wasn't working before.
I wonder if it was due to the powerwash I did. It's example 4 I'm referring to.
Sounds like it's coming from a club!
I tried all 5 examples again, and all work! Closing and opening. So yep, doing a power wash was the cure.
Actually, I have fixed and updated example 3. Sorry for not informing you earlier. It happened by chance that examples 4 and 5 worked completely with Chrome, so I did the same thing in example 3 to make it work with Chrome (a default value of 0 was missing in a function call).
Yes, the initial idea was the sound of music behind the heavy door of a club. To make the door heavier I used the Audacity effect called "change speed, affecting both Tempo and Pitch", which could be described as: "what would this object/animal sound like if it were smaller/bigger?".
I wonder if this effect can be done in real time with AudioBufferSourceNode.playbackRate. According to this doc and this example, it is possible, and the parameter can even change during playback:
https://developer.mozilla.org/fr/docs/W … aybackRate
I will try to make an example where you can freeze time and listen to the world slow down (the music, in this example). Then the world would progressively resume at full speed.
I can't wait to check that out when it arrives.
Making small steps turns into more excitement!!
I have added example 6, music with a "slow down everything" button:
As far as I understand, the "slow down" parameter only works on the AudioBufferSourceNode, so the room echo isn't affected. It would be nice if the simulation slowed down too. Anyway, this effect could be interesting when entering menu mode, to inform the player that the game is slowing down or freezing while they are in the menu. Another use is for an acceleration potion or power, where the player character has the impression that everything is slowing down. That would be very useful, because it gives the player more time to identify the surroundings and perform actions that would be too hard in real time.
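As a rough sketch of the playbackRate idea (the helper names below are mine, not from any of the examples): playing a buffer at rate r shifts the pitch by 12·log2(r) semitones and stretches the duration by a factor of 1/r, and in a page the rate can even be ramped in real time.

```javascript
// Sketch of the playbackRate trick; helper names are hypothetical.
// Playing a buffer at rate r shifts pitch by 12*log2(r) semitones...
function semitoneShift(rate) {
  return 12 * Math.log2(rate);
}

// ...and stretches its duration by a factor of 1/r.
function slowedDuration(originalSeconds, rate) {
  return originalSeconds / rate;
}

// Browser-only part (assumes a running AudioContext and a started
// AudioBufferSourceNode): ramp everything down to a crawl.
function freezeTime(source, audioCtx, seconds) {
  const now = audioCtx.currentTime;
  source.playbackRate.setValueAtTime(1, now);
  // Exponential ramps can never reach 0, so stop at a small value.
  source.playbackRate.exponentialRampToValueAtTime(0.05, now + seconds);
}
```

So halving the rate drops the sound an octave and doubles its length, which matches the "bigger object" effect from the Audacity change-speed trick.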
That slow down effect is so neat!
That would also be good for time-shifting, like in the game Legacy of Kain: Soul Reaver on PlayStation, which lets the character go between different worlds.
Test successful on my end. Hmmm, sounds like a kitchen door when it's opened and closed.
Thanks for the report. Chrome works here too. The door sounds strange to you maybe because there is no echo in this example. The result should be exactly the same as example 4.
Wow! That is pretty awesome. It looks really complex, even in Brython. I'll need to really go through it in order to understand it.
It is significantly slower loading than any of the other examples, even though brython is cached in my browser.
Have you considered asking on the brython list for the Ajax to be updated?
This is only a very thin layer over WebAudio, because I wanted to keep the flexibility of the audio nodes. It's possible to add a simpler layer with a specific aim.
As with the previous example 7, I haven't noticed a speed difference compared with example 4. The only strange behavior is when testing the page locally as a file with Firefox: sometimes it works right away and keeps working, sometimes it doesn't until some time has passed (maybe until the page is loaded from a web server, I don't know). With Chrome the result is more predictable: it only works from a web server.
I just found the coolest thing! The Web Speech API can now be used in Chrome and Firefox, and it hooks into your SAPI5 voices:
https://hacks.mozilla.org/2016/01/firef … peech-api/
It fires an event when the speech finishes, so this allows the creation of apps that are triggered by speech finishing. If someone is running a browser that does not handle the speech API, there is meSpeak.js:
This means that it is now possible to have an app that uses SAPI or something similar on the web!!!
If someone is using Linux, please let me know what the API does there!
Here is an example in Brython; it has an HTML page with a <button id="b1">Click me</button>:
from browser import document, window
from javascript import JSConstructor  # needed to call JS constructors from Brython

synth = window.speechSynthesis

def create_u(txt="Nothing", voice=None):
    Utterance = JSConstructor(window.SpeechSynthesisUtterance)
    u = Utterance(txt)
    if voice: u.voice = voice
    return u

document["b1"].bind("click", lambda ev: synth.speak(create_u("Hello!")))
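Since the API fires an event when speech finishes, utterances can also be chained. Here is a minimal sketch in plain JavaScript (speakInOrder and browserSpeakOne are my own hypothetical helper names, not part of any API):

```javascript
// Speak a list of phrases one after another by chaining the
// utterance's onend event. `speakOne` is injected so the chaining
// logic itself works anywhere; in a page, pass browserSpeakOne.
function speakInOrder(phrases, speakOne, done) {
  let i = 0;
  function next() {
    if (i >= phrases.length) {
      if (done) done();
      return;
    }
    speakOne(phrases[i++], next); // `next` runs when this phrase ends
  }
  next();
}

// Browser-only part (assumes the Web Speech API is available):
function browserSpeakOne(text, onend) {
  const u = new SpeechSynthesisUtterance(text);
  u.onend = onend;
  speechSynthesis.speak(u);
}
```

In a page you would call speakInOrder(["First.", "Second."], browserSpeakOne) and the second phrase would only start when the first one's onend fires.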
Interesting! The Web Speech API works well with Chrome and Windows 8.1. Synthesis uses the default SAPI voice plus additional Google voices. Recognition works well too, and without training (at least in this example), but it requires allowing the browser to use the mic every time.
Couldn't make synthesis work with Chromium 37 in a Knoppix Live CD.
So, it's possible to have full control over the speech, but I wonder whether it would also be good to use the web page itself to display a lot of structured text info (tables, etc.) freely accessible to screen readers. How could fully controlled speech synthesis and accessible text (possibly with ARIA) be combined in an efficient and user-friendly way?
Just in case, here is a link to an old post about speech in another topic:
http://forum.audiogames.net/viewtopic.p … 24#p237424
Not sure if it's too early to share my proof of concept, but I wanted to prove that this 3D sound could work in a browser. This little demo doesn't do much, but you can tab onto the app, move the sound source and see how it changes.
This is awesome! Are you blind? Because looking at the code you are using canvas and a camera, something I always get super frustrated with!
It's not you, but I really don't like the sound when it is next to you, because it jumps several spaces farther than it should. This is pretty easy to work around, though: just skip the 3 or so spaces that have that next-to-you sound.
Hmmm, I wonder if I coded it wrong somehow if it's going a bit wrong when it's next to you. This is all pretty new to me too, really, like I said it was to see if it was possible as much as anything. I've stalled, as I can't think how to fathom touch controls. Anyone here any good with those in a browser setting? I could use your help. Once I'm over that, I'll try and put together a proper prototype that showcases both 3D sound in browsers and proves it works on desktop and mobile. But... I'm getting way ahead of myself again. Apologies! It's just that this is all exciting.
Howler.js is just the audio. three.js really looks like a graphics library... apart from maybe a clock, I didn't see anything in it that's useful for anything but graphics. That said, it would probably be a good thing to plug in for sighted developers to use.
It is not you: in any 3D system, there are problems with sounds right next to you, for example at positions like (1, 0, 0), (1, 1, 0) and (1, -1, 0).
So when y is 1, 0 or -1, and x is 1 or -1, it sounds a little odd.
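For what it's worth, here is a sketch of the "inverse" distance model that I believe Web Audio's PannerNode uses by default (the function name is mine, not from the API). Inside refDistance the gain is clamped at 1, so moving a source around very close to the listener changes only the panning, not the volume, which may be part of why it sounds odd up close.

```javascript
// Sketch of the "inverse" distance model (function name is hypothetical):
// gain = refDistance / (refDistance + rolloffFactor * (d - refDistance)),
// with d clamped to at least refDistance so there is no boost up close.
function inverseDistanceGain(distance, refDistance = 1, rolloffFactor = 1) {
  const d = Math.max(distance, refDistance);
  return refDistance / (refDistance + rolloffFactor * (d - refDistance));
}
```

With the defaults, a source at distance 2 plays at half gain, while anything closer than refDistance stays at full gain.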
You could Google how to grab touch gestures. Otherwise, it wouldn't be too difficult to program a little app that logs what happens when you do something on your phone.
Hi there. It's been a while, but I got stuck on technology choices and stuff.
Anyway, I'm working on the engine I mentioned, and it's been progressing nicely in recent weeks.
I already have positional audio, scene definition, and basic moving around. I still need to add player and movement controls.
For now it's a fully declarative way to create a 3d scene. Not an if, not a loop.
I'll be looking for developers wanting to try it out and share feedback.
Links to demos coming soon!
I've also been working on a 3D first-person shooter, which is actually coming along nicely. I've posted several demos in the RTR thread and on Twitter. You can already play with multiple people, and you have maps with ambiences, reverb, etc. So I think the web will allow for a lot of neat creations. There are limitations of course, but I think for our purposes it could be enough for a lot of things.
here is the newest demo, where I spawn and then another player spawns and walks around you and shoots some.
Here is an older demo where I just walk around a little.
I should make a new recording with the progress since then, but I'm a bit stuck right now. Grrr.
Ghorthalon that is awesome!
I would really like some games that are for 6-12 year-olds in the web browser though, LOL.