I think the main problem with Web Audio is that it was originally designed under the assumption that JavaScript would be too slow to fill audio buffers fast enough for low-latency buffer queueing, so this complex audio-node system was conceived, where audio data flows through black-box processing nodes written in C/C++.

In a perfect world, a low-level web audio API would focus on providing as direct a way as possible to stream raw audio data (buffer queueing as in OpenAL, or a wrap-around buffer as in DirectSound), make sure this works well with Web Workers (i.e., a worker thread should be able to queue new data when required, not when the main loop gets around to it), and move all the complicated high-level audio-graph stuff into optional JavaScript libraries.
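Purely for illustration, something like the following is what I mean. Neither AudioQueue nor synthesize() exist anywhere; this is a hypothetical sketch of the shape such an API could take, roughly mirroring OpenAL's buffer queueing and running entirely inside a WebWorker:

    // Hypothetical sketch only: AudioQueue and synthesize() are made up.
    const queue = new AudioQueue({ sampleRate: 48000, channels: 1 });

    queue.onneeddata = () => {
      // The worker is asked for more data whenever the output runs low,
      // independent of whatever the main thread is doing.
      const samples = new Float32Array(960);   // 20 ms at 48 kHz
      synthesize(samples);                     // fill with application audio
      queue.enqueue(samples);
    };

    queue.start();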

That's what Mozilla's proposed MediaStream Processing API was: https://dvcs.w3.org/hg/audio/raw-file/tip/streams/StreamProc...

But Google was able to put more engineering effort behind the Web Audio API and get a lot of websites to use it, so that's what won. The fact that Web Audio's ScriptProcessorNodes run on the main thread instead of in workers is basically a travesty, brought about because they were tacked on as an afterthought to claim "feature parity" with the MediaStream Processing API. Until that's fixed [1], real-time audio generation in JavaScript will always be a joke that only works under ideal conditions where people open just one well-tuned webpage in their browsers at a time and GC pauses don't exist.
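To make the problem concrete, this is roughly what main-thread generation with a ScriptProcessorNode looks like today; the onaudioprocess callback fires on the main thread, so any long task or GC pause there turns directly into dropouts (the sine-wave fill is just a placeholder):

    const ctx = new AudioContext();
    const node = ctx.createScriptProcessor(1024, 1, 1); // 1024-frame buffer

    let phase = 0;
    node.onaudioprocess = (e) => {
      // Runs on the main thread, competing with layout, app JS, and GC.
      const out = e.outputBuffer.getChannelData(0);
      for (let i = 0; i < out.length; i++) {
        out[i] = Math.sin(phase);
        phase += 2 * Math.PI * 440 / ctx.sampleRate;
      }
    };

    node.connect(ctx.destination);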

(full disclosure: I work for Mozilla and am not bitter at all)

[1] https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_A... is the proposal to fix it, only three years late to the party

I'm currently trying to get realtime audio streaming over WebSockets to work. I'm encoding audio using libspeex on the server side (an iOS device) and decoding it again on the client using speex.js[1].

Transmitting Speex "frames" (where each frame is a 20ms chunk of audio) over WebSockets and decoding them in JS works beautifully. However, I have a really hard time queueing up these 20ms AudioBuffer nodes perfectly, without producing any glitches or pops. I'm not sure it's even possible with the current API.
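The closest I've come is keeping a running playhead and scheduling each decoded chunk at an absolute time instead of starting it "now". A rough sketch, assuming `socket` is the open WebSocket and `decodeSpeexFrame()` is a placeholder wrapping the speex.js decoder (returning a Float32Array of PCM samples):

    const SPEEX_RATE = 16000;          // adjust to the encoder's sample rate
    const ctx = new AudioContext();
    let playhead = 0;                  // absolute AudioContext time of the next chunk

    function enqueueChunk(samples) {
      const buf = ctx.createBuffer(1, samples.length, SPEEX_RATE);
      buf.getChannelData(0).set(samples);

      const src = ctx.createBufferSource();
      src.buffer = buf;
      src.connect(ctx.destination);

      // If we fell behind (network hiccup), restart slightly in the future.
      if (playhead < ctx.currentTime) {
        playhead = ctx.currentTime + 0.05;
      }
      src.start(playhead);             // schedule strictly back-to-back
      playhead += buf.duration;
    }

    socket.onmessage = (msg) => enqueueChunk(decodeSpeexFrame(msg.data));

Even with this, boundaries between separate one-shot source nodes aren't guaranteed to be seamless.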

What I would like to have is an AudioBuffer node to which I can dynamically append chunks while it's playing. Since all(?) source nodes are single-use, the browser could free the data that has already been played.

An AudioBuffer that allows modification of its data while it's playing would work too - you could just loop it and use it as a ring buffer. However, my understanding of the spec is that modifying an AudioBuffer's data is explicitly prohibited once it's playing.
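The workaround I'm left with is keeping my own ring buffer in JS and draining it from a ScriptProcessorNode, with all the main-thread caveats from upthread. A sketch (resampling from the Speex rate to ctx.sampleRate is omitted for brevity):

    const ctx = new AudioContext();
    const RING_SIZE = 65536;                  // samples of headroom
    const ring = new Float32Array(RING_SIZE);
    let writePos = 0, readPos = 0;

    function pushSamples(samples) {           // call with each decoded 20 ms frame
      for (let i = 0; i < samples.length; i++) {
        ring[writePos++ % RING_SIZE] = samples[i];
      }
    }

    const node = ctx.createScriptProcessor(2048, 1, 1);
    node.onaudioprocess = (e) => {
      const out = e.outputBuffer.getChannelData(0);
      for (let i = 0; i < out.length; i++) {
        // Emit silence when the network hasn't kept up, instead of popping.
        out[i] = readPos < writePos ? ring[readPos++ % RING_SIZE] : 0;
      }
    };
    node.connect(ctx.destination);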

[1] https://github.com/jpemartins/speex.js/

You might be interested in looking at what https://github.com/brion/ogv.js does.

Just curious, if your goal is realtime, why aren't you using WebRTC?

Thanks, I'll have a look!

I can't use WebRTC yet because I need support for IE and especially Safari & Mobile Safari. I want to add audio to an iOS app[1] that's using jsmpeg[2] for video streaming.

[1] http://instant-webcam.com/

[2] https://github.com/phoboslab/jsmpeg