Having a readonly attribute named 'rewind' strikes me as confusing. It reads like a verb. It being a relative value is also kind of awkward; if the end user is always going to subtract it from the current stream offset, it'd be much better if we just actually *exposed* the 'real time' as an attribute, that already factors things in. People who want a relative value can then subtract the 'currentlyMixingTime' from the 'currentRealTime', or something like that.
''roc: we could expose two times, but in most cases the author would just have to compute the difference and use that. I'm not sure that's an improvement.''
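For illustration, a minimal sketch of the two-attribute alternative suggested above; 'currentRealTime' and 'currentlyMixingTime' are placeholder names from this comment, not attributes in the draft:

<pre>
// Hypothetical attributes, not in the current draft:
//   stream.currentRealTime:     absolute position with the rewind already factored in
//   stream.currentlyMixingTime: position the mixer is currently writing at
var playbackPosition = stream.currentRealTime;  // common case: just read it

// Anyone who still wants the relative value can derive it:
var rewindAmount = stream.currentRealTime - stream.currentlyMixingTime;
</pre>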
The way that writeAudio simply 'writes' at the current location of the audio cursor makes me nervous. I feel like it doesn't do enough to make it clear exactly what's going on in the various use cases (rewound stream, prebuffering samples before they're played, etc), especially because we attempt to promise that all streams will advance in sync with real time. Maybe this is okay because rewinding is opt-in, though.
''roc: I would prefer not to add an extra parameter just so that the API can throw if the author passes in the wrong value. If we can find another use for it, sure. And yeah, rewind is an advanced feature so I don't care so much if it's tricky to use.''
The way in which multichannel data is laid out within the sample buffer does not seem to be clearly specified, and there's lots of room for error there. Furthermore, if what we go with is interleaving samples into a single buffer (ABCDABCDABCD), I think we're leaving a lot of potential performance wins on the floor there. I can think of use cases where if each channel were specified as its own Float32Array, it would make it possible to efficiently turn two monaural streams into a stereo audio mix without having to manually copy samples with javascript. Likewise, if we allow a 'stride' parameter for each of those channel arrays, interleaved source data still ends up costing nothing, which is the best of both worlds. This is sort of analogous to the way binding arrays in classic OpenGL works, and I feel like it's a sane model.
''roc: I think I'll just go with non-interleaved for now. That's what Chrome's Web Audio API does.''
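A rough sketch of the difference between the two layouts, and of the mono-to-stereo case mentioned above; the writeAudio signature shown here is an assumption for illustration, not the draft's actual IDL:

<pre>
// leftMono and rightMono are Float32Arrays of 'frames' samples each,
// one per monaural source.

// Interleaved: one buffer, samples alternate per channel (LRLRLR...).
// Building a stereo mix from two mono sources means copying every sample in script:
var interleaved = new Float32Array(frames * 2);
for (var i = 0; i < frames; i++) {
  interleaved[2 * i]     = leftMono[i];
  interleaved[2 * i + 1] = rightMono[i];
}

// Non-interleaved: one Float32Array per channel. The two mono buffers can be
// handed over directly, with no per-sample copying in JavaScript.
// (writeAudio taking an array of channel buffers is assumed here.)
event.writeAudio([leftMono, rightMono]);
</pre>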
I don't like seeing relative times in APIs when we're talking about trying to do precisely timed mixing and streaming. StreamProcessor::end should accept a timestamp at which processing should cease, instead of a delay relative to the current time. Likewise, I think Stream::live should be a readonly attribute, and Stream should expose a setLive method that takes the new liveness state as an argument, along with a timestamp at which the liveness state should change. It'd also be nice if volume changes could work the same way, but that might be a hard sell. Explicit timing also has the large benefit that it prevents us from having to 'batch' audio changes based on some certain threshold - we can simply accept a bunch of audio change events, and apply them at the desired timestamp (give or take the skew that results from whatever interval we mix at internally). This is much less 'magical' than the batching we suggest, and it also is less likely to break mysteriously if some other browser vendor does their batching differently.
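A sketch of what the explicitly timed variants proposed above could look like; setLive, the end-with-timestamp form, and the timed volume change are suggestions from this comment, not methods in the draft, and stream.currentTime is assumed here as the stream's clock:

<pre>
// Schedule everything against the stream's own timeline instead of
// "some delay from whenever this script happens to run".
var when = stream.currentTime + 0.5;  // half a second ahead, in stream time

processor.end(when);          // processing ceases exactly at 'when'
stream.setLive(false, when);  // liveness flips at 'when' (stream.live stays readonly)
stream.setVolume(0.25, when); // hypothetical timed volume change

// The implementation just queues (timestamp, change) pairs and applies each one
// when its time arrives, give or take the internal mixing quantum, with no
// implementation-defined batching window needed.
</pre>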