Talk:MediaStreamAPI

The way multichannel data is laid out within the sample buffer does not seem to be clearly specified, and there's a lot of room for error there. Furthermore, if what we go with is interleaving samples into a single buffer (ABCDABCDABCD), I think we're leaving a lot of potential performance wins on the floor. I can think of use cases where, if each channel were specified as its own Float32Array, it would be possible to efficiently turn two monaural streams into a stereo audio mix without having to manually copy samples in JavaScript. Likewise, if we allow a 'stride' parameter for each of those channel arrays, interleaved source data still ends up costing nothing, which is the best of both worlds. This is sort of analogous to the way array binding works in classic OpenGL, and I feel like it's a sane model.
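
To make the stride idea concrete, here's a rough sketch of what a per-channel view could look like. None of these field names (samples, offset, stride) come from the draft; they're placeholders to show how the same JS loop could read planar and interleaved data without any copying.

 // Sketch only: each channel described as { samples, offset, stride }, so the
 // same read loop works for planar and interleaved layouts without copying.
 function channelSample(ch, frame) {
   return ch.samples[ch.offset + frame * ch.stride];
 }
 
 // Planar stereo: two separate Float32Arrays, stride 1.
 var planarL = { samples: new Float32Array(1024), offset: 0, stride: 1 };
 var planarR = { samples: new Float32Array(1024), offset: 0, stride: 1 };
 
 // Interleaved stereo (LRLRLR...): two views over one shared buffer, stride 2.
 var interleaved = new Float32Array(2048);
 var intL = { samples: interleaved, offset: 0, stride: 2 };
 var intR = { samples: interleaved, offset: 1, stride: 2 };
 
 // channelSample(planarL, i) and channelSample(intL, i) both read frame i of
 // the left channel; the interleaved buffer never has to be de-interleaved in JS.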


I don't like seeing relative times in APIs when the goal is precisely timed mixing and streaming. StreamProcessor::end should accept a timestamp at which processing should cease, instead of a delay relative to the current time. Likewise, I think Stream::live should be a readonly attribute, and Stream should expose a setLive method that takes the new liveness state as an argument, along with a timestamp at which the liveness state should change. It'd also be nice if volume changes could work the same way, but that might be a hard sell.
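
As a sketch of the shape I mean (the setLive and setVolume names, and the timestamp parameter on end, are my proposal here, not what the current draft specifies), everything would take an absolute stream time:

 // Compute an absolute timestamp once, then hand it to every call that
 // should take effect at that moment (proposed shape, not current spec).
 var stopAt = stream.currentTime + 5.0;
 
 processor.end(stopAt);         // processing ceases exactly at stopAt
 stream.setLive(false, stopAt); // stream.live itself would be readonly
 
 // If volume changes worked the same way (probably the harder sell):
 // stream.setVolume(0.0, stopAt);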


It would be nice to expose the current playback position as an attribute on a Stream, distinct from how much data has been buffered. The 'currentTime' attribute is ambiguous as to which of those two 'current times' it actually represents, so it would be nice to either make that clear (preferably with a more precise name, but at least with documentation) or, even better, expose both values.
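
For example, something like the following would remove the ambiguity; playbackPosition and bufferedTime are placeholder names for illustration, not a concrete naming proposal:

 // Placeholder attribute names, for illustration only: expose both notions of
 // "current time" rather than one ambiguous currentTime.
 //   stream.playbackPosition - how far playback has actually progressed (seconds)
 //   stream.bufferedTime     - how much data has been buffered so far (seconds)
 var headroom = stream.bufferedTime - stream.playbackPosition;
 console.log(headroom + ' seconds of audio queued ahead of the playhead');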