Talk:MediaStreamAPI

If the worker just writes out timestamped buffers, it can dynamically signal its latency, and the scheduler can make its own decision about what constitutes an underrun. However, in realtime contexts (conferencing and games) it is helpful to optimize for the lowest possible latency. To assist this, it's helpful if the workers (and internal elements like codecs and playback sinks) can advertise their expected latency. The scheduler can then sum these over the pipeline to determine a more aggressive minimal time separation to maintain between the sources and sinks.
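The latency-summing idea above can be sketched in a few lines. This is a minimal illustration, not anything from the proposed API: the `expectedLatencyMs` field, the element names, and the safety margin are all hypothetical.

```javascript
// Sketch: summing per-element advertised latencies to derive a minimal
// scheduling separation for a realtime pipeline. The expectedLatencyMs
// property and the margin are hypothetical, not part of any spec.
const pipeline = [
  { name: "capture",  expectedLatencyMs: 10 },
  { name: "encoder",  expectedLatencyMs: 5 },
  { name: "network",  expectedLatencyMs: 40 },
  { name: "decoder",  expectedLatencyMs: 5 },
  { name: "playback", expectedLatencyMs: 10 },
];

function minimalSeparationMs(elements, marginMs = 5) {
  // Sum each element's advertised latency, then add a small safety margin
  // so transient jitter does not immediately cause an underrun.
  const total = elements.reduce((sum, el) => sum + el.expectedLatencyMs, 0);
  return total + marginMs;
}

console.log(minimalSeparationMs(pipeline)); // 75
```

A scheduler could recompute this sum whenever an element re-advertises its latency, tightening or loosening the separation dynamically.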


One possible resolution is just to use an aggressive schedule for all realtime streams, and leave it to developers to discover in testing what they can get away with on current systems.
 
== Processing graph only works on uncompressed data ==
 
This depends quite a bit on the details of the StreamEvent objects, which are as yet unspecified, but it sounds as if there's a filter graph, with data proceeding from source to sink, but with only uncompressed data types. This is a helpful simplification for initial adoption, but it also severely limits the scope of sophisticated applications. Most filter graph APIs have a notion of data types on each stream connection, so encoders, decoders and muxers are possible worker types, as well as filters which work on compressed data; anything that doesn't need to talk directly to hardware could be written in JavaScript. As it stands, the API seems to disallow implementations of: compressed stream copying and editing for things like efficient frame dup/drop to maintain sync, keyframe detection for optimizing switching, new codecs written in JavaScript, and feeding compressed data obtained elsewhere into the graph.
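To make the typed-connection point concrete, here is a sketch of how a filter graph with per-stream data types could validate links, so that compressed formats flow between elements just like raw ones. All names here (`outputType`, `inputTypes`, `connect`, the MIME-style type strings) are hypothetical illustrations, not part of the proposal.

```javascript
// Sketch: a filter-graph connection check with per-stream data types.
// Typed links are what would let a decoder accept "video/vp8" and emit
// "video/raw", or let a compressed stream be copied without decoding.
function connect(upstream, downstream) {
  // Only link elements whose declared types are compatible.
  if (!downstream.inputTypes.includes(upstream.outputType)) {
    throw new Error(
      `cannot connect ${upstream.outputType} -> [${downstream.inputTypes}]`);
  }
  return { from: upstream, to: downstream };
}

const demuxer = { outputType: "video/vp8", inputTypes: ["video/webm"] };
const decoder = { outputType: "video/raw", inputTypes: ["video/vp8"] };
const sink    = { outputType: null,        inputTypes: ["video/raw"] };

connect(demuxer, decoder); // ok: compressed VP8 passes between elements
connect(decoder, sink);    // ok: raw frames reach the playback sink
```

With only uncompressed types, the demuxer-to-decoder link above is impossible in script, which is exactly the class of application the current draft rules out.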