MediaStreamAPI

   // Causes this stream to enter the ended state.
   // No more worker callbacks will be issued.
   void end(float delay);
   
   attribute Worker worker;
Note that 'worker' cannot be a SharedWorker. This ensures that the worker can run in the same process as the page in multiprocess browsers, so media streams can be confined to a single process.

An ended stream is treated as producing silence and no video. (Alternative: automatically remove the stream as an input. But this might confuse scripts.)


// XXX need to figure out the actual StreamEvent API: channel formats, etc.
interface DedicatedWorkerGlobalScope {
  attribute Function onprocessstream;
  attribute float streamRewindMax;
};
interface StreamEvent {
  attribute any inputParams[];
  attribute float rewind;
  attribute long audioSampleRate;    // e.g. 44100
  attribute short audioChannelCount;  // Mapping per Vorbis specification
  attribute FloatArray audioInputs[];
  void writeAudio(FloatArray data);
};
 
'inputParams' provides access to structured clones of the latest parameters set for each input stream.
 
'audioSampleRate' and 'audioChannelCount' represent the format of the samples. The sample buffers for all input streams are automatically converted to a common format by the UA, typically the highest-fidelity format (to avoid lossy conversion). 'audioInputs' gives access to the audio samples for each input stream. The length of each sample buffer will be a multiple of 'audioChannelCount'. The samples are floats ranging from -1 to 1. The lengths of the sample buffers will be equal.
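As a sketch of the interleaved layout described above, a small helper (illustrative only, not part of the API) can split an input buffer into per-channel arrays; the sample for frame i, channel c sits at index i * audioChannelCount + c:

```javascript
// Hypothetical helper illustrating the interleaved sample layout.
// The buffer length is always a multiple of channelCount, and each
// sample is a float in [-1, 1].
function deinterleave(data, channelCount) {
  const frames = data.length / channelCount;
  const channels = [];
  for (let c = 0; c < channelCount; c++) {
    const ch = new Float32Array(frames);
    for (let i = 0; i < frames; i++) {
      ch[i] = data[i * channelCount + c];
    }
    channels.push(ch);
  }
  return channels;
}
```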
 
'writeAudio' writes audio data to the stream output. If 'writeAudio' is not called before the event handler returns, the inputs are automatically mixed and written to the output. The format of the output is the same as the inputs; the 'data' array length must be a multiple of audioChannelCount. 'writeAudio' can be called more than once during an event handler; the data will be appended to the output stream.
 
There is no requirement that the amount of data output match the input buffer length. A filter with a delay will output less data than the size of the input buffer, at least during the first event; the UA will compensate by trying to buffer up more input data and firing the event again to get more output. A synthesizer with no inputs can output as much data as it wants; the UA will buffer data and fire events as necessary. Filters that misbehave, e.g. by continuously writing zero-length buffers, will cause the stream to block.
 
To support graph changes with low latency, we might need to throw out processed samples that have already been buffered and reprocess them. The 'rewind' attribute indicates how far back in the stream's history we have moved before the current inputs start. It is a non-negative value less than or equal to the value of streamRewindMax on entry to the event handler. The default value of streamRewindMax is zero so by default 'rewind' is always zero; filters that support rewinding need to opt into it.


==== Graph cycles  ====