MediaStreamAPI

= Streams, RTC, audio API and media controllers  =
Moved! The latest version of the draft is now at [https://dvcs.w3.org/hg/audio/raw-file/tip/streams/StreamProcessing.html].

=== Scenarios  ===
 
These are higher-level than use-cases.
 
1) Play video with processing effect applied to the audio track
 
2) Play video with processing effects mixing in out-of-band audio tracks (in sync)
 
3) Capture microphone input and stream it out to a peer with a processing effect applied to the audio
 
4) Capture microphone input and visualize it as it is being streamed out to a peer and recorded
 
5) Capture microphone input, visualize it, mix in another audio track and stream the result to a peer and record
 
6) Receive audio streams from peers, mix them with spatialization effects, and play
 
7) Seamlessly chain from the end of one input stream to another
 
8) Seamlessly switch from one input stream to another, e.g. to implement adaptive streaming
 
9) Synthesize samples from JS data
 
10) Trigger a sound sample to be played through the effects graph ASAP but without causing any blocking
 
11) Synchronized MIDI + Audio capture
 
12) Synchronized MIDI + Audio playback (Would that just work if streams could contain MIDI data?)
 
13) Capture video from a camera and analyze it (e.g. face recognition)
 
14) Capture video, record it to a file and upload the file (e.g. YouTube)
 
15) Capture video from a canvas element, record it and upload (e.g. Screencast/"Webcast" or composite multiple video sources with effects into a single canvas then record)
 
=== Straw-man Proposal  ===
 
==== Streams  ====
 
The semantics of a stream:
 
*A window of timecoded video and audio data.
*The timecodes are in the stream's own internal timeline. The internal timeline can have any base offset but always advances at the same rate as real time, if it's advancing at all.
*Not seekable, resettable etc. The window moves forward automatically in real time (or close to it).
*A stream can be "blocked". While it's blocked, its timeline and data window do not advance.
 
Blocked state should be reflected in a new readyState value "BLOCKED". We should have a callback when the stream blocks and unblocks, too.
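 
For illustration only, a minimal sketch of how a page might observe blocking. The 'onblocked'/'onunblocked' handler names are assumptions; the proposal only says such callbacks should exist.
 
<script>
  var stream = document.getElementById("v").captureStream();
  // Hypothetical handler names; the proposal only requires that callbacks exist.
  stream.onblocked = function() {
    console.log("stream blocked at " + stream.currentTime);
  };
  stream.onunblocked = function() {
    console.log("stream unblocked at " + stream.currentTime);
  };
</script>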
 
We do not allow streams to have independent timelines (e.g. no adjustable playback rate or seeking within an arbitrary Stream), because that leads to a single Stream being consumed at multiple different offsets at the same time, which requires either unbounded buffering or multiple internal decoders and streams for a single Stream. It seems simpler and more predictable in performance to require authors to create multiple streams (if necessary) and change the playback rate in the original stream sources.
 
*Streams can end. The end state is reflected in the Stream readyState. A stream can never resume after it has ended.
 
Hard case:
 
*Mix http://slow with http://fast, and mix http://fast with http://fast2; does the http://fast stream have to provide data at two different offsets?
*Solution: if a (non-live) stream feeds into a blocking mixer, then it itself gets blocked. This has the same effect as the entire graph of (non-live) connected streams blocking as a unit (see the sketch below).
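 
A sketch of this case using the proposed API, where three media elements load the slow and fast resources (element ids are illustrative):
 
<script>
  var slow = document.getElementById("slow").captureStream();
  var fast = document.getElementById("fast").captureStream();
  var fast2 = document.getElementById("fast2").captureStream();
  var mixerA = slow.createProcessor();
  mixerA.addStream(fast);             // 'fast' feeds mixerA ...
  var mixerB = fast.createProcessor();
  mixerB.addStream(fast2);            // ... and mixerB at the same time
  // If 'slow' stalls, mixerA blocks. Because 'fast' is a non-live input of a
  // blocked mixer, 'fast' is blocked too, which in turn blocks mixerB and
  // 'fast2': the whole connected graph blocks as a unit instead of 'fast'
  // being consumed at two different offsets.
</script>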
 
==== Media elements  ====
 
interface HTMLMediaElement {
  // Returns a new stream of "what the element is playing" ---
  // whatever the element is currently playing, after its
  // volume and playbackRate are taken into account.
  // While the element is not playing (e.g. because it's paused
  // or buffering), the stream is blocked. This stream never
  // ends; if the element ends playback, the stream just blocks
  // and can resume if the element starts playing again.
  // When something else causes this stream to be blocked,
  // we block the output of the media element.
  Stream createStream();
  // Like createStream(), but also sets the captureAudio attribute.
  Stream captureStream();
  // When set, do not produce direct audio output. Audio output
  // is still sent to the streams created by createStream() or
  // captureStream().
  // This attribute is NOT reflected into the DOM. It's initially false.
  attribute boolean captureAudio;
  // Can be set to a Stream. Blocked streams play silence and show the last video frame.
  attribute any src;
};
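 
A short sketch of the intended difference between createStream() and captureStream() (the element id is illustrative):
 
<script>
  var v = document.getElementById("v");
  // createStream(): the element keeps producing its own audio output; the
  // stream is an extra copy of "what the element is playing".
  var monitor = v.createStream();
  // captureStream(): additionally sets captureAudio, so the element itself is
  // silenced and its audio is only heard through consumers of the stream.
  var captured = v.captureStream();
  // Roughly equivalent to the captureStream() call above:
  //   v.captureAudio = true; var captured = v.createStream();
</script>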
 
==== Stream extensions  ====
 
Streams can have attributes that transform their output:
 
interface Stream {
  attribute double volume;
  // When set, destinations treat the stream as not blocking. While the stream is
  // blocked, its data are replaced with silence.
  attribute boolean live;
  // Time on its own timeline
  readonly attribute double currentTime;
  // Create a new StreamProcessor with this Stream as the input.
  StreamProcessor createProcessor();
  // Create a new StreamProcessor with this Stream as the input,
  // initializing worker.
  StreamProcessor createProcessor(Worker worker);
};
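 
A short sketch using these attributes (the element id and the worker script name are illustrative):
 
<script>
  var background = document.getElementById("back").captureStream();
  background.volume = 0.25;  // duck the background track
  background.live = true;    // if it stalls, feed silence downstream instead of blocking
  var processed = background.createProcessor(new Worker("effect.js"));
  // currentTime reports the position on the stream's own internal timeline.
  setInterval(function() { console.log(background.currentTime); }, 1000);
</script>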
 
==== Stream mixing and processing  ====
 
[Constructor]
interface StreamProcessor : Stream {
  readonly attribute Stream[] inputs;
  void addStream(Stream input);
  void setInputParams(Stream input, any params);
  void removeStream(Stream input);
  // Causes this stream to enter the ended state.
  // No more worker callbacks will be issued.
  void end(float delay);
  attribute Worker worker;
};
 
This object combines multiple streams with synchronization to create a new stream. While any input stream is blocked and not live, the StreamProcessor is blocked. While the StreamProcessor is blocked, all its input streams are forced to be blocked. (Note that this can cause other StreamProcessors using the same input stream(s) to block, etc.)
 
The offset from the timeline of an input to the timeline of the StreamProcessor is set automatically when the stream is added to the StreamProcessor.
 
While 'worker' is null, the output is produced simply by adding the streams together. Video frames are composited with the last-added stream on top, everything letterboxed to the size of the last-added stream that has video. While there is no input stream, the StreamProcessor produces silence and no video.
 
While 'worker' is non-null, the results of mixing (or the default silence) are fed into the worker by dispatching onprocessstream callbacks. Each onprocessstream callback takes a StreamEvent as a parameter. A StreamEvent provides audio sample buffers and a list of video frames for each input stream; the event callback can write audio output buffers and a list of output video frames. If the callback does not output audio, default audio output is automatically generated as above; ditto for video. Each StreamEvent contains the inputParams for each input stream contributing to the StreamEvent.
 
Note that 'worker' cannot be a SharedWorker. This ensures that the worker can run in the same process as the page in multiprocess browsers, so media streams can be confined to a single process.
 
An ended stream is treated as producing silence and no video. (Alternative: automatically remove the stream as an input. But this might confuse scripts.)
 
interface DedicatedWorkerGlobalScope {
  attribute Function onprocessstream;
  attribute float streamRewindMax;
};
interface StreamEvent {
  attribute any inputParams[];
  attribute float rewind;
  attribute long audioSampleRate;    // e.g. 44100
  attribute short audioChannelCount;  // Mapping per Vorbis specification
  attribute FloatArray audioInputs[];
  void writeAudio(FloatArray data);
};
 
'inputParams' provides access to structured clones of the latest parameters set for each input stream.
 
'audioSampleRate' and 'audioChannelCount' represent the format of the samples. The sample buffers for all input streams are automatically converted to a common format by the UA, typically the highest-fidelity format (to avoid lossy conversion). 'audioInputs' gives access to the audio samples for each input stream. The length of each sample buffer will be a multiple of 'audioChannelCount'. The samples are floats ranging from -1 to 1. The lengths of the sample buffers will be equal. Streams with no audio produce a buffer containing silence.
 
'writeAudio' writes audio data to the stream output. If 'writeAudio' is not called before the event handler returns, the inputs are automatically mixed and written to the output. The format of the output is the same as the inputs; the 'data' array length must be a multiple of audioChannelCount. 'writeAudio' can be called more than once during an event handler; the data will be appended to the output stream.
 
There is no requirement that the amount of data output match the input buffer length. A filter with a delay will output less data than the size of the input buffer, at least during the first event; the UA will compensate by trying to buffer up more input data and firing the event again to get more output. A synthesizer with no inputs can output as much data as it wants; the UA will buffer data and fire events as necessary. Filters that misbehave, e.g. by continuously writing zero-length buffers, will cause the stream to block.
 
To support graph changes with low latency, we might need to throw out processed samples that have already been buffered and reprocess them. The 'rewind' attribute indicates how far back in the stream's history we have moved before the current inputs start. It is a non-negative value less than or equal to the value of streamRewindMax on entry to the event handler. The default value of streamRewindMax is zero so by default 'rewind' is always zero; filters that support rewinding need to opt into it.
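 
For illustration, a sketch of what a worker script such as the effect.js used in the examples below might look like, given the onprocessstream/StreamEvent interface above. The simple gain effect and the 0.5 gain value are illustrative only.
 
// effect.js -- runs in the DedicatedWorkerGlobalScope of the processor's worker.
onprocessstream = function(event) {
  var inputs = event.audioInputs;
  if (inputs.length == 0)
    return;                          // no input yet; the default output (silence) is used
  var length = inputs[0].length;     // all input sample buffers have equal length
  var output = new FloatArray(length);
  // Mix the inputs ourselves, then apply a fixed gain.
  for (var i = 0; i < inputs.length; ++i) {
    for (var j = 0; j < length; ++j) {
      output[j] += inputs[i][j];
    }
  }
  var gain = 0.5;
  for (var j = 0; j < length; ++j) {
    output[j] *= gain;
  }
  // The output length must be a multiple of event.audioChannelCount;
  // it is here, because it equals the input length.
  event.writeAudio(output);
};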
 
==== Graph cycles  ====
 
If a cycle is formed in the graph, the streams involved block until the cycle is removed.
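 
For example (a sketch; 'source' stands for any existing Stream):
 
<script>
  var a = source.createProcessor();
  var b = a.createProcessor();
  a.addStream(b);      // forms the cycle a -> b -> a; both streams block
  // Removing either edge breaks the cycle and lets the streams resume:
  a.removeStream(b);
</script>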
 
==== Dynamic graph changes  ====
 
Dynamic graph changes performed by a script take effect atomically after the script has run to completion. Effectively we post a task to the HTML event loop that makes all the pending changes. The exact timing is up to the implementation but the implementation should try to minimize the latency of changes.
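 
For example (a sketch, with names illustrative): both changes below take effect together, in one task, after the calling script has run to completion, so no consumer ever sees the intermediate graph with the old input removed but the new one not yet added.
 
<script>
  function swapInput(mixer, oldStream, newStream) {
    mixer.removeStream(oldStream);
    mixer.addStream(newStream);
    // Both graph changes are applied atomically once this script returns.
  }
</script>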
 
==== Canvas Recording ====
 
To enable video synthesis and some easy kinds of video effects we can record the contents of a canvas:
 
interface HTMLCanvasElement {
  Stream createStream();
};
 
'createStream' produces a stream containing the "live" contents of the canvas as video frames, and no audio.
 
==== Examples  ====
 
1) Play video with processing effect applied to the audio track
 
<video src="foo.webm" id="v" controls></video>
<audio id="out" autoplay></audio>
<script>
  document.getElementById("out").src =
    document.getElementById("v").captureStream().createProcessor(new Worker("effect.js"));
</script>
 
2) Play video with processing effects mixing in out-of-band audio tracks (in sync)
 
<video src="foo.webm" id="v"></video>
<audio src="back.webm" id="back"></audio>
<audio id="out" autoplay></audio>
<script>
  var mixer = document.getElementById("v").captureStream().createProcessor(new Worker("audio-ducking.js"));
  mixer.addStream(document.getElementById("back").captureStream());
  document.getElementById("out").src = mixer;
  function startPlaying() {
    document.getElementById("v").play();
    document.getElementById("back").play();
  }
  // We probably need additional API to more conveniently tie together
  // the controls for multiple media elements.
</script>
 
3) Capture microphone input and stream it out to a peer with a processing effect applied to the audio
 
<script>
  navigator.getUserMedia('audio', gotAudio);
  function gotAudio(stream) {
    peerConnection.addStream(stream.createProcessor(new Worker("effect.js")));
  }
</script>
 
4) Capture microphone input and visualize it as it is being streamed out to a peer and recorded
 
<canvas id="c"></canvas>
<script>
  navigator.getUserMedia('audio', gotAudio);
  var streamRecorder;
  function gotAudio(stream) {
    var worker = new Worker("visualizer.js");
    var processed = stream.createProcessor(worker);
    worker.onmessage = function(event) {
      drawSpectrumToCanvas(event.data, document.getElementById("c"));
    }
    streamRecorder = processed.record();
    peerConnection.addStream(processed);
  }
</script>
 
5) Capture microphone input, visualize it, mix in another audio track and stream the result to a peer and record
 
<canvas id="c"></canvas>
<mediaresource src="back.webm" id="back"></mediaresource>
<script>
  navigator.getUserMedia('audio', gotAudio);
  var streamRecorder;
  function gotAudio(stream) {
    var worker = new Worker("visualizer.js");
    var processed = stream.createProcessor(worker);
    worker.onmessage = function(event) {
      drawSpectrumToCanvas(event.data, document.getElementById("c"));
    }
    var mixer = processed.createProcessor();
    mixer.addStream(document.getElementById("back").startStream());
    streamRecorder = mixer.record();
    peerConnection.addStream(mixer);
  }
</script>
 
6) Receive audio streams from peers, mix them with spatialization effects, and play
 
<audio id="out" autoplay></audio>
<script>
  var worker = new Worker("spatializer.js");
  var spatialized = new StreamProcessor(worker);
  peerConnection.onaddstream = function (event) {
    spatialized.addStream(event.stream);
    spatialized.setInputParams(event.stream, {x:..., y:..., z:...});
  };
  document.getElementById("out").src = spatialized; 
</script>
 
7) Seamlessly chain from the end of one input stream to another
 
<mediaresource src="in1.webm" id="in1" preload></mediaresource>
<mediaresource src="in2.webm" id="in2"></mediaresource>
<audio id="out" autoplay></audio>
<script>
  var in1 = document.getElementById("in1");
  in1.onloadeddata = function() {
    var mixer = in1.startStream().createProcessor();
    var in2 = document.getElementById("in2");
    in2.delay = in1.duration;
    mixer.addStream(in2.startStream());
    document.getElementById("out").src = mixer;
  }
</script>
 
8) Seamlessly switch from one input stream to another, e.g. to implement adaptive streaming
 
<mediaresource src="in1.webm" id="in1" preload></mediaresource>
<mediaresource src="in2.webm" id="in2"></mediaresource>
<audio id="out" autoplay></audio>
<script>
  var stream1 = document.getElementById("in1").startStream();
  var mixer = stream1.createProcessor();
  document.getElementById("out").src = mixer;
  function switchStreams() {
    var in2 = document.getElementById("in2");
    in2.currentTime = stream1.currentTime;
    var stream2 = in2.startStream();
    stream2.volume = 0;
    stream2.live = true; // don't block while this stream is playing
    mixer.addStream(stream2);
    stream2.onplaying = function() {
      if (mixer.inputs[0] == stream1) {
        stream2.volume = 1.0;
        stream2.live = false; // allow output to block while this stream is playing
        mixer.removeStream(stream1);
      }
    }
  }
</script>
 
9) Synthesize samples from JS data
 
<audio id="out" autoplay></audio>
<script>
  document.getElementById("out").src =
    new StreamProcessor(new Worker("synthesizer.js"));
</script>
 
10) Trigger a sound sample to be played through the effects graph ASAP but without causing any blocking
 
<script>
  var effectsMixer = ...;
  function playSound(src) {
    var audio = new Audio(src);
    audio.oncanplaythrough = function() {
      var stream = audio.captureStream();
      stream.live = true;
      effectsMixer.addStream(stream);
      stream.onended = function() { effectsMixer.removeStream(stream); }
      audio.play();
    }
  }
</script>
 
13) Capture video from a camera and analyze it (e.g. face recognition)
 
<script>
  navigator.getUserMedia('video', gotVideo);
  function gotVideo(stream) {
    stream.createProcessor(new Worker("face-recognizer.js"));
  }
</script>
 
14) Capture video, record it to a file and upload the file (e.g. YouTube)
 
<script>
  navigator.getUserMedia('video', gotVideo);
  var streamRecorder;
  function gotVideo(stream) {
    streamRecorder = stream.record();
  }
  function stopRecording() {
    streamRecorder.getRecordedData(gotData);
  }
  function gotData(blob) {
    var x = new XMLHttpRequest();
    x.open('POST', 'uploadMessage');
    x.send(blob);
  }
</script>
 
15) Capture video from a canvas, record it to a file then upload
 
<canvas width="640" height="480" id="c"></canvas>
<script>
  var canvas = document.getElementById("c"); 
  var streamRecorder = canvas.createStream().record();
  function stopRecording() {
    streamRecorder.getRecordedData(gotData);
  }
  function gotData(blob) {
    var x = new XMLHttpRequest();
    x.open('POST', 'uploadMessage');
    x.send(blob);
  }
  var frame = 0;
  function updateCanvas() {
    var ctx = canvas.getContext("2d");
    ctx.clearRect(0, 0, 640, 480);
    ctx.fillText("Frame " + frame, 0, 200);
    ++frame;
  }
  setInterval(updateCanvas, 30);
</script>
 
= Related Proposals  =
 
W3C-RTC charter (Harald et al.): [[RTCStreamAPI]]
 
WhatWG proposal (Ian et al.): [http://www.whatwg.org/specs/web-apps/current-work/complete/video-conferencing-and-peer-to-peer-communication.html]
 
Chrome audio API: [http://chromium.googlecode.com/svn/trunk/samples/audio/specification/specification.html]
