MediaStreamAPI

These are higher-level than use-cases.


1) Play video with a processing effect applied to the audio track (e.g. a high-pass filter)


2) Play video with processing effects, mixing in out-of-band audio tracks in sync (e.g. an audio commentary mixed in with audio ducking)


3) Capture microphone input and stream it out to a peer with a processing effect applied to the audio (e.g. Xbox 360-style chat with voice distortion)


4) Capture microphone input and visualize it as it is being streamed out to a peer and recorded (e.g. Internet radio broadcast)


5) Capture microphone input, visualize it, mix in another audio track and stream the result to a peer and record (e.g. Internet radio broadcast)


6) Receive audio streams from peers, mix them with spatialization effects, and play (e.g. live chat with spatial audio)


7) Seamlessly chain from the end of one input stream to another (e.g. playlists, audio/video editing)


8) Seamlessly switch from one input stream to another (e.g. adaptive streaming)


9) Synthesize samples from JS data (e.g. a game emulator or MIDI synthesizer)


10) Trigger a sound sample to be played through the effects graph ASAP but without causing any blocking (e.g. game sound effects)


11) Trigger a sound sample to be played through the effects graph at a given time (e.g. game sound effects)


12) Capture video from a camera and analyze it (e.g. face recognition)


13) Capture video and audio, record it to a file and upload the file (e.g. YouTube upload)


14) Capture video from a canvas element, record it and upload (e.g. Screencast/"Webcast", or composite multiple video sources with effects into a single canvas then record)


15) Synchronized MIDI + Audio capture


16) Synchronized MIDI + Audio playback
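
Most of the audio use-cases above (1, 3, 6) reduce to running per-sample processing over a stream's audio buffers. As a rough, API-agnostic sketch — the function name and parameters here are invented for illustration, not part of any proposal — use-case 1's high-pass filter can be written over a plain array of samples in JavaScript:

```javascript
// Illustrative sketch only: a first-order high-pass filter of the kind
// use-case 1 would apply to a video's audio track. Names and parameters
// are hypothetical, not drawn from any API in this document.
function highPass(samples, sampleRate, cutoffHz) {
  const rc = 1 / (2 * Math.PI * cutoffHz); // RC time constant for the cutoff
  const dt = 1 / sampleRate;               // time per sample
  const alpha = rc / (rc + dt);            // standard discretized RC high-pass coefficient
  const out = new Array(samples.length);
  out[0] = samples[0];
  for (let i = 1; i < samples.length; i++) {
    // y[i] = alpha * (y[i-1] + x[i] - x[i-1])
    out[i] = alpha * (out[i - 1] + samples[i] - samples[i - 1]);
  }
  return out;
}

// A constant (DC) signal should decay toward zero after high-pass filtering.
const dc = new Array(1000).fill(1);
const filtered = highPass(dc, 44100, 200);
```

In a stream-processing graph of the kind the straw-man proposal below describes, this kind of filter would sit as a processing node between the decoded audio track and the output, running over each buffer as it flows through.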


=== Straw-man Proposal ===