Media/WebRTC/WebRTCE10S
Revision as of 20:32, 6 April 2013
Introduction
The WebRTC architecture for desktop (Media/WebRTC/Architecture) is based on a single-process model in which the WebRTC code can directly access platform resources. B2G, however, has a split-process architecture (B2G/Architecture) in which the renderer/content process runs in a sandbox and has limited access to platform resources. Generally, it accesses them by making IPC calls via IPDL (https://developer.mozilla.org/en-US/docs/IPDL).
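To illustrate the shape of such an IPC call, here is a hypothetical IPDL protocol sketch (the protocol name, actor, and message names below are invented for illustration and are not the actual B2G media protocol): a sandboxed content-process actor asks the chrome/parent process for camera access, and the parent replies asynchronously.

<pre>
// Hypothetical protocol: content process requests a platform resource
// (a camera) from the parent process over IPDL.
protocol PMediaAccess
{
parent:
    // Sent by the content process; the parent performs the actual
    // device access outside the sandbox.
    async RequestCamera(nsString constraints);

child:
    // Parent's asynchronous replies back into the content process.
    async CameraGranted(int32_t streamId);
    async CameraDenied(nsString reason);
};
</pre>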
The current architectural split is shown below:

https://raw.github.com/mozilla/webrtc/master/planning/architecture-simplified.png
In this diagram, physical devices are shown on the left and components of the browser on the right. Note that a number of arrows pass through the DOM/JS layer: MediaStreams are mediated by DOM/JS, i.e., JS is responsible for plumbing MediaStreams between gUM and the PeerConnection, and between the PeerConnection and the video/audio tags. This does not mean that the media itself actually flows through the JS, however.
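The "plumbing" role of JS described above can be sketched as follows, using the 2013-era moz-prefixed APIs (exact names varied across releases; treat this as illustrative, and note that the element id "remote" is an assumption):

<pre>
// JS wires a gUM MediaStream into a PeerConnection, and the remote
// stream into a <video> tag; the media itself flows in native code.
var pc = new mozRTCPeerConnection();

navigator.mozGetUserMedia(
  { video: true, audio: true },
  function (localStream) {
    // gUM handed a MediaStream to JS; plumb it into the PeerConnection.
    pc.addStream(localStream);
  },
  function (err) {
    console.error("getUserMedia failed:", err);
  }
);

// Plumb the remote MediaStream from the PeerConnection into a video tag.
pc.onaddstream = function (event) {
  var video = document.getElementById("remote");
  video.mozSrcObject = event.stream;
  video.play();
};
</pre>

JS only passes MediaStream handles between objects here; the actual audio/video data stays down in the media stack.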