<h2> Overall Architecture </h2>
<p>Note: see [[Media/WebRTC/WebRTCE10S]] for the architecture on B2G with E10S</p>
<p>https://github.com/mozilla/webrtc/raw/master/planning/architecture.png
</p><p><br />
==== PeerConnection vs. CC_Call ====
The PeerConnection is 1:1 with CC_Call, so calling CreateOffer effectively kicks off the offer process on an existing call. This lets us reuse the SIPCC state machine to some extent to manage the call state. Subsequent calls to the other JSEP APIs, such as setRemoteDescription and setLocalDescription, run on the same call and use the same state machine. There is a global singleton, PeerConnectionCtx, which handles callbacks/notifications from SIPCC.
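Roughly, the relationships above can be sketched as follows. This is a minimal illustration only: the class shapes, method names, and state strings are invented for explanation and are not the actual mozilla-central definitions.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Stand-in for the SIPCC call object; one per PeerConnection.
struct CC_Call {
  std::string state = "IDLE";
};

class PeerConnection {
 public:
  explicit PeerConnection(int callId) : callId_(callId) {}
  int CallId() const { return callId_; }
  // Each JSEP call drives the same underlying call state machine.
  void CreateOffer()          { call_.state = "OFFERING"; }
  void SetRemoteDescription() { call_.state = "ANSWERED"; }
  const std::string& State() const { return call_.state; }
 private:
  int callId_;
  CC_Call call_;  // 1:1 -- each PeerConnection owns exactly one call
};

// Global singleton that demultiplexes SIPCC callbacks/notifications
// back to the PeerConnection that owns the call in question.
class PeerConnectionCtx {
 public:
  static PeerConnectionCtx& Instance() {
    static PeerConnectionCtx ctx;
    return ctx;
  }
  std::shared_ptr<PeerConnection> CreatePeerConnection(int callId) {
    auto pc = std::make_shared<PeerConnection>(callId);
    byCall_[callId] = pc;
    return pc;
  }
  std::shared_ptr<PeerConnection> Lookup(int callId) const {
    auto it = byCall_.find(callId);
    return it == byCall_.end() ? nullptr : it->second;
  }
 private:
  std::map<int, std::shared_ptr<PeerConnection>> byCall_;
};
```

The key point the sketch captures is that the JSEP entry points are methods on an object that owns exactly one call, while the singleton only routes events by call id.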
==== Mtransport Interface ====
The mtransport (ICE, DTLS) subsystem is pretty independent of SIPCC. Roughly speaking, they are wired up as follows:
* The PeerConnection creates ICE objects (NrIceCtx) as soon as it starts up. It creates as many as the number of m-lines we expect to need.
* When SIPCC (lsm) determines that a new media flow is required, it stands up a MediaPipeline (containing the MediaConduit [codecs], SRTP contexts, and a TransportFlow (DTLS, ICE, etc.)).
Note that each MediaPipeline is one-way, so a two-way audio flow has two media pipelines. However, since you are doing symmetric RTP, you likely have two MediaPipelines for each TransportFlow, though there may be two TransportFlows for each MediaPipeline if RTCP is not muxed.
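The cardinality described above can be modeled concretely. The types below are simplified stand-ins, not the real mozilla classes: with symmetric RTP, the send and receive pipelines for one m-line share one TransportFlow, and when RTCP is not muxed each pipeline additionally references a second flow for RTCP.

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct TransportFlow {};  // stands in for an ICE+DTLS transport

// A MediaPipeline is one-way; a two-way audio flow needs two of them.
struct MediaPipeline {
  enum class Direction { Send, Receive };
  Direction dir;
  std::shared_ptr<TransportFlow> rtp;
  std::shared_ptr<TransportFlow> rtcp;  // same object as rtp when RTCP is muxed
};

// Build the pipelines for one two-way audio m-line.
std::vector<MediaPipeline> MakeAudioPipelines(bool rtcpMux) {
  auto rtp = std::make_shared<TransportFlow>();
  auto rtcp = rtcpMux ? rtp : std::make_shared<TransportFlow>();
  return {
      {MediaPipeline::Direction::Send, rtp, rtcp},
      {MediaPipeline::Direction::Receive, rtp, rtcp},
  };
}
```

With rtcp-mux there is exactly one TransportFlow shared by both pipelines; without it there are two flows (RTP and RTCP), each still shared by the send and receive pipelines.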
==== Internal vs 3rd party code ====
<p>https://github.com/nils-ohlmeier/firefox-webrtc-documentation/raw/master/Firefox-WebRTC-internal-3rdparty.png
</p><br>
Colors in the diagram:
* White: Mozilla's own code
* Orange: 3rd party code
* Green: webrtc.org code shared with Google Chrome
== List of Components ==
The system has the following individual components, in no particular order:
* PeerConnection
** PeerConnection.js -- shim translation layer that adapts the DOM API to the C++ implementation.
** PeerConnectionImpl -- C++ implementation of the PeerConnection interface.
** SIPCC -- handles SDP and media negotiation. Provided by Cisco, but not a downstream.
* Media
** Webrtc.org/GIPS -- handles media encoding and decoding. Downstream from Google.
** MediaConduit -- generic wrapper around Webrtc.org.
** MediaPipeline -- wrapper that holds the MediaConduit, the mtransport subsystem, and the SRTP contexts, and interfaces with MediaStreams.
* Transport
** mtransport -- generic transport subsystem with implementations for ICE, DTLS, etc.
** NSS -- new DTLS stack. Mentioned because we need to land the new version of NSS.
** nICEr -- ICE stack; downstream from the reSIProcate project.
** nrappkit -- portable runtime and utility library; downstream from nrappkit.sourceforge.net.
* DataChannel
** DataChannel implementation in the DOM.
** libsctp -- SCTP implementation; downstream from the BSD SCTP guys.
Internally, <tt>cprSendMessage()</tt> is a write to a unix domain socket, and the receiving thread
needs to loop around <tt>cprGetMessage()</tt>, as in <tt>ccapp_task.c</tt>.
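The actual dispatch loop lives in <tt>ccapp_task.c</tt> and is not reproduced here. As a rough, self-contained analogue of the pattern (the queue class, message strings, and function names below are invented for illustration and are not CPR's actual API), the receiving thread blocks on a queue and dispatches messages until told to stop:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Minimal stand-in for the CPR message queue: Send() plays the role of
// cprSendMessage(), and Get() the blocking cprGetMessage().
class MessageQueue {
 public:
  void Send(std::string msg) {
    std::lock_guard<std::mutex> lock(m_);
    q_.push(std::move(msg));
    cv_.notify_one();
  }
  std::string Get() {  // blocks until a message is available
    std::unique_lock<std::mutex> lock(m_);
    cv_.wait(lock, [this] { return !q_.empty(); });
    std::string msg = std::move(q_.front());
    q_.pop();
    return msg;
  }
 private:
  std::mutex m_;
  std::condition_variable cv_;
  std::queue<std::string> q_;
};

// The receiving thread loops around Get() until it sees a shutdown message.
void TaskLoop(MessageQueue& q, int& handled) {
  for (;;) {
    std::string msg = q.Get();
    if (msg == "SHUTDOWN") break;
    ++handled;  // real code would dispatch on the message type here
  }
}
```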
=== Signaling System: AddStream ===
There are probably many ways to implement the AddStream API, but I present this one for discussion.
AddStream would be an API on the PeerConnection backend interface that takes a MediaStream pointer as a parameter.
When called, the MediaStream is stored in a container in the PeerConnection backend. When CreateOffer or CreateAnswer is called to generate the local SDP, the GSMTask thread that is generating the SDP interrogates the MediaStream container and assembles the media lines based on information it reads from the media streams.
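A minimal sketch of that proposal follows. Everything here is a simplified placeholder: MediaStream is a stub rather than the real DOM object, the SDP is skeletal, and the payload types (109, 120) are just example values.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Stub for the DOM MediaStream; only the track types matter here.
struct MediaStream {
  bool hasAudio = false;
  bool hasVideo = false;
};

class PeerConnectionBackend {
 public:
  // AddStream just stores the stream; no SDP work happens yet.
  void AddStream(std::shared_ptr<MediaStream> stream) {
    streams_.push_back(std::move(stream));
  }
  // Later, CreateOffer (on the GSMTask thread in the real design) walks
  // the container and emits m-lines for the media types it finds.
  std::string CreateOffer() const {
    std::string sdp = "v=0\r\n";
    for (const auto& s : streams_) {
      if (s->hasAudio) sdp += "m=audio 9 UDP/TLS/RTP/SAVPF 109\r\n";
      if (s->hasVideo) sdp += "m=video 9 UDP/TLS/RTP/SAVPF 120\r\n";
    }
    return sdp;
  }
 private:
  std::vector<std::shared_ptr<MediaStream>> streams_;
};
```

The design point being illustrated: AddStream is cheap and synchronous, and all SDP assembly is deferred to offer/answer generation time.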
Note that real care will need to be taken to make sure that the lifetime of objects shared across threads is right. We should be using nsRefPtr with thread-safe ref-count objects to assist this process.
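A standard-library analogue of that pattern (the Pipeline type below is just an illustrative stub) is <tt>std::shared_ptr</tt>, whose reference count is atomic: copies of the pointer may be taken and dropped on any thread, and whichever thread drops the last reference destroys the object exactly once.

```cpp
#include <cassert>
#include <memory>
#include <thread>
#include <vector>

// Stand-in for some object shared across threads (e.g. a media pipeline).
struct Pipeline {
  int id = 0;
};

// Hand the same object to several worker threads. Each lambda capture
// copies the shared_ptr, bumping the atomic refcount; joins guarantee
// the workers finish before we inspect the count again.
void ShareAcrossThreads(std::shared_ptr<Pipeline> p, int nThreads) {
  std::vector<std::thread> workers;
  for (int i = 0; i < nThreads; ++i)
    workers.emplace_back([p] { assert(p->id == 7); });  // lambda copies p
  for (auto& t : workers) t.join();
}
```

Note the usual caveat: only the reference count is thread-safe, not the pointee itself; access to the shared object's state still needs its own synchronization.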