<h2> Overall Architecture </h2>
<p>Note: see [[Media/WebRTC/WebRTCE10S]] for the architecture on B2G with E10S</p>
<p>https://github.com/mozilla/webrtc/raw/master/planning/architecture.png
</p>
==== PeerConnection vs. CC_Call ====


The PeerConnection is 1:1 with CC_Call: when you call CreateOffer, it effectively kicks off the offer process on an existing call. We can then reuse the SIPCC state machine to some extent to manage the call state. Subsequent calls to the other JSEP APIs, such as setRemoteDescription and setLocalDescription, run on the same call and use the same state machine. There is a global singleton, PeerConnectionCtx, which handles callbacks/notifications from SIPCC.
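To make the shape concrete, here is a rough sketch (the class layout and method names are my own illustration, not the actual implementation):

 // Sketch of the 1:1 PeerConnection <-> CC_Call mapping described above.
 // The CC_Call and PeerConnectionCtx shapes are assumptions, not SIPCC's API.
 #include <map>
 #include <memory>
 #include <string>
 
 struct CC_Call {                      // stand-in for the SIPCC call object
   void CreateOffer() { /* drive the SIPCC offer state machine */ }
   void SetLocalDescription(const std::string& sdp) { /* ... */ }
   void SetRemoteDescription(const std::string& sdp) { /* ... */ }
 };
 
 class PeerConnectionImpl;
 
 // Global singleton: receives SIPCC callbacks/notifications and routes each
 // one to the PeerConnection that owns the affected call.
 class PeerConnectionCtx {
 public:
   static PeerConnectionCtx& Instance() {
     static PeerConnectionCtx ctx;
     return ctx;
   }
   void Register(int callId, PeerConnectionImpl* pc) { mPcs[callId] = pc; }
   void OnSipccEvent(int callId /*, event payload */) {
     // look up mPcs[callId] and forward the notification
   }
 private:
   std::map<int, PeerConnectionImpl*> mPcs;
 };
 
 class PeerConnectionImpl {
 public:
   PeerConnectionImpl() : mCall(new CC_Call()) {}
   // All JSEP entry points operate on the same underlying call, so the
   // SIPCC state machine sees one coherent sequence of events.
   void CreateOffer() { mCall->CreateOffer(); }
   void SetLocalDescription(const std::string& s) { mCall->SetLocalDescription(s); }
   void SetRemoteDescription(const std::string& s) { mCall->SetRemoteDescription(s); }
 private:
   std::unique_ptr<CC_Call> mCall;     // 1:1 -- lives as long as the PeerConnection
 };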


==== Mtransport Interface ====


The mtransport (ICE, DTLS) subsystem is pretty independent of SIPCC. Roughly speaking, they are wired up as follows:


* The PeerConnection creates ICE objects (NrIceCtx) as soon as it starts up, one for each m-line we expect to need.
* When SIPCC (lsm) determines that a new media flow is required, it stands up a MediaPipeline (containing the MediaConduit [codecs], the SRTP contexts, and a TransportFlow (ICE, DTLS, etc.)).


Note that each MediaPipeline is one-way, so a two-way audio flow has two MediaPipelines. However, since we do symmetric RTP, there are typically two MediaPipelines for each TransportFlow; conversely, there may be two TransportFlows for each MediaPipeline if RTCP is not muxed.
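A rough sketch of this wiring (the type shapes and methods here are illustrative assumptions; only the component names come from the design above):

 // Sketch of the wiring described above. The type shapes and methods are
 // assumptions for illustration; only the component names come from the design.
 #include <memory>
 #include <vector>
 
 struct NrIceCtx      { /* ICE context for one m-line */ };
 struct TransportFlow { /* ICE + DTLS for one transport */ };
 struct MediaConduit  { /* codec engine wrapper (webrtc.org) */ };
 struct SrtpContext   { /* SRTP protect/unprotect state */ };
 
 // One-way media path; a two-way audio flow uses two of these.
 struct MediaPipeline {
   std::unique_ptr<MediaConduit> conduit;
   SrtpContext srtp;
   TransportFlow* flow = nullptr;  // shared, not owned: with symmetric RTP the
                                   // send and receive pipelines use one flow
 };
 
 class PeerConnection {
 public:
   explicit PeerConnection(size_t expectedMLines) {
     // Create ICE contexts immediately so candidate gathering can start
     // before negotiation completes: one per expected m-line.
     for (size_t i = 0; i < expectedMLines; ++i)
       mIceCtxs.emplace_back(new NrIceCtx());
   }
 
   // Called when SIPCC (lsm) decides a new media flow is needed.
   void AddMediaFlow(TransportFlow* flow) {
     auto send = std::make_unique<MediaPipeline>();
     auto recv = std::make_unique<MediaPipeline>();
     send->flow = flow;
     recv->flow = flow;              // symmetric RTP: both directions share it
     mPipelines.push_back(std::move(send));
     mPipelines.push_back(std::move(recv));
   }
 
 private:
   std::vector<std::unique_ptr<NrIceCtx>> mIceCtxs;
   std::vector<std::unique_ptr<MediaPipeline>> mPipelines;
 };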
 
==== Internal vs 3rd party code ====
 
<p>https://github.com/nils-ohlmeier/firefox-webrtc-documentation/raw/master/Firefox-WebRTC-internal-3rdparty.png
</p>
Colors in the diagram:
* White: Mozilla's own code
* Orange: 3rd party code
* Green: webrtc.org code shared with Google Chrome
 
== List of Components ==
 
The system has the following individual components, in no particular order:
 
* PeerConnection
** PeerConnection.js -- shim translation layer to let us do API adaptation to the C++
** PeerConnectionImpl -- C++ implementation of the PeerConnection interface.
** SIPCC -- handles SDP and media negotiation. Provided by Cisco, but not maintained as a downstream.
 
* Media
** Webrtc.org/GIPS -- handles media encoding and decoding. Downstream from Google.
** MediaConduit -- Generic wrapper around Webrtc.org
** MediaPipeline -- Wrapper to hold the MediaConduit, mtransport subsystem, and the SRTP contexts; it also interfaces with MediaStreams.
 
* Transport
** mtransport -- generic transport subsystem with implementations for ICE, DTLS, etc.
** NSS -- new DTLS stack. Mentioned because we need to land the new version of NSS.
** nICEr -- ICE stack; downstream from the reSIProcate project
** nrappkit -- portable runtime, utility library; downstream from nrappkit.sourceforge.net
 
* DataChannel
** DataChannel implementation in the DOM
** libsctp -- SCTP implementation; downstream from the BSD SCTP guys






Internally, <tt>cprSendMessage()</tt> is a write to a unix domain socket, and the receiving thread needs to loop around <tt>cprGetMessage()</tt>, as in <tt>ccapp_task.c</tt>.
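The snippet itself is elided in this revision, but the shape of the receive loop is roughly the following (a sketch only; the cpr signatures and helper names are assumed for illustration rather than quoted from the source):

 // Sketch of the receive loop in the style of ccapp_task.c. The cpr names
 // and signatures below are assumptions for illustration, not quotes.
 typedef void* cprMsgQueue_t;
 extern void* cprGetMessage(cprMsgQueue_t q, bool waitForever, void** userData);
 extern void  cpr_free(void* msg);
 extern void  ccapp_process_msg(void* msg);   // hypothetical dispatcher
 extern cprMsgQueue_t ccapp_msgq;             // queue fed by cprSendMessage()
 
 static void ccapp_task(void* /*arg*/) {
     for (;;) {
         // Blocks until another thread's cprSendMessage() writes to the
         // unix domain socket backing this queue.
         void* msg = cprGetMessage(ccapp_msgq, /*waitForever=*/true, nullptr);
         if (msg) {
             ccapp_process_msg(msg);          // dispatch to the handler
             cpr_free(msg);
         }
     }
 }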
=== Signaling System: AddStream ===


There are probably many ways to implement the AddStream API, but I present this one for discussion.


AddStream would be an API on the PeerConnection backend interface that takes a MediaStream pointer as a parameter.


When called, the MediaStreams are stored in a container in the PeerConnection backend. When CreateOffer or CreateAnswer is called to generate the local SDP, the GSMTask thread generating the SDP interrogates the MediaStream container and assembles the media lines from the information it reads out of the media streams.
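A sketch of what that could look like (the types, locking, and method names here are illustrative assumptions):

 // Sketch of the AddStream flow described above. Types and method names
 // are assumptions for illustration.
 #include <mutex>
 #include <string>
 #include <vector>
 
 struct MediaStream {
   bool hasAudio;
   bool hasVideo;
 };
 
 class PeerConnectionBackend {
 public:
   // Called from the main thread: just record the stream.
   void AddStream(MediaStream* stream) {
     std::lock_guard<std::mutex> lock(mMutex);
     mStreams.push_back(stream);
   }
 
   // Called on the GSMTask thread while generating the local SDP:
   // walk the container and emit one m-line per media type.
   std::string BuildMediaLines() {
     std::lock_guard<std::mutex> lock(mMutex);
     std::string sdp;
     for (MediaStream* s : mStreams) {
       if (s->hasAudio) sdp += "m=audio ...\r\n";   // ports/codecs elided
       if (s->hasVideo) sdp += "m=video ...\r\n";
     }
     return sdp;
   }
 
 private:
   std::mutex mMutex;                  // container is touched from two threads
   std::vector<MediaStream*> mStreams;
 };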


Note that real care will need to be taken to ensure that the lifetime of objects shared across threads is correct. We should be using nsRefPtr with thread-safe refcounted objects to assist this process.
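For example, a shared object could declare thread-safe refcounting so that an nsRefPtr can be handed across threads safely (a sketch using the standard XPCOM macro; the class name is hypothetical):

 // Sketch: thread-safe refcounting for an object shared across the main,
 // GSMTask, and STS threads. "SharedCallState" is a hypothetical name.
 #include "nsISupportsImpl.h"   // NS_INLINE_DECL_THREADSAFE_REFCOUNTING
 #include "nsAutoPtr.h"         // nsRefPtr
 
 class SharedCallState {
 public:
   // Atomic AddRef/Release, so the last nsRefPtr can go away on any thread.
   NS_INLINE_DECL_THREADSAFE_REFCOUNTING(SharedCallState)
 
   void Update() { /* ... */ }
 
 private:
   ~SharedCallState() {}   // private dtor: destruction only via Release()
 };
 
 void Example() {
   nsRefPtr<SharedCallState> state = new SharedCallState();
   // Passing |state| to another thread bumps the count atomically; whichever
   // thread drops the last reference destroys the object.
 }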
== Open Issues ==
* Do we have one set of worker threads (PC, Media In, Media Out) per call? One per origin? One total?
* Who is responsible for cleaning up the various objects? My thought is that SIPCC creates and deletes most things, but that makes using refcounted pointers a bit harder (though not impossible).
* Are the media input and output streams shared or distinct?
* Where is DTLS processing (which is interlocked with the DTLS state machine) done? It's easiest to do this on STS, as is ordinarily done with SSL/TLS. Is this too expensive? Note that this is only for DataChannels.