Media/WebRTC/Architecture

==== Mtransport Interface ====


The mtransport (ICE, DTLS) subsystem is largely independent of SIPCC. Roughly speaking, the two are wired up as follows:


* The PeerConnection creates ICE objects (NrIceCtx) as soon as it starts up, one for each m-line we expect to need.
* When SIPCC (lsm) determines that a new media flow is required, it stands up a MediaPipeline (containing the MediaConduit [codecs], the SRTP contexts, and a TransportFlow (DTLS, ICE, etc.)).


Note that each MediaPipeline is one-way, so a two-way audio flow has two MediaPipelines. However, since we are doing symmetric RTP, you typically have two MediaPipelines for each TransportFlow; conversely, there may be two TransportFlows for each MediaPipeline if RTCP is not muxed.



