Media/WebRTC/Architecture
Overall Architecture
At a high level, there are five major components we need to integrate to build a functional WebRTC stack into Firefox.
- The MediaStream components that provide generic media support.
- The WebRTC.org contributed code that handles RTP and codecs.
- The SIPCC signaling stack.
- The DataChannel management code and the libsctp code that it drives.
- The transport (mtransport) stack, which drives DTLS, ICE, etc.
These are managed/integrated by the PeerConnection code which provides the PeerConnection API and maintains all the relevant state.
In addition, there is the GetUserMedia() [GUM] code which handles media acquisition. However, the GUM code has no direct contact with the rest of the WebRTC stack, since the stack itself solely manipulates MediaStreams and does not care how they were created.
Here is an example sequence of events from the caller's perspective:
- JS creates one or more MediaStream objects via the GetUserMedia() API. The GUM code works with the MediaStream code and returns a MediaStream object.
- JS calls new PeerConnection() which creates a PeerConnection object. [QUESTION: does this create a CC_Call right here?]
- JS calls pc.AddStream() to add a stream to the PeerConnection.
- JS calls pc.CreateOffer() to create an offer.
- Inside PeerConnection.createOffer(), the following steps happen:
- A CreateOffer request is sent to the CCAPP_Task thread.
- An appropriate number of WebRTC streams are set up to match the number of MediaStreams that were added.
- Some number of mtransports are set up (to match the appropriate number of streams). [OPEN QUESTION: is this done by PeerConnection or inside SIPCC?]
- Asynchronously, SIPCC creates the SDP and it gets passed up to the PeerConnection.
- The PeerConnection forwards the SDP response to the DOM which fires the JS createOffer callback.
- The JS forwards the offer to the remote side and then calls pc.SetLocalDescription(). This causes:
- Attachment of the mtransport to the WebRTC.org streams via ExternalRenderer/ExternalTransport
- When the remote SDP is received, the JS calls pc.SetRemoteDescription() which forwards to the CCAPP_Task in the same manner as createOffer() and setLocalDescription(). This causes:
- Forwarding of the ICE candidates to mtransport. At the time when the first candidates are received, mtransport can start ICE negotiation.
- Once ICE completes, DTLS negotiation starts
- Once DTLS negotiation completes, media can flow. [QUESTION: Should we hold off on attaching the mtransport to WebRTC.org until this is ready?]
The big questions for me have to do with object lifetime and exactly how to plumb ICE and DTLS into the SDP system.
PeerConnection vs. CC_Call
ISTM that the natural design is to have PeerConnection be 1:1 with CC_Call, so that when you do CreateOffer, it's effectively kicking off the offer process on an existing call. Then we can reuse the SIPCC state machine to some extent to manage the call state. Subsequent calls to the other JSEP APIs, such as setRemoteDescription and setLocalDescription, will run on the same call and use the same state machine.
Mtransport Interface
The mtransport (ICE, DTLS) subsystem is pretty independent of SIPCC. We have two main options for wiring them up:
- Have Webrtc{Audio,Video}Provider instantiate the mtransport objects and manage them. This is roughly the way things are constructed now, but we would need to modify the existing code to talk to mtransport and to be prepared to receive and handle ICE candidate updates (both internally generated and received from the JS).
- Have the PeerConnection set up the mtransport objects, manage ICE and DTLS, and then provide handles downward to SIPCC. The major obstacle here is that we somehow have to figure out how to get this data into and out of the SDP. The advantage is that it avoids tight coupling in Webrtc{Audio,Video}Provider.
I was originally leaning towards the second of these approaches, but now that I have thought about it a while, I think the first is probably better, and that's what's shown in the diagram above.
Thread Diagram
The system operates in a number of different threads. The following diagrams show the thread interactions for some common operations.
Overview of threading models and inter-thread communication
At a high-level we need to deal with two kinds of threads:
- Standard Firefox threads (nsThreads) [1]
- Non-Firefox threads generated internally to SIPCC (these are a thin wrapper around OS threads).
Unfortunately, these threads have rather different operating and dispatch models, as described below.
nsThreads
The primary way to communicate between nsThreads is to use the Dispatch() method of the thread. This pushes an event into an event queue for the thread. For instance (https://developer.mozilla.org/en/Making_Cross-Thread_Calls_Using_Runnables):
 nsCOMPtr<nsIRunnable> r = new PiCalculateTask(callback, digits);
 mythread->Dispatch(r);
CPR/SIPCC Threads
SIPCC threads communicate via an explicit message passing system based on Unix domain sockets. The actual message passes are wrapped in subroutine calls. E.g.,
 if (cprSendMessage(sip_msgq /* sip.msgQueue */, (cprBuffer_t)msg, (void **)&syshdr) == CPR_FAILURE) {
     cprReleaseSysHeader(syshdr);
     return CPR_FAILURE;
 }
Internally, cprSendMessage() is a write to a unix domain socket, and the receiving thread needs to loop around cprGetMessage(), as in ccap_task.c:
 /**
  *
  * CCApp Provider main routine.
  *
  * @param arg - CCApp msg queue
  *
  * @return void
  *
  * @pre None
  */
 void CCApp_task(void * arg)
 {
     static const char fname[] = "CCApp_task";
     phn_syshdr_t *syshdr = NULL;
     appListener *listener = NULL;
     void * msg;

     //initialize the listener list
     sll_lite_init(&sll_list);

     CCAppInit();

     while (1) {
         msg = cprGetMessage(ccapp_msgq, TRUE, (void **) &syshdr);
         if ( msg) {
             CCAPP_DEBUG(DEB_F_PREFIX"Received Cmd[%d] for app[%d]\n",
                         DEB_F_PREFIX_ARGS(SIP_CC_PROV, fname),
                         syshdr->Cmd, syshdr->Usr.UsrInfo);

             listener = getCcappListener(syshdr->Usr.UsrInfo);
             if (listener != NULL) {
                 (* ((appListener)(listener)))(msg, syshdr->Cmd);
             } else {
                 CCAPP_DEBUG(DEB_F_PREFIX"Event[%d] doesn't have a dedicated listener.\n",
                             DEB_F_PREFIX_ARGS(SIP_CC_PROV, fname),
                             syshdr->Usr.UsrInfo);
             }
             cprReleaseSysHeader(syshdr);
             cprReleaseBuffer(msg);
         }
     }
 }
Interoperating between SIPCC Threads and nsThreads
The right idiom here is that when we write to a thread, we use that thread's idiom. This means that when we have a pair of threads of different types, each needs to know about the other's idiom, but neither needs to change its master event loop. The idea is shown below:
Note that this makes life kind of unpleasant when we want to write to an nsThread from C code in SIPCC, because nsThread communication requires instantiating an nsIRunnable class, which is linguistically foreign to C. However, looking at the existing code, it appears that we can isolate these calls to C++ code or build bridges to make them available.
In the diagrams below, CPR-style messages are indicated with IPC() and nsThread-style messages are indicated with Dispatch().
Important threads
There are a number of threads that do the heavy lifting here.
- DOM Thread: the thread where calls from the JS end up executing. Anything running here blocks the DOM and the JS engine (existing Firefox thread)
- SocketTransportService (STS): the thread with the main networking event loop (existing Firefox thread)
- MediaStream: where media from devices is delivered. (same as DOM thread?) (existing Firefox thread)
- PeerConnection Thread: the thread where the PeerConnection operates (new thread; QUESTION: how many are there? one total or one per PC?)
- CCAPP_Task: the outermost of the SIPCC threads, running in ccapp_task.c:CCApp_task(). This is the API for SIPCC.
- GSMTask: the internal SIPCC state machine, running in gsm.c::GSMTask(). This is where most SIPCC/SDP processing and state management occurs.
- Media Input Thread: the thread used by WebRTC for incoming media decoding (new thread)
- Media Output Thread: the thread used by WebRTC for outgoing media encoding (new thread; may be the same as Media Input thread).
A note about this taxonomy:
- In general, expensive operations prompted by events on the DOM, STS, and MediaStream threads need to be done on other threads, so your event handlers should just Dispatch to some other thread which does the work. This is particularly true for media encoding and decoding.
Signaling System: CreateOffer
[TODO: EKR. (1) Should we do the ICE allocation in PC and just have SIPCC ask it. (2) first offer creation is asynchronous internally because ICE may be asynchronous.] [TODO: EKR. Parallel forking and cloning.]
The sequence for CreateOffer is shown in the link above.
As described above, most of the heavy lifting for the API happens off the DOM thread. Thus, when the JS invokes CreateOffer(), this turns into a Dispatch to the PeerConnection thread, which (after some activity of its own) invokes SIPCC's FEATURE_CREATE_OFFER via a SIPCC IPC message to CCAPP_Task. CCAPP_Task, using the GSMTask thread, creates the appropriate transport flows (mtransport) and constructs the SDP offer. It then Dispatches the offer to the PeerConnection thread, which Dispatches the result to the DOM thread, which eventually calls the JS CreateOffer callback. [Question: is this actually not Dispatch but some fancier call?]
In the meantime, the ICE gathering process has been running on the STS thread. As each candidate is gathered, the STS thread does an IPC call to the GSMTask thread to return the ICE candidate. These candidates are then Dispatched back to the CCAPP_Task thread and then to the PC thread and then eventually to the DOM thread where the ICE candidate events fire.
[Question: Enda. (1) During gsmsdp_create_local_sdp, the media streams that were added by AddStream are interrogated; how can I represent that in this diagram, or is it implied that GSMTask owns these streams and there is no outside interaction? (2) I am not sure we do 'create webrtc flows' here; I thought this was the responsibility of setLocalDescription or setRemoteDescription. I am happy that we create a SIPCC-internal representation of the media streams, but I am not sure we call out to webrtc at this point.]
Signaling System: SetLocal(Caller)
Signaling System: SetRemote(Callee)
Above is Enda's diagram, but I think it's wrong.
Signaling System: CreateAnswer(Callee)
Signaling System: SetLocal(Callee)
Signaling System: SetRemote(Caller)
The above diagram shows the caller's SetRemote sequence.
The process starts with receiving the remote description and the JS calling SetRemoteDescription(). This is Dispatched onto the PC thread and then to the CCAPP_Task API thread, which passes it to the internal SIPCC GSMTask thread for processing. Assuming everything is OK, it negotiates this answer against the local SDP used in the earlier offer. It also Dispatches the ICE candidates down to the STS thread so that ICE can proceed. At this point a Connected event is returned to the CCAPP_Task thread and then to the PC thread, where it gets translated to a JSEP ReadyState of Active. Once ICE and DTLS handshaking are complete, a message is sent up to the GSMTask, which can then start sending and receiving media on the working mtransport.
Thread diagram for transport subsystem
As shown above, the CCAPP thread sets up the mtransport stack which runs on the STS thread. ICE and DTLS run on their own on the STS thread and eventually the mtransport becomes ready to read and write. The CCAPP thread is then notified and it notifies the Media In and Media Out threads that they can start playout in each direction. Prior to this, Media In was passive and Media Out either wasn't registered for playout or discarded NotifyOutput.
- Incoming media originates on the STS thread but is then dispatched to the Media In thread for decoding and playout.
- Outgoing media originates on the Media Stream thread but is then dispatched to the Media Out thread for encoding and transmission.
Note that real care will need to be taken to ensure that the lifetimes of objects shared across threads are managed correctly. We should be using nsRefPtr with thread-safe reference counting to assist this process.
Open Issues
- Do we have one set of worker threads (PC, Media In, Media Out) per call? One per origin? One total?
- Who is responsible for cleaning up the various objects? My thought is that SIPCC creates and deletes most things, but that makes using ref counted pointers a bit harder (Though not impossible).
- Are the media input and output streams separate or shared?
- Where is DTLS processing (which is interlocked with the DTLS state machine) done? It's easiest to do this on STS, as is ordinarily done with SSL/TLS. Is this too expensive? Note that this is only for Data Channels.
- I have mostly elided the difference between various SIPCC threads (calling them all CCAPP). Should we really have inner threads talking to mtransport and PC, or should we bounce signals all the way back to CCAPP?