Media/WebRTC/Architecture

Overall Architecture

[Figure: architecture.png (overall architecture diagram)]



At a high level, there are five major components we need to integrate into Firefox to build a functional WebRTC stack.

  • The MediaStream components that provide generic media support.
  • The WebRTC.org contributed code that handles RTP and codecs.
  • The SIPCC signaling stack.
  • The DataChannel management code and the libsctp code that it drives.
  • The transport (mtransport) stack, which drives DTLS, ICE, etc.

These are managed/integrated by the PeerConnection code which provides the PeerConnection API and maintains all the relevant state.

In addition, there is the GetUserMedia() [GUM] code which handles media acquisition. However, the GUM code has no direct contact with the rest of the WebRTC stack, since the stack itself solely manipulates MediaStreams and does not care how they were created.

Here is an example sequence of events from the caller's perspective:

  1. JS creates one or more MediaStream objects via the GetUserMedia() API. The GUM code works with the MediaStream code and returns a MediaStream object.
  2. JS calls new PeerConnection() which creates a PeerConnection object. [QUESTION: does this create a CC_Call right here?]
  3. JS calls pc.AddStream() to add a stream to the PeerConnection.
  4. JS calls pc.CreateOffer() to create an offer.
  5. Inside PeerConnection.createOffer(), the following steps happen:
    1. A create-offer request is sent to the CCAPP_Task.
    2. An appropriate number of WebRTC.org streams are set up to match the number of MediaStreams that have been added.
    3. A matching number of mtransports are set up. [OPEN QUESTION: is this done by PeerConnection or inside SIPCC?]
  6. Asynchronously, SIPCC creates the SDP and it gets passed up to the PeerConnection.
  7. The PeerConnection forwards the SDP response to the DOM which fires the JS createOffer callback.
  8. The JS forwards the offer and then calls pc.SetLocalDescription(). This causes:
    1. Attachment of the mtransport to the WebRTC.org streams via ExternalRenderer/ExternalTransport (see the sketch after this list)
  9. When the remote SDP is received, the JS calls pc.SetRemoteDescription() which forwards to the CCAPP_Task in the same manner as createOffer() and setLocalDescription(). This causes:
    1. Forwarding of the ICE candidates to mtransport. Once the first candidates are received, mtransport can start ICE negotiation.
  10. Once ICE completes, DTLS negotiation starts.
  11. Once DTLS negotiation completes, media can flow. [QUESTION: Should we hold off on attaching the mtransport to WebRTC.org until this is ready?]
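
To make step 8.1 concrete, here is a minimal sketch of what the ExternalTransport bridge might look like, assuming the classic webrtc.org Transport interface from common_types.h. The MtransportFlow class and its methods are hypothetical stand-ins for whatever handle mtransport actually exposes.

 #include "common_types.h"  // webrtc::Transport

 // Hypothetical stand-in for a handle onto an mtransport (ICE + DTLS) flow.
 class MtransportFlow {
 public:
   virtual int SendRtpPacket(const void *data, int len) = 0;
   virtual int SendRtcpPacket(const void *data, int len) = 0;
   virtual ~MtransportFlow() {}
 };

 // Bridge registered with the webrtc.org media engine as its external
 // transport. The engine hands us encoded RTP/RTCP packets; we forward them
 // to mtransport, which pushes them through DTLS/ICE onto the wire.
 class WebrtcExternalTransport : public webrtc::Transport {
 public:
   explicit WebrtcExternalTransport(MtransportFlow *flow) : flow_(flow) {}

   virtual int SendPacket(int channel, const void *data, int len) {
     return flow_->SendRtpPacket(data, len);
   }

   virtual int SendRTCPPacket(int channel, const void *data, int len) {
     return flow_->SendRtcpPacket(data, len);
   }

 private:
   MtransportFlow *flow_;
 };

As the question in step 11 notes, it is still open whether this attachment should happen at SetLocalDescription time or be deferred until DTLS completes.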

The big questions for me have to do with object lifetime and exactly how to plumb ICE and DTLS into the SDP system.

PeerConnection vs. CC_Call

ISTM that the natural design is to have PeerConnection be 1:1 with CC_Call, so that when you do CreateOffer, it's effectively kicking off the offer process on an existing call. Then we can reuse the SIPCC state machine to some extent to manage the call state. Enda, is that what you have in mind?
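
As a rough sketch, the 1:1 pairing could look like the skeleton below; everything here is hypothetical, including how and when the call handle is obtained.

 // Hypothetical: each PeerConnection owns exactly one SIPCC call for its
 // whole lifetime; CreateOffer() just drives the offer process on that call.
 class PeerConnectionImpl {
 public:
   PeerConnectionImpl() : mCall(0) {
     // mCall = <create the CC_Call via SIPCC/CCAPI here>
   }

   void CreateOffer() {
     // Kick the existing call's offer state machine rather than creating a
     // new call: <post a create-offer request for mCall to the CCAPP_Task>.
   }

 private:
   void *mCall;  // opaque handle onto the SIPCC CC_Call (exactly one per PC)
 };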

Mtransport Interface

The mtransport (ICE, DTLS) subsystem is pretty independent of SIPCC. We have two main options for wiring them up:

  • Have Webrtc{Audio,Video}Provider instantiate the mtransport objects and manage them. This is roughly the way things are constructed now, but we would need to modify the existing code to talk to mtransport and to be prepared to receive and handle ICE candidate updates (both internally generated and received from the JS).
  • Have the PeerConnection set up the mtransport objects, manage ICE and DTLS, and then provide handles downward to SIPCC. The major obstacle here is that we somehow have to figure out how to get this data into and out of the SDP. The advantage is that it avoids tight coupling in Webrtc{Audio,Video}Provider.

I was originally leaning towards the second of these approaches, but now that I have thought about it a while, I think the first is probably better, and that's what's shown in the diagram above.
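
To make the first option concrete, here is a rough sketch of the kind of callback interface Webrtc{Audio,Video}Provider would need in order to own the mtransport objects and react to ICE events. All names below are hypothetical and are not the real mtransport API.

 #include <string>

 // Hypothetical observer interface that mtransport would call back into.
 class MtransportIceObserver {
 public:
   // A new local candidate was gathered; it needs to end up in the SDP
   // and/or be trickled up to the JS.
   virtual void OnLocalCandidate(const std::string &candidate) = 0;
   // ICE negotiation finished; DTLS can start.
   virtual void OnIceCompleted() = 0;
   virtual ~MtransportIceObserver() {}
 };

 // Sketch of option 1: the provider instantiates and owns its transport
 // flows and implements the observer so it can handle candidate updates,
 // both locally gathered and received from the JS.
 class WebrtcAudioProvider : public MtransportIceObserver {
 public:
   virtual void OnLocalCandidate(const std::string &candidate) {
     // Hand the candidate to the SDP machinery / signaling layer.
   }
   virtual void OnIceCompleted() {
     // Start DTLS; once it completes, attach the media streams.
   }
   void AddRemoteCandidate(const std::string &candidate) {
     // Forward a candidate received from the JS down into mtransport.
   }
 };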


Thread Diagram

The system operates in a number of different threads. The following diagrams show how those threads interact during some common operations.

Overview of threading models and inter-thread communication

At a high-level we need to deal with two kinds of threads:

  • Standard Firefox threads (nsThreads)
  • Non-Firefox threads generated internally to SIPCC (these are a thin wrapper around OS threads).

Unfortunately, these threads have rather different operating and dispatch models, as described below.


nsThreads

The primary way to communicate between nsThreads is to use the Dispatch() method of the thread. This pushes an event into an event queue for the thread. For instance (https://developer.mozilla.org/en/Making_Cross-Thread_Calls_Using_Runnables):

 nsCOMPtr<nsIRunnable> r = new PiCalculateTask(callback, digits);
 
 mythread->Dispatch(r, NS_DISPATCH_NORMAL);
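
For completeness, a minimal self-contained version of that pattern might look like the following, assuming the standard nsRunnable helper from nsThreadUtils.h. The task itself is hypothetical and simplified (the callback parameter from the snippet above is omitted).

 #include "nsThreadUtils.h"
 #include "nsCOMPtr.h"

 // Hypothetical worker task: the expensive calculation runs on whichever
 // thread the runnable is dispatched to, keeping the caller responsive.
 class PiCalculateTask : public nsRunnable {
 public:
   explicit PiCalculateTask(int digits) : mDigits(digits) {}

   NS_IMETHOD Run() {
     // Expensive work happens here, on the target thread.
     return NS_OK;
   }

 private:
   int mDigits;
 };

 // Called from any thread; returns as soon as the task is queued.
 nsresult DispatchPiTask(nsIThread *worker) {
   nsCOMPtr<nsIRunnable> task = new PiCalculateTask(2000);
   return worker->Dispatch(task, NS_DISPATCH_NORMAL);
 }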


CPR/SIPCC Threads

SIPCC threads communicate via an explicit message-passing system based on Unix domain sockets. The actual message passing is wrapped in subroutine calls. E.g.,

   if (cprSendMessage(sip_msgq /*sip.msgQueue */ , (cprBuffer_t)msg, (void **)&syshdr)
       == CPR_FAILURE) {
       cprReleaseSysHeader(syshdr);
       return CPR_FAILURE;
   }

Internally, cprSendMessage() is a write to a Unix domain socket, and the receiving thread needs to loop around cprGetMessage(), as in ccapp_task.c:

   /**
   *
   * CCApp Provider main routine.
   *
   * @param   arg - CCApp msg queue
   *
   * @return  void
   *
   * @pre     None
   */
  void CCApp_task(void * arg)
  {
      static const char fname[] = "CCApp_task";
      phn_syshdr_t   *syshdr = NULL;
      appListener *listener = NULL;
      void * msg;
  
      //initialize the listener list
      sll_lite_init(&sll_list);
  
      CCAppInit();
  
      while (1) {
          msg = cprGetMessage(ccapp_msgq, TRUE, (void **) &syshdr);
          if ( msg) {
              CCAPP_DEBUG(DEB_F_PREFIX"Received Cmd[%d] for app[%d]\n", DEB_F_PREFIX_ARGS(SIP_CC_PROV, fname),
                      syshdr->Cmd, syshdr->Usr.UsrInfo);
  
              listener = getCcappListener(syshdr->Usr.UsrInfo);
              if (listener != NULL) {
                  (* ((appListener)(listener)))(msg, syshdr->Cmd);
              } else {
                  CCAPP_DEBUG(DEB_F_PREFIX"Event[%d] doesn't have a dedicated listener.\n", DEB_F_PREFIX_ARGS(SIP_CC_PROV, fname),
                          syshdr->Usr.UsrInfo);
              }
              cprReleaseSysHeader(syshdr);
              cprReleaseBuffer(msg);
          }
      }
  }

Interoperating between SIPCC Threads and nsThreads

The right idiom here is that when we write to a thread, we use that thread's idiom. This means that when we have a pair of threads of different types, each needs to know about the other's idiom, but neither needs to change its master event loop. The idea is shown below:

http://www.websequencediagrams.com/cgi-bin/cdraw?lz=bnNUaHJlYWQgLT4gQ1BSIAAIBjogY3ByU2VuZE1lc3NhZ2UoKQoAEwogLT4gAC8IOiBEaXNwYXRjaCgpCg&s=default

Note that this makes life kind of unpleasant when we want to write to an nsThread from C code in SIPCC, because nsThread communication works by instantiating an nsIRunnable class, which is linguistically foreign to C. However, looking at the existing code, it appears that we can isolate these calls to C++ code or build bridges to make them available to the C side.
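
For instance, a bridge for the SIPCC-to-nsThread direction might look roughly like the sketch below; the event payload and all names are hypothetical. The C code only ever sees a plain function, while the C++ side wraps the work in a runnable.

 #include "nsThreadUtils.h"
 #include "nsCOMPtr.h"

 // Runs on the target nsThread; unpacks and handles the SIPCC event there.
 class SipccEventRunnable : public nsRunnable {
 public:
   SipccEventRunnable(int cmd, void *msg) : mCmd(cmd), mMsg(msg) {}

   NS_IMETHOD Run() {
     // Handle the event on the nsThread side (ownership of mMsg is TBD).
     return NS_OK;
   }

 private:
   int mCmd;
   void *mMsg;
 };

 // C-callable bridge: SIPCC's C code treats the thread as an opaque pointer
 // and never sees nsIRunnable at all.
 extern "C" int DispatchSipccEvent(void *target_thread, int cmd, void *msg) {
   nsIThread *thread = static_cast<nsIThread *>(target_thread);
   nsCOMPtr<nsIRunnable> r = new SipccEventRunnable(cmd, msg);
   return NS_SUCCEEDED(thread->Dispatch(r, NS_DISPATCH_NORMAL)) ? 0 : -1;
 }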

Important threads

There are a number of threads that do the heavy lifting here.

  • DOM Thread: the thread where calls from the JS end up executing. Anything running here blocks the DOM and the JS engine (existing Firefox thread)
  • SocketTransportService (STS): the thread with the main networking event loop (existing Firefox thread)
  • MediaStream: where media from devices is delivered. (same as DOM thread?) (existing Firefox thread)
  • PeerConnection Thread: the thread where the PeerConnection operates (new thread; QUESTION: how many are there? one total or one per PC?)
  • CCAPP_Task: the outermost of the SIPCC threads, running in ccapp_task.c:CCApp_task() (SIPCC thread).
  • Media Input Thread: the thread used by WebRTC for incoming media decoding (new thread)
  • Media Output Thread: the thread used by WebRTC for outgoing media encoding (new thread; may be the same as Media Input thread).

Two notes about this:

  1. In general, expensive operations prompted by the DOM, STS, or MediaStream threads need to be done on other threads, so your event handlers should just Dispatch to some other thread, which does the work. This is particularly true for media encoding and decoding (see the sketch after these notes).
  2. SIPCC has a pile of internal threads, but they are all front-ended by CCAPP_Task, so not our problem.
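
As an illustration of the first note, an STS-side packet handler should look something like the sketch below (all names hypothetical): it copies the packet into a runnable and immediately hands it to the Media Input thread, so no decoding ever happens on STS.

 #include "nsThreadUtils.h"
 #include "nsCOMPtr.h"
 #include <vector>

 // Carries one received packet to the Media Input thread for decoding.
 class DecodePacketTask : public nsRunnable {
 public:
   DecodePacketTask(const unsigned char *data, int len)
     : mPacket(data, data + len) {}

   NS_IMETHOD Run() {
     // Runs on the Media Input thread: feed mPacket to the decoder here.
     return NS_OK;
   }

 private:
   std::vector<unsigned char> mPacket;  // own a copy; the STS buffer is reused
 };

 // Called on the STS thread when data arrives; does no real work itself.
 void OnPacketReceived(nsIThread *mediaInputThread,
                       const unsigned char *data, int len) {
   nsCOMPtr<nsIRunnable> task = new DecodePacketTask(data, len);
   mediaInputThread->Dispatch(task, NS_DISPATCH_NORMAL);
 }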

Signaling System: CreateOffer

http://www.websequencediagrams.com/?lz=dGl0bGUgU2lnbmFsaW5nIFRocmVhZHMgKENyZWF0ZU9mZmVyKQpwYXJ0aWNpcGFudCAiRE9NACAHIiBhcyBET00AEw1QQyBhcyBQQwAnDkNDQVBQX1Rhc2sAMAUACgUASg1TVFMgYXMgU1RTCgpET00gLT4gUEM6IERpc3BhdGNoAIECDlBDIC0-AD8GOiBJUEMoRkVBVFVSRV9DUkVBVEVfT0ZGRVIAHwhQQzogAIFIBiBUcmFuc3BvcnRGbG93XG5bRFRMUywgSUNFXQoAgRwFAGURAGMNRE9NAAkSCm5vdGUgbGVmdCBvZgAdBkpTIGNhbGxiYWNrXG53aXRoIG9mZmVyCgpTVFMAgSkPSUNFIENhbmRpZGF0ZXMpAG8XSUNFIGMAGwsAexQAFBAAcyVJQ0UKCgo&s=default

Thread diagram for transport subsystem

http://www.websequencediagrams.com/?lz=dGl0bGUgVHJhbnNwb3J0IFRocmVhZHMKcGFydGljaXBhbnQgUEMgYXMgUEMACA1TVFMgYXMgU1RTAB8NIk1lZGlhIFN0cmVhbSIgYXMgTQAMFUluABcGSU4AKxRPdXQANgZPVVQKUEMgLT4gUEM6IEFzc2VtYmxlIHQAgScJc3RhY2sAGAtTdGFydCBJQ0UAMAdTVFM6IENyZWF0ZSBzb2NrZXRzClNUUwBKCExvY2FsIElDRSBjYW5kaWRhdGVzL0RUTFMgZnAANgxSZW1vdGUAJAVDAAwdAHYKAFsLADIFb21wbGV0ZQByDABkBQAJEACBLQVPblMAgSgFUmVhZHkoKQCBKwhNSU46IE0AgXUJOjpQYQAgBmNlaXZlZCgpCk1JTgAgCVZpZU5ldHdvcms6OgAbCFJUUAAuBgAdD1NvdXJjZQCDOAUAgzYGOjpBcHBlbmRUb1RyYWNrKCkKbm90ZSByaWdodCBvZgCBBAcAg2gFcGxheXMKTQCBHAZTAA8HAIN8Bkxpc3RlbmVyOjpOb3RpZnlPdXRwdQB3BQCBSgZPVVQ6IERpc3BhdGNoKC4uLikKTU9VVAASClZpZUV4dGVybmFsQ2FwdHVyZTo6SW5jb21pbmdGcmFtZSgAIhAAghMMU2VuZACBZgkAgTEPAHYFAII6BiBzZW50IHRvIG4AgikGCg&s=default