=== Input Device Access (getUserMedia) ===
We assume that camera and microphone access will be available only in the parent process. However, since most of the WebRTC stack will live in the child process, we need some mechanism for making the media available to it.
The basic idea is to create a new backend for MediaManager/GetUserMedia that is just a proxy: it talks to the real media devices in the parent process over IPDL. The incoming media frames would then be passed over the IPDL channel to the child process, where they are injected into the MediaStreamGraph.
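To make the proposed flow concrete, here is a minimal, self-contained C++ sketch. Everything in it is invented for illustration (FrameChannel, VideoFrame, ParentCapture, InjectIntoGraph are not Gecko identifiers): a mutex-protected queue stands in for the IPDL channel, a byte vector stands in for a captured frame, and a printf stands in for injection into the MediaStreamGraph.

<pre>
#include <condition_variable>
#include <cstdint>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct VideoFrame {
  std::vector<uint8_t> pixels;  // real design: a gralloc/shmem handle
  int64_t timestampUs = 0;
};

// Stand-in for the IPDL channel between parent and child.
class FrameChannel {
 public:
  void Send(VideoFrame f) {
    { std::lock_guard<std::mutex> lk(mMutex); mQueue.push(std::move(f)); }
    mCond.notify_one();
  }
  VideoFrame Recv() {
    std::unique_lock<std::mutex> lk(mMutex);
    mCond.wait(lk, [this] { return !mQueue.empty(); });
    VideoFrame f = std::move(mQueue.front());
    mQueue.pop();
    return f;
  }
 private:
  std::mutex mMutex;
  std::condition_variable mCond;
  std::queue<VideoFrame> mQueue;
};

// "Parent process": the only side that touches the real camera.
void ParentCapture(FrameChannel& chan, int frames) {
  for (int i = 0; i < frames; ++i) {
    VideoFrame f;
    f.pixels.assign(640 * 480 * 4, 0);  // pretend capture
    f.timestampUs = i * 33333;          // ~30 fps
    chan.Send(std::move(f));
  }
}

// "Child process": the getUserMedia proxy backend; instead of opening a
// device, it injects whatever arrives into the MediaStreamGraph.
void InjectIntoGraph(const VideoFrame& f) {
  std::printf("injected frame at t=%lld us\n", (long long)f.timestampUs);
}

int main() {
  FrameChannel chan;
  std::thread parent(ParentCapture, std::ref(chan), 3);
  for (int i = 0; i < 3; ++i) InjectIntoGraph(chan.Recv());
  parent.join();
}
</pre>

Note the division of labor: only ParentCapture ever "touches the camera", while the child side is a pure consumer, which is exactly the property the proxy backend needs.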
This shouldn't be too complicated, but there are a few challenges:
* Making sure that we don't do superfluous copies of the data. I understand that we can move the data via gralloc buffers, so maybe that will be OK for video; the first sketch after this list shows the shape of such a handle. [OPEN ISSUE: Will that work for audio?]
* Latency. We need to make sure that moving the data across the IPDL interface doesn't introduce too much latency. Hopefully this is a solved problem; the second sketch after this list shows one way to measure the hop.
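On the copying point: the essence of the gralloc/shared-memory approach is that the IPC message carries only a small handle to a buffer both processes can map, never the pixel data itself. The struct below is a hypothetical illustration of what would cross the channel, not a real Gecko type.

<pre>
#include <cstdint>

// Hypothetical descriptor that would cross the IPDL channel in place of
// the frame bytes. With gralloc (B2G) or generic shared memory, both
// processes map the same buffer, so no per-frame copy is needed.
struct FrameHandle {
  int      bufferFd;     // fd of the shared (gralloc/ashmem) buffer
  uint32_t offset;       // where this frame starts in the buffer
  uint32_t width;
  uint32_t height;
  int64_t  timestampUs;
};

// The message size is constant no matter how large the frame is.
static_assert(sizeof(FrameHandle) <= 32, "handle stays tiny");
</pre>

Even if shared buffers turn out not to work for audio, the numbers suggest a fallback may be tolerable: a 10 ms packet of 48 kHz 16-bit stereo is only 1920 bytes, so a plain copy per packet might be acceptable there.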
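On the latency point, an obvious first step is to measure the hop: stamp each frame in the parent at capture time and log the delta when it reaches the graph in the child. A sketch with invented names follows; it assumes a clock that is valid across processes on the same machine (on Linux, std::chrono::steady_clock is backed by CLOCK_MONOTONIC, which qualifies).

<pre>
#include <chrono>
#include <cstdint>
#include <cstdio>

// Microseconds on a clock shared by all processes on one machine.
static int64_t NowUs() {
  using namespace std::chrono;
  return duration_cast<microseconds>(
      steady_clock::now().time_since_epoch()).count();
}

// Parent side, at capture: frame.timestampUs = NowUs();

// Child side, right before injecting into the MediaStreamGraph:
void ReportIpcHop(int64_t captureTimestampUs) {
  int64_t hopUs = NowUs() - captureTimestampUs;
  // Rough budgets: a hop near the 10 ms audio packet interval would hurt;
  // a few ms against a 33 ms video frame period is probably benign.
  std::printf("IPC hop: %lld us\n", (long long)hopUs);
}
</pre>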
=== Output Access ===