Platform/GFX/Surfaces



= Developing a standard MozStream abstraction =
The discussion below is about a separate abstraction. MozSurface and MozStream are two separate discussions, although MozStream would use MozSurface.


As explained earlier, having a good MozSurface abstraction is only half the way towards a good architecture. The other half is to share as much surface-handling code as possible in a common abstraction, and the standard type of abstraction for doing this is that of a "stream", as described in the above-mentioned EGLStream specification, http://www.khronos.org/registry/egl/extensions/KHR/EGL_KHR_stream.txt.
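As a rough sketch, such a stream boils down to a producer/consumer queue of surfaces. The `MozStream`/`MozSurface` names below are placeholders for the abstractions under discussion here, not actual Gecko classes, and real implementations would add synchronization and cross-process transport:

```cpp
#include <cassert>
#include <memory>
#include <queue>

// Placeholder surface type; stands in for the MozSurface abstraction
// discussed in this document.
struct MozSurface {
  int width = 0;
  int height = 0;
};

// Minimal single-producer/single-consumer stream sketch: the producer
// publishes finished surfaces, the consumer acquires them in order.
class MozStream {
 public:
  // Producer side: hand a finished surface to the stream.
  void Publish(std::shared_ptr<MozSurface> aSurface) {
    mQueue.push(std::move(aSurface));
  }

  // Consumer side: take the oldest published surface, or null if none
  // is available yet.
  std::shared_ptr<MozSurface> Acquire() {
    if (mQueue.empty()) {
      return nullptr;
    }
    auto front = mQueue.front();
    mQueue.pop();
    return front;
  }

 private:
  std::queue<std::shared_ptr<MozSurface>> mQueue;
};
```

The same shape covers the `<video>` decoded-frame queue mentioned below: the decoder is the producer and the compositor is the consumer.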


The question is how much more of our surface-passing code could we unify behind a shared stream mechanism, maybe by suitably extending SurfaceStream's capabilities?
There is some low-hanging fruit: it is standard (e.g. on Android) to use such a stream to implement the queue of decoded frames for a <video>, so we could at least easily share that logic.


Is there anything that we need to do, that fundamentally cannot be unified behind such a stream abstraction?
* 1. ThebesLayers need to get back the front buffer to do partial updates;
* 2. Drawing video frames into a canvas. It also seems that WEBGL_dynamic_texture would hit the same problem.
* 3. More importantly, we have different plans for some of the <video> use cases, which are based on a stream abstraction that is not a swap chain.
* 4. We currently do some screenshotting on the content side.


These use cases fall outside the scope of standard streams because they need a surface to be consumed by both compositor and content.


During the graphics sessions, however, Jeff G and Dan proposed a solution to some of these problems.


The reason why typical streams don't like the idea of a surface being consumed on two different processes, is that typical streams want to own the surfaces that they pass around. Having multiple processes hold references to the same surface makes that impossible (or would require one process to wait for the other to be done with that surface).
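One way to picture the conflict is with a consumer count: if the stream recycles a surface as soon as its single consumer releases it, a second consumer (e.g. a content-side readback) could still be reading it. A hedged sketch of the counting approach, with illustrative names only:

```cpp
#include <cassert>

// Illustrative surface that tracks how many consumers currently hold it.
struct CountedSurface {
  int consumers = 0;  // outstanding consumer references
};

// Sketch of a stream whose front surface may be handed to several
// consumers (e.g. compositor and content) at once; it may only be
// recycled once every consumer has released it.
class SharedStream {
 public:
  // Each interested consumer bumps the count before using the surface.
  CountedSurface* AcquireForConsumer() {
    mFront.consumers++;
    return &mFront;
  }

  // A consumer is done with the surface; returns true only when no
  // consumer holds it any more, i.e. when it is safe to reuse.
  bool Release(CountedSurface* aSurface) {
    aSurface->consumers--;
    return aSurface->consumers == 0;
  }

 private:
  CountedSurface mFront;
};
```

This is only the single-process shape of the idea; doing it across processes is exactly the hard part the proposed solution has to address.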


B) is a stronger constraint but it is limited to video, which may have a separate implementation. A) could be solved by either letting the same MozSurface be in several swap chains, or making it possible for several layers to use the same swap chain (which probably makes more sense, but requires us to think about how we expose this functionality).
= Reusable components =
Some of the video use cases are going to need a separate implementation from the MozStream swap chain. However, these two streams could share some components.
* Client-side buffer pool. We need faster surface allocation, especially with gralloc surfaces, which can only be allocated in the compositor process. Keeping surface pools avoids the cost of actual allocation, and both video streams and MozStream can benefit from this.
* Both video and MozStream should be implemented on top of MozSurface.
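The buffer-pool idea above can be sketched in a few lines: recycled surfaces are kept around and handed back out when a matching size is requested, so only cache misses pay for a real (possibly cross-process) allocation. All names here are illustrative, not real Gecko APIs:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Illustrative surface type; a real pool would key on format and usage
// flags as well as size.
struct PooledSurface {
  int width;
  int height;
};

// Sketch of a client-side surface pool shared by producers.
class SurfacePool {
 public:
  // Return a free surface of the requested size if one exists,
  // otherwise fall back to an actual allocation.
  std::shared_ptr<PooledSurface> Obtain(int aWidth, int aHeight) {
    for (size_t i = 0; i < mFree.size(); ++i) {
      if (mFree[i]->width == aWidth && mFree[i]->height == aHeight) {
        auto surface = mFree[i];
        mFree.erase(mFree.begin() + i);
        return surface;  // recycled: no new allocation
      }
    }
    ++mAllocations;  // cache miss: pay for a real allocation
    return std::make_shared<PooledSurface>(PooledSurface{aWidth, aHeight});
  }

  // Put a no-longer-used surface back into the pool for reuse.
  void Recycle(std::shared_ptr<PooledSurface> aSurface) {
    mFree.push_back(std::move(aSurface));
  }

  int Allocations() const { return mAllocations; }

 private:
  std::vector<std::shared_ptr<PooledSurface>> mFree;
  int mAllocations = 0;
};
```

A pool like this sits naturally underneath both the video streams and MozStream, which is what makes it a reusable component.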